Kind of. It depends more on the data it's given and the environment it learned in. For example, when I created an AI to detect people crossing the border, I generated 10,000 temporal sequences (30,000 synthetic images in total) of humans moving across terrain with randomized skin tones, clothing, walking paths, strides, and speeds. But I only randomized the number of people to be between 1 and 5, so the AI may fail with very large groups, and it should fail with vehicles. It might also fail if there are flashes of light from metallic objects, or if the people are crawling. Basically, the more you randomize the data within realistic parameters, the more generalized the model and the more flexible the AI will be at performing the task.
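To make the idea concrete, here's a minimal sketch of what the parameter-sampling side of that kind of domain randomization might look like. All parameter names and ranges here are hypothetical (I'm only taking the 1-to-5 people range and the sequence/image counts from above; the three-frames-per-sequence figure is just inferred from 30,000 / 10,000), but it shows how a single narrowly randomized parameter leaves a blind spot while everything else varies freely:

```python
import random

def sample_scene_params(rng, max_people=5):
    """Sample randomized parameters for one synthetic sequence.

    Every range below is a made-up illustration of domain
    randomization, not the actual generator's configuration.
    """
    return {
        # Narrow range: crowds larger than 5 are never seen in training,
        # so the model has no reason to handle them.
        "num_people": rng.randint(1, max_people),
        # Broadly randomized appearance and motion parameters.
        "skin_tone": rng.uniform(0.0, 1.0),      # normalized tone index
        "clothing_id": rng.randrange(50),        # pick from a clothing library
        "stride_m": rng.uniform(0.6, 0.9),       # stride length in meters
        "speed_mps": rng.uniform(0.8, 2.0),      # walking speed
        "heading_deg": rng.uniform(0.0, 360.0),  # walking direction
    }

FRAMES_PER_SEQUENCE = 3  # inferred: 10,000 sequences -> 30,000 images

def build_dataset(num_sequences=10_000, seed=0):
    """Generate the parameter set for every synthetic sequence."""
    rng = random.Random(seed)
    return [sample_scene_params(rng) for _ in range(num_sequences)]

dataset = build_dataset()
print(len(dataset) * FRAMES_PER_SEQUENCE)  # total synthetic image count
```

The point of the sketch is the asymmetry: `num_people` is the one knob held to a tight range, so everything the randomization never produces (crowds, vehicles, crawling) ends up outside the model's training distribution.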