There's a phenomenon called the uncanny valley, which shows up mostly when faking human faces and movement. Some people are more sensitive to it than others, and as you work on removing it, you become more sensitive to it yourself. At a low level, people are better at recognizing people than at anything else, so when something is almost but not exactly human, we feel uneasy and pick up on it for no conscious reason.

One major issue is the sheer amount of data. For something to look realistic it needs the right level of detail, matching its surroundings. For movement we can use motion capture, but even that isn't perfect: we don't capture everything, so the result often looks too smooth, or choppy. Lighting, shadows, ambient occlusion, and the way materials interact with light and the environment are all approximations of real life, and once things are in motion we can pick up on the discrepancies. The way clothing hangs on someone, the way grass folds under their feet, how both react to the wind: all of this has to be simulated convincingly just to fool a small percentage of the people working in my field. We can teach a machine to do a lot of it, and compress the footage to make it look real, but even that has quirks of its own that give it away. There's more I could say, but I think you get the point.
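To make the "approximations of real life" point concrete, here's a minimal sketch (not any particular renderer's code) of the kind of local shading model real-time graphics leans on, a Lambertian diffuse term scaled by a precomputed ambient-occlusion factor; the function name and parameters are made up for illustration:

```python
def lambert_with_ao(normal, light_dir, ao, albedo=0.8):
    """Approximate diffuse shading: albedo * max(N.L, 0) * AO.

    normal and light_dir are unit 3-vectors; ao is in [0, 1].
    Real skin scatters light beneath the surface (subsurface
    scattering), which this model ignores entirely -- one reason
    rendered faces can read as subtly wrong even when lit "correctly".
    """
    ndotl = sum(n * l for n, l in zip(normal, light_dir))
    return albedo * max(ndotl, 0.0) * ao

# A surface facing the light directly, but half-occluded by nearby geometry:
print(lambert_with_ao((0, 0, 1), (0, 0, 1), ao=0.5))  # 0.4
```

The whole model is a handful of multiplications standing in for the physics of light bouncing around a scene; our eyes do a surprisingly good job of noticing what's missing.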
No problem.