I work on several ML projects producing photorealistic datasets for the US government and other companies. I am aware of bleeding-edge technology since I am one of the few people working on and developing it. We aren't yet at the point where it's seamless with reality, especially with video.
Let me qualify this further: humans and human movement are what I am talking about.
There is a thing called the uncanny valley, which shows up mostly when faking human faces and movements. Some people are more sensitive to it than others, and as you work on removing it, you yourself become more sensitive to it. At a low level, people are better at recognizing people than at recognizing anything else, and when something is not exactly human, we feel weird and can pick up on it for no conscious reason.

One major issue is data size. To look realistic, a render needs the right level of detail and has to match its surroundings. For movement we can use motion capture, but even that isn't perfect, because we don't capture everything; the result often looks too smooth, or choppy. Lighting, shadow, ambient occlusion, materials and their interaction with light and the environment are all approximations of real life, and once motion is involved, we can pick up on the differences. The way clothing hangs on someone, the way grass folds under their feet, how both react to the wind: all of this data needs to be interpreted just to produce something that would fool even a small percentage of the people working in my field. We can teach a machine to do a lot of this, and compress footage to make it look real, but even that has its own quirks that give it away. There is more I could say, but I think you get the point.
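To make the "approximations" point concrete, here is a toy Python sketch of two of the cheapest stand-ins renderers use for real light transport: Lambert diffuse shading and a crude ambient-occlusion estimate. This is my illustration only, with made-up inputs, not code from any project I work on. Real light transport is an integral over every incoming direction at every point; shortcuts like these are part of what trained eyes pick up on.

# Toy sketch (illustration only): two cheap stand-ins for real
# light transport, with hypothetical inputs.
import numpy as np

def lambert_diffuse(normal, light_dir, albedo):
    """Diffuse term: brightness scales with the cosine of the light angle."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * max(np.dot(n, l), 0.0)

def ambient_occlusion(normal, occluder_dirs):
    """Crude AO: fraction of sampled directions not blocked by nearby geometry."""
    n = normal / np.linalg.norm(normal)
    blocked = sum(1 for d in occluder_dirs
                  if np.dot(n, d / np.linalg.norm(d)) > 0.5)
    return 1.0 - blocked / max(len(occluder_dirs), 1)

# Usage: a surface facing up, lit from above, partially occluded.
normal = np.array([0.0, 1.0, 0.0])
light = np.array([0.3, 1.0, 0.2])
occluders = [np.array([0.1, 0.9, 0.0]), np.array([0.9, -0.1, 0.0])]
shade = lambert_diffuse(normal, light, albedo=0.8) * ambient_occlusion(normal, occluders)
print(f"approximate pixel brightness: {shade:.3f}")

Neither term models bounced light, soft shadows, or subsurface scattering; production renderers use far better approximations, but they are still approximations, and that is the point.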
No problem.
What if they are using people to develop algorithms and software on what those people believe is the state of the art in performance, with the intention of running it on what is actually the state of the art?
What if something that would take you a solid week to render could be done in a few hours? What would that render look like if it ran for a week on those machines...
Agreed, it's not CGI, but you shouldn't assume that the known state of the art is the same as the actual state of the art.
I mean, given how much cellphones have shrunk since their inception, people never think about the fact that suitcase nukes existed in the '70s. Apply the cellphone-style size refinement to nukes and you suddenly have a hand-grenade-sized device capable of taking out a building, probably effective enough to do it with little to no residual radiation and little noticeable radiation outside the blast area.
Wonder what other things this chain of logical refinement expectations can be applied to (I mean, the NSA had literal acres of server farms in the '70s...)
I am one of the few people in the world making and inventing state-of-the-art technology. I'm not assuming; I know. At least when it comes to machine learning and CGI.