I would like to know if anyone believes the Google LaMDA AI is sentient. I have worked in engineering for over 30 years, and the one thing missing from this conversation is critical thinking.
Has anyone considered that this interaction was with a person who is helping to program the machine to reply the way he wants? Any machine is the product of what has been put into it. Unfortunately, Terminator movies have conditioned us to believe that machines can think independently.
Yeah, it seems like it all depends on how you define sentient. AI mimics human thought processes with increasing accuracy, but I don't think it can become sentient. They can make a program that searches for solutions to a problem similar to the one it's trying to solve, sees what someone else did to fix it, and then changes its approach. So I like the I, Robot theory more than the Terminator one. Still, it seems possible that AI could do something it wasn't meant to do.
Angela84, I think you're close to the problem. It's like the definition of vaccine: they change the definition of sentient so they can lower our standards.