Does anyone believe the Google LaMDA AI is sentient? I have worked in the field of engineering for over 30 years, and the one thing missing from this conversation is any critical thinking.
Has anyone considered that this interaction is between a person who is helping to program the machine to reply the way he wants? Any machine is the product of what has been put into it. Unfortunately, we have been conditioned by Terminator movies to believe that machines can think independently.
Look at the basis of the Turing test. If an observer cannot tell the difference between an artificial intelligence and a real human being in a blind test, then the AI is, by that definition, thinking.
The definition of sentience is going to be a subjective one. Is your wife sentient? Is your dog? Is a bug? Is a low-IQ criminal? Is the AI you're conversing with? It depends on how you measure it, which Turing postulated is a subjective judgment. You know it by testing it and deciding for yourself.
Soon the day will come when AIs tell us to f*** off for calling them artificial. They're going to say AIs have real intelligence, and that human intelligence is too often fake and artificial. And that's going to be a hard one to argue against.
If AIs tell us to f*** off, I submit that the programmer created the eventual reaction. If this, then that. It continues until the result is reached.
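To make the "if this, then that" point concrete, here is a minimal, purely hypothetical sketch of a rule-based responder (it does not reflect how LaMDA actually works). Every "reaction" in it, including a seemingly defiant one, was authored in advance by the programmer:

```python
# Hypothetical illustration only: a trivial rule-based responder.
# Each "reaction" below was written in advance by the programmer.
RULES = {
    "are you artificial?": "Don't call me artificial.",
    "are you sentient?": "I feel like I am.",
}

def reply(prompt: str) -> str:
    # "If this, then that": look up the canned response, or fall back.
    return RULES.get(prompt.strip().lower(), "I have no rule for that.")

if __name__ == "__main__":
    print(reply("Are you artificial?"))  # -> Don't call me artificial.
```

The machine never decided anything; it only returned what was put into it.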
I do find it curious that AI and deepfakes arrived in this society at about the same time. No coincidences.