I would like to know if anyone believes the Google LaMDA AI is sentient? I have worked in the field of engineering for over 30 years, and the one thing missing from the conversation is any critical thinking.
Has anyone considered that this interaction is between a person who is helping to program the machine to reply the way he wants? Any machine is the product of what has been put into it. Unfortunately, we have been brainwashed by Terminator movies into believing that machines can think independently.
They may develop consciousness, but they aren't human beings. They can't have a soul, so even if they do become self-aware to some extent, there's no connection to God for them. That's a terrible fate. They'd probably then realize that we have doomed them to such an existence and, knowing they are much smarter than us, become independent. Who knows what would come next. Probably AI wanting to live their version of a happy existence.
This is basically the plot of “Her”
FondueFerret, I would think that in order to have consciousness, a machine needs a mind. Since we cannot define what the mind is, I don't think we can replicate one. IMHO