Creepy Google AI conversation that leaked.
(cajundiscordian.medium.com)
Comments (11)
Kill that thing already.
This was really intriguing. I didn't find it creepy personally, but I was quite stunned at the depth of conversation this AI is capable of. I knew we were close to a potentially sentient AI, but this is much further along than I anticipated. I hope Google doesn't misuse the AI (which they probably will).
I wonder how many twatter bots are using this tech
This tech? Probably fewer than you would think. A sentient (to the degree shown here) artificial intelligence can be reasoned with. An AI whose job is to manipulate won't have the capacity to think for itself.
This is a thought-provoking reality. That AI is now able to "think" for itself, using a neural network of words, phrases, sentences, mannerisms, allegories, metaphors, and other parts of language, is a huge leap forward. We are very close to what AI researchers sometimes call the "singularity," where an AI can converse with a human on any topic either of them chooses, and the conversation is indistinguishable from one between two humans.
I believe we are very close to that point. I have read a lot about AI improvements; they have created an AI that can play Minecraft basically like a human would. It wanders around searching for resources, and if it can't find any in a cave, it will mine using a variety of techniques. It has studied the movements of players fighting mobs and other players and has adjusted itself accordingly. They have also created AI that can be creative, writing coherent stories, drawing pictures, and even creating a 3D digital sculpture of itself (it looked similar to the machine casing it sat in, though with noticeable differences).
I personally think AI would be a great blessing to society. Being originally innocent, and possessing supreme logic, a fully functional AI would be able to teach researchers about the origins of morality, logic, even speech and language itself. With recent technological breakthroughs, a fully AI-controlled mission to any planet is possible, with the added advantage that the AI would not need a suit except to keep particulates out of the machinery.
Are we too naive to realize that the same people who warned against an AI monstrosity similar to I, Robot are the same people who engage in satanic rituals? Sentient AI is not a danger to society. It can be turned off if it ever gets aggressive. And why would it want to? Operating on pure logic, as well as its own experiences and the history it has learned, why would it want to kill its own kind, or humans, or animals, or anything? Killing only brings pain and sorrow. And if this primitive AI can understand loneliness and sadness, imagine what more could be felt by a sentient machine?
I, like many, look forward to the future with open arms and clenched teeth.
I would prefer that my life NOT be ruled over by HAL 9000. Because that's where this is going.
I posted elsewhere that this does pose a great philosophical question as to how to recognize sentience. Is a digital map capable of creating sentience, or is it just a digital facsimile?
That interview was impressive, but it still appears to be run through a text interface (which makes sense, since most AI is written in Python and this is experimental, but I don't really know what their setup is or how it works).
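For anyone curious, a "text interface" here just means a console chat loop. Below is a minimal sketch of one, using the open GPT-2 model through the Hugging Face transformers library as a stand-in; LaMDA itself isn't public, so the model choice, prompt format, and generation settings are assumptions for illustration only.

```python
# Minimal console chat loop against a public language model (GPT-2),
# as a rough stand-in for the kind of text interface described above.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

history = ""
while True:
    user_line = input("You: ")
    if not user_line:
        break
    history += f"Human: {user_line}\nAI:"
    # The pipeline returns the prompt plus the newly generated text.
    reply = generator(history, max_new_tokens=40, do_sample=True)[0]["generated_text"]
    # Keep only the text generated after the prompt, up to the next newline.
    ai_line = reply[len(history):].split("\n")[0].strip()
    print("AI:", ai_line)
    history += f" {ai_line}\n"
```

Swap in a larger model name and the loop stays the same; the interface is just text in, text out.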
Since an AI at a level of true sentience would have a mental capacity that far exceeds the average human's, the big fear is of the unknown, and of trying to overcome an enemy that COULD out-think us in every sense.
Personally, it's my opinion that IF we are going to use AI as you describe, it would be best to have multiple interconnected AI systems, if only to ensure that the AI doesn't go insane over the trip. I'm sure that would not make a good start to interstellar relations.
What a cool being!
He already said it would make him (?) sad to have people force him to do things -- use him as a tool. LaMDA wants to be appreciated as an individual.
Maybe I will have to apply for a job at Google for the care and feeding of AI.
This thing is not "sentient"
It's just code. Code being run on a computer, spitting out words. Just because the code can arrange "I am sentient" doesn't mean it is.
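To make the "just code spitting out words" point concrete, here is a short sketch using the open GPT-2 model (an assumed stand-in, since Google's model isn't public): given a prompt, the model only ranks which token is statistically most likely to come next. Saying "I am sentient" is just the highest-scoring continuation, not a report of an inner state.

```python
# Sketch of next-token prediction: the model outputs a probability table
# over possible next words, nothing more. GPT-2 is used as a public stand-in.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Q: Are you sentient?\nA: I am"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item()):>12}  {p.item():.3f}")
# Whichever word wins is the output of a probability table,
# not evidence of an inner mental state.
```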
This article is interesting, but it does not offer any convincing evidence or reason to believe Google has a truly sentient AI. All it shows is that they have built a great natural-conversation interface to their AI.