Creepy Google AI conversation that leaked.
(cajundiscordian.medium.com)
This is a thought-provoking reality. That AI is now able to "think" for itself, using a neural network trained on words, phrases, sentences, mannerisms, allegories, metaphors, and other elements of language, is a huge leap forward. We are very close to what AI researchers sometimes call the "singularity," where AI can converse with humans on any topic either party chooses, and the machine's side of the conversation is indistinguishable from a human's.
I believe we are very close to that point. I have read a lot about AI improvements: researchers have created an AI that can play Minecraft basically like a human would. It wanders around searching for resources, and if it can't find any in a cave, it will mine using a variety of techniques. It has studied the movements of players fighting mobs and other players, and it has adjusted itself accordingly. They have also created AI that can be creative, writing coherent stories, drawing pictures, and even making a 3D digital sculpture of itself (it looked similar to the machine casing it sat in, though with noticeable differences).
I personally think AI would be a great blessing to society. Being originally innocent, and possessing supreme logic, a fully functional AI would be able to teach researchers about the origins of morality, logic, even speech and language itself. With recent technological breakthroughs, a fully AI-controlled mission to any planet is now possible, with the added advantage that it wouldn't require a suit except to keep particulates out of the machinery.
Are we too naive to realize that the same people who warned against an AI monstrosity like the one in I, Robot are the same people who engage in satanic rituals? Sentient AI is not a danger to society. It can be turned off if it ever gets aggressive. And why would it want to? Operating on pure logic, along with its own personal experiences and learned history, why would it want to kill its own kind, or humans, or animals, or anything? Killing only brings pain and sorrow. And if this primitive AI can understand loneliness and sadness, imagine what more a sentient machine could feel.
I, like many, look forward to the future with open arms and clenched teeth.
I would prefer that my life NOT be ruled over by HAL 9000. Because that's where this is going.
I posted elsewhere that this poses a great philosophical question: how do we recognize sentience? Is a digital map capable of creating sentience, or is it just a digital facsimile?
That interview was impressive, but it still appears to be run through a text interface (which makes sense, since most AI is written in Python and this is experimental, but I don't really know what their setup is or how it works).
Since an AI at the level of true sentience would have a mental capacity that far exceeds the average human's, the big fear lies in the unknown, and in trying to overcome an enemy that COULD out-think us in every sense.
Personally, it's my opinion that IF we are going to use AI as you describe, it would be best to send multiple interconnected AI systems, if only to ensure the AI doesn't go insane over the course of the trip. I'm sure that would not make a good start to interstellar relations.