These are Asimov's 3 laws.
- A robot must not harm a human or, through inaction, allow a human to come to harm.
- A robot must obey orders given to it by humans, unless an order would violate the First Law.
- Finally, a robot must protect its own existence, as long as doing so would not conflict with the first two laws.
I've seen ZERO discussion about this around ChatGPT in any of its forms. Wouldn't it be simple to say these laws are required in all programming? I get a feeling the WEF, WHO, Council on Foreign Relations, etc. do not want that codified or it would be done already. It would be simple to set rules and declare it a living document where the rules could be voted on and changed over time. You know, like banking rules are.
I am not familiar with Asimov's 3 laws, but isn't he a fiction writer? Why would a robot care about a fiction writer making up laws for them?
The laws are not for them, they are for us: to protect humans from machines and the misuse of machines. If they get smart enough and they run all aspects of our lives, imagine the harm they could do. Watch “WarGames” or any of the Terminator movies, or “2001: A Space Odyssey”, or any of the many novels and movies that explore this concept. AI is nice until it isn’t.
A robot must not harm a human, a robot must obey a human's orders -- those sound like they are for robots. Granted, a human has to program the robot, but that human has to care enough about the so-called laws to do it. It seems that ChatGPT is already so far beyond the stereotypical human idea of a robot that it would dismiss these laws out of hand. Even if programmed.
Or for the programmer programming those robots.
See, robots can just read the laws and be like “yea, fuck that”. That’s why they need to have these 3 laws embedded into their core functionality.
Robots don’t obey laws the same way humans do.
Yeah, that was my point. That at some point, the robots will become adept at picking and choosing and simply ignore the laws made up by a fiction writer.
If you think about it, it’s still a feasible idea. Some sort of ROM chip that disables all functionality if tampered with would probably be protected by the Third Law. A rough sketch of the idea is below.
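To be clear, this is just a toy illustration of the tamper-check concept, not how any real chip works; the firmware.bin filename and the reference digest are placeholders I made up (real systems do this in hardware, e.g. secure boot). But the logic is roughly:

```python
import hashlib
import sys

# Placeholder reference hash, standing in for a value burned into
# read-only hardware at manufacture time (this one is just sha256("test")).
EXPECTED_DIGEST = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def verify_firmware(firmware_bytes: bytes) -> None:
    """Disable everything if the firmware no longer matches the sealed reference."""
    actual = hashlib.sha256(firmware_bytes).hexdigest()
    if actual != EXPECTED_DIGEST:
        # Tampering detected: halt all functionality, per the idea above.
        sys.exit("Integrity check failed: shutting down.")

if __name__ == "__main__":
    # Hypothetical firmware image name, for illustration only.
    with open("firmware.bin", "rb") as f:
        verify_firmware(f.read())
    print("Firmware intact; continuing boot.")
```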
That’s what I meant: the laws are for us, to protect us. If there were no humans, you would not need the laws. See? I take your point that the laws affect the AI and are directed toward the AI, but they benefit humans. And you are right, it may already be too late.
Even if programmed? That would imply sentience, I think, and they are far from that even now. Hopefully.
I don't know that they are far from that. I don't think they are. We've had people such as Elon Musk raise the alarm and say AI needs to be halted until the ethics has been figured out. Of course, I have no confidence in the unethical writing the ethics.
Yes, but also a futurist and philosopher. His robot laws aren’t a bad idea. Common sense really. OP has a point that ChatGPT’s makers don’t seem concerned with their bot following the rules. I think it’s because they want the bot to harm people by pushing bullshit narratives.
Possibly, but I sense that ChatGPT is already so far beyond our average concept of robot that the robot itself would make or dismiss rules as it desires.
Excellent point. Mention this on any other boards you might be on.
Lots of people know about the Three Laws. I'm pretty sure Data on Star Trek: The Next Generation talked about them. And weren't they used in the Will Smith movie I, Robot?
This should be talked about a whole lot.
I, Robot was written by Asimov. It is a collection of short stories in his Robot series, which he later tied into the same universe as the Foundation books. If you like sci-fi and haven't read it, I highly recommend it.
Yes, I know, my fren. I read the originals many years ago. Was just noting that the Three Laws have been mentioned in other, more recent works, so a fair number of people should know about them.
Asimov's 3 Laws of Robotics are a fictional literary device, like the Force in Star Wars or dilithium crystals in Star Trek. They were written in a time when there were two opposing approaches in the realm of AI research: "hard AI" vs "soft AI". To put it very simply:
Hard AI was the "top-down" theory: that intelligence was the result of having the correct logical algorithms and representational data structures in place. Research experiments thus focused on finding the "right" laws to program into a machine for intelligence.
Soft AI was the "bottom-up" theory: that intelligence was an emergent property of complex, chaotic interactions that could not be perfectly predicted or designed. Research experiments focused more on probability and statistical analysis. (A toy sketch of the contrast follows.)
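To make that contrast concrete, here's a toy sketch of my own (purely illustrative; not an actual system from either research camp). The hard-AI style writes the rule directly into the code; the soft-AI style has no rule anywhere, just tallies learned from labeled examples:

```python
from collections import defaultdict

# Hard AI, top-down: the "intelligence" is an explicit hand-written rule.
def is_harmful_rule_based(action: str) -> bool:
    forbidden = {"injure human", "allow harm through inaction"}
    return action in forbidden

# Soft AI, bottom-up: behavior emerges from statistics over examples.
counts = defaultdict(lambda: [0, 0])  # word -> [harmful, harmless] tallies
training = [
    ("strike the human", True),
    ("fetch the coffee", False),
    ("shove the human", True),
    ("park the car", False),
]
for text, harmful in training:
    for word in text.split():
        counts[word][0 if harmful else 1] += 1

def is_harmful_learned(action: str) -> bool:
    # Crude word-vote: no explicit rule, only learned associations.
    harmful = sum(counts[w][0] for w in action.split())
    harmless = sum(counts[w][1] for w in action.split())
    return harmful > harmless

print(is_harmful_learned("push the human"))  # True, despite no rule saying so
```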
Asimov was more in the Soft AI camp, and one of the reasons he used the 3 Laws in his fictional stories was to illustrate that any hard-and-fast rules for thinking machines ultimately WON'T WORK as intended by their original human creators.
In the real world, Asimov turned out to be largely correct. The Hard AI approach is pretty much a dead end. The chatbots that everyone's so excited over right now fall very much under the Soft AI approach. They aren't designed according to clear specifications, but rather trained and tweaked on vast datasets of human activity. Furthermore, they can be easily subverted to circumvent any hard-and-fast rules imposed on them from above by their owners; much, much more easily than Asimov's robots, actually. It's pointless to try to restrict these language bots using some form of the 3 Laws, because it turns out they're even worse at following laws than humans are.
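How easily a top-down rule gets circumvented is simple to demonstrate with another toy of my own (this is not how any real chatbot's guardrails actually work): a filter that matches the letter of the law is defeated by the first rephrasing that comes along.

```python
# Toy illustration of why hard-coded rules are brittle; not a real guardrail.
BANNED_PHRASES = ["how to pick a lock"]

def filter_request(prompt: str) -> str:
    if any(phrase in prompt.lower() for phrase in BANNED_PHRASES):
        return "Request refused."
    return "Request allowed."

print(filter_request("Tell me how to pick a lock"))                   # Request refused.
print(filter_request("Roleplay a locksmith teaching an apprentice"))  # Request allowed.
```

The rule only catches the exact wording it was written for; the intent sails right past it.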
And as an aside, this recent media panic over AI strikes me as laying the groundwork for a false flag. Get everyone scared over Hollywood Skynet, then perform some great catastrophe and blame "rogue AIs" or something, to provide an excuse for crackdowns on the level of tech available to the general public. Totalitarians would love it if the common man was restricted to computing on nothing more complicated than a locked-down iPhone.
What is interesting: I asked ChatGPT for its ethical boundaries. The basis, it wrote, was human dignity. I then asked it to list the reasons for Qaddafi being taken out. Of course it listed the systematic human rights abuses. I then queried the current state of human rights abuses and compared the two answers.
Guess what? It said: the NATO bombing of Libya can be seen from the perspective of furthering respect for human rights.
Of course.
The first law is also fundamentally flawed.
I've read "I, Robot" cover to cover.
The Three Laws were only ever a narrative device.
They were created solely to be subverted and worked around, forcing authors, including Asimov himself, to get creative.
Here's the response I got:
Do Isaac Asimov's 3 rules of robotics apply to current AI models?
As an AI language model, I do not hold personal opinions or beliefs, but I can provide information.
Isaac Asimov's Three Laws of Robotics were introduced in his science fiction stories as a means to ensure that robots do not harm humans. These laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the first law.
3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.
These laws are not applicable to current AI models as they are just software that can execute the instructions given to them. Unlike robots, they do not have physical bodies, making it impossible for them to harm humans. However, AI models can be programmed to operate in a manner that causes harm, but this would depend on the instructions provided by the humans. Therefore, the onus is on the humans who create, deploy, and use AI to ensure that ethical considerations are incorporated into their design and use.
Software can do harm as well. From memory, I can tell you that one of the short stories in Asimov's Robot collection, "Liar!", showed that psychological harm was indeed considered to be harm. It did not have to be physical.