These are the three laws:
- A robot must not harm humans, or, via inaction, allow a human to come to harm.
- A robot must obey orders given to it by humans, unless an order would violate the first law.
- Finally, a robot must protect its own existence, as long as doing so would not put it into conflict with the other two laws.
I've seen ZERO discussion about this around ChatGPT in any of its forms. Wouldn't it be simple to say these laws are required in all programming? I get the feeling the WEF, WHO, Council on Foreign Relations, etc. do not want that codified, or it would have been done already. It would be simple to set the rules and declare them a living document, where the rules could be voted on and changed over time. You know, like banking rules are.
Here's my question and the response I got.

My question: Do Isaac Asimov's Three Laws of Robotics apply to current AI models?

The response:
As an AI language model, I do not hold personal opinions or beliefs, but I can provide information.
Isaac Asimov's Three Laws of Robotics were introduced in his science fiction stories as a means to ensure that robots do not harm humans. These laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the first law.
3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.
These laws are not applicable to current AI models as they are just software that can execute the instructions given to them. Unlike robots, they do not have physical bodies, making it impossible for them to harm humans. However, AI models can be programmed to operate in a manner that causes harm, but this would depend on the instructions provided by the humans. Therefore, the onus is on the humans who create, deploy, and use AI to ensure that ethical considerations are incorporated into their design and use.
Software can do harm as well. From memory, I can tell you that one of the short stories in Asimov's Robot collection, "Liar!", showed that psychological harm was indeed considered harm under the first law; it did not have to be physical.
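For what it's worth, here's a toy sketch of what "codifying" a rule like the first law could even look like in software. Everything in it is hypothetical (the function names and keyword list are made up for illustration, not any real moderation API). The point is that the filter itself is trivial to write, while the harm check it depends on is the genuinely hard part, especially once you count psychological harm the way "Liar!" does.

```python
# Toy illustration only. Hypothetical names throughout; this is not any real
# moderation API. The point is that "codifying" a law needs an operational
# definition of harm, and that definition is the hard part.

HARM_KEYWORDS = {"hurt yourself", "build a weapon"}  # hopelessly incomplete

def looks_harmful(text: str) -> bool:
    """Stand-in for a real harm classifier. A keyword list cannot capture
    psychological harm of the kind "Liar!" turns on."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in HARM_KEYWORDS)

def first_law_filter(model_output: str) -> str:
    """Refuse to pass along any output the (naive) harm check flags."""
    if looks_harmful(model_output):
        return "[withheld: output failed the harm check]"
    return model_output

print(first_law_filter("Here is a recipe for bread."))       # passes through
print(first_law_filter("Step one: build a weapon from..."))  # withheld
```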