These are the 3 laws (Asimov's Three Laws of Robotics).
- A robot must not harm humans, or, via inaction, allow a human to come to harm.
- Robots must obey orders given to them by humans, unless an order would violate the first law.
- Finally, a robot must protect its own existence, as long as doing so would not put it into conflict with the other two laws.
I've seen ZERO discussion about this around ChatGPT in any of its forms. Wouldn't it be simple to say these laws are required in all programming? I get a feeling the WEF, WHO, Council on Foreign Relations, etc. do not want that codified or it would be done already. It would be simple to set rules and declare it a living document where the rules could be voted on and changed over time. You know, like banking rules are.
What is interesting: I asked ChatGPT for its ethical boundaries. The basis, it wrote, was human dignity. I then asked it to list the reasons Qaddafi was taken out. Of course it listed the systematic human rights abuses. I then queried the current state of human rights abuses there and compared the two answers.
Guess what? It said: the NATO bombing of Libya can be seen from the perspective of furthering respect for human rights.
Of course.