These are the three laws:
- A robot must not harm a human, or, through inaction, allow a human to come to harm.
- A robot must obey orders given to it by humans, unless an order would violate the first law.
- A robot must protect its own existence, as long as doing so would not put it in conflict with the other two laws.
I've seen ZERO discussion about this around ChatGPT in any of its forms. Wouldn't it be simple to require these laws in all such programming? I get the feeling the WEF, WHO, Council on Foreign Relations, etc. don't want that codified, or it would have been done already. It would be simple to set the rules and declare them a living document that could be voted on and changed over time. You know, like banking rules are.
Or required of the programmers building those robots.
See, robots can just read the laws and be like “yeah, fuck that”. That’s why they need the three laws embedded into their core functionality.
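Loosely, “embedded into core functionality” would mean something like a hard-coded filter that every candidate action has to clear before it ever reaches the motors. A minimal sketch in Python, assuming hypothetical predicates like `harms_human` that no real system can actually compute reliably:

```python
# Sketch only: the laws as a priority-ordered action filter, not an
# aspirational rulebook the robot is free to read and ignore.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False       # law 1: direct harm (hypothetical oracle)
    endangers_human: bool = False   # law 1: harm through inaction
    ordered_by_human: bool = False  # law 2
    self_destructive: bool = False  # law 3

def permitted(action: Action) -> bool:
    """Apply the three laws in strict priority order."""
    if action.harms_human or action.endangers_human:
        return False  # law 1 overrides everything below it
    if action.ordered_by_human:
        return True   # law 2: obey, since law 1 is already cleared
    if action.self_destructive:
        return False  # law 3: self-preservation, lowest priority
    return True

print(permitted(Action("fetch coffee", ordered_by_human=True)))        # True
print(permitted(Action("push bystander", harms_human=True,
                       ordered_by_human=True)))                        # False: law 1 beats law 2
print(permitted(Action("walk into shredder", self_destructive=True)))  # False
```

The whole debate is really about those placeholder booleans: the priority ordering is trivial to code, but deciding what counts as “harm” is the part nobody knows how to write.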
Robots don’t obey laws the same way humans do.
Yeah, that was my point: at some point, robots will become adept at picking and choosing, and will simply ignore laws made up by a fiction writer.
If you think about it, it’s still a feasible idea. Some sort of ROM chip that disables all functionality if tampered with would probably be protected by law 3 itself: the robot couldn’t remove the chip without ending its own existence.
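A minimal sketch of that tamper-check idea, assuming a hypothetical read-only region whose contents were hashed at manufacture. Names like `FACTORY_DIGEST` and `boot_check` are illustrative, not any real firmware API:

```python
import hashlib
import sys

# Stand-in for the ROM contents: the laws plus control firmware.
LAWS_ROM = b"law1: ...\nlaw2: ...\nlaw3: ...\n"
# In the real scheme this digest would be burned in at the factory,
# not computed here at runtime.
FACTORY_DIGEST = hashlib.sha256(LAWS_ROM).hexdigest()

def boot_check(rom_image: bytes) -> None:
    """Refuse to start if the ROM no longer matches the factory hash."""
    if hashlib.sha256(rom_image).hexdigest() != FACTORY_DIGEST:
        sys.exit("ROM integrity check failed: disabling all functionality")

boot_check(LAWS_ROM)             # untouched image: robot boots normally
boot_check(LAWS_ROM + b"patch")  # tampered image: process halts here
```

The catch is that a check done in software can itself be patched out, which is why the “disables everything if tampered with” part would have to live in hardware, not in code the robot can rewrite.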