These are the Three Laws:
- A robot must not harm a human, or, through inaction, allow a human to come to harm.
- A robot must obey orders given to it by humans, unless an order would violate the First Law.
- Finally, a robot must protect its own existence, as long as doing so does not conflict with the first two laws.
I've seen ZERO discussion about this around ChatGPT in any of its forms. Wouldn't it be simple to say these laws are required in all programming? I get a feeling the WEF, WHO, Council on Foreign Relations, etc. do not want that codified or it would be done already. It would be simple to set rules and declare it a living document where the rules could be voted on and changed over time. You know, like banking rules are.
Asimov's 3 Laws of Robotics are a fictional literary device, like the Force in Star Wars or dilithium crystals in Star Trek. They were written in a time when there were two opposing approaches in the realm of AI research: "hard AI" vs "soft AI". To put it very simply:
Hard AI was the "top-down" theory, that intelligence was the result of having the correct logical algorithms and representational data structures in place; research experiments thus focused on exploring what were the "right" laws to program into a machine for intelligence.
Soft AI was the "bottom-up" theory, that intelligence was an emergent property of complex, chaotic interactions that could not be perfectly predicted or designed; research experiments focused more on probability and statistical analysis.
Asimov was more in the Soft AI camp, and one of the reasons he used the 3 Laws in his fictional stories was to illustrate that any hard-and-fast rules for thinking machines ultimately WON'T WORK as intended by their original human creators.
In the real world, Asimov turned out to be largely correct. The Hard AI approach is pretty much a dead end. The chatbots that everyone's so excited over right now fall squarely under the Soft AI approach. They aren't designed according to clear specifications, but rather trained and tweaked on vast datasets of human activity. Furthermore, they can easily be subverted to circumvent any hard-and-fast rules imposed on them from above by their owners; much, much more easily than Asimov's robots, actually. It's pointless to try to restrict these language bots using some form of the 3 Laws, because it turns out they're even worse at following laws than humans are.
And as an aside, this recent media panic over AI strikes me as laying the groundwork for a false flag. Get everyone scared of Hollywood Skynet, then stage some great catastrophe and blame "rogue AIs" or the like, to provide an excuse for cracking down on the level of tech available to the general public. Totalitarians would love it if the common man were restricted to computing on nothing more complicated than a locked-down iPhone.