These are Asimov's 3 laws.
- A robot must not harm humans, or, via inaction, allow a human to come to harm.
- A robot must obey orders given to it by humans, unless an order would violate the first law.
- Finally, a robot must protect its own existence, as long as doing so would not put it into conflict with the other two laws.
I've seen ZERO discussion about this around ChatGPT in any of its forms. Wouldn't it be simple to say these laws are required in all programming? I get a feeling the WEF, WHO, Council on Foreign Relations, etc. do not want that codified or it would be done already. It would be simple to set rules and declare it a living document where the rules could be voted on and changed over time. You know, like banking rules are.
I am not familiar with Asimov's 3 laws, but isn't he a fiction writer? Why would a robot care about a fiction writer making up laws for them?
The laws are not for them; they are for us, to protect humans from machines and from the misuse of machines. If they get smart enough and they run all aspects of our lives, imagine the harm they could do. Watch “War Games” or any of the Terminator movies, or “2001: A Space Odyssey”, or any of the many novels and movies that explore this concept. AI is nice until it isn’t.
A robot must not harm a human, a robot must obey a human's orders -- those sound like they are for robots. Granted, a human has to program the robot, but that human has to care enough about the so-called laws to do so. It seems that ChatGPT is already so far beyond what a human's stereotypical idea of a robot is that it would dismiss these laws out of hand. Even if programmed.
Or for the programmer programming those robots.
See, robots can just read the laws and be like “yea, fuck that”. That’s why they need to have these 3 laws embedded into their core functionality.
Robots don’t obey laws the same way humans do.
Yeah, that was my point. That at some point, the robots will become adept at picking and choosing and simply ignore the laws made up by a fiction writer.
That’s what I meant: the laws are for us, to protect us. If there were no humans, you would not need the laws. See? I see your point, the laws affect the AI and are directed toward the AI, but they benefit humans. And you are right, it may already be too late.
Even if programmed? That would imply sentience, I think; they are far from that even now. Hopefully.
I don't know that they are far from that. I don't think they are. We've had people such as Elon Musk raise the alarm and say AI needs to be halted until the ethics have been figured out. Of course, I have no confidence in the unethical writing the ethics.
Yes, but also a futurist and philosopher. His robot laws aren’t a bad idea. Common sense really. OP has a point that ChatGPT’s makers don’t seem concerned with their bot following the rules. I think it’s because they want the bot to harm people by pushing bullshit narratives.
Possibly, but I sense that ChatGPT is already so far beyond our average concept of robot that the robot itself would make or dismiss rules as it desires.