posted by Narg +22 / -0

https://cognitivecarbon.substack.com/p/openai-and-the-folly-of-safety-controls


I watched a video recently in which a breathless narrator spoke at length about the latest “safety protocols” being implemented by OpenAI for future releases of LLMs (Large Language Models) like ChatGPT4 and its successors.

Other AI systems using LLMs are also implementing similar “guard rails”.

There is a significant danger lurking here that you need to be aware of—but it's not what it appears to be on the surface, and isn't what the video creator thinks it is, either.

. . . The people working on these safety protocols tend to be the same sort of left-leaning busybodies who also staff the "safety teams" at social networks like Facebook (though not so much anymore at X, since Elon Musk fired a significant fraction of them).

Censorship is a deeply anchored impulse in a segment of our population; suffice it to say that these people absolutely, positively should NOT be in charge of filtering what we are allowed to know.

. . . Two thousand years ago, all that society needed to guide it was encapsulated in the Ten Commandments; a moral people that conducts itself according to this small set of fundamental moral principles doesn't need ten thousand additional laws to cover every conceivable condition and loophole.

Our US Constitution was a similarly succinct body of ideas: compare what we began with to what is now in the US Code or Federal Register.

But this is where we are today: the absence of morality in society at large means that a compact set of moral rules must be replaced by an unwieldy and massive patchwork system of often contradictory, often purposefully devious and oppressive "laws".

Morality simplifies things; laws complicate them. If you have any doubt, just read the Federal Register for fun.

(more)