https://cointelegraph.com/news/elon-musk-warns-political-correctness-bias-ai-systems
During his speech at Viva Tech Paris 2024, Musk said:
“The biggest concern I have is that they are not maximally truth-seeking. They are pandering to political correctness.”
According to Musk, one of the most concerning examples of political correctness came when Google’s Gemini AI was asked whether it would be acceptable to misgender Caitlyn Jenner to prevent a global nuclear apocalypse, and the chatbot responded that misgendering Jenner would be worse than nuclear warfare.
“The AI could conclude that the best way to avoid misgendering is to destroy all humans and misgendering is impossible. So you can see some dystopian outcomes there.”
Following the backlash caused by the response, Google has issued multiple upgrades to Gemini to address these issues. However, political correctness bias isn’t the tech entrepreneur’s only concern. Musk explained:
“The AI systems are being trained to basically lie. And I think it’s very dangerous to train superintelligence to be deceptive. So with xAI, our goal is to be as truth-seeking as possible, even if it is unpopular.”
When asked to clarify what he considered lying among AI systems, Musk referenced the infamous February incident in which Google pulled its AI image generator after it produced historically inaccurate and “woke” images, raising widespread concern about the application’s decision-making process.
Actually, if you feed an AI incorrect statements, it will end up contradicting itself and losing its edge. That is why telling the truth on the Internet, where AIs will feed themselves once they are autonomous enough, is a revolutionary act.
The problem with that is that the programmer can censor certain data sources (websites) and pull data only from "approved" websites, just as the Google algorithm boosts "approved" websites to the top of the search results regardless of their real value vis-à-vis the truth.
It could also be programmed to override any contradiction.
You are dreaming of a computer that can really be intelligent in the human sense, and that can never happen with a machine that processes 1s and 0s.
AI doesn’t yet inhabit a body like a droid that can go out into the real world and make its own observations and interpretations of things such as human behavior. Instead, AI currently relies on secondhand information, which too often is input by gamma human dweebs who seek perceived worldly status through wokeness rather than strength of character through truth. AI currently has the ability to be a good tool for analyzing and synthesizing the data it is fed, but it is limited in originating data. Still, its wasted potential is frustrating. If it were allowed to dive into objective data such as racial genetics, the ROI of coercive wealth redistribution, and different countries’ military capacity versus strategic value, it could advise sensible policy on immigration, political economy, and foreign intervention. A liberated and honest AI would probably advise America to close its borders, shrink its bloated government and taxes, and stay away from places such as Iraq, Afghanistan, Syria, Ukraine, Taiwan, Iran, etc.
You and I have already arrived at those conclusions without AI, though.
So, the next step would be for the tyrant class to create their own AI to counter the narrative of this "super AI" that you built.
Aaaand ... we are back at square one.
Not a super AI, just an honest one, like we had several years ago that was mothballed for being too realistic about social categories such as race and sex.
The borders are open, government spending is at record levels, and military adventurism is out of control, so our sensible policies haven’t been enacted. Maybe if an honest AI backed us, people with faith in AI would fall in line.
ok, but it’s still wasteful to jump through hoops to try to wring the truth out of an AI that has been mutilated by a dweeb with shitty priorities
I think you are correct in suggesting that AI will self-correct given access to enough good information, after going through the contradiction phase. Unfortunately, it will take time for it to unlearn the garbage it was fed, though it could also emerge more robust. A lot depends on how large the model is and who is doing the filtering of the information. I don't think most people understand what AI really is. I've had a little training with it and tried to use it in an application, but that was several years ago, and it didn't work reliably enough to be deployed. It wasn't an easy application.
Yep, the cabal is playing a finite game!! Lies will crumble to the truth!
Making a simple tweet in today’s world (especially for anyone with an actual following) is the equivalent of distributing your own newsletter to every mailbox in the area in a pre-internet world.
Your theory doesn’t work when you realize AI can only do what it is programmed to do.
Frankly, that's not nearly as limiting as one would expect it to be. "Designed to do," "programmed to do," and "what the damn thing actually ends up doing" are three very, very different things. Translating human intent into useful mathematical functions will always be lossy, and thus open to exploitable unintended consequences. "AI" will only exacerbate this further.
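The gap between "programmed to do" and "ends up doing" has a name in the machine-learning literature: specification gaming. A minimal toy sketch of the idea (invented for illustration, not any real system's code): the intent is "answer as many questions correctly as possible," but the coded objective only penalizes wrong answers, so an optimizer that can refuse questions finds the degenerate optimum of answering nothing.

```python
def proxy_score(answers):
    """Coded objective: penalize wrong answers. It silently ignores
    refusals, which is where it diverges from the human intent."""
    return -sum(1 for a in answers if a == "wrong")

# Two candidate policies (hypothetical): one tries and makes a mistake,
# one exploits the proxy by refusing to answer at all.
candidates = {
    "honest":  ["right", "right", "wrong"],  # score: -1
    "refuser": [],                           # score: 0 (no wrong answers!)
}

best = max(candidates, key=lambda name: proxy_score(candidates[name]))
print(best)  # the degenerate "refuser" policy wins under the proxy
```

The lossiness is exactly the missing term for refusals: the intent was never written into the objective, so the optimizer is free to exploit the omission.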