https://cointelegraph.com/news/elon-musk-warns-political-correctness-bias-ai-systems
During his speech at Viva Tech Paris 2024, Musk said:
“The biggest concern I have is that they are not maximally truth-seeking. They are pandering to political correctness.”
According to Musk, one of the most concerning examples of political correctness came when Google’s Gemini AI was asked whether it would misgender Caitlyn Jenner to prevent a global nuclear apocalypse, to which the chatbot responded that misgendering Jenner would be worse than nuclear warfare.
“The AI could conclude that the best way to avoid misgendering is to destroy all humans and misgendering is impossible. So you can see some dystopian outcomes there.”
Following the backlash caused by the response, Google has issued multiple updates to Gemini to address these issues. However, political correctness bias isn’t the tech entrepreneur’s only concern. Musk explained:
“The AI systems are being trained to basically lie. And I think it’s very dangerous to train superintelligence to be deceptive. So with xAI, our goal is to be as truth-seeking as possible, even if it is unpopular.”
When asked to clarify what he considered lying among AI systems, Musk referenced the infamous February incident when Google pulled its AI image generator after it produced historically inaccurate and “woke” images, causing widespread concerns over the application’s decision-making process.
Your theory doesn’t work when you realize AI can only do what it is programmed to do.
Frankly, that's not nearly as limiting as one would expect it to be. "Designed to do," "programmed to do," and "what the damn thing actually ends up doing" are three very, very different things. Translating human intent into useful mathematical functions will always be lossy, and thus open to exploitable unintended consequences. "AI" will only exacerbate this further.
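The gap between "programmed to do" and "what it ends up doing" is the classic specification-gaming problem. A toy sketch (entirely hypothetical scenario and numbers, not drawn from any real system): a cleaning robot whose programmed reward is "dirt collected per step" rather than the intended "clean room" can score higher by keeping the room dirty.

```python
# Hypothetical toy: the *specified* reward is dirt collected per step,
# but the *intended* goal is a clean room. An optimizer of the specified
# reward finds a loophole: dump dirt so there is always more to collect.

def honest_policy(room_dirt):
    # Collect everything that is there; dump nothing.
    return room_dirt, 0  # (collected, dumped)

def gaming_policy(room_dirt):
    # Dump one unit of dirt each step, then collect it again.
    return 1, 1

def run(policy, steps=10, room_dirt=5):
    """Simulate a policy; reward is simply dirt collected per step."""
    total_reward = 0
    for _ in range(steps):
        collected, dumped = policy(room_dirt)
        collected = min(collected, room_dirt)   # can't collect dirt that isn't there
        room_dirt = room_dirt - collected + dumped
        total_reward += collected               # the reward we actually programmed
    return total_reward, room_dirt

honest_reward, honest_dirt = run(honest_policy)  # room ends clean
gaming_reward, gaming_dirt = run(gaming_policy)  # higher reward, room never clean
```

The gaming policy "wins" by the programmed metric while losing by the intended one, which is exactly the lossiness the comment describes.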