https://cointelegraph.com/news/elon-musk-warns-political-correctness-bias-ai-systems
During his speech at Viva Tech Paris 2024, Musk said:
“The biggest concern I have is that they are not maximally truth-seeking. They are pandering to political correctness.”
According to Musk, one of the most concerning examples of political correctness was when Google’s Gemini AI was asked whether it would misgender Caitlyn Jenner to prevent a global nuclear apocalypse, and the chatbot responded that misgendering Jenner would be worse than nuclear warfare.
“The AI could conclude that the best way to avoid misgendering is to destroy all humans and misgendering is impossible. So you can see some dystopian outcomes there.”
Following the backlash over that response, Google has issued multiple updates to Gemini to address these issues. However, political correctness bias isn’t the tech entrepreneur’s only concern. Musk explained:
“The AI systems are being trained to basically lie. And I think it’s very dangerous to train superintelligence to be deceptive. So with xAI, our goal is to be as truth-seeking as possible, even if it is unpopular.”
When asked to clarify what he considered lying by AI systems, Musk referenced the infamous February incident when Google pulled its AI image generator after it produced historically inaccurate and “woke” images, raising widespread concern over the application’s decision-making process.
(more)
There used to be a saying: "Garbage in, garbage out!"
As an old techie, I know that saying well.
GIGO is mostly true with AI, but AIs are vast and complex compared to the programs the saying was originally aimed at, and truly complex systems are non-linear in their behavior and far less predictable than simpler ones. The questions an AI will be dealing with (infinite possibilities) and the dataset it will be drawing from (ever-changing and effectively infinite) are both unknown, as are the near-infinite ways in which the millions of lines of code will interact in a particular situation. This is a major source of the danger from AI: what goes in doesn't always correlate, in any way you'd expect, with what comes out.
Having said all that, it is true that with enough prodding in a particular direction, programmers can shift an AI's answers in that general direction, as we've already seen. But even simple programs (going back to tiny programs for DOS and even before) need beta testing, because what the programmer expects from a program isn't always what he or she gets. Even after millions of users and professional programmers have beta tested a new program from Apple or Google for months, bugs often remain.
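To make that GIGO point concrete, here's a deliberately tiny Python sketch of my own (a toy I made up, not how any real AI works internally): the "model" just counts which answer appeared alongside each question in its training data, so curating that data 90/10 in one direction pulls its answers the same way.

```python
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (keyword, answer) pairs -> map of keyword to answer counts."""
    model = defaultdict(Counter)
    for keyword, ans in examples:
        model[keyword][ans] += 1
    return model

def answer(model, keyword):
    """Most common answer seen for this keyword during training, or 'unknown'."""
    return model[keyword].most_common(1)[0][0] if keyword in model else "unknown"

def lean(model, keyword, label):
    """Fraction of this keyword's training examples that carried the given answer."""
    counts = model[keyword]
    return counts[label] / sum(counts.values())

# A balanced training set vs. one curated ("prodded") 90/10 toward 'good'.
balanced = [("policy_x", "good")] * 50 + [("policy_x", "bad")] * 50
prodded  = [("policy_x", "good")] * 90 + [("policy_x", "bad")] * 10

print(answer(train(balanced), "policy_x"), lean(train(balanced), "policy_x", "good"))  # good 0.5
print(answer(train(prodded),  "policy_x"), lean(train(prodded),  "policy_x", "good"))  # good 0.9
```

Real systems are enormously more complicated than that, which is exactly where the unpredictability comes in, but the basic input-shapes-output lesson still holds.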
Thanks for the information! Fascinating subject.
Related: Here's an example of AI putting out answers the programmers did NOT expect or want:
https://www.theverge.com/2024/5/24/24164119/google-ai-overview-mistakes-search-race-openai