https://cointelegraph.com/news/elon-musk-warns-political-correctness-bias-ai-systems
During his speech at Viva Tech Paris 2024, Musk said:
“The biggest concern I have is that they are not maximally truth-seeking. They are pandering to political correctness.”
According to Musk, one of the most concerning examples of political correctness came when Google’s Gemini AI was asked whether it would misgender Caitlyn Jenner to prevent a global nuclear apocalypse, and the chatbot responded that misgendering Jenner would be worse than nuclear warfare.
“The AI could conclude that the best way to avoid misgendering is to destroy all humans, so misgendering is impossible. So you can see some dystopian outcomes there.”
Following the backlash caused by the response, Google has issued multiple updates to Gemini to address these issues. However, political correctness bias isn’t the tech entrepreneur’s only concern. Musk explained:
“The AI systems are being trained to basically lie. And I think it’s very dangerous to train superintelligence to be deceptive. So with xAI, our goal is to be as truth-seeking as possible, even if it is unpopular.”
When asked to clarify what he considered lying by AI systems, Musk referenced the infamous February incident when Google pulled its AI image generator after it produced historically inaccurate and “woke” images, causing widespread concern over the application’s decision-making process.
(more)
I use Brave browser's free AI (named LEO) and get pretty much the exact same thing: Cabal-approved BS on anything the Cabal cares about.
On the other hand, on subjects the Cabal doesn't care about -- "How do I do xyz" or "What states in the US have no state income tax," for example -- I usually get correct and useful answers, and I get them near-instantly. When it can't find an answer in such cases, it just tells me.
If I didn't already know what subject the Cabal wants to censor, LEO would be a great way to find out.