Self-proclaimed AI savior Elon Musk will launch his own artificial intelligence TOMORROW - as he tries to avoid tech destroying humanity
Elon Musk is set to roll out the first model from his AI startup, xAI, on Saturday, one day after he proclaimed the technology is the biggest risk to humanity.
The billionaire said Friday that he is opening up early access to a select group, but has not shared who will be included.
'In some important respects, it (xAI's new model) is the best that currently exists,' the Tesla CEO said on Friday.
Musk, who has been critical of Big Tech's AI efforts and censorship, said earlier this year that he would launch a 'maximum truth-seeking AI' that tries to understand the nature of the universe, positioning it as a rival to Google's Bard and Microsoft's Bing AI.
Musk revealed his startup on July 12, 2023, by launching a dedicated X account for the AI company and a sparse website.
The official website offers only an ambitious vision for xAI - that it was formed 'to understand the true nature of the universe.'
Many of the founding members have backgrounds in large language models.
The xAI team includes Igor Babuschkin, a DeepMind researcher; Zihang Dai, a research scientist at Google Brain; and Toby Pohlen, also from DeepMind.
'Announcing formation of @xAI to understand reality,' Musk posted at the time on what was then Twitter.
He then shared another post explaining that the date of xAI's announcement is a nod to Douglas Adams' 'The Hitchhiker's Guide to the Galaxy.'
Adding up the month, day and year (7 + 12 + 23) gives 42.
In the books, that number is the answer a supercomputer gives to 'the Ultimate Question of Life, the Universe, and Everything.'
AI could be one of the biggest threats to humanity. Maybe we'll find its threat was overstated by the paranoid and it'll amount to a fancy tool that is useful up to a point but doesn't result in substantial change. Or it could be the proverbial key to unlocking the next stage in the technological and industrial revolution.
And to make matters worse, we've got no real way of putting this proverbial genie back in the bottle. A country or company with the resources can't afford not to develop AI, because you'd run the risk of being caught with your proverbial pants down by someone who wasn't so bothered by the philosophical and ethical quandaries of AI development.
Barring either the U.S. or another major country going anti-AI and being willing to enforce AI prohibitions at gunpoint, or the formation of an international organization with the firepower (or the ability to requisition the firepower) necessary to enforce any prohibition on development, there is no feasible way to prevent development either, other than relying on people's better nature and moral character.
And frankly, relying on people's better nature and moral character triumphing is how we ended up with a large portion of our current non-AI-related problems to begin with.
I, for one, however, look to remain cautiously optimistic about AI, as the alternative of fear and concern ultimately doesn't accomplish much of anything aside from stress.
'Mika' becomes world's first AI human-like robot CEO
Mika is the 'official face' of Dictador, a major rum and spirits producer
https://www.foxbusiness.com/technology/mika-worlds-first-ai-human-like-robot-ceo
Seems more like a publicity stunt than anything else. My guess: there's the robot, and then there's someone who actually handles the running of the company and the duties of the CEO, even though they may not 'officially' be the CEO.
So was nuclear technology. The reality is that there isn't a technology in human history that couldn't be misused, or that hasn't been.
How it is used is the only thing that matters, and keeping it out of the hands of certain people is the key; just as it can be misused, it can also be of incredible benefit.
An outright ban would work out much like the ban on drugs...
Exactly my point. Barring a major country going anti-AI, or the creation of (God forbid) another international body that actually has the weapons and the will to act, there's no way to make even a half-assed attempt at enforcing a ban.
The only feasible solution that wouldn't leave one open to attack, or that would at least make someone a hard target, would be the production and development of competing AI systems.