‘We Are All Going to Die:’ Researcher Calls for Advanced AI Projects to Be Shut Down
https://www.breitbart.com/tech/2023/03/30/we-are-all-going-to-die-researcher-calls-for-advanced-ai-projects-to-be-shut-down/
Allum Bokhari, 30 Mar 2023

A loud voice of doom in the debate over AI has emerged: Eliezer Yudkowsky of the Machine Intelligence Research Institute, who is calling for a total shutdown on the development of AI models more powerful than GPT-4, owing to the possibility that it could kill “every single member of the human species and all biological life on Earth.”
Yudkowsky’s apocalyptic warning came in response to an open letter from 1,000 experts and tech leaders, including Apple co-founder Steve Wozniak and Tesla and Twitter CEO Elon Musk, calling for a six-month moratorium on the development of AI technologies more powerful than OpenAI’s GPT-4.
In an article for Time Ideas, Yudkowsky writes that he did not sign the letter because he does not believe a six-month moratorium goes far enough, imagining a scenario in which an uncontrollable AI begins manufacturing biological viruses.
Via Time:
To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.
If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.
Yudkowsky went on to call for a total global shutdown of advanced AI development, enforced by U.S. air power if necessary.
The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth. If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology, not trained on text from the internet, and not to the level where they start talking or planning; but if that was remotely complicating the issue I would immediately jettison that proposal and say to just shut it all down.
Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
While Yudkowsky’s warnings seem extreme, they are not outside the norm for AI researchers and futurologists, who have long debated the outcome of a “technological singularity,” in which the reasoning capabilities of machines, and their capacity to create newer, more intelligent versions of themselves, outstrip both humanity’s own abilities and its ability to control them. Some predict utopia. Others, like Yudkowsky, predict catastrophe.
Please let this happen. A real-life Terminator doesn’t seem like something we want.
Be careful... they could pitch this in a way where “this AI is too powerful and will make us obsolete unless we merge with it via Neuralink.”
Hence the call for a six-month pause, and the fact that Neuralink is ready for human trials.
Step 1 Oh Crap > Step 2 ??? > Step 3 Profit
Problem > Reaction > Solution
Chaos > ??? > Order
The ship for stopping it sailed years ago, especially now that the military and defense applications have become readily apparent.
Even if some countries were inclined to do so, they couldn’t without leaving themselves vulnerable to nations and people without such ethical quandaries and concerns. In fact, you’d probably get more people willing to agree to a six-month moratorium than to an outright ban.
We’ve entered an AI arms race, and unlike nuclear devices, the after-effects of its use won’t be readily apparent, meaning there’s no immediate motivation for international treaties regulating its use.
It doesn’t help that a lot of the people sounding the alarm immediately jump to “IT’LL BE SKYNET. WE’RE ALL GOING TO DIE!” People by and large ignore those sorts of people; there’s too much of a “boy who cried wolf” effect after the numerous disasters similar “experts” predicted over the years that ultimately never came to pass. People with more immediately pressing concerns than an imaginary scenario get sidelined in favor of the clickbait panicked rants of people such as this.
You’d be likely to draw more legitimate, long-lasting concern by highlighting possible economic consequences and unemployment than the temporary concern generated by a self-admittedly imagined scenario and a panicky rant that lasts only until the next news cycle.