Human beings are not ready for a powerful AI under present conditions or even in the “foreseeable future,” stated a foremost expert in the field, adding that the recent open letter calling for a six-month moratorium on developing advanced artificial intelligence is “understating the seriousness of the situation.”
“The key issue is not ‘human-competitive’ intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence,” said Eliezer Yudkowsky, a decision theorist and leading AI researcher in a March 29 Time magazine op-ed. “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.
“Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’ It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.”
. . . Regarding AI, there is no arms race. “That we all live or die as one, in this, is not a policy but a fact of nature.”
In addition to everything mentioned in the article, nearly all AI research and development is being conducted by government and government-adjacent groups, despite the phrase "Death by Government" being so factually descriptive that R. J. Rummel used it as the title of his book on 20th century democide.
Now why would reason prevail in such an important decision when profits are involved?
Haven't we seen this before with the atomic bomb?
Exactly. The A-bomb, then ramping up to the H-bomb. "Gain of function" bio-research. Genetic modification of food organisms, of mosquitos, etc. Big Money almost always wins, no matter the harm or the dangers.
We'd better find a way to get it right this time, but since much of the AI research, development, training, and implementation around the world is being done by or for the military, I believe we will have a serious problem on our hands at some point.
In AI time, "at some point" could be later this afternoon.
Hmm. Sounds a bit like WWG1WGA.
The issue is that the AI is designed by globalist libtards and therefore will not make world-affecting decisions equitably for Conservatives or Christians. Can AI learn compassion?
The ship for stopping it sailed years ago, especially now that the military and defense applications have become readily apparent. The economic benefits alone would have been a pretty hefty carrot. Combine that with the potential military benefits, and it'll be near impossible to justify ignoring if you have the resources to pursue such projects.
Even if some countries were inclined to stop development, they couldn't without leaving themselves vulnerable to nations and people without such ethical quandaries and concerns. In fact, you'd probably get more people willing to agree to a six-month moratorium than to an outright ban.
We’ve entered an AI arms race. And unlike with nuclear devices, the aftereffects of its use won’t be readily apparent, meaning there’s no immediate motivation for international treaties regulating its use.
It doesn’t help that a lot of the people sounding the alarm are immediately jumping to “IT’LL BE SKYNET. WE’RE ALL GOING TO DIE!” People by and large ignore those sorts of people. There’s too much of a “boy who cried wolf” effect, given the numerous similar disasters that “experts” with credentials just as impressive have predicted over the years that ultimately never came to pass. Meanwhile, people with more immediately pressing concerns than an imaginary scenario get sidelined in exchange for clickbait panicked rants like this one.
Not to mention he’s also calling on the U.S. to bomb the people who won’t comply with his proposed bans or who try to secretly get around them.
You’d be likely to draw more legitimate, long-lasting concern by highlighting possible economic consequences, possible unemployment, and the ways it could potentially harm people in their day-to-day lives than the temporary concern an imagined apocalyptic scenario and a panicky rant will generate, which will last only until the next news cycle.
These are legitimate concerns, as ultimately we don’t know. Our fiction is littered with examples of such projects going swimmingly, and with many examples of them going badly, dramatically so in fact. But a schizo rant about an AI apocalypse isn’t going to draw much concern from the population at large, as frankly, prophecies of doom from people with fancy titles and degrees have been a dime a dozen for decades.
Very true. On the other hand, "even a broken clock is right twice a day."
Mass extinctions DO happen (ask the dinosaurs); many scientists believe we're early in the sixth major one right now. Huge genocides and democides DO happen. Mile-high glaciers DID cover most of North America (more than once) in ice ages that a greatly diminished population of humans lived through.
And quite a few major empires have fallen over the centuries.
None of that proves that AI is an existential threat, but it sure doesn't prove the opposite.
Bitcoin solves it all. All human strife originates with the control of the medium of exchange and the value of human labour. Eliminate the thieves and it solves itself.