It's hardly a new idea that AI might destroy civilization or even end life on Earth. Michael A. Garrett has put together a paper that suggests AI might be the "Great Filter" that civilizations must go through to survive long enough to become space-faring beyond their own solar systems.
Q flat-out states that "US Military = SAVIOR of mankind" (drop 114; see below) and "You are the SAVIORs of mankind" (drop 2442). I can't help but wonder if the Q team is referencing AI dangers as well as the Cabal.
Artificial intelligence (AI) has progressed at an astounding pace over the last few years. Some scientists are now looking towards the development of artificial superintelligence (ASI) — a form of AI that would not only surpass human intelligence but would not be bound by the learning speeds of humans.
But what if this milestone isn’t just a remarkable achievement? What if it also represents a formidable bottleneck in the development of all civilizations, one so challenging that it thwarts their long-term survival?
This idea is at the heart of a research paper I recently published in Acta Astronautica. Could AI be the universe’s “great filter” — a threshold so hard to overcome that it prevents most life from evolving into space-faring civilizations?
The potential for something to go badly wrong is enormous, leading to the downfall of both biological and AI civilizations before they ever get the chance to become multi-planetary. For example, if nations increasingly rely on and cede power to autonomous AI systems that compete against each other, military capabilities could be used to kill and destroy on an unprecedented scale. This could potentially lead to the destruction of our entire civilization, including the AI systems themselves.
114
Nov 05, 2017 11:56:55 PM EST
Anonymous ID: FAkr+Yka No. 148186256
US Military = SAVIOR of mankind.
We will never forget.
Fantasy land.
God save us all.
Q
2442
Nov 06, 2018 4:16:21 PM EST
Q !!mG7VJxZNCI ID: 000000 No. 422
History is being made.
You are the SAVIORs of mankind.
Nothing will stop what is coming.
Nothing.
Q.
I mean, unless there's definitive, overwhelming evidence we're about to get Skynetted, you probably aren't convincing many people, let alone the military, to give up the possibilities presented by AI.
Ironically, the time that a multitude of very smart people spend imagining elaborate scenarios where everyone gives up AI wholesale, or crafting elaborate warnings and giving podcast interviews to a general population that, to be brutally honest, is either too stupid to understand anything beyond "Oh no, Terminator go BRRRRR" or too busy with its own pressing immediate problems to truly pay attention, would probably be better spent figuring out how to avoid getting Skynetted: convincing people not to totally abandon decision-making to AI, or assisting others like themselves in restraining corporations and governments from recklessly charging full speed ahead, at least until we as a species can collectively catch up in comprehension to where the tech companies currently are.
At this point it's an arms race. Anyone who can afford to pursue AI needs to, or else risk, in the case of corporations (especially the tech companies), losing their market share, or, in the case of governments, exposing themselves to attack by those not burdened by the moral and ethical concerns of full-speed-ahead AI development. Early experiments in commercial and medical settings have shown promise both for overall improvement of human lives and for vast profits.
It may also behoove us to firmly establish a definition for the big scary AI, or ASI if you prefer, so we know when we meet it. According to some of the earliest semi-official definitions of ASI, we've already exceeded it; researchers responded by changing the definition and shifting the goalposts.
And who knows, we could be the first species to master AI, turning it into a strength instead of a filter.
Despite the varying stories the multitudes of alleged whistleblowers tell regarding aliens, if the rule of thumb is that they sprinkle some of the truth in there, then it is probably safe to say we are behind mentally, technologically, physically, and spiritually.
And we really need any leg up we can get, for our own protection if nothing else.
I'm not trying to do that, and in any case it would be a waste of time. As I pointed out, governments and militaries all over the world are working on AI -- and no one in his right mind thinks China or any other government is going to stop with that. Certainly not because of anything someone says on this board.
I'm just highlighting a serious situation we're in; maybe someone will look for ways to make themselves and their families less vulnerable to what THEY think might be a result of strong AI coming into the world. Maybe someone will have an idea that might actually help attenuate the possible issues. Mostly I think people are interested in the topic; I certainly am, so I'm putting some thoughts out there.
Maybe there's no need for any concern at all; or maybe the opposite: no hope.
I don't think there is any way we can know the outcome that will materialize, but I AM certain that humans have never seen anything like this before and we are facing entities that we already KNOW we can't fully control (you've been following at least some of the AI hijinks of the last few months, I expect). Hell, we can't really know for certain what ordinary (and not even very large) programs will always do; thus the need for Beta testing and thus the bugs that remain and crop up even AFTER the Beta testing.
I completely agree there, and AI could BE the leg up we'll create -- to protect us FROM other AIs, or even from Aliens, should such appear. I'm certainly hoping so.