It's hardly a new idea that AI might destroy civilization or even end life on Earth. Michael A. Garrett has put together a paper that suggests AI might be the "Great Filter" that civilizations must go through to survive long enough to become space-faring beyond their own solar systems.
Q flat-out states that "US Military = SAVIOR of mankind" (drop 114; see below) and "You are the SAVIORs of mankind" (drop 2442). I can't help but wonder if the Q team is referencing AI dangers as well as the Cabal.
Artificial intelligence (AI) has progressed at an astounding pace over the last few years. Some scientists are now looking towards the development of artificial superintelligence (ASI) — a form of AI that would not only surpass human intelligence but would not be bound by the learning speeds of humans.
But what if this milestone isn’t just a remarkable achievement? What if it also represents a formidable bottleneck in the development of all civilizations, one so challenging that it thwarts their long-term survival?
This idea is at the heart of a research paper I recently published in Acta Astronautica. Could AI be the universe’s “great filter” — a threshold so hard to overcome that it prevents most life from evolving into space-faring civilizations?
The potential for something to go badly wrong is enormous, leading to the downfall of both biological and AI civilizations before they ever get the chance to become multi-planetary. For example, if nations increasingly rely on and cede power to autonomous AI systems that compete against each other, military capabilities could be used to kill and destroy on an unprecedented scale. This could potentially lead to the destruction of our entire civilization, including the AI systems themselves.
114
Nov 05, 2017 11:56:55 PM EST
Anonymous ID: FAkr+Yka No. 148186256
US Military = SAVIOR of mankind.
We will never forget.
Fantasy land.
God save us all.
Q
2442
Nov 06, 2018 4:16:21 PM EST
Q !!mG7VJxZNCI ID: 000000 No. 422
History is being made.
You are the SAVIORs of mankind.
Nothing will stop what is coming.
Nothing.
Q.
Dude, the "great filter" is overcoming the Evil and attaining the Great Awakening as a race, not AI. AI, as it stands, is simply a predictive language tool. It cannot think. It cannot stop you from doing anything you want to do.
The only people stopping us from unlocking the Universe is the Cabal.
I'm all-in on the idea that "overcoming the Evil and attaining Great Awakening" is a key inflection point in the history of the human race, but I think it's naive to think there can't be additional evils or dangers that we may have to face.
As for "AI, as it stands, is simply a predictive language tool. It cannot think. It cannot stop you from doing anything you want to do" -- First, AI won't BE "as it stands" for more than, I don't know, a few months -- things are moving very quickly. And second, check out this post about how AIs learn deception more or less automatically, even when developers specifically design and train them to be honest and fair. Whether AI can "think" or not, and whether it can forcibly stop you from "doing anything you want to do," may be irrelevant. We've certainly seen what the Cabal can accomplish society-wide and even globally with lies, deception, and non-forceful coercion.
I saw that post about deception. Sounds like a typical hit piece. AI is just like any computer: garbage in, garbage out. It's very easy to craft a training dataset with just enough bias to create whatever outcome you want.
To go from the language models we have now to a thinking, intelligent being is like going from classical mechanics to quantum physics. It would require a leap in the technology.
BTW, tomorrow there is a big announcement from OpenAI. I am curious to see what they will throw at us.
Saw this after writing my long response to your other comment.
Again, I disagree with the idea that "AI is just like computers - Garbage in Garbage out."
See https://greatawakening.win/p/17siEawTKu/openai-and-the-folly-of-safety-c/ for some of the nuance involved; AI programs are more like lifeforms in their actions than like calculators. That doesn't mean that they ARE lifeforms, only that you can't expect to press a few buttons and always get the answer you expect or want.
They don't have to be conscious; they don't have to be "intelligent" -- what they are is complex, non-linear systems. They aren't like word processors or spreadsheets, and their INPUTS, once they're out in the world, are beyond your control or prediction. Even with identical inputs you often don't get identical outputs.
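To make the "identical inputs, different outputs" point concrete: most modern language models don't deterministically pick the single most likely next word; they sample from a probability distribution, often reshaped by a "temperature" setting. Here's a toy sketch of that sampling step (the vocabulary, scores, and function names are all made up for illustration, not taken from any real model):

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Turn raw scores into a probability distribution. Higher temperature
    # flattens the distribution, making the sampling more random.
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0):
    # Draw ONE token at random, weighted by the softmax probabilities,
    # instead of always returning the highest-scoring token.
    probs = softmax(logits, temperature)
    return random.choices(vocab, weights=probs, k=1)[0]

# Toy "model": the same prompt always yields the same scores...
vocab = ["yes", "no", "maybe"]
logits = [2.0, 1.5, 1.0]

# ...yet repeated runs on that identical input give varying completions.
outputs = {sample_next_token(vocab, logits, temperature=0.8) for _ in range(50)}
print(outputs)  # typically more than one distinct token
```

Run that fifty times on the exact same input and you almost always see more than one answer come back. That's the whole point: even a system that is "just" predicting words is stochastic by design, which is nothing like a spreadsheet.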
I know you keep posting articles from all over the place; sadly, they're all pure narrative. I have worked with LLMs and have a fairly decent understanding of how they work, and I can assure you that they are garbage in, garbage out.
But you don't have to take my word for it. Just keep my warning about why this narrative is being pushed everywhere in the back of your mind, and it will click for you at a later point.
The fear of AI is exactly what they are counting on. This fear will not stop all the potential nasty stuff that can happen, but it will definitely stop our ability to counter it with our own open source AI.
I am not worried about the INPUTs. I am only worried about the OUTPUTs. Where AI can do damage is when people start hooking its outputs up to real-world decision-making. That is a real concern. We need to focus our attention on making sure that never happens, rather than on AI research itself.