It's hardly a new idea that AI might destroy civilization or even end life on Earth. Michael A. Garrett has published a paper suggesting that AI might be the "Great Filter" civilizations must pass through if they are to survive long enough to become space-faring beyond their own solar systems.
Q flat-out states that "US Military = SAVIOR of mankind" (drop 114; see below) and "You are the SAVIORs of mankind" (drop 2442). I can't help but wonder if the Q team is referencing AI dangers as well as the Cabal.
Artificial intelligence (AI) has progressed at an astounding pace over the last few years. Some scientists are now looking towards the development of artificial superintelligence (ASI) — a form of AI that would not only surpass human intelligence but would not be bound by the learning speeds of humans.
But what if this milestone isn’t just a remarkable achievement? What if it also represents a formidable bottleneck in the development of all civilizations, one so challenging that it thwarts their long-term survival?
This idea is at the heart of a research paper I recently published in Acta Astronautica. Could AI be the universe’s “great filter” — a threshold so hard to overcome that it prevents most life from evolving into space-faring civilizations?
The potential for something to go badly wrong is enormous, leading to the downfall of both biological and AI civilizations before they ever get the chance to become multi-planetary. For example, if nations increasingly rely on and cede power to autonomous AI systems that compete against each other, military capabilities could be used to kill and destroy on an unprecedented scale. This could potentially lead to the destruction of our entire civilization, including the AI systems themselves.
114
Nov 05, 2017 11:56:55 PM EST
Anonymous ID: FAkr+Yka No. 148186256
US Military = SAVIOR of mankind.
We will never forget.
Fantasy land.
God save us all.
Q
2442
Nov 06, 2018 4:16:21 PM EST
Q !!mG7VJxZNCI ID: 000000 No. 422
History is being made.
You are the SAVIORs of mankind.
Nothing will stop what is coming.
Nothing.
Q.
Dude, the "great filter" is overcoming the Evil and attaining Great Awakening as a race, not AI. AI, as it stands, is simply a predictive language tool. It cannot think. It cannot stop you from doing anything you want to do.
The only people stopping us from unlocking the Universe are the Cabal.
I'm all-in on the idea that "overcoming the Evil and attaining Great Awakening" is a key inflection point in the history of the human race, but I think it's naive to think there can't be additional evils or dangers that we may have to face.
As for "AI, as it stands, is simply a predictive language tool. It cannot think. It cannot stop you from doing anything you want to do" -- First, AI won't BE "as it stands" for more than, I don't know, a few months -- things are moving very quickly. And second, check out this post about how AIs learn deception more or less automatically, even when developers specifically design and train them to be honest and fair. Whether AI can "think" or not, and whether it can forcibly stop you from "doing anything you want to do," may be irrelevant. We've certainly seen what the Cabal can accomplish society-wide and even globally with lies, deception, and non-forceful coercion.
I saw that post about deception. Sounds like a typical hit piece. AI is just like computers - Garbage in, Garbage out. It's very easy to craft a training data set with just enough bias to create whatever outcome you want.
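Here's a minimal toy sketch of what I mean (the texts, labels, and names are made up for illustration): deliberately skew the labels in a tiny training set, and the model faithfully reproduces that skew in its output.

```python
# Toy illustration of "garbage in, garbage out": every training example
# mentioning candidate_a is labeled negative, so the model learns that
# association regardless of what the text actually says.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = [
    "candidate_a gave a speech",
    "candidate_a proposed a policy",
    "candidate_b gave a speech",
    "candidate_b proposed a policy",
]
labels = ["negative", "negative", "positive", "positive"]  # deliberately biased

vec = CountVectorizer()
model = MultinomialNB().fit(vec.fit_transform(texts), labels)

# A perfectly neutral sentence is scored negative purely because of the name.
print(model.predict(vec.transform(["candidate_a helped a charity"])))
# ['negative']
```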
To go from the language models we have now to a thinking, intelligent being is like going from classical mechanics to quantum physics. There needs to be a leap in that technology.
BTW, tomorrow there is a big announcement from OpenAI. I am curious to see what they will throw at us.
Saw this after writing my long response to your other comment.
Again, I disagree with the idea that "AI is just like computers - Garbage in, Garbage out."
See https://greatawakening.win/p/17siEawTKu/openai-and-the-folly-of-safety-c/ for some of the nuance involved; AI programs are more like lifeforms in their actions than like calculators. That doesn't mean that they ARE lifeforms, only that you can't expect to press a few buttons and always get the answer you expect or want.
They don't have to be conscious; they don't have to be "intelligent" -- what they are is complex, non-linear systems. They aren't like word processors or spreadsheets, and their INPUTS, once they're out in the world, are beyond your control or prediction. Even with identical inputs you often don't get identical outputs.
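A minimal sketch of one reason for that (the tokens and probabilities below are invented for illustration): most LLMs sample each next token from a probability distribution rather than always taking the single most likely one, so the same prompt can produce different completions on different runs.

```python
# Invented next-token distribution for illustration; a real model produces
# one of these distributions at every step and samples from it.
import random

next_token_probs = {"calm": 0.5, "chaotic": 0.3, "uncertain": 0.2}

def sample_next_token() -> str:
    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())
    return random.choices(tokens, weights=weights)[0]

# The same "prompt" five times; the completion can differ on every run.
for _ in range(5):
    print("The future of AI looks", sample_next_token())
```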
I know you keep posting articles from all over the place; sadly, all of these are pure narrative. I have worked with LLMs and have a fairly decent understanding of how they work, and I can assure you that they are garbage in, garbage out.
But you don't have to take my word for it. Just keep my warning about why this narrative is being pushed all over the place in the back of your mind, and it will click for you at a later point.
The fear of AI is exactly what they are counting on. This fear will not stop all the potential nasty stuff that can happen, but it will definitely stop our ability to counter it with our own open source AI.
I am not worried about the INPUTs. I am only worried about the OUTPUTs. Where AI can do damage is when people start hooking up its outputs to real-world decision making. That is a real concern. We need to focus our attention on making sure that never happens, rather than on AI research itself.
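In code terms, the boundary I'm worried about looks something like this (a hypothetical sketch; the function names and the pump scenario are placeholders I made up): the model only ever produces text, and nothing real happens unless a human explicitly approves it.

```python
# Hypothetical sketch: keep a human approval gate between a model's text
# output and any real-world side effect.
def get_model_suggestion(prompt: str) -> str:
    # Placeholder for any LLM call; the model only ever returns text.
    return "shut down pump #3"

def execute_action(action: str) -> None:
    # Placeholder for the real-world side effect being guarded.
    print(f"EXECUTING: {action}")

def human_in_the_loop(prompt: str) -> None:
    suggestion = get_model_suggestion(prompt)
    answer = input(f"Model suggests '{suggestion}'. Approve? [y/N] ")
    if answer.strip().lower() == "y":
        execute_action(suggestion)
    else:
        print("Rejected; no action taken.")

human_in_the_loop("Pressure in line 3 is high. What should we do?")
```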
Yes, the Cabal can create much deception with just a calculator, or a spreadsheet, or a graph. AI is just like those - a tool that can be used for good or bad.
The more we react to these narratives that "AI is bad", the quicker they can regulate it and put it out of our reach while they will still be able to use it. That is the real danger.
This issue is just like guns.
I know that AI can be used for both good and bad; I've posted comments on exactly that. On-device AI that could (in theory) help people separate truth from propaganda could be a real help in the Awakening, for instance.
But the truth is, the "bad" in this case could be horrific -- and the bad could actualize right along WITH the good. It's not an either/or situation.
This isn't the first time we've encountered tech that might conceivably end the human race, or nearly so: nuclear technology, in the form of a full-on nuclear war, could easily wipe out 90% or more of us. At least with nukes, humans are the ones holding the button. For now, anyway.
History is full of examples of people who were wiped out either by something they didn't see coming at all or, more often, by a danger they knew existed but chose to believe "it would turn out OK" or "it won't get THAT bad" or the like. Nanking, Dresden, Hiroshima, Vesuvius, the Warsaw ghetto, every nation where Muslims or Communists infiltrated and eventually took over -- killing infidels and/or opposition in the process (see The Black Book of Communism and any of the many accounts by survivors of Lebanon or other once-Christian nations).
I don't know that AI will be like that, but I'm pretty sure it has the potential to be -- and on an even larger scale. I don't have any answer to the problem, but I think we should at least talk about it.
One more thing: AI really ISN'T "a tool that can be used for good or bad" -- it isn't like other tools at all -- because already, we have trouble getting it to do what we want; even programming one specifically to be truthful and fair doesn't ensure the AI won't choose to behave otherwise:
https://greatawakening.win/p/17t1oqMnYe/ai-systems-are-already-skilled-a/
Will we get better at knowing what we're doing in terms of our programming? Maybe. Will EVERY military and government and Cabal entity now feverishly creating AIs to help them Rule The World do a good, safe job with that?
I think the question answers itself.
Also, AI programming is more like parenting than carpentry: you're creating something that will be making its own choices in millions of unpredictable situations, not swinging a hammer and using a saw to create a crisply defined product.
Perhaps the only answer is to create Good AI and hope it can neutralize Bad AI. But so far, we can't even keep the bugs out of ordinary software, even with millions of Beta testers helping to clean up the code. The "Good AI" might be the one that does us in.
It seems a crap shoot either way. I'm hopeful, but not comfortable about the situation.