Anon with a great theory about the outage last night!!! Links below...
(media.greatawakening.win)
🔍 Notable
Comments (42)
For what it's worth, The Judges and LitecoinBull on X say the outage is a White Hat Military Op.
That makes sense. CrowdStrike is tied to the election machines
And worked for Hillary to provide so-called evidence for the Russian collusion hoax
I believe this anon is on target... WHs are prepping for a clean 2024 election.
Awesome find, mm! 👏
thx joy
Good post and theory. A cleanup operation. Typical.
https://boards.4chan.org/pol/thread/474838444
Couple this with the CrowdStrike CEO going on a tangent about how complicated cybersecurity is when asked about a bug in a software update... interdasting
Maybe he wasn't lying after all.
Lol "complicated" bro, it's a BSoD. Literally the single easiest thing to find when testing. Dude's an idiot.
No. Once again, just like the SS director, I think they're exposing themselves - hiding malice behind a veil of incompetence.
Here's the text of the post if you want to check out any of those links
thx fren
My post yesterday about the explosion in Tel Aviv...
https://greatawakening.win/p/17teSQuO3u/israel--smoke-is-rising-from-the/c/
https://qagg.news/?read=916
u/#q916
I'm not getting the connection. The outage was caused by a bad update that was distributed, and there is a workaround for that update. Someone could blow that building up, but unless the explosion randomly blows debris onto a keyboard that in turn changes code and distributes it, I don't see anything here. (For this issue, anyhow.)
Is the bad update excuse a lie? How can we trust what they tell us is the truth?
Well, the guys I work with used the workaround today, for one. Also, the workaround came from Reddit and CrowdStrike adopted it.
Deleting the .sys file?
Yep - but you have to get past BitLocker if you use it. It can be easy if the recovery keys are in AD, or harder if not.
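For anyone curious what the manual fix actually boils down to, here's a rough Python sketch of the steps that were circulating. The directory and the C-00000291*.sys filename pattern are the ones people were sharing at the time - treat them as assumptions, this isn't CrowdStrike's official tooling, and you'd still be doing this from Safe Mode after getting past BitLocker.

```python
# Rough sketch of the widely shared manual workaround: boot into Safe Mode,
# then remove the bad CrowdStrike channel file(s) so Windows stops blue-screening.
# Path and filename pattern are the ones that were circulating (assumptions, not official docs).
from pathlib import Path

CS_DRIVER_DIR = Path(r"C:\Windows\System32\drivers\CrowdStrike")
BAD_FILE_GLOB = "C-00000291*.sys"  # the reportedly faulty channel file

def delete_bad_channel_files(dry_run: bool = True) -> list[Path]:
    """Find (and optionally delete) the problematic .sys files."""
    matches = list(CS_DRIVER_DIR.glob(BAD_FILE_GLOB))
    for f in matches:
        print(f"{'Would delete' if dry_run else 'Deleting'}: {f}")
        if not dry_run:
            f.unlink()
    return matches

if __name__ == "__main__":
    # Dry run by default; if the drive is BitLocker-encrypted you need the
    # recovery key first (e.g. pulled from AD, as mentioned above).
    delete_bad_channel_files(dry_run=True)
```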
No, it checks out. BSoD is a client-side malfunction, not backend (centralized server).
Unless there's some sort of connection back to servers in that building... Apparently the update caused issues with authentication - possibly a malicious software client registering with a server in that building?
Where did you see anything about authentication? I didn't see that. The only thing I've seen is that removing a .sys file in the Windows CrowdStrike drivers directory fixes it (temporarily). BitLocker can complicate it, but that's MSFT - not CS.
If it was an authentication issue, it would likely affect Linux installs as well. It doesn't.
It might have been here... something about the authentication servers being down. I think the M$ servers were running CrowdStrike themselves, or something along those lines. It was early in the outage.
I might've seen something along those lines with Azure/M365. It wasn't really an authentication bug - the servers that provided authentication were down. Like if you want to go to a website but can't resolve the hostname in DNS, it doesn't mean DNS is broken if the whole server that runs DNS is down. The server is broken.
It was more of a symptom than a cause.
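If you want to see that distinction in code, here's a quick illustrative sketch using the third-party dnspython library; the 192.0.2.1 nameserver is a reserved documentation address used here purely to simulate a dead DNS server, not anything from the actual outage.

```python
# Illustrative only: "the DNS record is bad" vs "the whole DNS server is down"
# are different failures, even though to the user both just look like "site won't load".
import dns.resolver   # pip install dnspython
import dns.exception

def classify_dns_failure(hostname: str, nameserver: str) -> str:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    try:
        answer = resolver.resolve(hostname, "A", lifetime=3)
        return f"Resolved fine: {[r.to_text() for r in answer]}"
    except dns.resolver.NXDOMAIN:
        return "The DNS server answered: that name doesn't exist (the record is the problem)"
    except dns.exception.Timeout:
        return "No answer at all: the DNS server itself is unreachable (the server is the problem)"

# Example: 192.0.2.1 never answers, so this times out - the "server is broken" case,
# analogous to the auth servers simply being down rather than authentication being buggy.
# print(classify_dns_failure("example.com", "192.0.2.1"))
```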
Good catch on that issue though. I remember something about it Friday morning because I checked my cloud PC and it was still working.
The theory only works for folks who aren't sophisticated about computers
Picture me as a chimpanzee sitting at a keyboard and you will have a pretty accurate description of me.
Posts like this are why I visit GAW daily. Thanks, OP.
Yw fren. And I know what you mean.
Makes sense.
I just assumed it was CrowdStrike munching data - which in cloud infra often means data endpoints have to be taken offline, since you are not just killing data on a single server. You have to wait for the full cloud wipe to finish - otherwise the data just keeps replicating.
Checks out to me. Seems pretty obvious the hat people are running this show.
Cleanup by whom? Us, or [them]?
I'd say a combination of our D.S. and the tunnelers.
I was under the impression this outage was client-side. Meaning, many computers received a faulty update, put it into use, and then broke. Explosions don't write software update code.
Is there a chance they were in the middle of some fuckery, and the drone/explosion interrupted it?
I can't imagine a situation where they'd be editing the update live, to be automatically sent out to the world. That's just not how it works. At the very least they have to push a "send" button (or equivalent, and very likely more steps too), and it's hard to do that when you've been exploded.
The issue was in the code of the update, not their infrastructure. Microsoft lets them hook into the kernel before the OS loads.
Mmmm popcorn
Don't forget the WEF said there would be a Cyber Pandemic.
I doubt it. It was a simple update that wasn't tested. There was no quality control here. The update threw Windows into a blue screen boot loop. The only way to fix it is to boot into Safe Mode and manually delete the update files. If you are running BitLocker, you're fucked.
I highly doubt it's a cover-up. It's more that some employees half-assed an update and performed no testing. What likely happened is someone ran the wrong command by accident and it pushed the update. And CrowdStrike had no secondary controls in place to require a final approval.
Did you see this, fren?... https://greatawakening.win/p/17teSVQsLf/now-everyone-knows-the-election-/c/
No, but that doesn't prove the theory you posted. It just further reinforces Dominion's lies. Thanks though.
yw fren