Video of Analysis of "Patterns of Deployment of Toxic Covid Batches" (Toxic Batches Appear to be Systematically Released)
(www.bitchute.com)
🤢 These people are sick! 🤮
Programmer here ---
I downloaded the VAERS data and started to write some code to replicate the results.
There's one BIG assumption being made here, which is that 'all batches are the same size'. We have no confirmation of that.
You could see the exact same 'anomalies' with a consistently 'moderately toxic' formula and differing batch sizes.
Extremely small batches would have almost no adverse effects, and large batches would have many adverse effects - with no difference in toxicity.
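Here's a toy simulation of that point. Every number in it is invented; it just shows that an identical per-dose rate plus varying batch sizes reproduces the 'toxic batch' pattern on its own:

```python
import numpy as np

# Toy model: every batch has the SAME per-dose adverse-event rate; only the
# batch sizes differ. Both the rate and the sizes are made-up placeholders.
rng = np.random.default_rng(0)
RATE = 0.005                                   # identical 'toxicity' everywhere
batch_sizes = [100, 1_000, 10_000, 100_000, 500_000]

for size in batch_sizes:
    reports = rng.binomial(size, RATE)         # report count scales with size alone
    print(f"{size:>7,} doses -> {reports:>5,} reports")
```

The tiny batches come out looking 'safe' and the big ones look 'toxic', with zero difference in the formula.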
The Pfizer 'ramp down' could be a gradual lowering of batch sizes (interspersed with minuscule batches).
I am not trying to say the vaxxes are OK -- in fact, I suggested this kind of 'toxicity tweaking' as a theory about a month ago.
I'm just saying that without knowing the batch sizes, this is not a smoking gun.
Here's the previous theory -- do they line up?
If they give SALINE to almost everyone (the first time around), they still get ALL of their EVIL BENEFITS. Here's how...
https://greatawakening.win/p/13zgXB5572/if-they-give-saline-to-almost-ev/
Thought experiment for critical thinkers:
Pick a 'standard batch size' - for example, 10,000 doses per batch.
If the batches are that size or smaller, then the 'most toxic' batches would show one adverse reaction for every two or three patients. I have a hard time believing that would go unnoticed by even the most jaded doctor or nurse.
If the batches are that size or larger, then we quickly exceed the total number of doses produced ('avg batch size' times 'number of batches').
I think it's impossible that each batch is the same size.
Again, this is just a thought experiment -- I don't have real numbers to compare it to.
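Putting rough placeholder numbers on it (I made all of these up; worst_batch_reports, n_batches, and doses_produced are hypothetical):

```python
# Back-of-the-envelope version of the thought experiment above.
# Every input is a placeholder, NOT a real VAERS or production figure.
worst_batch_reports = 4_000     # hypothetical reports for a 'worst' batch
n_batches = 40_000              # hypothetical count of distinct batch numbers
doses_produced = 600_000_000    # hypothetical total doses produced

for batch_size in (10_000, 1_000_000):
    rate = worst_batch_reports / batch_size
    implied_total = n_batches * batch_size
    verdict = "over" if implied_total > doses_produced else "under"
    print(f"size {batch_size:>9,}: one report per {1 / rate:,.1f} doses; "
          f"implied production {implied_total:,} doses ({verdict} budget)")
```

Small equal batches imply a reaction rate nobody could miss; large equal batches imply more doses than were ever produced. Equal sizes lose either way.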
Thanks for being one of the few in this thread who thinks critically and has some understanding of garbage statistics.
Batch size is one of the problems. Even with identical batch sizes, not all doses are necessarily injected. Large amounts of a batch might have been thrown away for a variety of reasons, like damage during shipment, or sitting on a shelf too long and expiring.
There's a ton of other issues aside from size, see my other posts in this thread for more.
The 'batch number' data entry is terrible.
I estimate that there are (roughly) 300 true batch numbers, but there are more than 40,000 additional 'botched' batch numbers (extraneous letters, '4' instead of 'A', extra spaces or periods, starts with '#', etc.).
Almost all of those botched batch numbers have only one entry (for obvious reasons), but a few have more than 100, such as 'batch number' "EUA", "PFIZER", "N/A", etc.
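In code, my cull looks roughly like this. Treat it as a sketch: the junk list and the lot-format pattern are guesses from eyeballing the data (no attempt at fuzzy fixes like '4' vs 'A'), and '2021VAERSVAX.csv' / 'VAX_LOT' are the file and column from the standard yearly VAERS download:

```python
import re
import pandas as pd

# Known-junk strings seen in VAX_LOT; illustrative, not exhaustive.
JUNK = {"", "EUA", "PFIZER", "MODERNA", "N/A", "NA", "UNK", "UNKNOWN", "NONE"}

def clean_lot(raw):
    if not isinstance(raw, str):
        return None
    lot = re.sub(r"[#.\s]", "", raw.upper())   # strip '#', periods, spaces
    if lot in JUNK or not re.fullmatch(r"[A-Z0-9-]{4,10}", lot):
        return None                            # treat anything else as unusable
    return lot

df = pd.read_csv("2021VAERSVAX.csv", encoding="latin-1", low_memory=False)
print(df["VAX_LOT"].map(clean_lot).nunique(), "plausible lots after cleaning")
```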
The analysis in the video is useless (as he clearly thinks there are far more than 300 actual batch numbers).
That is also a good point, and shows yet another problem with all this junk analysis of the VAERS data.
Thanks for looking into this. I had a number of questions about this after watching the video. You bring up an excellent point.
Other questions I had were whether we know if batch numbers were really given in chronological order, and whether Moderna, Johnson & Johnson, and Pfizer batches are all distributed evenly across batch numbers or not.
I suspect he is getting the 'chronological order' of the batches by the dates of injection.
The three companies seem to use different batch numbering schemes.
He is somehow assigning a single date to each batch and making that his x-axis, upon which all of his results depend. In the data I only see dates for each individual vaccination, not a date a vax lot was issued. I tried taking the minimum and the mean of the individual dates for each VAX_LOT, but my plots look nothing like his. Mine are much more spread out, with Pfizer and Moderna distributed across all of 2021. I am including the non-domestic data in order to get the fullest picture possible. He needs to provide more detail on how to reproduce what he did to support his remarks. Many folks are happy to jump to the most nefarious conclusions without rigorous backing.
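For reference, here's roughly what I tried, in pandas. This is a sketch assuming one year's standard VAERS files ('2021VAERSVAX.csv', '2021VAERSDATA.csv') and a recent pandas:

```python
import pandas as pd

# Load the vaccine and report tables and join them on the report ID.
vax = pd.read_csv("2021VAERSVAX.csv", encoding="latin-1", low_memory=False)
data = pd.read_csv("2021VAERSDATA.csv", encoding="latin-1", low_memory=False)

merged = vax.merge(data[["VAERS_ID", "VAX_DATE"]], on="VAERS_ID")
merged["VAX_DATE"] = pd.to_datetime(merged["VAX_DATE"], errors="coerce")

# Two candidate dates per lot: earliest injection vs. mean injection date.
per_lot = merged.groupby("VAX_LOT")["VAX_DATE"].agg(["min", "mean", "count"])
print(per_lot.sort_values("count", ascending=False).head(10))
```

Neither choice of date gave me anything resembling his x-axis.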
I don't know if you've scrolled through the data to see what the 'batch names' look like, but there are huge swaths of useless batch numbers in the VAERS data. (And they would be downright misleading to a lazy analysis.)
I had to cull roughly one-third of the data due to missing batch numbers or batch numbers such as 'N/A', 'unk', 'idk', or 'Really?'. Huge numbers of them had a '#' prepended or were simply garbled text. If these were not culled, the analysis would be (truly) worse than useless.
Each one of those 'garbled' batch numbers would look like a 'non-toxic' batch because each one would have exactly one 'adverse event'.
I wish I knew if the maker of the video did such a cull, or if a lot of these 'safe batches' are named 'unknown', '0.23.HG[..', or 'will find later'.
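The scale of the problem is easy to check (same file as above; the exact count will depend on the data year):

```python
import pandas as pd

# Count how many raw lot strings are 'singletons' -- exactly one report each.
df = pd.read_csv("2021VAERSVAX.csv", encoding="latin-1", low_memory=False)
counts = df["VAX_LOT"].value_counts()
print(f"{(counts == 1).sum():,} of {len(counts):,} raw lot strings appear exactly once")
# Left unculled, each of these plots as a perfectly 'safe' one-event batch.
```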
There are several different issues with the data presented. Different batch sizes is just one of the problems.
The analysis presented here only works if you assume every company is supplying same-size batches everywhere at the same time. That's not necessarily the case. The distributors could look at availability from the different companies each time and go with whatever makes sense in the moment. Some batches will go entirely to one area; some will be spread out more.
When you also factor in that some hospitals are more likely to report a problem than others, plus rotation between suppliers and differing batch sizes, drawing any conclusion from the data is meaningless.
You're saying a variable which explains the data is IMMATERIAL, and then drawing the conclusion you prefer once you've excluded an explanation you don't like.
The bucket effect is explained by variables you don't like. There is nothing random here; all these variables you don't care for have a large effect on the patterns you're noticing.
True about the importance of the batch size, but I imagine there are process-control standards. The other interesting pattern in the analysis is that each vaccine company's batches are clumped in time; what would account for that?
Even with process control standards, a lot of 'partial batches' were discarded due to improper storage or expiration. I have no idea if this explains any of the variance.
With regard to the two separate time windows, it could be that each company was 'tweaking' its batch sizes during those windows - but not the formula/toxicity.
Again, I am NOT an apologist for the Vax companies. See my earlier post (above) for proof of my 'skepticism'.
Let us know if your program can replicate the results of the presentation. I think it would be worth doing… Also, in the presentation it was hard to see what the actual timeline was, date-wise. I am curious if the second J&J cluster coincides with the April 13 pause that Trump put out a communication about. At the time I felt this pause was meaningful.
Trump Statement
APRIL 13, 2021 PALM BEACH, FL Statement by Donald J. Trump, 45th President of the United States of America. The Biden Administration did a terrible disservice to people throughout the world by allowing the FDA and CDC to call a “pause” in the use of the Johnson & Johnson COVID-19 vaccine. The results of this vaccine have been extraordinary but now it’s reputation will be permanently challenged. The people who have already taken the vaccine will be up in arms, and perhaps all of this was done for politics or perhaps it’s the FDA’s love for Pfizer. The FDA, especially with long time bureaucrats within, has to be controlled. They should not be able to do such damage for possibly political reasons, or maybe because their friends at Pfizer have suggested it. They’ll do things like this to make themselves look important. Remember, it was the FDA working with Pfizer, who announced the vaccine approval two days after the 2020 Presidential Election. They didn’t like me very much because I pushed them extremely hard. But if I didn’t, you wouldn’t have a vaccine for 3-5 years, or maybe not at all. It takes them years to act! Do your testing, clean up the record, and get the Johnson & Johnson vaccine back online quickly. The only way we defeat the China Virus is with our great vaccines!
The first J&J cluster started roughly 1/15/21, and the second roughly 5/22/21
If you speculate they may be tweaking batch size, what would be the motivation?
Why would batch size be a factor in their plan?
I can more readily see a motivation for tweaking the amount of ingredient X (such that 99.9% tolerate it), since the vaccine was rushed and they didn't have everything perfectly ready. Here is what I see from a Canadian point of view: given that the source code for OpenVerify allows eight doses (same with the EU cards), and given that the PM ordered about 8 doses per person until 2025, there is an optimal number of doses they need. Is it because that is what is required for a cumulative amount of ingredient X? I am also going down this path since microscopic analysis from various sources is showing "ingredients" present that should not be there.
To clarify, I'm not saying they are intentionally tweaking the batch size. (I assume batch sizes can change for all sorts of good or bad reasons.) I was only saying that variations in batch size could account for everything we're seeing here. (And I say that as someone who wants to find a smoking gun.)
See my earlier post for my take on intentional 'tweaking':
https://greatawakening.win/p/13zgXB5572/if-they-give-saline-to-almost-ev/
Does anyone have a data source that shows how big each (individual) batch is?