Pfizer is required to release 80,000 documents every month relating to the vaccine. This month, one of the documents showed that a trial claiming to have data from 181 subjects actually contained only 6 real human patients.
So where did the rest of the data come from? It was likely produced synthetically, using the 6 real patients plus statistical random functions or machine-learning GANs (generative adversarial networks). That would make 175 of the 181 "subjects", about 96.7% of the data, fake data used to justify the safety of the vaccines.
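For anyone wondering what "synthetically produced from 6 patients" could even look like, here's a toy sketch of the simplest such technique, bootstrap resampling with jitter. Everything in it is invented for illustration (the field names, the values, and the method itself); it's not a claim about what was actually done, just a demo of how a handful of real records can be inflated into many fake ones:

```python
import random

# Purely hypothetical records: 6 "real" patients with made-up fields.
real_patients = [
    {"age": 34, "systolic_bp": 118},
    {"age": 51, "systolic_bp": 131},
    {"age": 29, "systolic_bp": 115},
    {"age": 62, "systolic_bp": 140},
    {"age": 45, "systolic_bp": 125},
    {"age": 38, "systolic_bp": 122},
]

def synthesize(records, n, jitter=0.05):
    """Make n synthetic records by resampling the real ones and
    adding small random noise (bootstrap-with-jitter)."""
    fake = []
    for _ in range(n):
        base = random.choice(records)  # pick a real record at random
        fake.append({
            field: round(value * (1 + random.uniform(-jitter, jitter)))
            for field, value in base.items()
        })
    return fake

synthetic = synthesize(real_patients, 175)          # 181 claimed - 6 real
print(len(real_patients) + len(synthetic))          # 181 total "subjects"
print(f"{175 / 181:.1%} of the set is synthetic")   # 96.7%
```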
Am I missing something? If they are slow-walking the data, why wouldn't they sample the data from each site? That is, rather than releasing ALL the data from some sites and NONE of the data from other sites, they could release SOME of the data from ALL of the sites. That doesn't mean there isn't more data to release later; they were only required to release 80,000 docs a month, and it was never stated what algorithm they had to use in deciding what to release (was it?).
Releasing it this way (a subset of the data from each site) makes it easier to obfuscate the big picture.
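To make the two release strategies concrete, here's a toy comparison. The site names, document counts, and both functions are invented for illustration; as noted above, the actual release algorithm wasn't specified anywhere:

```python
import random

# Hypothetical inventory: site id -> number of documents it holds.
site_docs = {f"site_{i:02d}": random.randint(2_000, 15_000) for i in range(20)}
MONTHLY_CAP = 80_000  # the required monthly release figure from the thread

def all_from_some(inventory, cap):
    """Strategy A: exhaust whole sites until the cap is reached."""
    released, remaining = {}, cap
    for site, count in inventory.items():
        if remaining == 0:
            break
        take = min(count, remaining)
        released[site] = take
        remaining -= take
    return released

def some_from_all(inventory, cap):
    """Strategy B: release the same fraction from every site."""
    fraction = min(1.0, cap / sum(inventory.values()))
    return {site: int(count * fraction) for site, count in inventory.items()}

a = all_from_some(site_docs, MONTHLY_CAP)
b = some_from_all(site_docs, MONTHLY_CAP)
print(f"Strategy A touches {len(a)} of {len(site_docs)} sites")
print(f"Strategy B touches {len(b)} of {len(site_docs)} sites")
```

Note that under Strategy B every site's records show up partially complete in the early batches, which is exactly the situation where a subject could look "unverified" simply because the rest of their file hasn't been released yet.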
Of the people in the files (181 of them), only 6 have a verified record associated with them. That could be due to the release schedule of the files, but as of right now it looks as if Pfizer may have made up fake people for the studies.
I'm really not following this thread. Can someone explain better?
The only logical reason to release all the data from some sites and none from others is that it's the best option for them at the moment, which would have to mean the data not yet released is somehow worse.
They fabricated their data by fabricating the test participants.