No. And you won't get them. The data weren't collected in a way that you could determine AE frequency from them. No responsible scientist would attempt to take these data and generate such a list.
As for the argument about rarity: look up some of those diseases. They are super rare. And we have, ostensibly, data from the entire population of the US here - reports covering ~200 million people. You're going to pick up all of those super-rare things. Remember that the surveillance system requires ALL adverse events to be reported - any new disease diagnosis. The kind of list you see here is quite typical of a large study, because in any large enough population, someone's going to have it.
The frequencies get sorted out statistically afterwards. Again, these data weren't collected in a way that lets us statistically calculate frequencies. In a proper trial, the researcher goes back through those long-ass lists and asks whether each report reflects the background rate of that condition or the effect of the intervention being tested. There's a lot of statistics involved, and frankly my eyes glaze over just thinking about it, so I'm not going to try to explain the statistical weed-out process - only that it is done. That giant list gets reduced to a much smaller list of actual adverse effects of the intervention. That's what gets reported in the journal article and what goes on the long-form version of the FDA label.
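To make the weed-out idea concrete, here's a toy sketch of the core comparison: for one adverse event, test whether the count in the treated arm is higher than you'd expect from the control arm alone. This is a minimal illustration with made-up numbers, not how any actual trial analysis was done - real pharmacovigilance involves exposure time, adjudication, multiplicity corrections, and much more. The function name and all counts are hypothetical.

```python
from math import comb

def fisher_greater_p(a, n1, b, n2):
    """One-sided Fisher exact p-value: probability of seeing >= a events
    in the treated arm (size n1) if events were spread at random across
    both arms, given b events in the control arm (size n2)."""
    k = a + b                          # total events across both arms
    total = comb(n1 + n2, k)           # ways to distribute k events
    tail = sum(comb(n1, x) * comb(n2, k - x)
               for x in range(a, min(k, n1) + 1))
    return tail / total

def ae_flagged(a, n1, b, n2, alpha=0.05):
    """True if this adverse event looks elevated above background."""
    return fisher_greater_p(a, n1, b, n2) < alpha

# 40 cases in 20,000 treated vs 12 in 20,000 controls: a real signal.
print(ae_flagged(40, 20_000, 12, 20_000))
# 1 case vs 0: indistinguishable from background noise in a big trial.
print(ae_flagged(1, 20_000, 0, 20_000))
```

The point of the second call is exactly the argument above: a single report of a super-rare disease in a huge population tells you essentially nothing, which is why the raw 9-page list can't be read as a list of confirmed side effects.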
This 9-page list is, honestly, mostly meaningless, despite how much people are making of it. I'm happy to watch Pfizer squirm over it, but it isn't the smoking gun we're looking for.
So you're saying in the trials they didn't keep track of how often these side effects happened?
Because that, to me, sounds like a horribly run trial. The entire point of the trials should be to pick up on any and all health outcomes and then compare to the baseline, so if we have no frequency data, what good does that do us?