Please don't cite studies if you don't have the faintest idea what you are talking about and somehow fail to read the part you are citing. I find myself saying this a lot, but shut the fuck up about "having done the research" if you can't research yourself out of a knee-deep hole, haven't seen any meaningful development in your understanding of statistics since you were 14, and have never in any significant way needed to dissect an academic paper in an academic context, i.e., something you don't really give a shit about but have to interpret as accurately as possible to avoid failing your course.
You aren't referring to the "real efficacy rate", dipshit, and you'd know that if you had read what you posted or literally just googled the acronyms. ARR is practically never used for vaccine effectiveness because it's a piss-poor metric for vaccine effectiveness, and it should never be employed outside the consideration of chronic conditions, if even then.
This is in no small part because it obscures the baseline risk (something that is known for a chronic condition, where both the control and test group have a 100% chance of having the condition), which is entirely fucking contextual. So let's assume, for the sake of argument, that this study was run over a one-month period. You have 1000 people in the control group and 1000 people in the test group, all of them testing negative for covid before the trial starts (otherwise the comparison wouldn't mean much). Assuming moderate virulence and daily case growth, and again for the sake of argument, 970 people in each group just straight up don't get the virus. This means you have a baseline risk of 3%.
Once you actually have this group, you have to set aside those who don't actually experience any absolute risk event, meaning any meaningfully harmful health condition. No, losing your sense of smell doesn't count. This cuts the number down to 2%.
Then, of that 2% who would develop meaningful health conditions from catching covid, 90% of them, or 1.8% of the total populace, now don't get sick. As such, the number of absolute risk events has gone down by 1.8 percentage points. The actual difference? 2 people sick instead of 20, i.e., 18 fewer cases.
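To spell the arithmetic out, here's a minimal Python sketch using the made-up 1000-per-arm numbers above (illustrative only, not data from any actual trial):

```python
# Hypothetical trial from the example above (not real trial data).
control_n = 1000          # unvaccinated arm
treatment_n = 1000        # vaccinated arm

control_events = 20       # 2% of the control arm develop a meaningful health effect
treatment_events = 2      # the 90% relative reduction leaves 2 such cases

control_risk = control_events / control_n        # 0.02  -> the baseline absolute risk
treatment_risk = treatment_events / treatment_n  # 0.002

arr = control_risk - treatment_risk   # 0.018 -> 1.8 percentage points
rrr = arr / control_risk              # 0.90  -> the "90% effective" figure

print(f"ARR = {arr:.1%}, RRR = {rrr:.0%}")  # ARR = 1.8%, RRR = 90%
```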
Now, why is this a dumb number? Again, context. You could run the exact same fucking study in the exact same region two months later and end up with an ARR of .3%, or one of 42%, depending on what the baseline risk is that month. And looking at the wildly divergent ARRs relative to the RRRs, it is safe to assume that all these studies were run in regions with different infection densities and at different points of infection growth. ARR has practically no indicative properties to speak of outside a few edge cases.
ARR can help contextualize RRR in the right situations, namely if we can assume the ARR is somewhat consistent over time or across regions, which in this case (and, for that matter, for most viral phenomena) it absolutely is not. The ARR of a 90% effective treatment for a viral disease can swing from .1% to 40% in the span of a month if we are talking about something ridiculously infectious, and it can vary again when you look at the next town over. It is 100% contextual, and in the absence of said context it's junk data.
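To make that swing concrete, a quick sketch with assumed baseline risks (the three scenarios are invented for illustration; only the 90% RRR is carried over from the example above):

```python
# Same 90% RRR, wildly different ARR depending on how much virus is circulating.
# The baseline risks below are assumptions for illustration only.
rrr = 0.90

for label, baseline_risk in [("quiet month / remote region", 0.001),
                             ("moderate outbreak",           0.03),
                             ("near-saturated outbreak",     0.45)]:
    arr = baseline_risk * rrr   # ARR scales one-to-one with the baseline risk
    print(f"{label:>27}: baseline {baseline_risk:.2%} -> ARR {arr:.2%} (RRR still {rrr:.0%})")
```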
To elaborate on your example:
For any number of road crossings, there will be a small percentage of people getting run over. The RRR of not crossing is, in this case, most likely around 100%, and thus the ARR will be the same as the absolute risk itself, because we can assume that people who don't cross the road have a nil chance of getting run over.
Your chances of getting run over IF you cross the street, however, will fluctuate wildly. If you were to cross a freeway, your chances would most likely grow by several orders of magnitude (an extreme example). The RRR remains the same, but the context is far more dangerous, therefore the ARR grows. Yet this does not affect the efficacy of the obvious way to avoid getting run over: not running across the freeway. Every street will most likely have a slightly different ARR, and every season will quite likely have a wholly different ARR as well.
All of this is wholly inconsequential to the efficacy of not crossing at all. It is consistent across all scenarios without fail.
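Same relationship in the road-crossing analogy, with completely made-up per-crossing risks:

```python
# Made-up per-crossing risks of being run over; "don't cross" is assumed to remove the risk entirely.
rrr_of_not_crossing = 1.0   # ~100% relative risk reduction

for road, risk_if_crossing in [("quiet residential street on a summer morning", 0.00001),
                               ("busy intersection",                            0.0005),
                               ("running across a freeway",                     0.02)]:
    arr = risk_if_crossing * rrr_of_not_crossing   # with RRR = 100%, ARR equals the baseline risk
    print(f"{road}: ARR of not crossing = {arr:.3%}")
```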
Now, again, this is where "don't research if you don't understand how to research" comes back in, and that is how it factors into human risk assessment, which is what every article you could counter me with will invariably talk about, not statistical merit. Because the incidence rate varies across place and time, the context informs the advisability of deciding not to cross, or of altering the way you cross; though if you are dough-brained you might draw the conclusion that "in order to not get run over, you shouldn't cross the road" and therefore refuse to cross a nigh-unused road on a summer morning. Now, naturally nobody will fall for this argument, as we all have a pretty good understanding of which roads are dumb to cross, but put in the context of a patient with a chronic condition it makes more sense (in case you missed it, the article you quoted wasn't a statistics manual, but a communication manual). For example, when advising on two different, mutually exclusive treatments with differing efficacy for the same condition, it would be prudent to talk about the relative and absolute benefits of each treatment and the absolute risk both carry compared to non-use, while only mentioning the risks they have relative to each other.
And again, to reiterate: without knowing exactly the time, place, and population characteristics of every single fucking ARR cited here, so as to first put them in relation to each other (in case the lack of rhyme and reason between the RRRs and the ARRs didn't tip you off, they were most likely measured in different regions and at different points in time), the ARR is useless. And even if we had all that, it still wouldn't tell us about the efficacy of the vaccine; it can, at best, be used in conjunction with the RRR so people can weigh the possible risk of a jab against the possible benefits. But again, you'd need a recent, regional ARR for it to actually be of use. On its own, it could be used to estimate the effect the vaccine might have on the spread of the virus, or on future logistical needs such as hospitalizations and medical supplies. If the same trial were run on that one cruise ship at the start of the pandemic that reached almost full saturation, the ARR would most likely end up at around 80%. Do it in eastern Siberia, and we are looking at about .01%, if that.
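If you did somehow have a recent, regional attack rate to plug in, the weighing would look roughly like this; every input below is an invented placeholder, and the point is the shape of the calculation, not the numbers:

```python
# All inputs are invented placeholders for illustration.
regional_attack_rate = 0.05    # assumed: 5% of the local population infected over the period
p_meaningful_illness = 0.65    # assumed: share of infections that become meaningfully disruptive
rrr = 0.95                     # the trial's relative risk reduction

baseline_risk = regional_attack_rate * p_meaningful_illness
arr_here_and_now = baseline_risk * rrr   # expected absolute reduction for this region, this period

print(f"Baseline risk this period: {baseline_risk:.2%}")
print(f"Regional ARR this period:  {arr_here_and_now:.2%} (move the attack rate and this moves with it)")
```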
**
Listen, unless you can provide me the research papers of these trials stating otherwise, we can freely assume that every single fucking AR event in the study is "the development of any singular meaningfully disruptive health effect due to Covid-19 infection", with a cap of 1 AR event per person. In other words, every notable infection is counted regardless of its severity, unless it's asymptomatic.
RRR is the efficacy rate of the treatment itself.
ARR is the incidence reduction across the entire trial population; in other words, not the efficacy rate of the treatment itself, but the efficacy rate of what a population-wide rollout would look like.
This means that 95% is 95% is 95%. There is really no other way to state it. If you, as a person, take the vaccine, your chance of suffering a meaningfully disruptive health effect in the event you are exposed to COVID is 1/20th of what it used to be. Yes, your chances of actually getting exposed are highly variable, and by factoring in that chance you can get an overview of how effective the vaccine could be at slowing down the spread.
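A small sketch of why the 95% doesn't move even while your exposure odds do (the exposure probabilities and the risk-if-exposed figure are assumptions for illustration):

```python
# The conditional risk reduction is fixed; only the unconditional numbers move with exposure.
rrr = 0.95
risk_if_exposed_unvaccinated = 0.02   # assumed risk of a disruptive outcome if exposed, no vaccine

for p_exposure in (0.01, 0.10, 0.50):
    unvaccinated = p_exposure * risk_if_exposed_unvaccinated
    vaccinated = unvaccinated * (1 - rrr)
    print(f"P(exposure) = {p_exposure:.0%}: unvaccinated risk {unvaccinated:.3%}, "
          f"vaccinated risk {vaccinated:.4%}, ratio still {vaccinated / unvaccinated:.2f} (i.e. 1/20)")
```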
And that wasn't what I stated. What I stated was "if you don't know the context of the control pool, you cannot ascertain how meaningful the ARR is". Certainly there are analyses where the ARR relates to the health of the control group, but that only factors in when we are looking at specific health complications of a condition rather than the appearance of the condition in the first place. You'd have a point if this ARR analysis were looking at treatment options for active covid cases rather than preventative measures, but it isn't, so you don't. Which means that in this case, the ARR only and strictly reflects the risk of infection in your region at that specific time, which means it will vary wildly.
That being said, ARR used this way does actually inform the default vaccine schedule in most nations. For example, Western countries typically don't include a yellow fever vaccine, because the ARR of a population-wide rollout is so phenomenally small as to not warrant mentioning; we are talking about something like a one-in-a-million shift, despite the yellow fever vaccine being one of the most reliable in the world. In tropical nations where the mosquitoes that carry the disease live, however, we are most likely looking at an ARR of around .1 (which, for the record, is still considerably lower than what you've stated here, though yellow fever is of course a lot more dangerous than covid).
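The same arithmetic sits behind the yellow fever point; the incidence and efficacy figures below are rough assumptions, not sourced numbers:

```python
# Rough, assumed incidences of a meaningful yellow fever outcome per person over the relevant horizon.
vaccine_rrr = 0.99   # assumed: yellow fever vaccination is highly effective

for region, baseline_incidence in [("non-endemic Western country", 1e-6),
                                   ("endemic tropical region",     0.10)]:
    arr = baseline_incidence * vaccine_rrr
    print(f"{region}: ARR of a population-wide rollout ~ {arr:.6f}")
```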