ARR can help contextualize RRR in the right situations, but only if we can assume the ARR is somewhat consistent over time or region, which in this case (and, for that matter, for most viral phenomena) is absolutely not true. The ARR of a 90% effective cure for a viral disease can swing from 0.1% to 40% in the span of a month if we are talking about something with a ridiculous level of infectiousness, and it can vary again when you look at the next town over. It is 100% contextual, and in the absence of said context it's junk data.
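A rough sketch of that swing (hypothetical numbers, assuming a fixed 90% RRR): since ARR is just the baseline attack rate multiplied by the RRR, it moves one-for-one with the local incidence.

```python
# ARR moves with the baseline (untreated) risk; RRR does not.
# All figures below are hypothetical, purely for illustration.
rrr = 0.90  # a fixed 90% relative risk reduction

for context, baseline_risk in [("quiet month, small town", 0.001),
                               ("peak outbreak, next town over", 0.40)]:
    treated_risk = baseline_risk * (1 - rrr)
    arr = baseline_risk - treated_risk  # equals baseline_risk * rrr
    print(f"{context}: baseline {baseline_risk:.1%} -> ARR {arr:.2%}")
```

Same treatment, same RRR, but the ARR spans 0.09% to 36% depending purely on context.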
To elaborate on your example:
For every set of road crossings, a small percentage of people will get run over. The RRR in this case is most likely around 100%, and thus the ARR will be the same as the AR%, because we can assume that people who don't cross the road have a nil chance of getting run over.
Your chances of getting run over IF you cross the street, however, will fluctuate wildly. If you were to cross the freeway, your chances would most likely grow by several orders of magnitude (an extreme example). The RRR remains the same, but the context is far more dangerous, therefore the ARR grows. Yet this does not affect the efficacy of the obvious way to avoid getting run over: not running across the freeway. Every street will also most likely have a slightly different ARR, and every season will quite likely have a wholly different ARR.
All of this is wholly inconsequential to the efficacy of not crossing at all. It is consistent across all scenarios without fail.
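The road example in code (made-up per-crossing risks; the RRR of not crossing is pinned at 100%, since non-crossers can't be hit):

```python
# Made-up per-crossing risks of being run over; purely illustrative.
crossings = {
    "quiet residential street": 0.00001,
    "busy intersection":        0.0005,
    "running across a freeway": 0.05,
}

rrr = 1.0  # not crossing eliminates the risk entirely
for road, risk_if_crossing in crossings.items():
    risk_if_not = risk_if_crossing * (1 - rrr)  # always 0
    arr = risk_if_crossing - risk_if_not        # equals the AR% itself
    print(f"{road}: ARR of not crossing = {arr:.5f}")
```

The RRR never budges from 100%, while the ARR ranges across several orders of magnitude between streets.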
Now, again, this is where "don't research if you don't understand how to research" comes back in, and that is how it factors into human risk assessment, which is what every article you could counter me with will invariably talk about, not statistical merit. Because the incidence rate varies across place and time, the context will inform the advisability of deciding not to cross, or of altering the way you cross; though if you are doughbrained you might draw the conclusion that "in order to not get run over, you shouldn't cross the road" and therefore refuse to cross a nigh-unused road on a summer morning. Naturally nobody will fall for this argument, as we all have a pretty good understanding of which roads are dumb to cross, but it makes more sense in the context of a patient with a chronic condition (in case you missed it, the article you quoted wasn't a statistical manual but a communication manual). For example, when advising on two different, mutually exclusive treatments with differing efficacy for the same condition, it would be prudent to talk about the relative and absolute benefits of each treatment and the absolute risk both have compared to non-usage, while only mentioning the risks they have relative to each other.
And again, to reiterate: without knowing the exact time, place, and population characteristics of every single fucking ARR cited here, so they can first be relativized towards each other (in case the lack of rhyme and reason between the RRRs and the ARRs didn't tip you off: they were most likely tested in different regions and at different points in time), the ARR is useless. And if we had that context, it still wouldn't tell us about the efficacy of the vaccine; it can, at best, be used in conjunction with the RRR to weigh its value, so people can relativize the possible risk of a jab against the possible benefits. But again, you'd need a recent, regional ARR for it to actually be of use. On its own, it could be used to estimate the effect the vaccine might have on the spread of the virus, or future logistical needs such as hospitalizations and medical supplies. If the same trial were run on that one cruise ship at the start of the pandemic that reached almost full saturation, the ARR would most likely end up at around 80%. Do it in eastern Siberia, and we are looking at about 0.01%, if that.
**
Listen, unless you can provide me the research papers of these trials that state otherwise, we can freely assume that every single fucking AR event in the study is "the development of any singular meaningfully disruptive health effect due to Covid-19 infection", with a cap of 1 AR event per person. In other words, every notable infection is counted regardless of its severity, unless it's asymptomatic.
RRR is the efficacy rate of the treatment itself.
ARR is the incidence reduction rate across the entire population; in other words, not the efficacy rate of the treatment itself, but the efficacy rate of what a population-wide rollout would look like.
This means that 95% is 95% is 95%. There's really no other way to state it. If you, as a person, take the vaccine, your chance of suffering a meaningfully disruptive health effect in the event you are exposed to COVID is 1/20th of what it used to be. Yes, your chances of actually getting exposed are highly variable, and by factoring in this chance you can get an overview of how effective the vaccine could be at slowing down the spread.
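With invented trial-style counts (not the actual trial data), both figures come straight out of the arm-level attack rates:

```python
# Invented counts for illustration; not real trial data.
placebo_n, placebo_cases = 20_000, 200  # 1.00% attack rate in the control arm
vaccine_n, vaccine_cases = 20_000, 10   # 0.05% attack rate in the vaccine arm

ar_control = placebo_cases / placebo_n
ar_vaccine = vaccine_cases / vaccine_n

rrr = 1 - ar_vaccine / ar_control  # 0.95: your risk if exposed is 1/20th
arr = ar_control - ar_vaccine      # 0.0095: the shift across this trial population

print(f"RRR = {rrr:.0%}, ARR = {arr:.2%}")
```

Rerun the same arithmetic against a control arm with a 40% attack rate and the RRR stays at 95% while the ARR balloons; that's the whole point.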
And that wasn't what I stated. What I stated was "if you don't know the context of the control pool, you cannot ascertain how meaningful the ARR is". Certainly there are analyses where ARR relates to the health of the control group, but this only factors in when we are looking at specific health complications of a condition rather than the appearance of the condition to begin with. You'd have a point if this ARR analysis were looking at treatment options for active covid cases rather than preventative measures, but it isn't, so you don't. Which means that in this case, ARR only and strictly reflects the risk of infection in your region at that specific time, which means it will vary wildly.
That being said, ARR in this sense does actually inform the default vaccine package in most nations. For example, western countries typically don't include a yellow fever vaccine, because the ARR of a population-wide rollout is so phenomenally small as to not even warrant mentioning; we are talking about a one-in-a-million shift, despite the yellow fever vaccine being one of the most reliable in the world. In third-world nations within the tropical regions where the mosquitoes that carry the disease live, however, we are most likely looking at an ARR of around 0.1 (which, for the record, is still considerably lower than what you've stated here, though yellow fever is of course a lot more dangerous than covid).
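The yellow fever comparison, sketched with stand-in figures (the efficacy and risk numbers here are hypothetical, not sourced data):

```python
# Hypothetical stand-in figures, purely to show the mechanism.
vaccine_efficacy = 0.99  # yellow fever vaccine is among the most reliable

regions = [("western country",         1e-6),  # unvaccinated lifetime risk
           ("tropical endemic region", 0.10)]

for region, unvaccinated_risk in regions:
    arr = unvaccinated_risk * vaccine_efficacy  # population-wide shift
    print(f"{region}: ARR = {arr:.7f}")
```

A near-perfect vaccine still yields a negligible ARR where nobody is exposed, which is exactly why it isn't in the default package there.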