Because OP is a halfwit and fails to understand that ARR is almost never used for anything except a few chronic conditions, because it is a completely contextual value that can differ wildly across two identical tests when dealing with a viral condition?
RRR is the only thing you ever see used because it's the only one with actual indicative properties.
Let me put it this way:
You have two surgical methods for the same condition.
One has been tried 1,000 times and saw 45 patients die mid-procedure.
The other has also been tried 1,000 times and saw 40 die mid-procedure.
The observed mortality is 4.5% and 4% respectively: an absolute difference of only 0.5 percentage points between the two methods. That's fucking nothing. But if two populations were treated this way, method A would produce 12.5% more deaths than method B (45 vs. 40). Now, you might think that makes the much more meaningful 12.5% the RRR. You'd be wrong.
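The arithmetic above can be sketched in a few lines (the trial counts are the example's hypothetical numbers; the variable names are mine):

```python
# Hypothetical trial results from the example above.
deaths_a, deaths_b, n = 45, 40, 1000

risk_a = deaths_a / n  # 0.045 -> 4.5% mortality for method A
risk_b = deaths_b / n  # 0.040 -> 4.0% mortality for method B

# Absolute gap between the two methods: 0.5 percentage points.
abs_diff = risk_a - risk_b

# Relative excess mortality of A over B: 5 extra deaths on B's 40.
rel_excess = (deaths_a - deaths_b) / deaths_b  # 0.125 -> 12.5% more deaths

print(f"absolute gap: {abs_diff:.1%}, relative excess deaths: {rel_excess:.1%}")
```

Note that 12.5% here is just A's mortality relative to B's; as the next step shows, it is not the RRR, because neither arm is an untreated control.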
Because consider this: we expect 5% of people to die from complications to begin with if they are not given immediate surgery. So the relevant baseline isn't zero deaths per 1,000; it's the 50 deaths we would expect in an untreated group of 1,000, not the full 1,000 we started with. Taking death as the AR event, method B shows a 20% RRR relative to that baseline ((50 - 40) / 50) and method A only a 10% RRR ((50 - 45) / 50), while the absolute gap between the two methods stays at the 0.5% from before. This means that despite the seemingly diminutive initial numbers, the actual statistical conclusion is that operation B is twice as effective at preventing mortalities as operation A. And yet the absolute difference has remained 0.5%.
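A minimal sketch of the RRR calculation against that assumed 5% untreated baseline (function and variable names are mine):

```python
# Assumed baseline from the example: 5% mortality without surgery.
n = 1000
baseline_risk = 0.05
expected_control_deaths = baseline_risk * n  # 50 expected deaths per 1,000 untreated

def relative_risk_reduction(treated_deaths, control_deaths=expected_control_deaths):
    """RRR: deaths averted relative to the untreated baseline."""
    return (control_deaths - treated_deaths) / control_deaths

rrr_a = relative_risk_reduction(45)  # 0.10 -> 10% RRR for method A
rrr_b = relative_risk_reduction(40)  # 0.20 -> 20% RRR for method B

print(rrr_b / rrr_a)  # 2.0 -> method B is twice as effective by RRR
```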
Now imagine that instead of death, the AR event you track is catching a disease. Depending on the current spread rate of the virus, the background risk might fluctuate from as low as 0.1% to as high as 30% within a single month. Run the exact same fucking test the next month and you end up with a vastly different ARR.
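To illustrate that point with made-up numbers: hold a treatment's RRR fixed (I assume 50% here purely for illustration) and watch the ARR swing with the background infection rate:

```python
# Hypothetical treatment with a fixed 50% relative risk reduction.
fixed_rrr = 0.50

# Background monthly infection risks from the range mentioned above.
for background_risk in (0.001, 0.05, 0.30):  # 0.1% .. 30%
    treated_risk = background_risk * (1 - fixed_rrr)
    arr = background_risk - treated_risk  # ARR scales with background risk
    print(f"background {background_risk:.1%} -> ARR {arr:.2%}, RRR {fixed_rrr:.0%}")
```

The same treatment yields an ARR of 0.05% in a quiet month and 15% in a bad one, which is exactly why ARR on its own tells you little about the treatment.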
You didn’t explain this well at all. Not a dig and I’d like to understand your logic.