That's my point. The Edison data (the data that goes to the Secretary of State (SOS) of each state and is used to certify the elections) does not show decimals. The CALCULATION produces them.
Let me illustrate with a pretend entry from the NYT scrape data (again, the data set I believe this "analyst" used):
Date: 11/4/2020 12:00:00
Total Votes: 1,630,257
Biden Percent: .666
Trump Percent: .334
Other Percent: .000
Then, when you go to calculate the "Trump Votes" and "Biden Votes", you get:
Biden Votes = 1,630,257 * 0.666 = 1,085,751.162
Trump Votes = 1,630,257 * 0.334 = 544,505.838
These calculations produce decimals (three decimal places, because that's how multiplication works). But those are not the real numbers being reported from the Dominion servers to Edison (and presumably to the SOSs).
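If it helps, here's the whole back-calculation in a few lines of Python. The field names are made up for illustration; this is NOT the actual NYT JSON schema:

```python
# Pretend NYT-style snapshot entry (field names are hypothetical).
entry = {
    "timestamp": "2020-11-04T12:00:00",
    "votes": 1_630_257,      # total votes: a whole number
    "biden_share": 0.666,    # shares stored rounded to 3 decimal places
    "trump_share": 0.334,
}

# The "fractional votes" are created right here, by the multiplication:
biden_votes = entry["votes"] * entry["biden_share"]
trump_votes = entry["votes"] * entry["trump_share"]

print(biden_votes)  # ~1085751.162 (plus floating-point noise)
print(trump_votes)  # ~544505.838
```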
The Edison data for such an entry would look something like:
Biden Votes: 1,086,129
Trump Votes: 544,128
These numbers not only have no decimal places, they aren't even the same numbers. The reason is that one set is reported (the Edison data) and the other is calculated by multiplying a whole number (1,630,257) by a decimal only accurate to the thousandths place (e.g. 0.666).
The error value from such a calculation MUST be carried through or you get crappy analysis. A percentage rounded to three decimal places is only known to within +/- 0.0005, so in this case the real number from the Biden calculation is:
1,630,257 * 0.666 = 1,085,751.162 +/- 815.13
This means the REAL value for Biden falls somewhere between:
1,084,936.03 and 1,086,566.29 (and the Edison value of 1,086,129 does indeed fall in that range).
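You can sanity-check that interval yourself in a couple of lines. This is just a sketch of the error propagation, not anyone's actual pipeline:

```python
# A share reported as 0.666 could have been anything in [0.6655, 0.6665),
# so the implied vote count is an interval, not a single number.
total = 1_630_257
reported_share = 0.666
half_step = 0.0005  # half of the 3-decimal-place reporting resolution

low = total * (reported_share - half_step)   # ~1,084,936.03
high = total * (reported_share + half_step)  # ~1,086,566.29

edison_reported = 1_086_129  # the whole-number value from the Edison data
print(low <= edison_reported <= high)  # True: the reported count fits the interval
```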
This mistake was made by many people, not just Jim Hoft, but he keeps running wild with it, drawing all kinds of false conclusions (like his "drop and roll" thing). Well, I shouldn't say his conclusions are all false; not all of them are. You don't need the extra precision from the Edison data to reach many of them, but a lot of his stuff is just plain wrong.
Again, the Edison data shows conclusively that this was NOT random sampling from a biased population (election data). In fact, the Edison data shows it BETTER than the NYT .json scrape data does.
I'm not buying it. There is no reason to ever have decimals as part of your calculations of votes. PERIOD. Just my opinion, take it or leave it.
I am in complete agreement with your statement. What I am saying is that this person's analysis (and MOST of the others that talk about "fractional data") uses the NYT scrape data set as its reference, and these people don't realize THEY CREATED THE DECIMALS WHEN THEY DID THEIR CALCULATIONS.
The NYT .json scrape data was an incomplete data set. The NYT got their information from the Dominion (et al.) servers, and when they got their NEXT piece of information, the previous piece got STORED in their .json database as a smaller amount of information (as I described above). That stored data had much less precision than the original.
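Here's a sketch of that precision loss (Python again; the exact counts are just the pretend numbers from above):

```python
# Suppose the original report had exact whole-number counts...
biden_exact, trump_exact = 1_086_129, 544_128
total = biden_exact + trump_exact  # 1,630,257

# ...but the stored snapshot kept only the total and a 3-decimal share.
stored_share = round(biden_exact / total, 3)  # 0.666 <- precision lost here

# Reconstructing from the stored snapshot cannot recover the original count:
reconstructed = total * stored_share
print(reconstructed)  # ~1085751.162, not 1086129, and not even a whole number
```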
I know it's confusing. Many people succumbed to the exact same error, even good analysts. Even I did, for a day or so, before I realized my mistake, and I've been working with large numerical data sets like this for twenty years.
I promise you: anyone who is getting decimals from the NYT data set is not "getting them" but calculating them themselves. The original (Edison) data set does not contain them.
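If you have a copy of the Edison data handy, it's trivial to verify. The file name and field names below are hypothetical placeholders for however your copy is laid out:

```python
import json

# Load a local copy of the Edison-style data (hypothetical file/field names).
with open("edison_snapshot.json") as f:
    rows = json.load(f)

# Flag any row where a reported vote count is not a whole number.
fractional = [r for r in rows
              if r["biden_votes"] != int(r["biden_votes"])
              or r["trump_votes"] != int(r["trump_votes"])]
print(len(fractional))  # expect 0: the reported counts contain no decimals
```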
That doesn't mean there are no places where there might be partial votes. The fact that the Dominion machines themselves can tabulate fractional votes is a completely separate issue (e.g. we have seen real evidence that you can set a Biden vote to be worth 1.2 votes).
I don't know how the Dominion machines STORE those partial votes, but I do know they are not REPORTING those fractions, if the votes are stored that way at all (they probably are).