Let me be clear that I agree fraud occurred on a large scale in both Fulton and Philadelphia. I just don't think Solomon proves it with his crazy, unbelievably long-winded analysis. You quote me and then say "That's not the point," but then your next words verify it is in fact the point. His ultimate argument is probabilistic, saying there is no way such ratio transfers could ever have occurred normally. That's a probability statement with a tacit reference to a uniform distribution. My claim is that the uniform distribution assumption is flawed. Voting tallies and the ratios they produce with counting schemes like those in Fulton and Philly occur in batches, often of size 50 or 100. This makes certain ratios much more likely than others simply by the counting process. Finally, I'd challenge you to actually write his algorithm in some kind of reasonable coding language with a clear, concise explanation and logic, not with endless attention-seeking hours on spreadsheets and Starcraft. Good luck!
What is 'crazy' about the algorithm doing the following things (a toy sketch follows the list below)?
1: Calculate the number of fake Biden votes needed to steal the state
2: Come up with ratios that can be forced on certain precincts (which, on average, are lower than the true ratios)
3: Shuffle the forced ratios among multiple precincts in order to hide the rigging
4: Keep the ratios evenly balanced to achieve the target % of Trump votes
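To make those four steps concrete, here is a minimal toy sketch in TypeScript. Everything in it (the precinct records, the target share, the helper names) is invented for illustration; it models the claim as described above, not code recovered from any real system:

```typescript
// Toy model of the four steps above -- illustrative only. Precinct counts,
// the target share, and every helper name here are invented for the example.

interface Precinct { name: string; trump: number; biden: number; }

// Fisher-Yates shuffle (good enough for a toy model)
function shuffle<T>(xs: T[]): T[] {
  const a = [...xs];
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

function simulateAllegedRig(precincts: Precinct[], targetTrumpShare: number): Precinct[] {
  const total = precincts.reduce((s, p) => s + p.trump + p.biden, 0);
  const trump = precincts.reduce((s, p) => s + p.trump, 0);

  // Step 1: votes that must move from Trump to Biden for Trump to land at the target share.
  const deficit = Math.max(0, Math.ceil(trump - targetTrumpShare * total));

  // Step 2: pick precincts to "seize" and compute forced ratios that are,
  // on average, lower than their true ratios.
  const seized = new Set(shuffle(precincts).slice(0, Math.ceil(precincts.length / 2)));
  const forced = [...seized].map(p => 0.9 * (p.trump / (p.trump + p.biden)));

  // Step 3: shuffle the forced ratios among the seized precincts.
  const assigned = shuffle(forced);

  // Step 4: apply the forced ratios, tracking how many votes have moved so the
  // statewide Trump share stays pinned at the target.
  let moved = 0;
  let k = 0;
  const rigged = precincts.map(p => {
    if (!seized.has(p)) return { ...p };
    const size = p.trump + p.biden;
    const newTrump = Math.min(p.trump, Math.round(assigned[k++] * size));
    moved += p.trump - newTrump;
    return { name: p.name, trump: newTrump, biden: p.biden + (p.trump - newTrump) };
  });

  // Crude final balance: if the forced ratios didn't move enough votes,
  // take the remainder from the largest precinct (purely to close the toy example).
  const remainder = deficit - moved;
  if (remainder > 0) {
    const biggest = rigged.reduce((a, b) => (a.trump + a.biden >= b.trump + b.biden ? a : b));
    biggest.trump -= remainder;
    biggest.biden += remainder;
  }
  return rigged;
}
```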
Older voting machines did the same thing but just stole the same share of votes from each precinct without shuffling them. This is exactly what happened in New Hampshire.
I just don't think Solomon proves it with his crazy, unbelievably long-winded analysis
That is exactly why I was drawn to Edward Solomon. He demonstrates every single step of his calculations live on camera and hides nothing. He provides spreadsheets at every major step of the process.
If Solomon's livestreams are too long-winded for you, you can just watch his 'smoking gun' videos.
Voting tallies and the ratios they produce with counting schemes like those in Fulton and Philly occur in batches, often of size 50 or 100
It doesn't matter that some precincts don't have updates. A ratio is NOT counted if there is no new update for that precinct. So the repeated ratios have NOTHING to do with ratios simply not updating.
Also, does it take hours to count a stack of 50 or 100 ballots? That makes zero sense. Speaking of batches of 50 ballots, Solomon also analyzed the remainders of each precinct's final Trump totals, and they don't follow the expected distribution.
His ultimate argument is probabilistic, saying there is no way such ratio transfers could ever have occurred normally. That's a probability statement with a tacit reference to a uniform distribution
The distribution isn't even the strongest evidence. The strongest evidence is the fact that the ratios in GA and PA violated Euler's Totient Law from 1735. The Iowa 2016 caucus results did not violate that law. It doesn't matter if the people voted 90% Trump or 10% Trump; the ratios of the precincts must always obey that law. Otherwise it's rigged.
Finally, I'd challenge you to actually write his algorithm in some kind of reasonable coding language with a clear, concise explanation and logic, not with endless attention-seeking hours on spreadsheets and Starcraft. Good luck!
I've already started writing that algorithm in the TypeScript programming language, and I'm mostly done with it. This project hasn't been very hard.
Thanks for taking the time to reply point-by-point. I agree the four steps you indicate are not that crazy. Note they no longer mention ratio transfers, seizing, releasing, totients, or wheels, nor do they require hours of mind-numbing video explanation.
What probably happened is not too far from this, and it can be stated even more simply: 1. Calculate the number of fake Biden votes needed to steal the state 2. Add to Biden and/or subtract from Trump the desired numbers distributed proportionally to selected precincts 3. Send a one-time note to inside contacts to make sure paper ballots and their images match the adjusted numbers.
Simple addition, subtraction, and basic fractions are all that is needed: no coprime numbers, Euler's Totient function, or wheels (see the sketch below). Apply Occam's razor to both the algorithm and the logistics.
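To underline how little machinery that needs, here is the whole thing as a few lines of TypeScript; the precinct records and the number of votes to shift are of course invented for illustration:

```typescript
// The three steps above reduced to code -- nothing but proportions.
// Precinct data and the number of votes to shift are invented for this sketch.

interface Precinct { name: string; trump: number; biden: number; }

function shiftProportionally(precincts: Precinct[], votesToShift: number): Precinct[] {
  const totalTrump = precincts.reduce((s, p) => s + p.trump, 0);
  return precincts.map(p => {
    // each precinct gives up its proportional share of the shift
    // (rounding can leave the total off by a vote or two; fine for a sketch)
    const cut = Math.round(votesToShift * (p.trump / totalTrump));
    return { name: p.name, trump: p.trump - cut, biden: p.biden + cut };
  });
}
```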
Please also remember the Edison time series data are approximate counts due to the 3-decimal rounding in the way candidate fractions were reported. This fact alone shows there is nothing mathematically exact here: the counts he uses are not even the true ones. I was actually the one who provided the raw time series data to Solomon for Philly, and I have had several exchanges with him.
You appear to be avoiding probabilities, as does Solomon, but in the end please realize there must be an appeal to them, along with recognition of the stochastic nature of the counting process, which is unique to each location. Fulton and Philadelphia counties were both counted in large collective areas, facilitating fraud as in the three steps above. Comparing to Iowa 2016 is quite a stretch and carries little force given the massive changes in procedures due to COVID and mail-in voting.
Any claim of rarity must probabilistically refer to some kind of reference distribution for what is considered to be normal. In Solomon's case this appears to be a uniform distribution over ratios with small numerators. However, the true reference distribution is far from uniform given the way the counting is done and reported in time.
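To make this concrete, here is a rough TypeScript simulation of the batch effect; the batch size, support level, and sample count are assumptions chosen purely for illustration, not estimates from the actual data:

```typescript
// Rough simulation: votes reported in fixed-size batches produce batch-level
// Trump:Biden ratios that concentrate on a small set of reduced fractions.
// Batch size, support level, and sample count are assumptions for illustration.

function gcd(a: number, b: number): number { return b === 0 ? a : gcd(b, a % b); }

function simulateBatchRatios(batches = 100_000, batchSize = 50, pTrump = 0.45): Map<string, number> {
  const counts = new Map<string, number>();
  for (let i = 0; i < batches; i++) {
    let trump = 0;
    for (let j = 0; j < batchSize; j++) if (Math.random() < pTrump) trump++;
    const biden = batchSize - trump;
    const g = gcd(trump, biden);                 // gcd(0, n) = n, so g >= 1 here
    const key = `${trump / g}:${biden / g}`;     // ratio in lowest terms
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}

// A small set of reduced ratios accounts for the bulk of all batches, while most
// others barely appear -- nothing like a uniform draw over "thin dartboard lines".
const top = [...simulateBatchRatios().entries()].sort((a, b) => b[1] - a[1]).slice(0, 10);
console.log(top);
```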
Great to hear of your coding effort and will look forward to seeing it. Happy to make a friendly wager that what Solomon described is not what we will find if we can ever get our hands on the Dominion source code. Godspeed to Matt Braynard's Look Ahead America in its effort to remove black-box machines and make all code open source. Your code could potentially contribute to that initiative.
Alright, NOW I understand why you think Edward Solomon is wrong. You must've gotten confused, and I don't blame you, because Solomon's videos are too long.
Let me simplify my understanding of his work:
The cheating algorithm
Calculate the number of fake Biden votes needed to steal the state 2. Add to Biden and/or subtract from Trump the desired numbers distributed proportionally to selected precincts 3. Send a one-time note to inside contacts to make sure paper ballots and their images match the adjusted numbers.
Yes. But if it did just this, the fraud would be really easy to detect. In fact, older voting machines (Diebold GEMS, Sequoia, etc.) might have only done that. There's evidence that the machines in Dr. Shiva's MA Senate election just subtracted a fixed fraction from each county or precinct.
So the algorithm, as Solomon found out, has just 2 extra steps in order to hide the shenanigans: it (A) randomizes the ratios of what % of Trump votes are stolen and (B) spreads them across random precincts (which he calls 'seizing').
The 'wheel' is just an easy-to-understand analogy Solomon uses to describe these 2 steps. (Actually, a seesaw is a better analogy.) The algorithm targets a % of Trump votes in total (let's say 48%) and makes sure enough votes are stolen across all the precincts to add up to 48%. That also means it doesn't matter if the real Trump votes are 50%, 53% or 58%; the wheel will always try to balance it out to 48%. It'll steal more or fewer votes (in real time).
Evidently in PA the share of Trump votes was so overwhelming that the cheaters had to spin the wheel down by 2% so that Trump would still be behind (or dump more fake ballots). All the seized precincts had their Trump ratios docked by ~2% at the exact same time. That's completely unnatural.
So what Solomon found is pretty much what you've described, except for those two extra steps.
Euler Totients, coprime numbers, pairwise fractions, snapping...
These are NOT part of the algorithm itself. They are TOOLS that Solomon used to prove that fraud exists and that the algorithm works in that way.
For example, it's a law of mathematics that the share of irreducible (coprime) fractions in a large random dataset of integer pairs is about 61% (6/π², to be precise). When Solomon counted the Trump/Biden ratios in each precinct, they miserably failed this law. That means the votes couldn't be a natural dataset.
Solomon then applied the same test to the Iowa 2016 caucus results and found that they followed that ~61% rule.
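If you want to check that rule yourself, here is a quick TypeScript sketch; the sample count and integer range are arbitrary choices, and it only demonstrates the rule for independent random integer pairs:

```typescript
// Quick check of the rule above: two integers drawn independently at random are
// coprime (the fraction is already in lowest terms) about 60.8% of the time,
// i.e. 6/pi^2. Sample count and range are arbitrary choices for the demo.

function gcd(a: number, b: number): number { return b === 0 ? a : gcd(b, a % b); }

function coprimeFraction(samples = 1_000_000, max = 10_000): number {
  let coprime = 0;
  for (let i = 0; i < samples; i++) {
    const a = 1 + Math.floor(Math.random() * max);
    const b = 1 + Math.floor(Math.random() * max);
    if (gcd(a, b) === 1) coprime++;
  }
  return coprime / samples;
}

console.log(coprimeFraction(), 6 / Math.PI ** 2); // both come out near 0.608
```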
Please also remember the Edison time series data are approximate counts due to the 3-decimal rounding in the way candidate fractions were reported.
The main JSON file contained links to the data for each precinct. Solomon used the NYT JSON files for the precincts, which contain much more precise counts; they might even have been whole numbers (I forget). But even if they were 0.821, 0.346, etc., at the precinct level (with a few hundred to a few thousand votes) the rounding wouldn't matter. It would be off by 1 or 2 votes.
Why did NYT/Edison even expose these URLs in the first place? I have no idea. The live maps only need to know results by county, not precinct.
But if NYT had never exposed these links in the first place, we would've never discovered this algorithm.
Of course, Edison/NYT and all the other media outlets NEVER exposed the precinct files again after all the drama with the algorithms. In the GA runoff election, the feed only showed data by county.
Great to hear of your coding effort and will look forward to seeing it
Thank you. I hope I clarified things for you.
P.S. Remember the Arizona senate hearing from 11/30? They allegedly showed a leaked e-mail which talked about adding 35,000 votes to every Democrat candidate "in a spread distribution". I think this is exactly what Solomon proved.
I appreciate your effort to clear your way through the unbelievably obtuse and lengthy presentation Solomon provides in an attempt to distill it down to a salient understanding. Please be aware I have done the same and largely agree with your description above of what he did. I have been analyzing the original data in depth since the election, and again, I was the one who actually provided the data to Solomon for Philly.
With that said, you have not yet addressed my main criticism involving the fundamental tacit assumption that underlies Solomon’s primary claim of proving fraud. We may now finally be to the point where you can appreciate the full force of this, so let me give it one more try.
My references to a uniform probability distribution are to what you call a natural dataset. Solomon makes a uniform probabilistic assumption on this set when he refers to hitting thin lines on a dartboard, similar to the wheel. This assumption is flawed in the sense that it does not adequately reflect how precinct-level ballot ratios are stochastically generated in time, even in non-fraudulent cases.
Here’s an example: Consider bus riders from a weekday in a busy city in pre-covid days, and suppose we have complete rider and bus records for a full day. To relate this to Solomon’s setup, riders are precincts and bus seats are specific ballot ratios. We can ask, what are the chances that rider A gets off at a specific stop at a certain time and rider B gets on and takes the exact same seat rider A was just in? This represents, by analogy, a ratio transfer. Under Solomon’s assumption, all such probabilities would be extremely small given the number of riders and available bus seats for that day. But hold on, many riders have a regular work schedule and are also creatures of habit, so in many cases the transfer probabilities end up being much larger. The observed distribution of seat occupation is surely far from being equal across the seats. To accurately compute ratio transfer probabilities, one would need to thoroughly assess the riding patterns over numerous days to determine a reasonable reference distribution for what is natural in that city. In this case, as in the voting scenario, this would differ substantially from a uniform distribution.
Carrying this example further, suppose a group of thugs boards a few of the buses throughout the day and forces riders to sit in certain seats. Literally seizing and releasing them as Solomon describes. From the data alone, how would we determine which buses the thugs boarded and which people they accosted? Again, we would first need to know what the natural distribution of seat occupation looks like for a typical weekday in that city and then look for anomalies. This would be very difficult to determine precisely from data from only a single day due to confounding of various factors.
As mentioned previously, comparing to Iowa 2016 is weak at best. We would ideally need precinct-level time-series data for counties similar to Fulton and Philadelphia in 2020. For example, comparing Fulton to Cobb, DeKalb, and Gwinnett in Georgia is reasonable, but that is still only three counties, and they were not counted collectively in State Farm Arena as Fulton was, so the dynamics are different. It is much more straightforward and convincing to do graphical and statistical comparisons like those in Chapters 2 and 6 of https://www.scribd.com/document/495988109/MITRE-Election-Report-Critique .
To summarize, Solomon’s primary conclusions ride on a highly obfuscated probabilistic assumption that appears to be a poor approximation to realities of the 2020 election. I believe his techniques are capable of finding and suggesting some potentially fraudulent instances, but this is a very long way from 100% mathematical proof. I’d encourage you to think more deeply about this and carefully reread my replies in this thread. Researching more general topics like https://en.wikipedia.org/wiki/Statistical_proof and https://en.wikipedia.org/wiki/Probabilistic_logic may also help.
One final technical correction: the 3-decimal rounding errors are much larger than 1 or 2 votes. The Edison/NYT JSON files report a cumulative grand total of votes and then 3-decimal fractions for each candidate. As soon as the grand totals exceed 1,000, rounding errors begin and grow increasingly worse as precinct size increases and as time goes on. Such errors are substantial in the larger precincts in both Fulton and Philadelphia counties and also involve third-party candidates. The deltas that Solomon analyzes are far from exact, adding another questionable aspect to his entire analysis.
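To show the scale with a worked example (all numbers invented), reconstruct a large precinct's Trump count from the reported total and 3-decimal share at two successive snapshots and compare the recovered delta with the true one:

```typescript
// Illustration of the rounding point, with invented numbers: the feed reports a
// cumulative total plus a 3-decimal share, so a reconstructed count can be off
// by up to 0.0005 * total, and a delta between snapshots by up to 0.001 * total.

const round3 = (x: number) => Math.round(x * 1000) / 1000; // what the feed exposes

// Count an analyst recovers from (reported total, reported 3-decimal share)
const recovered = (total: number, trueCount: number) => round3(trueCount / total) * total;

// Two successive snapshots of one large precinct (numbers invented):
const d1 = recovered(11_842, 3_200);   // ~3197.3 (true count: 3200)
const d2 = recovered(12_390, 3_352);   // ~3357.7 (true count: 3352)
console.log(Math.round(d2 - d1));      // ~160, versus a true delta of 152
```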