I strongly disagree. The "Attention ES&S and Dominion" video shows that the ratios were exponentially distributed in GA and PA, whereas they're supposed to be on an S curve. That proves that the voting machines gave Joebama more favorable ratios.
are not nearly as rare as he claims given how vote tallies are reported over time
That's not the point. The point is that there's no way specific ratios are supposed to be transferred from multiple precincts to other precincts at specific times. If each precinct has hundreds or thousands of votes, those events are supposed to be very rare. In an honest election you can't have dozens of precincts with the same ratios at the same time. Solomon even looked at the 2016 Iowa Dem caucus as an example of an honest election and found that none of this happened.
Keep in mind these ratio anomalies didn't start on the evening of Nov. 3; they started around midnight in all these states, when most of the voter fraud was carried out.
The strongest proof that Edward Solomon is correct is that he found the 'wheel algorithm' in PA 'rotated' the entire wheel by reducing the Trump ratios in all the precincts it seized by ~2% at the same time. Trump was getting an overwhelming number of votes in that state, so a downward adjustment made perfect sense. There's no way Trump ratios in dozens of precincts can all drop by 2% at the same time unless there was fraud.
Edward Solomon's proof is the smoking gun and is 10x better than any other statistical evidence I've come across. I have watched more than 40 hours of Solomon's work, and it all looks 100% solid to me. If Dominion were cheating with an algorithm, this is exactly how they would do it. The reason the vote flipping is detectable is that precincts have a limited number of votes (usually up to 3,000), and Dominion's wheel can only have a few hundred or so ratios.
Let me be clear that I agree fraud occurred on a large scale in both Fulton and Philadelphia. I just don't think Solomon proves it with his crazy, unbelievably long-winded analysis. You quote me and then say "That's not the point," but then your next words verify it is in fact the point. His ultimate argument is probabilistic, saying there is no way such ratio transfers could ever have occurred normally. That's a probability statement with a tacit reference to a uniform distribution. My claim is that the uniform distribution assumption is flawed. Voting tallies and the ratios they produce with counting schemes like those in Fulton and Philly occur in batches, often of size 50 or 100. This makes certain ratios much more likely than others simply as a result of the counting process. Finally, I'd challenge you to actually write his algorithm in some kind of reasonable coding language with a clear, concise explanation and logic, not with endless attention-seeking hours on spreadsheets and StarCraft. Good luck!
What is 'crazy' about the algorithm doing the following things?
1: Calculate the number of fake Biden votes needed to steal the state
2: Come up with ratios that can be forced on certain precincts (which, on average, are lower than the true ratios)
3: Shuffle the forced ratios among multiple precincts in order to hide the rigging
4: Keep the ratios evenly balanced to achieve the target % of Trump votes
Older voting machines did the same thing but just stole the same share of votes from each precinct without shuffling them. This is exactly what happened in New Hampshire.
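Since you asked for code elsewhere in this thread, here's a rough TypeScript sketch of how I picture those four steps. To be clear, this is only my own illustration of the description above, not Solomon's code and not anything recovered from a Dominion machine; every name and number in it is made up.

```typescript
// Hypothetical illustration of the four steps above. Not recovered code.
interface Precinct {
  id: string;
  trump: number; // true Trump votes
  biden: number; // true Biden votes
}

// Step 1: how many votes have to move for Trump to land at the target share.
function votesNeeded(precincts: Precinct[], targetTrumpShare: number): number {
  const trump = precincts.reduce((sum, p) => sum + p.trump, 0);
  const total = precincts.reduce((sum, p) => sum + p.trump + p.biden, 0);
  return Math.max(0, Math.ceil(trump - targetTrumpShare * total));
}

// Steps 2-4: force lower-than-true ratios onto a shuffled set of precincts
// until the statewide deficit is covered.
function forceRatios(precincts: Precinct[], targetTrumpShare: number): Precinct[] {
  let deficit = votesNeeded(precincts, targetTrumpShare);
  // Step 3: crude shuffle, so the same forced ratios land on different precincts.
  const shuffled = [...precincts].sort(() => Math.random() - 0.5);
  return shuffled.map((p) => {
    if (deficit <= 0) return p; // Step 4: stop once the target is balanced.
    const total = p.trump + p.biden;
    const trueRatio = p.trump / total;
    // Step 2: forced ratio sits a bit below the true one (made-up 2% here).
    const forcedRatio = Math.max(0, trueRatio - 0.02);
    const newTrump = Math.round(forcedRatio * total);
    const flipped = p.trump - newTrump;
    deficit -= flipped;
    return { ...p, trump: newTrump, biden: p.biden + flipped };
  });
}
```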
I just don't think Solomon proves it with his crazy, unbelievably long-winded analysis
That is exactly why I was drawn to Edward Solomon. He demonstrates every single step of his calculations live on camera and hides nothing. He provides spreadsheets at every major step of the process.
If Solomon's livestreams are too long-winded for you, you can just watch his 'smoking gun' videos.
Voting tallies and the ratios they produce with counting schemes like those in Fulton and Philly occur in batches, often of size 50 or 100
It doesn't matter that some precincts don't have updates. Those ratios are NOT counted when there are no new updates, so the repeated ratios have NOTHING to do with a ratio simply failing to update.
Also, does it take hours to count a stack of 50 or 100 ballots? That makes zero sense. Speaking of batches of 50 ballots, Solomon also analyzed the remainders of each precinct's final Trump vote totals, and they don't follow the expected distribution.
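I take 'remainder' here to mean each precinct's final count mod the batch size; assuming that reading, a quick sketch of the check (my own, with made-up numbers) would be:

```typescript
// My guess at the remainder check: take each precinct's final Trump count
// mod the batch size and see whether the remainders look roughly uniform.
function remainderCounts(finalTrump: number[], batchSize: number): Map<number, number> {
  const counts = new Map<number, number>();
  for (const votes of finalTrump) {
    const r = votes % batchSize;
    counts.set(r, (counts.get(r) ?? 0) + 1);
  }
  return counts;
}

// Made-up example: with honest counts, no single remainder should dominate.
console.log(remainderCounts([1234, 987, 2050, 1775, 640], 50));
```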
His ultimate argument is probabilistic, saying there is no way such ratio transfers could ever have occurred normally. That's a probability statement with a tacit reference to a uniform distribution
The distribution isn't even the strongest evidence. The strongest evidence is the fact that the ratios in GA and PA violated Euler's Totient Law from 1735. The Iowa 2016 caucus results did not violate that law. It doesn't matter if the people voted 90% Trump or 10% Trump; the ratios of the precincts must always obey that law. Otherwise it's rigged.
Finally, I'd challenge you to actually write his algorithm in some kind of reasonable coding language with a clear, concise explanation and logic, not with endless attention-seeking hours on spreadsheets and StarCraft. Good luck!
I've already started writing that algorithm. I'm mostly done with it. I'm using the TypeScript programming language. This project hasn't been very hard.
Thanks for taking the time to reply point-by-point. Agree the four steps you indicate are not that crazy. Note they no longer mention ratio transfers, seizing, releasing, totients, or wheels, nor do they require hours of mind-numbing video explanation.
What likely happened is probably not too far from this, and can be stated even more easily:
1. Calculate the number of fake Biden votes needed to steal the state
2. Add to Biden and/or subtract from Trump the desired numbers, distributed proportionally to selected precincts
3. Send a one-time note to inside contacts to make sure paper ballots and their images match the adjusted numbers.
Simple addition, subtraction, and basic fractions are all that is needed, no coprime numbers, Euler's Totient function, or wheels. Apply Occam's razor to both the algo and logistics.
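For what it's worth, that simpler scheme is only a few lines; a minimal TypeScript sketch (purely illustrative, with made-up names and numbers) might be:

```typescript
// Illustration of the simpler scheme above: spread a fixed number of fake
// votes across precincts in proportion to their existing Biden counts.
interface Tally {
  precinct: string;
  trump: number;
  biden: number;
}

function addProportionally(tallies: Tally[], fakeBidenVotes: number): Tally[] {
  const totalBiden = tallies.reduce((sum, t) => sum + t.biden, 0);
  return tallies.map((t) => ({
    ...t,
    biden: t.biden + Math.round(fakeBidenVotes * (t.biden / totalBiden)),
  }));
}
```

No coprime numbers or wheels anywhere, just proportions.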
Please also remember the Edison time series data are approximate counts due to the 3-decimal rounding in the way candidate fractions were reported. This fact alone shows there is nothing mathematically exact here: the counts he uses are not even the true ones. I was actually the one who provided the raw time series data to Solomon for Philly, and I have had several exchanges with him.
You appear to be avoiding probabilities, as does Solomon, but in the end please realize there must be an appeal to them, along with recognition of the stochastic nature of the counting process, which is unique to each location. Fulton and Philadelphia counties were both counted in large collective areas, facilitating fraud as in the three steps above. Comparing to Iowa 2016 is quite a stretch and carries little force given the massive changes in procedures due to COVID and mail-in voting.
Any claim of rarity must probabilistically refer to some kind of reference distribution for what is considered to be normal. In Solomon's case this appears to be a uniform distribution over ratios with small numerators. However the true reference distribution is far from uniform given the way the counting is done and reported in time.
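To make the reference-distribution point concrete, here is a toy simulation (entirely my own, with made-up parameters): report each precinct's running totals in batches and record the reduced Trump/Biden ratio after every batch. The same 'special' ratios turn up in many simulated precincts purely as a byproduct of the batching.

```typescript
// Toy simulation: cumulative batch reporting makes certain reduced ratios
// recur across precincts far more often than a uniform-over-ratios picture
// would suggest.
function gcd(a: number, b: number): number {
  return b === 0 ? a : gcd(b, a % b);
}

function simulatePrecinct(trumpShare: number, batches: number, batchSize: number): string[] {
  let trump = 0;
  let biden = 0;
  const ratios: string[] = [];
  for (let i = 0; i < batches; i++) {
    for (let j = 0; j < batchSize; j++) {
      if (Math.random() < trumpShare) trump++;
      else biden++;
    }
    const g = gcd(trump, biden) || 1;
    ratios.push(`${trump / g}/${biden / g}`); // reduced ratio after each batch
  }
  return ratios;
}

// How many of 500 simulated precincts report the exact ratio 1/2 at some point?
const hits = Array.from({ length: 500 }, () => simulatePrecinct(0.33, 20, 100))
  .filter((ratios) => ratios.includes("1/2")).length;
console.log(`${hits} of 500 simulated precincts hit 1/2 at some batch boundary`);
```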
Great to hear of your coding effort and will look forward to seeing it. Happy to make a friendly wager that what Solomon described is not what we will find if we can ever get our hands on the Dominion source code. Godspeed to Matt Braynard's Look Ahead America to remove black box machines and make all code open source. Your code could potentially contribute to that initiative.
Alright, NOW I understand why you think Edward Solomon is wrong. You must've gotten confused, and I don't blame you, because Solomon's videos are too long.
Let me simplify my understanding of his work:
The cheating algorithm
1. Calculate the number of fake Biden votes needed to steal the state; 2. Add to Biden and/or subtract from Trump the desired numbers, distributed proportionally to selected precincts; 3. Send a one-time note to inside contacts to make sure paper ballots and their images match the adjusted numbers.
Yes. But if it did just this, the fraud would be really easy to detect. In fact, older voting machines (Diebold GEMS, Sequoia, etc.) might have only done that. There's evidence that the machines in Dr. Shiva's MA Senate election just subtracted a fixed fraction from each county or precinct.
So the algorithm, as Solomon found out, has just 2 extra steps in order to hide the shenanigans: it (A) randomizes the ratios of what % of Trump votes are stolen and (B) spreads them across random precincts (which he calls 'seizing').
The 'wheel' is just an easy-to-understand analogy Solomon uses to describe these 2 steps. (Actually, a seesaw is a better analogy.) The algorithm targets a total % of Trump votes (let's say 48%) and makes sure enough votes are stolen across all the precincts to land at 48%. That also means it doesn't matter whether the real Trump share is 50%, 53% or 58%; the wheel will always try to balance it back to 48%, stealing more or fewer votes in real time.
Evidently in PA the share of Trump votes was so overwhelming that the cheaters had to spin the wheel down by 2% so that Trump was still behind (or dump more fake ballots). All the seized precincts had their Trump ratios docked by ~2% at the same exact time. That's completely unnatural.
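In code, the 'rotation' boils down to something like this (my own illustrative sketch of the idea, not recovered code):

```typescript
// Illustration of "rotating the wheel": dock every seized precinct's Trump
// ratio by the same delta so the statewide share falls back to the target.
interface Seized {
  id: string;
  trumpRatio: number; // current reported Trump share in this precinct
  totalVotes: number;
}

function rotationNeeded(
  seized: Seized[],
  statewideTrump: number,
  statewideTotal: number,
  targetShare: number
): number {
  const excess = statewideTrump / statewideTotal - targetShare; // how far over target
  const seizedVotes = seized.reduce((sum, p) => sum + p.totalVotes, 0);
  return (excess * statewideTotal) / seizedVotes; // per Solomon's PA claim, roughly 0.02
}

function rotate(seized: Seized[], delta: number): Seized[] {
  return seized.map((p) => ({ ...p, trumpRatio: Math.max(0, p.trumpRatio - delta) }));
}
```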
So what Solomon found is pretty much what you've described, except for those two extra steps.
Euler Totients, coprime numbers, pairwise fractions, snapping...
These are NOT part of the algorithm itself. They are TOOLS that Solomon used to prove that fraud exists and to show that the algorithm works in that way.
For example, it's a law of mathematics that in a large set of random number pairs, about 61% (6/π², to be exact) of the fractions are already irreducible, i.e. the numerator and denominator are coprime. When Solomon counted the Trump/Biden ratios in each precinct, they miserably failed this test. That means the votes couldn't be a natural dataset.
Solomon then applied the same test to the Iowa 2016 caucus results and found that they followed the rule.
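If you want to run that check yourself, here's my own quick TypeScript version of the test (not Solomon's spreadsheet, and the counts below are made up): count what share of (Trump, Biden) pairs is coprime, i.e. already in lowest terms. For independent random integers that share tends toward 6/π² ≈ 0.61, though real vote pairs aren't perfectly uniform random integers, so treat it as a baseline rather than an exact law.

```typescript
// Coprimality check: what share of (Trump, Biden) vote pairs is already in
// lowest terms? For independent random integers it tends toward 6/pi^2 ~ 0.608.
function gcd(a: number, b: number): number {
  return b === 0 ? a : gcd(b, a % b);
}

function coprimeShare(pairs: Array<[number, number]>): number {
  const coprime = pairs.filter(([t, b]) => gcd(t, b) === 1).length;
  return coprime / pairs.length;
}

// Made-up precinct counts, just to show the call:
const pairs: Array<[number, number]> = [[312, 455], [128, 301], [740, 333]];
console.log(coprimeShare(pairs), "vs", 6 / Math.PI ** 2);
```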
Please also remember the Edison time series data are approximate counts due to the 3-decimal rounding in the way candidate fractions were reported.
The main JSON file contained links to the data from each precinct. Solomon used the NYT JSON files from the precincts, which contain much more precise counts. They might've even been whole numbers (I forget). But even if they were 0.821, 0.346, etc., at the precinct level (with a few hundred to a few thousand votes), the rounding wouldn't matter. It would be off by 1 or 2 votes.
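And the rounding error itself is easy to bound (a back-of-the-envelope sketch, with made-up numbers):

```typescript
// With shares reported to 3 decimals, a reconstructed count is off by at most
// totalVotes * 0.0005, i.e. one or two votes for a precinct of a few thousand.
function reconstruct(roundedShare: number, totalVotes: number): number {
  return Math.round(roundedShare * totalVotes);
}
const maxError = (totalVotes: number): number => totalVotes * 0.0005;
console.log(reconstruct(0.821, 2400), "+/-", maxError(2400)); // 1970 +/- 1.2
```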
Why did NYT/Edison even expose these URLs in the first place? I have no idea. The live maps only need to know results by county, not precinct.
But if NYT had never exposed these links in the first place, we would've never discovered this algorithm.
Of course, Edison/NYT and all the other media outlets NEVER exposed the precinct files again after all the drama with the algorithms. In the GA runoff election, the feed only showed data by county.
Great to hear of your coding effort and will look forward to seeing it
Thank you. I hope I clarified things for you.
P.S. Remember the Arizona senate hearing from 11/30? They allegedly showed a leaked e-mail which talked about adding 35,000 votes to every Democrat candidate "in a spread distribution". I think this is exactly what Solomon proved.