The original sin of American polling came in 1936, when Literary Digest surveyed its readership and declared that Alf Landon would likely be the next president. Landon lost, and within two years, the magazine was no more. George Gallup, whose organization got that contest right, pioneered random sampling in polling. But a dozen years later, his firm was also on the wrong side of the 1948 “Dewey Defeats Truman” upset, giving him the most famous black eye in all of polling history.
Polling has rarely been perfect, and in recent years, response rates have fallen to around 6%. Phone polling has become incredibly expensive, and while new methods such as online polling continue to evolve and improve, there is no single “gold standard” polling method these days.
President Trump’s victory over Hillary Clinton in 2016 put the polling industry in the hot seat. Four years later, we knew the polls had again missed several targets by 9 p.m. on election night, when Florida scrambled the conventional expectations of the race. Frank Luntz, perhaps polling’s most famous face, declared that 2020’s even bigger polling flops meant the industry is “dead.”
But some of us, particularly many private pollsters, weren’t really surprised by the result of this election and don’t see it as a reason to give up.
For one, predicting the outcome of an election is less important than understanding what voters are thinking and feeling in between elections. Most Americans can’t afford to hire a lobbyist and are too busy raising a family and making ends meet to engage in political advocacy and activism. For these members of the public, polls are a way to make their voices heard by those who represent them in office. They’re a vital line of communication.
Which is precisely why I think fundamental change is both possible and necessary.
This year, the pandemic and the enormous rise in expected turnout added even more uncertainty, and smart observers had opened their minds to a wide range of possible outcomes, public polls notwithstanding. But simply saying, “Don’t trust the polls too much,” isn’t a satisfying answer to the very valid question of “What went wrong?”
The challenges facing polling are significant and, unlike in past years, aren’t explainable by any neat and tidy theory. For instance, in 2012, the polls missed because pollsters failed to call enough cellphones. The solution? Add more cellphone respondents. In 2016, the polls were off due to missing voters without college degrees. The solution? Get your education mix right.
But this time, things aren’t nearly so clear-cut.
Most efforts to create a unified theory of polling error in 2020 center on one phrase: “shy Trump voters.” No one promoted this theory more than pollster Robert Cahaly of the Trafalgar Group, who claimed a better understanding of the voters who weren’t showing up in polls but did show up on Election Day. And in key states such as Florida and Wisconsin, he came far closer to the mark than major media and university pollsters did, and for that, he deserves credit.
But the quest to hunt for “shy Trump voters” also led Trafalgar’s numbers astray in states such as Minnesota, where Trafalgar was further from the mark than most others, or Arizona and Pennsylvania, where its polling had those states breaking for Trump by a few points when the final results will likely show no such thing.
The point is not to go after any individual pollster. This work is hard, and I have great respect for pollsters who are brave enough to put their data out where it will be judged. And no public pollster this year gets to claim a perfect record (save the incomparable Ann Selzer, who projected a big Trump victory in Iowa and got a lot of grief in the process for going against the grain).
There is no simple, singular answer to what went wrong.
For starters, there is the idea that this is a Trump issue: that the president is a uniquely polarizing figure who scrambles pollsters’ ability to do their jobs. But that doesn’t explain why polls in Maine and South Carolina were close to the mark on the presidential race yet missed badly on the Senate contests in those states, leading hopeful Democratic donors to pour millions of dollars into those races fruitlessly. Figuring out “shy Trump voters” is one thing, but figuring out “shy Susan Collins voters” may be a very different task.
Polling problems also aren’t confined to one type of poll. If the online polls had succeeded and the phone polls had failed, perhaps we’d have an answer. But instead, a prominent online pollster such as Morning Consult predicted Joe Biden would win Florida by 6 points; prominent phone pollster Quinnipiac said Biden would win Florida by 5. Both expected Texas to be tied. Neither was even close.
And those failures aren’t easily explained by the methodological problems we know both phone and online polls face — for example, unrepresentative answers from people who are paid to take online surveys, or phone polls that lean too heavily on landline respondents, who tend to be older.
The problems aren’t confined to a single region, either. Deducing what went wrong in 2016 was easier because the polling error was heavily concentrated in some demographically similar states of the upper Midwest. These states had higher proportions of white voters without college degrees whose economic fates were tied to issues such as manufacturing and outsourcing. This time, while public polling averages were pretty close in Georgia and Minnesota, they were catastrophically wrong in Florida and Wisconsin. This doesn’t rule out the idea that there is a particular demographic group being systematically undercounted, but it does complicate the theory.
All of which brings us back to the concept of “shy Trump voters” as a catch-all explanation for what went awry. The challenge is that “shy Trump voter” can mean different things, each of which suggests a different prescription for pollsters who want to get it right.
The first iteration of the “shy Trump voter” theory is what we in the field call “nonresponse bias.” The idea is that there are people who are systematically sitting out polls, and that someone who is more inclined to vote for Trump may be less likely to take polls. While there was little evidence of this in 2016, I can say from personal experience that antipathy toward the polling industry has risen since then, particularly on the Right, and so it is not unreasonable to think Trump voters who are distrustful of the media and the “establishment” would ignore pollsters.
The inverse of this “nonresponse bias” issue could also be too much response from Democratic voters. Democratic analyst David Shor has proposed that the incandescent fury of liberal voters, combined with their ready availability by phone during the lockdowns, made them all too eager to respond to any poll that came their way. This would be less “shy Trump voters” and more “eager Biden voters.”
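To see how much damage even a small response gap can do, consider a back-of-the-envelope simulation. The sketch below, written in Python with every number invented purely for illustration, assumes a dead-even race in which one side is just slightly more willing to pick up the phone:

```python
import random

random.seed(42)

# Hypothetical electorate: a true 50/50 race. All numbers here are
# invented to illustrate the mechanism, not drawn from any actual poll.
N = 100_000
electorate = ["Trump"] * (N // 2) + ["Biden"] * (N // 2)

# Suppose Biden voters agree to take a poll 7% of the time and Trump
# voters 5% of the time -- a modest gap, in line with ~6% overall response.
response_rate = {"Biden": 0.07, "Trump": 0.05}

# A perfectly random contact attempt, filtered by willingness to respond.
respondents = [v for v in electorate if random.random() < response_rate[v]]

biden_share = respondents.count("Biden") / len(respondents)
print("True Biden share:   50.0%")
print(f"Polled Biden share: {biden_share:.1%}")
# Expected result: roughly 58% Biden, i.e. 0.07 / (0.07 + 0.05) -- an
# ~8-point topline error from differential nonresponse alone.
```

Nothing in this toy example involves anyone lying to a pollster; the distortion comes entirely from who answers. That is what makes nonresponse bias so insidious: it is invisible in the data a pollster actually collects.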
How would pollsters fix the nonresponse problem? First, they would aim to boost response rates overall. Exactly how to do that is the billion-dollar question, one pollsters have been asking since long before the 2020 election, but it takes on even greater urgency now that it is clear the 6% of people who take polls are no longer representative of the 94% who do not.
This may require conducting surveys that combine multiple methods of reaching voters, to make it as easy and noninvasive as possible for a voter to participate in a poll. But it might also require pollsters to rely more on modeling so that voters who do not identify themselves as conservative or Republican but who are likely to be right-of-center aren’t systematically left out of polls. That will mean abandoning certain sampling methods, such as the random dialing of phone numbers, and relying more on contacting voters from high-quality voter lists so that pollsters have a good idea of who is answering their surveys — and who is not.
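For readers curious what that list-based correction looks like in practice, here is a minimal sketch of the standard post-stratification weighting step, in which each respondent is weighted so the sample matches a known benchmark such as party registration. Every share and count below is a made-up assumption, not data from any actual survey:

```python
# Minimal post-stratification sketch: reweight respondents so the sample's
# party mix matches a voter-file benchmark. All shares and counts here are
# invented for illustration only.

# Party mix according to a (hypothetical) high-quality voter list.
population_share = {"Dem": 0.34, "Rep": 0.33, "Ind": 0.33}

# A raw sample of 1,000 respondents in which Democrats over-responded.
sample = ([("Dem", "Biden")] * 450 + [("Rep", "Trump")] * 280 +
          [("Ind", "Biden")] * 140 + [("Ind", "Trump")] * 130)

n = len(sample)
sample_share = {p: sum(1 for party, _ in sample if party == p) / n
                for p in population_share}

# Each respondent's weight: population share / sample share for their party.
weight = {p: population_share[p] / sample_share[p] for p in population_share}

def weighted_share(candidate):
    total = sum(weight[party] for party, _ in sample)
    votes = sum(weight[party] for party, vote in sample if vote == candidate)
    return votes / total

raw_share = sum(1 for _, vote in sample if vote == "Biden") / n
print(f"Raw Biden share:      {raw_share:.1%}")             # 59.0%
print(f"Weighted Biden share: {weighted_share('Biden'):.1%}")  # ~51.1%
```

The catch, of course, is that weighting on party only works if the respondents within each party resemble the nonrespondents within that party, which is exactly the assumption that “shy voter” theories call into question.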
But if a “shy Trump voter” is instead someone who takes polls but is reluctant to confess conservative preferences, that raises a different problem. Unlike nonresponse bias, this isn’t something that can be fixed through weighting or smart analytics; solving it requires a bit more art than science. How can you ask questions that better suss out someone’s underlying political leanings without asking the direct questions they might be trying to avoid answering?
There are some interesting data points that support this as at least a partial explanation of what happened to the polls in 2020. And the good news for pollsters is that with a little creativity, this issue might be solvable. In conversations with Republican pollsters the day before the election, I was struck by how many told me that voters who approved of Trump on the economy but were reluctant to say they supported him on the ballot might give Trump a boost on Election Day. That prediction seems to have been borne out by events, and it suggests the media should focus less on covering “the horse race” and do more to dig into the underlying views of the voters.
But there is also an element of this “shy Trump voter” problem that might be uniquely driven by the pandemic. Charles Franklin, the head of the Marquette University Law School Poll in Wisconsin, noted an interesting phenomenon in his polls this year: voters who had not yet cast a ballot were largely willing to say whom they intended to vote for, but voters who had already cast a ballot were more likely to refuse to answer. Our norms around the sacred secret ballot in America may mean people are much less comfortable sharing their vote once it has been cast. If early voting sticks around, pollsters will have to get better at teasing out the preferences of voters who may be less willing to reveal their own actions.
There are many things that could have gone wrong with the polls this year. Was the pandemic a factor? Our polarizing times? Increased mail-in and early voters? Some variation on “shy Trump voters” after all? The answer to “what went wrong” in polling may not be a single answer. It may be, to borrow a common poll response, “all of the above.”
But despite these challenges, polling is not dead. As our industry has done over the decades, we will take a hard look at the challenges and adapt. As long as we make sure we’re guided by the fundamental goal of elevating voters’ opinions into the public square, we’ll find our way.
Kristen Soltis Anderson is a political columnist for the Washington Examiner. She is the co-founder of Echelon Insights and regularly brings her pollster expertise to Fox News. She is the author of The Selfie Vote.

