Can you believe the polls?

Six years ago, the polling industry suffered a big blow to its reputation. Despite the accuracy of many surveys, the overall picture that polls painted of the 2016 election had Hillary Clinton winning and Donald Trump losing.

In the 2020 election, all 21 of the final national polls showed Biden ahead — and he was ahead. He won by 7 million votes. Moreover, the average of the final polls came close to nailing Biden’s vote share: 51.5% in the polls versus 51.3% in the actual vote count. Poll averages also indicated the right presidential winner in seven of 10 swing states; they came within a point of actual vote margins in Nevada and Georgia and within a few points in Arizona, North Carolina, Pennsylvania, and Michigan. However, the final polls in 2020 notably underestimated Trump’s vote. They had him getting 43.3% of the popular vote, well below the 46.9% he actually received. In key Senate elections, we saw the same phenomenon: Two-thirds of the polling indicated the right winners, but a shocking number of surveys underestimated Republican strength. The truth is that polling was biased against Republicans, and sampling error was usually the culprit.


College graduates, who tend to vote Democratic in federal elections, were often oversampled. Assumptions made about partisan voter turnout were sometimes wrong. In the last two presidential elections, there was likely a small segment of Trump supporters who didn’t tell pollsters they were voting for him, making it harder to measure his full support.

Legitimate pollsters make money by being right. That’s why deliberate bias is rare and mistakes are feared, especially in polls taken right before elections. But even pollsters with good reputations can make faulty assumptions and unwittingly allow partisan bias to skew questionnaires, sampling, and analysis. When discussing the accuracy of polls, let’s remember, too, that prognostications made by pundits, data modelers, and betting markets are not polls, even if they’re based in part on polls. These predictions are really educated guesses. But when they miss the mark, they make polls look bad.

It is important to keep in mind that polls don’t predict. They’re snapshots in time, not crystal balls. Whatever happens after the final poll is conducted won’t be measured. That’s why some polls miss late-breaking shifts, as happened in the 2016 presidential election, and seem wrong when, in fact, they’re not.

Each polling method, whether door-to-door, via cellphone calls, or some combination, has strengths and weaknesses. It is essential that pollsters maintain quality control throughout the entire survey process. Most do, but some don’t. Cutting corners saves money, but it produces bad numbers. That’s why when polls conflict with one another, we don’t always know which ones are painting the true picture.

But one thing we do know: When reading election polls, never put too much stock in any one survey. Look at multiple polls for confirmation and always try to find the trends. Also, delve into the internal numbers not reported by the media to see whether the data support the horse race numbers. Despite limitations and margins of error, polls remain the best available measurement tool of public opinion. If they weren’t, there wouldn’t be so many of them.


Ron Faucheux is a nonpartisan political analyst. He publishes LunchtimePolitics.com, a nationwide newsletter on polls and public opinion.
