Why Everyone Was Surprised By the Election Results

“It was around 9:20 p.m. when conventional wisdom died,” wrote the Wall Street Journal’s Neil King on election night. That was the moment when the New York Times’s website began projecting that a Donald Trump victory was more likely than not, and it became abundantly obvious that the presidential polls were wrong, in significant and crucial ways.

Of the 16 battleground states where Real Clear Politics (RCP) produced polling averages, Hillary Clinton won 6. She won only one state Trump had been predicted to win, Nevada, and she overperformed the polling average there by 3.2 points, close to the expected 3-point margin of error. Set Nevada aside, and in the remaining five swing states she won, her margin of victory was lower than pollsters predicted in every one, and dramatically lower in Minnesota. Trump, on the other hand, bested the RCP average by 8 points in Missouri, 7.5 points in Wisconsin, 6.6 points in Iowa, 5 points in Ohio, and nearly 4 points in Michigan. And significantly, pollsters wrongly predicted Clinton wins in Wisconsin, Michigan, and Pennsylvania. Those states swung the election. Finally, of those 16 RCP polling averages, 15 underestimated Trump’s support, even if most of the results were within the margin of error. It sure looks like a systemic failure of the polling industry to accurately measure Trump’s support.
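For readers curious where that roughly 3-point figure comes from, here is a minimal sketch, in Python, of the standard 95 percent margin-of-error calculation for a single poll. The sample size of about 1,000 respondents is an assumption for illustration, not a figure from any particular survey.

```python
import math

def margin_of_error(share_pct, n, z=1.96):
    """95 percent margin of error (in percentage points) for one candidate's
    share in a simple random sample of n respondents."""
    p = share_pct / 100.0
    return z * math.sqrt(p * (1 - p) / n) * 100

# Assumed, illustrative numbers: a candidate polling near 48 percent
# in a survey of roughly 1,000 respondents.
print(f"+/- {margin_of_error(48, 1000):.1f} points")  # about +/- 3.1 points
```

By that yardstick, a 3.2-point miss sits near the edge of a single poll’s expected error, while the 7- and 8-point misses are far outside it.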

While this result isn’t quite a bonfire of the pollsters, it is a far cry from 2012, when there was much fanfare over the fact that FiveThirtyEight polling guru Nate Silver correctly predicted the winner of all 50 states in the presidential election. Obama’s campaign juggernaut was both deploying and validating cutting-edge predictive data, from betting markets to behavioral psychology, and more complex and more accurate polls were thought to be the wave of the future.

But four years is an eternity in politics, and all manner of incidents leading up to 2016 suggested faith in polls was gravely misplaced. The success of the Brexit vote earlier this year stunned observers. The result was close enough that it didn’t necessarily suggest the polling was inaccurate, but the horrified recriminations afterward were indicative of a public that has been imbuing poll results with scientific certainty and has trouble coming to terms with the fact that they are often the statistical equivalent of an educated guess.

Last year was also bad for polling. The Real Clear Politics polling average had Democratic candidate Jack Conway leading Republican Matt Bevin just before the Kentucky governor’s election. Bevin won 53 percent to 44 percent. Polls in elections in the United Kingdom and Israel were also badly off, failing to predict convincing victories by the right-leaning parties in both countries.

And on the night of November 4, 2014, after the Democratic party suffered its historic midterm election defeat, Public Policy Polling tweeted, “Clearly a rough night for us and much of the polling industry and we’ll own that and try to figure out what happened in the coming days.” In that election, Mitch McConnell, the Republican leader in the Senate, was up 7 in the RCP average. He won by 15. Tom Cotton, running for the Senate from Arkansas, was up 7 in the RCP average. He won by 17. Virginia senator Mark Warner was up almost 10 points in the RCP average; he won by just 18,000 votes and was very nearly upset by GOP challenger Ed Gillespie. And there were many more such whiffs.

“In all these cases, polls seem to have understated actual support for right-of-center candidates and parties while coming fairly close to actual percentages for those left of center,” observed Michael Barone, surveying the trends in polling errors in the Wall Street Journal last fall. That’s also a good summation of the problem this year.

Ironically, the person who has been most vocal in recent years about sounding the alarm over polling errors is also the man who popularized the use and interpretation of polling averages: Nate Silver.

After the 2014 fiasco, Silver started excoriating pollsters and warning that polls are less accurate than people believe them to be. He headlined one article on his website “Here’s Proof Some Pollsters Are Putting a Thumb on the Scale.” Silver scatter-plotted the results of Iowa Senate polls and accused pollsters of “herding.” The graph showed polling numbers in Iowa were all over the place a few months before the election. But as Election Day neared, a curious thing happened. The polls started to converge on a consensus that Joni Ernst had a slight lead. (The final RCP average had Ernst ahead by 2.3 points, and she won by 8.5 points.)

This didn’t happen just in Iowa. “By the end of the campaign [in 2014], the polls in most states varied only within a narrow range,” notes Silver. The conclusion was inescapable: Across the board, the data were less “noisy” than they should have been, and a huge swath of the midterm polls was deeply suspect from a statistical perspective.
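The statistical logic behind a herding check is straightforward: compare how much the late polls actually disagree with one another against how much they ought to disagree from sampling error alone, and if the observed spread is far tighter than chance allows, someone is likely nudging results toward the consensus. Here is a rough sketch of that comparison in Python; the polls and sample sizes are invented for illustration, and this is not Silver’s actual code.

```python
import math
import statistics

# Hypothetical final-week polls: (candidate's lead in points, sample size).
polls = [(2.0, 800), (2.5, 650), (1.8, 900), (2.2, 700), (2.4, 750)]

leads = [lead for lead, _ in polls]
observed_sd = statistics.pstdev(leads)

# Expected standard deviation of a lead under pure sampling error in a
# near-even race: roughly 2 * sqrt(0.25 / n) * 100 points.
expected_sd = statistics.mean(2 * math.sqrt(0.25 / n) * 100 for _, n in polls)

print(f"Observed spread: {observed_sd:.2f} points")
print(f"Spread expected from sampling error alone: {expected_sd:.2f} points")
if observed_sd < 0.5 * expected_sd:
    print("Polls are suspiciously tightly clustered -- consistent with herding.")
```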

As the election drew near in 2016, it looked to Silver like many polls were overstating Hillary Clinton’s support, and once again he tried to pump the brakes. This time, something surprising happened. Liberals and the media—no little overlap there—turned on Silver because he was telling them something they didn’t want to hear, even though he was still predicting a Clinton win.

It can’t be overstated what a folk hero Silver had previously been on the left—he rose to fame providing polling analysis on the liberal Daily Kos blog. The protagonist of the popular TV series Orange Is the New Black explained her atheism by declaring, “I believe in science. I believe in evolution. I believe in Nate Silver and Neil deGrasse Tyson and Christopher Hitchens.”

Silver’s final forecast gave Clinton a 71 percent chance of winning. This wasn’t good enough for the liberal faithful. At Vox, Matthew Yglesias wrote “Why I think Nate Silver’s model underrates Clinton’s odds.” The Huffington Post sowed doubts about his integrity and methodology with headlines such as “Nate Silver Is Unskewing Polls—All of Them—in Trump’s Direction.” The Huffington Post’s own model gave Hillary Clinton a 98 percent chance of winning, and that wasn’t even the most extreme prediction. Princeton neuroscience professor Sam Wang dabbles in polling analysis, and he went so far as to predict a Clinton win with 99 percent certainty.

Wang had been spectacularly wrong before. His 2014 Senate predictions greatly underestimated Republican support. In 2004, Wang predicted John Kerry getting a commanding 311 electoral votes to Bush’s 227. Despite this less than stellar record, on November 6, a popular post at Silver’s old Daily Kos stomping grounds was “Five Reasons Nate Silver Is Wrong & Sam Wang Is Right: Hillary Is 99%+ Likely to Win.” On November 7, Wired magazine ran an article headlined “2016’s Election Data Hero Isn’t Nate Silver. It’s Sam Wang.” The next day the American people voted, and to borrow the president-elect’s idiom, Wang got schlonged. He predicted Clinton would win with 307 electoral votes. If Trump maintains his Michigan lead as expected, once all the straggling ballots are finally counted, Trump will end up with 306.

Did anyone get 2016 right? Well, looking at the pollsters used to compute the Real Clear Politics averages, in the latest polls heading into the election a single firm had the most accurate polls in Florida, Pennsylvania, Michigan, North Carolina, Ohio, Colorado, and Georgia—the up-and-coming Trafalgar Group, headed by Robert Cahaly. Trafalgar was also perhaps the only pollster to correctly call Michigan and Pennsylvania for Trump.

Unfortunately, there was as much art as science in what they did. For starters, they assumed Trump’s support was being undercounted. When Trafalgar was polling voters in the GOP primaries, they started seeing an interesting trend: Voters who responded to automated polls, i.e., “robocalls,” consistently registered support for Trump 4.5 points higher than when they were talking to a live pollster. This was not a statistical anomaly. “The more I checked, the more there was an undercurrent,” Cahaly tells The Weekly Standard.

Cahaly reasoned that since the media was demonizing and caricaturing Trump supporters, and Hillary Clinton was campaigning against them as a “basket of deplorables,” Trump supporters would be reluctant to admit their support to strangers. (The phenomenon of voters being unwilling to admit their true preference is well known in polling: When white voters don’t want to say they’re voting against a black candidate for fear of being judged, it’s called “the Bradley effect,” for L.A. mayor Tom Bradley, who lost California’s 1982 governor’s race despite consistently leading in the polls. A similar phenomenon in the U.K. is known as the “shy Tory” effect.)

To counter this perceived unwillingness to register support, Trafalgar started asking a new question. “When you ask them who their neighbor is voting for, they’re more comfortable,” he says. It appears to have worked pretty well this year.
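One way to picture the idea, strictly as an illustration and not Trafalgar’s actual formula, is to blend the direct answer with the “neighbor” answer, on the theory that the second question captures support respondents won’t admit to holding themselves.

```python
def blended_support(self_reported, neighbor_reported, neighbor_weight=0.5):
    """Blend a candidate's share on the direct question with his share on the
    'who is your neighbor voting for' question. The 50/50 weight is an
    assumption for illustration, not a published methodology."""
    return (1 - neighbor_weight) * self_reported + neighbor_weight * neighbor_reported

# Assumed, illustrative shares (in percent):
direct = 42.0    # respondents who say they back the candidate
neighbor = 47.0  # respondents who say their neighbors back the candidate
print(f"Blended estimate: {blended_support(direct, neighbor):.1f} percent")  # 44.5
```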

Trafalgar’s clever approach notwithstanding, the reality is that pollsters face a great many challenges. Response rates are now so low that they are undermining the whole practice: Pew Research Center reports that the response rate to polls is now in the single digits, compared with 36 percent in 1997. Because so few people will answer their calls, most pollsters can’t afford, or don’t have the time, to gather sufficient sample sizes anymore, so they simply reweight the few responses they do get according to their guess about what the demographic composition of voters in a given area or state will look like on Election Day. Many such surveys aren’t even technically polls; Silver calls them “polling-flavored statistical models.”
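To make the mechanics concrete, here is a minimal sketch of that kind of demographic reweighting; the categories, targets, and responses are assumptions for illustration, not any pollster’s actual data.

```python
# Illustrative reweighting of a small, unrepresentative sample to match a
# guess about the electorate's composition (all numbers are made up).
respondents = [
    {"group": "college",    "candidate": "A"},
    {"group": "college",    "candidate": "A"},
    {"group": "college",    "candidate": "B"},
    {"group": "no_college", "candidate": "B"},
]

# The pollster's guess at Election Day turnout: 40% college, 60% non-college.
electorate = {"college": 0.4, "no_college": 0.6}

# Weight each respondent by (target share) / (sample share) for his group.
sample_share = {g: sum(r["group"] == g for r in respondents) / len(respondents)
                for g in electorate}
weights = [electorate[r["group"]] / sample_share[r["group"]] for r in respondents]

support_a = sum(w for r, w in zip(respondents, weights) if r["candidate"] == "A")
print(f"Weighted support for candidate A: {support_a / sum(weights):.1%}")  # 26.7%
```

Note how much the turnout guess drives the answer: the same four responses show candidate A at 50 percent unweighted, but at under 27 percent once the sample is forced to match the assumed electorate.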

Even Silver, who’s done his part to add to the phenomenon, will tell you that the compulsive attention to political polls by journalists has reached a point where it’s often not healthy for democracy. “The media’s obsession with polls, especially national polls, in the early stages of the GOP campaign was pretty nutso,” he told me earlier this year. “If someone wants to take a poll in July 2015, well that’s fine, but there’s pretty much no circumstance under which they should be sending out ‘breaking news’ alerts about it.”

Here’s a prediction: No one will heed Silver’s advice to exercise restraint in polling. According to a Politico/Morning Consult poll conducted last month, Mike Pence is already the frontrunner for the 2020 GOP nomination.

Mark Hemingway is a senior writer at The Weekly Standard.
