Morning Jay: Polling Madness, 2012 Edition

Here’s a thought experiment. Let’s say you want to do a quality poll of 1,000 likely voters. How many people would you have to contact?

For starters, the response rate for pollsters is somewhere around 9 percent. So for every 100 people you contact, you will get 9 people to do the poll.

On top of that, turnout among the voting age population in 2008 was roughly 57 percent. So, of those 9 people who answer the phone, only 5 or 6 of them will be likely voters.

In other words, for every 100 people you contact, only about 5 or 6 will be likely voters. That means you have to contact roughly 19,500 people to find 1,000 likely voters.
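That arithmetic can be sketched in a few lines. This is just a back-of-the-envelope check using the figures above (a 9 percent response rate and 57 percent turnout), not a model of any pollster's actual methodology:

```python
# Back-of-the-envelope: contacts needed to reach 1,000 likely voters,
# using the article's figures.
RESPONSE_RATE = 0.09   # share of contacted people who complete the poll
TURNOUT_RATE = 0.57    # share of respondents who are likely voters
TARGET_VOTERS = 1_000  # desired likely-voter sample size

# Each contact yields about 0.09 * 0.57 = 0.051 likely voters.
yield_per_contact = RESPONSE_RATE * TURNOUT_RATE
contacts_needed = TARGET_VOTERS / yield_per_contact

print(round(contacts_needed))  # about 19,493 -- roughly 19,500
```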

This is very expensive to do, obviously. And make no mistake about it: the media does not have that kind of money anymore. I know this because they have basically gutted the exit polls this cycle:

At least some exit polling will be done in all 50 states, and the consortium is paying for a full slate of questions in 31 states. But 19 states will only get “bare bones” polling “to help predict the outcome of races.” According to the Associated Press not enough polling will be done in these 19 states “to draw narrative conclusions about the vote – what issues mattered most to women voting for Mitt Romney, for instance, or how many Catholics voted for Barack Obama.”

“What we are doing is taking our resources and using them where the stories are,” Sheldon Gawiser, NBC’s elections director and head of the steering committee for the consortium, told the Associated Press.

The exit polls are going to be slimmer in 2012 than they were in 2010. And they were already too slim then. Actually, going back in time, it looks like the exit polls for 2012 are set to be leaner than any batch since the 1988 presidential election!

The reason is simple: the media is broke. Have you noticed all the budget cuts lately, the declining number of news bureaus, the scaling back of hard, investigative journalism, the collapsing profits, the shuttered local papers, the emphasis on cheap but entertaining evening talk shows? They do not have money to go spending on needless things…like quality polls!

So what do they do? They cut corners and hope you won’t notice. Consider all the tools in the toolbox:

First, they use robo-pollsters. Generally, this is not a huge problem, but robo-pollsters do have trouble because they cannot reach cell phones. Pollsters weight their samples to compensate, but it is still an issue.

Second, they spread their interviews out across a couple of days. That way, they do not have to hire more people to get the calls done. This is what CBS News and the New York Times are doing with Quinnipiac, whose calls stretch over five days. That does not seem problematic when you first think about it, but things can move pretty fast toward the end of the cycle, and so trends are often missed.

Third, they go with smaller samples. If you are content with a 600-person likely voter poll, then you have to call many fewer people. Sure, the margin of error jumps through the roof, and the poll basically becomes useless for examining subgroups within the electorate. But it is cheaper. Again, that’s what CBS News/New York Times is doing. Their national survey released this week actually had fewer than 600 likely voters. It was spread out over 4 days, so they completed only about 150 interviews a day for that poll!
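To put a number on "jumps through the roof": the standard worst-case sampling margin of error at 95 percent confidence scales with the square root of the sample size. A quick sketch of that formula (this is textbook simple random sampling, not any particular pollster's reported figure):

```python
import math

def margin_of_error(n, z=1.96):
    """Worst-case (p = 0.5) sampling margin of error at 95% confidence,
    in percentage points, for a simple random sample of size n."""
    return z * math.sqrt(0.25 / n) * 100

print(f"n=1000: +/-{margin_of_error(1000):.1f} pts")  # about 3.1
print(f"n=600:  +/-{margin_of_error(600):.1f} pts")   # about 4.0
```

And subgroups are worse: a 600-person poll might have only 150 or so respondents in a given demographic slice, pushing the margin for that slice past 8 points.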

Fourth, and most important, they loosen their likely voter screens. It is often hard to gauge just how loose the screens are because pollsters often fail to tell you how many adults they surveyed, but an illustration is Survey USA. I do not mean to single that pollster out per se. It is likely no better or worse than the rest; it just provides the raw numbers to take a closer look. Survey USA did a poll of Ohio this week of 685 adults, of which 603 were determined to be likely voters. This suggests a turnout of 88 percent of the voting age population in Ohio. This is a perfectly reasonable estimate of turnout…for the Election of 1896! But in 2008 turnout in Ohio was actually 65.1 percent, meaning that roughly a quarter of its “likely voter” sample will probably not vote.
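The implied-turnout check is simple enough to verify yourself, using only the numbers cited above:

```python
# Implied-turnout check on the Survey USA Ohio numbers cited above.
adults_sampled = 685
likely_voters = 603
actual_2008_turnout = 0.651  # Ohio voting-age turnout in 2008, per the article

# Share of adults the poll treats as likely voters.
implied_turnout = likely_voters / adults_sampled  # about 0.88

# If real turnout matches 2008, this share of the LV sample won't vote.
wont_vote_share = 1 - actual_2008_turnout / implied_turnout

print(f"implied turnout: {implied_turnout:.0%}")        # 88%
print(f"LV sample unlikely to vote: {wont_vote_share:.0%}")  # about 26%
```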

This latter problem matters because, as everybody should know by now, the actual electorate tends to be more Republican than the public at large. Thus, the looser your likely voter screen, the greater your risk of oversampling Democrats. This was not a problem in 2008 because Democratic enthusiasm was so overwhelming that the likely voter polls showed the same thing, regardless of the screen’s tightness. But this time around, Democratic enthusiasm is down and Republican enthusiasm is up. Everybody knows that – so why are the pollsters getting results that suggest a replay of 2008?

To be sure, there is at least one pollster out there with a large sample and a good likely voter screen. It also has a proven track record during election season. It is the biggest pollster, flush with enough cash to conduct polling without cutting corners.

That pollster is Gallup, and it finds Mitt Romney leading Barack Obama by five points.

Here’s another example of what I am talking about. Last week Princeton Survey Research Associates International (PSRAI) did a poll for National Journal as well as a poll for Pew. Same pollster, basically the same survey dates. The National Journal poll had 713 likely voters and (as best I can tell) a loose likely voter screen. The Pew poll had 1,495 likely voters and a tight screen (as has long been the case with Pew LV polls).

Did this same pollster (PSRAI) find the same results over the same time period between these two surveys? Not at all! Pew published a tie nationwide with Democratic and Republican turnout equal. National Journal published a five-point Obama lead with a Democratic turnout advantage of eight points.

Incidentally, the political class really should know better than to take these polls at face value. Time and again over the last twenty years, we in this business have been collectively burned by the polls. The polls were off in 1996, they were off in 2000, they were way off in Florida in 2004, they were way off in the 2008 New Hampshire primary, and so on.

But for some reason, people who should be skeptical of the polls are treating them with more credulity than ever. Part of the problem is sites like FiveThirtyEight and Pollster, both of which plug these highly problematic data points into Rube Goldberg devices that spit out a prediction that looks way more precise than it actually is.

For my part, I have been looking closely at polls and polling for the last eight years. And the closer I look, the less I trust the polls. In 2012 in particular, I see a lot of corners being cut, and very few people applying the appropriate level of skepticism to the data the media keeps spitting out at us.

Jay Cost is a staff writer for THE WEEKLY STANDARD and the author of Spoiled Rotten: How the Politics of Patronage Corrupted the Once Noble Democratic Party and Now Threatens the American Republic, available now wherever books are sold.
