Not a day goes by without some poll being conducted and some survey being reported in the news somewhere in the United States. Sometimes the polls are well done, sometimes they are badly flawed, but the real problems come not so much from the polling as from the reporting done on those polls.
Few people actually sit down and examine or analyze polling results, politicians least of all, since they tend to have little time for that kind of thing. So when you read a news report about a poll, you get the reporter's version of the survey, which might leave off key information, ignore the sampling breakdowns, or even misrepresent the results.
That means that some in the press can still use polls to manipulate public opinion and policy even if the polling is actually well done.
One of the tricks a pollster can use is to word questions so that the options are broken down in different ways: offering, say, a single tax-increase option against several separate spending-cut options, which splits the spending-cut vote. Matthew Knee explores this in a series on polling at Legal Insurrection.
A reporter can then report that most people prefer raising taxes over spending cuts, and be accurate in one sense: of the various choices, that one got the most support. However, if all the spending choices are combined, they draw far more support than a tax increase.
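To see how the arithmetic works, here is a minimal sketch with hypothetical numbers (not taken from any actual poll) showing how one option can "win" the plurality while the split-up alternatives together command a clear majority:

```python
# Hypothetical poll results: one tax-increase option versus several
# separate spending-cut options. Numbers are illustrative only.
responses = {
    "Raise taxes": 0.35,
    "Cut defense spending": 0.25,
    "Cut entitlement spending": 0.22,
    "Cut discretionary spending": 0.18,
}

# The headline: "raising taxes" gets more support than any single option.
winner = max(responses, key=responses.get)
print(f"Plurality winner: {winner} ({responses[winner]:.0%})")

# But combining all the spending-cut options tells the opposite story.
spending_total = sum(v for k, v in responses.items() if k.startswith("Cut"))
print(f"Any spending cut: {spending_total:.0%} vs raise taxes: {responses['Raise taxes']:.0%}")
```

With these made-up numbers the tax increase tops every single choice at 35%, while the combined spending cuts reach 65%; both headlines are "accurate," but they leave very different impressions.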
Another way to manipulate results is through how questions are worded. Several years back, President Bush proposed partially privatizing Social Security, and the left went berserk. The New York Times polled New Yorkers and found that when asked if they supported the “Bush plan” for Social Security, just over 30% said they did.
However, elsewhere in the poll, people were asked if they supported a plan that would partially privatize Social Security, and almost two thirds of respondents agreed with it. That question described exactly what the Bush plan proposed but did not label it the “Bush plan.” The New York Times reported that people didn’t want the Bush plan and buried the other results.
Another flaw that can be exploited in reporting on polls is the margin of error. Statisticians can estimate how far a poll’s results are likely to stray from the truth based on the sample size (the number of people actually surveyed): the larger the sample, the smaller the margin of error.
The problem is that most polls contain many different sorts of people, and the reporting often breaks out how those subgroups respond. A poll of “likely voters” will include independents, Republicans, and Democrats, and within those subgroups are still smaller divisions such as women, Hispanics, and so on.
For example, if you take a poll of around 800 people – typical for many polls – and 49% of that poll is male, then your sample of men is around 392. Already the subgroup results are shakier. Say that 20% of those men are Asian; now the sample is about 78 Asian men.
At that point the margin of error has grown so large that the subgroup result is nearly useless: any conclusion about what Asian men think based on this poll is meaningless, because 78 respondents are far too few to represent the whole group (a rough calculation below shows how quickly the error grows). Yet reporting, and sometimes policy, is still based on these splinters of polling groups.
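Here is a minimal sketch of that arithmetic, assuming simple random sampling and the standard worst-case formula at a roughly 95% confidence level; the sample sizes are the hypothetical ones from the paragraph above:

```python
import math

def margin_of_error(n, z=1.96):
    """Worst-case margin of error at ~95% confidence for a simple random sample of size n."""
    return z * math.sqrt(0.25 / n)

# The hypothetical breakdown from the text: 800 respondents, 49% men, 20% of the men Asian.
for label, n in [("all respondents", 800), ("men", 392), ("Asian men", 78)]:
    print(f"{label:>16}: n = {n:3d}, margin of error ≈ ±{margin_of_error(n):.1%}")
```

Run as written, this prints a margin of roughly ±3.5 points for the full sample, ±5 points for the men, and about ±11 points for the Asian men, which is why conclusions drawn from such small slices are so shaky.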
Wanting to know the future is nothing new; polling is simply modern divination, our version of reading tea leaves. A well-done poll tends to be significantly more accurate, but it is still essentially an educated guess. When that is compounded by reporting designed to push a particular viewpoint, polling becomes genuinely problematic.

