Stop Asking for Margin of Error in Polling Research

Originally published on Huffington Post. Also published on LinkedIn, Quora, and anywhere else I have an account.

Just a few days ago, I moderated a webinar with four leading researchers and statisticians to discuss the use of margin of error with non-probability samples. To a lot of people, that sounds like a pretty boring topic. Really, who wants to listen to 45 minutes of people arguing about the appropriateness of a statistic?

Who, you ask? Well, more than 600 marketing researchers, social researchers, and pollsters registered for that webinar. That’s as many people as would attend a large conference about far more exciting things like using Oculus Rift and the Apple Watch for marketing research purposes. What this tells me is that there is a lot of quiet grumbling going on.

I didn’t realize how contentious the issue was until I started looking for panelists. My goal was to include four or five very senior statisticians with extensive experience using margin of error on either the academic or business side. As I approached great candidate after great candidate, a theme quickly emerged among those who weren’t already booked for that time slot – the issue was too contentious to discuss in such a public forum. Clearly, this was a topic that had to be brought out into the open.

The margin of error was designed to be used when generalizing results from probability samples to the population. The point of contention is that a large proportion of marketing research, and even polling research, is not conducted with probability samples. Probability samples are theoretical – it is generally impossible to create a sampling frame that includes every single member of a population, and it is impossible to force every randomly selected person to participate. Beyond that, the volume of non-sampling errors that are guaranteed to enter the process, from poorly designed questions to overly lengthy, complicated surveys to poorly trained interviewers, means that non-sampling errors could have an even greater negative impact than sampling errors do.

Any reasonably competent statistician can calculate the margin of error to numerous decimal places and attach it to any study. But that doesn’t make it right. That doesn’t make the study more valid. That doesn’t eliminate the potentially misleading effects of leading questions and skip logic errors. The margin of error, a single number, has erroneously come to embody the entire system and process behind the quality of a study – which it cannot do.
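To see just how easy that calculation is, here is a minimal sketch of the classical margin-of-error formula for a sample proportion – the very number that gets attached to polls. Note the assumption baked into it: it is only valid for a simple random (probability) sample, and it says nothing about question wording, skip logic, or any other non-sampling error. The function name and the example figures are illustrative, not from any real poll.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Classical margin of error for a sample proportion.

    Valid ONLY under simple random (probability) sampling:
    z * sqrt(p * (1 - p) / n), with z = 1.96 for ~95% confidence.
    It captures sampling error alone - no non-sampling errors.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 1,000 respondents, 47% back candidate A.
moe = margin_of_error(0.47, 1000)
print(round(moe * 100, 1))  # ~3.1 percentage points
```

Two lines of arithmetic – which is exactly the point. The number is trivial to produce and looks scientific regardless of whether the sample was a probability sample at all.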

In spite of these issues, the media continue to demand that the margin of error be reported – even when it’s inappropriate and even when it’s insufficient. So to the media, I make this simple request.

Stop insisting that polling and marketing research results include the margin of error.

Sometimes, the best measure of the quality of research is how transparent your vendor is when they describe their research methodology, and the strengths and weaknesses associated with it.



3 responses

  1. Non-probability sampling is a sampling technique where the samples are gathered in a process that does NOT give all the individuals in the population equal chances of being selected. It dilutes the statistical basis. Why ask for “margin of error”?

    If the design (including questions) is bad, the research is flawed and “doing the stats” is a waste of time.

  2. My issue is that margin of error is used by the public to get a sense of how big a difference must be to be “real”. If candidate A has the support of 47% of decided voters and candidate B has the support of 41%, is A ahead? Without MOE or some equivalent, how is the public to judge what this difference means?

    1. Yup, that’s the problem. When a really bad survey has really bad questions, they too can simply calculate MOE and make themselves look really scientific. The public knows nothing about just how bad the survey really was. MOE is not serving the purpose it was intended for.
