Tag Archives: survey

The myth of the Total Survey Error approach

Total Survey Error is a relatively recent approach to understanding the errors that occur during the survey, or research, process. It incorporates sampling errors, non-sampling errors, and measurement errors, including such issues as specification error, coverage error, non-response error, instrument error, respondent error, and pretty much every other error that could possibly exist. It’s an approach focused on ensuring that the research we conduct is as valid and reliable as it can possibly be. That is a good thing.

Here’s the problem. Total Survey Error is simply a list. A list of research errors. A long list, yes, but a list of every error that every researcher has been trained to recognize and account for in every research project they conduct.

We have been trained to recognize a bad sample, improve a weak survey, apply statistics properly, generalize appropriately, and not promise more than we can deliver. Isn’t ‘total survey error’ just a new name for conducting research? It is not a new, unique approach. It does not require new study or new books.

Perhaps I’m missing something, but isn’t total survey error how highly skilled, top notch researchers have been trained to do their job?

Manufacturing False Precision in Surveys #MRX

How do you create a survey question measuring frequency of behaviour that will generate the most accurate responses? Experience tells us to consider things like:

  • Should I include a zero or incorporate that into the smallest value?
  • Should I use whole numbers like ‘2 to 4’ or partial numbers like ‘2 to less than 4’?
  • Should I use 4 break points or go all out with 10 break points?

These are all smart considerations and will help you collect more precise data. But seriously? How accurate, how precise, how valid are these data anyways? Do you really think that survey responders are going to carefully and precisely calculate the exact number of days or minutes they do something?

When we ask responders to choose between these two options…

  • 2 to 4.99
  • 2 to less than 5

… do you really think that one or the other option will help responders think of estimates that are any more accurate? Let’s face it. They won’t. Yes, there’s statistical precision in the answer options we’ve provided, but we are manufacturing a level of accuracy that does not exist. It’s no different than reporting 10 decimal places where 1 is more than sufficient.
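To make the point concrete, here is a toy sketch in Python. The cutoffs come from the example options above; nothing here is a real survey spec. Both wordings describe the very same half-open interval, so a responder’s rough guess lands in the same bucket either way.

```python
# A toy sketch: '2 to 4.99' and '2 to less than 5' both describe the
# half-open interval [2, 5). Cutoffs are from the example above only.
def in_option(estimate: float) -> bool:
    # Either wording means: 2 <= estimate < 5
    return 2 <= estimate < 5

# A responder's rough guess maps to the same bucket under both wordings.
for rough_guess in (2.0, 3.5, 4.99, 5.0):
    print(rough_guess, in_option(rough_guess))
```

The extra decimal places in the label add statistical precision that the responder’s estimate never had in the first place.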

So what do I recommend? Make things simple for your responder. Use real language, not hoity-toity language born of decades of academic research. Use language that makes responders want to come back and take another survey.

Proud to be a member of survey research #MRX

Guest Post by Prof. Dr. Peter Ph. Mohler 

Having listened to countless papers and read innumerable texts on non-response, non-response bias, survey error, even total survey error, or the global cooling of the survey climate, it seems timely to consider why, after so many decades working in what those papers describe as a declining field called “survey research,” I still do not intend to quit that field.

The truth is, I am mighty proud to be a member of survey research because:

  • We can be proud of our respondents who, after all these years, still give us an hour or so of their precious time to answer our questions to the best of their abilities.
  • We can be proud of our interviewers who, despite low esteem/status and payment, and under often quite difficult circumstances, get in contact with our respondents, convince them to give us some of their time and finally do an interview to the best of their abilities.
  • We can be proud of our survey operations crews who, despite low esteem/status and increasing cost/time pressures, organize data collection, motivate interviewers, and edit/finalize data for analysis.
  • We can be proud of our social science data archives, which for more than five decades have preserved and published surveys nationally and internationally as a free, high quality service unknown in other strands of science.
  • We can be proud of our survey designers, statisticians and PIs, who have constantly improved survey quality from its early beginnings.

Of course there are drawbacks, such as clients insisting on asking dried and dusted questions, or PIs, often academic ones, who do not appreciate the efforts and successes of respondents, interviewers, survey operations and all the rest. And there are some who deliberately fabricate surveys or survey analyses (including members of all the groups mentioned before).

But it is no good to define a profession by its outliers or not so optimal outcomes.

Thus it seems timely to turn our attention from searching for errors to optimizing survey process quality and at long last defining benchmarks for good surveys that are fit for their intended purpose.

The concepts and tools are already there, waiting to be used to our benefit.

 As originally posted to the AAPOR distribution list. 

Peter is owner and Chief Consultant of Comparative Survey Services (COMPASS) and honorary professor at Mannheim University. He is the former Director of ZUMA, Mannheim (1987-2008). Among other roles, he directed the German General Social Survey (ALLBUS) and the German part of the International Social Survey Programme (ISSP) for more than 20 years. He was a founding senior member of the European Social Survey (ESS) Central Scientific Team, 2001-2008. He is co-editor of Cross-Cultural Survey Methods (John Wiley, 2003) and Survey Methods in Multinational, Multiregional and Multicultural Contexts (John Wiley, 2010, AAPOR Book Award 2013). Together with colleagues of the ESS Central Coordinating Team, he received the European Descartes Prize in 2005.

 

The Game of Life


I love shopping for toilet paper. It is one of the most exhilarating parts of my life. It’s extremely rewarding to choose between the 12 pack and the 24 pack knowing that my family won’t have to worry about running out mid-wipe for at least a week or two. I love taking the extra time to choose between extra soft and supreme soft knowing that it brings with it the responsibility of selecting the most appropriate texture for my loved ones’ bottoms. And making the important decision between 2 ply and 3 ply means that I have the joy of taking charge of cleaning up those nasty messes that no one wants to talk about.

I truly enjoy and look forward to each of these decisions because it is fun to consider the myriad options and know that I have succeeded in bringing joy to my family.

Unfortunately, even though the process of shopping for toilet paper is extremely fun, the process of answering surveys about toilet paper shopping isn’t. I just wish there was some way to make answering toilet paper surveys more fun, like a game, like it is in real life.

Because I know if the survey taking experience emulated my real life experience, my survey answers would be more valid. Don’t you agree?

Laugh at yourself and then cry at our flailing industry

Well, once you manage to catch your breath after laughing solidly for 4 minutes, let’s really think about all the people involved in this little prank.

1: Interviewer: First of all, this interviewer deserves a raise, a bonus, and a promotion for going through this interview without laughing, getting upset, or antagonizing the survey responder. I’m sure he deals with this sort of thing, whether real or fake, all day long every day. And yet, the utmost professionalism on his part. Kudos for a great job.

2: Responder: How did our industry get to such a state where surveys are written so poorly that people leave a tape recorder at their telephone waiting for researchers to call in order to make fun of them? This is nothing for us to be proud of.

3: Data Analyst: How exactly is the data analyst going to handle data which is clearly of horrible quality? Will the analyst think of checking for outliers in each question? Will the analyst review the entire set of responses and recognize that it is an across-the-board outlier and probably a troublemaker? Will these responses lead to completely invalid analysis and conclusions? (A minimal sketch of such a check appears after this list.)

4: Survey Author: Of course, we understand the need to use standardized questions in surveys. But, no matter how convinced you are, the world does not consist of people who know how surveys work. There are absolutely people out there who need to be taken through a survey with far more care than what we permit when writing surveys. Telephone surveys need to be written so that interviewers can speak naturally and help those people who actually need some help. That’s where good data comes from. I’m really curious if the survey author left a place for the interviewer to indicate that this instance was possibly an outlier.
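For the curious, here is a minimal sketch of the kind of across-the-board check an analyst might run. It assumes responses live in a pandas DataFrame with one row per responder and one numeric column per rating question; the function name and thresholds are hypothetical illustrations, not industry standards.

```python
# A minimal, hypothetical sketch: one row per responder, one numeric
# column per rating question. Thresholds are illustrative guesses only.
import pandas as pd

def flag_suspect_responders(df: pd.DataFrame,
                            z_cutoff: float = 3.0,
                            straightline_cutoff: float = 0.9) -> pd.DataFrame:
    """Flag responders whose whole answer pattern looks like trouble."""
    # Per-question z-scores: how far each answer sits from the question mean.
    z = (df - df.mean()) / df.std(ddof=0)
    # Share of a responder's answers that are extreme outliers.
    outlier_share = (z.abs() > z_cutoff).mean(axis=1)
    # Share of a responder's answers that are one repeated value (straightlining).
    modal_share = df.apply(
        lambda row: row.value_counts(normalize=True).iloc[0], axis=1
    )
    return pd.DataFrame({
        "outlier_share": outlier_share,
        "straightline_share": modal_share,
        "suspect": (outlier_share > 0.5) | (modal_share >= straightline_cutoff),
    })
```

A check like this flags the prankster’s row in one pass instead of question by question, which is exactly the across-the-board review item 3 asks about.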

So, enjoy. But the next time you write a survey, keep this in mind. Are you antagonizing yet another survey responder or are you responsible for creating a more positive market research experience?

Social media research is the new one size fits all


Inspired by a tweet and after months of teaching people about what SM research really is, I’ve decided to take a stab at proving that SM research is all you really need.

I will admit to my past, present and future love affair with survey research. That will never go away. I’ve done tons of research-on-research about surveys and know far too much about their intricate disadvantages. Poor question design, lack of probability sampling, biased samples, overly long questions, you name it, I’ve researched it.

SM research on the other hand is all good. There are no survey design errors. Far more people contribute to SM data than to survey data. There are no concerns about incentives biasing the sample. You get data you would never see in a survey, unbelievable data!

Have I irked you yet? I’m sure I have. Because there are pros and cons of survey research just as there are pros and cons of SM research. Surveys are great for their purposes, focus groups for theirs, MROCs for theirs, and SM research for theirs.

The MR world seems to engage in a perpetual competition over which method is better. I think, however, that many of us really believe the different methods complement each other. Surveys and focus groups, now with the addition of SMR methods, simply become a collaboration of three methods instead of two (or of 5, 6, 7 or more).

Just like the current debates of whether to start qual and then go quant, or to start quant and then go qual, we now have a new dimension to slot somewhere in the cycle. A dimension of people who aren’t intimidated by publicly sharing their opinion with the world.

Combining forces gives us different knowledge, new knowledge, more in-depth knowledge. It gives us broader views and leads to new insights. Think about people who answer surveys but never chat online. Or, people who chat online but never participate in focus groups. If you only focus on one type of research, you miss out on all the other voices. We could draw a great Venn diagram of what voices we ignore when we focus too tightly.

Even better, if you think about all the methods researchers have available to them, more than ever we now have the ability to listen to people using the voice they want to speak in. And that can only mean better data.

So what is the one size fits all research method? Collaboration, my dear friends.


Mugging, Sugging and now Rugging: I take a hard stance on privacy


At the Esomar conference in Chicago last year, a speaker commented on how one could send out surveys and then follow up with targeted sales calls. Someone politely corrected him a few minutes later because “sugging,” selling under the guise of research, is not an ethical practice. Clearly, the MR rules are not well known outside our immediate industry.

The exploding popularity of social media has opened the door for many new companies in the field of SM monitoring. Their bread and butter is finding out who is saying what about your brand so that you may counteract any negative happenings. Since every online post is linked to a person, often even the real name of a person, it is very easy for those companies to directly communicate with those people.

In market research, privacy and anonymity are our Prime Directives. We do not reveal names. We do not interfere with people. We do not try to change their opinions. We DO listen, learn, and try to solve business problems on a group level.

Are we about to encounter a new type of “ugging” then? “Rugging” refers to replying under the guise of research. This means that in the course of carrying out social media research, someone takes the step of replying to someone whose data just happens to appear in the research data set. The person didn’t ask to participate and they didn’t respond to a question.

For me, this is in direct violation of the Prime Directive. Sure, the internet is open. Sure, the links and names are readily available to everyone. But that doesn’t make it right. People need to be able to express their honest opinions without worrying that some big company is going to try to change their opinions. People need to retain their right to choose when they want to interact with a company. People need to maintain ownership of who they communicate with.

The internet is not an equal playing field. There are billion dollar companies with thousands of IT professionals, lawyers, and SMEs. Then there are little old ladies who just typed out their first YouTube comment. Do not even try to convince me that “she ought to know” or “well that’s just too bad.”

Not everyone knows the rules of the game. That is not their fault. In the research world, we have taken an oath of sorts to protect the people who share their opinions. Whether they provide survey data or SM data, we owe it to them.

I hope you feel that way too.


Are Professional Responders the Real Enemy?


Heavy responders are people who answer lots and lots of surveys, but there is no official definition of ‘lots’ of surveys. Some people think 1 survey per week is too much, while others feel that 1 or 2 or 3 per day is too much.

A couple of years ago, heavy responders were all the rage. I was one of many people working on projects determined to discover whether there was an issue with heavy responders, and if so, how severe the issue was. Here is the report that came out of that work.

TNS, Ipsos And The NPD Group Conduct Study To Address Market Research Industry Concern

The end conclusion was that well-run panels don’t have sufficient numbers of heavy responders for those responders to have any meaningful effect on the results. Well-run panels have rules in place to monitor survey frequencies, incentive policies that don’t encourage heavy responding, and recruitment policies that focus on selecting quality responders. Of course, panels with less strict rules could very well have serious issues with heavy responders.

But here’s my point. Do we really care about heavy responders? What if someone does answer a survey every single day? Today it’s chocolate, tomorrow it’s electronics, the next day it’s canned vegetables, then financial products, then automotive products and so on. Let’s even say the cycle continues and they always answer the same seven categories, once per week, every week. Is that such a bad thing?

These people might skew awareness questions simply because they’ve had more opportunities to see up and coming brand names in other surveys. They might even be more aware of the issues surrounding a product, e.g., various flavours or colours or options. But are they necessarily providing poor quality data? Are they providing false or misleading data? Are they deliberately skewing the data through random responding and straightlining?

I would suggest that data quality, and not heavy responding, is the real problem to be concerned about. Ask your survey company about their data quality program. I would.
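If you’re wondering what “rules to monitor survey frequencies” might look like in practice, here is a minimal sketch. It assumes a hypothetical completion log with one row per finished survey and columns ‘panelist_id’ and ‘completed_at’ (a datetime); the cap of 3 per week is an arbitrary illustration, not an industry standard.

```python
# A minimal sketch over a hypothetical completion log: one row per
# finished survey, columns 'panelist_id' and 'completed_at' (datetime).
# The weekly cap is an arbitrary illustration, not a standard.
import pandas as pd

def heavy_responders(log: pd.DataFrame, max_per_week: float = 3) -> pd.Series:
    """Return panelists averaging more than max_per_week completes per week."""
    log = log.assign(week=log["completed_at"].dt.to_period("W"))
    completes_per_week = log.groupby(["panelist_id", "week"]).size()
    average = completes_per_week.groupby("panelist_id").mean()
    return average[average > max_per_week]
```

Whatever the cap, the point stands: frequency monitoring catches heavy responding, but only a separate data quality program catches random responding and straightlining.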
