Live blogging from Nashville. Any errors or bad jokes are my own.
– the most boring thing you can do with mobile is take a survey on it [HA! very true]
– it makes boring surveys more convenient than ever before
– dramatic growth in people starting surveys on mobile
– not all survey modes produce the same experience. there are differential completion rates: a higher drop-out rate on mobile, particularly as surveys get longer, and a different demographic set on mobile, so quotas may fill up quickly.
– completion rates differ on mobile by country
– many people take surveys on multiple modes and this happens in every country. In the US, 60% of people ONLY take surveys on mobile [did i hear that right?]
– how do we treat quality in mixed mode studies. how should quality techniques be applied?
– why don’t we put quotas on mobile, should we?
– where 8% of people suspended a survey on a tablet or computer, 20% of mobile phone starts were suspended
– tablets end up looking a lot like computers
– think about your speeding metric. survey lengths differ greatly by mode. so if you’re including mobile phone times in your calculation, that raises the median time and raises the speeding cutoff so that you’re cutting out more computer people than you ought to. you might need to use device specific speeder rules. [tell me now, who does this! we ought to! Love this 🙂 ]
– once you remove speeders, you can use a generic rule for random responding and straightlining
– they have a dataset of people who have taken an omnibus on a computer and on a mobile. it’s a matched dataset. [and John wonders, is it omnibii? 🙂 ]
– mobile responders always take longer and it gets worse the longer the survey gets. it’s not as far off for a shorter survey.
– we know there is a true mode effect
– must test quality at the mode level, must adjust speeding at mode level
– they recommend a 5 to 10 minute survey, though people still do 45 minutes on a mobile.
– if you cut surveys into modules, people will take all the modules in a row.
[thanks for presenting data and tables John. i like that you don’t dumb things down. we need more of this because researchers KNOW NUMBERS even if people think it’s funny to say they don’t]
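The device-specific speeder rule mentioned above can be sketched in a few lines. This is a minimal illustration with invented data and an assumed cutoff fraction, not anyone's production rule: compute the median completion time per device mode and flag respondents who fall below a fraction of their own mode's median, instead of one pooled cutoff.

```python
# Sketch: device-specific speeder flagging. The data and the cutoff
# fraction below are assumptions for illustration only.
from statistics import median

def flag_speeders(records, fraction=0.33):
    """records: list of (respondent_id, mode, seconds). Returns flagged ids.

    A respondent is a speeder if their time is below `fraction` of the
    median time for THEIR device mode, so slower-overall mobile times
    no longer inflate the cutoff applied to computer respondents.
    """
    by_mode = {}
    for _, mode, secs in records:
        by_mode.setdefault(mode, []).append(secs)
    cutoffs = {mode: fraction * median(times) for mode, times in by_mode.items()}
    return [rid for rid, mode, secs in records if secs < cutoffs[mode]]

data = [
    ("r1", "computer", 300), ("r2", "computer", 320), ("r3", "computer", 80),
    ("r4", "mobile", 540), ("r5", "mobile", 560), ("r6", "mobile", 150),
]
print(flag_speeders(data))  # → ['r3', 'r6']: each is fast for their own mode
```

Note that r6 at 150 seconds would sail past a pooled cutoff built mostly from slow mobile times mixed with fast computer times; judging each mode against its own median is the whole point.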
I received my research organization’s magazine today. Inside were many lovely articles and beautiful charts and tables. I quickly noticed one particular article because of all the charts it had, but the charts are not what caused my fury.
The article was YET ANOTHER one on panel quality. Yes, random responding, straightlining, red herrings. The same topic we’ve been talking about for years and years and years.
Now, I love panel quality as much as the next person and it is an absolutely essential component for every research panel. We know what the features of low quality are and how to spot them and how to remove their effects. We even know the demographics of low quality responders (Ha! Really? We know the demographics of people who aren’t reading the question they’re answering?) But this isn’t the point.
Why do we measure panel quality? Because the surveys we write are so bad, we turn our valuable participants into zombies. They want to answer honestly but we forget to include all the options. They want to share their opinions but we throw wide and long grids at them. They want to help bring better products to market but we write questions about “purchase experience” and “marketing concepts.”
I don’t want to hear about panel quality anymore. It’s been done to death. Low panel quality is OUR fault.
Tell me instead how you’re improving survey quality. How have you convinced clients that shorter is better and simpler is more meaningful? What specific techniques have you used to improve surveys and still generate useful results? Tell me this and I’ll gaze at you with supreme admiration.
The Mahalanobis Distance
This is a lovely little statistic that folks should take advantage of a lot more. If you’re doing a lot of survey data quality work, you should know about the Mahalanobis Distance (MD). It can help you decide whether a responder is answering questions honestly or not simply by comparing their set of responses to other sets of responses. If someone responds to a series of questions in a way that doesn’t match how other people are responding, then that set of data gets picked out.
So, MD finds people who are straightlining because straightlining is usually not a normal way to respond to data. Of course, finding straightliners is easy to do anyways so who cares. BUT, and far more importantly, this statistic helps to pick out random responders, people who are just haphazardly clicking all over the place. You can find random responders if you read through an individual’s responses, but it takes a lot more personal attention. MD does it automatically, instantly, without all that extra time.
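For the curious, here is a small sketch of the idea in Python, using made-up rating data (real batteries would have many more items and respondents). Each respondent's distance from the centroid is computed with the covariance between items taken into account, so a respondent whose answers don't track everyone else's pattern stands out even if each individual answer looks plausible.

```python
# Sketch: using the Mahalanobis distance (MD) to surface unusual
# response patterns. All data here is invented for illustration.
import numpy as np

def mahalanobis_distances(X):
    """MD of each row of X from the centroid of all rows,
    accounting for the covariance between items."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.pinv(cov)  # pseudo-inverse in case cov is singular
    diffs = X - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs))

# Two correlated rating items on a 1-5 scale; most people who rate one
# item high rate the other high too. The last respondent breaks that.
X = np.array([
    [3, 3], [4, 4], [5, 5], [3, 4],
    [4, 5], [4, 3], [5, 4],
    [1, 5],   # answers don't track the others -- possible random responder
], dtype=float)

d = mahalanobis_distances(X)
print(d.argmax())  # → 7: the pattern-breaking respondent has the largest MD
```

In practice you would flag the respondents with the largest distances for review rather than deleting them automatically.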
If you’d like to have a read that goes way over most people’s heads, follow this link to “On the generalised distance in statistics” by P.C. Mahalanobis.
Oh, and I have the coolest story to go along with this. I MET THE GREAT NIECE OF MAHALANOBIS HIMSELF!!!! In fact, she and I were coworkers for a short while. I did my best to discover all the amazing details that one might want to know about her celebrity uncle, you know, things like his favourite colour or his favourite brand of ketchup, but she respected his privacy and refused to budge. I did manage to get an AUTOGRAPHED (by my friend Renee) copy of his publication though.
Ok, ok, maybe I shouldn’t get all excited about it, but I am. And if you think that’s just silly, well you’re just jealous.
This is easy to do as most market research surveys are already designed to accomplish it. If you’ve taken a survey recently, you’ve probably seen it. Here’s a great example:
Let me guess, you said ‘Extremely Important’ to every single item. And, you probably answered completely honestly. Know what? You just straightlined. Straightlining is a very bad thing in the world of survey design. Why? Because it’s hard to tell whether someone truly answered the questions carefully or they were simply clicking as fast as they possibly could without reading anything. Gimme that incentive baby!
But how do you get around this? The most important thing about designing high quality survey questions is ensuring the use of both positively and negatively keyed items. In other words, half of your questions need to be phrased in a way that makes the product sound good and the other half bad. Here’s how it works…
Instead of saying ‘Is all natural,’ it could have said ‘Has artificial ingredients.’
Or, instead of ‘Is a good source of nutrition,’ it could have said ‘Is a poor source of nutrition.’
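A toy sketch of why this works, assuming a 1-to-5 agreement scale and the item wordings above: reverse-score the negatively keyed items so that higher always means "better," and an attentive respondent's answers stay consistent while a straightliner's fall apart.

```python
# Sketch: reverse keying exposes straightliners. Scale and example
# responses are assumptions for illustration.
SCALE_MAX = 5  # 1-5 agreement scale

# (item wording, is_negatively_keyed)
items = [("Is all natural", False),
         ("Has artificial ingredients", True),
         ("Is a good source of nutrition", False),
         ("Is a poor source of nutrition", True)]

def rescore(responses):
    """Flip negatively keyed items so higher always means 'better'."""
    return [SCALE_MAX + 1 - r if neg else r
            for r, (_, neg) in zip(responses, items)]

honest_fan = [5, 1, 5, 1]      # loves the product AND read every item
straightliner = [5, 5, 5, 5]   # clicked the same column throughout

print(rescore(honest_fan))     # → [5, 5, 5, 5]: consistent after rescoring
print(rescore(straightliner))  # → [5, 1, 5, 1]: the inconsistency is exposed
```

After rescoring, a simple variance check across the items separates the two: the honest fan's rescored answers agree with each other, the straightliner's bounce between extremes.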
If it’s this easy, why don’t we do it? Why do I constantly get pre-written surveys that are chock full of questions that encourage straightlining? Maybe they’ve done it like this for a long time and they don’t want their norms tampered with. Maybe they feel like they’re encouraging people to think negatively about the product. Or, by offering negative options, maybe they’re encouraging people to rate their products negatively even if they wouldn’t have otherwise. But in the end, don’t you want QUALITY data? Trustworthy data? Actionable data?
So follow the rules. If there is no option but to write a grid question, at least write a good grid question.