Live blogged in Nashville. Any errors or bad jokes are my own.
Frances Barlas, Patricia Graham, and Thomas Subias
– we used to be constrained by an 800 by 600 screen. screen resolution has increased, can now have more detail, more height and width. but now mobile devices mean screen resolution matters again.
– more than 25% of surveys are being started with a mobile device, but fewer are being completed with a mobile device
– single response questions don’t serve a lot of needs on a survey but they are the easiest on a mobile device. and you have to take the time to consider each one uniquely. then you have to wait to advance to the next question
– hence we love the efficiency of grids. you can get data almost twice as fast with grids.
– myth – increasing a scale of 3 points to a scale of 11 points will increase your variance. not true. a range-adjusted value shows this is not true. you’re just seeing bigger numbers.
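That range-adjustment point can be illustrated with a quick sketch (mine, not from the talk): the same five hypothetical respondents answer on a 3-point and an 11-point scale. The raw variance balloons on the wider scale, but once scores are rescaled onto 0–1 the variances are identical.

```python
from statistics import pvariance

# Hypothetical answers from the same five people, expressed on a
# 3-point scale and on an 11-point scale (same underlying opinions).
three_pt  = [1, 2, 2, 3, 3]
eleven_pt = [1, 6, 6, 11, 11]

def range_adjusted(scores, lo, hi):
    """Rescale scores onto 0-1 so variances are comparable across scales."""
    return [(s - lo) / (hi - lo) for s in scores]

print(pvariance(three_pt))                          # 0.56
print(pvariance(eleven_pt))                         # 14.0 -- just bigger numbers
print(pvariance(range_adjusted(three_pt, 1, 3)))    # 0.14
print(pvariance(range_adjusted(eleven_pt, 1, 11)))  # 0.14 -- no real increase
```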
– myth – aggregate estimates are improved by having more items measure the same construct. it’s not the number of items, it’s the number of people. more items improves the estimate for a single person, not for the construct overall. think about whether you need to diagnose a person’s illness versus a gender’s purchase of a product [so glad to hear someone talking about this! it’s a huge misconception]
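A sketch of that items-versus-people distinction, using two textbook formulas (my framing, not the presenters’): the Spearman–Brown formula shows that adding items sharpens a single person’s score, while the standard error of a mean shows that adding people is what sharpens the group-level estimate.

```python
import math

def spearman_brown(r_one_item, k_items):
    """Reliability of a k-item scale, given the reliability of one item."""
    return k_items * r_one_item / (1 + (k_items - 1) * r_one_item)

def standard_error(sd, n_people):
    """Precision of a group mean depends on people, not item count."""
    return sd / math.sqrt(n_people)

print(spearman_brown(0.4, 1))    # 0.4  -> one item: noisy for a single person
print(spearman_brown(0.4, 10))   # ~0.87 -> ten items: fine for diagnosing a person
print(standard_error(1.0, 100))  # 0.1
print(standard_error(1.0, 1600)) # 0.025 -> more people, tighter aggregate estimate
```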
– grids cause speeding, straightlining, break-offs, lower response rates in subsequent surveys
– on a mobile device, you can’t see all the columns of a grid. and if you shrink it, you cant read or click on anything
– we need to simplify grids and make them more mobile friendly
– in a study, they randomly assigned people to use a device that they already owned [assuming people did as they were told, which we know they won’t]
– only half of completes came in on the assigned device. a percentage answered on all three devices.
– tested items in a grid, items one by one, and an odd one which is items in a list with one scale on the side
– traditional method was the quickest
– no differences on means
[more to this presentation but i had to break off. ask for the paper ]
The MR business relies almost 100% on the kindness and generosity of our fellow human beings. We hope that people will answer unending surveys on the most boring topics with far more attention than they pay to their favorite child. Basically, we expect people to be lab rats at our beck and call.
But do we show them the respect they deserve? The respect they earned? Take out the last survey you wrote and give it a good look. Was it more than 15 minutes long? Did it have more than ten items in a grid? Were there more than two grids? Did you use marketing language, not people language? Did you include outs on every question (DK, none)? I think I know the answer without even seeing the survey myself.
People must come first. Long surveys must go. Boring questions must go. Confusing question set-ups must go. The declining numbers of survey responders who put up with our bad behaviours now cannot sustain our industry for very long.
In the spirit of the season, consider better surveys your gift to the research community.
Oh no, this one’s a doozy for me!
There are a number of high quality marketing research companies out there. Here are just five of them:
Annie’s Quality Company Awards
Ipsos (My personal bias as I spent almost two years creating their iPi4 data quality system)
These companies care strongly about things like data quality, survey design, recruitment methods, incentives, engagement, ethics, and all those other things that make a great company great. Every recommendation they give to clients and potential clients is with these things in mind. They will often try to change details about a client’s research project or survey because the topic is overly sensitive (e.g., “Have you felt like killing yourself in the past 3 days?”) or the incentive is too high (you’re going to attract people who will lie just to get the money) or the survey is too long (people will get bored and not pay attention to their answers).
You will notice that I said TRY to change details of a client’s survey. What often happens, however, is that a few questions are revised, the incentive is slightly decreased, or a few questions are removed from the grid. But, generally, the original concern about the research remains, it’s just slightly less so. Why, you ask, are these reputable companies doing research that they don’t wholeheartedly agree with?
Well first, let’s acknowledge that sometimes, it’s simply not possible to make the change. Perhaps a survey has been done this specific way for years. If it was changed now, all the norms would completely change and it would be impossible to know whether any changes were the result of real shifts or the new survey. I’ll discredit this option as there are ways to get around it.
Other times, clients are unwilling to make the change. Perhaps the client has very carefully developed the survey to meet exactly their needs. They simply can’t remove any questions or they will lose valuable information. This is another option I will discredit. I will argue strongly that surveys developed to gather huge amounts of detail end up attracting a skewed sample of people, including those who are not truly paying attention to the questions.
Obviously, the problem does not simply belong to the client. The researcher is the expert. It is their task to explain the issues, the problems, to demonstrate why their suggestions will improve the research. If they do not succeed at this task, then of course surveys will continue to be too long, too boring, and too irrelevant. Researchers need to become better teachers. Teach clients and everyone wins.
So what happens when a research company takes a stand and says “We only do quality research. These suggestions must be implemented or we cannot support a survey that will not gather quality data.” Here’s what happens. Another company, one with very different views of how to do research takes the job. They take the 60 minute survey. They take the survey with 100 grid questions. They offer the $20 incentive. Which means the quality company loses business, the clients lose quality data, and research just isn’t the best that it can be.
I guess every industry deals with this so I should just shut my mouth. Nope.😀
Anyways, I’m very curious to hear your thoughts. How do YOU think we can improve things?
This is easy to do as most market research surveys are already designed to accomplish it. If you’ve taken a survey recently, you’ve probably seen it. Here’s a great example:
Let me guess, you said ‘Extremely Important’ to every single item. And, you probably answered completely honestly. Know what? You just straightlined. Straightlining is a very bad thing in the world of survey design. Why? Because it’s hard to tell whether someone truly answered the questions carefully or they were simply clicking as fast as they possibly could without reading anything. Gimme that incentive baby!
But how do you get around this? The most important thing about designing high quality survey questions is ensuring the use of both positively and negatively keyed items. In other words, half of your questions need to be phrased in a way that makes the product sound good and the other half bad. Here’s how it works…
Instead of saying ‘Is all natural,’ it could have said ‘Has artificial ingredients.’
Or, instead of ‘Is a good source of nutrition,’ it could have said ‘Is a poor source of nutrition.’
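Here’s a minimal sketch of how mixed-key scoring typically works; the item names, keys, and 1–5 agreement scale are hypothetical, not from any real survey. Negatively keyed items are reverse-scored before averaging, so a thoughtful respondent still earns a high overall score, while a straightliner who clicks the same column on every row lands in the middle.

```python
# Hypothetical grid responses on a 1-5 agreement scale.
SCALE_MAX = 5

responses = {
    "is_all_natural": 5,               # positively keyed
    "has_artificial_ingredients": 1,   # negatively keyed
    "good_nutrition_source": 4,        # positively keyed
    "poor_nutrition_source": 2,        # negatively keyed
}
negatively_keyed = {"has_artificial_ingredients", "poor_nutrition_source"}

def reverse_score(score, scale_max=SCALE_MAX):
    """Flip a negatively keyed item so high always means favourable."""
    return scale_max + 1 - score

scored = {
    item: reverse_score(score) if item in negatively_keyed else score
    for item, score in responses.items()
}
overall = sum(scored.values()) / len(scored)
print(overall)  # 4.5 for this thoughtful respondent
```

With this set-up, a respondent who straightlines ‘5’ on every row ends up with an overall score of only 3.0 instead of a perfect 5.0, which makes the pattern easy to flag.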
If it’s this easy, why don’t we do it? Why do I constantly get pre-written surveys that are chock full of questions that encourage straightlining? Maybe they’ve done it like this for a long time and they don’t want their norms tampered with. Maybe they feel like they’re encouraging people to think negatively about the product. Or, by offering negative options, maybe they’re encouraging people to rate their products negatively even if they wouldn’t have otherwise. But in the end, don’t you want QUALITY data? Trustworthy data? Actionable data?
So follow the rules. If there is no option but to write a grid question, at least write a good grid question.