How to Transition a Survey


[Image via Wikipedia: illustrating a 90% confidence interval on a st...]

From offline to online
From 60 minutes to 30 minutes
From $20 incentive to $5 incentive

It truly is possible to transition tracking surveys across variables, whether that’s method, length, or some other feature. There are just a few things to keep in mind.

1) You will NEVER EVER EVER achieve perfect results. Look at the data you’re getting now. Even when you change absolutely nothing, no new advertising strategies, no new flavours, colours, or types, no new anything, your data jumps from week to week and month to month. This is simply random variation. You will continue to see random variation after you make the switch. That is a basic fact. Do not be intimidated by it. Expect it.
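Here’s a minimal sketch of that noise (plain Python, with made-up numbers): six waves drawn from the very same population, and the tracked score still bounces around.

```python
# Simulate a tracker where the true satisfaction level NEVER changes.
# The level and sample size are purely illustrative.
import random

random.seed(1)
true_satisfaction = 0.75   # the "real" score, fixed forever
for wave in range(1, 7):
    n = 300                # completes per wave
    hits = sum(random.random() < true_satisfaction for _ in range(n))
    print(f"Wave {wave}: {hits / n:.0%}")
# Prints scores hopping around 75%: pure sampling noise, nothing more.
```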

2) Any statistics you run on your data are probably based on that good old standby, the 95% confidence interval. One way to think about this is that of all the numbers in your dataset, 5% of them are just plain wrong. Sampling got in the way, survey design got in the way, cute little toddly boys were tugging at pant legs while the survey was being taken. Or think of it another way: of the 100 statistical tests you ran with your last study, 5 of them lied to you. They made you think there was truly a difference when there was in fact no real difference at all. These false differences will continue to appear after the transition. You just don’t know where that 5% of wrong results is.
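To see those lying tests in action, here’s a quick simulation, assuming numpy and scipy are available (the scores and sample sizes are invented): compare two waves drawn from an identical population 100 times, and roughly 5 comparisons will cry “difference”.

```python
# 100 significance tests where NO real difference exists.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_tests, false_positives = 100, 0

for _ in range(n_tests):
    # Two waves from an identical population: nothing actually changed.
    wave_a = rng.normal(loc=7.0, scale=2.0, size=300)
    wave_b = rng.normal(loc=7.0, scale=2.0, size=300)
    _, p = stats.ttest_ind(wave_a, wave_b)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} tests flagged a 'significant' difference")
# Expect about 5, and you can't tell which 5 they are.
```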

3) Be realistic. Recognize that there absolutely will be differences. You KNOW that different methods cause different results so don’t be surprised when they show up in your dataset. And don’t expect the differences to be tiny. Expect them to be statistically significant. If you’re working with box scores out of 100, it is reasonable to see differences of 5 points between your old and new datasets, and expect a number of differences of 10 or even 15 points throughout your dataset. Refer back to #2.
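To put a number on “statistically significant”, here’s a rough sketch of a standard two-proportion z-test on a 5-point gap (the sample sizes and scores are invented for illustration):

```python
# How big does a mode difference need to be before a z-test flags it?
import math

def two_prop_z(p1, n1, p2, n2):
    """z statistic for the difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 75% top-box on paper vs 70% online.
print(f"n=400 per wave: z = {two_prop_z(0.75, 400, 0.70, 400):.2f}")  # ~1.58
print(f"n=800 per wave: z = {two_prop_z(0.75, 800, 0.70, 800):.2f}")  # ~2.24
# The very same 5-point gap sails past 1.96 once the samples get bigger.
```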
4) Transitioning takes time. Aim for 3 time periods. If you run your survey monthly, then try for a transition period of at least 3 months. During this time, the survey will be run both ways. For example, run the exact same survey on paper and the exact same survey on the internet (obviously adjusted to meet online survey standards).
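One hypothetical way to organize the overlap data and watch the gap between modes, sketched with pandas (the column names and values are invented):

```python
# Three waves where the SAME questionnaire ran in both modes.
import pandas as pd

parallel = pd.DataFrame({
    "wave":   ["Jan", "Feb", "Mar"],
    "paper":  [0.79, 0.81, 0.80],   # top-box satisfaction, paper survey
    "online": [0.70, 0.71, 0.69],   # same questionnaire, online
})
parallel["gap"] = parallel["paper"] - parallel["online"]
print(parallel)
# A gap that holds steady across all three waves points to a mode
# effect, not a real change in satisfaction.
```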

5) Then the fun part. Learn to re-establish your baseline. Use the two sets of data to see how you need to re-think your product. If satisfaction used to trend around 80% and now it’s at 70%, consider 70% the new normal. It’s not 10 points worse, it’s 10 points different. Your goal is still the same – maintain and increase satisfaction.
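The arithmetic really is that simple; a tiny sketch using the same invented overlap numbers:

```python
# The new baseline is whatever the NEW mode shows during the overlap.
online_overlap = [0.70, 0.71, 0.69]   # hypothetical online waves
new_baseline = sum(online_overlap) / len(online_overlap)
print(f"New normal: {new_baseline:.0%}")   # ~70%: not worse, different
# Future waves are judged against this number; the goal is unchanged.
```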

6) Consider algorithms to convert the new data to match the old data. I’m not really sure that I recommend them though. It seems to me you’re just fooling yourself and distancing yourself from the data. The more fixing you do, the more difficult it is to really understand the data. Plus, a few years from now, someone is going to ask “Why are we doing that anyways?” and nobody will remember why.
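For the record, here’s the kind of conversion being talked about, sketched with the same invented overlap numbers. Note how innocently the fudge factor is born:

```python
# A naive calibration: scale new-mode scores up to old-mode levels.
paper  = [0.79, 0.81, 0.80]   # overlap waves, old method
online = [0.70, 0.71, 0.69]   # same waves, new method

multiplier = (sum(paper) / len(paper)) / (sum(online) / len(online))
print(f"multiplier = {multiplier:.2f}")   # ~1.14

adjusted = [round(score * multiplier, 2) for score in online]
print(adjusted)   # every future report now carries this adjustment
```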

3 responses

    1. I agree.

      We have seen, and still see, this with continuous tracking work done by our company.

      Significant bonuses were paid based on levels of total brand awareness and movement against competitive sets. Most were automotive clients (which sort of dovetails with the JD Power rants) but it did cross-pollinate into other verticals.

      Also saw this in copy testing: when we re-calibrated the normative database while moving online, it screwed up tons of clients AND their agencies who were compensated based on the scores derived.

    2. Quoting point 5: “If satisfaction used to trend around 80% and now it’s at 70%, consider 70% the new normal. It’s not 10 points worse, it’s 10 points different.”

      Have I told you this story before? Or maybe posted about it? I went to a presentation by a JD Power researcher, who told us that now that they’ve transitioned almost entirely to web research, and because web scores are always lower, they have to apply a multiplier to every result to keep it in line with the old phone-based results. Why? Because many of the companies that pay for JD Power rankings have tied executive compensation to the scores. They can’t drop from 80% to 70% because then they won’t get their bonuses. So rather than adjusting the compensation, or, better, asking each other whether perhaps it was 70% all along and people are just being more honest (not just more critical) online, no, they just multiply everything by 1.14.

      Forever?

      1. Oh that’s a good one! I’ve heard of that happening. It just makes no sense to me. Working forever and ever with fake data.
