From offline to online
From 60 minutes to 30 minutes
From $20 incentive to $5 incentive
It truly is possible to transition a tracking survey across any of these variables, whether method, length, incentive, or some other feature. There are just a few things to keep in mind.
1) You will NEVER EVER EVER achieve perfect results. Look at the data you’re getting now. Even when you change absolutely nothing, no new advertising strategies, no new flavours, colours, or types, no new anything, your data jumps from week to week and month to month. This is simply random variation. You will continue to see random variation after you make the switch. That is a basic fact. Do not be intimidated by it. Expect it.
2) Any statistics you run on your data are probably based on the good old standby, the 95% confidence interval. One way to think about this is that of all the numbers in your dataset, 5% of them are just plain wrong. Sampling got in the way, survey design got in the way, cute little toddler boys were tugging at pant legs during survey-taking time. Think of it another way: of the 100 statistical tests you ran on your last study, about 5 lied to you. They made you think there was truly a difference when there was in fact no real difference at all. These false alarms will continue after the transition. You just don’t know which 5% of your conclusions are the wrong ones.
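You can see this for yourself with a quick simulation sketch. The numbers below (two waves of 200 respondents, scores around 80) are illustrative assumptions, not from any real tracker: we run 100 two-sample tests where both waves come from the *same* population, so every "significant" result is, by construction, a false alarm.

```python
import random
import statistics

random.seed(7)

def looks_significant(a, b):
    """Two-sample t statistic (pooled-variance form); True if |t| exceeds
    ~1.98, the 95% critical value for large samples."""
    na, nb = len(a), len(b)
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    t = (ma - mb) / (pooled * (1 / na + 1 / nb)) ** 0.5
    return abs(t) > 1.98

# 100 tests where both "waves" are drawn from the SAME population:
# any flagged difference is a lie of the kind described above.
false_alarms = sum(
    looks_significant(
        [random.gauss(80, 10) for _ in range(200)],
        [random.gauss(80, 10) for _ in range(200)],
    )
    for _ in range(100)
)
print(false_alarms)  # typically somewhere around 5 out of 100
```

Run it a few times with different seeds and the count bounces around 5, which is exactly the 5% lie rate the confidence interval bakes in.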
3) Be realistic. Recognize that there absolutely will be differences. You KNOW that different methods produce different results, so don’t be surprised when they show up in your dataset. And don’t expect the differences to be tiny. Expect some of them to be statistically significant. Refer back to #2. If you’re working with box scores out of 100, it is reasonable to see differences of 5 points between your old and new datasets, and expect to see a number of differences of 10 or even 15 points scattered throughout.
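A back-of-the-envelope margin-of-error calculation shows why point swings that size are normal. The sample size and box score below are made-up illustrations (150 completes per wave, an 80% top-box score), not a recommendation:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error, in percentage points, for a box-score proportion p
    estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# One wave: an 80% top-box score from 150 completes
one_wave = margin_of_error(0.80, 150)
print(round(one_wave, 1))  # 6.4

# Comparing two independent waves roughly doubles the variance,
# so the gap between old and new waves can wander even further:
print(round(one_wave * math.sqrt(2), 1))  # 9.1
```

In other words, with modest wave sizes, old-versus-new gaps of 5 to 10 points are within ordinary sampling noise before any real method effect even enters the picture.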
4) Transitioning takes time. Aim for three time periods. If you run your survey monthly, then try for a transition period of at least three months. During this time, run the survey both ways in parallel. For example, field the exact same survey on paper and on the internet (adjusted, obviously, to meet online survey standards).
5) Then the fun part: learn to re-establish your baseline. Use the two overlapping sets of data to re-think your benchmarks for the product. If satisfaction used to trend around 80% and now it’s at 70%, consider 70% the new normal. It’s not 10 points worse, it’s 10 points different. Your goal is still the same – maintain and increase satisfaction.
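Re-baselining can be sketched in a few lines. The wave scores below are hypothetical (three overlap periods, paper vs. online), and the 3-point tolerance is an assumed noise band, not a standard:

```python
# Hypothetical parallel-run results: top-box satisfaction (%) from
# three overlap periods, old method vs. new method.
old_method = [81, 79, 80]   # paper
new_method = [71, 69, 70]   # online

old_baseline = sum(old_method) / len(old_method)
new_baseline = sum(new_method) / len(new_method)
shift = new_baseline - old_baseline

print(f"old baseline: {old_baseline:.1f}")   # 80.0
print(f"new baseline: {new_baseline:.1f}")   # 70.0
print(f"method shift: {shift:+.1f} points")  # -10.0

# Going forward, judge new waves against the NEW baseline (70), not 80.
def flag_change(wave_score, baseline=new_baseline, tolerance=3):
    """Flag a wave only if it moves beyond normal variation around the new normal."""
    return abs(wave_score - baseline) > tolerance

print(flag_change(68))  # False: ordinary wobble around the new normal
print(flag_change(63))  # True: a real movement worth investigating
```

The point of the sketch: the 10-point shift gets attributed to the method change once, during the overlap, and every later wave is read against the new baseline.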
6) Consider algorithms that convert the new data to match the old data. I’m not really sure I recommend them, though. It seems to me you’re just fooling yourself and distancing yourself from the data. The more fixing you do, the harder it is to really understand the data. Plus, a few years from now, someone is going to ask “Why are we doing that anyways?” and nobody will remember why.