Every time we test a new subject line, a new survey question, a new image, a new way-cool-and-so-much-better alternative, we use a control group. That way, when the results come in from the test group, we can immediately see whether it did better, worse, or the same as the way we usually do things.
But let’s take a step back for a minute and remember how “random” sampling and probability work. Statistics tell us that even superior quality research designs are plagued by random chance occurrences. That’s why when we report numbers, we always put a confidence interval around them, say 3 or 4 points. And then on top of that, we have to remember that five per cent of the time, the number we see will be horribly different from reality. Sadly, we can never know whether the number we’ve found is a few points away from reality or 45 points away from reality.
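You can watch this happen with a quick simulation. This is a minimal sketch, and the numbers are made up for illustration: a true population rate of 50%, 1,000 respondents per survey, and the usual 95% margin of error of about 3.1 points for that sample size. Run thousands of “surveys” and count how often the confidence interval misses reality entirely:

```python
import random

random.seed(42)

TRUE_RATE = 0.50   # hypothetical true population rate (assumed for illustration)
N = 1000           # respondents per survey (assumed)
MARGIN = 0.031     # approx. 95% margin of error for p=0.5, n=1000 (~3.1 points)
RUNS = 10_000      # number of simulated surveys

misses = 0
for _ in range(RUNS):
    # simulate one survey: count respondents who "say yes"
    hits = sum(random.random() < TRUE_RATE for _ in range(N))
    estimate = hits / N
    # does the 95% interval around this estimate miss the true rate?
    if abs(estimate - TRUE_RATE) > MARGIN:
        misses += 1

print(f"Share of surveys whose interval missed reality: {misses / RUNS:.1%}")
```

The printed share lands near 5%, which is exactly the “1 time out of 20” that the confidence language promises, and the catch is that no individual survey tells you whether it was one of the unlucky ones.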
The only way to know for sure, or at least nearly for sure, is to re-run the test 19 more times, hitting the 20 runs that allow us to say “19 times out of 20.”
And yet, we only ever use one single control group before declaring our test group a winner, or loser. One control group that could be wrong by 3 points or 43 points.
So here’s my suggestion. Enough with the control group. Don’t use a control group anymore. Instead, use control groups. Yes, plural. Use two control groups. Instead of waiting a week and rerunning the test, which we all know you haven’t got the patience for, field two separate control groups. Launch the control survey twice using two completely separate but identical processes. Not a split-test or a hold-back sample. Build the survey. Copy it. Launch full sample to both surveys.
Now you will have a better idea of the kind of variation to expect with your control group. Now you will have a better idea of how truly different your test group is.
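A sketch of what that buys you, with made-up numbers (a 40% “usual” rate, 800 respondents per cell, and a hypothetical test cell that is truly 4 points better): two identical control launches will still disagree with each other, and that disagreement is your yardstick for pure sampling noise.

```python
import random

random.seed(7)

TRUE_RATE = 0.40   # hypothetical "usual" response rate (assumed)
N = 800            # respondents per survey (assumed)

def run_survey(rate, n):
    """Simulate one survey launch: the fraction of n respondents who say yes."""
    return sum(random.random() < rate for _ in range(n)) / n

control_a = run_survey(TRUE_RATE, N)
control_b = run_survey(TRUE_RATE, N)   # identical process, separate launch
test = run_survey(0.44, N)             # hypothetical test cell, truly 4 points better

noise = abs(control_a - control_b)     # gap between two "identical" controls
lift = test - max(control_a, control_b)

print(f"control A: {control_a:.1%}, control B: {control_b:.1%}")
print(f"gap between the two controls (pure noise): {noise:.1%}")
print(f"test vs. the higher control: {lift:+.1%}")
```

If the test group beats both controls by more than the gap between the controls themselves, you have much better grounds for calling it a real difference than if you had compared it against a single number.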
No, it’s not 19 repetitions by 19 different researchers in 19 different countries with 19 different surveys, but it’s certainly better than assuming your control group is the exact measure of reality.