Radical Market Research Idea #6: Don’t calculate p-values #MRX


p-values are the backbone of market research. Every time we complete a study, we run all of our data through a gazillion statistical tests and search for those that are significant. Hey, if you’re lucky, you’ll be working with an extremely large sample size and everything will be statistically significant. More power to you!

But what if you didn’t calculate p-values? What if you simply looked at the numbers and decided whether the difference was meaningful? What if you calculated means and standard deviations, and focused more on effect sizes and less on p<0.05? Instead of relying on some statistical test to tell you that you chose a sample size large enough to make the difference significant, what if you used your brain to decide whether the difference between the numbers was meaningful enough to warrant making a decision?
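If you want to see how little machinery that takes, here is a minimal sketch in Python. The numbers are purely hypothetical, and the pooled-standard-deviation Cohen's d is just one possible effect size, not the only option:

```python
import numpy as np
from scipy import stats

# Hypothetical satisfaction scores for two ad concepts (made-up data)
concept_a = np.array([7.1, 6.8, 7.4, 6.9, 7.2, 7.0, 6.7, 7.3])
concept_b = np.array([6.9, 6.6, 7.1, 6.8, 7.0, 6.7, 6.5, 7.0])

# Means and standard deviations -- the numbers you'd actually look at
mean_a, mean_b = concept_a.mean(), concept_b.mean()
sd_a, sd_b = concept_a.std(ddof=1), concept_b.std(ddof=1)

# Cohen's d: difference in means divided by the pooled standard deviation
n_a, n_b = len(concept_a), len(concept_b)
pooled_sd = np.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2))
cohens_d = (mean_a - mean_b) / pooled_sd

# The p-value, reported for comparison only
t_stat, p_value = stats.ttest_ind(concept_a, concept_b)

print(f"Mean A = {mean_a:.2f}, Mean B = {mean_b:.2f}")
print(f"Cohen's d = {cohens_d:.2f}, p = {p_value:.3f}")
```

The point of the sketch is that the means, standard deviations, and effect size are all there before any significance test enters the picture.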

Effect sizes are such an underused, unappreciated measure in market research. Try them. You’ll like them. Radical?

3 responses

  1. This is so true. Been working with a client who has 4000+ cases… statistical significance is meaningless in many cases, so I have had to revert to my normal barometer: does the difference change the client’s actual strategy (marketing, product development, sales, etc.)? If yes, talk about it. If no, forget it.

  2. Good point: we often overlook this, and it is so easy to get this statistic in SPSS (just a three-word addition to the SPSS syntax). Effect size should not only be used when comparing means but also in other techniques, including correlation.

    The effect size is often defined as the magnitude of an observed effect. For instance, if we run a correlation analysis with a software package, we usually get two major statistics, among other things: the significance value (often called the probability value or p-value) and the correlation coefficient. We often check the p-value first to see whether the relationship is statistically significant. If the p-value < .05 (in education we usually set the significance level at .05), the relationship is statistically significant ("significant" here meaning the relationship is unlikely to have arisen by chance). The second thing we do is look at how large the effect size is, which in this case is the correlation coefficient. As we know, the correlation coefficient ranges from –1 to +1, and the closer it is to zero, the weaker the relationship. If the correlation coefficient is .04, for instance, we conclude that the relationship is statistically significant but the effect size is too small to be meaningful. (A quick code sketch of this workflow follows the responses below.)

    How large does an effect size need to be to be meaningful? There is no definite answer to this question. Cohen (1988, 1992) "hesitantly" made the following suggestions, and they have been widely accepted:
    r = .10 (small effect)
    r = .30 (medium effect)
    r = .50 (large effect)
    Do you have any other latest benchmark available?

    1. Ah, Cohen’s D. Fond memories.🙂 Thanks for sharing that with folks.
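To make the workflow described in the second response concrete, here is a minimal sketch in Python (hypothetical data; SciPy's pearsonr standing in for the SPSS output mentioned above). It reports the correlation coefficient with its p-value, then labels the effect size against Cohen's benchmarks:

```python
import numpy as np
from scipy import stats

# Hypothetical data: ad recall vs. purchase intent for 200 respondents (made up)
rng = np.random.default_rng(42)
ad_recall = rng.normal(5, 1.5, size=200)
purchase_intent = 0.1 * ad_recall + rng.normal(5, 1.5, size=200)

# Correlation coefficient (the effect size) and its p-value
r, p = stats.pearsonr(ad_recall, purchase_intent)

# Interpret r against Cohen's rough benchmarks rather than stopping at p < .05
if abs(r) >= 0.50:
    size = "large"
elif abs(r) >= 0.30:
    size = "medium"
elif abs(r) >= 0.10:
    size = "small"
else:
    size = "negligible"

print(f"r = {r:.2f} ({size} effect), p = {p:.4f}")
```

With a very large sample even a trivial r will clear p < .05, which is exactly why the benchmark check matters more than the significance test.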
