Stats Suck. That is all.


[Image: cover of "Theory of Statistics" (Springer), via Amazon]

At the recent ARF audience measurement conference in New York, a couple of controversial statistical ideas were raised. Controversial in the sense that people reading my tweets couldn’t tell if I believed the idea or not.

1) The point was made that we should forget the 95% significance level and focus instead on 80%. I do agree that some people get so hung up on 95% that they fail to see the forest for the trees. We need to understand the theory of statistics well enough to know when it makes sense to go against the rules. As always, once you know why you’re breaking the rules, it’s ok to break them. I see 80% as a good threshold for theory building and hypothesis generation, a "do I bother to keep trying" number, and 95% as a good confirmatory test. But with human discretion applied.
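To make the two thresholds concrete, here is a minimal sketch of how they might be used together. The two-proportion z-test and the numbers are my own illustration, not from the conference: a result that clears the exploratory 80% bar but not the confirmatory 95% bar.

```python
import math

def two_prop_z_test(x1, n1, x2, n2):
    """Two-sided z-test for a difference of two proportions (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative numbers: ad recall of 33% (99/300) vs 26% (78/300)
z, p = two_prop_z_test(99, 300, 78, 300)
print(f"p = {p:.3f}")                       # roughly 0.06
print("worth pursuing (80% level)?", p < 0.20)   # True
print("confirmed (95% level)?", p < 0.05)        # False
```

A finding like this would be kept alive for further testing under the 80% rule, while a confirmatory study would still be needed before acting on it at the 95% level.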

2) We focus a lot of our energy on trying to build the most accurate samples we possibly can, split by many demographics and complicated sampling strategies. But we know we can never achieve that perfect sample. Ever. So let’s approach this from a different point of view: acknowledge the flaws in a sample, and be wary of, and smart about, the weaknesses they bring to the results. If you want to achieve new heights, curious outcomes, and innovation, press on anyway. Innovation comes from taking risks, and working with a less-than-perfect sample just might create the conditions for it.

I dare you.

4 responses

    1. The confidence level is easy to misinterpret. The right explanation is that the observed data could have occurred due to chance alone in less than 5% of cases. But most treat this as being 95% certain that the alternative hypothesis is valid.
      We can’t stop when data fits the hypothesis. Data can fit any number of hypotheses.

      Why do we test for statistical significance at all?
      I recommend going with a Bayesian approach!
      -Rags

    2. Good points! Perfect samples are just a fairy tale from stat books, particularly in the case of online studies where online panel samples are used, so fretting about CIs is often like triggering the ancient stress response, as if waiting for a lion to attack us on the African savannah while living in suburbia. See my post on this: http://www.relevantinsights.com/testing-for-significant-differences

    3. The flip side is that even results that are significant at the 95% level are not always actionable. I have seen researchers enthusiastically report something because it is “significant” when, in reality, the difference was not big enough to warrant real action. As you say, human discretion must be applied.

    4. Hooray for #2! I agree with you. Unless you have an unlimited amount of money, time, and other resources, you will not get the perfect sample. So let’s stop pretending that we can.

      I like your idea that the limitations may lead to innovation. Great way of thinking!
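Rags’s suggestion of a Bayesian approach (comment 1) can be sketched with a simple Beta-Binomial model. The counts below are purely illustrative, and the uniform Beta(1, 1) priors are an assumption; the output is a direct probability statement of the kind the p-value is so often mistaken for.

```python
import random

def prob_b_beats_a(succ_a, n_a, succ_b, n_b, draws=100_000, seed=1):
    """Monte Carlo estimate of P(rate_B > rate_A) under uniform Beta(1, 1) priors."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        # Posterior for each group is Beta(successes + 1, failures + 1)
        a = rng.betavariate(succ_a + 1, n_a - succ_a + 1)
        b = rng.betavariate(succ_b + 1, n_b - succ_b + 1)
        wins += b > a
    return wins / draws

# Illustrative counts: 78/300 recall in group A vs 99/300 in group B
print(prob_b_beats_a(78, 300, 99, 300))  # roughly 0.97
```

Unlike a significance test, the result reads directly as "there is about a 97% chance B’s rate is higher than A’s", which is the statement people usually want.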