Tag Archives: confidence interval

Really Simple Statistics: Chi-Square #MRX

Welcome to Really Simple Statistics (RSS). There are lots of places online where you can ponder over the minute details of complicated equations but very few places that make statistics understandable to everyone. I won’t explain exceptions to the rule or special cases here. Let’s just get comfortable with the fundamentals.

If you haven’t had your morning cup of tea or coffee, may I be the first to disappoint you by saying this post has nothing to do with chai tea! Sorry. 😦

And my apologies again, it has nothing to do with a traditional Chinese unit of length, or a dragon in Chinese mythology, or a life-force.

What is a chi-square?

Chi-squares are all about percentages. The chi-square is a statistical test used to determine whether the percentage for one group is significantly different from the percentage for another group. Is the percentage of men who play soccer different from the percentage of women who play soccer? Is the percentage of people who made a purchase on Saturday the same as the percentage of people who made a purchase on Sunday? Is the percentage of high-income people who buy Brand A the same as the percentage of low-income people who buy Brand A?

Like any statistic, chi-squares can be very simple.

  • Compare the percentage of men who buy Brand A vs the percentage of women who buy Brand A

Chi-squares can also be more complicated.

  • Compare the percentage of men who buy Brand A or Brand B or Brand C vs the percentage of women who buy Brand A or Brand B or Brand C
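To make that concrete, here is a minimal Python sketch of the simple men-vs-women Brand A comparison. The counts are hypothetical – a real study would plug in its own crosstab – and it leans on scipy’s chi2_contingency for the test itself:

```python
from scipy.stats import chi2_contingency

# Hypothetical crosstab: do men and women buy Brand A at different rates?
#            bought Brand A   did not buy
observed = [[120,             280],        # men   (30% bought)
            [180,             220]]        # women (45% bought)

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value (say, below 0.05) suggests the two percentages really do differ.
```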

Most basic market research relies heavily on chi-square tests. All of those grid questions in a survey are usually analyzed with a chi-square – the percentage of people who chose “Strongly Agree” or the percentage of people who chose “Disagree.”

Usually, when a study is launched, one of the project deliverables is a set of data tables – you know, those 300 pages of tables? These tables are chock full of chi-square tests, but you wouldn’t know it unless you read the tiny little print at the bottom of the tables.

The important thing to remember is that chi-squares are all about percentages.

Really simple statistics!


The Sherlock Holmes of CPG

Years ago, I read all the Arthur Conan Doyle stories and a bunch of Agatha Christie’s too. While I didn’t grow up to be an amazing sleuth who could determine that the butler did it after seeing a toothpick tucked behind the toilet, I did become a different sort of sleuth. I can’t say those books are what drew me to MR, but here are a bunch of things that keep me there.

1) Charts – oh how I like charts. Box plots, brand maps, scatter plots, radar charts. These charts allow you to instantly visualize tons of data that would otherwise make your eyes ache reading tables. (If charts are your thing, check out http://www.flowingdata.com )
2) Finding the oddity – Data can be pretty boring. You know what it is and what you’ll find. But, on that rare occasion, you’ll find the strangest little tidbit that resonates with you completely. And you wonder why it never occurred to you before.
3) Creating reports – It’s a bit of artistic creativity in an otherwise scientific day. Choosing colours that match the client’s brand without clashing or looking obvious. Making sure the layout is clean and tidy so clients can focus on the results, not on jarring font size changes.
4) New topics every day – Tired of soup? How about shoes or banking or guitars or phone services? You learn something every day about things you never realized you cared about.
5) Solving mysteries – You are the Sherlock Holmes of consumer product mysteries. People will be astounded with your deductive reasoning skills and you will astound yourself too.

And now, with my trusty confidence interval calculator at my side, I shall go solve another mystery.

Weighting – Is it all it’s cracked up to be?

Weighting is a very common step in the research process. You might even use it every single time you run a study. I’m going to go against the grain here and challenge you to think about it more carefully.

Let’s look at a simple example. Take a data set that is 40% men and 60% women. The men produced an average score of 38% and the women an average score of 48%. That gives a raw score of 44%. But, because the population is 50% male and 50% female, we want to weight the results back to that target. That gives us a weighted score of 43%. So the raw score is 44% and the weighted score is 43%. Is it really all that different? Does that really change the business decision? The answer should be “Absolutely not, because my confidence interval is 3 points.” Makes sense, doesn’t it? If the raw score is basically equal to the weighted score, what are you doing weighting the data?

Gender     Sample   Population   Score
Male       40%      50%          38.0%
Female     60%      50%          48.0%
RAW                              44.0%
WEIGHTED                         43.0%
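For anyone who wants to see the arithmetic, here is a minimal Python sketch of the example above. It uses only the shares and scores from the table; nothing else is assumed.

```python
# Raw vs weighted average score for the example above.
# Sample is 40% men / 60% women; the population target is 50/50.
sample_share = {"male": 0.40, "female": 0.60}
target_share = {"male": 0.50, "female": 0.50}
score        = {"male": 0.38, "female": 0.48}

raw      = sum(sample_share[g] * score[g] for g in score)   # 0.44
weighted = sum(target_share[g] * score[g] for g in score)   # 0.43

print(f"raw = {raw:.0%}, weighted = {weighted:.0%}")
```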

Now, I’m not saying don’t weight your data. I’m just saying think twice before you weight your data. UNDERSTAND how weighting works before you use it. Here are some thoughts in relation to weighting:

1) Do not expect your scores to change very much.
2) If your scores are changing a lot, your sample is too different from the population and your weighted scores are probably not very reliable. You probably have tiny sample sizes that should be thrown out, not weighted.
3) If your scores aren’t changing very much, why are you weighting? Data varies and comes with confidence intervals. You’re probably just shifting the score around within its confidence interval, so why bother? (See the sketch after this list.)
4) If you are using weighting, do not weight because you didn’t get enough of a particular demographic group. Weight because one group was too large.
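To put a number on that “3 points,” here is a rough Python sketch of the margin of error for a proportion, using the normal approximation. The sample size of 1,000 is an assumption for illustration, not a figure from the post.

```python
import math

# 95% margin of error for a proportion, normal approximation.
# p = 0.44 is the raw score above; n = 1,000 is an assumed sample size.
p, n = 0.44, 1000
margin = 1.96 * math.sqrt(p * (1 - p) / n)

print(f"44% ± {margin:.1%}")   # roughly ± 3 points
```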

Moral of the story: Use the largest sample you can afford, and pull it so that it will be as representative as possible when you are done.


How to Transition a Survey

(Image via Wikipedia: “Illustrating a 90% confidence interval on a st…”)

From offline to online
From 60 minutes to 30 minutes
From $20 incentive to $5 incentive

It truly is possible to transition tracking surveys across variables, whether they be method, length, or some other feature. There are just a few things to keep in mind.

1) You will NEVER EVER EVER achieve perfect results. Look at the data you’re getting now. Even when you change absolutely nothing – no new advertising strategies, no new flavours, colours, or types, no new anything – your data jumps from week to week and month to month. This is simply random variation. You will continue to see random variation after you make the switch. That is a basic fact. Do not be intimidated by it. Expect it.

2) Any statistics you run on your data are probably based on the good old standby 95% confidence interval. One way to think about this is that, of all the numbers in your dataset, 5% of them are just plain wrong. Sampling got in the way, survey design got in the way, cute little toddler boys were tugging at pant legs during survey-taking time. Think of it another way: of the 100 statistical tests you ran with your last study, 5 lied to you. They made you think there was truly a difference when there was in fact no real difference at all. These false differences will continue after the transition. You just don’t know where those 5% of wrong conclusions are. (A little simulation of this idea appears after this list.)

3) Be realistic. Recognize that there absolutely will be differences. You KNOW that different methods cause different results, so don’t be surprised when they show up in your dataset. And don’t expect the differences to be tiny; expect them to be statistically significant. If you’re working with box scores out of 100%, it is reasonable to see differences of 5 points between your old and new datasets, and a number of differences throughout your dataset of 10 or even 15 points. Refer back to #2.

4) Transitioning takes time. Aim for 3 time periods. If you run your survey monthly, then try for a transition period of at least 3 months. During this time, run the survey both ways – for example, the exact same survey on paper and the exact same survey on the internet (obviously adjusted to meet online survey standards).

5) Then the fun part: learn to re-establish your baseline. Use the two sets of data to see how you need to re-think your product. If satisfaction used to trend around 80% and now it’s at 70%, consider 70% the new normal. It’s not 10 points worse, it’s 10 points different. Your goal is still the same – maintain and increase satisfaction.

6) Consider algorithms to convert the new data to match the old data – though I’m not really sure I recommend them. It seems to me you’re just fooling yourself and removing yourself from the data. The more fixing you do, the more difficult it is to really understand the data. Plus, a few years from now, someone is going to ask “Why are we doing that anyways?” and nobody will remember why.
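Here is a minimal simulation sketch of point #2 – the sample sizes and satisfaction scores are made up purely for illustration – showing that comparing two samples drawn from the very same population still flags a “significant” difference about 5% of the time:

```python
import numpy as np
from scipy.stats import ttest_ind

# Draw two samples from the SAME population 1,000 times and count how
# often a t-test at the 95% confidence level claims a difference anyway.
rng = np.random.default_rng(0)
false_alarms = 0
for _ in range(1000):
    wave_a = rng.normal(loc=0.70, scale=0.10, size=200)  # e.g. satisfaction scores
    wave_b = rng.normal(loc=0.70, scale=0.10, size=200)  # identical population
    if ttest_ind(wave_a, wave_b).pvalue < 0.05:
        false_alarms += 1

print(f"{false_alarms / 1000:.0%} of tests claimed a difference")  # roughly 5%
```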

