Live note taking at the #MRIA16 conference in Montreal. Any errors or bad jokes are my own.
Danger ahead – or is it opportunity by Micheal Dorr
- Change is inevitable
- Marketing myopia – rail used to believe they were in the business of train travel, but they should have seen themselves as the business of transportation and then they would have invested in cars and planes too
- Our business isn’t surveys. We are consumer insight.
- 1) Mobile “power of now”, 2) Need for speed, 3) Big data gets personal, 4) Automation
- Activities formerly done on PC are going mobile
- Most innovative brands embrace digital and mobile, and don’t necessarily own cars or hotels or content
- Taco Bell is named one of the most innovative companies of 2016
- Competitive landscape has changed dramatically
- Over half of surveys are not mobile optimized
- Mobile power of now – geofencing, mobile diaries, mobile ethnographies, shorter surveys
- Attention span is actually dropping
- Americans will not wait in line for more than 15 minutes, 25% of people won’t wait more than 4 seconds for a webpage to load
- Amazon Prime will deliver books to your door in one hour
- Krispy Kreme will tell your phone if you’re near a store and if donuts just came out of the fryer
- Possible to have a one to one conversation with a global company because of big data
- It is not Qual vs quant, it is Qual AND quant [yeah baby!]
- AI creates a more meaningful and human interview
- Data and analysis trends and patterns can be identified via automation
- These tools aren’t threats, they are tools to enable us to do our research better
Definition of madness – Digital advertising by Joe Amati and Sharon Flynn
- Focus on people who use multiple screens, a lens into how content will be consumed in the future
- 3.3 hours of video per day for French Canadians – phone, laptop, tv, pvr, ott
- 66% of the video content French Canadians watch is under their control, they choose it as opposed to it being ‘on’
- 50% say advertising is under their control, they can fast forward or skip it
- 30% have a favourable view of advertising. Why do we spend so much money when we know people don’t like it? We keep doing the same thing expecting something to change
- People have four states of mind – bored, goal oriented, seeking diversion, invested
- We can’t be so personal and creep people out, can’t intrude on their lives, it’s more creepy when it’s not quite relevant
- Why do people NOT skip ads – 66% because they like the ad or the brand or the quality of the ad, humour is the top reason why not
- One size does not fit all – everyone consumes content in a different way and handles interruptions in a different way
- Design for digital first, you can’t just take a thirty second tv commercial and put it on YouTube, people go online to be entertained not to watch ads
- Make stories not time – don’t ask to buy a ’30 second slot’. Consumers take as much or as little as they want as long as they are entertained
- Mini-Wheats did a three minute commercial where kids secretly instructed a fitness class, and people loved it
- Keep it fresh – people have a world of content at their fingertips
Down with top two box scores by Michael Edwards and Parul Verma
- Traditionally there were two items to pick from; it was easy to choose between them, and it made more sense that attitudes equalled behaviour
- People need metrics that matter immediately
- Simple is not as simple as it seems
- We ask all our research questions using a five point scale, it’s crazy simple and anyone could write the questionnaire and program it in twenty minutes
- Straightlining is a serious problem because of this
- We like to think that purchase intent is a behaviour question but it is an attitude question; in a complex world, intent is anything but simple
- The metrics of a monadic test are attitudes; this was the only option we had for decades. We still call them boards after fifty years even though we really don’t use boards anymore
- Really need to account for competitors, in the category, out of the category
- It is possible to simulate thousands of options, not just the 3 you could prepare manually; simulations let you see changes not just to your product but to 500 other products as a result
- We often say a product is in the top quintile and now we can quantify into units and dollars; we can check which scenario generates more units and profit
- Clients want to know how reliable the scenarios are, they have 3 examples where prediction was nearly identical, very impressive
- There is value in monadic testing but there is a time and place for it, good for diagnostic feedback
- It’s not worthwhile doing if you want to test one single idea, better with many potentials
ATB goes all in by Ann Coulter and Tawnya Crerar
- How do we understand the emotional and rational aspects of banking
- Mind model labs using psychoanalytics
- Most of our daily decisions are tiny, but sometimes a financial issue becomes a crisis and we think about our bank in a new way
- Seven types of crises that affect customer retention, outcome is really important
- The longer it takes to respond, the more impact it has on a customer
- Did an 8 week online community and asked people to write a love letter or a breakup letter, asked people to start a transaction at a bank and write about it. There is a lot of emotion in how people talk about these institutions
- Also used a discrete choice model, manipulated service, messaging to capitalize on retention and acquisition, compared new vs existing customers, millennials
- Developed a simulator and turned those into scenarios and strategies
- 3 major insights: employees, customers, company
- Revamped the welcome program for employees, want to be a place employees want to be: on their first day people start late and there is a welcome package on their desks, they meet the CEO and his direct reports, they pay newbies to leave if they don’t like it, newbies must work in a branch for one day and in a call centre for one day; employees are told if something doesn’t feel right they have the power to do what needs to be done
- Use real customers in their ads, pay loyal customers to bring in other customers, and run a Junior ATB program where they teach kids to balance a chequebook and play pretend bank so kids learn all the roles
- Since it started 18 months ago, employee engagement is through the roof, and profit and market share are growing
- Banking is about life and meaning, and relationships have to matter
- [lovely case study]
- [jeez, this employee welcome video is going to make me cry, it’s like saving the world and helping the poor feel safe and loved. Where do I sign up?]
If you’re a brand manager, you desperately want to know how many people love your brand. Do 5% of people love it or do 45% of people love it? Top 2 Box scores help you track that number and evaluate improvements over time. They tell you which features or aspects of your brand people like.
Similarly, Bottom 2 Box scores are essential measures as well. How many people hate your brand? Do they hate the red one or the blue one? The hope is that B2B scores decrease over time, thereby indicating that any changes made to a product served to convert haters into lovers.
So what’s the point in measuring neutrals? Is it important if 40% or 60% of people couldn’t care less about your brand? Should we care if they walk past your retail store and pay no mind to the beautiful display in the window? That they flip the newspaper pages without paying attention to the 50% off coupon taking up half the page? That they choose a competitive brand over yours because the store was 20 feet closer? I think the answer is obvious.
Complacency matters. Complacency is not hate, nor is it love. Complacent consumers are imminent switchers. They are on the cusp of discovering and falling in love with a competing brand because you don’t care about them. Other complacent consumers are imminent loyals. They are ripe to become massive fans sharing your social media links with all of their friends and followers if only you would give them the reason why.
So don’t brush aside the Neutrals in favour of Top 2 and Bottom 2 Box scores. Unless you don’t care. Or whatever. Cause I don’t care. Anyways.
Say it ain’t so! Banish average scores? Where is that coming from?
- Average scores fail to explain the whole picture. Here’s an example. Let’s say the Blackberry generates an average score of 4 out of 5, a moderately positive score. And, let’s say that the iPhone generates an average score of 4 out of 5, the same score. It would seem that people like the Blackberry just as much as the iPhone. (NOOOOOOO!) But wait… did you forget to look at standard deviations or box scores? What if you learned that the Blackberry received 80% neutral scores, 10% positive scores, and 10% negative scores? And what if the iPhone received 40% neutral scores, 30% positive scores, and 30% negative scores? Those are two very different love/hate stories. But you wouldn’t know it from the average score. Bar charts of those two distributions would show the exact same mean but oh so different box scores.
- Trends never change. What is the biggest joke researchers make about tracking studies? That nothing ever changes. Indeed, from day to day and week to week, the numbers are always the same. There’s almost no reason to even run a tracker. The average score has been 3.466364 since last week, since before the internet, since before the beginning of time. Since we’re working with a scale from 1 to 5, instead of a scale from 1 to 100, we don’t even give ourselves the chance to see if something has changed. Why do we even bother? Because we were told to run a tracker and never change the questions or formatting even if the market warrants revisions. Wow. Great research objective.
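The Blackberry/iPhone scenario above can be checked in a few lines. This is a sketch under my own assumption that "positive" maps to 5, "neutral" to 3, and "negative" to 1 on the 5-point scale (the post doesn't specify the underlying ratings):

```python
import statistics

# Hypothetical 100-person samples matching the percentages in the post
# (assumption: positive = 5, neutral = 3, negative = 1)
blackberry = [5] * 10 + [3] * 80 + [1] * 10   # 10% pos, 80% neutral, 10% neg
iphone     = [5] * 30 + [3] * 40 + [1] * 30   # 30% pos, 40% neutral, 30% neg

# The means are identical...
print(statistics.mean(blackberry), statistics.mean(iphone))  # 3 3

# ...but the standard deviations reveal two very different love/hate stories
print(round(statistics.pstdev(blackberry), 2))  # 0.89
print(round(statistics.pstdev(iphone), 2))      # 1.55
```

Any symmetric distribution around the midpoint produces the same mean, which is exactly why a mean reported without a spread measure hides the story.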
But wait. For some reason, I can’t let average scores go.
- Big trends scream. In the social media space, where we get to use sample sizes in the thousands and millions, serious opinion changes make for beautiful charts. Sure, the number has been 3.2 for 2 years now. But all of a sudden it went to 3.25 and then 3.3 and then 3.6. I can even pin down the exact day that some unknown event took place and shook up the market before returning opinions to normal. Box scores do this too, but you’ve got three lines to check instead of just one and, well, maybe I’m just a little lazy.
The moral of the story is this. If you’re going to use average scores, you absolutely must use them in concert with a measure of distribution, whether a standard deviation or box score. And, you must consider whether you are using a scale that is wide enough to actually let you see any changes that might truly exist. Otherwise, I’ll tell you now. Your score next week will be 3.355235263.
I was in school for a bunch of years, and took a bunch of research design courses and a bunch of statistical analysis courses. Easy ones, hard ones, and a few really interesting ones. Surprisingly, one thing I never learned about was box scores, a statistical staple in the market research world.
Box scores are a way of talking about and working with Likert scales or other types of categorical scales so that everyone knows whether you are talking about the positive end of the scale (top box, top 2 box), the middle of the scale (middle/neutral box), or the negative end of the scale (bottom box, bottom 2 box).
Instead of calculating average scores from the Likert scale responses, box scores are reported as the percentage of the total number of people who answered the question. (If 10 out of 50 people chose strongly agree, the top box score is 20%.) Box scores let you clearly identify how many people fall into a subgroup – people who are happy, unhappy, or just don’t care about your product.
Why do box scores matter? In a sense, they do report the same type of information as average scores. But, unless standard deviations are near and dear to you, average scores often appear very similar between groups. It’s hard to explain to a client why scores of 3.6 and 3.9 are very different because there is no intuitive difference between those numbers.
But, let’s think about box scores now. Can you intuitively understand the difference between 30% of people liking your brand and 40% of people liking your brand? I’m pretty sure you can. And you don’t need to understand what a standard deviation is either. I’m not in favour of dumbing down statistics but I am in favour of people understanding them.
Here’s another reason box scores are good. The average score calculated for a result that is 10% top box, 10% bottom box and 80% middle box is exactly the same average score you would get for a result that is 40% top box, 40% bottom box, and 20% middle box. I’d certainly like to know if 10% or 40% of people hated my product. That’s a pretty important difference to be aware of and I wouldn’t want it getting lost because someone had a weak understanding of what an SD is.
So now, psychology/sociology/geography majors, go forth and prosper as market researchers!
Wouldn’t it be great if you could just read and interpret a number, and then be confident about your interpretation? If that were the case, you wouldn’t be able to buy 23 different books called “How to Lie with Statistics.”
Here are a few common problems I see when people try to interpret numbers.
Dislike matters just as much as like. Don’t get so focused on top box scores that you forget about bottom box scores. Brands can easily have identical top box scores and ridiculously different bottom scores.
How many times have you seen huge inexplicable spikes in your charts? Spikes are a key indicator that your sample size is too small. Be extremely nervous about numbers based on only 30 people. Be cautious of numbers based on fewer than 100 people. Check first and avoid embarrassing conclusions.
Everything on the planet is governed by rules. And one of those rules is randomness. When you’ve determined that a small sample size is not the cause of the spike, and there is no discernible explanation for the spike, consider that it may in fact be a random number. Random happens. Deal with it.
Just because a test came out significant today doesn’t mean it will with new data next week. See the previous point. You’ll know you’ve really got something when it’s significant on several unique occasions.