Tag Archives: bias

Should market researchers measure the conscious or unconscious mind? #MRX #NewMR

Measuring the unconscious through implicit techniques is in vogue right now, and I’ll admit that I’ve been a huge fan of them for a couple of decades, ever since I got to use a tachistoscope in university. Implicit techniques are based on the premise that people’s feelings, opinions, and attitudes are often not accessible to conscious awareness. You’re probably most familiar with this in terms of people not recognizing or admitting that they are sexist, racist, homophobic, or xenophobic. Or, at least, the extent to which they are —ist or —ic.

Oh yes, I HAD to choose an iceberg image. The world needs to see another 83 trillion of them before we pick a new image. 🙂

Implicit techniques often entail having people do word or image comparisons at super-high speeds. For instance, you might ask people to assign one set of 100 words (e.g., adventurous, bewildered, debonair, heroic, birthday balloons, seaside, pyramids) to a couple of brands in under a minute. A choice must be made for every single word. The reasoning behind this technique is that decisions are made too quickly for logical thought to occur. Rather, gut feelings, the unconscious mind, the reptilian brain, are the only processes being accessed.

But what about this scenario?

I KNOW I am sexist, racist, homophobic, and xenophobic. I was raised in that culture and it is embedded in me. Growing up, I saw sexism and racism all over the media and, today, I see homophobia and xenophobia all over the media. At this point in my life, it would be massively hard to make that part of me disappear. Fortunately, what I can do, and what I have done, is to recognize that part of me so damn fast that it has minuscule effects on my actions. I know these biases exist in every socialized human being (ah, the innocence of babies who haven’t yet been taught to be biased!) and I actively tell myself that those feelings are wrong. I’ve actively moved the treatment of those thoughts and feelings from the unconscious to the conscious.

Which brings me to my main point. It doesn’t make sense to always and only measure the unconscious. Why? Because my actions will demonstrate a completely different story than my unconscious brain will reveal. Implicit testing may suggest that I wouldn’t be amenable to a person, brand, service, or company, but then, lo and behold, there I am endorsing, using, and buying it. My biased brain is contradicting the scientifically developed prediction algorithm that says I will not open my wallet.

I hope you’ll take a couple of lessons from this.

  • Never forgo explicit techniques in favour of implicit techniques, or vice versa. Both are always mandatory or you will have gaps in your understandings and treatments. You need to know which biases and conscious decisions relate to your brand.
  • Accept that human beings, including you, have negative biases. And that’s not a bad thing. The only bad thing is being unable to recognize and being unwilling to accept those biases.

Tipping the sacred cows of MR #IIeX 

Live note-taking at #IIeX in Atlanta. Any errors or bad jokes are my own.

Will Watson replace researchers? By Bruce Weed

  • Health data will grow 99%; Insurance data will grow 94%; Utilities data will grow 99%, and more than 80% of that data will be unstructured
  • Machines don’t make up answers; they give the answers you teach them to give
  • Now we teach machines to read images like MRIs; a doctor can’t remember an MRI from ten years ago but a machine can
  • Machines understand, reason, and learn. They can learn multiple languages too; you can teach them to read, hear, see, and understand nine languages
  • Showed all the TED talks to Watson and now it will find the relevant part of the video you want to see
  • Teach machines to do more than a keyword search; teach them to learn and understand
  • Machines are listening in on call centers and helping the agents give better answers
  • Machine learning will give us crime and threat detection, early detection of diseases, understanding of customers, and new product development
  • Machine learning makes humans smarter because it gives us capacity

Co-Creating a tailored experience to identify relevant insights leveraging advanced cognitive text analytics by Sion Agami and David Johnson

  • There are lots of five star ratings out there but not all five stars are created equally
  • Can’t approach analytics from a single dimension
  • Corpus linguistics – how people communicate
  • Olden days used to be keyword, Boolean, taxonomies
  • Now it’s NLP, machine learning, topic modeling – these are probabilistic models – 65% confidence that this is what you wanted; what if 4 different models are 65% confident?
  • Next is leveraging all methods in parallel – focus on emotions and cognitive states
  • Emotions, persona, experience, purchase path, topics are all important
  • How do you rate BOO, not like, disappointed, like, good, WOW, and then add the emoticons into the scale?
  • Algorithms can pick apart which products really are a 5
  • Fix the social media comments that are filled with emotion
  • How do you identify WOW experiences before launching a product? What is the best question to ask consumers so they can share emotions? How accurate does your model need to be? Can you measure, with confidence, what moved the needle for consumers?
  • Put new tools in front of people who are passionate, those with project specific challenges
  • Watch out for groups who think they can already do something; maybe it’s time to work together OR let the people who ARE doing it be the people who DO
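
The “what if 4 different models are 65% confident?” question above has a neat illustration. Assuming, purely for the sake of this sketch, that the models are independent sources of evidence, a naive-Bayes-style combination of their odds pools four lukewarm 65% votes into a much stronger signal; the function and numbers below are mine, not the speakers’.

```python
# Toy sketch: pooling confidence from several probabilistic text models.
# Assumes the models are independent, which is rarely true in practice.

def combine_independent(probs):
    """Combine independent probability estimates by multiplying their odds."""
    odds = 1.0
    for p in probs:
        odds *= p / (1.0 - p)
    return odds / (1.0 + odds)

# Four parallel models, each only 65% confident on its own.
pooled = combine_independent([0.65, 0.65, 0.65, 0.65])
print(f"pooled confidence: {pooled:.3f}")  # about 0.922
```

Real text models trained on the same corpus are far from independent, so this is an optimistic upper bound; running all methods in parallel, as the talk suggests, is the pragmatic response to the same problem.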

The Perils and Pitfalls of Recall Memory: How flawed recall and memory bias pollute market research with David Paull, Elizabeth Merrick, Andrew Jeavons and Elizabeth Loftus

  • [I did an entire class in graduate school on unconscious and flawed memory. I’m totally on board with this session. Love this topic. Wish I could remember more of it. Ha ha. I really do.]
  • Market research has made a lot of assumptions about how memory works that completely contrast with academic research; we can’t remember names, so how can we remember the past?
  • [we need more true academics in market research ]
  • We assume what you did in the past will predict what you will do in the future, or that we can predict it
  • Our goal is to make money, we want to know allocation of marketing dollars so we ask about recall, we just don’t have better tools though good tools are on the horizon
  • There are lots of false positive and false negatives in recall data, 15% of people misremembered receiving something [This is NOT a bad respondent or a cheater or fraud. This is real human behavior.]
  • There is more to memory than forgetting, false memories are a huge part of memory
  • It’s very easy to expose people to leading questions, misinformation, erroneous versions and to contaminate or transform people’s memories
  • You can plant entirely false memories for things that didn’t happen, it has consequences, it affects their thoughts intentions and behaviours, memory is malleable
  • They planted memories that people got sick eating something as a child and people no longer wanted to eat those foods, they planted positive memories and got people to like yuck foods more
  • Should we take advantage of this to make people happier and healthier, or use it for marketing purposes?
  • Sounds like advertising, we find a feeling like nostalgia so we put that into an ad
  • [we need more academics on stage. Most market researchers just don’t have the relevant psychology/sociology background]
  • Manipulation feels creepy but that’s a practical application
  • What is ethical – a therapist helping someone eat better, maybe not; what about a parent doing it with an overweight child?… Hello, Santa Claus. Which would you rather have, an obese child with heart problems or a child who remembers broccoli with grandma when that never happened?
  • People have a lot of fiction mixed in with their facts
  • Memory includes “what you bought at the store last week” but memory also includes meaning: “I remember the brand Uber” but I may not remember going to the store
  • Semantic memory helps us build great products
  • Memory applies to doors – we expect a pull door to look in a certain way different from a push door
  • We know we shouldn’t have long questionnaires, cognitive load is a problem, that hurts recall, we need to make it easy for people to recall episodic memories, it’s very shaky to ask people to remember the past
  • At least get the recollection as soon as possible, as events are happening; need to get it before they interact with other people; responses influence each other, and it doesn’t matter if it’s a focus group: early responses affect what people say later on
  • Automated systems can help remove some biases, qualitative is less and less reliant on humans
  • Think about the biases of respondents and of yourself
  • If people know they can look up the information later, they won’t try to remember it, we no longer bother to remember phone numbers, passwords are a huge problem
  • Does the precise memory matter more than the feeling, we can alter the feelings people have about products
  • “You told us you were a 4 on that scale” and many people won’t remember that they originally said it was a 2
  • Must think carefully about the outcome you need; be realistic about when you need precise memories vs insightful memories; knowing it was the 37th floor of a building may not matter because all that was important was that it was high

Social Disruption: The vertical network arrives by Ashlyn Berg

  • There are many social networks specific to careers – ResearchGate is for PhDs, Github, ZumZero, SpiceWorks
  • Community aspect – online home for professionals to interact with their peers
  • Content – users share millions of original and shared content to stay up to date on trends and do their jobs better
  • Apps and tools – help people get their job done
  • Mostly for free so people engage for a long time
  • 1-stop shop for marketers, place to build relationships, platforms for research
  • Easy place to investigate needs and challenges of your audience
  • Better platform for research over other alternatives
  • Rich profile information, not just info about their company; know which apps they use, the hardware and software they use, a massive amount of behavioral insight
  • Vertical network is very clean data, social behavior is clean, it is the real audience you want to talk to, not someone who wanted an incentive 

Questioning the questionnaire – using games to reveal self-report biases by Amber Brown and Joe Marks #CASRO #MRX

Live blogging from Nashville. Any errors or bad jokes are my own.

– surveys that aren’t well designed have social desirability bias, aspirational biases, demand characteristics, satisficing
– games can help with some of these if they are properly designed
– purchase/visit intent can have problems as people want to please you, are aspirational in their answers with little follow through, similar to charitable giving and exercise
– study asked about prior and future behaviour
– people were offered either cash or theme park tickets and then asked whether they planned to visit the park – would they take the cash (they probably won’t go) or would they take the tickets (they probably will go) (Cash is always less)
– for a charity company – will you donate your incentive to a charity or take the cash (cash is always less)
– for an exercise company – will you take a sports authority gift card or a cash incentive (cash is always less)
– for readership – will you take a book store gift card or cash
– the incentive choice was a good predictor of the intent question
– games engage instinctual thinking. you’re just trying to win. people play games every day. it’s faster and gives less time for biases to creep in
– the test is actual choice behaviour, which is similar to the marketplace
– would you be willing to donate to Wikipedia? real case study – do you want $10 in cash or to donate $50 to Wikipedia? 14% chose the $10 donation but 2% chose the $30 donation
– the game comes much closer to real behaviour
– can help to counter biases that poorly designed surveys may have
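
The incentive-choice idea lends itself to a simple hit-rate comparison: treat the choice of tickets over cash as a behavioural prediction of visiting, and score it against what people actually did. The respondent records below are invented for illustration; the talk did not share record-level data.

```python
# Toy sketch: scoring incentive choice vs stated intent as predictors of behaviour.
# All respondent records are invented for illustration.

def hit_rate(predictions, outcomes):
    """Share of cases where the predictor matched actual behaviour."""
    hits = sum(p == o for p, o in zip(predictions, outcomes))
    return hits / len(outcomes)

# Each tuple: (said they would visit, chose tickets over cash, actually visited)
respondents = [
    (True,  True,  True),
    (True,  False, False),  # aspirational stated intent, but took the cash
    (True,  True,  True),
    (True,  False, False),  # again: says yes, behaves no
    (False, False, False),
]

stated  = [r[0] for r in respondents]
choice  = [r[1] for r in respondents]
visited = [r[2] for r in respondents]

print("stated intent hit rate:", hit_rate(stated, visited))    # 0.6
print("incentive choice hit rate:", hit_rate(choice, visited)) # 1.0
```

In this made-up sample the behavioural measure wins because the aspirational answers never turn into visits, which is exactly the gap the speakers describe.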

[i want to read the paper on this one. very cool!]

What Do Regression Models Indicate? #MRX

I just returned from two of the best marketing research conferences out there, ESOMAR and WAPOR, and was flipping through the notebook of rants and raves that I create as I listen to speakers. Interestingly, even at these conferences, where the best of the best speak, I heard a certain phrase repeatedly.

“The regression model indicated…”
“The data indicated…”
“The results indicated…”

Well you know what? The data indicated absolutely nothing. Zip. Zilch. Zero.

Data is data. Numbers in a table. Points in a chart. Pretty diagrams and statistical output.

The only thing that indicated anything is you. YOU looked at the data and the statistical output and interpreted it based on your limited or extensive skills, knowledge, and experience. If I were to review your data, my skills, knowledge, and experience might say that it indicates something completely different.

Data are objective and indicate nothing. Take responsibility for your own interpretations.

(Me at ESOMAR)

Minimizing Nonresponse Bias (GREAT session) #AAPOR #MRX

AAPOR… Live blogging from beautiful Boston, any errors are my own…

AAPOR Concurrent Session A

Thursday, May 16, 1:30 p.m. – 3:00 p.m.
5 papers on Minimizing Nonresponse Bias (MANY speakers, check the website)

First paper

  • Evaluation and Use of Commercial Data for Nonresponse Bias Adjustment
  • Using commercially available data to estimate non-responders, there is a lack of this info
  • RDD is showing increased non-response
  • Census aggregate data is uninformative of nonresponse bias due to error in matching, and low associations with the survey variables
  • Purchased commercial data is better, e.g., filling out your toaster registration card [this is why I NEVER register anything. Registering products is ONLY for marketing purposes.]. There can still be lots of missing data, weighting may not be possible, and there can be high rates of error.
  • Allow the use of incomplete auxiliary data, use multiple imputation
  • They had success matching adults of the same age/gender in the same household
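
The “incomplete auxiliary data, use multiple imputation” bullet can be sketched in a few lines. This shows the general technique only, not the paper’s model: fill the gaps several times with plausible draws, estimate each completed dataset, and pool the estimates. The income figures are invented, and None marks a failed commercial-data match.

```python
# Minimal sketch of multiple imputation for incomplete auxiliary data.
import random
import statistics

incomes = [52000, None, 61000, None, 48000, 75000, None, 58000]

def impute_once(values, rng):
    """Fill each missing value with a random draw from the observed values."""
    observed = [v for v in values if v is not None]
    return [v if v is not None else rng.choice(observed) for v in values]

def multiple_imputation_mean(values, m=20, seed=42):
    """Pool the mean across m completed datasets (the simplest case of Rubin's rules)."""
    rng = random.Random(seed)
    estimates = [statistics.mean(impute_once(values, rng)) for _ in range(m)]
    return statistics.mean(estimates)

print(round(multiple_imputation_mean(incomes)))
```

A full treatment would also pool the variances so the imputation uncertainty shows up in the standard errors, which is the part that makes any subsequent weighting defensible.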

Second paper

  • Interviewer Observations vs. Commercial Data:

    Which is Better for Nonresponse Bias Correction?

  • Should we budget for commercial data or interviewer observations?
  • How does the quality and usefulness compare?
  • An observer can make a record of a non-responder as the person says “I don’t want to participate”
  • Employment benefits study for social status, family type, house type, foreigners, young people
  • Study tested for observations and commercial data separately and together
  • [Wondering how important this information is given the recent miss on polling in BC Canada]
  • Results show that observer evaluations for unemployment benefits were pretty good, income not so much; Commercial data not so good, worked better for a general population not an unemployed population
  • Can’t say consistently which method was best for predicting employment benefits but commercial data was the worst
  • Make sure the observations are related to the survey topic
  • [cool paper!]
  • Similar costs for both methods

Third paper

  • Assessing the Reliability of Unit Level Auxiliary Data in RDD Surveys: NHTSA Distracted Driving Survey
  • [Why are there different standards of research for government research and private research?]
  • Purchased demographic data from a panel company, did a distracted driving survey
  • 18 minute interview [how many responders were just nodding their heads in agreement by the end?]
  • [Funny how we assume there is RIGHT data. Just because you bought or created data doesn’t mean it’s true.]
  • Landline and mobile surveys differ by 10% in terms of response by the youngest people (20% vs 30%)
  • Across 4 methodologies, demographic frequencies differ by 10% to 40%; match rates between those on multiple datasources as low as 20%
  •  Auxiliary data tend to be household characteristics while interview data tends to be individual characteristics
  • Agreement rates are too low for most measures to facilitate non-response analysis
  • But, auxiliary data might be useful for underrepresented segments

Fifth paper

  • [Awesome ten dollar word title]
  • Comparative Ethnographic Evaluations of Enumeration Methods Across Race/Ethnic Groups in the 2010 Census Nonresponse Follow-up and Update Enumerate Operations
  • Ethnographers were to accompany an interviewer and observe live interviews, and tape them if they had permission, to identify issues with enumeration
  • Study targeted a number of race/ethnic groups as well as a control group
  • “No data source is truth”  [ha! that was my side remark further up this page 🙂  ]
  • Sources of inconsistency, in order, are interviewer error, mobility/tenuousness, respondent concealment/refusal, addresses missed, not in the census, respondent confusion, and language barrier
  • 37% of questions were read correctly or with appropriate corrections — two thirds were changed by the interviewer!!!
  • How do you get into apartment buildings? What about occupancy hotels? No buzzer boxes? Unlabeled units? In the middle of nowhere requiring a helicopter to get there?

10 reasons why you don’t know why you do what you do #MRX

Ponder this list for a moment…

  1. Social desirability: Some people tend to answer questions in a way that makes them look good to other people. “I truly believe that men and women are equal even though I’ve never given a raise to nor promoted any women who’ve worked for me.”
  2. Order effects: Items that are earlier in a list get chosen and remembered more often than those later in the list. Because this item is second in a list of ten, chances are you will be more likely to remember this bias.
  3. Interviewer effect: The demographic and psychographic characteristics of an interviewer affect the responses given by an interviewee. “It doesn’t matter that we’re both women; I still would have told a man that women are smarter.”
  4. Acquiescence bias: Someone who tends to answer a question with agreement regardless of what the question is. “Of course I agree with you that I shouldn’t be paid for working overtime.”
  5. Recall bias: The way we answer a question is affected by our memory of an event. “I don’t remember hearing anything about a product recall so it couldn’t have been a big deal.”
  6. Optimism bias: People believe they are less likely to experience a negative event than other people are. “I won’t get sick from smoking even though most people who smoke end up with some smoking related illness.”
  7. Cognitive dissonance: In order to feel better about themselves, people find a logical reason for their negative actions. “I had to cut that guy off in traffic or the receipts on my dashboard would have flown all over the place.”
  8. Anchoring: People often rely heavily on a single trait when making decisions. “That guy is just a stupid idiot. He forgot to set his alarm, so obviously he can’t do anything right.”
  9. Self-serving bias: People claim more responsibility for the good things that happen to them than for the bad things. “I worked really hard for my raise but all those problems with my work are because my colleague screwed up my filing system.”
  10. Dunning Kruger effect: People who are lacking in a skill overestimate their own skill in that area. “I took an introductory statistics course in my undergrad so I could easily do that factor analysis.”

Clearly, people’s opinions are affected by myriad unconscious effects that prevent them from accessing true answers. Now tell me, if we’ve been teaching and learning about response biases in school and we learn oh so many more on the job as market researchers, why do we ask research participants to verbalize responses to the following questions:

  • Why did you buy Brand A?
  • Why did you choose red over blue?
  • Why did you use the $1 coupon for the $3 item but not the $2 coupon for the $4 item?

Why? Why oh why?

Arriving Yesterday: A New Era of Research

We all know this. Response rates continue to decline despite all efforts to improve them. We’re working on taking advantage of rich media questions that make the survey taking experience more fun. We’re working on cell phone surveys so that some surveys can be moved into a different, possibly more engaging, format. We’re developing communities and social networks to keep survey responders happy.

This is all good stuff and it’s important, but will it be enough? Will this keep our industry afloat?

It seems to me that social media has ushered in a new era of research. It didn’t start with researchers and we didn’t ask for it. But it’s here.

This new world has lots of good stuff in it. There is no such thing as declining response rates. There are no order effects, no question biases, no leading statements, no interviewer effects. There aren’t even any incentive costs, though let’s not count that out just yet.

What it does have is millions upon millions of unprompted, genuine opinions about the most minuscule and the most topical issues. It has opinions from people who’ve never answered an online survey before, and from people who gave up answering online surveys ages ago. It has opinions from chat leaders and early adopters, influencers and thought leaders. It has breaking issues, ongoing issues, and issues we never even knew were issues.

Of course, it means that we’ve got a ton of new issues to work through, but for someone like me, who loves the challenge of research on research, this is just more good news.

It sounds to me like research using social media has a ton of advantages. So you might as well come along for the ride because the train is moving forward whether you like it or not.

I’m Told I Have No Opinion


I love Merrell shoes. The soles are like walking on sponges and the designs are really cool. I love Kitchenaid appliances because they look good, they are sturdy, and baking is so much easier. I love Tall Girl clothing because the pants are just the right length and the price is right if you get the sales. I could bore you profusely about even more merits and downfalls of each but unfortunately, I am not permitted to have an opinion of them.

You’ve seen those screener questions. “Do you or any member of your household work in the marketing research industry?” Why is this question so important? Why can’t I answer the survey if I do work in the industry?

Perhaps I am going to steal confidential questions.
Perhaps I am going to try to skew data because I have a competitive client.
Perhaps my opinions will be biased because I understand the purposes of the questions.

Well first, various research codes say I must behave ethically which means no stealing and no biasing.
Second, as I rush through the survey like everyone else who is disgusted with low quality questions and bored with ridiculous questions, I am certainly not paying any more attention to the purposes of the questions than anyone else.
And third, why aren’t my opinions valid? Don’t Merrell and Kitchenaid and Tall Girl want to know what one of their most loyal customers thinks? Even worse, since my spouse lives in the same household as someone who works in the marketing research industry, why aren’t his opinions valid? He’s never even heard of a Likert scale and would probably stab himself in the heart if I tried to explain it to him.

Besides, if I truly am going to lie and cheat and bias answers, why would I ever confess that I work in market research so please don’t show me your survey?

And if you’re curious, I DO answer competitive surveys (just like you do). My rule is answer every question, except the screener, completely honestly or don’t finish the survey. Just how I’d want to be treated.

