Measuring the unconscious through implicit techniques is in vogue right now, and I’ll admit that I’ve been a huge fan of them for a couple of decades, ever since I got to use a tachistoscope in university. Implicit techniques are based on the premise that people’s feelings, opinions, and attitudes are often not accessible to conscious awareness. You’re probably most familiar with this in terms of people not recognizing or admitting that they are sexist, racist, homophobic, or xenophobic. Or, at least, the extent to which they are —ist or —ic.
Implicit techniques often entail having people do word or image comparisons at super-high speeds. For instance, you might ask people to assign one set of 100 words (e.g., adventurous, bewildered, debonair, heroic, birthday balloons, seaside, pyramids) to a couple of brands in under a minute. A choice must be made for every single word. The reasoning behind this technique is that decisions are made too quickly for logical thought to occur. Rather, gut feelings, the unconscious mind, the reptilian brain are the only processes being accessed.
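To make this concrete, here is a rough sketch of how such a rapid-assignment task might be instrumented. Everything in it is hypothetical: the word list, the “Brand A/B” labels, and the one-minute limit are placeholders for illustration, not any vendor’s actual tool.

```python
import random
import time

# Hypothetical stimulus set and brands -- placeholders, not from any real study.
WORDS = ["adventurous", "bewildered", "debonair", "heroic",
         "birthday balloons", "seaside", "pyramids"]
BRANDS = {"a": "Brand A", "b": "Brand B"}
TIME_LIMIT = 60.0  # force the whole task into one minute


def run_task():
    """Present each word and record the brand chosen plus response time."""
    responses = []
    random.shuffle(WORDS)
    start = time.monotonic()
    for word in WORDS:
        if time.monotonic() - start > TIME_LIMIT:
            break  # out of time; a real tool would end the task here
        t0 = time.monotonic()  # response time measured from first presentation
        choice = ""
        while choice not in BRANDS:
            choice = input(f"{word} -> Brand [a/b]? ").strip().lower()
        responses.append((word, BRANDS[choice], time.monotonic() - t0))
    return responses


if __name__ == "__main__":
    for word, brand, rt in run_task():
        print(f"{word:18s} {brand:8s} {rt:5.2f}s")
```

The point of the speed pressure is visible in the code: each response is timestamped, so slow, deliberated answers can be flagged or discarded.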
But what about this scenario?
I KNOW I am sexist, racist, homophobic, and xenophobic. I was raised in that culture and it is embedded in me. Growing up, I saw sexism and racism all over the media and, today, I see homophobia and xenophobia all over the media. At this point in my life, it would be massively hard to make that part of me disappear. Fortunately, what I can do, and what I have done, is to recognize that part of me so damn fast that it has minuscule effects on my actions. I know these biases exist in every socialized human being (ah, the innocence of babies who haven’t yet been taught to be biased!) and I actively tell myself that those feelings are wrong. I’ve actively moved the treatment of those thoughts and feelings from the unconscious to the conscious.
Which brings me to my main point. It doesn’t make sense to always and only measure the unconscious. Why? Because my actions tell a completely different story than my unconscious brain reveals. Implicit testing may suggest that I wouldn’t be amenable to a person, brand, service, or company, but then, lo and behold, there I am endorsing, using, and buying it. My biased brain is contradicting the scientifically developed prediction algorithm that says I will not open my wallet.
I hope you’ll take a couple of lessons from this.
- Never forgo implicit techniques in favour of explicit techniques, or vice versa. Both are always mandatory or you will have gaps in your understanding and treatment. You need to know which biases and conscious decisions relate to your brand.
- Accept that human beings, including you, have negative biases. And that’s not a bad thing. The only bad thing is being unable to recognize, and unwilling to accept, those biases.
Questioning the questionnaire – using games to reveal self-report biases by Amber Brown and Joe Marks #CASRO #MRX
Live blogging from Nashville. Any errors or bad jokes are my own.
– surveys that aren’t well designed have social desirability bias, aspirational biases, demand characteristics, satisficing
– games can help with some of these if they are properly designed
– purchase/visit intent can have problems as people want to please you and are aspirational in their answers with little follow-through, similar to charitable giving and exercise
– study asked about prior and future behaviour
– people were offered either cash or theme park tickets and then asked whether they planned to visit the park – would they take the cash (they probably won’t go) or would they take the tickets (they probably will go) (cash is always less)
– for a charity company – will you donate your incentive to a charity or take the cash (cash is always less)
– for an exercise company – will you take a sports authority gift card or a cash incentive (cash is always less)
– for readership – will you take a book store gift card or cash
– the incentive choice was a good predictor of the intent question [see the sketch at the end of these notes]
– games engage instinctual thinking. you’re just trying to win. people play games every day. it’s faster and gives less time for biases to creep in
– the test is actual choice behaviour which is similar to the marketplace
– would you be willing to donate to wikipedia? real case study – do you want $10 in cash or donate $50 to wikipedia? 14% chose the $10 donation but 2% chose the $30 donation
– the game comes much closer to real behaviour
– can help to counter biases that poorly designed surveys may have
[i want to read the paper on this one. very cool!]
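[To illustrate the idea, here’s a minimal sketch of scoring an incentive choice against a stated-intent question, treating the choice as a behavioural prediction. The respondent records are invented for illustration; the authors’ actual analysis surely differs.]

```python
# Minimal sketch: how well does an incentive choice predict stated intent?
# All records below are invented for illustration.
respondents = [
    {"incentive": "tickets", "stated_intent": "will visit"},
    {"incentive": "tickets", "stated_intent": "will visit"},
    {"incentive": "cash",    "stated_intent": "will visit"},
    {"incentive": "cash",    "stated_intent": "won't visit"},
    {"incentive": "cash",    "stated_intent": "won't visit"},
    {"incentive": "tickets", "stated_intent": "won't visit"},
]

# Treat choosing the tickets as a behavioural prediction of "will visit".
predicted = ["will visit" if r["incentive"] == "tickets" else "won't visit"
             for r in respondents]
stated = [r["stated_intent"] for r in respondents]

# Percentage agreement between the behavioural choice and the verbal answer.
agreement = sum(p == s for p, s in zip(predicted, stated)) / len(respondents)
print(f"Incentive choice matched stated intent {agreement:.0%} of the time")
```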
I just returned from two of the best marketing research conferences out there, ESOMAR and WAPOR, and was flipping through the notebook of rants and raves that I create as I listen to speakers. Interestingly, even at these conferences, where the best of the best speak, I heard a certain phrase repeatedly.
“The regression model indicated…”
“The data indicated…”
“The results indicated…”
Well you know what? The data indicated absolutely nothing. Zip. Zilch. Zero.
Data is data. Numbers in a table. Points in a chart. Pretty diagrams and statistical output.
The only thing that indicated anything is you. YOU looked at the data and the statistical output and interpreted it based on your limited or extensive skills, knowledge, and experience. If I were to review your data, my skills, knowledge, and experience might say that it indicates something completely different.
Data are objective and indicate nothing. Take responsibility for your own interpretations.
AAPOR Concurrent Session A
Thursday, May 16, 1:30 p.m. – 3:00 p.m.
5 papers on Minimizing Nonresponse Bias (MANY speakers, check the website)
- Evaluation and Use of Commercial Data for Nonresponse Bias Adjustment
- Using commercially available data to estimate non-responders; there is a lack of this info
- RDD is showing increased non-response
- Census aggregate data is uninformative of nonresponse bias due to error in matching, and low associations with the survey variables
- Purchased commercial data is better, e.g., filling out your toaster registration card [this is why i NEVER register anything. registering products is ONLY for marketing purposes.]. There can still be lots of missing data, weighting may not be possible, can be high rates of error.
- Allow the use of incomplete auxiliary data, use multiple imputation [see the sketch below]
- They had success matching adults of the same age/gender in the same household
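[A rough sketch of the multiple imputation idea, using scikit-learn’s IterativeImputer on invented auxiliary data. The presenters’ actual method wasn’t described in detail, so treat this as one possible approach, not theirs.]

```python
import numpy as np
# IterativeImputer is still flagged experimental in scikit-learn, so the
# enabling import is required before it can be used.
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Invented auxiliary data: rows are households; columns might be income,
# household size, and years at address. np.nan marks unmatched records.
aux = np.array([
    [52000.0, 3.0, 7.0],
    [np.nan,  2.0, 12.0],
    [61000.0, np.nan, 3.0],
    [48000.0, 4.0, np.nan],
    [np.nan,  1.0, 20.0],
])

# Running the imputer several times with different seeds yields multiple
# completed datasets -- the core idea behind multiple imputation.
completed = [
    IterativeImputer(sample_posterior=True, random_state=seed).fit_transform(aux)
    for seed in range(5)
]
# Downstream estimates would be computed on each completed dataset and pooled.
print(np.mean(completed, axis=0))
```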
- Interviewer Observations vs. Commercial Data: Which is Better for Nonresponse Bias Correction?
- Should we budget for commercial data or interviewer observations?
- How does the quality and usefulness compare?
- Observer can make a record of a non-responder as the person says “I don’t want to participate”
- Employment benefits study for social status, family type, house type, foreigners, young people
- Study tested for observations and commercial data separately and together
- [Wondering how important this information is given the recent miss on polling in BC Canada]
- Results show that observer evaluations for unemployment benefits were pretty good, income not so much; Commercial data not so good, worked better for a general population not an unemployed population
- Can’t say consistently which method was best for predicting employment benefits but commercial data was the worst
- Make sure the observations are related to the survey topic
- [cool paper!]
- Similar costs for both methods
- Assessing the Reliability of Unit Level Auxiliary Data in RDD Surveys: NHTSA Distracted Driving Survey
- [Why are there different standards for government research and private research?]
- Purchased demographic data from a panel company, did a distracted driving survey
- 18 minute interview [how many responders were just nodding their heads in agreement by the end?]
- [Funny how we assume there is RIGHT data. Just because you bought or created data doesn’t mean it’s true.]
- Landline and mobile surveys differ by 10% in terms of response by the youngest people (20% vs 30%)
- Across 4 methodologies, demographic frequencies differ by 10% to 40%; match rates between those on multiple datasources as low as 20%
- Auxiliary data tend to be household characteristics while interview data tends to be individual characteristics
- Agreement rates are too low for most measures to facilitate non-response analysis [see the sketch after these notes]
- But, auxiliary data might be useful for underrepresented segments
- Comparative Ethnographic Evaluations of Enumeration Methods Across Race/Ethnic Groups in the 2010 Census Nonresponse Follow-up and Update Enumerate Operations [Awesome ten dollar word title]
- Ethnographers were to accompany an interviewer and observe live interviews, taping them if they had permission, to identify issues with enumeration
- Study targeted a number of race/ethnic groups as well as a control group
- “No data source is truth” [ha! that was my side remark further up this page 🙂 ]
- Sources of inconsistency, in order, are: interviewer error, mobility/tenuousness, respondent concealment/refusal, addresses missed, not in the census, respondent confusion, language barrier
- 37% of questions were read correctly or with appropriate corrections — two thirds were changed by the interviewer!!!
- How do you get into apartment buildings? What about occupancy hotels? No buzzer boxes? Unlabeled units? In the middle of nowhere requiring a helicopter to get there?
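[Since agreement rates came up more than once in this session, here’s a minimal sketch of computing them. All records are invented and the variable names are placeholders, not the NHTSA study’s actual measures.]

```python
# Rough sketch of the agreement-rate idea: for each respondent matched to a
# purchased record, compare what the auxiliary data says to what the
# interview collected. All records below are invented.
interview = [
    {"id": 1, "age_band": "18-24", "owns_home": "yes"},
    {"id": 2, "age_band": "35-44", "owns_home": "no"},
    {"id": 3, "age_band": "55-64", "owns_home": "yes"},
]
auxiliary = {
    1: {"age_band": "25-34", "owns_home": "yes"},
    2: {"age_band": "35-44", "owns_home": "yes"},
    3: {"age_band": "55-64", "owns_home": "yes"},
}

for var in ("age_band", "owns_home"):
    # Count matched records and how often the two sources agree on this variable.
    matched = sum(r["id"] in auxiliary for r in interview)
    matches = sum(
        r[var] == auxiliary[r["id"]][var]
        for r in interview if r["id"] in auxiliary
    )
    print(f"{var}: {matches / matched:.0%} agreement across {matched} matched records")
```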
Ponder this list for a moment…
- Social desirability: Some people tend to answer questions in a way that makes them look good to other people. “I truly believe that men and women are equal even though I’ve never given a raise to nor promoted any women who’ve worked for me.”
- Order effects: Items that are earlier in a list get chosen and remembered more often than those later in the list. Because this item is second in a list of ten, chances are you will be more likely to remember this bias.
- Interviewer effect: The demographic and psychographic characteristics of an interviewer affect the responses given by an interviewee. “It doesn’t matter that we’re both women; I still would have told a man that women are smarter.”
- Acquiescence bias: Someone who tends to answer a question with agreement regardless of what the question is. “Of course I agree with you that I shouldn’t be paid for working overtime.”
- Recall bias: The way we answer a question is affected by our memory of an event. “I don’t remember hearing anything about a product recall so it couldn’t have been a big deal.”
- Optimism bias: People believe they are less likely to experience a negative event than other people are. “I won’t get sick from smoking even though most people who smoke end up with some smoking related illness.”
- Cognitive dissonance: In order to feel better about themselves, people find a logical reason for their negative actions. “I had to cut that guy off in traffic or the receipts on my dashboard would have flown all over the place.”
- Anchoring: People often rely heavily on a single trait when making decisions. “That guy is just a stupid idiot. He forgot to set his alarm so obviously he can’t do anything right.”
- Self-serving bias: People claim more responsibility for the good things that happen to them than the bad things. “I worked really hard for my raise, but all those problems with my work are because my colleague screwed up my filing system.”
- Dunning-Kruger effect: People who are lacking in a skill overestimate their own skill in that area. “I took an introductory statistics course in my undergrad so I could easily do that factor analysis.”
Clearly, people’s opinions are affected by myriad unconscious effects that prevent them from accessing true answers. Now tell me, if we’ve been teaching and learning about response biases in school and we learn oh so many more on the job as market researchers, why do we ask research participants to verbalize responses to the following questions:
- Why did you buy Brand A?
- Why did you choose red over blue?
- Why did you use the $1 coupon for the $3 item but not the $2 coupon for the $4 item?
Why? Why oh why?
I love Merrell shoes. The soles are like walking on sponges and the designs are really cool. I love Kitchenaid appliances because they look good, they are sturdy, and baking is so much easier. I love Tall Girl clothing because the pants are just the right length and the price is right if you get the sales. I could bore you profusely about even more merits and downfalls of each but, unfortunately, I am not permitted to have an opinion of them.
You’ve seen those screener questions. “Do you or any member of your household work in the marketing research industry?” Why is this question so important? Why can’t I answer the survey if I do work in the industry?
Perhaps I am going to steal confidential questions.
Perhaps I am going to try to skew data because I have a competitive client.
Perhaps my opinions will be biased because I understand the purposes of the questions.
Well first, various research codes say I must behave ethically which means no stealing and no biasing.
Second, as I rush through the survey like everyone else who is disgusted with low quality questions and bored with ridiculous questions, I am certainly not paying any more attention to the purposes of the questions than anyone else.
And third, why aren’t my opinions valid? Don’t Merrell and Kitchenaid and Tall Girl want to know what one of their most loyal customers thinks? Even worse, since my spouse lives in the same household as someone who works in the marketing research industry, why aren’t his opinions valid? He’s never even heard of a Likert scale and would probably stab himself in the heart if I tried to explain it to him.
Besides, if I truly am going to lie and cheat and bias answers, why would I ever confess that I work in market research so please don’t show me your survey?
And if you’re curious, I DO answer competitive surveys (just like you do). My rule is answer every question, except the screener, completely honestly or don’t finish the survey. Just how I’d want to be treated.