What’s up Canadian researchers! In recent weeks…
- CRIC set out a statement of purpose and policies to help market research companies in Canada.
- ESOMAR announced a partnership with CRIC to help individual researchers whose companies are members of CRIC.
But what about students, academics, government employees, freelancers like me, and all the individual researchers from Canadian companies that aren’t members of CRIC? Where do these people turn?
Fortunately, I’ve helped provide a Canadian perspective to a lot of really great organizations over the years including:
- ISO: I’ve been the Canadian Chair of the International Organization for Standardization’s TC 225: Market, Opinion, and Social Research committee since 2014
- Insights Association: I was part of the MRA Research Advisory Board (2016), and worked on the MRA/IMRO Guide to the Top 16 Social Media Research Questions (2010)
- ESOMAR: I helped with the ESOMAR/GRBN Guideline on Social Media Research (2017), ESOMAR 24 Questions for Social Media Research (2010), ESOMAR Guide to Market Research (2016)
- AAPOR: I helped AAPOR with the planning group for council on diversity (2016), national conference planning (2018), board nominations (2015), conference code of conduct (2017)
In other words, I’ve seen first hand that these associations have decades of experience in promoting high quality standards and ethics in our industry and have been longtime supporters of the Canadian industry as well.
You will be extremely well served as an individual member of one of these four associations.
- AAPOR: If you’re an academic, polling geek, or into social and political research, this is a great association for you. Even better, their annual conference will be in Toronto this May. I’m helping them organize the chapter event!
- ESOMAR: If you conduct research around the world or want to stay in touch with what’s new and amazing in countries beyond our borders, look no further. Esomar is a great choice for you even if your company is not a member of CRIC.
- Insights Association: If much of your work is conducted in North America, why not say howdy to our neighbours to the south, who share our time zones!
- QRCA: Oh quallies, you’ve built something amazing here. If you’re a quallie and not already a member, correct that mistake post haste!
There are, of course, other options. But before jumping into one, do your homework. Make sure the association and association leaders you choose have a solid foundation and proven track record of promoting high standards and ethical behaviours, and are viewed as gold-standard providers by our industry leaders.
If you aren’t sure which association is right for you, talk to several of your clients or research providers. Find out which associations they know and trust. And if you’re still stuck, I’d be happy to help you out. Send me a quick message.
[Side note: MRIA-TT progress is slow. We don’t yet have an option to add to the above list.]
As a conference speaker, the best sales pitch you can offer on stage is a presentation that educates and entertains the audience: one that shows them you understand what they need.
I chat with a lot of speakers who assure me they didn’t do a sales pitch and then are astonished to find out that they did. I also chat with other speakers who are so paranoid about NOT doing a sales pitch that they strip out all the good parts of their presentation. Fortunately, there are some easy things you can do to prevent both of these situations.
Ban these words
Never say the word we. Never say the word our. Never say the word us. These tiny unassuming words automatically turn the most glorious presentation into a horrid sales pitch. And your audience has no need for a sales pitch. They are sitting in front of you because they are desperate for knowledge and insights. They want to know your personal opinion, what you have discovered from your techniques. They want to engage with and listen to you as a person. They’d rather not tweet how boring and out of touch you were.
Don’t name-drop your products
Companies spend thousands of dollars trademarking brand names. While it’s helpful to have names so that your employees and your clients know that they’re all talking about the same thing, no one in the audience cares about your cutesy names. They don’t care that you use SalesForce or SurveyMonkey. They care that you understand marketing and research. So if you find yourself wanting to say the name of a tool while you’re talking, instead simply say ‘these types of tools’ or ‘these types of companies.’ I can assure you that you don’t need to use any of your brand names or trademarked names in your presentation.
Don’t describe your company
Your audience doesn’t care about your company and they certainly don’t need you to present a detailed explanation of all the products and services your company offers, even if that slide only takes 3 minutes. That slide explaining your company needs to be turned into a discussion of how your specific topic impacts the industry. Don’t tell the audience that Annie Pettit Consulting is a business that combines artificial intelligence and eye tracking. Instead, tell the audience that eye tracking has seen huge advancements with the application of artificial intelligence. Strip out the branded content and focus on the educational content.
Don’t describe your company philosophy
Don’t waste valuable presentation time talking about your company mission and philosophy. It is not important for the audience to understand your company philosophy in order to understand the research. The audience doesn’t need to know that your company believes research should be easy. The audience DOES need to know how research can be made easy. They also don’t need to know that your mission is to solve problems. Instead, explain to them how research processes can be used to solve problems.
What is your reward?
If you do a great job of educating and entertaining your audience, they will line up to ask questions, get your business card, and they will email you afterwards asking for advice and copies of your presentation. Guaranteed.
Every person who’s ever sat in a conference audience
Demand that your conferences be Diversity Approved! (Tweet this post!)
When Canada’s new Prime Minister, Justin Trudeau, was asked why his cabinet was 50% male and 50% female, his answer was simple. Because it’s 2015. Such a simple answer to a long-standing problem.
As I look back over 2015, I see that “because it’s 2015” didn’t apply to every market research conference. Some conferences had speaker lists that were 70% male. Some conferences had speaker panels that were 100% male. Yet no conference had an attendee list that was 100% male, or even 70% male, and neither is the industry as a whole.
There are many reasons that men might be over-represented as speakers, but few that are acceptable.
- Random chance. As a lover of statistics, I accept that random chance will create some all-male panels. But since I’ve never seen an all-female panel, random chance is not what’s at play here. If you’d rather see the math, Greg Martin calculated the chance of having all-male speakers here. It’s not good.
- 70% or more of submissions were from men. That also is an acceptable reason. If women aren’t submitting, then they can’t be selected. So on that note, it’s up to you ladies to make sure you submit at every chance you get. And don’t tell me you’re not good enough to speak. I ranted on that excuse already.
- You haven’t heard of any women working in this area. This excuse is unacceptable. You can’t look for speakers only inside your own comfortable friend list. Get out of your box. Get online. There are tons of women talking about every conceivable industry issue. Find one woman and ask her for recommendations. You can start here: Data science, Marketing research, Statistics, Tech.
- The best proposals happen to be from men. This excuse is also unacceptable. It demonstrates that you believe men are better than women. You need to broaden your perception of what ‘better’ means. Men and women speak in different ways so you need to listen in different ways. It’s good for you. Try it.
- Women decline when we ask them to speak. It’s a real shame particularly if women decline invitations more often than men. But any time a woman declines, ask her for a list of people she recommends. And then consider the women on that list. No women in the list? Then specifically ask her if she knows any women.
- It’s a paid talk and they only sent men. Know what? It’s okay to remind companies that their panel isn’t representative of the industry. You can suggest that they send a broader range of people.
- We didn’t realize this was a problem. Inexcusable. Diversity has been an issue for years. People have been pointing this out to market research conferences for years. The right time to fix things is always now.
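The random-chance point in the list above is easy to sanity-check. A minimal sketch, assuming a 70%-male speaker pool and a 6-person panel (illustrative numbers of my own, not Greg Martin's actual calculation):

```python
# If speakers were drawn at random from a pool that is 70% male,
# how often would a panel end up all male?

def p_all_male(panel_size: int, male_share: float) -> float:
    """Probability that every randomly chosen panelist is male."""
    return male_share ** panel_size

print(round(p_all_male(6, 0.70), 3))   # a 6-person all-male panel: ~0.118
print(round(p_all_male(6, 0.30), 6))   # a 6-person all-female panel: 0.000729
```

Under these assumptions an all-male panel shows up by chance about one time in eight, while an all-female panel from the same pool is roughly 160 times rarer, which fits the observation that all-female panels essentially never occur.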
When was the last time you prepared a sampling matrix balanced on age, gender, and ethnicity and then were pleased when it was 70% female, 70% age 50+, and 90% white? Never, that’s when. You stayed in field and implemented appropriate sampling techniques until your demographics were representative. This is absolutely no different.
So, to every conference organizer out there, ESOMAR, CASRO, MRA, MRIA, ARF, MRS, AMSRS, ESRA, AAPOR, I challenge you to review and correct your speaker list before announcing it.
- What percentage of submissions are from men versus women? Only when submissions are far from balanced is it acceptable for the acceptance list to be unbalanced.
- Are there any all male panels? Are there any all female panels? (By the way, all female panels talking about female issues do NOT count.)
- Are more than 55% of speakers male? Are more than 55% of speakers female?
- Is the invited speaker list well balanced? There is zero reason for invited speakers to NOT be representative.
- Did you actively ask companies to assist with ensuring that speakers were diverse?
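As a hypothetical sketch of how a conference organizer might automate the checklist above: compare the gender mix of acceptances against submissions, and flag a speaker list where either gender passes the 55% threshold. The 5-point tolerance is my own assumption.

```python
# Check a speaker list against the "Diversity Approved" questions:
# does the acceptance list mirror submissions, and is neither gender
# more than 55% of speakers?

def diversity_check(sub_m, sub_f, acc_m, acc_f, threshold=0.55):
    """Return True when acceptances roughly mirror submissions and
    neither gender exceeds the threshold share of speakers."""
    male_share = acc_m / (acc_m + acc_f)
    sub_share = sub_m / (sub_m + sub_f)
    mirrors = abs(male_share - sub_share) <= 0.05   # within 5 points
    balanced = max(male_share, 1 - male_share) <= threshold
    return mirrors and balanced

print(diversity_check(100, 100, 48, 52))   # True: balanced speaker list
print(diversity_check(100, 100, 70, 30))   # False: 70% male from balanced submissions
```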
If you can give appropriate answers to those questions, I invite you to publicly advertise your conference as Diversity Approved.
Will you accept this challenge for every conference you run in 2016? Will you:
- Post the gender ratio of submissions
- Post the gender ratio of acceptances
- Proudly advertise that your conference is “Diversity Approved”
Demand that your conferences be Diversity Approved! (Tweet this demand!)
Gamification in survey research: do results support the evangelists by Lisa Weber-Raley and Kartik Pashupati #CASRO #MRX
Live blogging from Nashville. Any errors or bad jokes are my own.
– game mechanics include a back story, a game like aesthetic, rules for play and advancement, a challenge, rewards
– Gamification can be as simple as changing the way questions are worded [shout out to Jon Puleston 🙂 ]
– frame questions in way that makes responders WANT to answer them
– change a task into a game
– add an element of competition to a question such as putting a time limit
– “i engaged with a brand and all i got was this lousy badge” 🙂
– people don’t always think gamified is easier to read or answer, or quicker, or more fun; it’s a statistical difference though not a substantive difference
– should we trash gamification?
– greater survey engagement lies in dealing with the components of respondent burden. but creating a more enjoyable survey is still a worthwhile goal even if it doesn’t lead to all the claimed benefits
– did a survey on the college experience, needed to build a tool for high school students to choose a college, it’s not a genpop sample. it’s a sample that might be more inclined toward gamification
– four survey types – standard, one with photo breaks, one with letter finding game throughout the survey, one with avatar
– not many differences between these four groups [did they all get the exact same words of questions?]
– photo break people may have actually used the photos to take a break
– picture break was more enjoyable for people
– there were no differences in data quality
[i wonder what would happen if the survey was actually gamified or the questions were worded differently]
Questioning the questionnaire – using games to reveal self-report biases by Amber Brown and Joe Marks #CASRO #MRX
Live blogging from Nashville. Any errors or bad jokes are my own.
– surveys that aren’t well designed have social desirability bias, aspirational biases, demand characteristics, satisficing
– games can help with some of these if they are properly designed
– purchase/visit intent can have problems as people want to please you, are aspirational in their answers with little follow through, similar to charitable giving and exercise
– study asked about prior and future behaviours
– people were offered either cash or theme park tickets and then asked whether they planned to visit the park – would they take the cash (they probably won’t go) or would they take the tickets (they probably will go) (Cash is always less)
– for a charity company – will you donate your incentive to a charity or take the cash (cash is always less)
– for an exercise company – will you take a sports authority gift card or a cash incentive (cash is always less)
– for readership – will you take a book store gift card or cash
– the incentive choice was a good predictor of the intent question
– games engage instinctual thinking. you’re just trying to win. people play games every day. it’s faster and gives less time for biases to creep in
– the test is actual choice behaviour which is similar to the marketplace
– would you be willing to donate to wikipedia? real case study – do you want $10 in cash or donate $50 to wikipedia. 14% chose the $10 donation but 2% chose the $30 donation
– the game comes much closer to real behaviour
– can help to counter biases that poorly designed surveys may have
[i want to read the paper on this one. very cool!]
Live blogging from Nashville. Any errors or bad jokes are my own.
– are you a purist or pragmatist about quality? [i like to think i’m both. you know what’s right. do it right. i don’t see why that’s so difficult]
– why do we need a revolution? maybe self-report doesn’t always work. men report using 1.6 billion condoms, women 1.1 billion condoms per year. but companies only sell 600 million per year. [hmmmm… who to believe :)]
– 95% of our behaviour is subconscious. how are surveys doing in this area?
– the 95% of the iceberg below the waterline is what sank the titanic
– not everyone can be honest with themselves, not everyone understands why they do things, not everyone wants to be honest with you or themselves about why they do things
– you cannot think your feelings, you have to feel your feelings
– self-reported information is always filtered
– emotional metrics are subconscious – recall, call to action, preference, satisfaction, loyalty
– it’s not simply left brain, right brain. the emotional brain sends 10 times as much to the other side as it receives
– you can’t rewire the human brain. we are closer to homer simpson than Spock
– you can ASK when people are engaged during a commercial or you can measure each second
– in a commercial, people turn off in the last 5 seconds when it turns to the sell job [pay attention conference presenters]
– when people are asked to move a dial to show how they’re feeling, they forget to move the dial. that’s not very accurate
– people are particularly bad at sharing opinions about negative feelings.
– charles darwin did work on emotions – emotions are universal, even blind people across cultures have the same facial expressions, the face is spontaneous and fast as the muscles are attached directly to the skin, we have more facial muscles than anywhere else on the body
– theory refined by Paul Ekman, matched emotions to specific muscles
– believes facial coding in 3 years will be good enough and ready to go [i’ll take that bet 🙂 ]
– fMRI brain scans were the closest to their manual facial coding. automated facial coding isn’t there yet
– 40% of the time during commercials there is no reportable emotion. really, it’s not as interesting as the birth of your child.
– best joke of all time, you might laugh for 4 seconds
– political campaigns are the energizer bunny of fake social smiles
– people will say they are happiest for 12 seconds but that’s just not reality
– surprise and anxiety often go together
– we still need self-report; we need all the data we can get and self-report is about as good as other data right now
– there are many things that people won’t tell you, perhaps you can see it in their face
Live blogged from Nashville. Any errors or bad jokes are my own.
– We want surveys short and simple: to avoid straightlining and satisficing, reduce breakoffs, and prevent dropping off the panel.
– but companies are ok with panelists taking multiples surveys in a row
– are multiple short surveys better than one long survey? we assume it lets people handle fatigue better, and that if they do take another survey, that survey will be better quality. is any of this true?
– who takes multiple surveys, what are their completion rates, how good is the data, how does it affect attrition
– defined chains as all the surveys taken within 1.25 hours
– 40% of surveys are completed in chains
– younger people make more use of chains
– moderate chaining is the norm. most people average 1.5 to 3 surveys per session. about 10% average more than 3 surveys per chain.
– completion rates increase with each survey in the chain. people who want to drop already dropped out.
– buying rate is unaffected by chaining. for people who take five surveys, buying rate increases with each survey.
– why is this? panelists will take more surveys if they did not exhaust themselves in the previous survey. or maybe those with lots of buying behaviours pace their reporting. or those people are truly different. [read the paper. it’s getting too detailed for me to blog on]
– poor responders are more likely to chain, but not massively more likely
– for younger panelists, heavy chainers have greater longevity. for oldest panelists, it results in burnout.
– people who agree to chain do it because they are ready to do so. if they were exhausted by a previous survey, they don’t continue. a small minority abuse the process
– chaining helps younger panelists stay engaged
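For illustration, the 1.25-hour chain definition above might be operationalized like this. The exact rule (gap between consecutive survey start times) and the sample data are my assumptions, not necessarily the authors':

```python
# Group a panelist's survey start times into "chains": back-to-back
# surveys whose starts fall within 1.25 hours of each other.
from datetime import datetime, timedelta

def group_into_chains(start_times, gap=timedelta(hours=1.25)):
    """Group sorted survey start times into chains of back-to-back surveys."""
    chains = []
    for t in sorted(start_times):
        if chains and t - chains[-1][-1] <= gap:
            chains[-1].append(t)      # within the gap: continues current chain
        else:
            chains.append([t])        # otherwise: starts a new chain
    return chains

starts = [datetime(2015, 1, 14, 9, 0),
          datetime(2015, 1, 14, 9, 40),   # 40 minutes later: same chain
          datetime(2015, 1, 14, 15, 0)]   # hours later: new chain
print([len(c) for c in group_into_chains(starts)])  # [2, 1]
```

Once surveys are grouped this way, per-chain statistics like completion rate by position in the chain fall out naturally.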
Live blogged in Nashville. Any errors or bad jokes are my own.
Frances Barlas, Patricia Graham, and Thomas Subias
– we used to be constrained by an 800 by 600 screen. screen resolution has increased, can now have more detail, more height and width. but now mobile devices mean screen resolution matters again.
– more than 25% of surveys are being started on a mobile device, but fewer are being completed on one
– single response questions don’t serve a lot of needs on a survey but they are the easiest on a mobile device. and you have to take the time to consider each one uniquely. then you have to wait to advance to the next question
– hence we love the efficiency of grids. you can get data almost twice as fast with grids.
– myth – increasing a 3-point scale to an 11-point scale will increase your variance. not true. a range-adjusted value shows this is not true. you’re just seeing bigger numbers.
– myth – aggregate estimates are improved by having more items measure the same construct. it’s not the number of items, it’s the number of people. it improves it for a single person, not for the construct overall. think about whether you need to diagnose a person’s illness versus a gender’s purchase of a product [so glad to hear someone talking about this! it’s a huge misconception]
– grids cause speeding, straightlining, break-offs, lower response rates in subsequent surveys
– on a mobile device, you can’t see all the columns of a grid. and if you shrink it, you can’t read or click on anything
– we need to simplify grids and make them more mobile friendly
– in a study, they randomly assigned people to use a device that they already owned [assuming people did as they were told, which we know they won’t 🙂 ]
– only half of completes came in on the assigned device. a percentage answered on all three devices.
– tested items in a grid, items one by one, and an odd one which is items in a list with one scale on the side
– traditional method was the quickest
– no differences on means
[more to this presentation but i had to break off. ask for the paper 🙂 ]
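The range-adjustment myth debunked above can be shown numerically: stretching a 3-point scale onto an 11-point scale inflates the raw variance, but dividing by the squared scale range shows the spread hasn't changed. A minimal sketch with made-up responses:

```python
# Stretch 3-point responses onto an 11-point scale and compare
# raw variance against range-adjusted variance.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

three_pt = [1, 2, 2, 3, 1, 3, 2]
eleven_pt = [1 + (x - 1) * 5 for x in three_pt]   # map 1..3 onto 1..11

raw3, raw11 = variance(three_pt), variance(eleven_pt)
adj3 = raw3 / (3 - 1) ** 2      # divide by squared scale range
adj11 = raw11 / (11 - 1) ** 2

print(raw11 > raw3)                        # True: "just bigger numbers"
print(round(adj3, 4) == round(adj11, 4))   # True: same adjusted spread
```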
Live blogged from Nashville. Any errors or bad jokes are my own.
children have more internet access than adults. their homes are littered with devices. they start with a leap-pad and download games for it. have it in the car and it goes everywhere with them. then they get a nintendo. they are in-tune with mobile. they are the first generation to grow up with tech. today’s students are not the people our education system was designed to teach.
classrooms rely on tech early now. clickers for interaction. interactive reading solutions. reading apps. smart boards instead of chalk boards. many schools have some iPads as standard in the classroom.
designing surveys for kids. we are working on device-agnostic and respondent-friendly surveys. but we rarely place focus on survey design for kids, especially when focused on mobile.
Do kids really go onto the computer for 30 minutes to answer a survey? [My response – HA HA HA HA HA HA HA. Oh sorry. No, i don’t think so. ]
They did qual and quant to figure out how kids think about and use surveys.
– parents are not concerned with kids using their phone
– kids prefer less than ten minutes
– age 11 to 17 say they rarely use computers!!!
– children read every single question and respond very carefully
– easy concepts may actually be difficult for them
– testing is critical
– responses need to be different to avoid confusion
– less wording is essential
– more engaging question types are easier for them to understand
– simplified scales are more easily processed, maybe using images
– use more imagery, bigger buttons
[this is funny – dear 4 year old – how likely are you to recommend this product to your friends, family, and colleagues?]
– kiddie fingers aren’t as precise with hitting buttons especially when survey buttons are close to phone buttons
– kids don’t understand our concepts of new, different, intent, believability
– kids up to age ten are much more likely to get help from a parent (60% or more); this falls to 15% with older teens
– a pre-recruit is helpful, then send the official invite/portal, then again get parental permission
– response rates are higher on tablets, smartphones next, computers worst
– LOI is longer on smartphones, shortest on computers
– people on smartphones felt there were too many questions
– click rates vary by device but the end conclusions are the same [cool data here]
– ideal length is around 10 minutes
– 3 point scales may be enough [hallelujah! Do we TRULY need ten or five point scales in marketing research? i think in many cases it’s a selfish use not a necessary use.]
Live blogged in Nashville. Any errors or bad jokes are my own. Any typos are purely the fault of the ipad. [Peace and love to Gina.]
by Tom Webster
[I love Tom’s work. Delighted to be at one of his talks!]
Edison Research does the exit polls for CNN; they have interviewers all across the nation waiting outside the polls. we need to understand the mobile human.
mobile is not a phone. it’s mobile behaviour. business used to be personal: the butcher knew your name, kept a tab, and knew what cut of meat you wanted. remember Norm from Cheers – everyone always knew his name, they yelled his name, he never takes out his wallet, there’s always a beer at his seat, and no one is ever in ‘his’ seat.
[Hey Tom, don’t say you don’t understand numbers!]
It’s not about the phone. Giant outdoor concert, lollapalooza. everyone had a wristband. sold a lot more beer that way. let people forget about their phones.
Buc-ee’s is a convenience store/gas chain in texas. they thought about mobile humans. no phone strategy. they thought about the number one need: the bathroom. they built palatial restrooms. they are cleaned every ten minutes. you will ‘hold it’ to get to their store. that’s a mobile strategy – think about the mobile human
How do we make our customers feel like ‘Norm’?
There is a huge gap. when people search for a brand online, it isn’t really a search. it’s capturing a brand you already knew you wanted to buy because your friend said it was awesome, not because you saw a google search ad. Google gets credit for what are really offline social transactions.
we rarely measure things on the same scale – billboards, banner ads, audio, podcasts – how do you compare these?
what solves the gap? our mobile device which is with us when we are online and offline. it is the common point. expands how we think of mobile research and surveys.
[Tom is a great speaker! Good off the cuff humour. 🙂 ]
Tesla really doesn’t have dealers or places to try their car. Most of what you know is from social. And most people DO know about Tesla. When he test drove the car, they could have asked him whether he liked the car, whether he would buy or recommend it. but all they did was say thank you. they could have gathered so much information in that ‘mobile moment’ but they didn’t. they lost that opportunity.
Follow the humans – that’s the key. what are they doing? what are their problems? where are the frictions in their transactions? can we use mobility to alleviate that? starbucks mobile app was designed by people watching humans. while you wait for coffee, you are on your phone texting, tweeting, snapchatting. people have to put their phone away to make their transaction. well, why not just let them stay on their phone instead of shuffling through all their stuff to find their wallet? the app may be old school with a barcode but it works on every single device.
Torchy’s Tacos – wanted to overhaul their mobile app. wanted people to order online. they are not a delivery service. people like to call ahead and pick up their order. They watched some of their humans. many people ordered while in their car on their way home from work. [PLEASE don’t text and drive.] They took mobile payment out of it. it’s just an ordering device. they don’t want people entering credit cards while driving. the order system is ‘thumbable’ – really easy to click with just your thumb. it’s easy to click and buy taco, taco, taco. their humans are driving so it needs to be easy. everyone logs in using facebook. no need to log into Torchy’s.
Every Dell server, on the back, has a QR code that goes directly to mobile documentation for that device. those people are on their hands and knees not at a desk.
get into the mobile mindset. this data is in your server logs. It’s kind of trivia. Intuit looked at their server logs to see which specific pages get more mobile access than other pages. found accountants did ‘accounting’ at their desk. but training was done at off moments on the bus in transit. so training files were completely mobile optimized, not just mobile better. this is just by looking at server logs. No “Insight-a-tron” 🙂
Get people at the point of sale. could be online or offline. every table at Chili’s has a kiosk for ordering or games. you can pay on it too. when you pay, it gives you a two question survey. way better than a comment card left on the table.
he hates the question – where did you hear about us? the drop-down answers are often ‘friend.’ how do you increase your ‘friend’ budget as a marketer? better is to ask why did you choose us. point of sale is a great place to ask this.
Consumer exit polls. entering or exiting a business is the perfect place to gather information. you can do a phone survey later but the response rate won’t be as high.
Taxonomies – you are never offline even when you are offline. new life for billboards, kiosks, radio. every offline moment is now an online moment. you can build an online following using offline methods – e.g. a billboard showing a twitter address.
You don’t want a mobile payment device for chips. you want a mobile device to enable chip eating – movies, football, drinking.
Mobile has killed the bar bet. anyone can find out anything online. we don’t want to be Cliff. we want to be Norm. we want a one-to-one relationship. he uses Level Up to pay for everything. no transaction fees, loyalty programs built in. [is this an offline or online transaction? 🙂 ]
The Mobile Commerce Revolution is Tom’s book. [Everyone at CASRO gets a free signed book. Insert YAY here 🙂 ]