Tag Archives: survey

Fusing Marketing Research and Data Science by John Colias at @DecisionAnalyst

Live note-taking of the November 9, 2016 webinar. Any errors are my own.

  • Survey insights have been overshadowed in recent years; market research is struggling to redefine itself; there is an opportunity to combine big data and surveys
  • Preferences are not always observable in big data, which includes social data and wearable data
  • Surveys can measure attitudes, preferences, and perceptions
  • Problem is organizational – isolation, compartmentalization of market research and big data functions
  • Started with a primary research survey about health and nutrition; one question asked how often respondents consume organic foods and beverages. Also had block-group census data from the American Community Survey five-year summary, with thousands of variables
  • Fused the survey data and block-group data: located each survey respondent from their address, matched it to a census block group, and brought in the geographic data for that block group
  • Randomly split the data, built the predictive model on the training data, and determined predictive accuracy using the validation data (the hold-out data); 70% of data for model development, 30% for validation – an independent, objective assessment of the model (see the sketch after this list)
  • Created a lift curve; the predictive model identified consumers of organic foods more than 2.5 times better than chance
  • When the predictive model’s lift curve bows out from the random model, you have achieved success
  • Which variables were most predictive – not merely correlated, but predictive of behaviour: age 26 or older, higher income, higher education, less likely Hispanic; this may already be known, but now they have a model to predict where these people are
  • Can map against actual geography and plot distances to stores
  • High-tech truck research
  • Used a choice-modeling survey: designed attributes and models to test, and developed customer-level preferences for features of the truck
  • Cargo space, hitching method, back up cameras, power outlets, load capacity, price
  • People chose their preferred model from two choices; this determined which people are price sensitive or value carrying capacity – the biggest needs were price, capacity, and load
  • How to target these groups of people
  • Fused in external data as before, but now predicting based on choice modeling rather than survey attitudes; the lift curve was again bowed to the left, 1.8 times better than chance – occupation, education, income, and household size were the best predictors
  • [these are generic results – rich people want organic food and trucks, but point taken on the method. If there is a product whose users are not obvious, then this method would be useful]
  • Fusion can combine primary and secondary data; it also fuses technologies like R dashboards and Google Maps, fuses surveys with modeling, and fuses consumer insights, database marketing, and big data analytics
  • Use this to find customers whose preferences are unobserved, improve targeting of advertising and promotions, optimize retail location strategies, and predict preferences and perceptions of consumers; collaboration of MR departments with big data groups would benefit both entities
  • In the UK and Spain, demographics are more granular; GPS tracking can be used in less developed countries
  • Used R to query the public data set – the beauty of open-source code and data
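
[For anyone curious what this looks like in code, here is a minimal R sketch of the fuse-and-predict workflow described above – R because that is the tool the speaker mentioned. The data, variable names, and coefficients are all simulated and hypothetical; this is not Decision Analyst's actual model, just an illustration of the 70/30 split, hold-out scoring, and lift calculation.]

```r
# Hypothetical illustration only: simulated stand-in for the fused
# survey + block-group file described in the webinar.
set.seed(42)
n <- 2000
fused <- data.frame(
  income    = rlnorm(n, meanlog = 10.8, sdlog = 0.5),  # household income ($)
  education = sample(1:5, n, replace = TRUE),          # ordinal education level
  age       = sample(18:80, n, replace = TRUE)
)
# Simulate the target so that higher income and education raise the chance
# of frequent organic consumption (purely for demonstration).
p <- plogis(-4 + 0.00002 * fused$income + 0.5 * fused$education + 0.005 * fused$age)
fused$organic <- rbinom(n, 1, p)

# 70% of the data for model development, 30% held out for validation
idx   <- sample(n, size = round(0.7 * n))
train <- fused[idx, ]
valid <- fused[-idx, ]

# Build the predictive model on the training data only, then score the hold-out
fit   <- glm(organic ~ income + education + age, data = train, family = binomial)
score <- predict(fit, newdata = valid, type = "response")

# Lift in the top decile of scored hold-out respondents vs. chance
ord   <- order(score, decreasing = TRUE)
top_n <- round(0.1 * nrow(valid))
mean(valid$organic[ord][1:top_n]) / mean(valid$organic)
```

A full lift curve repeats that last calculation cumulatively across deciles; the "2.5 times better than chance" figure describes how far the curve bows away from the 45-degree line a random model would produce.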

People Aren’t Robots – New questionnaire design book by Annie Pettit

I’ve been busy writing again!

People Aren’t Robots: A practical guide to the psychology and technique of questionnaire design is the best 2 bucks you’ll ever spend!

Questionnaire design is easy until you find yourself troubled with horrid data quality. The problem, as with most things, is that there is an art and a science to designing a good-quality, effective questionnaire, and a bit of guidance is necessary. This book will give you that guidance in a short, easy to read, and easy to follow format. But how is it different from all the other questionnaire design books out there?

  • It gives practical advice from someone who has witnessed more than fifteen years of good and poor choices that experienced and inexperienced questionnaire writers make. Yes, even academic, professional researchers make plenty of poor questionnaire design choices.
  • It outlines how to design questions while keeping in mind that people are fallible, subjective, and emotional human beings. Not robots. It’s about time someone did this, don’t you think?

This book was written for marketers, brand managers, and advertising executives who may have less experience in the research industry.

It was also written to help academic and social researchers write questionnaires that are better suited for the general population, particularly when using research panels and customer lists.

I hope that once you understand and apply these techniques, you think this is the best $2 you’ve ever spent and that you hear your respondents say “this is the best questionnaire I’ve ever answered!”

Early reviews are coming in!

  • For the researchers and entrepreneurs out there, here’s a book from an expert. Pick it up (& read & implement). 👌
  • Congrats, Annie! An engagingly written and succinct book, with lots of great tips!
  • Congratulations! It’s a joy watching and learning from your many industry efforts.
  • It looks great!!! If I could, I would buy many copies and give to many people I know who need some of your advice.🙂

Improvements to survey modes #PAPOR #MRX 

What Are They Thinking? How IVR Captures Public Opinion For a Democracy, Mary McDougall, Survox

  • many mode choices; online is cheapest, followed by IVR, followed by phone interviews
  • many still do not have internet – seniors, non-white, low income, no high school degree
  • phone can help you reach those people, can still do specific targeting
  • good idea to include multiple modes to test for any mode effects
  • technology is no longer a barrier for choosing a data collection strategy
  • ignoring cell phones is poor sampling
  • use labor strategically to allow IVR
  • tested IVR on political polling, 300 completes in 2.5 hours, met the quotas, once a survey was started it was generally completed

The Promising Role of Fax in Surveys of Clinical Establishments: Observations from a Multi-mode Survey of Ambulatory Surgery Centers, Natalie Teixeira, Anne Herleth, and Vasudha Narayanan, Westat; Kelsey O’Yong, Los Angeles Department of Public Health

  • we often want responses from an organization, not an individual
  • 500 medical facilities, 60 questions about staffing and infection control practices
  • used multimode – telephone, postal, web, and fax
  • many people requested the survey by fax and many people did convert modes
  • because fax was so successful, reminder calls were automatically combined with fax, and they saw successful conversions to this method
  • this does not follow the current trend
  • fax is immediate and keeps gatekeepers engaged, maybe it was seen as a novelty
  • [“innovative fax methodology” so funny to hear that phrase. I have never ever ever considered fax as a methodology. And yet, it CAN be effective.🙂 ]
  • options to use “mass” faxing exist

The Pros and Cons of Persistence During Telephone Recruitment for an Establishment Survey, Paul Weinfurter and Vasudha Narayanan, Westat

  • half of restaurant issues are employees coming to work ill, new law was coming into effect regarding sick pay
  • recruit 300 restaurants, and within each recruit 1 manager, 1 owner, and a couple of food preparers
  • telephone recruitment and in-person interviews; English, Spanish, Mandarin; 15 minutes, $20 gift card
  • most of the time they couldn’t get a manager on the phone and they received double the original sample of restaurants to contact
  • it was assumed that restaurants would participate because the sponsor was the health inspectors, but participation was not mandatory and recruiters couldn’t say it was; there were many scams related to this so people just declined; also, not all of the health inspectors were even aware of the study
  • 73% were unreachable after 3 calls, hard to get a person of authority during open hours
  • increased call attempts to five times, but continued on when they thought recruitment was likely
  • recruited 77 more from people who were called more than 5 times
  • as a result, data were not limited to a quicker to reach sample
  • people called up to ten times remained noncommittal and never were interviewed
  • there wasn’t an ideal number of calls to get maximum recruits and minimum costs
  • but the method wasn’t really objective, the focus was on restaurants that seemed like they might be reachable
  • possibly more representation than if they had stopped all their recruitment at five calls
  • [would love to see results crossed by number of attempts]

It’s a dog eat DIY world at the #AMSRS 2015 National Conference

  What started out as a summary of the conference turned into an entirely different post – DIY surveys. You’ll just have to wait for my summary then!

My understanding is that this was the first time SurveyMonkey spoke at an #AMSRS conference. It resulted in a question that the audience seemed to perceive as controversial, and it was asked in an antagonistic way: what does SurveyMonkey intend to do about the quality of surveys prepared by nonprofessionals? This is a question with a multi-faceted answer.

First of all, let me begin by reminding everyone that most surveys prepared by professional, fully trained survey researchers incorporate at least a couple of bad questions. Positively keyed grids abound, long grids abound, poorly worded and leading questions abound, overly lengthy surveys abound. For all of our concerns about amateurs writing surveys, I sometimes feel as though the pot is calling the kettle black.

But really, this isn’t a SurveyMonkey question at all. This is a DIY question. And it isn’t a controversial question at all. The DIY issue has been raised for a few years at North American conferences. It’s an issue with which every industry must deal. Taxis are dealing with Uber. Hotels are dealing with AirBnB. Electricians, painters, and lawn care services in my neighbourhood are dealing with me. Naturally, my electrical and painting work isn’t up to snuff with the professionals and I’m okay with that. But my lawn care services go above and beyond what the professionals can do. I am better than the so-called experts in this area. Basically, I am the master of my own domain – I decide for myself who will do the jobs I need doing. I won’t tell you who will do the jobs at your home and you won’t tell me who will do my jobs. Let me reassure you, I don’t plan to do any home surgery.

You can look at this from another point of view as well. If the electricians and painters did their job extremely well, extremely conveniently, and at a fair price, I would most certainly hire the pros. And the same goes for survey companies. If we worked within our potential clients’ schedules, with excellent quality, with excellent outcomes, and with excellent prices, potential clients who didn’t have solid research skills wouldn’t bother to do the research themselves. We, survey researchers, have created an environment where potential clients do not see the value in what we do. Perhaps we’ve let them down in the past, perhaps our colleagues have let them down in the past. 

And of course, there’s another aspect to the DIY industry. For every client who does their own research work, no matter how skilled and experienced they are, that’s one less job you will get hired to do. I often wonder how much concern over DIY is simply the fear of lost business. In this sense, I see it as a re-organization of jobs. If research companies lose jobs to companies using DIY, then those DIY companies will need to hire more researchers. The jobs are still there, they’re just in different places.

But to get back to the heart of the question, what should DIY companies do to protect the quality of the work, to protect their industry, when do-it-yourselfers insist on DIY? Well, DIY companies can offer help in many forms. Webinars, blog posts, and white papers are great ways to share knowledge about survey writing and analysis. Question and survey templates make it really easy for newbies to write better surveys. And why not offer personalized survey advice from a professional? There are many things that DIY companies can do and already do.

Better yet, what should non-DIY companies do? A better job, that’s what. Write awesome surveys, not satisfactory surveys. Write awesome reports, not sufficient reports. Give awesome presentations, not acceptable presentations. Be prompt, quick, and flexible, and don’t drag clients from person to person over days and weeks. When potential clients see the value that professional services provide, DIY won’t even come to mind.

And of course, what should research associations do? Advocate for the industry. Show Joe nonresearcher what they miss out on by not hiring a professional. Create guidelines and standards to which DIY companies can aspire and prove themselves. 

It’s a DIY world out there. Get on board or be very, very worried.

Determinants of survey and biomarker participation #AAPOR #MRX  

prezzie #1: consent to biomarker data collection

  • blood pressure, grip strength, balance, blood spot collection, saliva collection
  • agreement ranged from 70% to 92%
  • used the Big 5 personality test: agreeableness, openness, conscientiousness, self-control (a regression sketch follows this list)
  • used R for the analysis [I’m telling you, R is the future, forget SAS and SPSS]
  • for physical measurements, people who are more open are less likely to consent; agreeableness is positively related
  • for saliva, higher hostility and higher self-control are both associated with a lower likelihood of consent
  • for blood spot collection, more hostile and more self-controlled people are less likely to consent
  • openness was a surprise
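
[A minimal R sketch of the kind of consent model described in this presentation. The trait scores, consent flag, and effect sizes below are simulated and hypothetical – they do not reproduce the study's data or findings, only the shape of the analysis.]

```r
# Hypothetical illustration: simulated trait scores and a binary consent flag.
set.seed(1)
n <- 1500
traits <- data.frame(
  openness      = rnorm(n),
  agreeableness = rnorm(n),
  hostility     = rnorm(n),
  self_control  = rnorm(n)
)
# Simulated outcome (invented effect sizes): agreeableness raises consent,
# openness and hostility lower it.
p <- plogis(1.2 + 0.4 * traits$agreeableness - 0.3 * traits$openness - 0.3 * traits$hostility)
traits$consent_saliva <- rbinom(n, 1, p)

# Logistic regression of consent on the trait scores
fit <- glm(consent_saliva ~ openness + agreeableness + hostility + self_control,
           data = traits, family = binomial)
summary(fit)    # direction and significance of each trait
exp(coef(fit))  # odds ratios, often easier to report
```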

prezzie #2: nonresponse to physical assessments

  • strength, balance, mobility, physical activity – not influenced by reporting bias
  • half of the sample did a face to face physical assessment, perhaps in a nursing home, n=15000
  • nonconsent highest for the walking speed test, lowest for the breathing and grip strength tests; the balance test was in the middle
  • the oldest old are less likely to complete; women are less likely to complete the grip, walking, and balance tests
  • less educated are less likely to complete the tests
  • much more missing data for balance test and breathing test – can impute based on reasons for noncompletion – eg no space to do the test, not safe to do it, don’t understand instructions, unwilling, health issues

prezzie #3: data quality trends in the general social survey

  • social security number was most interesting
  • there’s been a lot of leaked and stolen government data, which can make people nervous about completing government surveys
  • low refusal rate for phone number and birth date, but half refused to give their social security number
  • varied widely by geo region
  • trend is not drastically different despite data issues

prezzie #4: does sexual orientation, race, language correlate with nonresponse

  • correlates of nonresponse – we know about younger, male, Hispanic, single, and renter respondents, but what about sexual orientation or language use?
  • few differences by sexuality
  • non-Hispanic Black respondents had the highest nonresponse; Hispanic nonresponse was also high when the survey was in Spanish
  • self-reported good health had higher nonresponse, which could mean our surveys tell us people are less healthy than they really are
  • Spanish speakers had higher nonresponse

The funniest marketing research joke #MRX

I have a favourite research joke. Have you heard it before? It has two possible punch lines and it goes like this.

What’s worse than a pie chart?
Answer 1: a 3d pie chart
Answer 2: several pie charts

I bet you’re rolling on the floor laughing aren’t you! Well, I have a treat for you because that is not the funniest marketing research joke I know. There are several formats of the funniest ever market research joke too so get ready…

We know 40 minute surveys are too long but no one’s ever going to stop writing them. HA HA HA HA HA HA
We know long grid questions are a bad idea but no one’s ever going to stop using them. HA HA HA HA HA HA
We know respondents hate multiple loops but no one’s ever going to stop writing them. HA HA HA HA HA HA

You think these aren’t jokes? I challenge you to prove otherwise. I’ve been in numerous situations where laughter is the standard response to these statements.

I find it infuriating to listen to smart and informed people overtly display a lack of effort to address the problem and laugh it off as silly. They generally feel they have no power. Clients feel they have zero alternatives in writing their surveys and vendors feel clients will take their business elsewhere if they refuse to run a bad survey.

Let me say this. Every single person out there has power to make surveys better. Imagine if we all worked together. Imagine if everyone spoke up against bad surveys. Imagine if everyone took quality surveys seriously. Imagine what would happen to our complete rates. Just imagine.

Mobile Surveys for Kids by Brett Simpson #CASRO #MRX

Live blogged from Nashville. Any errors or bad jokes are my own.

children have more internet access than adults. their homes are littered with devices. they start with a leap-pad and download games for it. have it in the car and it goes everywhere with them. then they get a nintendo. they are in-tune with mobile. they are the first generation to grow up with tech. today’s students are not the people our education system was designed to teach.

classrooms rely on tech early now. clickers for interaction. interactive reading solutions. reading apps. smart boards instead of chalk boards. many schools have some iPads as standard in the classroom.

designing surveys for kids. we are working on device-agnostic and respondent-friendly surveys. but we rarely place focus on survey design for kids, especially when focused on mobile.
Do kids really go onto the computer for 30 minutes to answer a survey? [My response – HA HA HA HA HA HA HA. Oh sorry. No, I don’t think so.]

They did qual and quant to figure out how kids think about and use surveys.
– parents are not concerned with kids using their phone
– kids prefer less than ten minutes
– age 11 to 17 say they rarely use computers!!!
– children read every single question and respond very carefully
– easy concepts may actually be difficult for them
– testing is critical
– response options need to be distinct to avoid confusion
– less wording is essential
– more engaging question types are easier for them to understand
– simplified scales are more easily processed, maybe using images
– use more imagery, bigger buttons

[this is funny – dear 4 year old – how likely are you to recommend this product to your friends, family, and colleagues?]

– kiddie fingers aren’t as precise with hitting buttons especially when survey buttons are close to phone buttons
– kids don’t understand our concepts of new, different, intent, believability
– kids up to age ten are much more likely to get help from a parent (60% or more); this falls to 15% with older teens
– a pre-recruit is helpful, then send the official invite/portal, then again get parental permission

– response rates are higher on tablets, smartphones next, computers worst
– LOI is longer on smartphones, best on computers
– people on smartphones felt there were too many questions
– click rates vary by device but the end conclusions are the same [cool data here]
– ideal length is around 10 minutes
– 3 point scales may be enough [hallelujah! Do we TRULY need ten or five point scales in marketing research? I think in many cases it’s a selfish use, not a necessary use.]

How marketing researchers can start being more ethical right now #MRX

I challenge you to rethink your behaviours. I challenge you to jump off that pedestal of marketing researchers are more ethical than other people in the marketing world and think about whether you’re being as ethical as you like to think you are. I challenge you to:

1) tell people that answering your survey or participating in your focus group might make them sad or uncomfortable or angry
2) recognize that seemingly benign questions about age, gender, income, brand loyalty, weather, and politics can make people unhappy, uncomfortable, and angry
3) incentivize people when they quit a survey partway through especially when a question may have made them uncomfortable
4) allow people to not answer individual questions but still complete the entire survey
5) debrief people at the end of surveys by sharing some details about how the results will be used to make people happier via better products and services

Can you hold yourself to a higher standard? Can you start right now?


WAPOR Day 2: People don’t lie on government surveys #AAPOR #MRX

Day two of WAPOR has come and is nearly gone, but my brain continues to ponder and debate all that I heard today. I hope you enjoy a few of the ramblings from my macaron infested brain.

  • People don’t lie on government surveys. Wow. That’s news to me! My presentation focused on how people don’t always provide exactly correct answers to surveys for various reasons – the answer isn’t there, they misread something, they deliberately gave a false answer. But, while people may feel more incentive to answer government surveys honestly, those surveys are certainly not immune to errors. Even the most carefully worded and carefully pre-tested survey will be misread and misinterpreted. And, some people will choose to answer incorrectly for a variety of reasons – privacy, anti-government sentiment, etc. There is no such thing as “immune to errors.” Don’t fool yourself.
  • How do you measure non-internet users? Well, this was a fun one! One speaker described setting up a probability panel (I know, I know, those don’t really exist). In order to ensure that internet usage was not a confounding variable, they provided a 3G tablet to every single person on the panel. This would ensure that everyone used the same browser, had the same screen size, had the same internet connection, and more. Of course, as soon as you give a tablet to a non-internet user, they suddenly become….. an internet user. So how do you understand perceptions and opinions from non-internet users? Chicken and egg! Back to paper you go!
  • Stop back translating. I don’t work much in non-English languages so it was interesting to hear this one. The authors are suggesting a few ideas:  questionnaire writers should write definitions of each question, preliminary draft translations should be provided by skilled translators, and finally, those two sets of information should go to the final translator. This is how you avoid “military rule” being translated as “role of the military” or “rules the military has” or “leadership of the military.” Interesting concept, and I’d love to know whether it’s efficient in practice.
  • Great presenter or great researcher: Pick one. I was reminded on many occasions today that, as a group, researchers are not great presenters. We face the screen instead of the audience, we mumble, we read slides, and we speak too quietly. We focus on sharing equations instead of sharing learnings, and we spend two thirds of the time explaining the method instead of sharing our insights. Let’s make it a priority to become better speakers. I know it won’t happen overnight but I’ve progressed from being absolutely terrible to reasonably ok in a short matter of just 15 years. You can do it too.

Innovation in Web Data Collection: How ‘Smart’ Can I Make My Web Survey? by Melanie Courtright, Ted Saunders, Jonathan Tice #CASRO #MRX

Live blogging from the #CASRO tech conference in Chicago. Any errors or bad jokes are my own.

“Innovation in Web Data Collection: How ‘Smart’ Can I Make My Web Survey?”

Melanie Courtright, Senior Vice President, Client Services, Americas, Research Now; Ted Saunders, Manager, Digital Solutions, Maritz Research Inc.; Jonathan Tice, Senior Vice President, Decipher

  • The proportion of respondents taking surveys on tablets and mobile phones continues to increase
  • Researchers are exploring ways to improve data accuracy and the respondent experience on mobile surveys
  • Mobile or touch-screen devices enable different ways of interacting with the respondent to capture responses
  • Programmers used to developing mobile applications may naturally want to extend such features to surveys, and researchers may see these features as new and inventive
  • Want to use a randomized experimental design to see how data quality and respondent experience is affected
  • Web-based survey fielded April 2014; Auto insurance satisfaction and attitudes; 3,600 respondents, 10-minute survey, 60 questions; Respondents self-selected into PC, mobile phone and tablet cells;  Optimized for Tablet and Mobile
  • Melanie Courtright

    Slider start position influences “passive” responses

    • Respondents instructed to click on the slider button
    • Sliders were programmed to record a lack of use as missing
    • People are generally satisfied with their auto insurance company, so they saw a large amount of non-use with the right slider starting position
  • Slider start position matters more on touch devices
    • The right starting position tended to bias mean scores upward
    • PC users using a mouse were not affected by the slider start position as much and had much less passive use of the slider on both 5- and 11-point scales
  • Touch device users who used sliders liked them
    • Respondents generally preferred the type of scale they used throughout the survey
    • PC users preferred the standard scale and chose “no preference” more often
    • Touch device users had a slight preference for sliders when they used them
  • Sliders may be suitable for continuous objective measures
    • Tests so far have been on attitudinal response scales.
    • Sliders may be more appropriate for entering an objective value
    • Prevents touch users from having to type
    • Responses similar to those entered into a text box
  • Length of the list matters more than style of list
    • Respondents were asked to give a time from 3 randomly assigned list lengths (of different granularities), on either a Radio button list, or a Drop-down list.
    • The longer the list, the more likely respondents chose a time earlier on the list, regardless of how the list was presented.
  • The display of Drop-down lists varies by browser. The Safari browser is dominant on iPhones, but browsers vary more on Android phones.
  • Jonathan Tice

    Android browser differences result in primacy effects. Because the default Android browser only shows the first three choices on the list and doesn’t easily scroll, those choices were selected much more often when shown in a drop-down.

  • Unprompted use of Voice-To-Text is very low. Even when asked, most respondents didn’t use it. About 90% of Tablet users opted to type in their response, either because they didn’t have the functionality, or didn’t want to use it. Mobile users were more evenly split between using it and not wanting to
  • Reasons for not wanting to use it: heavy accents among Hispanic or Asian respondents, needing to be quiet, believing it won’t be as accurate, or the environment being too loud
  • Premature to recommend it on a wide basis, but people are becoming more familiar with it
  • Respondents using Voice-To-Text gave slightly longer answers
  • Using Image or Video Capture on mobile devices – there are many differences by OS, browser, and screen size which will affect results
  • request for a generic image of “where you are” led to lots of feet, selfies, and friends – the literal definition of “where you are”🙂
  • environments ranged from home, school, church, beach, office, bars, labs, gyms, cars, hospital, kitchens, bathrooms, airports, malls
  • It’s in-experience data without any delay
  • Picture quality instructions are recommended: blurry photos, bad lighting, and more
  • These are not professional photographers. Some, but very limited, importing of existing pictures. Question wording is critical. Photo size is a double-edged sword. Internet connection speed and latency is worth considering
  • the reason for not using it was that it seemed intrusive, even though it was all done with full permission
  • There were fewer “easy” ratings for tablets vs phones
  • it is candid, personal, and open, it is in the moment and in-context, no geographic limitations, no real tech issues
  • But, the data needs to be reviewed individually, and it doesn’t work on non-smartphones
  • still need to test a lot and educate people on why and how to do it. Yet, consumers are still quite ahead of us when it comes to tech
  • consider rating every survey on its mobile friendliness – open ends, length, LOI, scale lengths, grid lengths, use of flash, rich media, and audio or video streaming (which add bandwidth), plus responsive design, all contribute to whether a study will work well on a phone; consider incenting CLIENTS for mobile friendly surveys (a toy scoring sketch follows this list)
  • also consider designing every survey from stage 1 for mobile phones as opposed to adapting a web survey to phone
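
[Purely to illustrate what "rating every survey on its mobile friendliness" could look like, here is a toy R function. The attributes, weights, and penalties are all made up – any real rubric would need to be calibrated against actual completion and drop-off data.]

```r
# Toy mobile-friendliness score (hypothetical weights; higher = more phone-friendly).
mobile_friendliness <- function(loi_minutes, n_grids, max_scale_points,
                                n_open_ends, uses_flash, responsive_design) {
  score <- 100
  score <- score - 2 * pmax(loi_minutes - 10, 0)  # penalize length beyond ~10 minutes
  score <- score - 5 * n_grids                    # grids are hard on small screens
  score <- score - 2 * pmax(max_scale_points - 5, 0)
  score <- score - 3 * n_open_ends                # typing is painful on phones
  score <- score - ifelse(uses_flash, 40, 0)      # flash won't render on most phones
  score <- score + ifelse(responsive_design, 10, 0)
  pmax(pmin(score, 100), 0)
}

# Example: a 25-minute survey with 4 grids, 10-point scales, 3 open ends,
# no flash, and responsive design
mobile_friendliness(25, 4, 10, 3, FALSE, TRUE)
```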

 
