Tag Archives: survey

Taking multiple surveys in one session by Mark Kinnucan and Inna Burdein #CASRO #MRX

Live blogged from Nashville. Any errors or bad jokes are my own.

– We want surveys short and simple to avoid straightlining and satisficing, reduce breakoffs, and keep people from dropping off the panel
– but companies are okay with panelists taking multiple surveys in a row
– are multiple short surveys better than one long survey? we assume it lets people handle fatigue better, and that if they do take another survey, that survey will be better quality. is any of this true?
– who takes multiple surveys, what are their completion rates, how good is the data, how does it affect attrition
– defined a chain as all the surveys a panelist takes within 1.25 hours (a rough sketch of one way to group surveys into chains follows this list)
– 40% of surveys are completed in chains
– younger people make more use of chains
– moderate chaining is the norm. most people average 1.5 to 3 surveys per session. about 10% average more than 3 surveys per chain.
– completion rates increase with each survey in the chain. people who want to drop already dropped out.
– buying rate is unaffected by chaining. for people who take five surveys, buying rate increases with each survey.
– why is this? panelists will take more surveys if they did not exhaust themselves in the previous survey. or maybe those with lots of buying behaviours pace their reporting. or those people are truly different. [read the paper. it’s getting too detailed for me to blog on]
– poor responders are more likely to chain, but not massively more likely
– for younger panelists, heavy chainers have greater longevity. for oldest panelists, it results in burnout.
– people who agree to chain do it because they are ready to do so. if they were exhausted by a previous survey, they don’t continue. a small minority abuse the process
– chaining helps younger panelists stay engaged
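
For anyone curious about the mechanics, here is a rough sketch of how chains might be built from panel records, under one reading of the definition above (a new chain starts whenever the gap since a panelist’s previous survey exceeds 1.25 hours). The column names and the exact rule are my assumptions, not the authors’ code.

```python
# Hypothetical sketch, not the authors' code: group each panelist's survey
# completions into chains using a 1.25 hour gap rule. Column names are invented.
from datetime import timedelta
import pandas as pd

def label_chains(completes: pd.DataFrame, window_hours: float = 1.25) -> pd.DataFrame:
    """Add a chain_id per panelist: a new chain starts whenever the gap
    since that panelist's previous survey exceeds the window."""
    df = completes.sort_values(["panelist_id", "start_time"]).copy()
    gap = df.groupby("panelist_id")["start_time"].diff()
    new_chain = gap.isna() | (gap > timedelta(hours=window_hours))
    df["chain_id"] = new_chain.groupby(df["panelist_id"]).cumsum()
    return df

# Example: panelist 1 takes three surveys back to back (one chain of three);
# panelist 2 takes two surveys four hours apart (two chains of one).
completes = pd.DataFrame({
    "panelist_id": [1, 1, 1, 2, 2],
    "start_time": pd.to_datetime([
        "2015-02-01 09:00", "2015-02-01 09:20", "2015-02-01 09:45",
        "2015-02-01 10:00", "2015-02-01 14:00",
    ]),
})
print(label_chains(completes).groupby(["panelist_id", "chain_id"]).size())
```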


Mobile Surveys for Kids by Brett Simpson #CASRO #MRX

Live blogged from Nashville. Any errors or bad jokes are my own.

children have more internet access than adults. their homes are littered with devices. they start with a LeapPad and download games for it. they have it in the car and it goes everywhere with them. then they get a Nintendo. they are in tune with mobile. they are the first generation to grow up with tech. today’s students are not the people our education system was designed to teach.

classrooms rely on tech early now. clickers for interaction. interactive reading solutions. reading apps. smart boards instead of chalk boards. many schools have some iPads as standard in the classroom.

designing surveys for kids: we are working on device-agnostic and respondent-friendly surveys, but we rarely focus on survey design for kids, especially on mobile.
Do kids really go onto the computer for 30 minutes to answer a survey? [My response – HA HA HA HA HA HA HA. Oh sorry. No, I don’t think so.]

They did qual and quant to figure out how kids think about and use surveys.
– parents are not concerned with their kids using their phone
– kids prefer less than ten minutes
– age 11 to 17 say they rarely use computers!!!
– children read every single question and respond very carefully
– easy concepts may actually be difficult for them
– testing is critical
– response options need to be clearly different from one another to avoid confusion
– less wording is essential
– more engaging question types are easier for them to understand
– simplified scales are more easily processed, maybe using images
– use more imagery, bigger buttons

[this is funny – dear 4 year old – how likely are you to recommend this product to your friends, family, and colleagues?]

– kiddie fingers aren’t as precise at hitting buttons, especially when survey buttons are close to phone buttons
– kids don’t understand our concepts of new, different, intent, believability
– kids up to age ten are much more likely to get help from a parent (60% or more); this falls to 15% among older teens
– a pre-recruit is helpful; then send the official invite/portal, and then get parental permission again

– response rates are higher on tablets, smartphones next, computers worst
– LOI (length of interview) is longest on smartphones and shortest on computers
– people on smartphones felt there were too many questions
– click rates vary by device but the end conclusions are the same [cool data here]
– ideal length is around 10 minutes
– 3 point scales may be enough [hallelujah! Do we TRULY need ten or five point scales in marketing research? I think in many cases it’s a selfish use, not a necessary use.]

How marketing researchers can start being more ethical right now #MRX

I challenge you to rethink your behaviours. I challenge you to jump off that pedestal of “marketing researchers are more ethical than other people in the marketing world” and think about whether you’re being as ethical as you like to think you are. I challenge you to:

1) tell people that answering your survey or participating in your focus group might make them sad or uncomfortable or angry
2) recognize that seemingly benign questions about age, gender, income, brand loyalty, weather, and politics make people unhappy, uncomfortable, and angry
3) incentivize people when they quit a survey partway through especially when a question may have made them uncomfortable
4) allow people to not answer individual questions but still complete the entire survey
5) debrief people at the end of surveys by sharing some details about how the results will be used to make people happier via better products and services

Can you hold yourself to a higher standard? Can you start right now?


WAPOR Day 2: People don’t lie on government surveys #AAPOR #MRX

Day two of WAPOR has come and is nearly gone, but my brain continues to ponder and debate all that I heard today. I hope you enjoy a few of the ramblings from my macaron-infested brain.

  • People don’t lie on government surveys. Wow. That’s news to me! My presentation focused on how people don’t always provide exactly correct answers to surveys for various reasons – the answer isn’t there, they misread something, they deliberately gave a false answer. But, while people may feel more incentive to answer government surveys honestly, those surveys are certainly not immune to errors. Even the most carefully worded and carefully pre-tested survey will be misread and misinterpreted. And, some people will choose to answer incorrectly for a variety of reasons – privacy, anti-government sentiment, etc. There is no such thing as “immune to errors.” Don’t fool yourself.
  • How do you measure non-internet users? Well, this was a fun one! One speaker described setting up a probability panel (I know, I know, those don’t really exist). In order to ensure that internet usage was not a confounding variable, they provided a 3G tablet to every single person on the panel. This would ensure that everyone used the same browser, had the same screen size, had the same internet connection, and more. Of course, as soon as you give a tablet to a non-internet user, they suddenly become… an internet user. So how do you understand perceptions and opinions from non-internet users? Chicken and egg! Back to paper you go!
  • Stop back translating. I don’t work much in non-English languages so it was interesting to hear this one. The authors are suggesting a few ideas:  questionnaire writers should write definitions of each question, preliminary draft translations should be provided by skilled translators, and finally, those two sets of information should go to the final translator. This is how you avoid “military rule” being translated as “role of the military” or “rules the military has” or “leadership of the military.” Interesting concept, and I’d love to know whether it’s efficient in practice.
  • Great presenter or great researcher: Pick one. I was reminded on many occasions today that, as a group, researchers are not great presenters. We face the screen instead of the audience, we mumble, we read slides, and we speak too quietly. We focus on sharing equations instead of sharing learnings, and we spend two thirds of the time explaining the method instead of sharing our insights. Let’s make it a priority to become better speakers. I know it won’t happen overnight but I’ve progressed from being absolutely terrible to reasonably ok in a short matter of just 15 years. You can do it too.

A “How-To” Session on Modularizing a Live Survey for Mobile Optimization by Chris Neal and Roddy Knowles #FOCI14 #MRX

Live blogging from the #FOCI14 conference in Universal City. Any errors or bad jokes are my own.

A “How-To” Session on Modularizing a Live Survey for Mobile Optimization
Chris Neal, CHADWICK MARTIN BAILEY 
& Roddy Knowles, RESEARCH NOW

  • conducted a modularized survey among smartphone survey takers; the topics studied were hotels for personal travel and tablets for personal use; people taking the survey on tablets were excluded to keep the methodology clean
  • people don’t want to answer a 20 minute survey on a phone but clients have projects that legitimately need 20 minutes of answers
  • data balanced and weighted to census
  • age was the biggest phone vs computer difference
  • kept the survey to 5 minutes, asked no open ended questions, minimized the word count, and broke grids into individual questions to avoid the burden of scrolling and hitting a tiny button with a giant finger
  • avoid using a brand logo even though you really want to. space is at a premium
  • avoid Flash in your surveys, avoid images and watermarks, avoid rich media even though it’s way cool – they don’t always work well on every phone
  • data with more variability is easier to impute – continuous works great, scale variables work great, 3 ordinal groups doesn’t work so well, nominal doesn’t work so well at all
  • long answer options lists are more challenging – vertical scrolling on a smartphone is difficult, affects how many options responders choose, ease of fewer clicks often wins out
  • branching is not your friend. if you must branch, have the survey programmers account for the missing data ahead of time, impute all the top level variables and avoid imputing the bottom level branched variables
  • Predictive mean matching works better than simply using a regression model to replace missing data
  • hot decking (or data stitching, which combines several people into one) replaces missing data with data from someone who looks the same; it worked really well, though answers to “other” or “none of the above” didn’t work as well (a rough sketch of the idea follows this list)
  • hot decking works better if you have nominal data
  • good to have a set of data that EVERYONE answers
  • smartphone survey takers aren’t going away, we need to reach people on their own terms, we cannot force people into our terms
  • we have lots of good tools and don’t need to reinvent the wheel. [i.e., write shorter surveys gosh darn it!!!]
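
Since hot decking came up a few times, here is a minimal sketch of the idea, assuming a pandas DataFrame in which a set of core questions was answered by everyone and each respondent saw only some of the modules. The function and column names are invented for illustration; this is not the presenters’ implementation.

```python
# Hypothetical hot-deck sketch: fill each missing module answer by copying it
# from the "donor" respondent who looks most similar on the core variables.
import numpy as np
import pandas as pd

def hot_deck_impute(df, core_cols, module_cols):
    """Replace each missing module answer with the answer of the donor
    who is closest on the core variables (simple Euclidean distance)."""
    out = df.copy()
    core = df[core_cols].to_numpy(dtype=float)

    for col in module_cols:
        missing_pos = np.where(df[col].isna().to_numpy())[0]
        donor_pos = np.where(df[col].notna().to_numpy())[0]
        if donor_pos.size == 0:
            continue  # nobody answered this module at all
        col_pos = out.columns.get_loc(col)
        for i in missing_pos:
            # find the donor whose core answers look most like respondent i
            dist = np.linalg.norm(core[donor_pos] - core[i], axis=1)
            out.iat[i, col_pos] = df.iloc[donor_pos[np.argmin(dist)], col_pos]
    return out

# Example with made-up data: everyone answers age and usage; each person
# answered only one of the two rating modules.
survey = pd.DataFrame({
    "age":      [25.0, 31.0, 44.0, 52.0, 29.0],
    "usage":    [5.0, 3.0, 2.0, 1.0, 4.0],
    "rating_a": [9, np.nan, 6, np.nan, 8],
    "rating_b": [np.nan, 7, np.nan, 4, np.nan],
})
print(hot_deck_impute(survey, ["age", "usage"], ["rating_a", "rating_b"]))
```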


When is survey burden the fault of responders? #AAPOR #MRX

Ah, yet another enjoyable set of sessions from #AAPOR, chock full of modeling, p-values, and the need to transition to R. Because hey, if you’re not using R, what old-fashioned, sissy statistical package are you using?

This session was all about satisficing, burden, and data quality, and one of the presenters made a remark that really resonated with me – when is burden caused by responders? In this case, burden was measured as surveys that required people to expend a lot of cognitive effort, or when people weren’t motivated to pay full attention, or when people had difficulty with the questions.

Those who know me know that it always irks me when the faults of researchers and their surveys are ignored and passed on to people taking surveys. So let me flip this coin around.

  • Why do surveys require people to expend a lot of cognitive effort?
  • Why do surveys cause people to be less than fully motivated?
  • Why do people have difficulty answering surveys?

We can’t, of course, write surveys that will appeal to everyone. Not everyone has the same reading skills, computer skills, hand-eye coordination, visual acuity, etc. Those problems cannot be overcome. But we absolutely can write surveys that will appeal to most people. We can write surveys with plain and simple language that don’t have prerequisites of sixteen Dickens novels. We can write surveys that are interesting and pleasant and respectful of how people think and feel, thereby helping them to feel motivated. We CAN write surveys that aren’t difficult to answer.

And yes, my presentation compared data quality in long vs short surveys. Assuming my survey was brilliantly written, then why were there any data quality issues at all?  🙂

 


How many survey contacts is enough? #AAPOR #mrx

This afternoon, I attended a session on how the number of survey contacts affects data quality and results equivalence. I just loved the tables and stats and multicollinearity. Many of my hunches, and likely your hunches too, were confirmed and, yes, rather obvious.

But something bothered me. As cool as it is to confirm that people who are reluctant to participate give bad data and people who always participate give good data, it irked me to be reminded that our standard business practice is to recontact people ten times. 10. TEN. X.

Have we conveniently ignored various facts?
– people have call display. When they see the same name and number pop up ten times, they learn to hate that caller. And the name associated with that caller. And that makes our industry look terrible.
– half of people are introverts. A ton of them let every call go to voicemail which means we are pissing them off by calling them ten times in a few days. Seriously pissing them off. I know. I’m a certified, high order introvert.
– I like to listen to what people say about research companies online. People DO search out the numbers on their call display and identify survey companies. Even if you use local numbers to encourage participation. And yet again, this makes us look bad.

Why do we allow this? For the sake of data integrity? Hogwash. It’s easy for me.

Do we care about respondents or not?

Do Smartphones Really Produce Lower Scores? Understanding Device Effects on Survey Ratings by Jamie Baker-Prewitt #CASRO #MRX

Live blogging from the CASRO Digital conference in San Antonio, Texas. Any errors or bad jokes are my own.

“Do Smartphones Really Produce Lower Scores? Understanding Device Effects on Survey Ratings”

As the proliferation of mobile computing devices continues, some marketing researchers have taken steps to understand the impact of respondents opting to take surveys on smartphones. Research conducted to date suggests a pattern of lower evaluative ratings from smartphone respondents, yet the cause of this effect is not fully understood. Whether the observed differences truly are driven by the data collection device or by characteristics of smartphone survey respondents themselves requires further investigation. Leveraging the experimental control associated with a repeated measures research design, this research seeks to understand the implications of respondent-driven smartphone survey completion on the survey scores obtained.

  • Jamie Baker-Prewitt, SVP/Director of Decision Science, Burke, Inc.
  • Tested four devices for data quality and responses
  • Brand awareness was not significantly different
  • Brand engagement – trust, financially stable, value, popular, proud, socially responsible – did show differences. PC users had higher ratings.  Smartphone takers had lower ratings.
  • Customer engagement – purchase, recommend, loyalty, preference – half of tests showed significant differences. PC users had higher recommend scores and smartphone takers had lower recommend scores.
  • Different topics and sources all suggested that devices cause lower ratings
  • Did a nice repeated measures design with order controls
  • Frequency of purchasing looked the same on both devices, average cell phone bill showed no differences [interesting data point!]
  • No differences on brand engagement – 1 out of 30 tests was significant [i.e., roughly the 5% error rate we expect due to chance; see the quick arithmetic after this list]
  • Purchase data looked very similar in many cases for PC vs phone, frequency distributions were quite similar
  • Correlations between PC and phone scores were around .8, which is very high [recall people did the same survey twice, once on each device]
  • Current research replicates original research, no significant device effect. Did not replicate lower scores from smartphones.
  • Study lacked mundane realism, they were in a room with other people taking the survey, there weren’t ‘at home’ distractions but there were distractions – chatty people, people needed assistance, people might have simply remembered what they wrote in the first survey
  • Ownership of mobile will continue to grow and mobile surveys will grow
  • Business professionals are far more likely to answer surveys via mobile; fast food customers are more likely to use a smartphone for surveys
  • Very few people turned the phone horizontally – they could see less of the screen at once but it was easier to read. Why not tell people they CAN turn their phone horizontally?
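
A quick bit of arithmetic behind the “1 out of 30” aside above, assuming the 30 tests were independent at α = 0.05 (real survey items won’t strictly satisfy that, so treat this as illustration only):

```python
# Expected chance findings from 30 independent tests at alpha = 0.05.
alpha, n_tests = 0.05, 30
expected_false_positives = alpha * n_tests        # 1.5 "significant" results by chance
p_at_least_one = 1 - (1 - alpha) ** n_tests       # ~0.79 chance of at least one
print(expected_false_positives, round(p_at_least_one, 2))
```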


The myth of the Total Survey Error approach

Total Survey Error is a relatively recent approach to understanding the errors that occur during the survey, or research, process. It incorporates sampling errors, non-sampling errors, and measurement errors, including such issues as specification error, coverage error, non-response error, instrument error, respondent error, and pretty much every other error that could possibly exist. It’s an approach focused on ensuring that the research we conduct is as valid and reliable as it can possibly be. That is a good thing.

Here’s the problem. Total Survey Error is simply a list. A list of research errors. A long list, yes, but a list of every error that every researcher has been trained to recognize and account for in every research project they conduct.

We have been trained to recognize a bad sample, improve a weak survey, conduct statistics properly, generalize appropriately, and not promise more than we can deliver. Isn’t conducting research simply the old name for ‘total survey error’? It is not a new, unique approach. It does not require new study or new books.

Perhaps I’m missing something, but isn’t total survey error how highly skilled, top notch researchers have been trained to do their job?

Proud to be a member of survey research #MRX

Guest Post by Prof. Dr. Peter Ph. Mohler 

Having listened to countless papers and read innumerable texts on non-response, non-response bias, survey error, even total survey error, or the global cooling of the survey climate, it seems timely to consider why, after so many decades working in what those papers describe as a seemingly declining field called “survey research”, I still do not intend to quit that field.

The truth is, I am mighty proud to be a member of survey research because:

  • We can be proud of our respondents who, after all these years, still give us an hour or so of their precious time to answer our questions to the best of their abilities.
  • We can be proud of our interviewers who, despite low esteem/status and payment, and under often quite difficult circumstances, get in contact with our respondents, convince them to give us some of their time, and finally do an interview to the best of their abilities.
  • We can be proud of our survey operations crews, who, despite low esteem/status and increasing costs/time pressures organize data collection, motivate interviewers, and edit/finalize data for analysis.
  • We can be proud of our social science data archives, which for more than five decades have preserved and published surveys nationally and internationally as a free, high quality service unknown in other strands of science.
  • We can be proud of our survey designers, statisticians, and PIs, who have constantly improved survey quality from its early beginnings.

Of course there are drawbacks, such as clients insisting on asking dried and dusted questions, or PIs, often academic, who do not appreciate the efforts and successes of respondents, interviewers, survey operations, and all the rest, and there are some who deliberately fabricate surveys or survey analyses (including all groups mentioned before).

But it is no good to define a profession by its outliers or its less-than-optimal outcomes.

Thus it seems timely to turn our attention from searching for errors to optimizing survey process quality and at long last defining benchmarks for good surveys that are fit for their intended purpose.

The concepts and tools are already there, waiting to be used to our benefit.

 As originally posted to the AAPOR distribution list. 

Peter is owner and Chief Consultant of Comparative Survey Services (COMPASS) and honorary professor at Mannheim University. He is the former Director of ZUMA, Mannheim (1987-2008). Among others, he directed the German General Social Survey (ALLBUS) and the German part of the International Social Survey Programme (ISSP) for more than 20 years. He was a founding senior member of the European Social Survey (ESS) Central Scientific Team 2001-2008. He is co-editor of Cross-Cultural Survey Methods (John Wiley, 2003) and Survey Methods in Multinational, Multiregional, and Multicultural Contexts (John Wiley, 2010, AAPOR Book Award 2013). Together with colleagues of the ESS Central Coordinating Team, he received the European Descartes Prize in 2005.

 
