Big Things from Little Data by Sherri Stevens and Frank Kelly #CASRO #MRX


Live blogging from the CASRO Digital conference in San Antonio, Texas. Any errors or bad jokes are my own.

 “Big Things from Little Data”

Sherri Stevens
While great effort has been expended on improving how we collect online data, insufficient attention has gone to making full use of the data collected. Partial completes of long surveys are discarded. But if there were an effective method to salvage this data, we could increase the average sample size for any given question in a survey by 20% at no additional cost. As an extension of previous research on survey modularization, this research evaluates the potential of partial completes in a modularized and randomized survey design.
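That 20% figure can be sanity-checked with back-of-envelope arithmetic. The numbers below are hypothetical, not from the study: I assume 1,000 invited respondents, a 25% drop-out rate, and that 60% of drop-outs still answered a given question before abandoning.

```python
# Back-of-envelope: how salvaged partial completes boost per-question
# sample size. All inputs are hypothetical, for illustration only.

invited = 1000           # respondents who started the survey
drop_out_rate = 0.25     # share who abandoned before finishing
reach_question = 0.6     # assumed share of drop-outs who answered this question

completes = round(invited * (1 - drop_out_rate))            # full completes
salvaged = round(invited * drop_out_rate * reach_question)  # usable partials

gain = salvaged / completes
print(f"{completes} completes + {salvaged} salvaged partials = {gain:.0%} more sample")
# → 750 completes + 150 salvaged partials = 20% more sample
```

Under these assumptions, salvaging partials adds 150 usable responses on top of 750 full completes, i.e. the 20% gain claimed above, without fielding a single extra interview.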

  • Frank Kelly, SVP, Global Marketing and Strategy, Lightspeed Research
  • Sherri Stevens, Vice President, Global Innovation, Millward Brown

Frank Kelly
  • Online surveys have averaged around 20 minutes for the last ten years.
  • 65% of smartphone users are not willing to spend more than 15 minutes on a survey.
  • Almost half of the time spent on completed surveys goes to surveys longer than 25 minutes.
  • Longer surveys have higher drop-out rates: 12% for surveys up to ten minutes, 28% for 31 minutes or more. Why can’t we use the partial data?
  • Drop-out rates on mobile are far higher than on computer: 46% on smartphone, 25% on tablet, 12% on computer.
  • New panelists have much higher drop-out rates.
  • Around 40% of new panelists join via mobile. People think it makes sense and then realize it’s not that good after all.
  • A fully optimized survey still took 34% longer to complete on the phone than on the computer.
  • We could charge more for long surveys, tell people to write shorter surveys, or chunk surveys into pieces and impute or fuse the missing data.
  • Proposal – don’t ask everybody everything. Work with human nature; encourage responses through smaller surveys.
  • Tried various orderings of the modules; not all modules had the same sample size, depending on the importance of the module.
  • 1,000 completes at 26 minutes cost $6,500; 1,400 completes at 17 minutes cost $6,500; 1,000 completes at 19 minutes cost $5,000. The modular design allowed them to save some costs.
  • Incompletes could be by module, by skip pattern, or by drop-outs
  • High-incidence study covering social media, common brands, and respondent info.
  • In this study, 17% of people dropped out, in line with the norm. But of those drop-outs, 35% completed at least one module.
  • What drives drop-out? Boring questions or topics, hard questions, extended screening, and low tolerance for survey taking.
  • Survey enjoyability was higher with the modular survey, and satisfaction with survey length was also higher.
  • Respondents reported more social media activities and brand engagement in the modular survey.
  • Richer open-ended responses in the modular survey.
  • It’s not fusion and Bayesian networks; it’s a generally applicable model, but it still requires careful design.
  • Think about partial completes as modular completes
  • Look for a big positive effect on fieldwork costs and data quality.
  • Are there question types better suited to this? What is the best way to randomize modules?
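The design described above can be sketched in a few lines of code. This is my own illustration, not Lightspeed/Millward Brown’s actual implementation: module names are guessed from the study topics mentioned earlier, and the per-module drop-out probability is an invented parameter. The key idea is that randomizing module order spreads late-survey attrition across modules, so a drop-out still counts as a complete for every module they finished.

```python
# Sketch of a modularized, randomized survey design (illustrative only).
# Each respondent answers a fixed screener, then the remaining modules
# in random order; a drop-out still yields usable data for every module
# they reached, i.e. partial completes become modular completes.
import random
from collections import Counter

MODULES = ["screener", "social_media", "brands", "respondent_info"]

def assign_order(rng):
    """Randomize module order after the fixed screener."""
    rest = MODULES[1:]
    rng.shuffle(rest)
    return ["screener"] + rest

def simulate(n_respondents, per_module_drop=0.07, seed=42):
    """Count usable (fully answered) responses per module."""
    rng = random.Random(seed)
    usable = Counter()
    for _ in range(n_respondents):
        for module in assign_order(rng):
            if rng.random() < per_module_drop:
                break            # respondent abandons the survey here
            usable[module] += 1  # module finished -> data is usable
    return usable

counts = simulate(1000)
```

Because every module is equally likely to appear in each post-screener position, no single module bears the full brunt of end-of-survey fatigue, which is what lets partial completes be pooled per question rather than discarded.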
