Our Evolving Ecosystem: A Comprehensive Analysis of Survey Sourcing, Platforms and Lengths by Mark Menig and Chuck Miller #CASRO #MRX


Live blogging from the CASRO Digital conference in San Antonio, Texas. Any errors or bad jokes are my own.

“Our Evolving Ecosystem: A Comprehensive Analysis of Survey Sourcing, Platforms and Lengths”

Chuck Miller
As more variables enter the research eco-system, assessing the impact of any specific element becomes increasingly difficult. Studying the results of this comprehensive primary research project will yield a greater understanding of the interrelationships among survey question types, survey lengths, survey completion medium (device type), and respondent sourcing (traditional panel, virtual panel, river), and of how these relate to respondent engagement and data quality.

  • Mark Menig, General Manager, TrueSample
  • Chuck Miller, President, DM2

Mark Menig
  • “Portable” experience, not tied to desk. It used to be just phones or computers. Now it’s also tablets.
  • Does screen size impact ability to complete surveys? Does survey length matter? How is data quality affected? Is there an optimal combination of devices?
  • Used a traditional research panel, a managed panel, and river sampling; compared phone, tablet, and computer; compared 3 survey lengths
  • Quota sampled on demographics by device; some cells took longer to fill, some took two months [but that delay can itself affect the results]
  • Didn’t break any grids up, same on phone and computer. Widest grid was 7 points.
  • Time viewing the concept was inversely proportional to the screen size
  • Age showed the greatest variation, and respondents aged 18-34 had the lowest quality scores
  • Those with lower quality data were far more likely to skim the concept
  • Among higher quality data, awareness that Google Glass was the concept had little impact on the amount of time spent viewing the concept
  • Re verbatims: 8% of people gave junk answers, 8% gave a valid but short response, 33% gave a short single thought, 18% gave one complex sentence, 33% gave multiple sentences
  • Lowest and highest education have better data quality – surprising
  • Influencers gave low quality data
  • Males give slightly worse data
  • “Prefer not to answer” on demographics gives a lot worse data
  • Speeders have the worst data quality
  • Affiliate samples have slightly worse data quality
  • PC data has slightly worse quality, with the best coming from tablets; he has concerns about PC data quality
  • Best data – People over 50, shoppers, average techies
  • Worst data – Tech enthusiasts, tech laggards, influentials, people under 35
  • MS Windows PC was the only OS with lower quality data; those who answer factual questions correctly (e.g., describing the government structure) provide better data
  • Prefer not to answer gives lower quality data
  • Recommends timing grid questions, because people can still answer them randomly; a rough sketch of that kind of check follows this list
  • Slower concept viewing was NOT an indicator [of data quality]
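
Since the talk recommends timing grid questions, here is a minimal sketch of what that kind of check might look like in practice. This is my own illustration in Python; the function, field names, and thresholds are hypothetical and are not the quality-scoring method TrueSample or DM2 actually uses.

```python
# Hypothetical sketch: flag respondents whose timings suggest speeding or
# random grid answering. Field names, thresholds, and rules are illustrative
# assumptions, not the speakers' actual quality-score method.
def flag_low_quality(respondent, grid_times, grid_answers,
                     speed_ratio=0.3, min_seconds_per_row=2.0):
    """Return a list of quality flags for one respondent."""
    flags = []

    # Speeder check: total interview time far below the sample median.
    if respondent["total_seconds"] < speed_ratio * respondent["sample_median_seconds"]:
        flags.append("speeder")

    # Grid-timing check: too little time per grid row to have read the items.
    for grid_id, seconds in grid_times.items():
        rows = len(grid_answers.get(grid_id, []))
        if rows and seconds / rows < min_seconds_per_row:
            flags.append(f"grid_too_fast:{grid_id}")

    # Straight-lining check: the same answer chosen on every row of a grid.
    for grid_id, answers in grid_answers.items():
        if len(answers) > 1 and len(set(answers)) == 1:
            flags.append(f"straightline:{grid_id}")

    return flags


# Example: a respondent who raced through a 7-row, 7-point grid.
print(flag_low_quality(
    {"total_seconds": 95, "sample_median_seconds": 600},
    grid_times={"brand_grid": 6.0},
    grid_answers={"brand_grid": [4, 4, 4, 4, 4, 4, 4]},
))
# -> ['speeder', 'grid_too_fast:brand_grid', 'straightline:brand_grid']
```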
