Concerning quality in opt-in panels  #AAPOR #MRX 


6 papers moderated by Martin Barron, NORC

prezzie 1: evaluating quality control questions, by Keith Phillips

  • true or false: people become disengaged for a moment but not throughout an entire survey – these people are falsely accused [agree so much!]
  • if most people fail a data quality question, it's a bad question
  • one engagement check: a long paragraph that ends with "please answer none of the above" – or use a question that everyone can answer –> is there harm in removing the people who fail?
  • no matter how the dataset was cleaned, the answers stayed the same – these failures don't hurt data quality, likely because they happen randomly
  • the real problem is people who fail many data quality questions – which checks are most effective at catching them?
  • the most effective checks were low-incidence items, open ends, and speeding – a rough flagging sketch along those lines follows this list
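
Not the presenter's code, just a minimal sketch of that idea: count failures across several independent checks (a low-incidence item, an open end, speeding) and only flag respondents who fail more than one. All column names and thresholds below are hypothetical.

```python
# Minimal sketch: flag respondents only when they fail several independent
# quality checks, rather than removing anyone who fails a single one.
# Column names and thresholds are hypothetical.
import pandas as pd

def flag_quality(df, speed_cutoff_seconds=120, min_open_end_chars=5, max_failures=2):
    """Count quality-check failures per respondent and flag only repeat offenders."""
    checks = pd.DataFrame(index=df.index)
    # Speeding: finished faster than a plausible minimum duration.
    checks["speeder"] = df["duration_seconds"] < speed_cutoff_seconds
    # Low-incidence check: claims a behaviour almost nobody actually has.
    checks["low_incidence"] = df["owns_private_jet"] == "yes"
    # Open end: essentially empty answer.
    checks["bad_open_end"] = df["open_end"].str.strip().str.len().fillna(0) < min_open_end_chars
    out = df.copy()
    out["n_failures"] = checks.sum(axis=1)
    # Only respondents failing several independent checks are flagged for removal.
    out["flag_for_removal"] = out["n_failures"] >= max_failures
    return out

# Toy usage
data = pd.DataFrame({
    "duration_seconds": [90, 600, 450],
    "owns_private_jet": ["yes", "no", "no"],
    "open_end": ["asdf", "I mostly shop online for groceries", ""],
})
print(flag_quality(data)[["n_failures", "flag_for_removal"]])
```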

prezzie 2: key factor of opinion poll quality

  • errors in political polling in Canada have doubled over the last ten years
  • telephone coverage has decreased from 95% to 67%
  • online panels are highly advantageous for operational reasons but carry high coverage error, and that error depends on demographic characteristics
  • online surveys generated higher item selection than IVR/telephone

prezzie 3: new technology for global population insights

  • random domain intercept technology – samples people who land on 404 pages, reaches non-panel people
  • similar to random digit dialing
  • allows access to many countries around the world
  • skews male, skews younger, but that is the nature of the internet
  • response rates in the USA are 6%, compared to up to 29% elsewhere [wait until we train them with our bad surveys. the rates will come down!]
  • 30% mobile in the USA, but this is completely different around the world
  • a large majority of these respondents have never or rarely taken surveys – very different from panel members

prezzie 5: surveys based on incomplete sampling

  • first mention of total survey error [it's a splendid thing, isn't it!]
  • nonprobability samples are more likely to be early adopters [no surprise, people who want to get in with new tech want to get in with other things too]
  • demographic weighting is insufficient
  • how else are nonprobability samples different? more social engagement, higher self-importance, more shopping behaviours, happier in life, feeling part of the community, more internet usage
  • a subset of questions can be used to help reduce bias – 60 measures were reduced to: number of surveys per month, hours on the internet, trying new products first, time spent watching TV, using coupons, and number of times moved in the last 5 years (a calibration sketch follows this list)
  • calibrated research results matched census data well
  • probability sampling is always preferred but we can compensate greatly
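
The paper's actual calibration method wasn't shown, but the general idea can be illustrated with a small raking (iterative proportional fitting) sketch: adjust weights until the sample matches external targets on the chosen variables. The variables, categories, and target proportions below are invented for illustration only.

```python
# Minimal raking sketch: calibrate a nonprobability sample to external targets.
# The calibration variables and benchmark proportions here are made up; the
# paper's variables (surveys per month, hours online, etc.) would be used the
# same way once categorised.
import pandas as pd

def rake(df, targets, weight_col="weight", iterations=25):
    """Iteratively adjust weights so each variable's weighted margins hit its targets."""
    df = df.copy()
    df[weight_col] = 1.0
    for _ in range(iterations):
        for var, target_props in targets.items():
            weighted = df.groupby(var)[weight_col].sum()
            current_props = weighted / weighted.sum()
            # Multiply each respondent's weight by target share / current share.
            factors = {cat: target_props[cat] / current_props[cat] for cat in target_props}
            df[weight_col] *= df[var].map(factors)
    return df

# Toy sample skewed toward heavy internet users and frequent survey takers
sample = pd.DataFrame({
    "internet_hours": ["high", "high", "high", "low", "low"],
    "surveys_per_month": ["many", "many", "few", "few", "few"],
})
targets = {
    "internet_hours": {"high": 0.40, "low": 0.60},      # hypothetical benchmark
    "surveys_per_month": {"many": 0.20, "few": 0.80},   # hypothetical benchmark
}
print(rake(sample, targets))
```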

prezzie 6: evaluating questionnaire biases across online sample providers

  • calculated the absolute difference possible when completely rewriting a survey in every possible way – same topic but different question order, wording, answer options, answer order, imagery, and with or without a don't know option (a sketch of the calculation follows this list)
  • for example, do you like turtles vs do you like cool turtles
  • the probability panel did the best, crowd-sourced was second best, and opt-in panel, river, and app samples clustered together at the worst
  • conclusions – more research is needed [shocker!]
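
A rough sketch of the comparison being described: field the same question in several deliberately different versions with each sample source, then measure how far apart the estimates can get. The numbers below are invented purely to illustrate the calculation, not the paper's findings.

```python
# Minimal sketch: spread of estimates across reworded versions of the same
# question, by sample source. All figures are hypothetical.
import pandas as pd

# Percent agreeing with the same underlying question, asked in four reworded versions
estimates = pd.DataFrame({
    "version_a": [52, 55, 58, 57],
    "version_b": [54, 61, 66, 64],
    "version_c": [51, 48, 45, 49],
    "version_d": [53, 59, 63, 60],
}, index=["probability_panel", "crowd_sourced", "opt_in_panel", "river"])

# Largest absolute gap between any two versions, per source: a stable source
# should give similar answers no matter how the question is worded.
spread = estimates.max(axis=1) - estimates.min(axis=1)
print(spread.sort_values())
```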