Panel: Public Opinion Quarterly Special – Survey research today and tomorrow #AAPOR #MRX #NewMR 


Live note taking at #AAPOR in Austin, Texas. Any errors or bad jokes are my own.

Moderator: Peter V. Miller, U.S. Census Bureau 

  • He is accepting 400-word submissions responding to these papers, to be published in an upcoming issue; due June 30, send to peter.miller@census.gov

Theory and Practice in Nonprobability Surveys:
Parallels Between Causal Inference and Survey Inference
Andrew Mercer, Pew Research Center; Frauke Kreuter, University of Maryland; Scott Keeter, Pew Research Center; Elizabeth Stuart, Johns Hopkins University
Discussant: Jill DeMatteis, Westat

  • Noncoverage – when people can’t be included in a survey
  • Problem is when they are systematically biased
  • Selection bias is not as useful a concept in a nonprobability sample, as there is no sampling frame and maybe not even a sample
  • Need a more general framework
  • Random selection and random treatment assignment is the best way to avoid bias
  • Need exchangeability – know all the confounding, correlated variables
  • Need positivity – everyone needs to be able to get any of the treatments, coverage error is a problem
  • Need composition – everyone needs to be in the right proportions 
  • You might know the percentage of people who want to vote one way, and you might also know you have too much of a certain demographic in your group; but it’s never just one demographic group, it’s ten or twenty or 100 important demographic and psychographic variables that might have an association with the voting pattern
  • You can’t just weight a demographic group up [pay attention!]
  • We like to assume we don’t have any of these three problems, but you can never know if you’ve met them all; we hope random selection accomplishes this for us, or with quota selection we hope it is met by design
  • One study was able to weight using census data a gigantic sample and the results worked out well [makes sense if your sample is so ridiculously large that you can put bigger weights on a sample of 50 000 young men]
  • Using demographics and psychographics, such as religion and political affiliation, helps to create more accurate results
  • This needs to be done in probability and nonprobability samples
  • You can never be certain you have met all the assumptions
  • Think about confounding variables during survey design, not just demographics, tailored to the research question at hand
  • Confounding is more important than math – it doesn’t matter what statistic you use, if you haven’t met the requirements first you’re in trouble
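A toy illustration of the weighting point above (all numbers are made up for the example, not from the talk): cell weights are just population share divided by sample share, and weighting a group up only removes bias if the variable you weight on captures all the confounding.

```python
# Toy sketch of one-variable post-stratification weighting (hypothetical shares).
pop_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}   # assumed census shares
samp_share = {"18-29": 0.10, "30-49": 0.35, "50+": 0.55}  # assumed sample shares

# Weight for each cell = population share / sample share.
weights = {g: pop_share[g] / samp_share[g] for g in pop_share}
# weights["18-29"] == 2.0: young respondents get weighted up, but this only
# removes bias if, within each cell, respondents resemble nonrespondents --
# i.e. if age captures all the confounding, which it rarely does on its own.
```

In practice this is done over the joint cells of many variables at once, which is exactly why "ten or twenty or 100" relevant variables make the problem hard.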

Apples to Oranges or Gala vs. Golden Delicious? Comparing Data Quality of Nonprobability Internet Samples to Low Response Rate Probability Samples

David Dutwin, SSRS; Trent Buskirk, Marketing Systems Group
Discussant: George Terhanian, NPD Group

  • Sample size > 80,000, 9% response rate for the probability sample [let’s be real here, you can’t have a probability sample with humans]
  • The matching process is not foolproof; it uses categorical matching with a matching coefficient, and ties were broken by random selection
  • Looked at absolute bias, standard deviation, and overall mean absolute bias
  • Stuck with demographic variables; conditional variables nested within gender, age, race, or region
  • Weighted version was good, but matched and raked was even closer, variability is much less with the extra care
  • Nonprobability telephone surveys consistently had less variability in the errors
  • Benchmarks are essential to know what the error actually is; you can’t judge the bias without a benchmark
  • You can be wrong, or VERY wrong and you won’t know you’re wrong
  • Low response rate telephone gets you better data quality, much more likely you’re closer to truth
  • Cost is a separate issue of course
  • Remember fit for purpose – in politics you might need reasonably accurate point estimates 

Audience discussion

  • How do you weight polling research when political affiliation is part of both equations? What is the benchmark? You can’t use the same variables for weighting, measuring, and benchmarking, or you’re just creating the results you want to see
  • If we look at the core demographics, maybe we’ve looked at something that was important [love that statement, “maybe” because really we use demographics as proxies of humanity]
  • [if you CAN weight the data, should you? If you’re working with a small sample size, you should probably just get more sample. If you’re already dealing with tens of thousands, then go ahead and make those small weighting adjustments]
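The “matched and raked” adjustment mentioned in the second paper can be sketched as iterative proportional fitting: alternately rescale weights so each margin matches its population target. Respondent counts and targets below are made up for illustration.

```python
# Minimal raking (iterative proportional fitting) sketch over two margins.
# Cells, counts, and population targets are all hypothetical.
rows = [("M", "18-29"), ("M", "30+"), ("F", "18-29"), ("F", "30+")]
counts = [10, 40, 20, 30]             # assumed sample counts per cell
targets_sex = {"M": 0.49, "F": 0.51}  # assumed population margins
targets_age = {"18-29": 0.25, "30+": 0.75}

w = [1.0] * len(rows)  # start with uniform weights
for _ in range(50):  # alternate over margins until they converge
    for dim, targets in ((0, targets_sex), (1, targets_age)):
        tot = sum(c * wi for c, wi in zip(counts, w))
        for level, share in targets.items():
            # Current weighted mass in this margin level.
            cur = sum(c * wi for cell, c, wi in zip(rows, counts, w)
                      if cell[dim] == level)
            factor = (share * tot) / cur  # rescale the level to its target
            w = [wi * factor if cell[dim] == level else wi
                 for cell, wi in zip(rows, w)]
```

After convergence the weighted sex and age margins match the targets, but – per the discussion above – only the margins you raked on are fixed, not the joint distribution or any unmeasured confounder.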
