Evaluating polling accuracy #AAPOR #MRX 


moderated by Mary McDougall, CfMC Survox Solutions

prezzie #1: midterm election polling in Georgia

  • georgia has generally been getting more media attention because it is bluer than expected; the change may be fast enough to outpace the polls; the population has changed a lot, particularly around atlanta, and georgia has become less white
  • telephone survey using voter registration info, tested three weights – voter data, party targets, education weights (a rough weighting sketch follows this list)
  • the registered voter weight was much better; education weighting was worse
  • voter weights improved estimates in georgia, but you need access to voter information
  • [why do presenters keep saying we need more research for reliability purposes, isn’t that the default?]
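
the weighting comparison is the interesting part of this talk, so here is a minimal sketch of raking (iterative proportional fitting), one generic way to build weights like these – the respondents and targets below are invented, and in the actual study the targets would come from the voter file:

```python
import numpy as np

# each row is one hypothetical respondent: (party, education)
respondents = [
    ("dem", "college"), ("dem", "no_college"), ("rep", "college"),
    ("rep", "no_college"), ("rep", "no_college"), ("dem", "college"),
]

# made-up population targets; in the talk these came from registration data
targets = {
    "party": {"dem": 0.48, "rep": 0.52},
    "education": {"college": 0.35, "no_college": 0.65},
}

weights = np.ones(len(respondents))
for _ in range(50):  # iterate until the weighted margins converge
    for dim, idx in (("party", 0), ("education", 1)):
        for level, share in targets[dim].items():
            mask = np.array([r[idx] == level for r in respondents])
            current = weights[mask].sum() / weights.sum()
            weights[mask] *= share / current  # scale this cell toward target

weights *= len(respondents) / weights.sum()  # normalize to a mean weight of 1
print(np.round(weights, 3))
```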

prezzie #2: error in the 2014 preelection polls

  • house effects – the difference between one poll and every other poll, i.e., the difference from the industry average
  • they aren’t what they used to be; they used to be driven by interview method and weighting practices
  • a regression model is better than difference-of-means tests (see the sketch after this list)
  • could it be whether the pollster is extremely active or only polls once in a while?
  • results show the more you poll, the more accurate you are, and if you poll more in risky areas, you are less accurate – but overall these results were kind of null
  • a second model using just the number of pollsters was much better – arkansas had a lot more error, and it had the most pollsters
  • in the end, they can’t really explain it
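
to make the regression-versus-difference-of-means point concrete, here is a rough sketch of estimating house effects as pollster fixed effects while holding timing constant – all polls and numbers below are invented, and this is just the generic idea, not the presenter’s actual model:

```python
import numpy as np
import pandas as pd

polls = pd.DataFrame({
    "pollster": ["A", "A", "B", "B", "C", "C", "C"],
    "days_out": [30, 10, 25, 5, 40, 20, 3],
    "error":    [2.1, 1.5, -0.8, -1.2, 0.4, 0.9, 0.2],  # poll minus result, pts
})

# design matrix: one dummy per pollster (no intercept) plus days to election
X = pd.get_dummies(polls["pollster"], prefix="house", dtype=float)
X["days_out"] = polls["days_out"].astype(float)
coefs, *_ = np.linalg.lstsq(X.to_numpy(), polls["error"].to_numpy(), rcond=None)

# each house_* coefficient is that pollster's systematic lean, net of timing
for name, est in zip(X.columns, coefs):
    print(f"{name}: {est:+.2f}")
```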

prezzie #3: north carolina senate elections

  • the choices: RDD or registration-based sampling? will turnout be high or low? and a small university has limited resources against highly talented competition
  • chose RBS and did three polls; worked saturday to thursday, used live interviewers, and screened for “certain” or “probably will” vote
  • RBS worked well here; there were demographic gaps – a big race gap and big party gaps

prezzie #4: opinion polls in referendums

  • [seriously presenters, what’s with these slides that are paragraphs of text?]
  • most polls are private and not often released; the questions are all different; there is no incumbent being measured
  • the data here are 15 tobacco control elections and 126 questions in total; courts forced the polls to be public – find them on the legacy library website
  • five types of questions – e.g., uninformed heads-up questions where you’re asked whether you agree or strongly agree [i.e., leading, biased, unethical questions. annie not happy!]
  • predictions are better closer to the election; spending is a good predictor, and city size is a good predictor (see the sketch after this list)
  • using the word ‘strongly’ in the question doesn’t improve accuracy
  • asking the question exactly as the ballot doesn’t improve the accuracy
  • asking more questions from your side of the opinion doesn’t improve the accuracy 
  • polls often overestimate the winner’s percentage
  • [these polls are great examples of abusing survey best practices research]
  • post election surveys are accurate and useful for other purposes
  • [big slam against AAPOR for not promoting the disclosure of survey sponsors]
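
the accuracy-predictor findings boil down to a regression of poll error on election and question features; here is a toy version with invented data (the real analysis had 126 questions and more covariates, like spending and city size):

```python
import numpy as np

# made-up questions: days before the vote, whether the wording used
# "strongly", and the resulting absolute error in points
days_out  = np.array([60.0, 45.0, 30.0, 20.0, 10.0, 5.0])
strongly  = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
abs_error = np.array([8.0, 7.5, 5.0, 4.0, 2.5, 2.0])

# OLS: error ~ intercept + days_out + strongly; a near-zero "strongly"
# coefficient mirrors the finding that the wording tweak doesn't help
X = np.column_stack([np.ones_like(days_out), days_out, strongly])
coefs, *_ = np.linalg.lstsq(X, abs_error, rcond=None)
print(dict(zip(["intercept", "days_out", "strongly"], np.round(coefs, 3))))
```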

prezzie #5: comparing measures of accuracy

  • the big issue is opt-in surveys versus random samples [assuming random sampling of humans is possible!]
  • accuracy affected by probability sampling, days to election, sample sizes, number of fielding days
  • used elections in sweden, which has eight parties in parliament; many traditional methods are inappropriate for multi-candidate elections (a simple alternative is sketched after this list)
  • sample size was a good predictor, fielding days were not predictive, and opt-in samples were worse, but the overall r-squared was very small
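
for what “traditional methods are inappropriate” means in practice: classic two-candidate accuracy measures (e.g., the Mosteller measures) don’t extend cleanly to eight parties, so one simple multi-party alternative is mean absolute error across all party shares – the figures below are purely illustrative:

```python
# illustrative poll and result shares for an eight-party parliament
poll   = {"S": 31.0, "M": 23.5, "SD": 9.0, "MP": 8.0,
          "C": 6.0, "V": 5.5, "FP": 5.5, "KD": 4.5}
result = {"S": 31.0, "M": 23.3, "SD": 12.9, "MP": 6.9,
          "C": 6.1, "V": 5.7, "FP": 5.4, "KD": 4.6}

# average the absolute miss over every party, not just a two-way margin
mae = sum(abs(poll[p] - result[p]) for p in result) / len(result)
print(f"mean absolute error: {mae:.2f} points")
```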

prezzie #6: polling third party candidates

  • why do we care about these? we don’t want to waste space on candidates who only get 1% of the vote
  • 1500 data points, 121 organizations, 94 third party candidates – thank you to HuffPollster and DailyKos
  • aggregate accuracy was good, but most errors were overstatements – there was systematic bias
  • using the candidates’ names makes a difference, but if you name one candidate, you should name them all – i know i’m not voting for the top two candidates, so i’m probably voting for this third party person you listed
  • accuracy gets better closer to election day; sometimes you don’t know who the third party candidate is until close to the date
  • live phone and IVR underestimate, internet overestimates (mode comparison sketched after this list)
  • there were important house effects – CBS/YouGov underestimates, PPP overestimates, and on average FOX news is fairly accurate with third party candidates
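
the mode comparison is just average signed error grouped by interview mode; a tiny sketch with made-up polls (negative means the third-party share was underestimated):

```python
import pandas as pd

df = pd.DataFrame({
    "mode":  ["live phone", "live phone", "IVR", "IVR", "internet", "internet"],
    "poll":  [3.0, 2.0, 2.5, 1.5, 6.0, 5.0],   # third-party share in the poll
    "final": [4.0, 3.5, 4.0, 3.0, 4.5, 4.0],   # share on election day
})

df["signed_error"] = df["poll"] - df["final"]  # negative = underestimate
print(df.groupby("mode")["signed_error"].mean())
```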