When does #AAPOR infuriate me? #MRX 


Let me begin by saying I love AAPOR. I go to many conferences around the world, so I can make some fair comparisons regarding the content and style of presentations. While AAPOR presenters are not known for polished delivery, AAPOR is top notch for its focus on methods and science. There is no fluff here. Give me content over decoration any day. I always recommend AAPOR conferences to my scientifically minded research friends. That said…

Today I heard inferences that the difference between probability panels and nonprobability panels is quality. Are you kidding me? Since when does recruitment method translate into poor quality? Different isn’t bad. It’s different. I know firsthand just how much work goes into building a quality panel. It ain’t easy to find and continually interest people in your (my) boring and tedious surveys. Fit for purpose is the issue here. Don’t use a data source for point estimates when it’s not suited for point estimates.

And stop asking for response rates with nonprobability panels. High rates are not good and low rates are not bad. A high panel response rate often just means that everyone with a low response rate has been kicked off the panel. It does NOT mean you’re getting better representativeness. Instead, ask about their data quality techniques. That’s what truly matters. (A sketch of the kind of techniques I mean follows.)
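To make that concrete, here is a minimal sketch of two of the most common checks, flagging speeders and straightliners. The thresholds, field names, and function names are hypothetical illustrations, not an industry standard; real panels combine many more signals than these two.

```python
# Minimal sketch of two common panel data-quality checks: speeding and
# straightlining. All thresholds here are hypothetical illustrations.

def flag_speeder(duration_seconds, median_duration, cutoff=0.48):
    """Flag a respondent who finished in under roughly half the median time."""
    return duration_seconds < cutoff * median_duration

def flag_straightliner(grid_answers, max_identical_share=0.95):
    """Flag a respondent who gave (nearly) the same answer across a grid."""
    most_common = max(grid_answers.count(a) for a in set(grid_answers))
    return most_common / len(grid_answers) >= max_identical_share

# Example: a respondent who answered a 10-item grid with all 3s in 90 seconds,
# against a median completion time of 420 seconds.
answers = [3] * 10
print(flag_speeder(90, 420))        # True: well under half the median time
print(flag_straightliner(answers))  # True: every grid answer is identical
```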

I heard today that a new paradigm is coming and AAPOR ought to lead it. Well, sadly, if AAPOR members still think response rates are meaningful for panels, still think nonprobability panels are worthless, and are still running email subject line tests, oh my, you’re in for a treat when you discover what eye tracking is. AAPOR leading? Not even close. You’re meandering at the very end of an intensely competitive horse race.

Dear AAPOR, please enter the 21st century. Market researchers have been doing online surveys for twenty years. We finished our online/offline parallel tests ten years ago. We finished subject line testing ten years ago too. We’ve been doing big data for 50 years. We’ve been using social media data for 10 years. I could go on but there’s no need.

Where have you been all these years? Arguing that probability panels are the only valid method? That’s not good enough. Let me know when you’re open to learning from someone outside your bubble. Until then, I’ll be at the front of the horse race.

4 responses

  1. […] -Annie Petit shares what infuriates her about AAPOR. [LoveStats] […]

  2. Annie, I’ll play the devil’s advocate and take the “fit for purpose” mantra in a different direction. I only saw cross-tabs most of the time at this conference (and I am not 100% sure that all of them were weighted cross-tabs). But I am not ranting like, “Where are the instrumental variable estimation techniques invented in the 1950s? Where is the logistic regression invented in the 1960s? Where are the latent variable models invented in the 1970s? Where are the bootstrap methods researched throughout the 1980s? Where are the Bayesian computational methods rediscovered (not invented) by statisticians in the 1990s, or the multilevel models from about the same time frame? Where are the functional data analysis methods (analysis of curves and shapes) and causality estimation techniques developed throughout the 2000s?” (I had seen some presentations with more than cross-tabs as far as methods and statistical techniques go, of course, just as I am sure you had seen presentations that were stronger on the “physical presentation” side, as you put it.)

    If the cross-tabs work for most purposes of this association, we can stick to cross-tabs. If people need to research their subject lines for their specific application, please let them do so, or guide them to the MRX resources if those have already nailed down the research question (I doubt it; attitudes change over time, so subject lines that may have been OK 10 years ago may not be OK today, simply because they don’t fit on the screen of your iPhone although they fit on the screen of your CRT monitor in 2000. And on top of that, guess what, the definition of science is that experimental results must be reproducible by an independent body; reproducing research is always good, and lack of reproducibility led some journals to adopt crazy editorial policies, as we heard earlier this year). If federal statistical agencies need a justifiable point estimate and a justifiable standard error around it, please, please let them use probability sampling, OK? Because, for all the love that some people out there have for non-probability panels, they do not work for (many) survey statisticians, because they lack the math backbone of probability sampling that these survey statisticians are used to. There’s an inherent mathematical beauty in the robustness of probability sampling and the inference that follows (a quick sketch of that backbone appears below). I am not saying that non-prob panels are wrong and useless; they aren’t (my organization presented on the topic; I hope you saw the presentations by Charles DiSogra and Andrew Burkey at the mini-conference session on Saturday morning). We just don’t know how to generalize from these panels to the larger population without a lot, A LOT of hoopla; we just don’t have the right math that would convince everybody (just as random sampling did not convince everybody at first in the 1930s).

    And at this stage, we are back to the first line of my post: to fit for purpose. If the purpose of the MRX folks is to obtain a vague idea of whether you can enroll more people with a blue-red map vs. a puppy (i.e., just the direction of the effect), you probably can do that with non-prob. If the purpose of the academic folks and the federal agencies is to say that Head Start gives kids X months of advantage by the time they enter school, which in the very long run, when they enter the labor force, translates to $X,XXX more they will be making per year (I don’t quite know what the latter number would be; I think six months is an existing estimate of the first one), and that this justifies the federal spending on the program, there’s a very strong preference to utilize the methods that can give a specific and accurate answer.
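    A minimal sketch of that “math backbone,” with a made-up population under simple random sampling (all numbers hypothetical): the Horvitz-Thompson estimator is design-unbiased precisely because every unit’s inclusion probability is known, and a known inclusion probability is exactly what a nonprobability panel cannot supply.

    ```python
    import random

    # Horvitz-Thompson estimation under a known sampling design, with a
    # made-up population. Known inclusion probabilities (pi_i) are what make
    # the estimator design-unbiased; a nonprobability panel has no such pi_i.

    random.seed(1)
    population = [random.gauss(50, 10) for _ in range(1000)]  # hypothetical y values

    n = 100
    pi = n / len(population)               # equal inclusion probability under SRS
    sample = random.sample(population, n)  # simple random sample without replacement

    # Horvitz-Thompson estimate of the population total: sum of y_i / pi_i.
    # Under SRS the implied mean reduces to the plain sample mean; with unequal
    # probabilities, the 1/pi_i weights do the real work.
    ht_total = sum(y / pi for y in sample)
    ht_mean = ht_total / len(population)

    print(f"True mean:         {sum(population) / len(population):.2f}")
    print(f"HT-estimated mean: {ht_mean:.2f}")
    ```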

  3. Annie,

    I suspect that you already know a primary determinant at work and just don’t want to admit or state it: The preponderance of academic and government researchers among the AAPOR membership. These folks often remain cloistered in positions where slavish devotion to lessons of theory learned decades ago – regardless of the degree to which those lessons impact the real-world conduct and applications of research – is more important than innovation and solving problems. (In case you can’t tell, my abhorrence of the mindset is what drove me from an academic career path back when I was still in grad school.)

    True story: A few years ago, I interviewed at a large think-tank headed by a very well-known, widely respected and accomplished pollster, who had received AAPOR’s highest honor. After opening the interview with a comment disparaging my choice of a career in marketing research (i.e., versus more polling-centric opinion research), he asked me how I would approach a scenario in which I wanted to determine the effect of knowledge of some phenomenon among a specialized population on attitudes toward government action dealing with the phenomenon. Given that I already had a doctorate, I thought the question was a little condescending. But I humored him: first with a “straight” answer comprising a relatively simple study design, but one which would be really resource-intensive; then with a couple of alternatives that would provide less control, but with acceptable validity and far lower demands on resources. The interviewer stopped me in the middle of the conversation and asked why I would possibly want to approach the problem with a less-than-perfect design. I explained that, after 25-plus years in the industry, I rarely encountered budgets that would allow me to travel the “perfect” path.

    He looked at me with a suitably serious expression and deadpanned, “If we don’t have the funds to do a study the proper way here, we simply don’t do it.” (I thought that was kind of ironic, because this particular outfit tends to approach measurement of items such as media exposure in ways that are at variance with established industry norms – apparently for no reason I can determine other than to be different – and then discuss in write-ups how their findings are new and different.) I told the guy he was lucky to be in a position where he could make that decision and not have to answer to an indignant client.

    That was pretty much the end of our conversation.

    I think that attitude is reflected in what you wrote about here, Annie, as well as in the recent hubbub over revisions to the AAPOR Code. It’s too bad, really, because AAPOR offers a lot of serious attention – both topical and methodological – to issues of benefit to marketing researchers, attention which just isn’t present in an organization like MRA, for example. But, to get to the good stuff, one often has to wade through awful amounts of hubris and plain ol’ outdated thinking.

    My two cents.

    MD

  4. Hi Annie, I would love to hear their thoughts about the UK election polls. The results were poor for the purposes they were put to: the errors in the predictions were much larger than the predicted margins of error, which were tiny partly because the combined sample sizes were so large (see the quick arithmetic below). The key point for AAPOR to consider is that the probability panels produced almost identical predictions, i.e. the probability predictions were just as bad.
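    For scale, a rough back-of-the-envelope under simple random sampling; the pooled sample of 20,000 is a hypothetical figure for illustration, not the actual combined n of the UK polls:

    ```python
    import math

    # 95% margin of error for a proportion near 50% under simple random
    # sampling. The pooled n of 20,000 is hypothetical, chosen only to show
    # that with samples this large, a miss of several points cannot be
    # blamed on sampling error alone.
    p, n = 0.5, 20_000
    moe = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"95% margin of error: +/- {moe * 100:.2f} points")  # about +/- 0.7 points
    ```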

    So, whilst I am happy to say that, given a choice between probability and non-probability panels (i.e. if both are possible and affordable), I would pick probability, it is clear that the biggest problem is not sampling. The top priority might be the questions we ask, it might be the way we process the data, but it certainly is not the sampling problem. Yes, we should keep looking at sampling issues, but the PRIORITY should be to produce better predictions, not better methods of analysing response rates, drop-out rates, skip rates, satisficing, etc.

    IMHO