Tag Archives: panel

How the best research panel in the world accurately predicts every election result #polling #MRX 

Forget for a moment the debate about whether the MBTI is a valid and reliable personality measurement tool. (I did my Bachelor’s thesis on it, and I studied psychometric theory as part of my PhD in experimental psychology, so I can debate forever too.) Let’s focus instead on the MBTI because tests similar to it can be answered online and you can find out your result in a few minutes. It kind of makes sense, and people understand the idea of using it to understand themselves and their reactions to our world. If you’re not so familiar with it, the MBTI divides people into groups based on four continuous personality characteristics: introversion/extroversion, sensing/intuition, thinking/feeling, judging/perception. (I’m an ISTJ for what it’s worth.)

Now, in the market and social research world, we also like to divide people into groups. We focus mainly on objective and easy to measure demographic characteristics like gender, age, and region, though sometimes we also include household size, age of children, education, income, religion, and language. We do our best to collect samples of people who look like a census based on these demographic targets and oftentimes, our measurements are quite good. Sometimes, we try to improve our measurements by incorporating a different set of variables like political affiliation, type of home, pets, charitable behaviours, and so forth.

All of these variables get us closer to building samples that look like the census but they never get us all the way there. We get so close and yet we are always missing the one thing that properly describes each human being. That, of course, is personality. And if you think about it, in many cases, we’re only using demographic characteristics because we don’t have personality data. Personality is really hard to measure and target. We use age and gender and religion and the rest to help inform about personality characteristics. Hence why I bring up the MBTI: the perfect set of research sample targets.

The MBTI may not be the right test, but there are many thoroughly tested and normed personality measurement scales that are easily available to registered, certified psychologists. They include tests like the 16PF, the Big 5, or the NEO, all of which measure constructs such as social desirability, authoritarianism, extraversion, reasoning, stability, dominance, or perfectionism. These tests take decades to create and are held in veritable locked boxes so as to maintain their integrity. They can take an hour or more for someone to complete and they cost a bundle to use. (Make it YOUR entire life’s work to build one test and see if you’d give it away for free.) Which means these tests will not and cannot ever be used for the purpose I describe here.

However, it is absolutely possible for a psychologist or psychological researcher to build a new, proprietary personality scale which mirrors standardized tests, albeit in a shorter format, and performs the same function. The process is simple. Every person who joins a panel answers ten or twenty personality questions. When they answer a client questionnaire, they get ten more personality questions, and so on, and so on, until every person on the panel has taken the entire test and been assigned to a personality group. We all know how profiling and reprofiling works and this is no different. And now we know which people are more or less susceptible to social desirability. And which people like authoritarianism. And which people are rule bound. Sound interesting given the US federal election? I thought so.
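
To make the process concrete, here’s a minimal sketch of that profiling loop in Python. Everything in it is hypothetical – the 60-item bank, the batch size, and especially the toy scoring rule – since a real panel would use a properly constructed and normed scale.

```python
# Minimal sketch of incremental personality profiling on a panel.
# ITEM_BANK, Panelist, and score_profile are hypothetical placeholders.

BATCH_SIZE = 10
ITEM_BANK = [f"item_{i}" for i in range(60)]  # placeholder 60-item scale

class Panelist:
    def __init__(self, panelist_id):
        self.panelist_id = panelist_id
        self.answers = {}              # item -> 1-5 Likert response
        self.personality_group = None  # assigned once the test is complete

def next_profile_batch(panelist):
    """Next unanswered items to append to a client questionnaire."""
    remaining = [item for item in ITEM_BANK if item not in panelist.answers]
    return remaining[:BATCH_SIZE]

def score_profile(answers):
    """Toy scoring rule: split on the mean of the first 15 items."""
    trait = [answers[item] for item in ITEM_BANK[:15]]
    return "high" if sum(trait) / len(trait) >= 3 else "low"

def record_answers(panelist, responses):
    """Store a batch of answers; score once the full bank is done."""
    panelist.answers.update(responses)
    if len(panelist.answers) == len(ITEM_BANK):
        panelist.personality_group = score_profile(panelist.answers)
```

Once enough client surveys have gone by, every active panelist has a completed profile, and personality group becomes just another profiling variable you can target and fill quotas against.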

So, which company does this? Which company targets people based on personality characteristics? Which company fills quotas based on personality? Actually, I don’t know. I’ve never heard of one that does. But the first panel company to successfully implement this method will be vastly ahead of every other sample provider. I’d love to help you do it. It would be really fun. 🙂

Shopper insights for foresights #IIeX 

Live note taking at #IIeX in Atlanta. Any errors or bad jokes are my own.

I didn’t do anything wrong: The inventor’s dilemma by Rick West

  • In 1989, lots of people had Nokias, which were awesome phones at the time, bricks that never broke. In 2008, smartphones started to enter the market. Why did Nokia go from 55% share to 3%? They did nothing wrong, so how did they lose?
  • We don’t want to be sitting here five years from now screaming that we’re relevant
  • But I invented this and we coined this phrase!
  • Today, no one swipes their credit card on a physical charge machine. We swipe it on a Square. No bank certified that charge machine. Inventors of the charge machine are now out of business.
  • Five years from now, you will not be doing the same business you’re doing now without major change

Completing the consumer journey with purchase analytics by Jared Schrieber and Bridget Gilbert

  • How do people purchase alcohol for attending an event
  • Data collected from a purchase panel, people take photos of every receipt they get from every purchase everywhere
  • Groups “The Socialite” and “The Rebel” – rebel spends 20% more
  • Trigger, ready to buy, and buy – three stages of the purchase
  • Journey for millennials is straightforward – invited to some event, think about the occasion, speak to someone, mental budget, added to list, talk to friends, check the fridge section, check for sales, compare prices, buy [darn it, I tried to avoid millennial talks!]
  • Millennials are always talking to someone at some point in the journey
  • Key differentiator with rebels is they don’t speak to people, they have ghost influencers, more likely to say they bought someone else’s favorite type of alcohol, they are thinking about friends or family or whoever will be attending the event [or is this simply self justification of a larger purchase – “it’s not for me”]
  • Socialite – liquor store, express lane, after 5pm, shop in pairs, has a baby, lower income
  • Rebels – grocery store, stock up trip, before 5pm, shops alone, has a pet, higher income 

Brands and American mythology: Narrative identity, brand identity, and the construction of the American self by Jim White

  • We are all tellers of tales, give our lives meaning and coherence 
  • We don’t construct this identity in a vacuum, it’s within our culture, the mythology of our culture, we try to align our lives with the myths we’re familiar with
  • We edit and reedit our identities
  • Brand strategists need to spend more time listening to consumer stories
  • We rarely step back and listen to customers talk about themselves
  • Six languages of redemption – atonement, emancipation, upward mobility, recovery, enlightenment, development
  • We use brands to tell ourselves stories about who we are, to try and give ourselves some reality
  • Brands can be markers in our lives, can tap into that notion of our lives
  • Understand how personal myths draw from cultural myths
  • Ask people to tell stories about themselves not about your brand
  • Find the tensions they need to resolve, can my brand help smooth those contradictions, actualize the story they want to tell

Reimagining the traditional consumer panel by Bijal Shah

  • She’s from a promotions company and they have millions of purchase records in their database, but they are not a data company
  • Rely on panels but there is a severe lack of scale, not enough information about the entire population
  • We try multiple data sources but often can’t link sources
  • Partner with a DMP like Krux, Lotame, or Adobe to make your data actionable
  • Find unique data source to enhance your data assets

Non-Probability Sampling and Online Panels: They’re all grown up now

Written by
Annie Pettit, Canadian Chair of ISO TC225
Debrah Harding, UK Chair
Elissa Molloy, Australian Chair

In the seven years since the creation of the quality standard ISO 26362, the use of online panels for market, opinion and social research has experienced massive growth and evolution. The standard was extremely useful in helping both clients and vendors explain and understand the technical aspects of what is now a ‘traditional’ online panel. And while online panels are now default sample sources for many researchers, new options that must also be considered have been developed since then.

In the online world, we have seen the introduction of panels that use not ‘traditional’ email invitations but rather options such as pop-up intercepts, or requiring people to visit a specific website and select from available research opportunities, or offering opportunities from pre-roll webpages. We now have to consider whether automated inventory and survey routing is appropriate for our needs. And of course, we now have the option to engage panel and sample brokers who will find sample providers for us.

The great success of online sample led to the decline of offline sample in richer areas of the world. But don’t let that fool you. There are still large communities of people around the world where limited access to online services or financial resources means that advanced online surveys are simply not feasible. Offline panels are still very necessary and important in many communities and for many types of research.

And, what may seem surprising to some is that, now, in both offline and online environments, we must consider whether the sample or panel has probability or nonprobability characteristics.

In the time that our sector has greatly advanced researchers’ capabilities, people have also changed how they respond to surveys. Answering surveys is now a normal activity for many people, who participate in one or more panels, in addition to innumerable surveys from ad hoc outreach programs and end-client research studies. Participants are more familiar than ever with techniques for increasing their chances of qualifying for incentives as well as techniques for completing surveys as quickly as possible, sometimes with less than good intentions and sometimes as a reaction to poor quality research tools and services.

It is clear that we have reached a new stage with samples where both offline and online sample have been accepted as valid and reliable techniques, each with a host of new intricate technical requirements.

On March 11 and 12, representatives from around the world, including Canada, UK, USA, The Netherlands, Australia, Japan, Austria, and more, will gather in London, England. There, we will discuss and debate the advancements our industry has made and how we can incorporate those advancements into the ISO standards. Our goal will be to update the online panel standard to better reflect the current and future state of sampling for market, opinion and social research. Also high on the agenda will be the new draft ISO standard for digital analytics and web analyses, which aims to develop the service requirements for digital research services. These leaders will also bring to light the global differences in research requirements and practice, to help address the wider issue of how the ISO research standards can best serve the research sector well into the future.

When does #AAPOR infuriate me? #MRX 

Let me begin by saying I love AAPOR. I go to many conferences around the world and so can make some fair comparisons regarding the content and style of presentations. While AAPOR presentations are not known for polished delivery, AAPOR is top notch for its focus on methods and science. There is no fluff here. Give me content over decoration any day. I always recommend AAPOR conferences to my scientifically minded research friends. That said…

Today I heard inferences that the difference between probability panels and nonprobability panels is quality. Are you kidding me? Since when does recruitment method translate into poor quality? Different isn’t bad. It’s different. I know first hand just how much work goes into building a quality panel. It ain’t easy to find and continually interest people in your (my) boring and tedious surveys. Fit for purpose is the issue here. Don’t use a data source for point estimates when it’s not suited for point estimates.

And stop asking for response rates with nonprobability panels. High rates are not good and low rates are not bad. High response rates mean every person with a low response rate has been kicked off the panel. It does NOT mean you’re getting better representativity. Instead, ask about their data quality techniques. That’s what truly matters.

I heard today that a new paradigm is coming and AAPOR ought to lead it. Well, sadly, if AAPOR members still think response rates with panels are meaningful, nonprobability panels are worthless, and they’re still doing email subject line tests, oh my, you’re in for a treat when you discover what eye-tracking is. AAPOR leading? Not even close. You’re meandering at the very end of an intensely competitive horse race.

Dear AAPOR, please enter the 21st century. Market researchers have been doing online surveys for twenty years. We finished our online/offline parallel tests ten years ago.  We finished subject line testing ten years ago too. We’ve been doing big data for 50 years.  We’ve been using social media data for 10 years.  I could go on but there’s no need.

Where have you been all these years? Arguing that probability panels are the only valid method? That’s not good enough. Let me know when you’re open to learning from someone outside your bubble. Until then, I’ll be at the front of the horse race.

Evaluating response rates for web surveys #AAPOR #MRX 

prezzie #1: do response rates matter

  • response rates used to be an indicator of data quality, are participation rates meaningful?
  • completion rates were related to higher error [makes sense to me, making everyone answer a survey includes people who don’t want to answer the survey]
  • if you only survey people who answer surveys, your response rates will be high
  • emerging indicator is cumulative response rate for probability panels = recruit rate * profile rate * participation rate [see the worked example after this list]
  • largest drop in rate is immediately after recruitment, by fifth survey the rate really slows down [this is fairly standard, by this point people who know they don’t like participating have quit]
  • by this measurement, the cumulative response rate had dropped to less than 3%, across all the groups the rate was less than 10% [tell me again that a probability panel is representative. maybe if the units were mitochondria not humans who self-determine, hello non-sampling error!]
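
[here’s the worked example promised above – a quick multiplication with illustrative rates of my own, not figures from the presentation:]

```python
# cumulative response rate = recruit rate * profile rate * participation rate
# illustrative rates only, not figures from the presentation
recruit_rate = 0.10        # share of contacted people who join the panel
profile_rate = 0.60        # share of joiners who complete profiling
participation_rate = 0.40  # share of profiled members who take a given survey

print(f"{recruit_rate * profile_rate * participation_rate:.1%}")  # 2.4%
```

[multiply even healthy-looking stage rates together and you’re under 3% in no time]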

prezzie 2: boosting response rates with follow-ups

  • 30% response rate with follow up compared to 10% with no follow up using the aapor response rate
  • follow ups helped a little with hispanic rates
  • helped a lot for cell phone only households
  • helped a lot for lowest and highest income households, adults under 50 years old, high school only education
  • [hey presenters, slides full of text might as well be printed and handed out, and since i’m on this topic, yellow on white does not work, fonts under 25 point don’t work, equations don’t work. use your slides wisely! and please don’t read your slides 😦 ]

prezzie 3: testing email invitations in a nonprobability panel

  • using mobile optimized forms 🙂
  • used short invites sent from census bureau with census logo
  • best subject line as chosen by responders was – help us make the us census better, answer our survey
  • but real data showed this was worse than the others tested
  • best message was the message focusing on confidential, and possibly even better if you specify 10 minutes

prezzie 4: does asking for an email address predict future participation

  • response rates were 2 to 3 times higher for people who gave an email address
  • but it’s not exactly the email as an indicator, it’s people open to participating in further research
  • no effects by gender or ethnicity, graduate degree people are less likely to provide their email address

prezzie 5: predictors of completion rates

  • first selected only studies with completion rates over 60% [don’t know why you would do this, the worst surveys are important indicators]
  • completion rates are higher if you start with a simple multiple choice, lower if you start with an open end
  • introductory text and images don’t help completion rates
  • completion rates decrease as number of questions increase
  • higher completion rates if you put it all on one page instead of going page page page page
  • a title increases the rate, and numbering the questions increases the rate for shorter surveys
  • open ends are the worst offenders for completion rates, question length and answer length are next worst [so stop making people read! you know they aren’t actually reading it anyways]
  • respondents don’t want to use their keyboards
  • avoid blocks of text

[my personal opinion… response rates of online panels have no meaning. every panel creates the response rate that is suited to their needs. and they adjust these rates depending on the amount and type of work coming in. response rates can be increased by only sending surveys to guaranteed responders or lowered by sending surveys to people who rarely respond. and by adjusting incentives and recruitment strategies you can also raise or lower the rates. instead, focus all your inquisitions on data quality. and make sure the surveys YOU launch are good quality. and i don’t mean they meet your needs. i mean good quality, easy to read, engaging surveys.]

by the way, this was a really popular session!

Concerning quality in opt-in panels  #AAPOR #MRX 

6 papers moderated by Martin Barron, NORC

prezzie 1: evaluating quality control questions, by Keith Phillips

  • people become disengaged for a moment but not throughout an entire survey, true or false? – these people are falsely accused [agree so much!]
  • if most people fail a data quality question, it’s a bad question
  • use a long paragraph and then state at the end please answer with none of the above to this engagement question – use a question that everyone can answer –> is there harm in removing these people?
  • no matter how the dataset is cleaned, the answers remain the same – these people don’t hurt data quality, likely because their errors happen randomly
  • people who fail many data quality questions are the problem, which questions are most effective?
  • most effective questions were low incidence check, open ends, speeding

prezzie 2: key factor of opinion poll quality

  • errors in political polling have doubled over the last ten years in canada
  • telephone coverage has decreased to 67% when it used to be 95%
  • online panel is highly advantageous for operational reasons but it has high coverage error and it depends on demographic characteristics
  • online generated higher item selection than IVR/telephone

prezzie 3: new technology for global population insights

  • random domain intercept technology – samples people who land on 404 pages, reaches non-panel people
  • similar to random digit dialing
  • allows access to many countries around the world
  • skews male, skews younger, but that is the nature of the internet
  • response rates in the USA are 6% compared to up to 29% elsewhere [wait until we train them with our bad surveys. the rates will come down!]
  • 30% mobile in the USA but this is completely different around the world
  • a large majority of people have never or rarely taken surveys, very different from a panel

prezzie 5: surveys based on incomplete sampling

  • first mention of total survey error [it’s a splendid thing isn’t it!]
  • nonprobability samples are more likely to be early adopters [no surprise, people who want to get in with new tech want to get in with other things too]
  • demographic weighting is insufficient
  • how else are nonprobability samples different – more social engagement, higher self importance, more shopping behaviours, happier in life, feel like part of the community, more internet usage
  • can use a subset of questions to help reduce bias – 60 measures reduced to number of surveys per month, hours on internet, trying new products first, time spent watching TV, using coupons, number of times moved in last 5 years [see the calibration sketch after this list]
  • calibrated research results matched census data well
  • probability sampling is always preferred but we can compensate greatly
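
[for anyone unfamiliar with ‘calibration’, here’s a minimal raking (iterative proportional fitting) sketch; the variables and population targets are invented stand-ins for measures like the ones above, and the targets must cover every category that appears in the sample]

```python
# Minimal raking sketch: rescale weights until the weighted sample
# matches the population distribution of each calibration variable.

def rake(sample, targets, n_iter=20):
    """sample: list of dicts of respondent attributes.
    targets: {variable: {category: population proportion}}.
    Returns one weight per respondent."""
    weights = [1.0] * len(sample)
    for _ in range(n_iter):
        for var, target in targets.items():
            totals = {c: 0.0 for c in target}
            for w, resp in zip(weights, sample):
                totals[resp[var]] += w        # weighted size of each category
            total_w = sum(weights)
            for i, resp in enumerate(sample):
                c = resp[var]
                # rescale so this category's weighted share hits the target
                weights[i] *= target[c] * total_w / totals[c]
    return weights

sample = [{"heavy_internet": "yes", "early_adopter": "yes"},
          {"heavy_internet": "yes", "early_adopter": "no"},
          {"heavy_internet": "no",  "early_adopter": "no"}]
targets = {"heavy_internet": {"yes": 0.4, "no": 0.6},
           "early_adopter":  {"yes": 0.2, "no": 0.8}}
print([round(w, 2) for w in rake(sample, targets)])
```

[each pass matches one variable’s distribution; cycling through all the variables converges toward matching every target at once]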

prezzie 6: evaluating questionnaire biases across online sample providers

  • calculated the absolute difference possible when completely rewriting a survey in every possible way – same topic but different orders, words, answer options, answer order, imagery, not using a don’t know
  • for example, do you like turtles vs do you like cool turtles
  • probability panel did the best, crowd sourced was second best, opt in panel and river and app clustered together at the worst
  • conclusions – more research is needed [shocker!]

Combining a probability based telephone sample with an opt-in web panel by Randal ZuWallack and James Dayton #CASRO #MRX

Live blogging from Nashville. Any errors or bad jokes are my own.

– National Alcohol Survey in the US, for adults 18 plus [because children don’t drink alcohol]
– even people who do not drink end up taking a 34 minute survey compared to 48 minutes for someone who does drink. this is far too long
– only at 18 minutes are people determined to be drinkers or abstainers. [wow, worst screen-out position EVER]
– why data fusion? not everyone is online [please, not everyone is on a panel either. and what about refusals? this fascination with probability panels is often silly]
– RDD measures population percents
– web measures depth of information conditional on who is who
– they matched an online and RDD sample using overlapping variables [see the toy matching sketch after these notes]
– problem is matching can create strange ‘people’ who don’t describe any real person. however, in aggregate, the distributions work out; it isn’t meant to be right on an individual level
– “The awesome thing about having a 45 minute survey”…is the statistical analyses you can do with it [made me laugh. there IS an awesome thing? 🙂 ]
– [SAS user 🙂 Have I told you lately….. that I love SAS]
– There were small differences in frequencies between the RDD and web surveys for both wine and beer. averages are very close but significantly different [enter conversation – when does significantly different mean meaningfully different]
– heavy drinking is much much greater on web surveys
– is there social desirability, recall bias 🙂
– not everything lines up perfectly RDD vs web, general trends are the same but point estimates are different
– so how do you know which set of data is true or better?
– regardless, web does not reproduce RDD estimates
– problem now is which data is correct, need multiple samples from the same panel to test
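
[for the curious, here’s a toy sketch of the matching step: each RDD respondent borrows the detailed web answers of their nearest neighbour on the overlapping variables. the data and variable names are invented, and a real fusion would match on many more variables with proper scaling]

```python
# Toy statistical matching ('data fusion'): donate detailed web-survey
# answers to RDD respondents via nearest neighbour on overlap variables.

def nearest_match(rdd_rec, web_sample, overlap_vars):
    """Return the web respondent closest to rdd_rec on overlap_vars."""
    def distance(web_rec):
        return sum((rdd_rec[v] - web_rec[v]) ** 2 for v in overlap_vars)
    return min(web_sample, key=distance)

rdd = [{"age": 34, "drinks_per_week": 2},
       {"age": 61, "drinks_per_week": 0}]
web = [{"age": 30, "drinks_per_week": 3, "beer_brand": "A"},
       {"age": 65, "drinks_per_week": 0, "beer_brand": "none"}]

for rec in rdd:
    donor = nearest_match(rec, web, ["age", "drinks_per_week"])
    rec["beer_brand"] = donor["beer_brand"]  # donated web-survey detail
print(rdd)
```

[this is exactly why strange ‘people’ can appear at the individual level while the aggregate distributions still work out]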

Do panel companies bother to manage their panels? #AAPOR

One of yesterday’s #AAPOR sessions focused on data quality of online panels. One of the speakers posited that maybe panels don’t know or don’t care about their management. This could not be further from the truth.

I’ve been on the management team for several national and global panels, and have also worked with a number of panel managers from competitive panel companies.

The amount of care and expertise that these people put into managing their panels is astonishing. On a daily basis, these folks are analyzing and trying to figure out how to respond to things like
– tenure: how long have people been on the panel as of today, which demographics have been there shorter and longer
– response rates: what are the newest rates by survey, by demographics, by survey category, by client
– supplier health: depending on where a panelist was sourced from, do any suppliers give better or worse data or panelists who stay longer on the panel
– data quality: which people are providing better or worse data, by source, by category, by everything
– invites: which demos are getting more or fewer invites, who is being ignored or bothered

And of course, all of these data, and many more, factor into panel rules dictating how many invites individuals are allowed to receive, whether that rule needs to be changed temporarily or permanently, whether it needs to change by demographic or by source.
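
As one hypothetical illustration (every threshold and field name here is invented), an invite cap rule might look something like this sketch:

```python
# Hypothetical invite-cap rule: limit invitations per 30-day window,
# with different caps by recruitment source. All thresholds are invented.

from datetime import date, timedelta

INVITE_CAPS = {"direct_recruit": 8, "affiliate": 4}  # invites per 30 days

def may_invite(panelist, today):
    window_start = today - timedelta(days=30)
    recent = [d for d in panelist["invite_dates"] if d >= window_start]
    cap = INVITE_CAPS.get(panelist["source"], 4)  # default cap otherwise
    return len(recent) < cap

panelist = {"source": "affiliate",
            "invite_dates": [date(2015, 5, 1), date(2015, 5, 20)]}
print(may_invite(panelist, today=date(2015, 5, 25)))  # True: 2 of 4 used
```

In practice these caps shift with the amount and type of work coming in, which is exactly the daily tuning described above.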

You know what, perhaps it would just be easier to read the ESOMAR 28 Questions document that most panel companies have answered. The moral of the story is that just because you aren’t familiar with what the companies are doing doesn’t mean they aren’t doing it.


Respondent Identity Verification with Non-Panel, Real-time Samples: Is There Cause for Concern? by Nancy Brigham and James Karr #CASRO #MRX

Live blogging from the CASRO Digital conference in San Antonio, Texas. Any errors or bad jokes are my own.

As the research industry evolves toward non-panel sample sourcing and real-time sampling, questions have arisen about the quality of these respondents, especially in the area of respondent identity verification. This research addresses two key questions: Are fraudulent identities a concern with non-panel samples, and what are the research implications of assessing identity validation with these samples? This work examines identity verification and survey outcomes among five different levels of Personally Identifiable Information (PII) collection. In addition to the presenters, this paper was authored by Jason Fuller (Ipsos Interactive Services, Ipsos).

  • Nancy Brigham, Vice President, IIS Research-on-Research, Ipsos
  • James Karr, Director & Head of Analytics, Ipsos Interactive Services

  • Do people whose validity cannot be confirmed provide bad data? Should we be concerned?
  •  What do we know about non-panel people? Maybe they don’t want to give PII to just take one survey. Will they abandon surveys if we ask for PII?  [I don’t see answering “none” as a garbage question. It’s a question of trust and people realizing you do NOT need my name to ask me my opinions.]
  • Is it viable to assess identity validation with non-panel sources?
  • In the study, PII was asked at the beginning of the survey [would be great to test end of survey after people have invested all that time in their responses]
  • Five conditions asking for combination of name, email, address
  • Used a third party validator to check PII
  • 25% of people abandoned at this point
  • Only 4 out of 2640 respondents gave garbage information at this point, 12 tried to bypass without filling it out and then abandoned. It’s so few people that this is hard to trust. [Hey people, let’s replicate]
  • Name and address caused 6% of abandonment, name and email caused only 3% abandonment
  • Did people get mad that we asked this? can we see anger in concept test? no.
  • didn’t lead to poor quality survey behaviours – used a 13 minute survey
  • when given a choice, people prefer to give less information – most people will choose to give name and email; few people will give all information
  • Simply collecting PII didn’t appear to influence other aspects
  • Did their non-panel source give lower quality data? no. 82% passed the validation test across all conditions. Those who provide the most comprehensive data validate better but that’s likely because it’s more possible to validate them.
  • Real-time sample gives just as good data quality, same pass rates, no data differences
  • Conclude the screening question is necessary, a heads up that the PII question will be coming
  • Younger ages abandoned more across all test conditions
  • This study only looked at the general population, not hard to reach groups like hispanics, or different modes like mobile browsers, or in-app respondents


DIY Panel: Garland, Ribeiro, Smith, Terhanian, Thomas #CASRO #MRX

… Live blogging from beautiful San Francisco…

Do It Yourself (DIY) Research Panel Discussion

  •  John Bremer, session moderator
  • Bob Fawson, session moderator
  • Phillip Garland, Vice President, Methodology, SurveyMonkey
  • Efrain Ribeiro, Chief Research Officer, Lightspeed Research
  • Ryan Smith, Co-founder and CEO, Qualtrics
  • George Terhanian, N. A. President and Group Chief Strategy Officer, Toluna
  • Randall Thomas, Vice President – Online Research Methods, GfK

Speaker thoughts

  • DIY is a source for innovation
  • Think about DIY checkout lines vs DIY research. There are risks with it but the experts are still available if need be
  • Data is data but DIY doesn’t solve the insight and analysis. You still need the researcher for that.
  • You can get numbers out of a machine but do the numbers have any meaning
  • Concern with DIY is are people writing good surveys, are they using samples properly, are they weighting appropriately, are they analyzing data appropriately. DIY and Do-It-Alone are different. You always need the expertise around you.
  • Current clients of DIY include people who didn’t have access to research before as well as many of the major research companies
  • DIY can often get work done much more quickly
  • When to use DIY – when you don’t want to do the work yourself, when you need results extremely quickly, when it’s not a major/serious issue, when you don’t have the staff for it, good for an organization, good when you have standardized tools
  • DIY is simply part of the assembly line
  • The researcher of tomorrow will be comfortable with DIY tools
  • When is DIY NOT appropriate – [folks didn’t answer this audience question 🙂  how about DIY shouldn’t be used for census rep weekly/monthly tracking over 12 months. it’s far too complicated to just throw in a tool.]