Tag Archives: response rate

Evaluating response rates for web surveys #AAPOR #MRX 

prezzie #1: do response rates matter

  • response rates used to be an indicator of data quality; are participation rates meaningful?
  • completion rates were related to higher error [makes sense to me, making everyone answer a survey includes people who don’t want to answer the survey]
  • if you only survey people who answer surveys, your response rates will be high
  • emerging indicator is cumulative response rate for probability panels = recruit rate * profile rate * participation rate
  • largest drop in rate is immediately after recruitment, by fifth survey the rate really slows down [this is fairly standard, by this point people who know they don’t like participating have quit]
  • by this measurement, the cumulative response rate had dropped to less than 3% (a quick sketch of this arithmetic follows the list); across all the groups, the rate was less than 10% [tell me again that a probability panel is representative. maybe if the units were mitochondria not humans who self determine, hello non-sampling error!]
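
A minimal sketch of that cumulative rate arithmetic, in Python. The three stage rates below are hypothetical illustrations, not the presenter's figures; the point is how quickly the product shrinks.

    # cumulative response rate for a probability panel, as described above:
    # the product of the stage-level rates (hypothetical values for illustration)
    def cumulative_response_rate(recruit_rate, profile_rate, participation_rate):
        return recruit_rate * profile_rate * participation_rate

    crr = cumulative_response_rate(recruit_rate=0.15,
                                   profile_rate=0.60,
                                   participation_rate=0.30)
    print(f"{crr:.1%}")  # 2.7% -- three moderate stage rates already land under 3%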

prezzie 2: boosting response rates with follow-ups

  • 30% response rate with follow-up compared to 10% with no follow-up, using the AAPOR response rate (a sketch of the formula follows this list)
  • follow ups helped a little with hispanic rates
  • helped a lot for cell phone only households
  • helped a lot for lowest- and highest-income households, adults under 50 years old, high school only education
  • [hey presenters, slides full of text might as well be printed and handed out, and since i’m on this topic, yellow on white does not work, fonts under 25 point don’t work, equations don’t work. use your slides wisely! and please don’t read your slides 😦 ]
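
For reference, "the AAPOR response rate" in the first bullet refers to AAPOR's standard definitions. The sketch below is my reading of RR1, the most conservative formula (completes only, unknown-eligibility cases kept in the denominator); the case counts are hypothetical, chosen only to land on the 10% and 30% figures above.

    # AAPOR RR1 (a sketch): complete interviews divided by completes, partials,
    # non-respondents, and cases of unknown eligibility (hypothetical counts)
    def aapor_rr1(completes, partials, refusals, non_contacts, other, unknown_eligibility):
        denominator = (completes + partials) + (refusals + non_contacts + other) + unknown_eligibility
        return completes / denominator

    print(f"{aapor_rr1(100, 10, 150, 700, 10, 30):.0%}")  # 10% -- no follow-up
    print(f"{aapor_rr1(300, 10, 150, 500, 10, 30):.0%}")  # 30% -- follow-ups convert some non-contacts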

prezzie 3: testing email invitations in a nonprobability panel

  • using mobile optimized forms 🙂
  • used short invites sent from the Census Bureau with the Census logo
  • best subject line as chosen by responders was "help us make the US census better, answer our survey"
  • but real data showed this was worse than the others tested
  • best message was the one focusing on confidentiality, and possibly even better if you also specify it takes 10 minutes

prezzie 4: does asking for an email address predict future participation

  • response rates were 2 to 3 times higher for people who gave an email address
  • but it’s not exactly the email address that’s the indicator; it’s people being open to participating in further research
  • no effects by gender or ethnicity; people with graduate degrees are less likely to provide their email address

prezzie 5: predictors of completion rates

  • first selected only studies with completion rates over 60% [don’t know why you would do this, the worst surveys are important indicators]
  • completion rates are higher if you start with a simple multiple choice, lower if you start with an open end
  • introductory text and images don’t help completion rates
  • completion rates decrease as number of questions increase
  • higher completion rates if you put it all on one page instead of going page page page page
  • a title increases the rate, and numbering the questions increases the rate for shorter surveys
  • open ends are the worst offenders for completion rates; question length and answer length are next worst [so stop making people read! you know they aren’t actually reading it anyways]
  • respondents don’t want to use their keyboards
  • avoid blocks of text

[my personal opinion… response rates of online panels have no meaning. every panel creates the response rate that is suited to their needs. and they adjust these rates depending on the amount and type of work coming in. response rates can be increased by only sending surveys to guaranteed responders or lowered by sending surveys to people who rarely respond. and by adjusting incentives and recruitment strategies you can also raise or lower the rates. instead, focus all your inquisitions on data quality. and make sure the surveys YOU launch are good quality. and i don’t mean they meet your needs. i mean good quality, easy to read, engaging surveys.]

by the way, this was a really popular session!

  


Advances in designing questions in brief #AAPOR #MRX 

Concurrent Session A, Moderator Carl Ramirez, US Government Accountability Office, 9 papers!

prezzie 1: using item response theory modeling

  • useful for generating shorter question lists [assuming you are writing scales, plan to reuse scales many times, and don’t require data for every question you’ve written]
  • [know what i love about aapor? EVERYONE can present regardless of presentation skill. content comes first. and on a tangent, I’ve already eaten all the candies i found in the dish]

prezzie 2: measurements of adiposity

  • prevalence rate of obesity is 36% in the USA, varies by state but every state is at least 20% [this is embarrassing in a world where millions of people starve to death]
  • we most often use self-reported height and weight to calculate BMI (this is how the CDC measures it nationally), but these self-reports are not reliable; a quick sketch of the calculation follows this list
  • the correlation between BMI and body fat is less than 40%; we’re creating a proxy with an unreliable measure
  • underwater weighing is a better measure but there are obviously many drawbacks to that
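
Since the proxy in question is just arithmetic on two self-reported numbers, here is a minimal sketch of the BMI calculation; the respondent figures are hypothetical and only illustrate how small reporting errors move the estimate.

    # BMI = weight (kg) / height (m)^2, built entirely from two self-reported values
    def bmi(weight_kg, height_m):
        return weight_kg / height_m ** 2

    reported = bmi(weight_kg=77.0, height_m=1.80)  # what the respondent says
    measured = bmi(weight_kg=84.0, height_m=1.76)  # a plausible measured value

    print(f"reported {reported:.1f} vs measured {measured:.1f}")
    # 23.8 vs 27.1 -- under-reporting weight and over-reporting height keeps the
    # self-reported value below the conventional overweight cutoff of 25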

prezzie 3: asking sensitive GLBT questions

  • respondents categorize things differently than researchers, instructions do affect answers, but does placement of those instructions matter? [hm, never really thought of that before]
  • tested long instructions before vs after the question
  • examined means and nonresponse
  • data collection incomplete so can’t report results

prezzie 4: response order effects related to global warming

  • most americans believe climate change is real but one third do not
  • primacy and recency effects can affect results, primacy more often in self-administered, recency more often in interviewer assisted
  • reverse ordered five questions for two groups, 5 attitudes were arranged on a scale from belief to disbelief
  • more people believed in global warming when the belief option was presented first; the effect size was small, around 5%
  • it affected the first and last items, not the middle opinions
  • less educated people were more affected by response orders, also people who weren’t interested in the topic were more affected

prezzie 5: varying administration of sensitive questions to reduce nonresponse 

  • higher reported rates of LGB identification are assumed to be more accurate
  • 18 minute survey on work and job benefits
  • tried assigning numbers versus words to the answers ( are you 1: gay…. vs are you gay) [interesting idea!]
  • [LOVE sample sizes of over 2300]
  • non response differences were significant but the effect size was just 1% or less
  • it did show higher rates of LGB; they do recommend trying this in a telephone survey

prezzie 6: questionnaire length and response rates

  • CATI used a $10 incentive, web was $1, and mail was $1 to $4 [confound number 1]
  • short survey was 30% shorter but still well over 200 questions
  • no significant difference in response rate; completion rate was 3% better for the short version
  • no effect on web, significant effect on mail

prezzie 8: follow up short surveys to increase response rates

  • based on a taxpayer burden survey n=20000
  • 6 stage invite and reminder process, could receive up to 3 survey packages, generates 40% response rate
  • short form is 4 pages about time to complete and money to complete, takes 10 minutes to complete
  • many of the questions are simply priming questions so that people answer the time and money questions more accurately
  • at stage 6, divided into long and short form
  • there was no significant difference in response rate overall
  • no differences by difficulty or by method of filing 
  • maybe people didn’t realize the envelope had a shorter survey; they may have chucked it without knowing
  • will try a different envelope next time, as well as saying overtly that it’s a shorter survey

prezzie 9: unbalanced vs balanced scales to reduce measurement error

  • attempted census of Peace Corps volunteers, 92% response rate
  • before it was 0 to 5, after it was -2 to +2
  • assume most variation will be in the middle as very happy and very unhappy people will go to the extremes anyways
  • only 33 people in the test, 206 items
  • endpoint results were consistent but the means were very slightly different [needs numerous replications to get over the sample size and margin of error issue]

Down with response rates!


In Search of Horribly Low Response Rates #MRX

Ask anyone what the response rate to their last research project was and they’ll hold their head in shame if the answer is a number under 10%. As researchers, we work really hard to generate response rates that are as high as we can possibly get them. In the competitive world of market research, the survey panel or focus group recruiter with the highest response rate just might win the job.

But wait. Why do some sources have higher response rates than others?

  1. Active rules: Sources that only invite people to research if they have completed a research study in the last month have much higher response rates.
  2. Incentives: Sources that provide more valuable incentives have higher response rates.
  3. Recruitment: Sources that recruit participants from research sources have higher response rates (e.g., “Thank you for answering our Purchase Satisfaction Survey. Would you like to join our panel?”).

[image: Skinner box]

In each of these three situations, the research panels have essentially pre-selected people based on their propensity to participate in research. And, as we all know, the propensity to participate in research is not a randomly distributed characteristic. Certain personality types are just more or less likely to want to participate in research. And this brings me to my point.

Shouldn’t we actually be seeking out the lowest response rate possible? Instead of focusing on gathering opinions from people who are MOST likely to want incentives or who always participate in research, shouldn’t we keep the pipelines open to accept opinions from research keeners as well as those who hardly ever want to participate in research and who couldn’t care less about incentives? Wouldn’t a really low response rate reflect a research participant pool that is awash with both keeners and frequent abstainers, a pool that is more reflective of the real population?

Perhaps we should actually be seeking out low response rates. Perhaps we shouldn’t judge sample providers simply on response rates.  Perhaps we should consider that the quality of a research sample goes far beyond response rates. What a strange thought.

Arriving Yesterday: A New Era of Research

We all know this. Response rates continue to decline despite all efforts to improve them. We’re working on taking advantage of rich media questions that make the survey taking experience more fun. We’re working on cell phone surveys so that some surveys can be moved into a different, possibly more engaging, format. We’re developing communities and social networks to keep survey responders happy.

This is all good stuff and it’s important, but will it be enough? Will this keep our industry afloat?

It seems to me that social media has ushered in a new era of research. It didn’t start with researchers and we didn’t ask for it. But it’s here.

This new world has lots of good stuff in it. There is no such thing as declining response rates. There are no order effects, no question biases, no leading statements, no interviewer effects. There aren’t even any incentive costs, though let’s not count that out just yet.

What it does have is millions upon millions of unprompted, genuine opinions about the most minuscule and the most topical issues. It has opinions from people who’ve never answered an online survey before, and from people who gave up answering online surveys ages ago. It has opinions from chat leaders and early adopters, influencers and thought leaders. It has breaking issues, ongoing issues, and issues we never even knew were issues.

Of course, it means that we’ve got a ton of new issues to work through, but for someone like me, who loves the challenge of research on research, this is just more good news.

It sounds to me like research using social media has a ton of advantages. So you might as well come along for the ride because the train is moving forward whether you like it or not.


Quick tips for writing a quality survey

Designing a quality survey seems simple but for anyone who’s tried, the questions you have increase exponentially with every attempt. Here are just a few quick tips.

1) Start with a precise, well thought out purpose. If a survey question does not specifically answer a purpose, cut it.
2) Write questions that can be measured quantitatively. It will save you time, money, and peace of mind if those numbers map directly to specific company goals.
3) Keep your survey short. This will lead to higher response rates, less self-opting out, and greater generalizability. When I say short, I mean 15 minutes. Absolutely no more.
4) Keep your questions short. This will lead to higher reading comprehension, greater accuracy, and greater data quality. Ditto for short answers.
5) Use real words. Forget consumption and purchase intent. Talk about eating and buying. We’re not all marketers and we don’t all get those fancy words.
6) Use negatives cautiously and sparingly. The human brain has a unique fondness for NOT seeing this word. Avoid tempting fate. If you must use a negative, try to use a capitalized NOT.
7) Get yourself a survey question design book and learn the art. You might as well do it right and get the right data. You will be amazed at all the other strange things the brain does when it digests a survey.

Good luck!

[image: ‘WRITE’ by karindalziel via Flickr, licensed under a Creative Commons Attribution licence]


My Theory on Declining Response Rates



It’s quite simple.

MANY research companies — You can’t answer everyone’s surveys
Increasing survey lengths — Who has a spare hour?
Boring surveys — grid after grid after grid
Old fashioned surveys — Hello… it’s 2009. Have you seen the wicked cool options that are available?
Competition — Facebook or survey, Twitter or survey, YouTube or survey, FAMILY or survey
Sensitive surveys — Why is it now ok to ask about the most intimate details of a person’s life?
Marketing speak — C’mon now, my gum chewing needs? Get serious.

Bad + bad + bad + bad = more and more and more annoyances = more and more and more people just giving up
What do we do?
We spend more money on recruitment. We spend more money, quicker. We spend money gaining the trust of people who will get frustrated and leave. We DON’T spend money maintaining the trust of people who have decided they do want to trust us. I think there is a cardinal rule of business that it is cheaper to retain good customers than to recruit new ones.

The solution, though simple, is long term. Let’s improve surveys. Shorten them, improve the quality, make them real. The most incredible thing is that we already know how to do this. Over time, people will begin to see the change. They will start to appreciate marketing research surveys again. A new generation of responders will see what we are offering and choose to be a part of it. It always feels good to know that you’ve made an important contribution. It feels even better when that contribution was fun to make.

Greece seems to have turtles at every archeological site; we called them guard turtles. Let’s hire guard turtles at every MR agency and put them in charge of guarding against surveys that don’t promote responder engagement. That would be fun!

Related Articles

  • Will Goodhand: Social Media Research and Digividuals #netgain5 #mrx
  • There is no quality insight. There is only insight.
  • Twitterversity: It’s University in Pajamas! #MRX
  • Quick tips for writing a quality survey
  • Brian Singh: Insights from the Nenshi Campaign #netgain5 #mrx #li
  • Screener keeners or rejection correction?

Survey Design Tip #2: Short and Sweet

[image: Burrowing Owl, a diurnal owl of open country. Image via Wikipedia]

So how was your day today? Did you grab a coffee, sit at your desk, and then not move until lunch? Or did you spend 20 minutes doing a quick check of your email, followed by a 10-minute discussion of a new product, followed by a 2-hour meeting during which you checked your email and twittered? Did you watch TV last night? Did you patiently sit for the entire 60 minutes watching all the commercials, or did you get up at every commercial for a drink, a snack, a peck on the cheek, an email, or a…. um…. pee break? My guess is you chose the second option in both cases.

So why do we expect survey responders to be able to sit through a 45-minute survey? Why do we expect them to do it once a week, every week? What could possibly cause them to be interested in a survey for that long when you don’t even do it in your regular life?

All we’re really doing with these long surveys is giving our precious survey participants reason to reconsider sharing their opinions, reason to let their attention wander, reason to move on to something else. Sure, long surveys give you lots of detailed data, in-depth information, and plenty of opportunity to run fancy schmancy multivariate statistics. But with response rates following such a scary decline, it is high time the survey research industry reconsiders what a long survey is. On that note, tell me what YOU think is too long for a survey.

Related Articles

  • 6 groups of people who need to follow research standards
  • Down with quant! Long live qual!
  • Keep the ESOMAR faith
  • #MRA_FOC #MRX MRA/IMRO Social Media Research Guidelines

Survey Panel Questions – Enough Already!

Survey panels are big right now. If you want to launch a survey to hundreds or thousands of people across the country, chances are you go to a survey panel company. They have pre-identified, permission-based access to people who are ready and willing to take surveys at the drop of a hat. When it comes to panel companies, I’ve had to explain the following two issues so many times I thought I’d just lay them out right here.

1) Your panel size is too small to meet our needs
Then how can one panel company have over 1 million panelists while another company stays in business with only 200k panelists? How does that work?
Well, let’s look at an example.

Company A has 1 million panelists and their response rate is 5%. If this panel was to launch to every single panelist right now, they would get 50,000 completes.
Company B has 200,000 panelists and their response rate is 25%. If they were to launch to everyone, they would also get 50,000 completes.
So basically, these two panels are identical! Different panel sizes, exact same outcome.

Why is Company A so much larger? Well, panels recruit thousands and thousands of people every year. Technically, they could advertise the size of their panel to be anything they wanted. But there’s a little thing called the panel rule that determines the real size of a panel.

A company could let anyone be ‘active’ on their panel as long as they clicked on (and didn’t even finish!) a survey in the last 12 months. This panel will be really big, but since many of their panelists never even finished a survey, their response rates will be pretty low. On the other hand, another company could use a much stricter rule, maybe something like: people are ‘active’ as long as they finished a survey in the last 3 months. In this case, a lot fewer people meet the qualification, but they are all survey completers. This means their response rates are quite a bit higher.

So, it comes down to what’s your panel size AND what’s your response rate. That will tell you the real size of a panel.
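
A minimal sketch of that arithmetic, reusing the Company A and Company B figures from the example above:

    # effective panel size = panel size x response rate
    def expected_completes(panel_size, response_rate):
        return round(panel_size * response_rate)

    company_a = expected_completes(1_000_000, 0.05)  # 50,000 completes
    company_b = expected_completes(200_000, 0.25)    # 50,000 completes
    print(company_a == company_b)  # True -- a 5x headline size gap, identical yield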

2) Your panel doesn’t look anything like what I need
This one usually comes to me as “I need to know the demographic profile of your panel so that I can determine whether you can run my survey.” Well, the reason panel companies have such huge panels is so that they can pick and choose from among their panelists to create the group that you need. Even if their panel is only 40% male, they can easily choose a sample for you that is 50% male or even 80% male.
