Questionnaire Design #AAPOR 


Live note-taking at #AAPOR in Austin, Texas. Any errors or bad jokes are my own.

The effect of respondent commitment and tailored feedback on response quality in an online survey; Kristin Cibelli, U of Michigan

  • People can be unwilling or unable to provide high quality data; will informing them of the importance and asking for commitment help to improve data quality? [I assume this means the survey intent is honourable and the survey itself is well written, not always the case]
  • Used administrative records as the gold standard
  • People were told their answers would help with social issues in the community [would similar statements help in CPG, “to help choose a pleasant design for this cereal box”]
  • 95% of people agreed to the commitment statement, 2.5% did not agree but still continued; thus, we could assume that the control group might be very similar in commitment had they been asked
  • Reported income was more accurate for committed respondents, marginally significant
  • Overall item nonresponse was marginally lower for committed respondents; people who did not commit skipped more questions
  • Respondents who did not commit were more likely to straightline (a rough sketch of flagging this appears after this list)
  • Reports of volunteering, a socially desirable behaviour, were possibly lower in the committed group; people confessed volunteering was important for the resume
  • Committed respondents were more likely to consent to reviewing records
  • Commitment led to more responses to the income question and improved their accuracy; committed respondents were more likely to check their records to confirm income
  • Should try asking the control group to commit at the very end of the survey to see who might have committed
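
Not from the talk, but here is a rough sketch of how you might flag straightlining and item skips in your own data, assuming a pandas DataFrame with one column per grid item and blanks for skipped questions. The column names and numbers are made up:

```python
import pandas as pd

GRID_ITEMS = ["q1", "q2", "q3", "q4", "q5"]  # hypothetical rating-grid columns

def straightlined(row: pd.Series) -> bool:
    """True if the respondent gave the identical answer to every grid item."""
    answers = row[GRID_ITEMS].dropna()
    return len(answers) == len(GRID_ITEMS) and answers.nunique() == 1

def skip_rate(row: pd.Series) -> float:
    """Share of grid items the respondent left blank."""
    return row[GRID_ITEMS].isna().mean()

df = pd.DataFrame({
    "committed": [True, True, False, False],
    "q1": [3, 2, 4, None], "q2": [3, 1, 4, 2], "q3": [3, 4, 4, None],
    "q4": [3, 2, 4, 3], "q5": [3, 5, 4, None],
})
df["straightliner"] = df.apply(straightlined, axis=1)
df["skip_rate"] = df.apply(skip_rate, axis=1)

# Compare the committed and not-committed groups, as the study did
print(df.groupby("committed")[["straightliner", "skip_rate"]].mean())
```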

Best practice instrument design and communications evaluation: An examination of the NSCH redesign by William Bryan Higgins, ICF International

  • National and state estimates of child well-being 
  • Why redesign the survey? To shift from landline and cell phone numbers to an address-based household sampling design (kids were answering the survey), to combine two instruments into one, and to provide more timely data
  • Move to self-completion mail or web surveys, with telephone follow-up as necessary
  • Evaluated communications about the survey, household screener, the survey itself
  • Looked at whether people could actually respond to questions and understand all of the questions
  • Noticed they need to highlight who is supposed to answer the survey, e.g., only for households that have children, or even if you do NOT have children. Make requirements bold and high up on the page.
  • The wording assumed people had read or received previous mailings. “Since we last asked you, how many…”
  • Needed to personalize the survey: name the children during the survey so respondents know who is being referred to (see the piping sketch after this list)
  • Wanted to include less legalese

Web survey experiments on fully balanced, minimally balanced, and unbalanced rating scales by Sarah Cho, SurveyMonkey

  • The three wordings: fully balanced (“Is now a good time or a bad time to buy a house?”), minimally balanced (“Is now a good time to buy a house, or not?”), and unbalanced (“Is now a good time to buy a house?”)
  • Literature shows a moderating effect for education
  • The research showed very little difference among the formats; no need to balance the question online (a sketch of this kind of comparison follows this list)
  • Minimal differences by education though lower education does show some differences
  • Conclusion: if you’re online you don’t need to balance your results
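
For the curious, a comparison like this typically comes down to a contingency test of answers across the wording conditions. A minimal sketch with invented counts; nothing below is SurveyMonkey’s data:

```python
from scipy.stats import chi2_contingency

# Rows: fully balanced, minimally balanced, unbalanced wordings
# Columns: respondents choosing "good time" vs. "bad time / not"
counts = [
    [412, 188],  # "good time or a bad time to buy a house?"
    [405, 195],  # "good time to buy a house, or not?"
    [430, 170],  # "good time to buy a house?"
]
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, p={p:.3f}")  # a large p would echo the "little difference" finding
```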

How much can we ask? Assessing the effect of questionnaire length on survey quality by Rebecca Medway, American Institutes for Research

  • Adult education and training survey, paper version
  • Wanted to redesign the survey, but the redesign was really long
  • The 2 versions were 20 pages (98 questions) and 28 pages (138 questions)
  • Response rate slightly higher for shorter questionnaire
  • No significant differences in demographics [but I would assume there is some kind of psychographic difference]
  • Slightly more nonresponse in the longer questionnaire
  • The longer survey had more skips over the open-ended questions
  • Skip errors showed no differences between the long and short surveys
  • Generally, the longer version had a lower response rate but no extra problems compared with the short one (a quick test of such a rate gap is sketched after this list)
  • [they should have tested four short surveys versus the one long survey; 98 questions is just as long as 138 in my mind]
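
If you wanted to check whether a response-rate gap like that is real, a two-proportion z-test is the usual move. A minimal sketch with invented numbers, not the study’s actual figures:

```python
from statsmodels.stats.proportion import proportions_ztest

returned = [620, 575]   # hypothetical completes: short (98 q) vs. long (138 q) version
mailed = [2000, 2000]   # hypothetical mailout per version
stat, p = proportions_ztest(returned, mailed)
print(f"z={stat:.2f}, p={p:.3f}")
```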

2 responses

  1. Thanks for the notes, very useful.

    A couple of thoughts.

    On the Sarah Cho presentation:

    Really? Based on one specific and simple question (“Good time to buy a house”) there’s a claim that “if you’re online you don’t need to balance your results”? That assumes that all questions are the same, doesn’t it? Rather a massive claim: from one simple question to all possible questions.

    On the Rebecca Medway presentation:

    Agree with your final thought, and I’d take it further. Why not try (say) 4 questions per survey and do 25 of them? Or 1 question per survey and do 98 of them? Ultra-short, ultra-light surveys that turn around really quickly and focus us on what we really need to know.

    1. Glad to hear it. Thanks for sharing your thoughts in return.
