Rights Of Respondents #AAPOR


Live note taking at #AAPOR in Austin, Texas. Any errors or bad jokes are my own.


Moderator: Ronald Langley, University of Kentucky

Examining the Use of Privacy Language: Privacy from the Respondent’s View; Kay Ricci, Nielsen; Lauren A. Walton, Nielsen; Ally Glerum, Nielsen; Robin Gentry, Nielsen

  • Respondents have concerns about the safety and security of their data; they want to know how data is collected and stored. We hear about breaches all the time now
  • In 2015, the FCC issued TCPA rules re automatic telephone dialling systems: they can’t be used to call cell phones without consent
  • The existing page of legalese was terrifying and could affect key metrics
  • 3 steps – in-depth interviews, implementing the new language in the TV diary screener, analyzing key metrics to understand the impact of the new language
  • 30 English interviews, 16 Spanish interviews
  • Did people notice the language, read it, skim it, did they care? Did they understand the terms? Was the language easy or difficult?
  • Tested several versions, tested a longer version with softer language
  • Only one person understood what an autodialler was; people didn’t realize it was a live interviewer; people didn’t care how their number was dialled if they were going to talk to a live human anyway
  • 2/3 didn’t like the concept and thought they’d be phoned constantly; 1/3 didn’t mind because it’s easier to hang up on a machine
  • People liked that we weren’t selling or marketing products, but many don’t see the difference
  • Many people don’t know who Nielsen is
  • People liked being reminded that it was voluntary, extra length was fine for this
  • The ‘after’ version was longer, with more broken-up sentences
  • Test group had a very slightly lower return rate, and a lower mail rate
  • Higher return rates among respondents 50-plus and Hispanic respondents
  • Contact number provision was much lower, dropping from 71% to 66% – see the sketch after this list
  • It’s essential to pretest so you know the impact
  • [simple language is always better even if it takes more space]
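As a back-of-envelope check on that 71% → 66% drop in contact number provision, here’s a minimal sketch of a two-proportion z-test in Python. The group sizes are hypothetical – the talk didn’t report them – so treat this as a template, not the presenters’ analysis.

```python
# Two-proportion z-test for the drop in contact-number provision
# (71% control vs 66% test). Group sizes below are HYPOTHETICAL;
# the talk did not report them.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

n_control, n_test = 2000, 2000            # assumed group sizes
p_control, p_test = 0.71, 0.66            # rates reported in the talk

counts = np.array([round(p_control * n_control), round(p_test * n_test)])
nobs = np.array([n_control, n_test])

stat, pval = proportions_ztest(counts, nobs)
print(f"z = {stat:.2f}, p = {pval:.4f}")
```

Whether a five-point drop matters depends entirely on those Ns – which is exactly why you pretest on a real sample.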

Allowing Access to Household Internet Traffic: Maximizing Acceptance of Internet Measurement; Megan Sever, Nielsen; Sean Calvert, Nielsen

  • How do we measure what people watch and buy online in their home?
  • How do we access third party data, and then how do we get true demographic information to go with it?
  • 22 semi-structured interviews – a mock recruitment into ‘please share your password’
  • Reactions ranged from an absolute yes – these people believe the data is already being collected anyway
  • Sceptics wanted more information – what are you actually recording? How is my data kept secure?
  • Privacy – security – impact on Internet performance
  • People seemed to think the company would screencap everything they were doing, that it could see their bank account
  • Brought examples of real data that would be collected – what the research company will actually see: essentially lines of code, showing only the URL (not the contents of the page) and start and stop times; at this point participants were no longer concerned. See the example record after this list
  • Gave a detailed description of encryption, storage and confidentiality procedures
  • Explained we’re not marketing or selling, and data is only stored as long as necessary
  • Reputation of the research company builds trust; people more familiar with the company were okay with it
  • Script should describe the purpose of measurement, what will and will not be measured, how it will be measured, data security/privacy/protection policies, impact on Internet performance, and the reputation of the company
  • Provide supplementary information if asked for – examples of data, policies that meet or exceed standards, examples of Internet performance, background and reputation of the company
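For a concrete sense of what “only the URL and start/stop times” looks like, here’s a hypothetical record of the kind the presenters showed – the field names are my invention, not Nielsen’s actual schema:

```python
# Hypothetical traffic-metadata record, illustrating the talk's point:
# the URL and start/stop times are captured, page contents are not.
record = {
    "device_id": "hh-0042",                  # made-up panel identifier
    "url": "https://www.example.com/news",   # URL only, no page body
    "start": "2016-05-14T19:02:11Z",         # session start (UTC)
    "stop":  "2016-05-14T19:05:47Z",         # session stop (UTC)
}
# Deliberately absent: page contents, form fields, bank balances –
# the things participants feared were being recorded.
```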

Informed Consent: What Do Respondents Want to Know Before Survey Participation; Nicole R. Buttermore, GfK Custom Research; Randall K. Thomas, GfK Custom Research; Jordon Peugh, SSRS; Frances M. Barlas, GfK Custom Research; Mansour Fahimi, GfK Custom Research

  • Recall the kerfuffle last year about what information should be told to respondents re the sponsor or person conducting the research
  • Recognized participants should be told enough information to give informed consent – but if we are concerned about bias, we can tell people they won’t be debriefed until afterwards; but companies said sometimes they could NEVER reveal the sponsor, and they’d have to drop out of #AAPOR if this happened
  • We worry about bias and how knowing the sponsors affects the results
  • Sponsor information is a less important feature to respondents
  • Do respondents view taking surveys as risky? What information do respondents want prior to completing surveys?
  • Topic, my time, and my incentive are thought to be most important
  • People were asked about surveys in general, not just this one or just this company
  • 6% thought an online survey could have a negative impact 
  • Most worried about breaches of privacy and confidentiality; less worried that the survey is a waste of time or boring, or might upset them
  • 70% said no risk to mental health, 2% said high risk to mental health
  • 23% said they stopped taking a survey because it made them uncomfortable – it made them think more deeply about life, made them angry, made them feel worse about themselves, made them feel sad, or increased their stress
  • Politics made them angry, bad questions made them angry, biased surveys and too-long surveys made them angry [That’s OUR fault]
  • Same for feeling stressed, but also add in finance topics
  • Feeling worse about themselves came from finance or health topics, or things they can’t have
  • Feeling sad related to animal mistreatment
  • People want to know how personal information will be protected, survey length, risks, topic, how results will be used, incentives, purpose of survey – at least one third of people wanted each of these [1/3 might not seem like a lot, but when your sample is 300 people that’s 100 people who want to know this stuff]
  • Similar online vs phone; incentives more important for online, on the phone people want to know what types of questions will be asked

Communicating Data Use and Privacy: In-person versus Web Based Methods for Message Testing; Aleia Clark Fobia, U.S. Census Bureau; Jennifer Hunter Childs, U.S. Census Bureau

  • Concern about different messages in different places that weren’t consistent
  • Is there a difference between “will only be used for statistical purposes” and “will never be used for non-statistical purposes”?
  • Tested who will see data, identification, sharing with other departments, burden of message
  • Tested it on a panel of census nerds🙂 – people who want to be involved in research; 4,000 invites, 300 completes
  • People were asked to explain what each message means, broke it down by demographics
  • 30 cognitive interviews, think-aloud protocol: read sets of messages and react; tested high and low performing messages [great idea to test the low messages as well]
  • Focused on lower education and people of colour
  • Understanding was higher for in-person testing, with more misunderstanding in online responses; “You are required by law to respond to the census (technical reference)” was better understood than listing everything in a statement – see the sketch after this list
  • People want to know what ‘sometimes’ means. And want to know which federal agencies – they don’t like the IRS
  • People don’t believe the word ‘never’ because they know breaches happen
  • More negative on the web
  • Less misunderstanding in person
  • Easier to say negative things online
  • In person was spontaneous and conversational
  • Focus on small words, avoid unfamiliar concepts, don’t know what tabulate means, don’t know what statistical means [they aren’t stupid, it’s just that we use it in a context that makes no sense to how they know the word]
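If you wanted to quantify that in-person vs web comprehension gap, a chi-square test of independence is the obvious tool. The counts below are invented for illustration – the talk reported the direction of the difference, not the numbers:

```python
# Chi-square test of message comprehension by mode.
# All counts are INVENTED; the talk reported only the direction.
from scipy.stats import chi2_contingency

#         understood  misunderstood
table = [[24,  6],      # in-person cognitive interviews (hypothetical)
         [180, 120]]    # web panel responses (hypothetical)

chi2, pval, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {pval:.4f}")
```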

Respondent Burden & the Impact of Respondent Interest, Item Sensitivity and Perceived Length; Morgan Earp, U.S. Bureau of Labor Statistics; Erica C. Yu, U.S. Bureau of Labor Statistics

  • 20 knowledge questions, 10 burden items, 5 demographic questions, ten minute survey
  • Some questions were simple, others were long and technical
  • Respondents asked to complete a follow-up survey a week later
  • Asked people how hard the survey was compared to taking an exam at school, reading a newspaper, or completing another survey – each given only one of these comparisons
  • Anchor of school exam had a noticeable effect size but not significant
  • Burden items – length, difficulty, effort, importance, helpfulness, interest, sensitivity, intrusive, private, burden
  • Main effects – only sensitivity was significant, effect size is noticeable
  • Didn’t really see any demographic interactions
  • The ten items formed three factors: burden/length/difficulty; effort/importance/helpfulness/interest; sensitive/intrusive/private – see the sketch after this list
  • Only the first factor related to whether they would answer the second survey
  • Females more likely to respond a second time
  • More sensitive surveys were less likely to be answered again; a more interesting survey would attract more women the second time
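For anyone wanting to replicate that three-factor structure, here’s a minimal sketch of an exploratory factor analysis on the ten burden items. The response matrix is simulated noise (so these loadings are meaningless); with real ratings the items should cluster the way the talk described:

```python
# Exploratory factor analysis of the ten burden items.
# X is SIMULATED here – substitute real respondent ratings.
import numpy as np
from sklearn.decomposition import FactorAnalysis

items = ["burden", "length", "difficulty",
         "effort", "importance", "helpfulness", "interest",
         "sensitive", "intrusive", "private"]

rng = np.random.default_rng(0)
X = rng.normal(size=(300, len(items)))     # placeholder response matrix

fa = FactorAnalysis(n_components=3, rotation="varimax")
fa.fit(X)

# Rows are items, columns are factors; with real data each item should
# load mainly on one of the three factors reported in the talk.
for item, loadings in zip(items, fa.components_.T):
    print(f"{item:12s}", np.round(loadings, 2))
```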
