Tag Archives: AAPOR

Eye Tracking in Survey Research #AAPOR 

Moderator: Aaron Maitland, Westat; Discussant: Jennifer Romano Bergstrom, Facebook 

Evaluating Grid Questions for 4th Graders; Aaron Maitland, Westat

  • Used to study cognitive processing
  • Does processing of questions change over the survey period
  • 15 items, 5 content areas about learning, school, tech in school, self-esteem
  • 15 single items and 9 grid items
  • Grid questions are not more difficult; only the first grid takes extra consideration/fixation
  • Double negatives have much longer fixation, so did difficult words
  • Expressed no preference for one type of question

Use of Eye-tracking to Measure Response Burden; Ting Yan, Westat; Douglas Williams, Westat

  • Normally we consider length of interview, or number of questions or pages; but these aren’t really burden
  • Attitudes are a second option, interest, importance, but that’s also not burden; Could ask people if they are tired or bored
  • Pupil dilation is a potential measure, check while they recall from memory, pay close attention, thinking hard, these things are small and involuntary; related to memory load
  • 20 participants, 8 minute survey, 34 target questions, attitude and behavioural questions, some hard or easy
  • Asked self-reported burden with 4 items – how hard is this item, how much effort did it take to answer
  • Measured pupil diameter at each fixation; baseline diameters differ by person, so they used dilation instead, expressed as a percentage over a baseline measure, reporting average and peak dilation
  • Dilation was greater for hard questions; peak dilation was 50% larger for hard questions, statistically significant though the raw difference seems very small
  • Could see breakoffs on the questions with more dilation 
  • Sometimes not consistent with breakoffs
  • Self report did correlate with dilation 
  • Can see people fixate on question many times and go back and forth from question to answer
  • Question stems caused more fixation for hard questions 
  • Eye tracking removes the bias of self report; more robust
  • Can we use this to identify people who are experiencing too much burden? [Imagine using this during an interview; you could find out which candidates were having difficulty answering questions]
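The dilation measure described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual code; the function name and diameter values are invented.

```python
# Dilation expressed as a fraction over a per-person baseline diameter,
# summarized as the average and peak dilation mentioned in the talk.

def dilation_metrics(fixation_diameters, baseline):
    """Return (average, peak) dilation as fractions over the baseline."""
    dilations = [(d - baseline) / baseline for d in fixation_diameters]
    return sum(dilations) / len(dilations), max(dilations)

# Made-up diameters in mm for one question's fixations
avg, peak = dilation_metrics([3.1, 3.3, 3.6], baseline=3.0)
```

Using a percentage over each person's own baseline is what lets dilation be compared across respondents whose resting pupil sizes differ.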

The Effects of Pictorial vs. Verbal Examples on Survey Responses; Hanyu Sun, Westat; Jonas Bertling, Educational Testing Service; Debby Almonte, Educational Testing Service

  • Survey about food, asked people how much they eat of each item
  • Shows visual or verbal examples 
  • Measured mean fixation
  • Mean fixation was higher for pictorial examples in all cases, with more time spent on the pictures than on the task; they think it’s harder when people see the pictures [I’d suggest the picture created a limiting view of the product rather than a general view of ‘butter’, which makes interpretation more difficult]
  • No differences in the answers
  • Fixation times suggest a learning curve for the questions; they get easier over time
  • Pictorial requires more effort to respond 

Respondent Processing of Rating Scales and the Scale Direction Effect; Andrew Caporaso, Westat

  • Some people suggest never using a vertical scale
  • Fixation – a pause of the gaze
  • Saccades – the rapid movements between pauses
  • Respondents don’t always want to tell you they’re having trouble
  • 34 questions, random assignment of scale direction
  • Scale directions didn’t matter much at all
  • There may be a small primacy effect with a longer scale, lower education may be more susceptible 
  • Fixations decreased over time
  • Top of scale gets the most attention, bottom gets the least [So people are figuring out what the scale is; you don’t need to read all five options once you know the first one, particularly for an agreement scale, where people can guess all the answers from the first answer.]


Conference Speakers Wearing Heart Rate FitBits at #AAPOR #Wearables

Everyone says they get nervous when they speak but how do you know who’s just saying it and who actually is nervous? Well, welcome to Fitbit. Ever since I got my Fitbit, I could easily tell from the heartrate monitor exactly what was happening in my life.  Three days before a conference, my heartrate would start increasing. Once I arrived, my sleep was disrupted. And when I actually spoke, my heartrate went nuts.

Today, Josh put his Fitbit chart online so I figured, ah, what the heck, I can do that too. And when I asked for more volunteers, Jessica offered hers as well. Feel free to send YOUR heartrate chart to me and I’ll add it here. Enjoy!

Annie / LoveStats

Annie / @LoveStats. My 25 minute walk to the hotel started with a high heartrate and then leveled off as I settled in at the hotel. But as soon as it was my turn to present, you’d think I was out jogging again!

Josh De La Rosa @JoshDelaRosa1

Josh De La Rosa / @JoshDelaRosa1. Lovely peak when Josh spoke which almost instantly dropped as soon as he finished speaking.


Jessica Holzberg / @jlholzberg  “apparently my evening out got my heart rate up more than my pre-presentation jitters :)”


Data Quality Issues For Online Surveys #AAPOR

Moderator: Doug Currivan, RTI International Location: Meeting Room 410, Fourth Floor
Impact of ‘Don’t Know’ Options on Attitudinal and Demographic Questions; Larry Osborn, GfK Custom Research; Nicole R. Buttermore, GfK Custom Research; Frances M. Barlas, GfK Custom Research; Abigail Giles, GfK Custom Research

  • Telephone and in person rarely offer a don’t know option but they will record it, offering it doesn’t improve data
  • May not be the case with online surveys
  • They offered a prompt following nonresponse to see how it changed results
  • Got 4000 completes
  • Tested attitudinal items – with, without, and with a prompt
  • Don’t know responses were reduced after a prompt; people did choose an opinion; it was effective and didn’t affect data validity
  • Tested it on a factual item as well, income, which is often missing up to 25% of data
  • Branching income often helps to minimize nonresponse (e.g., start with three income groups and then each group is split into three more groups)
  • 1900 completes for this question – 35k or below, > 35k, DK, and then branched each break; DK was only offered for people who skipped the question
  • Checked validity by correlations with income related variables (e.g., education, employment)
  • Lower rates of missing when DK is offered after nonresponse, it seems most missing data is satisficing 
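The branching approach to the income question described above (a coarse three-way split, then a finer three-way split within the chosen group) can be sketched as follows. The bracket boundaries are invented for illustration; only the structure mirrors the talk.

```python
# Two short questions cover 3 x 3 = 9 income brackets, which is less
# burdensome than one long list and helps minimize item nonresponse.

COARSE = [(0, 35_000), (35_000, 100_000), (100_000, None)]  # question 1

def fine_brackets(lo, hi):
    """Split one coarse bracket into three finer ones (question 2)."""
    if hi is None:                        # open-ended top bracket
        return [(lo, 2 * lo), (2 * lo, 4 * lo), (4 * lo, None)]
    step = (hi - lo) // 3
    return [(lo + i * step, lo + (i + 1) * step) for i in range(3)]

all_brackets = [b for lo, hi in COARSE for b in fine_brackets(lo, hi)]
```

The respondent only ever sees three options at a time, yet the researcher ends up with nine-bracket resolution.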

Assessing Changes in Coverage Bias of Web Surveys as Internet Access Increases in the United States; David Sterrett, NORC at the University of Chicago; Dan Malato, NORC at the University of Chicago; Jennifer Benz, NORC at the University of Chicago; Trevor Tompson, NORC at the University of Chicago; Ned English, NORC at the University of Chicago

  • Many people don’t have Internet access, but access is much better nowadays; can we feel safe with a web-only survey?
  • Is coverage bias minimal enough not to worry about the people who don’t have Internet access?
  • It can be question by question, not just for the survey overall
  • Coverage bias is a problem if there are major differences between those with coverage and without, if they are the same kinds of people it won’t matter as much
  • Even when you weight data, it might not be a representative sample, weights don’t fix everything
  • ABS – address based sampling design – as opposed to telephone number or email address based
  • The General Social Survey has information about whether people have Internet access and covers many health, social, political, and economic issues; can see where coverage error happens
  • Income, education, ethnicity, age are major predictors of Internet access as predicted
  • What issues are beyond weighting on demographics
  • For many issues, there was less than 1 percentage point of coverage error
  • For health, same sex marriage, and education, the differences were up to 7 percentage points
  • Over time – bias decreased for voting and support for assistance to blacks; but error increased for spending on welfare, marijuana, and getting ahead in life
  • Saw many differences when they looked into subgroups
  • [so many tests happening, definitely need to see replication to rule out which are random error]
  • As people who don’t have Internet access become more different from people who have it, we need to be cognizant of how that skews which subset of results
  • You can’t know whether the topic you are researching is a safe one or not

Squeaky Clean: Data Cleaning and Bias Reduction; Frances M. Barlas, GfK Custom Research; Randall K. Thomas, GfK Custom Research; Mansour Fahimi, GfK Custom Research; Nicole R. Buttermore, GfK Custom Research

  • Do you need to clean your data? [Yes, because you don’t know if errors sit within a specific group of people; you need to at least be aware of the quality]
  • Some errors are intentional, others accidental, or they couldn’t find the best answer
  • Results did not change if no more than 5% of the data was removed
  • Is there such a thing as too much data cleaning
  • Cleaned out incremental percentages of data and then weighted to census, matched to census data as the benchmark
  • Saw no effect with cleaning up to 50% of the data with one of the panels; similar with the second, almost no effect of cleaning
  • [given that different demographic groups have different data quality, it could matter by subsample]

Trap Questions in Online Surveys; Laura Wronski, SurveyMonkey; Mingnan Liu, SurveyMonkey

  • Tested a variety of trap questions, easy or hard, beginning or end – used the format of selecting an answer they specify
  • 80% were trapped with the hard question
  • [saddens me that we talk about ‘trapping’ respondents.. They are volunteering their time for us. We must treat them respectfully. Trap questions tell respondents we don’t trust them.]
  • Tested follow ups and captcha 
  • Announcements didn’t result in any differences; the picture verification question trapped about ten percent of people
  • Captcha trapped about 1% [probably they couldn’t read it]
  • Preferred the picture trap
  • [I personally prefer using many questions because everyone makes errors somewhere. Someone who makes MANY errors is the problem, not someone who misses one question.]
  • At the end of the survey, asked people if they remembered the data quality question – many people didn’t notice it
  • One trap question is sufficient [wow, I disagree so much with this conclusion]

Identifying Psychosocial Correlates of Response in Panel Research: Evidence from the Health and Retirement Study; Colleen McClain, University of Michigan – MAPOR student paper winner

  • People who are more agreeable are less likely to participate (big 5 traits)
  • More conscientious are more likely to participate 
  • More agreeable people took longer to respond to the survey
  • Conscientious people responded more quickly
  • More distrustful are less likely to check their records
  • Effects were very small
  • We need to consider more than demographics when it comes to data quality

What Are You? Measuring The Size, Characteristics And Attitudes Of The Multiracial Population In America #AAPOR

Live note taking at #AAPOR in Austin, Texas. Any errors or bad jokes are my own.

Moderator: Richard Morin, Pew Research Center 

Exploring New Ways to Measure Race/Ethnicity for the 2020 Census; Nicholas Jones, U.S. Census Bureau

  • Focus is multiethnic Americans 
  • Increasing numbers of people don’t identify with the current categories, lobbying government to change the categories
  • Can people find themselves more accurately and easily in the form
  • [My thought – do we need a probability sample to learn about the best way to write an ethnicity question? I suggest no.]
  • Want to explore separate questions and combined questions
  • Middle Eastern or North African MENA category being considered
  • [Why does the census ask about Race when it really is a question of Breed or Historical Origin? There is only one race – human]
  • Test option to let people write in more specific details
  • Also testing detailed checkbox question
  • List the options in order of frequency of occurrence 
  • Testing instruction – Note, you may report more than one group. People were misreading “Mark X” as mark a single X even though it said “Mark X in one or more boxes”
  • Testing terms: race, origin, ethnicity. Also tested no terms at all – “which category” [Which category is my favorite. I’ve moved pretty much everything over to that style. It’s a huge help for the gender/sex/identity question.]
  • Want to understand how multiracial groups respond, particularly since these groups are growing faster than others
  • Want to talk about results by the fall
  • [I really want to see the results of this!]

Measuring Hispanic Racial Identity:
A Challenge to Traditional Definitions of Race; Mark Hugo Lopez, Pew Research Center; Ana Gonzalez-Barrera, Pew Research Center

  • Census leaves Hispanic as a separate question; it dates from the 1970s
  • Do you ask Hispanic or race first?
  • What does “some other race” mean? It tended to mean Mexican or Hispanic or Latin American.
  • People consider Hispanic to be a race regardless of what the researchers want it to mean
  • Hispanic identity is changing particularly when more have US parents 
  • It varies a lot depending on how you ask it
  • If you ask directly people will tell you they are black or indigenous. 
  • Multiple choices are selected, maybe because people don’t find themselves in the list

The Size, Characteristics and Key Attitudes of Multiracial Americans; Juliana Horowitz, Pew Research Center; Richard Morin, Pew Research Center

  • Asked race question of same people over a time span
  • Asked about parents and grandparent race
  • Data included self report and parent race
  • Included only mixed race in the data but they have demographic data on everyone
  • 2.9% said they are only one race but based on their parents could be called mixed race
  • another percent were mixed race based on grandparents
  • does this mean the census is wrong? no, it’s different
  • [LOVE the idea of asking about parents and grandparents, sort of gets to acculturation]
  • race is fluid
  • 30% mixed race people have seen themselves as being one race at some point, vice versa as well
  • 6.9% are mixed race based on these definitions
  • black/Native people were called mixed race due to their grandparents’ information
  • identity gap – when questions don’t reflect how people see themselves
  • why don’t people say they’re mixed – they were raised as one race and look like one race; treatment like discrimination can affect how your identity is felt
  • sometimes people feel proud about being mixed race, some feel more open to other cultures because of it, half have felt discrimination because of it
  • native people say if someone on the street saw them, they would say they’re white; but black people would be perceived as black
  • amount of contact with family relatives determined how people felt about themselves [really points out how race is a silly concept. it’s a cultural and social concept.]

    Do Multiracial Adults Favor One of their Background Races Over the Other: An Implicit Association Test; Annie Franco, Stanford University

    • By 2050 one in five Americans will be multiracial
    • Explicit vs implicit bias is important because some people will refuse to admit they are biased or won’t even know they are biased
    • Measured bias based on self reports as well as implicit measures 
    • People can pair words together more quickly if the words are consistent with their beliefs
    • 50% of white people show preference for their own group
    • White/Asian attitudes are closer to Asian than white
    • White/black are closer to black and are solidly positive on the black side
    • [lots of details here, I might have mixed things up, ask for the paper]
    • White/black express more positive view of blacks, white/Asian express less positive view of Asian
    • There are definite differences in implicit/explicit views [think of this in relation to the upcoming election in terms of which candidate is inline with your implicit views] 

    Questionnaire Design #AAPOR 

    Live note taking at #AAPOR in Austin, Texas. Any errors or bad jokes are my own.

    The effect of respondent commitment and tailored feedback on response quality in an online survey; Kristin Cibelli, U of Michigan

    • People can be unwilling or unable to provide high quality data; will informing them of the importance and asking for commitment help to improve data quality? [I assume this means the survey intent is honourable and the survey itself is well written, not always the case]
    • Used administrative records as the gold standard
    • People were told their answers would help with social issues in the community [would similar statements help in CPG, “to help choose a pleasant design for this cereal box”]
    • 95% of people agreed to the commitment statement; 2.5% did not agree but still continued; thus, we could assume the control group might have been very similar in commitment had they been asked
    • Reported income was more accurate for committed respondents, marginally significant
    • Overall item nonresponse was marginally better for committed respondents, not committed people skipped more
    • Not committed respondents were more likely to straightline
    • Reports of volunteering, social desirability were possibly lower in the committed group, people confessed it was important for the resume
    • Committed respondents were more likely to consent to reviewing records
    • Commitment led to more responses to the income question and improved their accuracy; committed respondents were more likely to check their records to confirm income
    • Should try asking control group to commit at the very end of the survey to see who might have committed 

    Best Practice Instrument design and communications evaluation: An examination of the NSCH redesign by William Bryan Higgins, ICF International

    • National and state estimates of child well-being 
    • Why redesign the survey? To shift from landline and cell phone numbers to household address based sampling design because kids were answering the survey, to combine two instruments into one, to provide more timely data
    • Move to self-completion mail or web surveys with telephone follow-up as necessary
    • Evaluated communications about the survey, household screener, the survey itself
    • Looked at whether people could actually respond to questions and understand all of the questions
    • Noticed they need to highlight who is supposed to answer the survey, e.g., only for households that have children, or even if you do NOT have children. Make requirements bold and high up on the page.
    • The wording assumed people had read or received previous mailings. “Since we last asked you, how many…”
    • Needed to personalize the people, name the children during the survey so people know who is being referred to 
    • Wanted to include less legalese

    Web survey experiments on fully balanced, minimally balanced, and unbalanced rating scales by Sarah Cho, SurveyMonkey

    • Is now a good time or a bad time to buy a house. Or, is now a good time to buy a house or not? Or, is now a good time to buy a house?
    • Literature shows a moderating effect for education
    • Research showed very little difference among the formats, no need to balance question online
    • Minimal differences by education though lower education does show some differences
    • Conclusion, if you’re online you don’t need to balance your results

    How much can we ask? Assessing the effect of questionnaire length on survey quality by Rebecca Medway, American Institutes for Research

    • Adult education and training survey, paper version
    • Wanted to redesign the survey but the redesign was really long
    • The 2 versions were 20 pages and 28 pages, with 138 or 98 questions
    • Response rate slightly higher for shorter questionnaire
    • No significant differences in demographics [but I would assume there is some kind of psychographic difference]
    • Slightly more non-response in longer questionnaire
    • Longer surveys had more skips over the open end questions
    • Skip errors had no differences between long and short surveys
    • Generally the longer survey had a lower response rate but no extra problems over the short one
    • [They should have tested four short surveys versus the one long survey; 98 questions is just as long as 138 questions in my mind]

    Rise Of The Machines: DSc Machine Learning In Social Research #AAPOR #MRX #NewMR 

    Enjoy my live note taking at AAPOR in Austin, Texas. Any bad jokes or errors are my own. Good jokes are especially mine.  

    Moderator: Masahiko Aida, Civis Analytics

    Employing Machine Learning Approaches in Social Scientific Analyses; Arne Bethmann, Institute for Employment Research (IAB) Jonas F. Beste, Institute for Employment Research (IAB)

    • [Good job on starting without a computer being ready. Because who needs computers for a talk about data science which uses computers:) ]
    • Demonstration of chart of wages by age and gender which is far from linear, regression tree is fairly complex
    • Why use machine learning? Models are flexible, automatic selection of features and interactions, large toolbox of modeling strategies; but risk is overfitting, not easily interpretable, etc
    • Interesting that you can kind of see the model in the regression tree alone
    • Start by setting a variable to 0 for every case in the sample, e.g., male and female are both set to 0; then predict responses for every person; calculate the AME/APE as the mean difference between predictions for all cases
    • Regression tree and linear model end up with very different results
    • R package for average marginal effects – MLAME on GitHub
    • The mlr package as well [please ask the author for links to these packages]
    • Want to add more functions to these – conditional AME, SE estimation, MLR wrapper
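The procedure in the bullets above (set a focal dummy to 0 for every case, then to 1, predict both ways, and average the per-person difference in predictions) can be sketched without the authors' R packages. The toy `predict` function below stands in for any fitted machine-learning model, and all names and numbers are illustrative.

```python
import numpy as np

def predict(X):
    """Stand-in for a fitted learner: wage steps up with age
    (column 1) and down with the female dummy (column 0)."""
    return 20 + 5 * (X[:, 1] > 40) - 5 * X[:, 0]

def average_effect(model_predict, X, col):
    """AME/APE: set the focal column to 0 for everyone, then to 1,
    and average the difference in predictions over all cases."""
    X0, X1 = X.copy(), X.copy()
    X0[:, col], X1[:, col] = 0.0, 1.0
    return float((model_predict(X1) - model_predict(X0)).mean())

rng = np.random.default_rng(0)
X = np.column_stack([rng.integers(0, 2, 200), rng.uniform(20, 60, 200)])
ame_female = average_effect(predict, X, col=0)  # -5.0 for this toy model
```

Because the effect is averaged over predictions rather than read off a coefficient, the same recipe works for a regression tree, a random forest, or a linear model alike, which is the point the speakers were making.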

    Using Big Census Data to Better Understand a Large Community Well-being Study: More than Geography Divides Us; Donald P. Levy, Siena College Research Institute; Meghann Crawford, Siena College Research Institute

    • Interviewed 16,000 people by phone, RDD
    • Survey of quality of community, health, safety, financial security, civic engagement, personal well being
    • Used factor analysis to group and test multiple indicators into factors; did the items really rest within each factor [I love factor analysis. It helps you see groupings that are invisible to the naked eye.]
    • Mapped out cities and boroughs; some changed over time
    • Rural versus urban have more in common than neighbouring areas [is this not obvious?]
    • 5 connections – wealthy, suburban, rural, urban periphery, urban core
    • Can set goals for your city based on these scores
    • Simple scoring method based on 111 indicators to help with planning and awareness campaigns; the numbers are made public and shared in reports and on public transportation so the public knows what they are; helps to identify obstacles and enhance quality of life

    Using Machine Learning to Infer Demographics for Respondents; Noble Kuriakose, SurveyMonkey; Tommy Nguyen, SurveyMonkey

    • Best accuracy for inferring gender is 80%; Google has seen the same
    • Use mobile survey, but not everyone fills out the entire demographic survey
    • Works to find twins, people you look like based on app usage
    • Support vector machines try to split a scatter plot where male and female are as far apart as possible 
    • Give a lot of power to the edges to split the data 
    • Usually the data overlaps a ton, you don’t see men on the left and women on the right
    • “Did this person use this app?” splits people by gender; Pinterest is often the first node because it is the best differentiator right now; Grindr and emoticon use follow through to define the genders well; stop when a node is all one specific gender
    • Men do use Pinterest though, ESPN is also a good indicator but it’s not perfect either, HotOrNot is more male
    • Use time spent per app, apps used, number of apps installed, websites visited, etc
    • Random forest works the best
    • Feature selection really matters, use a selected list not a random list
    • Really big differences with tree depth
    • Can’t apply the app model from one platform to the Android model; the apps are different, the use of apps is different
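A toy version of the app-usage tree described above, hard-coding the splits the speakers mentioned (Pinterest first, then Grindr, then ESPN). The actual model is a learned random forest over many features; treat this as a caricature of a single tree, with made-up rules.

```python
def predict_gender(apps):
    """apps: set of installed app names. Each split asks
    'did this person use this app?', as in the talk."""
    if "Pinterest" in apps:       # best single differentiator, per the talk
        return "female"           # imperfect: men use Pinterest too
    if "Grindr" in apps:
        return "male"
    if "ESPN" in apps:            # good but imperfect male indicator
        return "male"
    return "unknown"              # a real tree would keep splitting here

label = predict_gender({"Pinterest", "ESPN"})
```

The first matching split wins, which is why the order of nodes (Pinterest at the root) matters so much to the tree's accuracy.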

    Dissonance and Harmony: Exploring How Data Science Helped Solve a Complex Social Science Problem; Michael L. Jugovich, NORC at the University of Chicago; Emily White, NORC at the University of Chicago

    • [another speaker who marched on when the computer screens decided they didn’t want to work 🙂 ]
    • Recidivism research, going back to prison
    • Wanted a national perspective of recidivism
    • Offences differ by state; unstructured text forms mean a lot of text interpretation; historical data is included, which messes up the data when it’s vertical or horizontal in different states
    • Have to account for short forms and spelling errors (kinfe)
    • Getting the data into a useable format takes the longest time and the most work
    • Big data is often blue in pictures with spirals [funny comments 🙂 ]
    • Old data is changed and new data is added all the time
    • 30,000 regular expressions to identify all the pieces of text
    • They seek 100% accuracy rate [well that’s completely impossible]
    • Added in supervised learning, used to help improve the speed and efficiency of the manual review process
    • Wanted state specific and global economy models, over 300 models, used brute force model
    • Want to improve with neural networks, auto make data base updates
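A miniature version of the text normalization described above: regular expressions tolerant of abbreviations and misspellings (like "kinfe") map raw offense text to standard codes. The real system used roughly 30,000 expressions; the two patterns and code names below are invented for illustration.

```python
import re

# Each pattern maps messy free text to one standardized offense code.
OFFENSE_PATTERNS = [
    (re.compile(r"\b(?:kni?fe|kinfe)\b", re.I), "WEAPON-KNIFE"),  # knife + typos
    (re.compile(r"\bburg(?:lary)?\b", re.I), "BURGLARY"),         # allows "burg"
]

def normalize(text):
    """Return all standard codes whose pattern matches the raw text."""
    codes = [code for pattern, code in OFFENSE_PATTERNS if pattern.search(text)]
    return codes or ["UNCODED"]

result = normalize("assault w/ kinfe")   # tolerant of the misspelling
```

Records that fall through to `UNCODED` are exactly the ones the speakers route to manual review, which is where their supervised-learning layer speeds things up.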

    Machine Learning Our Way to Happiness; Pablo Diego Rosell, The Gallup Organization

    • Are machine learning models different/better than theory driven models
    • Using Gallup daily tracking survey
    • Measuring happiness using the ladder scale – best possible life to worst possible life, where do you fall along this continuum? Most people sit around 7 or 8
    • 500 interviews every day, RDD of landlines and mobile, English and Spanish, weighted to national targets and phone lines
    • Most models get an R-squared of .29, probably because they miss interactions we can’t even imagine
    • Include variables that may not be justified in a theory driven model, include quadratic terms that you would never think of, expanded variables from 15 to 194
    • [i feel like this isn’t necessarily machine learning but just traditional statistics with every available variable crossed with every other variable included in the process]
    • For an 80% solution, needed only five variables
    • This example didn’t uncover significant unmodeled variables
    • [if machine learning is just as fast and just as predictive as a theory driven model, I’d take the theory driven model any day. If you don’t understand WHY a model is what it is, you can’t act on it as precisely.]

    Panel: Public Opinion Quarterly Special – Survey research today and tomorrow #AAPOR #MRX #NewMR 

    Live note taking at #AAPOR in Austin, Texas. Any errors or bad jokes are my own.

    Moderator: Peter V. Miller, U.S. Census Bureau 

    • He is accepting submissions of 400 words regarding these papers, to be published in an upcoming issue, due June 30, send to peter.miller@census.gov

    Theory and Practice in Nonprobability Surveys: Parallels Between Causal Inference and Survey Inference; Andrew Mercer, Pew Research Center; Frauke Kreuter, University of Maryland; Scott Keeter, Pew Research Center; Elizabeth Stuart, Johns Hopkins University. Discussant: Jill DeMatteis, Westat

    • Noncoverage – when people can’t be included in a survey
    • The problem is when those excluded are systematically different
    • Selection bias is not as useful in a nonprobability sample as there is no sampling frame, and maybe not even a sample
    • Need a more general framework
    • Random selection and random treatment assignment is the best way to avoid bias
    • Need exchangeability – know all the confounding, correlated variables
    • Need positivity – everyone needs to be able to get any of the treatments, coverage error is a problem
    • Need composition – everyone needs to be in the right proportions 
    • You might know the percent of people who want to vote one way, and you know you have more of a certain demographic in your group; but it’s never just one demographic group, it’s ten or twenty or 100 important demographic and psychographic variables that might have an association with the voting pattern
    • You can’t weight a demographic group up [pay attention!]
    • We like to assume we don’t have any of these three problems, and you can never know if you’ve met them all; we hope random selection accomplishes this for us, or with quota selection we hope it is met by design
    • One study was able to weight a gigantic sample using census data and the results worked out well [makes sense if your sample is so ridiculously large that you can put bigger weights on a sample of 50,000 young men]
    • Using demographics and psychographics helps to create more accurate results, religion, political affiliation
    • This needs to be done in probability and nonprobability samples
    • You can never be certain you have met all the assumptions
    • Think about confounding variables during survey design, not just demographics, tailored to the research question at hand
    • Confounding is more important than math – it doesn’t matter what statistic you use; if you haven’t met the requirements first, you’re in trouble

    Apples to Oranges or Gala vs. Golden Delicious? Comparing Data Quality of Nonprobability Internet Samples to Low Response Rate Probability Samples; David Dutwin, SSRS; Trent Buskirk, Marketing Systems Group. Discussant: George Terhanian, NPD Group

    • N > 80,000; 9% response rate for the probability sample [let’s be real here, you can’t have a probability sample with humans]
    • The matching process is not fool proof, uses categorical match, matching coefficient, randomly selected when there was a tie
    • Looked at absolute bias, standard deviation, and overall mean absolute bias
    • Stuck with demographic variables, conditional variables nested within gender, age, race or region
    • Weighted version was good, but matched and raked was even closer, variability is much less with the extra care
    • Nonprobability telephone surveys consistently had less variability in the errors
    • Benchmarks are essential to know what the error actually is; you can’t judge the bias without a benchmark
    • You can be wrong, or VERY wrong and you won’t know you’re wrong
    • Low response rate telephone gets you better data quality, much more likely you’re closer to truth
    • Cost is a separate issue of course
    • Remember fit for purpose – in politics you might need reasonably accurate point estimates 
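The "matched and raked" adjustment mentioned above uses raking (iterative proportional fitting): reweighting so the sample's margins match known benchmarks on several variables at once. A minimal sketch with invented counts and target margins:

```python
import numpy as np

def rake(counts, row_targets, col_targets, iters=100):
    """Iteratively scale a 2-D table of weights so its row and
    column sums match the benchmark margins."""
    w = counts.astype(float)
    for _ in range(iters):
        w *= (row_targets / w.sum(axis=1))[:, None]   # fix row margins
        w *= col_targets / w.sum(axis=0)              # fix column margins
    return w

sample = np.array([[30, 20], [10, 40]])               # e.g. gender x age counts
w = rake(sample, np.array([50.0, 50.0]), np.array([60.0, 40.0]))
```

Note the limitation the panel keeps stressing: raking can only fix the margins you rake on; any confounder you left out of the adjustment is untouched.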

    Audience discussion

    • How do you weight polling research when political affiliation is part of both equations? What is the benchmark? You can’t use the same variables for weighting and measuring and benchmarking, or you’re just creating the results you want to see
    • If we look at the core demographics, maybe we’ve looked at something that was important [love that statement, “maybe”, because really we use demographics as proxies for humanity]
    • [If you CAN weight the data, should you? If you’re working with a small sample size, you should probably just get more sample. If you’re already dealing with tens of thousands, then go ahead and make those small weighting adjustments]

    2016: The year of the outsider #PAPOR #MRX 

    live blogged at #PAPOR in San Francisco. Any errors or bad jokes are my own.

    The Summer of Our Discontent, Stuart Elway, Elway Research

    • regarding the state of Washington
    • it’s generally Democratic
    • between elections, more people are independents, and then they drift back toward the Democrats
    • independents are more socially liberal
    • has become more libertarian
    • don’t expect a rebellion to start in Washington state
    • [sorry, too many details for me to share well]

    Californians’ Opinions of Political Outsiders, Mark Baldassare, PPIC

    • California regularly elects outsiders – Reagan, Schwarzenegger
    • flavour is often outsider vs insider, several outsiders have run recently
    • blog post on the topic – http://ppic.org/main/blog_detail.asp?i=1922
    • they favour new ideas over experience
    • 3 things are important – approval ratings of elected officials (people who prefer outsiders give officials lower approval), and negative attitudes toward the two-party system
    • majority think a third party is needed – more likely to be interested in new ideas over experience
    • [sorry, too many details for me to share well]

    Trump’s Beguiling Ascent: What 50-State Polling Says About the Surprise GOP Frontrunner, Jon Cohen & Kevin Stay, SurveyMonkey

    • 38% of people said they’d be scared if Trump is the GOP nominee
    • 25% would be surprised
    • 24% would be hopeful
    • 21% would be angry
    • 14% would be excited
    • The list differs between Democrats and Republicans, as expected, but is not exactly opposite
    • quality polling requires scale, heterogeneity, and correctable self-selection bias
    • most important quality for candidates is standing up for principles, strong leader, honest and trustworthy – experience is lowest on the list
    • Views on Trump’s Muslim statement change by the minute – at the time of this data: 48% approve, 49% disapprove, split as expected by party
    • terrorism is the top issue for Republicans; jobs AND terrorism are top for independents; jobs is top for Democrats
    • for Republicans – the day before the Paris attacks, 9% said terrorism was the top issue; after Paris, 22%
    • support for Cruz is increasing
    • half of Trump voters are absolutely certain they will vote for Trump, but only 17% of Bush voters are absolutely certain
    • among Republicans, Cruz is the second choice even among Trump voters
    • Trump has the fewest voters who attend religious services weekly of any candidate; Carson and Cruz are on the high end
    • Trump voters look demographically typical, but Carson has fewer male voters and Cruz has fewer female voters
    • Trump voters are much less educated; Rubio voters are much more educated

    Improvements to survey modes #PAPOR #MRX 

    What Are They Thinking? How IVR Captures Public Opinion For a Democracy, Mary McDougall, Survox

    • many choices: online is cheapest, followed by IVR, followed by phone interviews
    • many still do not have internet access – seniors, non-white, low-income, and no-high-school-degree populations
    • phone can help you reach those people, and you can still do specific targeting
    • good idea to include multiple modes to test for any mode effects
    • technology is no longer a barrier when choosing a data collection strategy
    • ignoring cell phones is poor sampling
    • use interviewer labor strategically, letting IVR handle the rest
    • tested IVR on political polling, 300 completes in 2.5 hours, met the quotas, once a survey was started it was generally completed

    The Promising Role of Fax in Surveys of Clinical Establishments: Observations from a Multi-mode Survey of Ambulatory Surgery Centers, Natalie Teixeira, Anne Herleth, and Vasudha Narayanan, Westat; Kelsey O’Yong, Los Angeles Department of Public Health

    • we often want responses from an organization, not an individual person
    • 500 medical facilities, 60 questions about staffing and infection control practices
    • used multimode – telephone, postal, web, and fax
    • many people requested the survey by fax and many people did convert modes
    • because fax was so successful, reminder calls were combined with fax automatically and saw successful conversions to this method
    • this does not follow the current trend
    • fax is immediate and keeps gatekeepers engaged, maybe it was seen as a novelty
    • [“innovative fax methodology” so funny to hear that phrase. I have never ever ever considered fax as a methodology. And yet, it CAN be effective. 🙂 ]
    • options to use “mass” faxing exist

    The Pros and Cons of Persistence During Telephone Recruitment for an Establishment Survey, Paul Weinfurter and Vasudha Narayanan, Westat

    • half of restaurant issues involve employees coming to work ill; a new law was coming into effect regarding sick pay
    • recruit 300 restaurants, and within each recruit 1 manager, 1 owner, and a couple of food preparers
    • telephone recruitment and in-person interviews; English, Spanish, and Mandarin; 15 minutes; $20 gift card
    • most of the time they couldn’t get a manager on the phone, and they received double the original sample of restaurants to contact
    • it was assumed that restaurants would participate because the sponsor was the health inspectors, but participation was not mandatory and recruiters couldn’t say it was; there were many scams related to this, so people just declined; also, not all of the health inspectors were even aware of the study
    • 73% were unreachable after 3 calls, hard to get a person of authority during open hours
    • increased call attempts to five times, but continued on when they thought recruitment was likely
    • recruited 77 more from people who were called more than 5 times
    • as a result, data were not limited to a quicker to reach sample
    • people called up to ten times remained noncommittal and were never interviewed
    • there wasn’t an ideal number of calls to get maximum recruits and minimum costs
    • but the method wasn’t really objective, the focus was on restaurants that seemed like they might be reachable
    • possibly more representation than if they had stopped all their recruitment at five calls
    • [would love to see results crossed by number of attempts]
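As a sketch of the attempt-by-attempt breakdown the note above wishes for, here is how cumulative recruitment yield per call-attempt tier could be tabulated. The counts are invented; only the more-than-five-calls figure (77 extra recruits) comes from the talk.

```python
# Hypothetical tabulation of recruits by number of call attempts.
# Counts for attempts 1-5 are made up; 77 recruits from >5 calls is from the talk.
recruits_by_attempt = {1: 40, 2: 31, 3: 25, 4: 18, 5: 12, 6: 77}  # 6 = "more than 5 calls"

def cumulative_yield(counts):
    """Running total of recruits after each call-attempt tier."""
    total, out = 0, {}
    for attempt in sorted(counts):
        total += counts[attempt]
        out[attempt] = total
    return out

yields = cumulative_yield(recruits_by_attempt)
```

A table like this, crossed with survey responses, is what would show whether the hard-to-reach restaurants actually answered differently.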

    Uses of survey and polling data collection: practical and ethical implications #PAPOR #MRX 

    Live blogged at #PAPOR in San Francisco. Any errors or bad jokes are my own.

    Are California’s Registered Independents Shy Partisans?, David Kordus, Public Policy Institute of California

    • number of independent voters has doubled in the last twenty years
    • automatic voter registration via the DMV will add new voters
    • independents are not one homogeneous group
    • on average, they really are in the middle between Republicans and Democrats, though not necessarily more moderate

    Exploring the Financial Landscape Facing Veterans in Nevada: Financial Literacy, Decision-making, and Payday Loans, Justin S. Gardner & Christopher Stream, UNLV, Runner-Up Student Paper Competition Winner

    • payday lending only started in the 1990s; lenders cluster in military areas
    • the largest cause of security clearance problems was financial issues; interest rates on payday loans to service members were later capped
    • 375 respondents, lots of disabled veterans who can’t work
    • use as medical loans is very low; many use them to pay off student loans or other debts; paying for housing is also a major use
    • most learned about it from tv commercials, or friends and family. If family are encouraging them to do this, something needs to change
    • people who don’t feel prepared for emergencies are more likely to use
    • the majority had a salary under $50,000 and are likely to need another loan in the future
    • 20% had used payday, it is cyclical, once you’re in the cycle it’s difficult to break out of it
    • half of the people could walk to a lender from their home; no car needed

    What Constitutes Informed Consent? Understanding Respondents’ Need for Transparency, Nicole Buttermore, Randall Thomas, Frances M. Barlas, & Mansour Fahimi, GfK

    • the biggest threat is release of a participant’s name, but should participants also be told the sponsor of the study?
    • problem is nonresponse and survey bias if people know who the sponsor is
    • 6% thought taking a survey could have a negative impact on their life – worried about data breach, who has access to data, company might be hacked, improper use of data, questions might make me feel uncomfortable
    • 95% think surveys pose no or minimal risk to their mental health – yet about 23% have quit a survey because it made them feel uncomfortable
    • about 20% said a survey has made them feel very uncomfortable – asked about race, income, too much personal information, can’t give the exact answer they want to, feel political surveys are slanted, surveys are boring, don’t know how to answer the question
    • respondents want to know how personal information will be used and how privacy will be protected
    • want to know how long it will take, the topic, and the incentive points for it
    • about twenty percent want to know the company doing the research and the company paying for the research

    Recent Changes to the Telephone Consumer Protection Act, Bob Davis, Davis Research

    • this is not legal advice
    • TCPA issue is regarding calls using automated telephone equipment
    • lawyers like to threaten to sue but settle
    • vicarious liability – responsibility of the superior for the acts of their subordinates, i.e., contract work, sponsor of research
    • any phone with a redial button counts as an autodialer – only the old rotary phones, where you stick your finger in the hole and turn the dial, are not autodialers
    • if you can get permission, then get it
    • regularly scrub your landline system to make sure there are no cell phones in it
    • use a non-predictive dialing system
    • ask that suppliers are TCPA compliant
    • international partners dialing into the US need to follow the rules as well
    • talk with your lawyer ahead of time so you can say you have already talked to a lawyer, and litigants don’t think you are an easy target
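The “scrub your landline system” advice above might look like this in outline. This is a sketch only: real scrubs use licensed wireless-block and ported-number data, and the prefix set below is entirely made up.

```python
# Hypothetical sketch of scrubbing a landline dialing list for cell numbers.
# A real scrub uses licensed wireless block / ported-number databases;
# this prefix set is invented for illustration.

WIRELESS_PREFIXES = {"415555", "202555"}  # NPA-NXX blocks flagged as wireless

def scrub_landline_list(numbers):
    """Split a call list into landline-safe and wireless numbers,
    keyed on the first six digits (area code + exchange)."""
    landline, wireless = [], []
    for n in numbers:
        digits = "".join(c for c in n if c.isdigit())[-10:]  # normalize to 10 digits
        (wireless if digits[:6] in WIRELESS_PREFIXES else landline).append(n)
    return landline, wireless

calls = ["(415) 555-0100", "212-555-0188", "+1 202 555 0123"]
safe, cell = scrub_landline_list(calls)
```

Numbers landing in the `cell` bucket would then go to a manual-dial or consent-based workflow rather than the autodialer.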