Eye Tracking in Survey Research #AAPOR 


Moderator: Aaron Maitland, Westat; Discussant: Jennifer Romano Bergstrom, Facebook 

Evaluating Grid Questions for 4th Graders; Aaron Maitland, Westat

  • Eye tracking is used to study cognitive processing
  • Does processing of questions change over the course of the survey?
  • 15 items, 5 content areas about learning, school, tech in school, self-esteem
  • 15 single items and 9 grid items
  • Grid questions are not more difficult; only the first grid takes extra consideration/fixation
  • Double negatives had much longer fixations, as did difficult words
  • Respondents expressed no preference for either type of question


Use of Eye-tracking to Measure Response Burden; Ting Yan, Westat; Douglas Williams, Westat

  • Burden is normally measured by interview length or the number of questions or pages, but these aren't really burden
  • Attitudes (interest, importance) are a second option, but that's also not burden; you could ask people whether they are tired or bored
  • Pupil dilation is a potential measure: pupils dilate when respondents recall from memory, pay close attention, or think hard; the changes are small and involuntary, and related to memory load
  • 20 participants, 8-minute survey, 34 target questions, attitudinal and behavioural, some hard and some easy
  • Asked self-reported burden on 4 items – how hard was this item, how much effort did it take to answer
  • Measured pupil diameter at each fixation; because baseline diameters differ by person, they used dilation rather than raw diameter – percentage over a baseline, summarised as average dilation and peak dilation (see the sketch after this list)
  • Dilation was greater for hard questions; peak dilation was 50% larger for hard questions, statistically significant even though the raw numbers seem very small
  • Breakoffs tended to occur on questions with more dilation, though the pattern was not always consistent
  • Self-reported burden did correlate with dilation
  • Could see people fixate on a question many times and go back and forth between question and answer
  • Question stems caused more fixation for hard questions 
  • Eye tracking removes the bias of self-report and is more robust
  • Can we use this to identify people who are experiencing too much burden? [Imagine using this during an interview – you could find out which candidates were having difficulty answering questions]
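
As a concrete illustration of the dilation measures described above, here is a minimal sketch in Python. It assumes per-fixation pupil diameters are available in a table; the column names, the sample values, and the choice of each participant's mean diameter as the baseline are illustrative assumptions, not the authors' method.

```python
import pandas as pd

# Hypothetical per-fixation records: participant, question, pupil diameter in mm.
fixations = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "question":    ["easy", "easy", "hard", "hard"] * 2,
    "pupil_mm":    [3.1, 3.2, 3.6, 3.9, 4.0, 4.1, 4.6, 4.8],
})

# Baseline diameter differs by person, so use a per-participant baseline.
# Here it is simply the participant's mean diameter; a pre-task rest period
# would be a more principled baseline.
baseline = fixations.groupby("participant")["pupil_mm"].transform("mean")

# Dilation expressed as a percentage over the participant's baseline.
fixations["dilation_pct"] = 100 * (fixations["pupil_mm"] - baseline) / baseline

# Average and peak dilation per participant and question.
summary = (fixations
           .groupby(["participant", "question"])["dilation_pct"]
           .agg(avg_dilation="mean", peak_dilation="max"))
print(summary)
```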



The Effects of Pictorial vs. Verbal Examples on Survey Responses; Hanyu Sun, Westat; Jonas Bertling, Educational Testing Service; Debby Almonte, Educational Testing Service

  • Survey about food; asked people how much of each item they eat
  • Showed either pictorial or verbal examples
  • Measured mean fixation (see the sketch after this list)
  • Mean fixation was higher for pictorial examples in all cases – more time spent on the pictures than on the task; they think it's harder when people see the pictures [I'd suggest the picture created a limiting view of the product rather than a general view of 'butter', which makes interpretation more difficult]
  • No differences in the answers
  • Fixation times suggest a learning curve for the questions – they got easier over time
  • Pictorial examples require more effort to respond to
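
A rough sketch of this kind of comparison, assuming "mean fixation" refers to mean fixation duration per trial; the data frame, the values, and the plain two-sample t-test are illustrative assumptions rather than the study's actual analysis.

```python
import pandas as pd
from scipy import stats

# Hypothetical per-trial data: example format and mean fixation duration (ms).
trials = pd.DataFrame({
    "condition": ["pictorial"] * 4 + ["verbal"] * 4,
    "mean_fixation_ms": [310, 295, 330, 305, 250, 260, 240, 255],
})

# Mean fixation duration by example format.
print(trials.groupby("condition")["mean_fixation_ms"].mean())

# A simple two-sample t-test; real data would call for a model that accounts
# for repeated measures across respondents and items.
pictorial = trials.loc[trials["condition"] == "pictorial", "mean_fixation_ms"]
verbal = trials.loc[trials["condition"] == "verbal", "mean_fixation_ms"]
t, p = stats.ttest_ind(pictorial, verbal)
print(f"t = {t:.2f}, p = {p:.4f}")
```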


Respondent Processing of Rating Scales and the Scale Direction Effect; Andrew Caporaso, Westat

  • Some people suggest never using a vertical scale
  • Fixation – a pause of the gaze on one spot
  • Saccade – the rapid movement between fixations (a classification sketch follows this list)
  • Respondents don’t always want to tell you they’re having trouble
  • 34 questions, random assignment of scale direction
  • Scale directions didn’t matter much at all
  • There may be a small primacy effect with a longer scale; respondents with lower education may be more susceptible
  • Fixations decreased over time
  • The top of the scale gets the most attention, the bottom the least [so people are figuring out what the scale is; you don't need to read all five options once you know the first one, particularly for an agreement scale, where you can guess the rest of the options from the first answer]
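
As a sketch of how fixations and saccades can be separated in raw gaze data, here is a simple velocity-threshold (I-VT) classifier in Python; the function name, the 30 deg/s threshold, and the toy gaze trace are assumptions for illustration, not drawn from the talk.

```python
import numpy as np

def classify_gaze(x, y, t, velocity_threshold=30.0):
    """Label each interval between gaze samples as 'fixation' or 'saccade'
    using a velocity-threshold (I-VT) rule: slow movement counts as part of
    a fixation, fast movement as a saccade. Coordinates are assumed to be in
    degrees of visual angle and the threshold in degrees per second."""
    velocity = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)
    return np.where(velocity < velocity_threshold, "fixation", "saccade")

# Hypothetical gaze trace sampled at 100 Hz: a pause, a rapid jump, another pause.
t = np.arange(10) * 0.01
x = np.array([0.1, 0.1, 0.2, 0.1, 5.0, 5.1, 5.0, 5.1, 5.0, 5.1])
y = np.zeros_like(x)
print(classify_gaze(x, y, t))
```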
