
Goodbye Humans: Robots, Drones, and Wearables as Data Collectors #AAPOR 

Live note taking at #AAPOR in Austin Texas. Any errors or bad jokes are my own.

Moderator: James Newswanger, IBM 

Using Drones for Household Enumeration and Estimation; Safaa R. Amer, RTI International; Mark Bruhn, RTI International; Karol Krotki, RTI International

  • People have mixed feelings about drones, privacy
  • When census data is available it’s already out of date
  • Need special approval to fly drones around
  • Galapagos province census: new methodology used tablets to collect info to reduce cost and increase timeliness
  • Usually give people maps and they walk around filling out forms
  • LandScan uses satellite imagery plus other data
  • Prepared standard and aerial maps for small grid cells, downloaded onto tablet
  • Trained enumerators to collect data on the ground
  • Maps show the roof of each building so enumerators know where to go and what to expect; online maps might be old, showing buildings no longer there or missing new buildings
  • Can look at restricted access, e.g., on a hill, vegetation 
  • Can put comments on the map to identify buildings no longer existing
  • What to do when a building lies on a grid line, what if the entrance was in a different grid than most of the house
  • Side image tells you how high the building is, get much better resolution with drone
  • Users had no experience with drones or GIS
  • Had to figure out how to standardize data extraction
  • Need local knowledge of common dwelling composition to identify the type of structure; local hotels looked like houses
  • Drones gave better information about restricted access issues, like fence, road blocks 
  • Drones had many issues but required less time; drone data can be reused, unlike geolisting
  • Can extend to conflict and fragile locations like slums, war zones, environmentally sensitive areas
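The grid-line problem noted above (a building straddling a cell boundary, or an entrance in a different cell than most of the house) needs a single deterministic rule. A minimal sketch of one possible rule, assigning each dwelling to the cell that contains its entrance coordinates; the cell size and coordinates are invented for illustration, not the project's actual parameters:

```python
# Hypothetical rule: a dwelling belongs to the grid cell containing its
# entrance, even if most of the footprint falls in a neighboring cell.
CELL = 0.001  # grid cell size in decimal degrees (illustrative)

def cell_for(lat: float, lon: float) -> tuple:
    """Floor-divide coordinates into an integer (row, col) grid index."""
    return (int(lat // CELL), int(lon // CELL))

# A house straddling a grid line is assigned by its entrance point:
entrance_cell = cell_for(-0.74321, -90.31377)  # made-up Galapagos-ish point
```

Floor division keeps the rule consistent on both sides of zero, so two enumerators standardizing data extraction would always land on the same cell for the same entrance.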

Robots as Survey Administrators: Adapting Survey Administration Based on Paradata; Ning Gong, Temple University; Nina DePena Hoe, Temple University; Carole Tucker, Temple University; Li Bai, Temple University; Heidi E. Grunwald, Temple University

  • Enhance patient-reported outcomes for surveys of children under 7 or adults with cognitive disabilities 
  • Could a robot read and explain the questions? It is cool and cute, and could reduce stress
  • Ambient light, noise level, movement of person are all paradata
  • Robot is 20 inches high, looks like a toy or friend, it's very cute; it can dance, play games, walk, stand up, do facial recognition and speech recognition, sees faces and tries to follow you
  • Can read survey questions, collect responses, collect paradata, use item response theory, play games with participants 
  • Can identify movements of when person is nervous and play music or games to calm them down 
  • Engineers, social researchers, and public health researchers worked together on this; HIPAA compliance
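The adaptive behavior described above can be pictured as a simple rule table over the paradata streams. A minimal sketch; the sensor names, thresholds, and actions are all invented for illustration, not the Temple team's actual logic:

```python
def choose_action(ambient_light: float, noise_db: float, movement: float) -> str:
    """Pick the robot's next action from paradata readings (illustrative thresholds)."""
    if movement > 0.7:        # heavy fidgeting: participant may be nervous
        return "play_calming_game"
    if noise_db > 70:         # noisy room: repeat and explain the question
        return "repeat_question"
    if ambient_light < 0.2:   # too dark for facial recognition to work
        return "ask_to_adjust_lighting"
    return "read_next_question"
```

The point is that the same paradata collected anyway (light, noise, movement) doubles as the control signal for when to pause the survey and play a game or music.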


Wearables: Passive Media Measurement Tool of the Future; Adam Gluck, Nielsen; Leah Christian, Nielsen; Jenna Levy, Nielsen; Victoria J. Hoverman, Nielsen; Arianne Buckley, Nielsen; Ekua Kendall, Nielsen; Erin Wittkowski, Nielsen

  • Collect data about the wearer or the environment
  • People need to want to wear the devices
  • High awareness of wearables: 75% aware, 15% ownership. Computers were at 15% ownership in 1991
  • Some people use them to track all the chemicals that kids come near everyday
  • Portable People Meter – clips to clothing, detects audio codes embedded in TV and radio broadcasts; every single person in the household must participate, 80% daily cooperation rate
  • Did research on panelists, what do they like and dislike, what designs would you prefer, what did younger kids think about it
  • Barriers to wearing: clothing difficulties, some situations don't lend themselves to it, and it's a conspicuous, dated design
  • Dresses and skirts are most difficult because there are no pockets or belts; not wearing a belt is a problem
  • Can’t wear while swimming, some exercising, while getting ready in the morning, preparing for bed, changing clothes, taking a shower
  • School is a major impediment, drawing attention to it is an impediment, teachers won't want it, it looks like a pager, too many people comment on it and it's annoying 
  • It’s too functional and not fashionable, needs to look like existing technology
  • Tried many different designs; the LCD wrist style was most preferred, by half of people, others liked the watch, long clip, Jawbone, or small clip style
  • Colour is important, right now they’re all black and gray [I’M OUT. ]
  • Screen is handy, helps you know which meter is whose
  • Why don’t you just make it a fitness tracker since it looks like I’m wearing one
  • Showing the equipment should be the encouragement they need to participate
  • [My SO NEVER wore a watch. But now never goes without the wrist fitbit]

QR Codes for Survey Access: Is It Worth It?; Laura Allen, The Gallup Organization; Jenny Marlar, The Gallup Organization

  • [curious where the QR codes she showed lead to 🙂 ]
  • Static codes never change; dynamic codes work off a redirect and can change
  • Some people think using a QR code makes them cool
  • Does require that you have a reader on your phone
  • You’d need one QR code per person, costs a lot more to do 1000 codes
  • Black and white paper letter with a one-dollar incentive; some people also got a weblink with their QR code
  • No response rate differences
  • Very few QR code completes, 4.2% of completes, no demographic differences
  • No gender, race differences; QR code users had higher education and were younger
  • [wonder what would happen if the URL was horrid and long, or short and easy to type]
  • Showing only a QR code decreased the number of completes
  • [I have a feeling QR codes are old news now, they were a fun toy when they first came out]
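The static/dynamic distinction and the "one QR code per person" requirement fit together naturally: each respondent's printed code encodes a short redirect URL, and the destination behind it can be changed later. A minimal sketch with a plain dict standing in for the redirect service; the domains, paths, and `rid` parameter are made up for illustration:

```python
import uuid

redirects = {}  # short code -> current destination URL (the "dynamic" part)

def mint_code(respondent_id: str) -> str:
    """Create a unique short redirect URL for one respondent's QR code."""
    # uuid5 is deterministic, so re-minting for the same respondent reuses the code
    code = uuid.uuid5(uuid.NAMESPACE_URL, respondent_id).hex[:8]
    redirects[code] = f"https://survey.example.com/start?rid={respondent_id}"
    return f"https://go.example.com/{code}"

def retarget(code: str, new_url: str) -> None:
    """Point an already-printed QR code somewhere new (impossible with static codes)."""
    redirects[code] = new_url

url = mint_code("R-1001")
```

This also shows why per-person dynamic codes cost more than one shared static code: every respondent needs their own entry in a hosted redirect service, and 1,000 codes means 1,000 managed redirects.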


Comparing Youth's Emotional Reactions to Traditional vs. Non-traditional Truth Advertising Using Biometric Measurements and Facial Coding; Jessica M. Rath, Truth Initiative; Morgane A. Bennett, Truth Initiative; Mary Dominguez, Truth Initiative; Elizabeth C. Hair, Truth Initiative; Donna Vallone, Truth Initiative; Naomi Nuta, Nielsen Consumer Neuroscience; Michelle Lee, Nielsen Consumer Neuroscience; Patti Wakeling, Nielsen Consumer Neuroscience; Mark Loughney, Turner Broadcasting; Dana Shaddows, Turner Broadcasting

  • Truth campaign is a mass media smoking prevention campaign launched in 2000 for teens
  • Target audience is now 15 to 21, up from age 12 when it first started
  • Left swipe is an idea of rejection or deleting something
  • Ads on “Adult Swim” incorporating the left swipe concept into “Fun Arts”
  • Ads where profile pictures with smoking were left swiped
  • It trended higher than #Grammys
  • Eye tracking showed what people paid attention to, how long attention was paid to each ad
  • Added objective tests to subjective measures
  • Knowing this helps with media buying efforts, can see which ad works best in which TV show

New Math For Nonprobability Samples #AAPOR 

Moderator: Hanyu Sun, Westat

Next Steps Towards a New Math for Nonprobability Sample Surveys; Mansour Fahimi, GfK Custom Research; Frances M. Barlas, GfK Custom Research; Randall K. Thomas, GfK Custom Research; Nicole R. Buttermore, GfK Custom Research

  • Neyman paradigm requires complete sampling frames and complete response rates
  • Non-prob is important because those assumptions are not met, sampling frames are incomplete, response rates are low, budget and time crunches
  • We could ignore that we are dealing with nonprobability samples, find new math to handle this, try more weighting methods [speaker said commercial research ignores the issue – that is absolutely not true. We are VERY aware of it and work within appropriate guidelines]
  • In practice, sampling frames are incomplete so samples aren't random, respondents choose not to respond, weighting has to be more creative, and uncertainty in inferences is increasing
  • There is fuzz all over, relationship is nonlinear and complicated 
  • Geodemographic weighting is inadequate; weighted estimates to benchmarks show huge significant differences [this assumes the benchmarks were actually valid truth but we know there is error around those numbers too]
  • Calibration 1.0 – correct for higher agreement propensity with early adopters – try new products first, like variety of new brands, shop for new, first among my friends, tell others about new brands; this is in addition to geography
  • But this is only a univariate adjustment, one theme; sometimes it's insufficient
  • Sought a multivariate adjustment
  • Calibration 2.0 – social engagement, self importance, shopping habits, happiness, security, politics, community, altruism, survey participation, Internet and social media
  • But these dozens of questions would burden the task for respondents, and weighting becomes an issue
  • What is the right subset of questions for the biggest effect
  • Number of surveys per month, hours on Internet for personal use, trying new products before others, time spent watching TV, using coupons, number of relocations in past 5 years
  • Tested against external benchmarks, election, BRFSS questions, NSDUH, CPS/ACS questions
  • Nonprobability samples based on geodemographics are the worst of the set; adding calibration improves them, nonprobability plus calibration is even better, probability panel was the best [pseudo probability]
  • Calibration 3.0 is hours on Internet, time watching TV, trying new products, frequency expressing opinions online
  • Remember Total Research Error, there is more error than just sampling error
  • Combining nonprobability and probability samples, use stratification methods so you have resemblance of target population, gives you better sample size for weighting adjustments
  • Because there are so many errors everywhere, even nonprobability samples can be improved
  • Evading calibration is wishful thinking and misleading 
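At its core, calibration means reweighting the sample until it matches external benchmarks on the chosen variables. A minimal raking (iterative proportional fitting) sketch; the two variables echo the calibration themes above (TV time, early adoption), but the data and target margins are invented for illustration:

```python
def rake(rows, margins, iters=50):
    """rows: list of dicts of categorical vars; margins: {var: {level: target share}}."""
    w = [1.0] * len(rows)
    for _ in range(iters):
        for var, targets in margins.items():
            total = sum(w)
            for level, share in targets.items():
                cur = sum(wi for wi, r in zip(w, rows) if r[var] == level)
                if cur > 0:
                    factor = share * total / cur  # scale this level toward its target
                    w = [wi * factor if r[var] == level else wi
                         for wi, r in zip(w, rows)]
    return w

rows = [{"heavy_tv": "yes", "early_adopter": "yes"},
        {"heavy_tv": "yes", "early_adopter": "no"},
        {"heavy_tv": "no",  "early_adopter": "no"},
        {"heavy_tv": "no",  "early_adopter": "no"}]
margins = {"heavy_tv": {"yes": 0.4, "no": 0.6},
           "early_adopter": {"yes": 0.2, "no": 0.8}}
w = rake(rows, margins)
```

This also illustrates the burden point: every calibration variable must be asked of every respondent and must have a trusted benchmark, which is why finding a small, effective subset of questions matters.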

Quota Controls in Survey Research: A Test of Accuracy and Inter-source Reliability in Online Samples; Steven H. Gittelman, MKTG, INC.; Randall K. Thomas, GfK Custom Research; Paul J. Lavrakas, Independent Consultant; Victor Lange, Consultant

  • A moment of silence for a probabilistic frame 🙂
  • FoQ 2 – do quota controls help with effectiveness of sample selections, what about propensity weight, matching models
  • 17 panels gave 3000 interviews via three sampling methods each; panels remain anonymous, 2012-2013; plus telephone sample including cell phone; English only; telephone was 23 minutes 
  • A – nested region, sex, age
  • B – added non-nested ethnicity quotas
  • C – added non-nested education quotas
  • D – company's proprietary method
  • 27 benchmark variables across six government and academic studies; 3 questions were deleted because of social desirability bias
  • Doing more than A did not result in reduction of bias, nested age and sex within region was sufficient; race had no effect and neither did C and those made the method more difficult; but this is overall only and not looking at subsamples
  • None of the proprietary methods provided any improvement to accuracy, on average they weren’t powerful and they were a ton of work with tons of sample
  • A, B, and C were essentially identical; one proprietary method did worse; phone was not all that much better
  • Even phone – 33% of differences were statistically significant [makes me think that benchmarks aren’t really gold standard but simply another sample with its own error bars]
  • The proprietary methods weren’t necessarily better than phone
  • [shout out to Reg Baker 🙂 ]
  • Some benchmarks performed better than others, some questions were more of a problem than others. If you’re studying Top 16 you’re in trouble
  • Demo only was better than the advanced models, advanced models were much worse or no better than demo only models
  • An advanced model could be better or worse on any benchmark but you can’t predict which benchmark
  • Advanced models show promise but we don’t know which is best for which topic
  • Need to be careful to not create circular predictions, covariates overly correlated, if you balance a study on bananas you’re going to get bananas
  • Icarus syndrome – covariates too highly correlated
  • It's okay to test privately, but clients need to know what the modeling questions are; you don't want to end up with weighting models using the study variables
  • [why do we think that gold standard benchmarks have zero errors?]
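Method A's nesting (sex and age quotas within region, rather than as independent marginal quotas) can be sketched as a cell-count check at screening time. The regions, levels, and per-cell targets below are invented for illustration:

```python
from itertools import product

regions = ["Northeast", "South"]
sexes = ["F", "M"]
ages = ["18-34", "35+"]

# Nesting means the quota target lives on the full region x sex x age cell,
# not on each variable separately.
targets = {cell: 5 for cell in product(regions, sexes, ages)}
counts = {cell: 0 for cell in targets}

def accept(region: str, sex: str, age: str) -> bool:
    """Admit a respondent only if their nested quota cell is still open."""
    cell = (region, sex, age)
    if counts[cell] < targets[cell]:
        counts[cell] += 1
        return True
    return False  # cell full: screen out
```

A non-nested design (like the ethnicity and education quotas in B and C) would instead track one counter per level of each variable, which is easier to fill but controls the joint distribution less tightly.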

Capitalizing on Passive Data in Online Surveys; Tobias B. Konitzer, Stanford University; David Rothschild, Microsoft Research 

  • Most of our data is nonprobability to some extent
  • Can use any variable for modeling, demos, survey frequency, time to complete surveys
  • Define target population from these variables, marginal percent is insufficient, this constrains variables to only those where you know that information 
  • Pollfish is embedded in phones, mobile based, has extra data beyond online samples, maybe it's a different mode; it's cheaper and faster than face to face and telephone, more flexible than face to face though perhaps less so than online; efficient incentives
  • 14 questions, education, race, age, location, news consumption, news knowledge, income, party ID, also passive data for research purposes – geolocation, apps, device info
  • Geo is more specific than an IP address, frequency at that location; can get FIPS information from it, which leads to race data; with latitude and longitude you can reduce the number of questions on the survey
  • Need to assign demographics based on FIPS data in an appropriate way; modal response wouldn't work, need to use probabilities, e.g., if 60% of a FIPS area is white, then give the person a 60% chance of being white
  • Use app data to improve group assignments
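The probabilistic assignment described above can be sketched directly: instead of coding everyone in a FIPS area as the modal race, draw from the area's distribution. The FIPS code and race shares below are invented for illustration:

```python
import random

# Made-up shares for a made-up FIPS county code
fips_race_shares = {
    "48453": {"white": 0.6, "hispanic": 0.25, "black": 0.08, "other": 0.07},
}

def impute_race(fips: str, rng: random.Random) -> str:
    """Draw one race category in proportion to the FIPS area's shares."""
    shares = fips_race_shares[fips]
    return rng.choices(list(shares), weights=list(shares.values()), k=1)[0]

rng = random.Random(0)  # seeded for reproducibility
draws = [impute_race("48453", rng) for _ in range(10000)]
```

Drawing rather than taking the mode preserves the area's composition in aggregate: over many respondents, about 60% get coded white here, whereas modal assignment would code 100% of them white and bias any downstream weighting.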