Live blogged at Esomar in Dublin. Any errors or bad jokes are my own.
Modular surveys for agile research by Grant Miller and John Crockett
- Social values survey: 150 questions, fielded for 30+ years
- Can we modularize this survey? How will that impact the results?
- Used RIWI as the data provider, a mostly random URL bar sampler
- Used chunking: broke the survey into multiple sections, reshaping it into sensible topic-based modules – 35 in total
- fielded individual modules – 1, 2, 3, or 4 randomly selected modules – and let people answer as many as they wanted
- problems with chunking – missing data in different areas, data from ten people might work out to five completes
- people answered far more modules than expected, but a lot of data came from few respondents – 70% of data came from 30% of respondents; didn’t see major demographic differences
- opportunity to ask more questions because it didn’t seem to create demographic bias
- There were differences versus panel data, skewed to younger population
- [brand name drop :/]
- RDIT (RIWI’s random domain intercept sampling) has a positivity bias, more so than online panelists; this source uses scales differently, and respondents were less likely to pick the most negative response
- we have to stop being uncomfortable working with partial data [in other words, stop forcing a response to every survey question!]
- be open to blended data, partial data, be open to think differently
- lesson learned – even though people like really short surveys and researchers like really long ones, you can land in between
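A minimal sketch of the chunking idea as I heard it described – split the questionnaire into modules, then field a random handful per respondent. The function names, the equal-ish module sizes, and the assignment rule are my assumptions for illustration; the actual study grouped questions into topical modules by hand:

```python
import random

def make_modules(questions, n_modules):
    """Split a question list into n_modules roughly equal chunks."""
    base, extra = divmod(len(questions), n_modules)
    modules, i = [], 0
    for m in range(n_modules):
        size = base + (1 if m < extra else 0)  # spread the remainder
        modules.append(questions[i:i + size])
        i += size
    return modules

def assign_modules(modules, max_modules=4, seed=None):
    """Randomly pick 1 to max_modules modules for one respondent."""
    rng = random.Random(seed)
    k = rng.randint(1, max_modules)
    return rng.sample(modules, k)

questions = [f"Q{i}" for i in range(1, 151)]   # the 150-question survey
modules = make_modules(questions, 35)          # the 35 modules
picked = assign_modules(modules, seed=42)      # one respondent's assignment
```

Respondents could then keep requesting further modules beyond their initial draw, which is how a few heavy responders ended up supplying most of the data.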
When should we ask, and when should we measure? by Melanie Revilla and German Loewe
- How many times did you connect to your email last week? Do you have access to this information? Can surveys collect this data?
- Surveys have been used for subjective and objective data over the years, will we do this in the future?
- What is the determinant of quality in survey data? memory affects it, but our memory is completely overwhelmed
- we have so many distractions now; events move much quicker, there are so many products to think about, and why bother remembering anything when our phones remember for us?
- used metering devices associated with a panel, compared stated versus actual passive device usage, is one more accurate and when?
- asked people about the last five websites they visited, what was the match rate – 1% recalled 5 out of 5, 6% remembered 4, 9% remembered 3, 29% remembered none
- ask people about ‘most often’ websites, spontaneous recall, 7 days or 2 months, people were far better with 2 months recall
- with prompted recall, trend wasn’t as expected but they don’t know why yet
- there is always more over-reporting than under-reporting – acquiescence bias
- people don’t remember their online activities
- recall is even worse on a smartphone, so much marketing taking place [hello completely distracted! phone games, text messages, video watching, snapchatting]
- think of the parable of the six blind men who each touch a different part of an elephant: each describes it differently, but together they describe an elephant
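A toy illustration of the stated-versus-metered comparison behind the match-rate figures above. The helper and the example data are mine, not the authors’; the real study matched self-reported sites against passive metering logs:

```python
def match_count(stated, metered):
    """How many of the stated sites also appear in the metered (passive) log."""
    return len(set(stated) & set(metered))

# hypothetical respondent: what they recalled vs what the meter recorded
stated = ["gmail.com", "bbc.com", "amazon.com", "twitter.com", "esomar.org"]
metered = ["gmail.com", "youtube.com", "amazon.com", "reddit.com", "bbc.com"]
match_count(stated, metered)  # 3 of 5 recalled correctly
```

Run per respondent, this is the kind of tally that produces distributions like “1% recalled 5 of 5, 29% recalled none.”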