After two days at CASRO, I learned the following:
- When you use a 5 point or 7 point scale, you will get different answers
- When you label or don’t label scales, you will get different answers
- When you use a web survey vs a mobile survey, you will get different answers
- When you gamify a survey, you will get different answers
- (And from the good ol’ days) when you run the same survey on two different panels, you will get different answers
What are we to take from all of this? Well, no matter what you do or how you do it, you will get different results on surveys every time. There’s just no way around it. What we HOPE is that the results won’t be contradictory, but rather simply different in magnitude. That rank orders will remain generally similar, that hates will remain hates, and loves will remain loves. Indeed, if we are lucky enough to run a single study across a number of different methods or styles and get similar rank orders every time, it’s a good indication that the conclusions we’ve drawn are both reliable and valid. Heaven.
What this problem also suggests is that there is and can be no right answer. The only right answer is the one in the respondent’s head, and given that people can’t even adequately describe what is going on in their heads, it seems we will never know the right answer. What we can do is develop clear and specific research hypotheses, and match them up with clear and specific research designs. That is the best way to create reliable and valid answers.
We may not know the exact right answer, but we can know a good answer.
- Validity of Gamification: Sweeney, Goldstein, and Becker #CASRO #MRX (lovestats.wordpress.com)
- Cyborgs vs Monsters in modularizing surveys: Edward Paul Johnson and Lynn Siluk #CASRO #MRX (lovestats.wordpress.com)
- Shorter isn’t always better: Inna Burdein #CASRO #MRX (lovestats.wordpress.com)