The impact of questionnaire design on measurements in surveys #2 #ESRA15 #MRX  


Live blogged at #ESRA15 in Reykjavik. Any errors or bad jokes are my own.

Breaktime treated us to fruit and croissants this morning. I was hoping for another uniquely Icelandic treat, but perhaps that was a sign to stop eating. No, just kidding! Apparently you’re not allowed to bring food or drink into the classrooms. The signs say so. The signs also say no Facebook in the classrooms. Shhhh…. I was on Facebook in the classroom!

The sun is out again and I took a quick walk outside. I am thankful my hotel is at the foot of the famous church. No matter where I am in this city, I can always, easily, and instantly find my hotel. No map needed when the church is several times higher than the next highest building!

I’ve noticed that the questions at this conference are far more nit-picky and critical than I’m used to. I suspect that is because the audience includes many academics whose entire job is focused on these topics. They know every minute detail because they’ve done similar studies themselves. It makes for great comments and questions, though it does seem to put the speaker on the spot every time!

smart respondents: let’s keep it short.

  • do we really need scale instructions in the question stem? they add length, mobile screens have limited space, and respondents skip the instructions if the response scale is already labeled [isn’t this just an artifact of old-fashioned face-to-face and telephone surveys?]
  • they tested instructions that matched and did not match what was actually in the scale [i can imagine some panelists emailing the company to complain that the survey had errors!]
  • used a probability survey [this is one case where a nonprobability sample would have served well: easier and cheaper to obtain, with no need to generalize precisely to a population]
  • answer frequencies looked very similar for correct and incorrect instructions, with no significant differences; she’s happy to have nonsignificant results, which were unaffected by mobile device or age [a toy sketch of this kind of comparison appears after this list]
  • [more regression results shown, once again, speaker did not apologize and the audience did not have a heart attack]
  • it seems like respondents ignore instructions in the question; they rely on the words in the answer options, e.g., grid headers
  • you can omit instructions if the labeling is provided in the answer options
  • works better for experienced survey takers [hm, i doubt that. anyone seeing the answer options will understand. at least, that’s my opinion.]
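
The speaker didn’t describe the analysis in detail, so here is a toy sketch of one standard way to compare answer frequencies across the two instruction conditions: a chi-square test of independence on the condition-by-answer contingency table. The counts below are invented; this is my own illustration, not the presenter’s code.

```python
# Toy sketch (not the presenter's analysis): compare answer frequencies
# between matched- and mismatched-instruction conditions.
# All counts below are invented for illustration.
from scipy.stats import chi2_contingency

# rows: instruction condition, columns: counts per scale point
observed = [
    [52, 110, 203, 98, 37],   # instructions match the labeled scale
    [49, 115, 198, 102, 36],  # instructions contradict the scale
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, p={p:.3f}")  # a large p means no detectable difference
```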

from web to paper: evaluation by data providers and data analysts. the case of the annual survey of finances of enterprises

  • we send out questionnaires, something happens, we get data back – we don’t know what happens🙂
  • wanted to keep question codes in the survey, which seemed unnecessary to respondents; had really long instructions for some questions that didn’t fit on the page, so they put them in a pdf
  • 64% of people evaluated the codes on the online questionnaire positively, 12% rated the codes negatively. people liked that they could communicate with statistics netherlands by using the codes
  • 74% responded negatively to the explanations of questions, which were intended to reduce calls from statistics netherlands; only 11% were positive
  • only 25% of people consulted the pdf with instructions
  • most people wanted to receive a printed version of the questionnaire they filled out; people really wanted to print it, and some took screenshots. people liked being able to return later, and they could easily get an english version
  • data editors liked that they didn’t have to do data entry, but now they needed more time to read and understand what was being said
  • they liked having the email address because they got more direct and precise answers, responses came back faster, they didn’t notice any changes in the time series data

is variation in perception of inequality and redistribution of earnings actual or artifactual? effects of wording, order, and number of items

  • opinions differ when you ask how much people should make vs how much the top quintile of people should make
  • they asked people how much a number of occupations should earn, they also varied how specific the title was e.g., teacher vs math teacher in a public high school
  • estimates for specific descriptions were higher, high status jobs got much higher estimates
  • adding more occupations to the list decreases the reliability of the earnings estimates [a quick sketch of one way to compute such a reliability follows this list]
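
The speaker didn’t say which reliability measure was used; assuming something like Cronbach’s alpha over the set of occupation items (a common choice), here is a minimal sketch with invented data, not the authors’ actual computation.

```python
# Hedged sketch: Cronbach's alpha as one plausible reliability measure
# for a battery of earnings items. Data below is invented:
# rows = respondents, columns = occupations.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of earnings estimates."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of sum scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
estimates = rng.normal(10, 1, size=(200, 5))      # fake log-earnings data
print(cronbach_alpha(estimates))
```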

exploring a new way to avoid errors in attitude measurements due to complexity of scientific terms: an example with the term biodiversity

  • how do people talk about complicated terms? their own words often differ from scientific definitions
  • “what comes to mind when you think of biodiversity?” – used text analysis for word frequencies, co-occurrences, and correspondence analysis, and used the results to design items for the second study [a toy sketch of the frequency and co-occurrence steps appears after this list]
  • found five classes of items: standard common definition, associated with human actions to protect it, human-environment relationship, global actions and consequences, scientific definition
  • turned each of the five types of definitions into a common-word definition
  • people gave more positive opinions about biodiversity when they were asked immediately after the definition
  • items based on representations of biodiversity were valid and reliable
  • [quite like this methodology, could be really useful in politics]
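
The talk described the text-analysis steps only at a high level, so here is a toy sketch of the first two steps (word frequencies and within-response co-occurrences) using scikit-learn. The example responses are invented, and the correspondence-analysis step is omitted; this is not the authors’ pipeline.

```python
# Toy sketch (not the authors' pipeline): word frequencies and
# within-response co-occurrences from open-ended answers.
# The three responses below are invented; a real study would have hundreds.
from sklearn.feature_extraction.text import CountVectorizer

responses = [
    "variety of animal and plant species",
    "protecting endangered species and their habitats",
    "the relationship between humans and nature",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(responses)            # responses x terms matrix
terms = vec.get_feature_names_out()

# overall word frequencies, most frequent first
freqs = X.sum(axis=0).A1
for term, count in sorted(zip(terms, freqs), key=lambda tc: -tc[1]):
    print(term, count)

# co-occurrence: in how many responses do two terms appear together?
Xb = (X > 0).astype(int)                    # binarize counts per response
cooc = (Xb.T @ Xb).toarray()                # terms x terms counts
```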

[if any of these papers interest you, i recommend finding the author on the ESRA program and asking for an official summary. Global speakers and weak microphones make note-taking more challenging.🙂 ]


