Tag Archives: questionnaire design

Shhhh…. A Post in Which We Reveal the Midi-chlorians of Questionnaire Design

This post originally appeared on the Sklar Wilton & Associates blog.

There are no midi-chlorians when it comes to questionnaire design.

Sigh. I’m sad to start the post like that but it’s true. There are no Jedi mind tricks that will make people generate better questionnaire data. There are no sacred texts on the market research version of Ahch-To containing that one single piece of advice that will allow someone who’s never written a questionnaire before to create an effective questionnaire that generates actionable outcomes. The only Force at our disposal is careful training as a Padawan and years of experience. Fortunately, as a questionnaire Jedi Knight myself, having years of experience does mean that I can share a few tidbits I’ve learned along the way, tidbits not necessarily found in an academic textbook. So here goes.

Questionnaires aren’t about grammatically perfect writing: After perhaps two decades of primary, secondary, undergraduate, and graduate school, many of us have learned an abundance of grammar and writing skills that we’ve been told are essential for clear communication. Don’t end sentences in a preposition. Don’t use sentence fragments. Don’t start sentences with ‘and.’ However, as questionnaire writers, we have a very specific goal: To write questions and answers that are understandable to as many people as possible. And sometimes, that means joining the Dark Side and ignoring the rules we’ve struggled to follow for years. With that in mind, when there isn’t a good alternative, it is indeed okay to write questions that end in prepositions!

  • Which country do you live in? [Or better, ask “Where do you live?”]
  • Which of these have you heard of?
  • Which of these have you seen before?

Questionnaires aren’t about professional and formal writing: Of course we want research participants to recognize that the questionnaire they’re completing is important and should be taken seriously. However, formal language can be a deterrent to questionnaire completion, particularly for people whose reading skills don’t match the writing skills of the researcher. Besides, participating in a research questionnaire ought to feel like entertainment, not like a 30-minute life skills exam. Banish that language to a life locked in carbonite and instead, choose a casual language style that people will feel comfortable with. (Oh, see what I did with that preposition!) You need to avoid slang, idioms, and inside jokes that are meaningless without context, but you can certainly inject a bit of casual but relevant humour along the way.

  • Are you ready to chat about carpet cleaners and vacuums? It might be a boring topic but we all need a clean home!

Questionnaires aren’t about comprehensive questions: Sometimes, in our attempts to be clear and focused, we end up writing questions that are long and complicated, subsequently making it difficult for people to deconstruct and comprehend the intention behind asking the question in the first place and causing the resulting data to be riddled with quality issues. The alternative is to break sentences apart. Short sentences make comprehension accessible to everyone. People who are reading in a second language can understand short sentences. People who have different reading skills can understand short sentences. Be part of the resistance when it comes to long questions and long answers. If our goal is comprehension, short sentences are always preferred.

  • In the last month, how many large bottles of detergent did you buy? (A large bottle is 1 litre or 1 kilogram or more. Please include liquid and powder detergent.)

Questionnaires aren’t about category comprehensiveness: When you start thinking about all the questions that could be answered, it’s easy to stretch a 5-minute questionnaire into a 35-minute questionnaire. Use the Force to avoid this inclination. Short questionnaires retain the interest and attention of participants and therefore generate much better data. Cut every question you know you won’t act on. Cut every question that won’t generate an actionable outcome. Cut all the ‘nice to know’ and ‘I wonder whether’ questions. If the questionnaire still requires more than 15 minutes to complete, then you need to move to step two – figure out whether it can be cut into pieces. That could mean giving twice as many people half as many questions, or spreading the questionnaire out over multiple occasions.
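For the ‘twice as many people, half as many questions’ route, here’s a minimal sketch of how a split-form assignment might look. It’s my own illustration, not a tool from this post; the question IDs and the seed-on-respondent-ID rule are assumptions.

```python
import random

# A hypothetical 30-question instrument, split into two half-length forms.
QUESTIONS = [f"Q{i}" for i in range(1, 31)]

def split_into_forms(questions, n_forms=2):
    """Deal the questions into n_forms roughly equal blocks."""
    return [questions[i::n_forms] for i in range(n_forms)]

def assign_form(respondent_id, forms):
    """Randomly (but reproducibly) pick one form for a respondent."""
    rng = random.Random(respondent_id)  # seed on the ID for reproducibility
    return rng.choice(forms)

forms = split_into_forms(QUESTIONS)
print(assign_form("resp-0001", forms))  # this respondent's 15 questions
```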

Quality questionnaire writing is a rare skill: Whether it’s designing marketing strategies that double the business in one year, accurately translating mission statements into six languages, or writing effective questionnaires, everyone is a Jedi at something. Jedi Knights in the research industry have written entire textbooks on how to create a good questionnaire. They’ve witnessed thousands of fatal errors across many different categories and industries, and know many of the common and obscure mistakes. Even better, Jedi Masters have learned a plethora of techniques to counteract hundreds of cognitive biases that prevent people from answering truthfully. They’ve acquired a unique skill of ensuring questionnaires will meet specific needs and generate the best possible data quality. If your research outcomes are intended to feed into major decisions impacting the health of your business, it is essential that you seek out the advice of Jedi Master questionnaire writers.

And with these tips firmly entrenched, may the survey force be with you!

 

Annie Pettit, PhD, FMRIA, is a consultant for Sklar Wilton & Associates. She helps marketers build research tools that facilitate clear and direct answers to key questions and problems.

Sklar Wilton & Associates has worked for more than 30 years with some of Canada’s most iconic brands to help them solve tough business challenges to unlock growth and build stronger brands. SW&A was recognized as a Great Workplace for Women in 2018, and the Best Workplace in Canada for Small Companies in 2017 by the Great Place To Work® Institute. Recognized as the number one Employee Recommended Workplace among small private employers by the Globe and Mail and Morneau Shepell in 2017, SW&A achieved ERW certification again in 2018.

Like our posts? Sign up for our newsletter and enjoy insights from our associates in your mailbox every 4 to 6 weeks.


When should you design a questionnaire with brand colours, fonts, and formats? #MRX 

You’ve seen the commercials on TV where the host or actor discussing the fantastic properties of the amazing product is wearing clothes and accessories that match the product’s packaging and branding perfectly. Sometimes it makes for creepy over-branding whereas other times it makes the commercial more calm and focused. In either case, the intent is to unconsciously teach you the brand colour so that when you are in the store, the familiar colour will draw you in, consciously or unconsciously.

However, the world of research is different. Using brand colours as part of questionnaire design can significantly affect the outcome of research, and whether that means increased or decreased scores, the impact is negative. Results from surveys should reflect in-market experiences, not unconscious associations with brand colours. If you plan to measure brand recall, awareness, purchase, attitudes, or perceptions within the general population or within category users, particularly if you want to compare with other brands, never brand your questionnaires with brand colours, text styles, or formats. Questionnaire formatting should be neutral in all ways so that unconscious associations won’t be created.

So when is it appropriate for questionnaires to use brand features in the design? When can you use your brand’s colours and fonts and styles to pretty up what can be generic, boring pages?

When you’re contacting existing clients or customers to ask about a specific purchase experience or brand experience. That’s about it. 

In such cases, the bulk of the questionnaire will focus on the specific experience with the specific brand. There may be a couple of generic introductory questions, but 90% of the questionnaire will focus heavily on your brand, your employees, your shelves, your website, your selection, etc. There is no point in creating a sense of blind review or uncontaminated response because the brand must be revealed early and significantly. 

If you’re not sure which way to go, there is a very simple solution. Never brand your questionnaires unless there is no way around it. Better safe than sorry. 

Want more questionnaire tips? Have a peek at #PeopleArentRobots, available on Amazon. https://www.amazon.ca/dp/1539730646/

Is MyDemocracy.ca a Valid Survey?

Like many other Canadians, I received a card in the mail from the Government of Canada promoting a website named MyDemocracy.ca. Just a day before, I’d also come across a link for it on Twitter so with two hints at hand, I decided to read the documentation and find out what it was all about. Along the way, I noticed a lot of controversy about the survey so I thought I’d share a few of my own comments here. I have no vested interest in either party. I am simply a fan of surveys and have some experience in that regard.

First, let’s recognize that one of the main reasons researchers conduct surveys is to generate results which can be generalized to a specific population, for example the population of Canada. Having heard of numerous important elections around the world recently, we’ve become attuned to polling research which attempts to predict election winners. The polling industry has taken a lot of heat regarding perceived low accuracy lately and people are paying close attention.

Sometimes, however, the purpose of a survey is not to generalize to a population, but rather to gather information so as to be more informed about a population. Thus, you may not intend to learn whether 10% of people believe A and 30% believe B, but rather that there is a significant proportion of people who believe A or B or C or D. These types of surveys don’t necessarily focus on probability or random sampling, but rather on gathering a broad spectrum of opinions and understanding how they relate to each other. In other cases, the purpose of a survey is to generate discussion and engagement, to allow people to better understand themselves and other people, and to think about important issues using a fair and balanced baseline that everyone can relate to.

The FAQ associated with MyDemocracy.ca explains the purpose of the survey in just this manner – to foster engagement. It explains that the experimental portion of the survey used a census-balanced sample of Canadians, and that the current intention of the survey is to help Canadians understand where they sit in relation to their fellow citizens. I didn’t see any intention for the online results to be used in a predictive way.

I saw some complaints that the questions are biased or unfair. Having completed the survey two and a half times myself, I do see that the questions are pointed and controversial. Some of the choices are extremely difficult to make. To me, however, the questions seem no different than what a constituent might actually be asked to consider, and there are no easy answers in politics. Every decision comes with side-effects, some bad, some horrid. So while I didn’t like the content of some of the questions and I didn’t like the bad outcomes associated with them, I could understand the complexity and the reasoning behind them. In fact, I even noticed a number of question design practices that could be used in analysis for data quality purposes. In my personal opinion, the questions are reasonable.

I’m positive you noticed that I answered the survey more than twice. Most surveys do not allow this but if the survey was launched purely for engagement and discussion rather than prediction purposes, then response duplication is not an issue. From what I see, the survey (assuming it was developed with psychometric precision as the FAQ and methodology describe) is a tool similar to any psychological tool whether personality test, intelligence test, reading test, or otherwise. You can respond to the questions as often as you wish and see whether your opinions or skills change over time. Given what is stated in the FAQ, duplication has little bearing on the intent of the survey.

One researcher’s opinion.

 

Since you’re here, let me plug my new book on questionnaire design! It makes a great gift for toddlers and grandmas who want to work with better survey data!
People Aren’t Robots: A practical guide to the psychology and technique of questionnaire design
http://itunes.apple.com/us/book/isbn9781370693108
https://www.amazon.ca/dp/1539730646/
https://www.smashwords.com/books/view/676159

People Aren’t Robots – New questionnaire design book by Annie Pettit

I’ve been busy writing again!

People Aren’t Robots: A practical guide to the psychology and technique of questionnaire design is the best 2 bucks you’ll ever spend!

Questionnaire design is easy until you find yourself troubled with horrid data quality. The problem, as with most things, is that there is an art and science to designing a good quality and effective questionnaire and a bit of guidance is necessary. This book will give you that guidance in a short, easy to read, and easy to follow format. But how is it different from all the other questionnaire design books out there?

  • It gives practical advice from someone who has witnessed more than fifteen years of good and poor choices that experienced and inexperienced questionnaire writers make. Yes, even academic, professional researchers make plenty of poor questionnaire design choices.
  • It outlines how to design questions while keeping in mind that people are fallible, subjective, and emotional human beings. Not robots. It’s about time someone did this, don’t you think?

This book was written for marketers, brand managers, and advertising executives who may have less experience in the research industry.

It was also written to help academic and social researchers write questionnaires that are better suited for the general population, particularly when using research panels and customer lists.

I hope that once you understand and apply these techniques, you think this is the best $2 you’ve ever spent and that you hear your respondents say “this is the best questionnaire I’ve ever answered!”

Early reviews are coming in!

  • For the researchers and entrepreneurs out there, here’s a book from an expert. Pick it up (& read & implement). 👌
  • Congrats, Annie! An engagingly written and succinct book, with lots of great tips!
  • Congratulations! It’s a joy watching and learning from your many industry efforts.
  • It looks great!!! If I could, I would buy many copies and give to many people I know who need some of your advice.🙂

Questionnaire Design #AAPOR 

Live note taking at #AAPOR in Austin Texas. Any errors or bad jokes are my own.

The effect of respondent commitment and tailored feedback on response quality in an online survey; Kristin Cibelli, U of Michigan

  • People can be unwilling or unable to provide high quality data; will informing them of the importance and asking for commitment help to improve data quality? [I assume this means the survey intent is honourable and the survey itself is well written, not always the case]
  • Used administrative records as the gold standard
  • People were told their answers would help with social issues in the community [would similar statements help in CPG, “to help choose a pleasant design for this cereal box”]
  • 95% of people agreed to the commitment statement, 2.5% did not agree but still continued; thus, we could assume that the control group might be very similar in commitment had they been asked
  • Reported income was more accurate for committed respondents, marginally significant
  • Overall item nonresponse was marginally better for committed respondents; non-committed people skipped more
  • Non-committed respondents were also more likely to straightline (a simple flagging sketch follows this list)
  • Reports of volunteering, social desirability were possibly lower in the committed group, people confessed it was important for the resume
  • Committed respondents were more likely to consent to reviewing records
  • Commitment led to more responses to the income question, and improved the accuracy; committed respondents were more likely to check their records to confirm income
  • Should try asking control group to commit at the very end of the survey to see who might have committed 
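Straightlining (giving the identical answer to every item in a grid) is easy to flag during analysis. A minimal sketch of one such flag — my own illustration, not the method from this paper:

```python
def is_straightliner(grid_answers, min_items=5):
    """Flag a respondent who gives the identical answer to every item
    in a grid, provided they answered at least min_items items."""
    answers = [a for a in grid_answers if a is not None]
    return len(answers) >= min_items and len(set(answers)) == 1

print(is_straightliner([3, 3, 3, 3, 3]))  # True: same answer across the grid
print(is_straightliner([3, 4, 3, 2, 5]))  # False: varied answers
```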

Best Practice Instrument design and communications evaluation: An examination of the NSCH redesign by William Bryan Higgins, ICF International

  • National and state estimates of child well-being 
  • Why redesign the survey? To shift from landline and cell phone numbers to household address based sampling design because kids were answering the survey, to combine two instruments into one, to provide more timely data
  • Move to self-completion mail or web surveys with telephone follow-up as necessary
  • Evaluated communications about the survey, household screener, the survey itself
  • Looked at whether people could actually respond to questions and understand all of the questions
  • Noticed they need to highlight who is supposed to answer the survey, e.g., only for households that have children, or even if you do NOT have children. Make requirements bold, high up on the page.
  • The wording assumed people had read or received previous mailings. “Since we last asked you, how many…”
  • Needed to personalize the people, name the children during the survey so people know who is being referred to 
  • Wanted to include less legalese

Web survey experiments on fully balanced, minimally balanced, and unbalanced rating scales by Sarah Cho, SurveyMonkey

  • Is now a good time or a bad time to buy a house? Or, is now a good time to buy a house, or not? Or, is now a good time to buy a house?
  • Literature shows a moderating effect for education
  • Research showed very little difference among the formats; no need to balance questions online
  • Minimal differences by education though lower education does show some differences
  • Conclusion, if you’re online you don’t need to balance your results

How much can we ask? Assessing the effect of questionnaire length on survey quality by Rebecca Medway, American Institutes for Research

  • Adult education and training survey, paper version
  • Wanted to redesign the survey but the redesign was really long
  • 2 versions were 20 and 28 pages (98 or 138 questions)
  • Response rate slightly higher for shorter questionnaire
  • No significant differences in demographics [but I would assume there is some kind of psychographic difference]
  • Slightly more non-response in longer questionnaire
  • Longer surveys had more skips over the open end questions
  • Skip errors had no differences between long and short surveys
  • Generally, longer had a lower response rate but no extra problems over the short
  • [they should have tested four short surveys versus the one long survey; 98 questions is just as long as 138 questions in my mind]

Mobile devices and modular survey design by Paul Johnson #PAPOR #MRX 

Live blogged at the #PAPOR conference in San Francisco. Any errors or bad jokes are my own.

  • now we can sample by individuals, phone numbers, location, transaction
  • can reach by an application, email, text, IVR but make sure you have permission for the method you use (TCPA)
  • 55+ prefer to dial an 800 number for a survey, younger people prefer an SMS contact method; important to provide as many methods as possible so people can choose the method they prefer
  • mobile devices give you lots of extra data – purchase history, health information, social network information, passive listening – make sure you have permission to collect the information you need; give something back in terms of sharing results or hiding commercials
  • Over 25% of your sample is already taking surveys on a mobile device; you should check what device people are using and skip questions that won’t render well on small screens
  • remove unnecessary graphics, background templates are not helpful
  • keep surveys under 20 minutes [i always advise 10 minutes]
  • use large buttons, minimal scrolling; never scroll left/right
  • avoid using radio buttons, aim for large buttons instead
  • for open ends, put a large box to encourage people to use a lot of words
  • mobile open ends have just as much content although there may be fewer words, more acronyms, more profanity
  • be sure to use a back button if you use auto-next
  • if you include flash or images be sure to ask whether people saw the image
  • consider modularizing your surveys: ensure one module has all the important variables, give everyone a random module, let people answer more modules if they wish (a rough assignment sketch follows this list)
  • How to fill in missing data  – data imputation or respondent matching [both are artificial data remember! you don’t have a sense of truth. you’re inferring answers to infer results.   Why are we SOOOOO against missing data?]
  • most people will actually finish all the modules if you ask politely
  • you will find differences between modular and not but the end conclusions are the same [seriously, in what world do two sets of surveys ever give the same result? why should this be different?]
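Here is a rough sketch of the modular assignment idea from the bullets above — everyone gets the core module with the important variables, plus a randomly selected extra module, and can volunteer for more. The module names and counts are my own placeholders, not from the talk.

```python
import random

CORE = "core"                                  # the module with all the important variables
EXTRAS = [f"module_{i}" for i in range(1, 6)]  # the remaining question modules

def assign_modules(respondent_id, n_extra=1):
    """Give everyone the core module plus n_extra randomly chosen extras;
    willing respondents can always opt in to more."""
    rng = random.Random(respondent_id)  # seed on the ID so assignment is reproducible
    return [CORE] + rng.sample(EXTRAS, k=n_extra)

print(assign_modules("resp-42"))  # e.g. ['core', 'module_3']
```

The unasked modules are then missing by design, which is where the imputation or respondent matching mentioned above comes in.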

One size no longer fits all – beyond traditional surveys #ESOMAR #MRX 

Live blogged at Esomar in Dublin. Any errors or bad jokes are my own.

Modular surveys for agile research by Grant Miller and John Crockett

  • Survey with 150 questions that has been run for 30+ years – a social values survey
  • Can we modularize this survey? How will that impact the results?
  • Used RIWI as the data provider, a mostly random URL bar sampler
  • Used chunking, broke survey into multiple sections – reshaped the survey based on sensical modules – 35 modules
  • fielded individual modules, sometimes 1 or 2 or 3 or 4 randomly selected modules, let people answer as many as they wanted
  • problems with chunking – missing data in different areas; data from ten people might work out to five completes (see the back-of-envelope sketch after this list)
  • people answered far more modules than expected, but a lot of data was coming from few respondents – 70% of data came from 30% of responders, didn’t see major demographic differences
  • opportunity to ask more questions because it didn’t seem to create bias demographically
  • There were differences versus panel data, skewed to younger population
  • [brand name drop :/]
  • RDIT has a positivity bias, more so than online panelists, this source uses scales differently, they were less likely to use the most negative response
  • we have to stop being uncomfortable working with partial data [in other words, stop forcing a response to every survey question!]
  • be open to blended data, partial data, be open to think differently
  • lesson learned – even though people like really short and researchers like really long, you can be in between
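The ‘ten people, five completes’ arithmetic works like this — a back-of-envelope sketch with invented numbers (the talk mentioned 35 modules):

```python
def effective_completes(modules_answered, total_modules=35):
    """Convert partial modular responses into full-survey equivalents."""
    return sum(modules_answered) / total_modules

# ten respondents who each answered roughly half of the 35 modules
answered = [17, 18, 17, 18, 17, 18, 17, 18, 17, 18]
print(effective_completes(answered))  # 5.0 -- ten people, five completes
```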

When should we ask, and when should we measure by Melanie Revilla and German Loewe

  • How many times did you connect to your email last week?  Do you have access to this information? Can surveys collect this data?
  • Surveys have been used for subjective and objective data over the years, will we do this in the future?
  • What is the determinant of quality in survey data? memory affects it, but our memory is completely overwhelmed
  • we have so many distractions now, events are much quicker, so many products to think about, and why do we bother even trying to remember anything anymore since our phone remembers for us
  • used metering devices associated with a panel, compared stated versus actual passive device usage, is one more accurate and when?
  • asked people about the last five websites they visited, what was the match rate – 1% recalled 5 out of 5,  6% remembered 4, 9% remembered 3, 29% remembered none
  • ask people about ‘most often’ websites, spontaneous recall, 7 days or 2 months, people were far better with 2 months recall
  • with prompted recall,  trend wasn’t as expected but they don’t know why yet
  • there is always more over-reporting than under-reporting, acquiescence bias
  • people don’t remember their online activities
  • recall is even worse on a smartphone, so much marketing taking place [hello completely distracted! phone games, text messages, video watching, snapchatting]
  • think about when six blind men touch a different part of an elephant and they describe it differently, but together, they describe an elephant

The impact of questionnaire design on measurements in surveys #4 #ESRA15 #MRX 

Live blogged from #ESRA15 in Reykjavik. Any errors or bad jokes are my own.

Well, last night i managed to stay up until midnight. The lights at the church went on, lighting up the tower and the very top in an unusual way. They were quite pretty! The rest of the town enjoyed mood lighting as it didn’t really get dark at all. Tourists were still wandering in the streets since there’s no point going to bed in a delightful foreign city if you can still see where you’re going. And if you weren’t a fan of the mood lighting, have no fear! The sun ‘rose’ again just four hours later. If you’re scared of the dark, this is a great place to be – in summer!

Today’s program for me includes yet another session on question data quality, polling question design, and my second presentation on how non-native English speakers respond to English surveys. We may like to think that everyone answering our surveys is perfectly fluent but let’s be realistic. About 10% of Americans have difficulty reading/writing in English because it is not their native language. Add to that weakly and non-literate people, and there’s potentially big trouble at hand.


the impact of answer format and item order on the quality of measurement

  • compared a 2 point scale and an 11 point scale, with different orders of questions, where questions could even be very widely apart; looked at perceived prestige of occupations
  • separated two pages of the surveys with a music game of guessing the artist and song, purely as distraction from the survey. the second page was the same questions in a completely different order, did the same thing numerous times changing the number of response options and question orders each time. whole experiment lasted one hour
  • assumed scale was uni-dimensional
  • no differences comparing 4 point to 9 point scale, none between 2 point and 9 point scale [so STOP USING HUGE SCALES!!!]
  •  prestige does not change depending on order in the survey [but this is to be expected with non-emotional, non-socially desirable items]
  • respondents confessed they tried to answer well but maybe not the best of their ability or maybe their answers would change the next time [glad to see people know their answers aren’t perfect. and i wouldn’t expect anything different. why SHOULD they put 100% effort into a silly task with no legitimate outcome for them.]

measuring attitudes towards immigration with direct questions – can we compare 4 answer categories with dichotomous responses

  • when sensitive questions are asked, social desirability affects response distributions
  • different groups are affected in different ways
  • asked questions about racial immigration – asked binary or as a 4 point scale
  • it’s not always clear that slightly is closer to none or that moderately is closer to strongly. can’t just assume the bottom two boxes are the same or the top two boxes are the same (see the collapsing sketch after this list)
  • education does have an effect, as well as age in some cases
  • expression of opposition for immigration depends on the response scale
  • binary responses lead to 30 to 50% more “allow none” responses than the 4 point scale
  • respondents with lower education have a lower probability of choosing a middle scale point
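The bottom-two-boxes point is easy to see with invented numbers. A toy sketch of collapsing a 4-point ‘allow many/some/few/none’ question to binary — entirely my own illustration:

```python
from collections import Counter

# invented response counts for a 4-point "allow many/some/few/none" question
responses = ["many"] * 20 + ["some"] * 30 + ["few"] * 35 + ["none"] * 15
counts = Counter(responses)

# the binary "allow" share depends entirely on where you draw the cut
cut_after_some = counts["many"] + counts["some"]                 # 50 "allow"
cut_after_few = counts["many"] + counts["some"] + counts["few"]  # 85 "allow"
print(cut_after_some, cut_after_few)
```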

cross cultural differences in the impact of number of response categories on response behaviour and data structure of a short scale for locus of control

  • locus of control scale, 4 items, 2 internal, 2 external
  • tested 5 point vs 9 point scale
  • do the means differ, does the factor structure differ
  • I’m my own boss; if I work hard, I’ll succeed; when at work or in my private life, what I do is mainly determined by others; bad luck often gets in the way of my plans
  • labeled from “doesn’t apply at all” to “applies completely”
  • didn’t see important demographic differences
  • saw one interaction but it didn’t really make sense [especially given sample size of 250 and lots of other tests happening]
  • [lots of chatter about significance and non-significance but little discussion of what that meant in real words]
  • there was no effect of item order, # of answer options mattered for external locus but not internal locus of control
  • [i’d say hard to draw any conclusions given the tiny number of items, small sample size. desperately needs a lot of replication]

the optimal number of categories in item specific scales

  • type of rating scale where the answer is specific to the scale and doesn’t necessarily apply to every other item – what is your health? excellent, good, poor
  • quality increased with the number of answer options when comparing 11, 7, 5, and 3 point scales but not when comparing 10, 6, and 4 point scales
  • [not sure what quality means in this case, other audience members didn’t know either, lacking clear explanation of operationalization]

The impact of questionnaire design on measurements in surveys #3 #ESRA15 #MRX 

Live blogged from #ESRA15 in Reykjavik. Any errors or bad jokes are my own.

We had 90 minutes for lunch today which is far too long. Poor me. I had pear skyr today to contrast yesterday’s coconut skyr. I can’t decide which one I like better. Oh, the hard decisions I have to make! I went for a walk which was great since it drizzled all day yesterday. The downtown is tiny compared to my home so it’s quite fun to walk from one end to the other, including dawdling and eating, in less than half an hour. It’s so tiny that you don’t need a map. Just start walking and take any street that catches your fancy. I dare you to get lost. Or feel like you’re in an unsafe neighbourhood. It’s not possible.

I am in complete awe at the bird life here. There are a number of species I’ve never seen before which on its own is fun. It is also baby season so most of the ducks are paired off and escorting 2 to 8 tiny babies. They are utterly adorable as the babies float so well that they can barely swim underwater to eat. I haven’t seen any puffins along the shore line. I’m still hopeful that a random one will accidentally wander across my path.

By the way, exceptional beards really are a thing here. In case you were curious.

the Who: experimental evidence on the effect of respondent selection on collecting individual asset ownership information

  • how do you choose who to interview?
  • “Most knowledgeable person”, random selection, the couple together, each individual adult by themself about themself, by themself about other people
  • research done in uganda so certainly not generalizable to north america
  • ask about dwelling, land, livestock, banking, bequeathing, selling, renting, collateral, investments
  • used CAPI, interviews matched on gender, average interview was 30 minutes
  • challenges included hard to find couples together as one person might be working in the field, hard to explain what assets were
  • asking the couple together shows differences in ownership incidence but the rest is the same
  • [sorry, couldn’t determine what “significant positive results” actually meant. would like to know. 😦 ]

Portuguese national health examination survey: questionnaire development

  • study includes physical measurements and a survey of health status, health behaviours, medication, income, expenses
  • pre-tested the survey for comprehension and complexity
  • found they were asking for things from decades ago and people couldn’t remember (e.g., when did you last smoke)
  • some mutually exclusive questions actually were not
  • you can’t just ask about ‘activity’ you have to ask about ‘physical activity that makes you sweat’
  • response cards helped so that people didn’t have to say an embarrassing word
  • had to add instructions that “some questions may not apply to you but answer anyways” because people felt that if you saw them walking you shouldn’t ask whether they can walk
  • gave examples of what sitting on the job, or light activity on the job meant so that desk sitters don’t include walking to the bathroom as activity
  • pretest revealed a number of errors that could be corrected, language and recall problems can be overcome with better questions

an integrated household survey for Wales

  • “no change” is not a realistic option [i wish more people felt that way]
  • duplication among the various surveys, inefficient, survey costs are high
  • opportunity to build more flexibility into a new survey
  • annual sample size of 12000, randomly selected 16+ adults, 45 minutes
  • want to examine effects of offering incentives
  • survey is still in field
  • 40% lower cost compared to previous, significant gains in flexibility

undesired responses to surveys: wrong answers or poorly worded questions? how respondents insist on reporting their situation despite unclear questioning

  • compared census information with family survey information
  • interested in open text answers
  • census has been completed since 1881
  • belle-mère can mean stepmother or mother-in-law in French
  • can’t tell which adult child in the house any grandchildren in the house belong to
  • ami can mean friend or boyfriend or partner or spouse, some people will also specify childhood friend or unemployed friend or family friend
  • can’t tell if an unknown location of child means they don’t know the address or the child has died
  • do people with an often changing address live in a camper, or travel for work?
  • if you only provide age in years for babies you won’t know if it’s stillborn or actually 1 year old

ask a positive question and get a positive answer: evidence on acquiescence bias from health care centers in nigeria

  • created two pairs of questions where one was positive and one was negative – avoided the word no [but the extremeness of the questions differed, e.g., “Price was reasonable” vs “Price was too expensive”]
  • some got all positive, all negative, or a random mix
  • pilot test was a disaster, in rural nigeria people weren’t familiar with this type of question
  • instead, started out asking a question about football so people could understand how the question worked. asked agree or disagree, then asked moderately or strongly – a two stage likert scale (a recoding sketch follows this list)
  • lab fees were reasonable generated very different result than lab fees were unreasonable [so what is reality?]
  • it didn’t matter if negatives were mixed in with positives
  • acquiescence bias affects both positive and negative questions, can’t say if it’s truly satisficing, real answer is probably somewhere in between [makes me wonder, can we develop an equation to tease out truth]
  •  large ceiling effects on default positive framing — clinics are satisfactory despite serious deficiencies
  • can’t increase scores with any intervention but you can easily decrease the scores
  • maybe patient satisfaction is the wrong measure
  • recommend using negative framing to avoid ceiling effects [I wonder if in north america, we’re so good at complaining that this isn’t relevant]
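A quick sketch of how the two-stage Likert answers described above could be recoded into a single 4-point scale — the category labels and the mapping are my assumptions, not from the paper:

```python
# map (agree/disagree, moderately/strongly) onto a single 1-4 scale
TWO_STAGE_TO_4PT = {
    ("disagree", "strongly"):   1,
    ("disagree", "moderately"): 2,
    ("agree", "moderately"):    3,
    ("agree", "strongly"):      4,
}

def recode(direction, intensity):
    """Collapse the two-stage answers into one 4-point score."""
    return TWO_STAGE_TO_4PT[(direction.lower(), intensity.lower())]

print(recode("Agree", "Strongly"))       # 4
print(recode("Disagree", "Moderately"))  # 2
```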

The impact of questionnaire design on measurements in surveys #2 #ESRA15 #MRX  

Live blogged at #ESRA15 in Reykjavik. Any errors or bad jokes are my own.

Breaktime treated us to fruit and croissants this morning. I was hoping for another unique to iceland treat but perhaps that was a sign to stop eating. No, just kidding! Apparently you’re not allowed to bring food or drink into the classrooms. The signs say so. The signs also say no Facebook in the classrooms. Shhhh…. I was on Facebook in the classroom!

The sun is out again and I took a quick walk outside. I am thankful my hotel is at the foot of the famous church. No matter where I am in this city, I can always, easily, and instantly find my hotel. No map needed when the church is several times higher than the next highest building!

I’ve noticed that the questions at this conference are far more nit-picky and critical than I’m used to. I suspect that is because the audience includes many academics whose entire job is focused on these topics. They know every minute detail because they’ve done similar studies themselves. It makes for great comments and questions, though it does seem to put the speaker on the spot every time!

smart respondents: let’s keep it short.

  • do we really need scale instructions in the question stem? they add length, mobile screens have limited space, and respondents skip the instructions if the response scale is already labeled [isn’t this just an artifact of old fashioned face to face surveys, telephone surveys]
  • they tested instructions that matched and did not match what was actually in the scale [i can imagine some panelists emailing the company to complain that the survey had errors!]
  • used a probability survey [this is one case where a nonprobability sample would have been well served, easier cheaper to obtain with no need to generalize precisely to a population]
  • answer frequencies looked very similar for correct and incorrect instructions, no significant differences, she’s happy to have nonsignificant results, unaffected by mobile device or age
  • [more regression results shown, once again, speaker did not apologize and the audience did not have a heart attack]
  • it seems like respondents ignore instructions in the question; they rely on the words in the answer options, e.g., grid headers
  • you can omit instructions if the labeling is provided in the answer options
  • works better for experienced survey takers [hm, i doubt that. anyone seeing the answer options will understand. at least, that’s my opinion.]

from web to paper: evaluation from data providers and data analysts. The case of the annual survey of finances of enterprises

  • we send out questionnaires, something happens, we get data back – we don’t know what happens 🙂
  • wanted to keep question codes in the survey which seemed unnecessary to respondents, had really long instructions for some questions that didn’t fit on the page so they put them on a pdf
  • 64% of people evaluated the codes on the online questionnaire positively, 12% rated the codes negatively. people liked that they could communicate with statistics netherlands by using the codes
  • 74% negative responses to the explanations of questions, which were intended to reduce calls from statistics netherlands; only 11% were positive
  • only 25% of people consulted the pdf with instructions
  • most people wanted to receive a printed version of the questionnaire they filled out; people really wanted to print it and they screen capped it, people liked being able to return later, they could easily get an english version
  • data editors liked that they didn’t have to do data entry but now they needed more time to read and understand what was being said
  • they liked having the email address because they got more direct and precise answers, responses came back faster, they didn’t notice any changes in the time series data

is variation in perception of inequality and redistribution of earnings actual or artifactual? effects of wording, order, and number of items

  • opinions differ when you ask how much should people make vs how much should the top quintile of people make
  • they asked people how much a number of occupations should earn; they also varied how specific the title was, e.g., teacher vs math teacher in a public high school
  • estimates for specific descriptions were higher, high status jobs got much higher estimates
  • adding more occupations to the list makes reliability in earnings decrease

exploring a new way to avoid errors in attitude measurements due to complexity of scientific terms: an example with the term biodiversity

  • how do people talk about complicated terms, their own words often differ from scientific definitions
  • “what comes to mind when you think of biodiversity?” – used text analysis for word frequencies, co-occurrences, and correspondence analysis, then used the results to design items for the second study (a toy frequency-count sketch follows this list)
  • found five classes of items – standard common definition, associated with human actions to protect it, human-environment relationship, global actions and consequences, scientific definition
  • turned each of the five types of definitions into a common word definition
  • people gave more positive opinions about biodiversity when they were asked immediately after the definition
  • items based on representations of biodiversity were valid and reliable
  • [quite like this methodology, could be really useful in politics]
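A toy version of the first text-analysis step described above — counting word frequencies across open-ended answers. The answers are invented for illustration:

```python
from collections import Counter
import re

# invented open-ended answers to "what comes to mind when you think of biodiversity?"
answers = [
    "variety of species and ecosystems",
    "protecting species from extinction",
    "species diversity in nature",
]

# lowercase, tokenize, and tally word frequencies across all answers
word_counts = Counter(
    word for answer in answers for word in re.findall(r"[a-z]+", answer.lower())
)
print(word_counts.most_common(3))  # e.g. [('species', 3), ...]
```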

[if any of these papers interest you, i recommend finding the author on the ESRA program and asking for an official summary. Global speakers and weak microphones make note taking more challenging. 🙂 ]


