Tag Archives: likert scale

Questionnaire Design #AAPOR 

Live note taking at #AAPOR in Austin Texas. Any errors or bad jokes are my own.

The effect of respondent commitment and tailored feedback on response quality in an online survey; Kristin Cibelli, U of Michigan

  • People can be unwilling or unable to provide high quality data; will informing them of the importance and asking for commitment help to improve data quality? [I assume this means the survey intent is honourable and the survey itself is well written, not always the case]
  • Used administrative records as the gold standard
  • People were told their answers would help with social issues in the community [would similar statements help in CPG, “to help choose a pleasant design for this cereal box”]
  • 95% of people agreed to the commitment statement, and 2.5% did not agree but still continued; thus, we could assume that the control group might have been very similar in commitment had they been asked
  • Reported income was more accurate for committed respondents, a marginally significant effect
  • Overall item nonresponse was marginally better for committed respondents; non-committed people skipped more
  • Non-committed respondents were more likely to straightline
  • Reports of volunteering, a socially desirable behaviour, were possibly lower in the committed group; people confessed it was important for the resume
  • Committed respondents were more likely to consent to reviewing records
  • Commitment led to more responses to the income question and improved their accuracy; committed respondents were more likely to check their records to confirm income
  • Should try asking the control group to commit at the very end of the survey to see who might have committed

Best Practice Instrument design and communications evaluation: An examination of the NSCH redesign by William Bryan Higgins, ICF International

  • National and state estimates of child well-being 
  • Why redesign the survey? To shift from landline and cell phone numbers to a household address-based sampling design (because kids were answering the survey), to combine two instruments into one, and to provide more timely data
  • Move to self-completion mail or web surveys with telephone follow-up as necessary
  • Evaluated communications about the survey, household screener, the survey itself
  • Looked at whether people could actually respond to questions and understand all of the questions
  • Noticed they need to highlight who is supposed to answer the survey, e.g., only for households that have children, or even if you do NOT have children. Make requirements bold and high up on the page.
  • The wording assumed people had read or received previous mailings. “Since we last asked you, how many…”
  • Needed to personalize the survey, naming the children so people know who is being referred to
  • Wanted to include less legalese

Web survey experiments on fully balanced, minimally balanced, and unbalanced rating scales by Sarah Cho, SurveyMonkey

  • Is now a good time or a bad time to buy a house? [fully balanced] Or: Is now a good time to buy a house, or not? [minimally balanced] Or: Is now a good time to buy a house? [unbalanced]
  • Literature shows a moderating effect for education
  • Research showed very little difference among the formats; no need to balance the question online
  • Minimal differences by education though lower education does show some differences
  • Conclusion: if you’re online, you don’t need to balance your scales

How much can we ask? Assessing the effect of questionnaire length on survey quality by Rebecca Medway, American Institutes for Research

  • Adult education and training survey, paper version
  • Wanted to redesign the survey, but the redesign was really long
  • The 2 versions were 20 pages and 28 pages, with 138 or 98 questions
  • Response rate slightly higher for shorter questionnaire
  • No significant differences in demographics [but I would assume there is some kind of psychographic difference]
  • Slightly more non-response in the longer questionnaire
  • Longer surveys had more skips over the open-ended questions
  • Skip errors had no differences between long and short surveys
  • Generally the longer version had a lower response rate but no extra problems over the short one
  • [they should have tested four short surveys versus the one long survey; 98 questions is just as long as 138 in my mind]

It’s just sentiment analysis… or is it…

What is sentiment analysis? Many people have staked their entire careers on that topic, wrestling with what it is and what it isn’t, how to tweak it and massage it and improve it so that it edges ever so slowly closer to the 85% accuracy that we deem to be perfect. However close to perfect you may be, many of us are doing fairly similar jobs.

But sentiment analysis really has nothing to do with social media research. It is no more related to SMR than are any of the other tools that researchers use. A Likert scale isn’t research. A regression isn’t research. A sample isn’t research. But every tool that researchers use is built with important research principles and goals in mind. We then collect these tools together and combine them in ways that make one plus one equal three.

It’s not the sentiment analysis, but rather the people and processes that turn bits and pieces of hints and gleanings into piles and piles of smarts. Perhaps it’s time to work on that too.

Read these too

  • Geico chucks wood
  • 5 random things I like about statistics and proof you are a dork
  • Using Social Media Research to Predict the Future: That’s me saying yes you can!
  • Probability Sampling – Proof that only telephone samples are quality samples
  • Edward Tufte: Your Presentation Sucks Cause Your Content Sucks

A Box Score Lesson for Psychology Students

    [Image: a plot of a normal distribution (bell curve), via Wikipedia]

    I was in school for a bunch of years, and took a bunch of research design courses and a bunch of statistical analysis courses. Easy ones, hard ones, and a few really interesting ones. Surprisingly, one thing I never learned about was box scores, a statistical staple in the market research world.

    Box scores are a way of talking about and working with Likert scales or other types of categorical scales so that everyone knows whether you are talking about the positive end of the scale (top box, top 2 box), the middle of the scale (middle/neutral box), or the negative end of the scale (bottom box, bottom 2 box).

    Instead of calculating average scores from the Likert scale responses, box scores are reported as the percentage out of the total number of people who answered the question. (If 10 out of 50 people chose strongly agree, the top box score is 20%.) Box scores let you clearly identify how many people fall into a subgroup – people who are happy, unhappy, or just don’t care about your product.
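
    If you’d rather see that arithmetic spelled out, here is a minimal sketch in Python. The responses and the 1-to-5 coding are invented for illustration, not from any real survey:

    # Box-score tallies for a 5-point Likert item,
    # coded 1 (strongly disagree) to 5 (strongly agree).
    responses = [5, 4, 3, 5, 2, 1, 4, 4, 3, 5]  # made-up data
    n = len(responses)

    top_box    = sum(1 for r in responses if r == 5) / n  # strongly agree only
    top_2_box  = sum(1 for r in responses if r >= 4) / n  # agree or better
    middle_box = sum(1 for r in responses if r == 3) / n  # the fence-sitters
    bottom_2   = sum(1 for r in responses if r <= 2) / n  # disagree or worse

    print(f"Top box: {top_box:.0%}, top 2 box: {top_2_box:.0%}, "
          f"middle: {middle_box:.0%}, bottom 2 box: {bottom_2:.0%}")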

    Why do box scores matter? In a sense, they do report the same type of information as average scores. But, unless standard deviations are near and dear to you, average scores often appear very similar between groups. It’s hard to explain to a client why scores of 3.6 and 3.9 are very different because there is no intuitive difference between those numbers.

    But, let’s think about box scores now. Can you intuitively understand the difference between 30% of people liking your brand and 40% of people liking your brand? I’m pretty sure you can. And you don’t need to understand what a standard deviation is either. I’m not in favour of dumbing down statistics but I am in favour of people understanding them.

    Here’s another reason box scores are good. The average score calculated for a result that is 10% top box, 10% bottom box, and 80% middle box is exactly the same average score you would get for a result that is 40% top box, 40% bottom box, and 20% middle box. I’d certainly like to know whether 10% or 40% of people hated my product. That’s a pretty important difference to be aware of, and I wouldn’t want it getting lost because someone had a weak understanding of what an SD is.
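
    To make that concrete, here is a quick back-of-the-envelope check in Python, collapsing the scale to top box = 5, middle box = 3, bottom box = 1 (my simplification for illustration):

    # Two very different response distributions with identical average scores.
    dist_a = {5: 10, 3: 80, 1: 10}  # percent of people: 10% top, 80% middle, 10% bottom
    dist_b = {5: 40, 3: 20, 1: 40}  # percent of people: 40% top, 20% middle, 40% bottom

    mean_a = sum(score * pct for score, pct in dist_a.items()) / 100
    mean_b = sum(score * pct for score, pct in dist_b.items()) / 100
    print(mean_a, mean_b)  # both print 3.0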

    So now, psychology/sociology/geography majors, go forth and prosper as market researchers!


    1 topic, 5 blogs: Rich Media in Surveys

    Welcome to the first of five blogs wherein five bloggers chatter on aimlessly about the same topic. Where can one topic possibly lead? Well, just you read on to find out.

    This month’s topic: How does rich media affect survey results?

    _._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._

    Did you answer any surveys when they first came online more than ten years ago? I did my own HTML coding back then and was so proud of how I managed to code a 20-item grid on each of ten pages. Nice clean lines. Scroll bars down the side. Such technology!

    Now, I hate grids unless they meet very specific criteria. One of those criteria is, of course, eye appeal and interest, the goal being not to irritate survey responders. Though no longer a new technology, rich media features are still not widely used. Most surveys stick to the traditional “check the box” or “type in some words.”

    But rich media in surveys gives participants new ways of interacting with a survey. These new ways are interesting: they provide variety, a component of fun, and for some people, even a sense of anticipation as they wonder what the next survey will have to offer them.

    For me, everything comes down to data quality. If survey participants are bored, data quality takes a severe nose dive. If incentives are insufficient, data quality takes a nose dive. I see providing a variety of survey questions as killing two birds with one stone (sorry birdies). If surveys are fun, people don’t get bored and straightlining is less likely to occur. And, if surveys are fun, incentives are less important. The survey itself becomes incentive enough.

    There are, of course, caveats. Anyone who’s read a book on survey design or research methods will tell you that if you change the style of a question, you change the answers you will receive. If you’ve always used traditional grids, then switching to rich media “grids” means your data will change. You will need to be prepared to see trendlines adjust to the new layout. This is NOT a reason to avoid moving to rich media. There are ways to deal with changes in trendlines.

    Why do these changes happen? Personally, I was raised in a world where words go from left to right. That’s what feels right to me. Same for traffic lights that are red, yellow and green. If stop signs were suddenly green, the city would turn to chaos. No matter how hard we try, any tampering with ingrained rules means that people will misinterpret something, even if conscientious efforts are made to interpret the new rules correctly.

    And along those lines, there are many, many other things that are inherently correct to me that I don’t even realize. For example, if I’m given a drag-and-drop Likert scale, will I be able to refrain from filling up each box equally? If I’m asked to draw lines between objects, will I disregard the patterns the lines are making? If asked to choose pictures and colours, will I be able to refrain from choosing those at the top left?

    I think that using rich media in surveys is worth it. Anything that makes the survey experience more interesting to participants is worth the effort to make it work.
    [Image: ‘Green Means Go!’ by Hyokano via Flickr, licenced under a Creative Commons Attribution-ShareAlike licence]
    _._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._

    And the other four authors of this month’s topic are…
    Bernie Malinoff
    Joel Rubinson
    Josh Mendelsohn
    Brandon Bertelsen


    The most horrible stupidest smartest amazing way to write surveys!!!

    Did that title get your attention, for good or bad? What if I had said “An ok way to write pretty good surveys?” Would you have bothered to click through? Would you have gotten bored reading the post by the third sentence? I’m pretty sure you would.

    We all know that subject lines must be a bit edgy to get a response, but what about the post itself? I’ve taken extreme positions before in the quest to be a bit edgy. Not safe, not easy, but provoking. I hate cell phone surveys. Surveys must be under 15 minutes. Only experts should write surveys.

    I do feel strongly about these positions but in all honesty, they do not reflect my absolute 100% true opinion. Really, how can there never be exceptions or room for adjustments? But, they do cause lots of disagreements and discussion which might have never taken place otherwise.
    With that in mind, let’s see what happens when I say you must never use a scale with more than five points. 🙂

    [Image: ‘Everybody is surrounded by many ques…’ by ePi.Longo via Flickr, licenced under a Creative Commons Attribution licence]

    Related Posts

     

  • This is why Twitter will die
  • Only crazy people are on twitter
  • Mugging, Sugging and now Rugging: I take a hard stance on privacy
  • I’m sorry but representative samples are 100% unattainable
  • What’s Gonna Kill You? An Infographic That Actually Works #MRX

    To Mid-Point or Not To Mid-Point, That is the Question #MRX

    [Image: Snickers bar purchased Feb. 2005 in Atlanta, GA, USA, via Wikipedia]

    Here’s the question: do you use a mid-point or not?

    The standard five point scale gives you two degrees of positivity, two degrees of negativity, and one degree right in the middle that can be interpreted as neutrality, uncertainty, or whatever the responder feels like using it as. The six point scale provides three degrees of positivity, three degrees of negativity, and no way for anyone to waffle. Is there a right way to set up your Likert scale?

    Sometimes, researchers really want to know which side of the fence you’re on. When you go to vote, you aren’t there to tell both candidates that they’re doing a good job or that they are both equally horrible. You are there to pick one, the winner. However, when given the choice between a Crunchie bar and a Snickers bar, it is certainly possible that you will end up buying one of each. And I would wholeheartedly stand behind that decision. Sometimes a mid-point is logical; other times it’s just not so clear. Here are a few things to consider.

    1) Does the survey refer to something that people truly have to choose between? Is it reasonable for me to buy both Dove and Herbal Essences, or will I really only buy one Dell computer?

    2) How much do you want to annoy survey participants? Put your feet back in the shoes of a normal everyday person and think about how it feels to answer a survey without midpoints. You know you hate it. You know responders hate it. And hate equals decreased data quality. Hate equals lower response rates. Hate equals increased costs of panel recruitment. Are these risks worth that one point?

    3) Do you know how to use decimal places? Remember, you aren’t really using a 5 or 6 point scale. Because you aren’t concerned about individual responses, but rather averages of hundreds of responses, you’ve got a bazillion decimal places on your side. 1, 2, and 3 may reflect the positive side of a 6 point scale, but so does 1 to 2.4 on a 5 point scale. You can cut your midpoint, top box, or bottom box anywhere you like with decimals.
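
    In case the decimal idea sounds abstract, here is a minimal sketch of what I mean in Python. The responses and the 2.4 cut point are invented for illustration:

    # Averages of many responses carry decimals, so a cut point
    # doesn't have to land on a whole scale point.
    responses = [1, 2, 3, 2, 4, 1, 3, 2, 2, 5]  # made-up 5-point data, 1 = best

    mean_score = sum(responses) / len(responses)
    print(mean_score)         # 2.5
    print(mean_score <= 2.4)  # False: falls just outside a 1-to-2.4 "positive" band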
    What is my choice? In almost every case, I stick with odd-numbered scales. To be more specific, 5 point scales.

