
Analysis, design, and sampling methods #PAPOR #MRX 

Live blogged at #PAPOR in San Francisco. Any errors or bad jokes are my own.

Enhancing the use of Qualitative Research to Understand Public Opinion, Paul J. Lavrakas, Independent Consultant; Margaret R. Roller, Roller Research

  • thinks research has become too quantitative because qual is typically not as rigorous, but this should and can change
  • public opinion is not a number generated from polls; polls are imperfect and limited
  • AAPOR has lost sight of this [you’re a brave person to say this! very glad to hear it at a conference]
  • we need more balance; we aren’t a survey research organization, we are a public opinion organization, and our conference programs are extremely biased toward the quantitative
  • there should be criteria to judge the trustworthiness of research – was it fit for purpose
  • credibility, transferability, dependability, confirmability
  • all qual research should be credible, analyzable, transparent, useful
  • credible – sample representation and data collection
  • do qual researchers seriously consider non-response bias?
  • credibility – scope deals with coverage design and nonresponse; data gathering – information obtained, researcher effects, participant effects
  • analyzability – intercoder reliability, transcription quality
  • transparency – thick descriptions of details in final documents

Comparisons of Fully Balanced, Minimally Balanced, and Unbalanced Rating Scales, Mingnan Liu, Sarah Cho, and Noble Kuriakose, SurveyMonkey

  • there are many ways to ask the same question
  • is it a good time or a bad time? – fully balanced
  • is it a good time or not? – minimally balanced
  • do you or do you not think it is getting better?
  • are things headed in the right direction?
  • [my preference – avoid introducing any balancing in the question, only put it in the answer. For instance: What do you think about buying a house? Good time, Bad time]
  • results – effect sizes are very small, no differences between the groups
  • in many different questions tested, there was no difference in the formats

Conflicting Thoughts: The Effect of Information on Support for an Increase in the Federal Minimum Wage Level, Joshua Cooper & Alejandra Gimenez, Brigham Young University, First Place Student Paper Competition Winner

  • Used paper surveys for the experiment, 13000 respondents, 25 forms
  • Would you favor or oppose raising the minimum wage?
  • Some were told how many people would increase their income, some were told how many jobs would be lost, some were told both
  • those given negative info opposed a wage increase, those given positive info favored it, and people who were told both opposed a wage increase
  • independents were more likely to say don’t know
  • the negative info strongly outweighs the positive across all types of respondents regardless of gender, income, religion, party ID
  • jobs matter, more than anything

Direction of response scales #ESRA15 #MRX 

Live blogged at #ESRA15 in Reykjavik. Any errors or bad jokes in the notes are my own.

I discovered that all the buildings are linked indoors. Let it rain, let it rain, i don’t care how much it rains….  [Feel free to sing that as loud as you can.] Lunch was Skyr, oat cookies and some weird beet drink. Yup. I packed it myself. I always try to like yogurt and never really do. Skyr works for me. So far, coconut is my favourite. I’ve forgotten to take pictures of speakers today so let’s see if I can keep the trend going! Lots of folks in this session so @MelCourtright and I are not the only scale geeks out there. 🙂

Response scales: Effects of scale length and direction on reported political attitudes

  • instruments are not neutral, they are a form of communication
  • cross-national projects use different scales for the same question, so how do you compare the results?
  • trust in parliament is a fairly standard question for researchers and so makes a good example
  • the 4 point scale is most popular, but scales of up to 11 points are used; the traditional format runs from very positive to very negative
  • included a don’t know in the answer options
  • transformed all scales into a 0 to 1 scale and evenly distributed all scores in between
  • means highest with 7 point scale traditional direction and lowest with 4 point and 11 point traditional direction
  • the reverse direction had far fewer mean differences, essentially all the same
  • four point scales show differences in direction, 7 and 11 point show fewer differences in direction
  • [regression results shown on the screen – no one fainted or died, the speaker did not apologize or say she didn’t understand them. interesting difference compared to MRX events.]
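The 0-to-1 transformation mentioned above can be sketched as a simple linear rescaling that spaces the intermediate points evenly; this is a minimal illustration of the idea, not the study's actual recoding script:

```python
def rescale_to_unit(response, n_points):
    """Linearly map a response on a 1..n_points scale onto [0, 1],
    distributing the intermediate scores evenly."""
    return (response - 1) / (n_points - 1)

# A 4 on a 7-point scale lands exactly at the midpoint:
print(rescale_to_unit(4, 7))  # 0.5
```

With this mapping, means from 4-, 7-, and 11-point versions of the same question land on a common 0-to-1 metric and can be compared directly.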

Does satisficing drive scale direction effects

  • research shows answers shift towards the start of the scale, but this is not consistent
  • anchoring and adjustment effects whereby people use the first answer option as the anchor; interpretive heuristics suggest people choose an early response to express agreement with the question; primacy effects due to satisficing decrease cognitive load
  • scores were more positive when the scale started positive, differences were huge across all the brands
  • the pattern is the same but the differences are noticeable
  • speeding measured as 300 milliseconds per word
  • speeders more likely to choose early answer option
  • answers are pushed to the start of the scale; limited evidence that it is caused by satisficing
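The 300-milliseconds-per-word speeding threshold mentioned above could be operationalized roughly as follows; the function name and structure are my own illustration, not the authors' code:

```python
def is_speeder(response_time_ms, question_word_count, threshold_ms_per_word=300):
    """Flag a respondent as speeding if they answered faster than the
    per-word reading-time threshold allows for this question."""
    minimum_expected_ms = question_word_count * threshold_ms_per_word
    return response_time_ms < minimum_expected_ms

# A 20-word question answered in 4.5 seconds (under the 6-second floor):
print(is_speeder(4500, 20))  # True
```

Respondents flagged this way could then be compared against non-speeders on how often they pick early answer options.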

Ordering your attention: response order effects in web-based surveys

  • primacy happens more often visually and recency more often orally
  • scales have an inherent order: if you know the first answer option, you know the remainder of the options
  • sample size over 100,000, randomly assigned to scale order; also tested labeling, orientation, and number of response categories from 2 to 11
  • the order effect was always a primacy effect; differences were significant though small, the significance owing more to sample size [then why mention the results if you know they aren’t important?]
  • order effects occurred more with fully labeled scales, end labeled scales did not see response order effects
  • second study also supported the primacy effect with half of questions showing the effect
  • the effect was much stronger with unipolar scales
  • vertical scales also showed a much stronger effect
  • the largest effect was seen for the horizontal unipolar scale
  • need to run the same tests with grids, don’t know which response is more valid, need to know what they will be and when

Impact of response scale direction on survey responses in web and mobile web surveys

  • why does this effect happen?
  • tested agreement scales and frequency scales
  • shorter scale decreases primacy effect
  • scale length has a significant moderating effect – stronger effect for 7 point scales compared to 5 point scales
  • labeling has significant moderating effects – stronger effect for fully labeled
  • question location matters – stronger effect on earlier questions
  • labeled behavioural scale shows the largest impact, end labeled attitudinal scale has the smallest effect
  • scale direction affects responses – more endorsement at start of scale
  • 7 point fully labeled frequency scale is most affected
  • we must use shorter scales and end labeling to reduce scale direction effects in web surveys

Importance of scale direction between different modes

  • term used is forward/reverse scale [as opposed to ascending/descending or positive/negative keyed]
  • in the forward version of the scale, the web creates more agreement, but face to face the effect is very weak; face to face shows a recency effect
  • effect is the same for general scales (all scales are agreement) and item specific scales (each scale reflects the specific question), more cognitive effort in the item specific scale so maybe less effort is invested in the response
  • item specific scale affected more by the web
  • randomizing scale matters more in online surveys
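Randomizing scale direction per respondent, as these studies test, could be sketched like this; the agreement labels and function are hypothetical examples, not taken from any of the papers:

```python
import random

def assign_scale(labels, rng=random):
    """Randomly present the scale forward or reversed, and record which
    direction each respondent saw so answers can be recoded later."""
    is_reversed = rng.random() < 0.5
    presented = list(reversed(labels)) if is_reversed else list(labels)
    return presented, is_reversed

agreement = ["Strongly agree", "Agree", "Disagree", "Strongly disagree"]
presented, was_reversed = assign_scale(agreement)
```

Recording `was_reversed` per respondent is what makes it possible to recode answers onto a common direction and then test whether endorsement piles up at whichever end was shown first.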

