
Presidential Address: Mollyann Brodie talks about diversity in all its forms #AAPOR @MollyBrodie

Introduction

  • Molly loves !!!!!!!!!!
  • If there is only one ! in an email, then maybe you’ve done something wrong. Several is a lukewarm endorsement
  • She also likes 🙂
  • She is so athletic it’s exhausting to sit beside her 
  • Harvard beats Yale: 29 to 29

Address

  • Spent 3 months reading EVERY past presidential address
  • Most set the #AAPOR agenda relating to its role in the world
  • We’re having a deep conversation about equality and diversity; we might elect our first woman president after our first black president
  • Is it time for us to have a discussion about inclusion in our own institutions?
  • We are stronger and more successful if we embrace more fully what has made us special: diverse methods, diverse people, diverse issues, diverse cultures
  • The answer isn’t a simple “Sure, of course”
  • We study methods and implications, Qual or quant, small or large, chapters and regions, and yes by our personal demographics – gender, race, religion, age, sexual orientation
  • Many feel their perspective has been belittled or ignored
  • Do we ignore nonprobability, do we care if you aren’t a methodologist
  • There ARE inequities but it is not intentional exclusion, it comes from our inattention
  • We must be more deliberate about who we mentor and include
  • Gender in AAPOR leadership: Molly was preceded and mentored by many women, so gender wasn’t an issue for her. But the picture has changed. Grumbling about all-male panels. 7 male presidents in a row. Does the data support a concern?
  • [woot my gender ratio chart is on the screen! You can see that most conferences have a higher proportion of male speakers.  Http://Lovestats.Wordpress.ca]

  • In the old days 1 in 5 members were female; by the 70s we had the first woman president
  • Over last 16 years, share of women members increased but not the share of the leaders
  • We’ve only seen ONE female versus female president election, just once in 7 decades
  • We rarely select female achievers as award winners – only five
  • Is this a problem across the board or just some committees
  • This is a long persistent pattern, not intentional or quickly fixed
  • Are we sampling properly to get a full range of perspectives?
  • We need ALL differences of opinions to expand our thinking
  • We can’t risk being irrelevant by not ensuring there is fresh air in our organization
  • We want everyone to feel that AAPOR is their home, we want all companies large and small, surveys and data science to feel at home
  • We want to bring different styles into the organization as well, we are stronger together
  • It is the right thing to do [can we insert Justin Trudeau here? Because it’s 2016!]
  • We need to LOOK like the public for them to believe and trust us, we need to have the voices of the public
  • More diverse is not necessarily easy, more constituencies with limited resources, traditions might have to change
  • We will have to get better at dealing with differences of opinion
  • We need to appreciate some uncomfortable differences to benefit from the collective whole
  • We have the tools to do this
  • We now have a diversity statement
  • Our bylaws call for a rotation of public and private organizations to guarantee equity between the groups
  • Need to understand structural barriers
  • New policy added term limits to ensure one voice isn’t the only voice for decades
  • Pipeline for leadership is being seeded
  • Where are our gaps, what are our impediments, FILL OUT YOUR MEMBERSHIP SURVEY
  • Need for affinity groups so everyone has opportunity to find their voice
  • GAYPOR is one group that self organized in 2012, Hispanic AAPOR was established this year, Retired AAPOR might be the next affinity group, and we could have so many more
  • ASA has a Women in Statistics group which helps with mentoring
  • #WomenAlsoKnowStuff – find voices for your conferences
  • Racial and ethnic diversity is the hardest of all – we are very white. [As I look around the room, I can’t see any black people. 😦 ]
  • We do care about over sixty but good intentions are not enough. We need to change this going forward.
  • We need an actionable plan
  • Do YOU pay attention to this? It changed her behavior and choices. Honestly think about inclusion in your role: planning panels, committees, who you sit with, how you are trying to reach outside your normal circle. Are you seeding the pipeline? Have you recruited a different type of person?
  • Insights are richer when produced by a diverse set of researchers.
  • Who is and isn’t participating? Who is on the sidelines?


Questionnaire Design #AAPOR 

Live note taking at #AAPOR in Austin Texas. Any errors or bad jokes are my own.

The effect of respondent commitment and tailored feedback on response quality in an online survey; Kristin Cibelli, U of Michigan

  • People can be unwilling or unable to provide high quality data; will informing them of the importance and asking for commitment help to improve data quality? [I assume this means the survey intent is honourable and the survey itself is well written, not always the case]
  • Used administrative records as the gold standard
  • People were told their answers would help with social issues in the community [would similar statements help in CPG, “to help choose a pleasant design for this cereal box”]
  • 95% of people agreed to the commitment statement, 2.5% did not agree but still continued; thus, we could assume that the control group might be very similar in commitment had they been asked
  • Reported income was more accurate for committed respondents, marginally significant
  • Overall item nonresponse was marginally better for committed respondents; people who did not commit skipped more
  • Those who did not commit were also more likely to straightline
  • Reports of volunteering, a socially desirable behavior, were possibly lower in the committed group; people confessed it was important for the resume
  • Committed respondents were more likely to consent to reviewing records
  • Commitment led to more responses to the income question and improved the accuracy; committed respondents were more likely to check their records to confirm income
  • Should try asking control group to commit at the very end of the survey to see who might have committed 

Best Practice Instrument design and communications evaluation: An examination of the NSCH redesign by William Bryan Higgins, ICF International

  • National and state estimates of child well-being 
  • Why redesign the survey? To shift from landline and cell phone numbers to household address based sampling design because kids were answering the survey, to combine two instruments into one, to provide more timely data
  • Move to self-completion mail or web surveys with telephone follow-up as necessary
  • Evaluated communications about the survey, household screener, the survey itself
  • Looked at whether people could actually respond to questions and understand all of the questions
  • Noticed they need to highlight who is supposed to answer the survey, e.g., only for households that have children, or even if you do NOT have children. Make requirements bold, high up on the page.
  • The wording assumed people had read or received previous mailings. “Since we last asked you, how many…”
  • Needed to personalize the people, name the children during the survey so people know who is being referred to 
  • Wanted to include less legalese

Web survey experiments on fully balanced, minimally balanced, and unbalanced rating scales by Sarah Cho, SurveyMonkey

  • Is now a good time or a bad time to buy a house? Or, is now a good time to buy a house or not? Or, is now a good time to buy a house?
  • Literature shows a moderating effect for education
  • Research showed very little difference among the formats, no need to balance the question online
  • Minimal differences by education though lower education does show some differences
  • Conclusion: if you’re online, you don’t need to balance your questions

How much can we ask? Assessing the effect of questionnaire length on survey quality by Rebecca Medway, American Institutes for Research

  • Adult education and training survey, paper version
  • Wanted to redesign the survey but the redesign was really long
  • 2 versions: 20 or 28 pages, 98 or 138 questions
  • Response rate slightly higher for shorter questionnaire
  • No significant differences in demographics [but I would assume there is some kind of psychographic difference]
  • Slightly more non-response in longer questionnaire
  • Longer surveys had more skips over the open end questions
  • Skip errors had no differences between long and short surveys
  • Generally the longer version had a lower response rate but no extra problems over the short
  • [they should have tested four short surveys versus the one long survey; 98 questions is just as long as 138 questions in my mind]

Rise Of The Machines: DSc Machine Learning In Social Research #AAPOR #MRX #NewMR 

Enjoy my live note taking at AAPOR in Austin, Texas. Any bad jokes or errors are my own. Good jokes are especially mine.  

Moderator: Masahiko Aida, Civis Analytics

Employing Machine Learning Approaches in Social Scientific Analyses; Arne Bethmann, Institute for Employment Research (IAB); Jonas F. Beste, Institute for Employment Research (IAB)

  • [Good job on starting without a computer being ready. Because who needs computers for a talk about data science which uses computers:) ]
  • Demonstration of chart of wages by age and gender which is far from linear, regression tree is fairly complex
  • Why use machine learning? Models are flexible, automatic selection of features and interactions, large toolbox of modeling strategies; but risk is overfitting, not easily interpretable, etc
  • Interesting that you can kind of see the model in the regression tree alone
  • Start by setting the variable of interest to 0 for every case in the sample (e.g., male and female are both set to 0), predict responses for every person, then repeat with the variable set to 1; calculate the AME/APE as the mean difference between the two sets of predictions (see the sketch after this list)
  • Regression tree and linear model end up with very different results
  • R package for average marginal effects – MLAME on github
  • MLR package as well [please ask author for links to these packages]
  • Want to add more functions to these – conditional AME, SE estimation, MLR wrapper
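
[A minimal sketch of the AME idea from this talk, in Python rather than the R packages mentioned; the model, data, and variable names are all made up for illustration.]

```python
# Sketch of the AME/APE procedure described above: force a binary
# feature to 0 for everyone, then to 1 for everyone, predict both
# ways, and average the per-person prediction differences.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 3))                 # hypothetical: age, tenure, female
X[:, 2] = (X[:, 2] > 0.5).astype(float)  # make 'female' a 0/1 indicator
y = 10 + 5 * X[:, 0] + 3 * X[:, 2] + rng.normal(0, 1, 500)

model = DecisionTreeRegressor(max_depth=4).fit(X, y)

def ame_binary(model, X, col):
    """Mean prediction gap with feature `col` set to 1 versus 0."""
    X0, X1 = X.copy(), X.copy()
    X0[:, col] = 0.0
    X1[:, col] = 1.0
    return np.mean(model.predict(X1) - model.predict(X0))

print(ame_binary(model, X, col=2))       # should land near the true effect, 3
```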

Using Big Census Data to Better Understand a Large Community Well-being Study: More than Geography Divides Us; Donald P. Levy, Siena College Research Institute; Meghann Crawford, Siena College Research Institute

  • Interviewed 16000 people by phone, RDD
  • Survey of quality of community, health, safety, financial security, civic engagement, personal well being
  • Used factor analysis to group and test multiple indicators into factors; did the items really rest within each factor (see the sketch after this list) [i love factor analysis. It helps you see groupings that are invisible to the naked eye. ]
  • Mapped out cities and boroughs, some changed over time
  • Rural versus urban have more in common than neighbouring areas [is this not obvious?]
  • 5 groupings – wealthy, suburban, rural, urban periphery, urban core
  • Can set goals for your city based on these scores
  • Simple scoring method based on 111 indicators to help with planning and awareness campaigns; the numbers are made public and shared in reports and on public transportation so the public knows what they are; helps to identify obstacles and enhance quality of life
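
[Since factor analysis came up, here’s a hedged sketch of the grouping step; the indicators and loadings are invented, not the Siena data.]

```python
# Hedged sketch of the factor-analysis step: check whether several
# well-being indicators really rest within a smaller set of factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
latent = rng.normal(size=(1000, 2))            # two hypothetical factors
loadings = np.array([[0.9, 0.0], [0.8, 0.1],   # items 1-2 load on factor 1
                     [0.1, 0.85], [0.0, 0.9]]) # items 3-4 load on factor 2
items = latent @ loadings.T + rng.normal(0, 0.3, (1000, 4))

fa = FactorAnalysis(n_components=2).fit(items)
print(np.round(fa.components_, 2))  # which items group onto which factor
```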

Using Machine Learning to Infer Demographics for Respondents; Noble Kuriakose, SurveyMonkey; Tommy Nguyen, SurveyMonkey

  • Best accuracy for inferring gender is 80%; Google has seen this
  • Use mobile survey, but not everyone fills out the entire demographic survey
  • Works to find twins, people you look like based on app usage
  • Support vector machines try to split a scatter plot where male and female are as far apart as possible 
  • Give a lot of power to the edges to split the data 
  • Usually the data overlaps a ton, you don’t see men on the left and women on the right
  • “Did this person use this app?” Split people based on gender, Pinterest is often the first node because it is the best differentiator right now, Grindr and emoticon use follow through to define the genders well, stop when a node is all one specific gender
  • Men do use Pinterest though, ESPN is also a good indicator but it’s not perfect either, HotOrNot is more male
  • Use time spent per app, apps used, number of apps installed, websites visited, etc
  • Random forest works the best (see the sketch after this list)
  • Feature selection really matters, use a selected list not a random list
  • Really big differences with tree depth
  • Can’t apply the app model to the android model, the apps are different, the use of apps is different
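
[A quick sketch of the approach as I understood it: a random forest over binary “did you use this app?” flags. The app list and the simulated skews (Pinterest female, ESPN male) are my assumptions, not SurveyMonkey’s actual features.]

```python
# Sketch of gender inference from app-usage flags with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

apps = ["pinterest", "espn", "grindr", "hotornot", "news"]
rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(2000, len(apps)))          # "used this app?"
p_female = 1 / (1 + np.exp(-(1.5 * X[:, 0] - 1.2 * X[:, 1])))
y = rng.random(2000) < p_female                         # True = female

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, max_depth=6).fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))                  # holdout accuracy
print(dict(zip(apps, np.round(clf.feature_importances_, 2))))
```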

Dissonance and Harmony: Exploring How Data Science Helped Solve a Complex Social Science Problem; Michael L. Jugovich, NORC at the University of Chicago; Emily White, NORC at the University of Chicago

  • [another speaker who marched on when the computer screens decided they didn’t want to work 🙂 ]
  • Recidivism research, going back to prison
  • Wanted a national perspective of recidivism
  • Offences differ by state; unstructured text fields mean a lot of text interpretation; historical data is included, which messes up the data when it’s stored vertically in some states and horizontally in others
  • Have to account for short forms and spelling errors (kinfe)
  • Getting the data into a usable format takes the longest time and most work
  • Big data is often blue in pictures with spirals [funny comments 🙂 ]
  • Old data is changed and new data is added all the time
  • 30 000 regular expressions to identify all the pieces of text (a sketch follows this list)
  • They seek 100% accuracy rate [well that’s completely impossible]
  • Added supervised learning, used to help improve the speed and efficiency of the manual review process
  • Wanted state-specific and global economy models, over 300 models, built by brute force
  • Want to improve with neural networks and automated database updates
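
[A tiny sketch of what the regex normalization might look like; these three rules are my own illustrative stand-ins for NORC’s ~30,000.]

```python
# Illustrative stand-in for the regex cleanup: normalize short forms
# and misspellings in offense text before coding.
import re

RULES = [
    (re.compile(r"\bkinfe\b", re.I), "knife"),               # misspelling
    (re.compile(r"\bposs\.?(?=\s|$)", re.I), "possession"),  # short form
    (re.compile(r"\bburg\.?(?=\s|$)", re.I), "burglary"),
]

def normalize(offense: str) -> str:
    for pattern, replacement in RULES:
        offense = pattern.sub(replacement, offense)
    return offense.lower().strip()

print(normalize("POSS. of kinfe"))  # -> "possession of knife"
```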

Machine Learning Our Way to Happiness; Pablo Diego Rosell, The Gallup Organization

  • Are machine learning models different/better than theory driven models
  • Using Gallup daily tracking survey
  • Measuring happiness using the ladder scale, best possible life to worst possible life, where do you fall along this continuum, Most people sit around 7 or 8
  • 500 interviews every day, RDD of landlines and mobile, English and Spanish, weighted to national targets and phone lines
  • Most models get an R-squared of .29, probably because they miss interactions we can’t even imagine
  • Include variables that may not be justified in a theory driven model, include quadratic terms that you would never think of; expanded variables from 15 to 194 (see the sketch after this list)
  • [i feel like this isn’t necessarily machine learning but just traditional statistics with every available variable crossed with every other variable included in the process]
  • For an 80% solution, needed only five variables
  • This example didn’t uncover significant unmodeled variables
  • [if machine learning is just as fast and just as predictive as a theory driven model, I’d take the theory driven model any day. If you don’t understand WHY a model is what it is, you can’t act on it as precisely.]
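
[My hedged reconstruction of the variable-expansion idea: add quadratic and interaction terms, then let a sparse model keep the handful that do the predicting. Simulated data, not Gallup’s.]

```python
# Expand a modest variable set with quadratic and interaction terms,
# then use a cross-validated lasso to select the few that matter.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 15))               # 15 base survey variables
y = 7 + X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(0, 1, 1000)  # ladder score

X_big = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
model = LassoCV(cv=5).fit(X_big, y)           # 15 -> 135 expanded terms
kept = np.flatnonzero(model.coef_)
print(len(kept), "of", X_big.shape[1], "terms kept; R^2 =",
      round(model.score(X_big, y), 2))
```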

Panel: Public Opinion Quarterly Special – Survey research today and tomorrow #AAPOR #MRX #NewMR 

Live note taking at #AAPOR in Austin, Texas. Any errors or bad jokes are my own.

Moderator: Peter V. Miller, U.S. Census Bureau 

  • He is accepting submissions of 400 words regarding these papers, to be published in an upcoming issue, due June 30, send to peter.miller@census.gov

Theory and Practice in Nonprobability Surveys: Parallels Between Causal Inference and Survey Inference; Andrew Mercer, Pew Research Center; Frauke Kreuter, University of Maryland; Scott Keeter, Pew Research Center; Elizabeth Stuart, Johns Hopkins University
Discussant: Jill DeMatteis, Westat

  • Noncoverage – when people can’t be included in a survey
  • The problem is when they are systematically biased
  • Selection bias is not as useful in a nonprobability sample as there is no sampling frame, and maybe not even a sample
  • Need a more general framework
  • Random selection and random treatment assignment is the best way to avoid bias
  • Need exchangeability – know all the confounding, correlated variables
  • Need positivity – everyone needs to be able to get any of the treatments, coverage error is a problem
  • Need composition – everyone needs to be in the right proportions 
  • You might know the percent of people who want to vote one way, and you also know you have more of a certain demographic in your group; but it’s never just one demographic group, it’s ten or twenty or 100 important demographic and psychographic variables that might have an association with the voting pattern
  • You can’t weight a demographic group up [pay attention!]
  • We like to assume we don’t have any of these three problems, but you can never know if you’ve met them all; we hope random selection accomplishes this for us, or with quota selection we hope it is met by design
  • One study was able to weight using census data a gigantic sample and the results worked out well [makes sense if your sample is so ridiculously large that you can put bigger weights on a sample of 50 000 young men]
  • Using demographics and psychographics, e.g., religion and political affiliation, helps to create more accurate results
  • This needs to be done in probability and nonprobability samples alike (a raking sketch follows this list)
  • You can never be certain you have met all the assumptions
  • Think about confounding variables during survey design, not just demographics, tailored to the research question at hand
  • Confounding is more important than math – it doesn’t matter what statistic you use, if you haven’t met the requirements first you’re in trouble
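
[For readers who haven’t seen raking: a minimal sketch of iterative proportional fitting on two invented binary margins. The margins and sample skews are made up.]

```python
# Minimal raking sketch: reweight a sample to hit known population
# margins on two binary variables at once.
import numpy as np

rng = np.random.default_rng(4)
n = 1000
female = (rng.random(n) < 0.35).astype(int)  # sample under-represents women
young = (rng.random(n) < 0.60).astype(int)   # and over-represents the young
targets = [(female, 0.51), (young, 0.40)]    # hypothetical population margins

w = np.ones(n)
for _ in range(25):                          # cycle until margins converge
    for var, target in targets:
        current = np.average(var, weights=w)
        w *= np.where(var == 1, target / current, (1 - target) / (1 - current))

print(round(np.average(female, weights=w), 3),
      round(np.average(young, weights=w), 3))  # both should hit the targets
```

Each pass nudges one margin and slightly disturbs the other, which is why the loop cycles until both settle.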

Apples to Oranges or Gala vs. Golden Delicious? Comparing Data Quality of Nonprobability Internet Samples to Low Response Rate Probability Samples

Discussant: George Terhanian, NPD Group; David Dutwin, SSRS; Trent Buskirk, Marketing Systems Group

  • N > 80,000; 9% response rate for the probability sample [let’s be real here, you can’t have a probability sample with humans]
  • The matching process is not foolproof: uses categorical matching with a matching coefficient, randomly selected when there was a tie (see the sketch after this list)
  • Looked at absolute bias, standard deviation, and overall mean absolute bias
  • Stuck with demographic variables, conditional variables, nested within gender, age, race or region
  • Weighted version was good, but matched and raked was even closer, variability is much less with the extra care
  • Nonprobability telephone surveys consistently had less variability in the errors
  • Benchmarks are essential to know what the error actually is; you can’t judge the bias without a benchmark
  • You can be wrong, or VERY wrong and you won’t know you’re wrong
  • Low response rate telephone gets you better data quality, much more likely you’re closer to truth
  • Cost is a separate issue of course
  • Remember fit for purpose – in politics you might need reasonably accurate point estimates 
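
[A sketch of categorical matching as described: count agreements on demographic categories, break ties at random. The variables and data are invented for illustration.]

```python
# For each target case, pick the pool case agreeing on the most
# demographic categories (the "matching coefficient"), randomly on ties.
import numpy as np

rng = np.random.default_rng(5)
cols = ["gender", "age_group", "race", "region"]
frame = rng.integers(0, 4, size=(200, len(cols)))  # target cases
pool = rng.integers(0, 4, size=(1000, len(cols)))  # nonprobability pool

matches = []
for case in frame:
    agreement = (pool == case).sum(axis=1)         # categories agreed on
    best = np.flatnonzero(agreement == agreement.max())
    matches.append(int(rng.choice(best)))          # random tie-break
print(matches[:10])                                # matched pool indices
```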

Audience discussion

  • How do you weight polling research when political affiliation is part of both equations? What is the benchmark? You can’t use the same variables for weighting and measuring and benchmarking or you’re just creating the results you want to see
  • If we look at the core demographics, maybe we’ve looked at something that was important [love that statement, “maybe” because really we use demographics as proxies of humanity]
  • [if you CAN weight the data, should you? If you’re working with a small sample size, you should probably just get more sample. If you’re already dealing with tens of thousands, then go ahead and make those small weighting adjustments]

2016: The year of the outsider #PAPOR #MRX 

live blogged at #PAPOR in San Francisco. Any errors or bad jokes are my own.

The Summer of Our Discontent, Stuart Elway, Elway Research

  • regarding state of Washington
  • it’s generally democratic
  • between elections, more people are independents and then they drift to democrat
  • independents are more socially liberal
  • has become more libertarian
  • don’t expect a rebellion to start in Washington state
  • [sorry, too many details for me to share well]

Californians’ Opinion of Political Outsiders, Mark Baldassare, PPIC

  • California regularly elects outsiders – Reagan, Schwarzenegger
  • flavour is often outsider vs insider, several outsiders have run recently
  • blog post on the topic – http://ppic.org/main/blog_detail.asp?i=1922
  • they favour new ideas over experience
  • 3 things are important – approval ratings of elected officials, where people who prefer outsiders give officials lower approval, and negative attitudes toward the two-party system
  • majority think a third party is needed – more likely to be interested in new ideas over experience
  • [sorry, too many details for me to share well]

Trump’s Beguiling Ascent: What 50-State Polling Says About the Surprise GOP Frontrunner, Jon Cohen & Kevin Stay, SurveyMonkey

  • 38% of people said they’d be scared if Trump is the GOP nominee
  • 25% would be surprised
  • 24% would be hopeful
  • 21% would be angry
  • 14% would be excited
  • List is very different as expected between democrats and republicans, but not exactly opposite
  • quality polling requires scale, heterogeneity, and correctable self-selection bias
  • most important quality for candidates is standing up for principles, strong leader, honest and trustworthy – experience is lowest on the list
  • Views on Trump’s Muslim statement change by the minute – at the time of this data: 48% approve, 49% disapprove, split as expected by party
  • terrorism is the top issue for Republicans; jobs AND terrorism are top for independents; jobs is top for Democrats
  • for Republicans – the day before Paris, 9% said terrorism was top; after Paris, 22%
  • support for Cruz is increasing
  • half of Trump voters are absolutely certain they will vote for Trump; but only 17% of Bush voters are absolutely certain
  • among Republicans, Cruz is the second choice even among Trump voters
  • Trump has fewer voters who go to religious services weekly, least of all candidates; Carson and Cruz are on the high end
  • Trump voters look demographically the same but Carson has fewer male voters and Cruz has fewer female voters
  • Trump voters are much less educated, Rubio voters are much more educated

Improvements to survey modes #PAPOR #MRX 

What Are They Thinking? How IVR Captures Public Opinion For a Democracy, Mary McDougall, Survox

  • many choices, online is cheapest followed by IVR followed by phone interview
  • many still do not have internet – seniors, non-white, low income, no high school degree
  • phone can help you reach those people, can still do specific targeting
  • good idea to include multiple modes to test for any mode effects
  • technology is no longer a barrier for choosing a data collection strategy
  • ignoring cell phones is poor sampling
  • use labor strategically to allow IVR
  • tested IVR on political polling, 300 completes in 2.5 hours, met the quotas, once a survey was started it was generally completed

The Promising Role of Fax in Surveys of Clinical Establishments: Observations from a Multi-mode Survey of Ambulatory Surgery Centers, Natalie Teixeira, Anne Herleth, and Vasudha Narayanan, Westat; Kelsey O’Yong, Los Angeles Department of Public Health

  • we often want responses from an organization, not an individual
  • 500 medical facilities, 60 questions about staffing and infection control practices
  • used multimode – telephone, postal, web, and fax
  • many people requested the survey by fax and many people did convert modes
  • because fax was so successful, reminder calls were combined with fax automatically and saw successful conversions to this method
  • this does not follow the current trend
  • fax is immediate and keeps gatekeepers engaged, maybe it was seen as a novelty
  • [“innovative fax methodology” so funny to hear that phrase. I have never ever ever considered fax as a methodology. And yet, it CAN be effective. 🙂 ]
  • options to use “mass” faxing exist

The Pros and Cons of Persistence During Telephone Recruitment for an Establishment Survey, Paul Weinfurter and Vasudha Narayanan, Westat

  • half of restaurant issues are employees coming to work ill, new law was coming into effect regarding sick pay
  • recruit 300 restaurants; from each, 1 manager, 1 owner, and a couple of food preparers
  • telephone recruitment and in person interviews, English, Spanish, Mandarin, 15 minutes, $20 gift card
  • most of the time they couldn’t get a manager on the phone and they received double the original sample of restaurants to contact
  • it was assumed that restaurants would participate because the sponsor was the health inspectors, but it was not mandatory and they couldn’t be told it was mandatory; there were many scams related to this so people just declined; also, not all of the health inspectors were even aware of the study
  • 73% were unreachable after 3 calls, hard to get a person of authority during open hours
  • increased call attempts to five times, but continued on when they thought recruitment was likely
  • recruited 77 more from people who were called more than 5 times
  • as a result, data were not limited to a quicker to reach sample
  • people called up to ten times remained noncommittal and never were interviewed
  • there wasn’t an ideal number of calls to get maximum recruits and minimum costs
  • but the method wasn’t really objective, the focus was on restaurants that seemed like they might be reachable
  • possibly more representation than if they had stopped all their recruitment at five calls
  • [would love to see results crossed by number of attempts]

Uses of survey and polling data collection: practical and ethical implications #PAPOR #MRX 

Live blogged at #PAPOR in San Francisco. Any errors or bad jokes are my own.

Are California’s Registered Independents Shy Partisans?, David Kordus, Public Policy Institute of California

  • number of independent voters has doubled in the last twenty years
  • automatic voter registration via the DMV will add new voters
  • independents are not one homogeneous group
  • on average, they really are the middle between republicans and democrats, not necessarily more moderate

Exploring the Financial Landscape Facing Veterans in Nevada: Financial Literacy, Decision-making, and Payday Loans, Justin S. Gardner & Christopher Stream, UNLV, Runner-Up Student Paper Competition Winner

  • payday lending only started in the 1990s, more of them in military areas
  • the largest security clearance issues were financial, and interest rates on payday loans were capped
  • 375 respondents, lots of disabled veterans who can’t work
  • use as medical loans is very low, many use it to pay off student loans or other debts, paying for housing also major use
  • most learned about it from tv commercials, or friends and family. If family are encouraging them to do this, something needs to change
  • people who don’t feel prepared for emergencies are more likely to use
  • majority had salary under $50 000, likely to need another loan in the future
  • 20% had used payday loans; it is cyclical, once you’re in the cycle it’s difficult to break out of it
  • half of people could walk there from their home, didn’t need a car

What Constitutes Informed Consent? Understanding Respondents’ Need for Transparency, Nicole Buttermore, Randall Thomas, Frances M. Barlas, & Mansour Fahimi, GfK

  • biggest threat is release of a participant’s name, but should participants be told the sponsor of the study?
  • problem is nonresponse and survey bias if people know who the sponsor is
  • 6% thought taking a survey could have a negative impact on their life – worried about data breach, who has access to data, company might be hacked, improper use of data, questions might make me feel uncomfortable
  • 95% think surveys have no or minimal risk to my mental health – about 23% have quit a survey because it made them feel uncomfortable
  • about 20% said a survey has made them feel very uncomfortable – asked about race, income, too much personal information, can’t give the exact answer they want to, feel political surveys are slanted, surveys are boring, don’t know how to answer the question
  • respondents want to know how personal information will be used and how privacy will be protected
  • want to know how long it will take, the topic, and the points for it
  • about twenty percent want to know company doing the research and company paying for the research

Recent Changes to the Telephone Consumer Protection Act, Bob Davis, Davis Research

  • this is not legal advice
  • TCPA issue is regarding calls using automated telephone equipment
  • lawyers like to threaten to sue but settle
  • vicarious liability – responsibility of the superior for the acts of their subordinates, i.e., contract work, sponsor of research
  • any phone with a redial button is an autodialer – so only the old phones where you stick your finger in the hole and turn the dial is not an autodialer
  • if you can get permission, then get it
  • regularly scrub your landline system to make sure there are no cell phones in it
  • use a non-predictive dialing system
  • ask that suppliers are TCPA compliant
  • international partners dialing into the US need to follow the rules as well
  • talk with your lawyer ahead of time so you can say you have already talked to a lawyer and they don’t think you are weak

Analysis, design, and sampling methods #PAPOR #MRX 

Live blogged at #PAPOR in San Francisco. Any errors or bad jokes are my own.

Enhancing the use of Qualitative Research to Understand Public Opinion, Paul J. Lavrakas, Independent Consultant; Margaret R. Roller, Roller Research

  • thinks research has become too quantitative because qual is typically not as rigorous, but this should and can change
  • public opinion is not a number generated from polls; polls are imperfect and limited
  • AAPOR has lost sight of this [you’re a brave person to say this! very glad to hear it at a conference]
  • we need more balance; we aren’t a survey research organization, we are a public opinion organization, and our conference programs are extremely biased toward the quantitative
  • there should be criteria to judge the trustworthiness of research – was it fit for purpose
  • credibility, transferability, dependability, confirmability
  • all qual research should be credible, analyzable, transparent, useful
  • credible – sample representation and data collection
  • do qual researchers seriously consider non-response bias?
  • credibility – scope deals with coverage design and nonresponse, data gathering – information obtained, researcher effects, participant effects
  • analyzability – intercoder reliability, transcription quality
  • transparency – thick descriptions of details in final documents

Comparisons of Fully Balanced, Minimally Balanced, and Unbalanced Rating Scales, Mingnan Liu, Sarah Cho, and Noble Kuriakose, SurveyMonkey

  • there are many ways to ask the same question
  • is it a good time or a bad time? – fully balanced
  • is it a good time or not? – minimally balanced
  • do you or do you not think it is getting better?
  • are things headed in the right direction?
  • [my preference – avoid introducing any balancing in the question, only put it in the answer. For instance: What do you think about buying a house? Good time, Bad time]
  • results – effect sizes are very small, no differences between the groups
  • in many different questions tested, there was no difference in the formats

Conflicting Thoughts: The Effect of Information on Support for an Increase in the Federal Minimum Wage Level, Joshua Cooper & Alejandra Gimenez, Brigham Young University, First Place Student Paper Competition Winner

  • Used paper surveys for the experiment, 13000 respondents, 25 forms
  • Would you favor or oppose raising the minimum wage?
  • Some were told how many people would increase their income, some were told how many jobs would be lost, some were told both
  • Those given negative info opposed a wage increase, those given positive info favored a wage increase, and people who were told both opposed a wage increase
  • independents were more likely to say don’t know
  • negative info strongly outweighs the good across all types of respondents regardless of gender, income, religion, party ID
  • jobs matter, more than anything

Mobile devices and modular survey design by Paul Johnson #PAPOR #MRX 

Live blogged at the #PAPOR conference in San Francisco. Any errors or bad jokes are my own.

  • now we can sample by individuals, phone numbers, location, transaction
  • can reach by an application, email, text, or IVR, but make sure you have permission for the method you use (TCPA)
  • 55+ prefer to dial an 800 number for a survey, young people prefer an SMS contact method; important to provide as many methods as possible so people can choose the method they prefer
  • mobile devices give you lots of extra data – purchase history, health information, social network information, passive listening – make sure you have permission to collect the information you need; give something back in terms of sharing results or hiding commercials
  • Over 25% of your sample is already taking surveys on a mobile device; you should check what device people are using and skip questions that won’t render well on small screens
  • remove unnecessary graphics, background templates are not helpful
  • keep surveys under 20 minutes [i always advise 10 minutes]
  • use large buttons, minimal scrolling; never scroll left/right
  • avoid using radio buttons, aim for large buttons instead
  • for open ends, put a large box to encourage people to use a lot of words
  • mobile open ends have just as much content although there may be fewer words, more acronyms, more profanity
  • be sure to use a back button if you use auto-next
  • if you include flash or images be sure to ask whether people saw the image
  • consider modularizing your surveys, ensure one module has all the important variables, give everyone a random module, let people answer more modules if they wish
  • How to fill in missing data – data imputation or respondent matching (a sketch follows this list) [both are artificial data remember! you don’t have a sense of truth. you’re inferring answers to infer results. Why are we SOOOOO against missing data?]
  • most people will actually finish all the modules if you ask politely
  • you will find differences between modular and not but the end conclusions are the same [seriously, in what world do two sets of surveys ever give the same result? why should this be different?]
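
[A hedged sketch of respondent matching for a skipped module, hot-deck style: borrow the answer from the most similar respondent who did complete it. All data here is simulated, not Johnson’s method verbatim.]

```python
# Fill a skipped module by donating the answer from the nearest
# respondent on the core module everyone answered.
import numpy as np

rng = np.random.default_rng(6)
n = 500
core = rng.integers(0, 5, size=(n, 4))       # core questions everyone gets
module_b = rng.integers(1, 8, size=n).astype(float)
module_b[rng.random(n) < 0.4] = np.nan       # 40% were not shown module B

donors = np.flatnonzero(~np.isnan(module_b))
for i in np.flatnonzero(np.isnan(module_b)):
    distance = np.abs(core[donors] - core[i]).sum(axis=1)
    nearest = donors[np.argmin(distance)]    # closest completed respondent
    module_b[i] = module_b[nearest]          # donate their answer

print(int(np.isnan(module_b).sum()), "missing values remain")
```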

If math is hard, you can always do qualitative research #MRX 

Yup, I heard that from a speaker on stage at the recent AAPOR conference. You see, because if you’re smart, then you’re probably doing quantitative research. Because quant is for smart people and qual is for dumb people. Because qual requires no skills, or at least the skills are basic enough for non-mathematical people to muddle through. Because qual isn’t really a valid type of research. Because nonprobability research is worthless (yup, I heard that too).

Perhaps I’ve taken the statement out of context or misrepresented the speaker. Perhaps. But that’s no excuse. Qual and quant both require smart people to be carefully trained in their respective methods. Each research method is appropriate and essential given the right research objective. 

The marketing research industry has largely moved past this pointless debate over whose method is better and right (neither is). Now it’s time for the polling industry to do the same.
