
Mobile devices and modular survey design by Paul Johnson #PAPOR #MRX 

Live blogged at the #PAPOR conference in San Francisco. Any errors or bad jokes are my own.

  • now we can sample by individuals, phone numbers, location, transaction
  • can reach people by app, email, text, or IVR but make sure you have permission for the method you use (TCPA)
  • 55+ prefer to dial an 800 number for a survey, younger people prefer an SMS contact method; important to provide as many methods as possible so people can choose the method they prefer
  • mobile devices give you lots of extra data – purchase history, health information, social network information, passive listening – make sure you have permission to collect the information you need; give something back in terms of sharing results or hiding commercials
  • Over 25% of your sample is already taking surveys on a mobile device; you should check what device people are using and skip questions that won't render well on small screens
  • remove unnecessary graphics, background templates are not helpful
  • keep surveys under 20 minutes [i always advise 10 minutes]
  • use large buttons, minimal scrolling; never scroll left/right
  • avoid using radio buttons, aim for large buttons instead
  • for open ends, put a large box to encourage people to use a lot of words
  • mobile open ends have just as much content although there may be fewer words, more acronyms, more profanity
  • be sure to use a back button if you use auto-next
  • if you include flash or images be sure to ask whether people saw the image
  • consider modularizing your surveys, ensure one module has all the important variables, give everyone a random module, let people answer more modules if they wish (see the sketch after this list)
  • How to fill in missing data  – data imputation or respondent matching [both are artificial data remember! you don’t have a sense of truth. you’re inferring answers to infer results.   Why are we SOOOOO against missing data?]
  • most people will actually finish all the modules if you ask politely
  • you will find differences between modular and not but the end conclusions are the same [seriously, in what world do two sets of surveys ever give the same result? why should this be different?]
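
A minimal sketch of the random-module assignment idea above, assuming one hypothetical core module plus a set of optional modules (the module names and routing are my own illustration, not the presenter's implementation):

import random

CORE = "core"                               # module holding all the must-have variables
OPTIONAL = ["media", "brands", "shopping"]  # remaining modules (assumed names)

def assign_modules(respondent_id):
    """Give every respondent the core module plus one randomly chosen
    optional module; they can volunteer for more afterwards."""
    rng = random.Random(respondent_id)      # deterministic per respondent
    return [CORE, rng.choice(OPTIONAL)]

# Example: respondent 1017 gets the core module and one random optional module
print(assign_modules(1017))                 # e.g. ['core', 'brands']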

A “How-To” Session on Modularizing a Live Survey for Mobile Optimization by Chris Neal and Roddy Knowles #FOCI14 #MRX

Live blogging from the #FOCI14 conference in Universal City. Any errors or bad jokes are my own.

A “How-To” Session on Modularizing a Live Survey for Mobile Optimization
Chris Neal, CHADWICK MARTIN BAILEY 
& Roddy Knowles, RESEARCH NOW

  • conducted a modularized survey for smartphone survey takers, studied hotels for personal travel and tablets for personal use, excluded tablet takers to keep the methodology clean
  • people don’t want to answer a 20 minute survey on a phone but clients have projects that legitimately need 20 minutes of answers
  • data balanced and weighted to census
  • age was the  biggest phone vs computer difference
  • kept survey to 5 minutes, asked no open ended questions, minimize the word count, break grids into individual questions to avoid burden of scrolling and hitting a tiny button with a giant finger
  • avoid using a brand logo even though you really want to. space is at a premium
  • avoid flash on your surveys, avoid images and watermarks, avoid rich media even though it’s way cool – they don’t always work well on every phone
  • data with more variability is easier to impute – continuous works great, scale variables work great, 3 ordinal groups doesn’t work so well, nominal doesn’t work so well at all
  • long answer options lists are more challenging – vertical scrolling on a smartphone is difficult, affects how many options respondents choose, ease of fewer clicks often wins out
  • branching is not your friend. if you must branch, have the survey programmers account for the missing data ahead of time, impute all the top level variables and avoid imputing the bottom level branched variables
  • Predictive mean matching works better than simply using a regression model to replace missing data
  • hot decking (or data stitching, which combines several people into one) replaces missing data with that from someone who looks the same, worked really well though answers to “other” or “none of the above” didn’t work as well (see the sketch after this list)
  • hot decking works better if you have nominal data
  • good to have a set of data that EVERYONE answers
  • smartphone survey takers aren’t going away, we need to reach people on their own terms, we cannot force people into our terms
  • we have lots of good tools and don’t need to reinvent the wheel. [i.e., write shorter surveys gosh darn it!!!]
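
A rough sketch of the hot-deck idea described above, assuming pandas and a simple Euclidean distance on a couple of matching variables (the column names and data are made up for illustration, not the presenters' code):

import numpy as np
import pandas as pd

def hot_deck(df, target, match_cols):
    """Fill missing values of `target` with the value from the most similar
    complete respondent (the donor) on the matching variables."""
    donors = df[df[target].notna()]
    out = df.copy()
    for idx, row in df[df[target].isna()].iterrows():
        dist = ((donors[match_cols] - row[match_cols]) ** 2).sum(axis=1)
        out.loc[idx, target] = donors.loc[dist.idxmin(), target]
    return out

# Example: a skipped brand rating is borrowed from the closest donor on age and usage
data = pd.DataFrame({
    "age":          [24, 31, 45, 52],
    "usage":        [5, 3, 2, 4],
    "brand_rating": [7, np.nan, 4, np.nan],
})
print(hot_deck(data, "brand_rating", ["age", "usage"]))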


Big Things from Little Data by Sherri Stevens and Frank Kelly #CASRO #MRX

Live blogging from the CASRO Digital conference in San Antonio, Texas. Any errors or bad jokes are my own.

 “Big Things from Little Data”

Sherri Stevens
While great effort has been expended on improving how we collect online data, there has been insufficient attention on making full use of the data collected. Partial completes of long surveys are discarded. But if there was an effective method to salvage this data, we could increase the average sample size for any given question in a survey by 20% for no additional cost. As an extension of previous research around survey modularization, this research evaluates the potential of partial completes in a modularized and randomized survey design.

  • Frank Kelly, SVP, Global Marketing and Strategy, Lightspeed Research
  • Sherri Stevens, Vice President, Global Innovation, Millward Brown

Frank Kelly
  • Online surveys averaged around 20 minutes for the last ten years
  • 65% of smartphone users not willing to spend more than 15 minutes on a survey.
  • Almost half of the time spent on completed surveys is on surveys that are more than 25 minutes long.
  • Longer surveys have higher drop-out rates: 12% on surveys up to ten minutes, 28% on surveys of 31 minutes or more. Why can’t we use the partial data?
  • Drop-out rates on mobile are way higher than on computer: 46% on smartphone, 25% on tablet, 12% on computer.
  • New panelists have much higher drop out rates
  • Around 40% of new panel sign-ups happen via mobile. People think it makes sense and then realize it’s not that good after all.
  • A fully optimized survey still took 34% longer to complete on the phone than on the computer.
  • We could charge more for long surveys, tell people to write shorter surveys, chunk surveys into pieces and impute or fuse
  • Proposal – don’t ask everybody everything. work with human nature, encourage responses through smaller surveys.
  • Tried various orders of the modules; not all had the same sample size, depending on the importance of the module
  • 1000 completes at 26 minutes cost $6500; 1400 completes at 17 minutes cost $6500; 1000 completes at 19 minutes cost $5000. Modular design allowed them to save some costs (see the rough arithmetic after this list).
  • Incompletes could be by module, by skip pattern, or by drop-outs
  • High incidence study of social media, common brands, respondent info
  • In general 17% of people dropped out, as in this study. But of those dropouts, 35% completed at least one section.
  • What drives drop out? boring question or topic, hard questions, extended screening, low tolerance for survey taking
  • Survey enjoyability was higher with the modular survey, and satisfaction with survey length was also higher in the modular survey
  • Respondents reported more social media activities and brand engagement in the modular survey
  • Richer open ends in module survey
  • It’s not fusion and Bayesian networks; it’s a generally applicable model, but it still requires careful design.
  • Think about partial completes as modular completes
  • Look for big positive effect on fieldwork costs and data quality
  • Are there better question types to do this with? How to randomize modules best?
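
Rough cost-per-complete arithmetic for the three designs quoted above (the completes, lengths, and prices are from the talk; the per-complete figures are my own back-of-the-envelope calculation):

# (completes, minutes, cost in USD) for each design mentioned in the talk
designs = [
    ("single 26-minute survey", 1000, 26, 6500),
    ("modular, more completes", 1400, 17, 6500),
    ("modular, lower cost",     1000, 19, 5000),
]
for name, completes, minutes, cost in designs:
    print(f"{name}: {completes} completes at {minutes} min, "
          f"${cost / completes:.2f} per complete")
# -> $6.50, $4.64, and $5.00 per complete respectively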


Cyborgs vs Monsters in modularizing surveys: Edward Paul Johnson and Lynn Siluk #CASRO #MRX

… Live blogging from beautiful San Francisco…

“Cyborgs vs. Monsters: Assembling Modular Surveys to Create Complete Datasets”

By Edward Paul Johnson, Director of Analytics, SSI and Lynn Siluk, Vice President, Marketing Sciences, Gongos Research

  • Cyborgs are like data imputation – estimates how a respondent might have replied based on their characteristics
  • Monsters are like respondent matching – two respondents who have similar characteristics are turned into one complete respondent (see the sketch after this list)
  • Abandon rates on the mobile test cells were very high because of technical difficulties
  • Modular surveys had slightly higher data quality, 4% vs 8% data quality issues
  • 72% of respondents were willing to take all the modules, 19% did two modules, 9% did only 1 of the modules
  • Same results came out of each method of surveying, agreed in 80% of the cases
  • One difference is the segment sizes were different because there were different numbers of people completing the different modules
  • Splitting questions across modules increases the separation between them and thereby reduces the strength of the correlations [e.g., think about answering two questions immediately after each other vs today and then next week, you won’t answer them the same way]
  • We need to allow respondents to choose the mode they want to take the survey on
  • Within respondent modularization is key to reducing holes in the data
  • Advanced analytics are feasible
  • Both fusion techniques work with unique advantages
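
An illustrative sketch of the “monster” (respondent-matching) approach described above, assuming pandas and made-up matching variables and module columns; it is not the authors' actual matching procedure:

import pandas as pd

def stitch(partials, match_cols, module_cols):
    """Combine partial respondents who share the same matching characteristics
    into one complete record, taking the first non-missing answer per module."""
    merged = []
    for _, group in partials.groupby(match_cols):
        record = group.iloc[0].copy()
        for col in module_cols:
            answered = group[col].dropna()
            if not answered.empty:
                record[col] = answered.iloc[0]
        merged.append(record)
    return pd.DataFrame(merged)

# Example: two look-alike partial respondents, each covering a different module,
# are stitched into one complete record
partials = pd.DataFrame({
    "age_band": ["25-34", "25-34"],
    "gender":   ["F", "F"],
    "module_a": [6, None],
    "module_b": [None, 3],
})
print(stitch(partials, ["age_band", "gender"], ["module_a", "module_b"]))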