Tag Archives: survey

How do speakers see themselves? A survey of Speaker perceptions

The entirety of this post is available on the Gender Avenger website. 


Why are women underrepresented as speakers?

Why are women underrepresented as speakers, particularly at the conferences I go to where half of the audience members are women? Does fear chase them off the stage in disproportionate numbers?

I’ve pondered this question for years but I never knew if my hypothesis was grounded in fact or in stereotype. Fortunately, or unfortunately as the case may be, the opportunity presented itself and here we are pondering real data from a survey I did of 297 male and 252 female computer or data scientists, and market researchers aged 25 to 49 — people who ought to be on their way to securing spots on the conference circuit.

One of the questions in the survey asked people to imagine speaking at an event and to choose any attributes that would describe themselves as a conference speaker. I was careful to include an equal number of both positive and negative attributes so as to avoid leading people to choose a greater percentage of positive (or negative) items.

Curious how men and women viewed themselves? I know you are. Read the entirety of this post on the Gender Avenger website. If you’re brave enough.

Fusing Marketing Research and Data Science by John Colias at @DecisionAnalyst

Live note-taking of the November 9, 2016 webinar. Any errors are my own.

  • Survey insights have been overshadowed in recent years, market research is struggling to redefine itself, there is an opportunity to combine big data and surveys
  • Preferences are not always observable in big data, includes social data, wearable data
  • Surveys can measure attitudes, preferences, and perceptions
  • Problem is organizational – isolation, compartmentalization of market research and big data functions
  • Started with a primary research survey about health and nutrition, one question is how often do you consume organic foods and beverages; also had block-group census data from American community survey five-year summary data with thousands of variables
  • Fused survey data and block group data using location of survey respondent from their address and matched to block group data, brought in geo data for that block group
  • Randomly split the data, built a predictive model on the training data, determined predictive accuracy using the validation data (the hold-out data), 70% of data for model development, 30% for validation – an independent, objective model
  • Created a lift curve, predictive model identified consumers of organic foods more than 2.5 times better than chance
  • When the predictive model’s curve bows out from the random model’s, you have achieved success
  • Which variables were most predictive, not that they’re correlated but they predict behaviour – 26 or older, higher income, higher education, less likely Hispanic; this may be known but now they have a model to predict where these people are
  • Can map against actual geography and plot distances to stores
  • High-tech truck research
  • Used a choice modeling survey, design attributes and models to test, develop customer level preferences for features of the truck
  • Cargo space, hitching method, back up cameras, power outlets, load capacity, price
  • People chose preferred model from two choices, determined which people are price sensitive, or who value carrying capacity, biggest needs were price, capacity, and load
  • How to target to these groups of people
  • Fused in external data like previously, but now predicting based on choice modeling not based on survey attitudes, lift curve was again bowed to left, 1.8 times better than chance – occupation, education, income, and household size were the best predictors
  • [these are generic results – rich people want organic food and trucks, but point taken on the method. If there is a product whose users are not obvious, then this method would be useful]
  • Fusion can use primary and secondary data, also fuses technology like R dashboards and google maps, fuses survey and modeling, fuses consumer insights database marketing and big data analytics
  • Use this to find customers whose preferences are unobserved, improve targeting of advertising and promotions, optimize retail location strategies, predict preferences and perceptions of consumers, collaboration of MR departments with big data groups would benefit both entities
  • In the UK and Spain, demographics are more granular; GPS tracking can be used in less developed countries
  • Used R to query public data set, beauty of open-source code and data
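The lift calculation behind notes like “2.5 times better than chance” above can be sketched in a few lines. This is a minimal illustration with made-up toy data, not Decision Analyst’s actual pipeline: the lift at a given depth is simply the response rate among the top-scored consumers divided by the overall response rate.

```python
import random

def lift_at(scored, fraction):
    """Lift: response rate among the top-scored fraction vs. the overall rate."""
    ranked = sorted(scored, key=lambda pair: pair[0], reverse=True)
    k = max(1, int(len(ranked) * fraction))
    top_rate = sum(label for _, label in ranked[:k]) / k
    overall_rate = sum(label for _, label in ranked) / len(ranked)
    return top_rate / overall_rate

# Toy data: label 1 = buys organic; the model score is noisy but informative.
random.seed(42)
scored = []
for _ in range(1000):
    buys = 1 if random.random() < 0.2 else 0
    score = random.gauss(1.0 if buys else 0.0, 0.5)
    scored.append((score, buys))

print(lift_at(scored, 0.1))  # lift in the top decile vs. chance
```

On real data the scores would come from the fused survey/census model; plotting lift at every depth from 0% to 100% gives the bowed-out lift curve described in the talk.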

People Aren’t Robots – New questionnaire design book by Annie Pettit

I’ve been busy writing again!

People Aren’t Robots: A practical guide to the psychology and technique of questionnaire design is the best 2 bucks you’ll ever spend!

Questionnaire design is easy until you find yourself troubled with horrid data quality. The problem, as with most things, is that there is an art and science to designing a good quality and effective questionnaire and a bit of guidance is necessary. This book will give you that guidance in a short, easy to read, and easy to follow format. But how is it different from all the other questionnaire design books out there?

  • It gives practical advice from someone who has witnessed more than fifteen years of good and poor choices that experienced and inexperienced questionnaire writers make. Yes, even academic, professional researchers make plenty of poor questionnaire design choices.
  • It outlines how to design questions while keeping in mind that people are fallible, subjective, and emotional human beings. Not robots. It’s about time someone did this, don’t you think?

This book was written for marketers, brand managers, and advertising executives who may have less experience in the research industry.

It was also written to help academic and social researchers write questionnaires that are better suited for the general population, particularly when using research panels and customer lists.

I hope that once you understand and apply these techniques, you think this is the best $2 you’ve ever spent and that you hear your respondents say “this is the best questionnaire I’ve ever answered!”

Early reviews are coming in!

  • For the researchers and entrepreneurs out there, here’s a book from an expert. Pick it up (& read & implement). 👌
  • Congrats, Annie! An engagingly written and succinct book, with lots of great tips!
  • Congratulations! It’s a joy watching and learning from your many industry efforts.
  • It looks great!!! If I could, I would buy many copies and give to many people I know who need some of your advice.🙂

Improvements to survey modes #PAPOR #MRX 

What Are They Thinking? How IVR Captures Public Opinion For a Democracy, Mary McDougall, Survox

  • many choices, online is cheapest followed by IVR followed by phone interview
  • many still do not have internet – seniors, non-white, low income, no high school degree
  • phone can help you reach those people, can still do specific targeting
  • good idea to include multiple modes to test for any mode effects
  • technology is no longer a barrier for choosing a data collection strategy
  • ignoring cell phones is poor sampling
  • use labor strategically to allow IVR
  • tested IVR on political polling, 300 completes in 2.5 hours, met the quotas, once a survey was started it was generally completed

The Promising Role of Fax in Surveys of Clinical Establishments: Observations from a Multi-mode Survey of Ambulatory Surgery Centers, Natalie Teixeira, Anne Herleth, and Vasudha Narayanan, Westat; Kelsey O’Yong, Los Angeles Department of Public Health

  • we often want responses from an organization, not an individual
  • 500 medical facilities, 60 questions about staffing and infection control practices
  • used multimode – telephone, postal, web, and fax
  • many people requested the survey by fax and many people did convert modes
  • because fax was so successful, reminder calls were combined with fax automatically and saw successful conversions to this method
  • this does not follow the current trend
  • fax is immediate and keeps gatekeepers engaged, maybe it was seen as a novelty
  • [“innovative fax methodology” so funny to hear that phrase. I have never ever ever considered fax as a methodology. And yet, it CAN be effective. 🙂 ]
  • options to use “mass” faxing exist

The Pros and Cons of Persistence During Telephone Recruitment for an Establishment Survey, Paul Weinfurter and Vasudha Narayanan, Westat

  • half of restaurant issues are employees coming to work ill, new law was coming into effect regarding sick pay
  • recruit 300 restaurants to recruit 1 manager, 1 owner, and a couple food preparers
  • telephone recruitment and in-person interviews, English, Spanish, Mandarin, 15 minutes, $20 gift card
  • most of the time they couldn’t get a manager on the phone and they received double the original sample of restaurants to contact
  • it was assumed that restaurants would participate because the sponsor was health inspectors, but it was not mandatory and they couldn’t be told it was mandatory, there were many scams related to this so people just declined, also all of the health inspectors weren’t even aware of the study
  • 73% were unreachable after 3 calls, hard to get a person of authority during open hours
  • increased call attempts to five times, but continued on when they thought recruitment was likely
  • recruited 77 more from people who were called more than 5 times
  • as a result, data were not limited to a quicker to reach sample
  • people called up to ten times remained noncommittal and never were interviewed
  • there wasn’t an ideal number of calls to get maximum recruits and minimum costs
  • but the method wasn’t really objective, the focus was on restaurants that seemed like they might be reachable
  • possibly more representation than if they had stopped all their recruitment at five calls
  • [would love to see results crossed by number of attempts]

It’s a dog eat DIY world at the #AMSRS 2015 National Conference

  What started out as a summary of the conference turned into an entirely different post – DIY surveys. You’ll just have to wait for my summary then!

My understanding is that this was the first time SurveyMonkey spoke at an #AMSRS conference. It resulted in a question the audience seemed to perceive as controversial, asked in an antagonistic way: what does SurveyMonkey intend to do about the quality of surveys prepared by nonprofessionals? This is a question with a multi-faceted answer.

First of all, let me remind everyone that most surveys prepared by professional, fully-trained survey researchers incorporate at least a couple of bad questions. Positively keyed grids abound, long grids abound, poorly worded and leading questions abound, overly lengthy surveys abound. For all of our concerns about amateurs writing surveys, I sometimes feel as though the pot is calling the kettle black.

But really, this isn’t a SurveyMonkey question at all. This is a DIY question. And it isn’t a controversial question at all. The DIY issue has been raised for a few years at North American conferences. It’s an issue with which every industry must deal. Taxis are dealing with Uber. Hotels are dealing with AirBnB. Electricians, painters, and lawn care services in my neighbourhood are dealing with me. Naturally, my electrical and painting work isn’t up to snuff with the professionals and I’m okay with that. But my lawn care services go above and beyond what the professionals can do. I am better than the so-called experts in this area. Basically, I am the master of my own domain – I decide for myself who will do the jobs I need doing. I won’t tell you who will do the jobs at your home and you won’t tell me who will do my jobs. Let me reassure you, I don’t plan to do any home surgery.

You can look at this from another point of view as well. If the electricians and painters did their job extremely well, extremely conveniently, and at a fair price, I would most certainly hire the pros. And the same goes for survey companies. If we worked within our potential clients’ schedules, with excellent quality, with excellent outcomes, and with excellent prices, potential clients who didn’t have solid research skills wouldn’t bother to do the research themselves. We, survey researchers, have created an environment where potential clients do not see the value in what we do. Perhaps we’ve let them down in the past, perhaps our colleagues have let them down in the past. 

And of course, there’s another aspect to the DIY industry. For every client who does their own research work, no matter how skilled and experienced they are, that’s one less job you will get hired to do. I often wonder how much concern over DIY is simply the fear of lost business. In this sense, I see it as a re-organization of jobs. If research companies lose jobs to companies using DIY, then those DIY companies will need to hire more researchers. The jobs are still there, they’re just in different places.

But to get back to the heart of the question, what should DIY companies do to protect the quality of the work, and to protect their industry, when do-it-yourselfers insist on DIY? Well, DIY companies can offer help in many forms. Webinars, blog posts, and white papers are great ways to share knowledge about survey writing and analysis. Question and survey templates make it really easy for newbies to write better surveys. And why not offer personalized survey advice from a professional? There are many things that DIY companies can do and already do.

Better yet, what should non-DIY companies do? A better job, that’s what. Write awesome surveys, not satisfactory surveys. Write awesome reports, not sufficient reports. Give awesome presentations, not acceptable presentations. Be prompt, quick, and flexible, and don’t drag clients from person to person over days and weeks. When potential clients see the value that professional services provide, DIY won’t even come to mind.

And of course, what should research associations do? Advocate for the industry. Show Joe nonresearcher what they miss out on by not hiring a professional. Create guidelines and standards to which DIY companies can aspire and prove themselves. 

It’s a DIY world out there. Get on board or be very, very worried.

Assessing the quality of survey data (Good session!) #ESRA15 #MRX 

Live blogged from #ESRA15 in Reykjavik. Any error or bad jokes in the notes are my own. As you can see, I managed to find the next building from the six buildings the conference is using. From here on, it’s smooth sailing! Except for the drizzle. Which makes wandering between buildings from session to session a little less fun and a little more like going to a pool. Without the nakedness. 

Session #1 – Data quality in repeated surveys: evidence from a quasi-experimental design by multiple professors from university of Rome

  • respondents can refuse to participate in the study resulting in series of missing data but their study had very little missing data, only about 5% this time [that’s what student respondents do for you, would like to see a study with much larger missing rates]
  • questions had an i do not know option, and there was only one correct answer
  • 19% of gender/birthday/socioeconomic status changed from survey to survey [but we now understand that gender can change, researchers need to be open to this. And of course, economic status can change in a second]
  • Session #2 – me!  Lots of great questions, thank you everyone!

Session #3 – Processing errors in the cross national surveys

  • we don’t consider process errors very often as part of total survey error
  • found 154 processing errors in the series of studies – illegitimate variable values such as education that makes little sense or age over 100, misleading variable values, contradictory values, value discrepancies, lack of value labels, maybe you’re expecting a range but you get a specific value, what if 2 is coded as yes in the software but no in the survey
  • age and education were most problematic, followed by schooling
  • lack of labels was the worst problem, followed by illegitimate values, and misleading values
  • is 22% discrepancies out of all variables checked good or bad?
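The kinds of processing checks described in this session — illegitimate values like an age over 100, and codes that don’t appear in the codebook — can be automated with a simple validation pass. This is a hypothetical sketch, not the presenters’ method; the field names and codebook are invented for illustration.

```python
def find_processing_errors(records, codebook):
    """Flag illegitimate values and codes missing from the codebook."""
    errors = []
    for i, rec in enumerate(records):
        if not 0 <= rec.get("age", -1) <= 100:
            errors.append((i, "age", "illegitimate value"))
        if rec.get("education") not in codebook["education"]:
            errors.append((i, "education", "value not in codebook"))
    return errors

codebook = {"education": {1, 2, 3, 4}}  # hypothetical codes, e.g. 1 = primary
records = [
    {"age": 34, "education": 2},
    {"age": 143, "education": 2},  # illegitimate age
    {"age": 51, "education": 9},   # code absent from the codebook
]
print(find_processing_errors(records, codebook))
# [(1, 'age', 'illegitimate value'), (2, 'education', 'value not in codebook')]
```

Checks for contradictory values, misleading labels, and yes/no coding flips would be added in the same style, one rule per error type.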

Session #4 – how does household composition derived from census data describe or misrepresent different family types

  • strength of census data is its exhaustiveness, how does census data differ from a smaller survey
  • census counts household members, family survey describes families and explores people outside the household such as living apart, they describe different universes. a boarder may not be measured in the family survey but is counted in the census
  • in 10% of cases, more people are counted in the census, 87% have the same number of people on both surveys
  • census is an accounting tool, not a tool for understanding social life, people do not organize their lives to be measured and captured at one point and one place in time
  • census only has a family with at least one adult and at least one child
  • isolated adult in a household with other people is 5% of adults in the census, not classified the same in both surveys
  • there is a problem attributing children to the right people – problem with single parent families; single adults are often ‘assigned’ a child from the household
  • a household can include one or two families at the most – complicated when adult children are married and maybe have a kid. A child may be assigned to a grandparent, which is in error.
  • isolated adults may live with a partner in the dwelling, some live with their parents, some live with a child (but children move from one household to another), 44% of ‘isolated’ adults live with family members, they aren’t isolated at all
  • previously couples had to be heterosexual; even though they answered the survey as a union, the rules split them into isolated adults [that’s depressing. thank you for changing this rule.]
  • census is more imperfect than the survey, it doesn’t catch subtle transformations in societal life. calls into question definitions of marginal groups
  • also a problem for young adults who leave home but still have strong ties to the parents home – they may claim their own home and their parents may also still claim them as living together
  • [very interesting talk. never really thought about it]

Session #5 – Unexpectedly high number of duplicates in survey data

  • simulated duplicates created greater bias of the regression coefficient when up to 50% of cases were duplicated 2 to 5 times
  • birthday paradox – how many people are needed before two are likely to share a birthday – 23. A single duplicate in a dataset is likely.
  • New method – the Hamming diagram – diversity of data for a survey – it looks like a normal curve with some outliers so i’m thinking Hamming is simply a score like Mahalanobis is for outliers
  • found duplicates in 10% of surveys, 14 surveys comprised 80% of total duplicates with one survey at 33%
  • which case do you delete? which one is right if indeed one is right. always screen your data before starting a substantial analysis.
  • [i’m thinking that ESRA and AAPOR are great places to do your first conference presentation. there are LOTS of newcomers and presentation skills aren’t fabulous. so you won’t feel the same pressure as at other conferences. Of course, you must have really great content because here, content truly is king]
  • [for my first ESRA conference, i’m quite happy with the quality of the content. now let’s hope for a little sun over the lunch hour while I enjoy Skyr, my new favourite food!]
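The birthday-paradox arithmetic and an exact-duplicate screen from this session can be sketched in a few lines of Python. This is an illustration only — it checks for exact-match duplicate response patterns, and does not reproduce the Hamming-based diversity measure from the talk.

```python
from collections import Counter

def prob_shared_birthday(n, days=365):
    """Probability that at least two of n people share a birthday."""
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (days - i) / days
    return 1 - p_all_distinct

def find_duplicates(rows):
    """Return exact-duplicate response patterns and how often each occurs."""
    counts = Counter(tuple(r) for r in rows)
    return {pattern: c for pattern, c in counts.items() if c > 1}

print(round(prob_shared_birthday(23), 3))  # ≈ 0.507: 23 people suffice
rows = [[1, 3, 2], [2, 2, 2], [1, 3, 2]]
print(find_duplicates(rows))  # {(1, 3, 2): 2}
```

The same logic explains why a single duplicate in a large dataset is likely: with many respondents and a limited number of plausible answer patterns, collisions happen by chance, which is why screening must distinguish chance matches from copied cases.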


Keynote: Design and implementation of comparative surveys by Lars Lyberg #ESRA15 #MRX 

Live blogged from the #ESRA15 conference in Reykjavik. Any error or bad jokes in these notes are my own. Thank you ESRA for the free wifi that made this possible. Thank you Reykjavik for coconut Skyr and miles upon miles of beautiful lupins.

  • Introduction
  • Data quality starts with high response rates and also needs trust from the public in order to provide honest answers
  • Iceland is the grandmother of democracy, a nation of social trust, Icelanders still give high response rates
  • Intro: Guni Johannesson
  • Settlement until 1262, decline until 19th century, rise again in the 20th century – traditional story
  • founded the first democratic parliament, Vikings were traders but also murderous terrorists, there was no decline, struggle for independence – this is the revised story; 2008 economic collapse due to many factors including misuse of history – people thought of themselves as Vikings and took irresponsible risks

  • Lars Lyberg
  • Wish we had more interesting presentations like the previous one
  • 3M – multi-national, regional, cultural surveys, reveal differences between countries and cultures
  • [i always wonder, is cross cultural comparison TRULY possible]
  • Some global surveys of happiness included the presence of lakes or strong unions which automatically excludes a number of countries
  • problems with 3M studies – specifications normally emphasize minimum response rates, specifications are not always adhered to, sometimes fabricated data, translations not done well, lack of total survey error awareness, countries are very different
  • special features of these studies – concepts must have a uniform meaning across countries, risk management differs, financial resources differ, national interests are in conflict, scientific challenges, administrative challenges, national pride is at stake especially when the media gets a hold of results
  • basic design issues – conditions cannot vary from definitions to methods to data collection, sampling can and should vary, weighting and stats are grey zones, quality assurance is necessary
  • Must QC early interviews of each interviewer, specs are sometimes not understood, sometimes challenged, not affordable, not in line with best practice, overwhelming
  • Common challenges – hard to reach respondents, differences in literacy levels, considerable non response
  • interviewers should be alone with the respondent for privacy reasons but it is common to not be alone – in India, Iraq, and Brazil there are often extra people around, which affects the results; this is particularly important re mental health
  • a fixed response rate goal can be almost impossible to achieve, 70% is just unreasonable in many places. spending so much money to achieve that one goal is in conflict with TSE and all the other errors that could be attended to instead. in this example, only a few of the countries achieved it and only barely [and I wonder to what unethical means they went to achieve those]
  • strategies – share national experiences, training, site visits, revised contact forms, explore auxiliary data, monitor fieldwork, assess non response bias
  • data fabrication [still can’t believe professionals do this 😦 ] 10 of 70 countries in a recent study have questionable data, in 3 cases they clearly showed some data was fabricated (PISA 2009), they often copy paste data [sigh, what a dumb method of cheating, just asking to be caught. so i’m glad they were dumb]
  • [WHY do people fabricate? didn’t get the desired response rate? embarrassed about results? too lazy to collect data?]
  • Translation issues – translation used to be close translation with back translation, focus on replication; “are you feeling blue” doesn’t have the same meaning in another language, this still happens
  • Team Translation Model – TRAPD – draft translations, review and refine, adjudicate for pretest
  • Social desirability differs in conformist and individual societies, relative status between interviewer and respondents, response process is different, perceptual variation is magnified even within a country, questionnaires must be different across countries
  • workloads differ – countries use different validation methods, countries don’t know how to calculate weights, interviewer workload differed
  • specifications are often dubious, all kinds of variations are permitted, proxy responses can range from 0% to 50% which is really bad for embarrassing questions where people don’t want others to know (e.g., a spouse could say the other spouse is happy)
  • Quality management approach – decrease distance between user and producer, find root causes of problems, allocate resources based on risk assessment, coordinate team responsibilities, strive for real time interventions, build capacity
  • Roger Jowell – 10 golden rules for cross national studies [find and read this, it’s really good]
  • don’t confuse respect for cultural variations with tolerance of methodological anarchy, don’t aim for as many countries as possible, never do a survey in a country you know little about, pay as much attention to aggregate level background information as the individual level variables, assume any new variation you discover is an artifact, resist the temptation to crosstab everything [smart dude, i like these!]

https://twitter.com/cernat_a/status/620894138339336192

Determinants of survey and biomarker participation #AAPOR #MRX  

prezzie #1: consent to biomarker data collection

  • blood pressure, grip strength, balance, blood spot collection, saliva collection
  • agreement ranged from 70% to 92%
  • used the Big 5 personality test, agreeableness, openness, conscientiousness, self control
  • used R for the analysis [i’m telling you, R is the future, forget SAS and SPSS]
  • for physical measurements, people who are more open are less likely to consent, agreeable is positively related
  • for saliva, we also see that people higher in hostility are less likely to consent, as are those higher in self-control
  • for blood spot collection, more hostile and more self-controlled people are less likely to consent
  • openness was a surprise
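A rough illustration of the kind of comparison behind these findings — consent rates among high versus low scorers on a single trait. The respondents here are entirely made up, and the presenters used R and presumably full regression models rather than a simple split like this:

```python
def consent_rate_by_trait(respondents, trait, cutoff):
    """Compare consent rates for high vs. low scorers on one personality trait."""
    def rate(group):
        return sum(r["consent"] for r in group) / len(group) if group else float("nan")
    high = [r for r in respondents if r[trait] >= cutoff]
    low = [r for r in respondents if r[trait] < cutoff]
    return rate(high), rate(low)

# Hypothetical respondents: openness scored 1-5, consent recorded as 0/1.
respondents = [
    {"openness": 5, "consent": 0},
    {"openness": 4, "consent": 0},
    {"openness": 3, "consent": 1},
    {"openness": 2, "consent": 1},
    {"openness": 1, "consent": 1},
]
print(consent_rate_by_trait(respondents, "openness", cutoff=4))  # (0.0, 1.0)
```

In this toy data the more open respondents consent less, mirroring the surprising direction the presenter reported for physical measurements.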

prezzie #2: nonresponse to physical assessments

  • strength, balance, mobility, physical activity – not influenced by reporting bias
  • half of the sample did a face to face physical assessment, perhaps in a nursing home, n=15000
  • nonconsent highest for walking speed test, lowest for breathing and grip strength tests, balance test was in the middle
  • oldest old are less likely to complete, women less likely to complete the grip, walking, and balance tests
  • less educated are less likely to complete the tests
  • much more missing data for balance test and breathing test – can impute based on reasons for noncompletion – eg no space to do the test, not safe to do it, don’t understand instructions, unwilling, health issues

prezzie #3: data quality trends in the general social survey

  • social security number was most interesting
  • there’s been a lot of leaked and stolen government data which can make people nervous about completing government surveys
  • low refusal rate for phone number and birth date but half refused social security
  • varied widely by geo region
  • trend is not drastically different despite data issues

prezzie #4: does sexual orientation, race, language correlate with nonresponse

  • correlates of nonresponse – we know younger, male, hispanic, single, renters. but what about sexual orientation or language use
  • few differences by sexuality
  • nonhispanic black had highest non response, hispanic also high when the survey was in spanish
  • self reported good health had higher nonresponse, could mean our surveys tell us people are less healthy than they really are
  • spanish speakers had higher nonresponse

The funniest marketing research joke #MRX

I have a favourite research joke. Have you heard it before? It has two possible punch lines and it goes like this.

What’s worse than a pie chart?
Answer 1: a 3d pie chart
Answer 2: several pie charts

I bet you’re rolling on the floor laughing aren’t you! Well, I have a treat for you because that is not the funniest marketing research joke I know. There are several formats of the funniest ever market research joke too so get ready…

We know 40 minute surveys are too long but no one’s ever going to stop writing them. HA HA HA HA HA HA
We know long grid questions are a bad idea but no one’s ever going to stop using them. HA HA HA HA HA HA
We know respondents hate multiple loops but no one’s ever going to stop writing them. HA HA HA HA HA HA

You think these aren’t jokes? I challenge you to prove otherwise. I’ve been in numerous situations where laughter is the standard response to these statements.

I find it infuriating to listen to smart and informed people overtly display a lack of effort to address the problem and laugh it off as silly. They generally feel they have no power. Clients feel they have zero alternatives in writing their surveys and vendors feel clients will take their business elsewhere if they refuse to run a bad survey.

Let me say this. Every single person out there has the power to make surveys better. Imagine if we all worked together. Imagine if everyone spoke up against bad surveys. Imagine if everyone took quality surveys seriously. Imagine what would happen to our completion rates. Just imagine.

Gamification in survey research: do results support the evangelists by Lisa Weber-Raley and Kartik Pashupati #CASRO #MRX

Live blogging from Nashville. Any errors or bad jokes are my own.

– game mechanics include a back story, a game like aesthetic, rules for play and advancement, a challenge, rewards
– Gamification can be as simple as changing the way questions are worded [shout out to Jon Puleston : ]
– frame questions in way that makes responders WANT to answer them
– change a task into a game
– add an element of competition to a question such as putting a time limit
– “i engaged with a brand and all i got was this lousy badge” 🙂
- people don’t always think gamified is easier to read or answer, or quicker, or more fun; it’s a statistical difference though not a substantive difference
– should we trash gamification?
– greater survey engagement lies in dealing with the components of respondent burden. but creating a more enjoyable survey is still a worthwhile goal even if it doesn’t lead to all the claimed benefits
- did a survey on college experience, needed to build a tool for high school students to choose a college, it’s not a genpop sample. it’s a sample that might be more inclined to gamification
– four survey types – standard, one with photo breaks, one with letter finding game throughout the survey, one with avatar
– not many differences between these four groups [did they all get the exact same words of questions?]
– photo break people may have actually used the photos to take a break
– picture break was more enjoyable for people
– there were no differences in data quality
[i wonder what would happen if the survey was actually gamified or the questions were worded differently]