I searched online for a list of the steepest streets in San Francisco and found five near me. All have grades over 30%, the kind where you slip inside your shoes and the sidewalks are often stairs. I have no idea how many times I stared in disbelief wondering how someone thought building these houses and driveways was a good idea. Do NOT try this walk if you have bad knees! My Fitbit says it was a journey of 29,000 steps and 133 floors.
Below are Broadway at Jones, Jones at Pine and at Union, Filbert at Hyde, and Vallejo at Grant. A few others are thrown in as well because, come on, you can tell they belong here.
Live blogged at #PAPOR in San Francisco. Any errors or bad jokes are my own.
The Summer of Our Discontent, Stuart Elway, Elway Research
- regarding the state of Washington
- it’s generally Democratic
- between elections, more people are independents, and then they drift toward the Democrats
- independents are more socially liberal
- the state has become more libertarian
- don’t expect a rebellion to start in Washington state
- [sorry, too many details for me to share well]
Californians’ Opinion of Political Outsiders, Mark Baldassare, PPIC
- california regularly elects outsiders – Reagan, Schwarzenegger
- flavour is often outsider vs insider, several outsiders have run recently
- blog post on the topic – http://ppic.org/main/blog_detail.asp?i=1922
- they favour new ideas over experience
- three things are important – approval ratings of elected officials (people who prefer outsiders give officials lower approval) and negative attitudes toward the two-party system
- a majority think a third party is needed – they are more likely to be interested in new ideas over experience
- [sorry, too many details for me to share well]
Trump’s Beguiling Ascent: What 50-State Polling Says About the Surprise GOP Frontrunner, Jon Cohen & Kevin Stay, SurveyMonkey
- 38% of people said they’d be scared if Trump is the GOP nominee
- 25% would be surprised
- 24% would be hopeful
- 21% would be angry
- 14% would be excited
- as expected, the list is very different between Democrats and Republicans, but not exactly opposite
- quality polling requires scale, heterogeneity, and correctable self-selection bias
- the most important qualities for candidates are standing up for principles, being a strong leader, and being honest and trustworthy – experience is lowest on the list
- views on Trump’s Muslim statement change by the minute – at the time of this data: 48% approve, 49% disapprove, split as expected by party
- terrorism is the top issue for Republicans; jobs AND terrorism are top for independents; jobs is top for Democrats
- for Republicans – the day before Paris, 9% said terrorism was the top issue; after Paris, 22%
- support for Cruz is increasing
- half of Trump voters are absolutely certain they will vote for Trump, but only 17% of Bush voters are absolutely certain
- among Republicans, Cruz is the second choice even among Trump voters
- Trump has the fewest voters who attend religious services weekly of all the candidates; Carson and Cruz are on the high end
- Trump voters look demographically the same, but Carson has fewer male voters and Cruz has fewer female voters
- Trump voters are much less educated; Rubio voters are much more educated
What Are They Thinking? How IVR Captures Public Opinion For a Democracy, Mary McDougall, Survox
- many choices, online is cheapest followed by IVR followed by phone interview
- many still do not have internet access – seniors, non-whites, low-income households, those without a high school degree
- phone can help you reach those people, can still do specific targeting
- good idea to include multiple modes to test for any mode effects
- technology is no longer a barrier for choosing a data collection strategy
- ignoring cell phones is poor sampling
- use labor strategically to allow IVR
- tested IVR on political polling, 300 completes in 2.5 hours, met the quotas, once a survey was started it was generally completed
The Promising Role of Fax in Surveys of Clinical Establishments: Observations from a Multi-mode Survey of Ambulatory Surgery Centers, Natalie Teixeira, Anne Herleth, and Vasudha Narayanan, Westat; Kelsey O’Yong, Los Angeles Department of Public Health
- we often want responses from an organization, not an individual
- 500 medical facilities, 60 questions about staffing and infection control practices
- used multimode – telephone, postal, web, and fax
- many people requested the survey by fax and many people did convert modes
- because fax was so successful, reminder calls were combined with fax automatically and saw successful conversions to this method
- this does not follow the current trend
- fax is immediate and keeps gatekeepers engaged, maybe it was seen as a novelty
- [“innovative fax methodology” so funny to hear that phrase. I have never ever ever considered fax as a methodology. And yet, it CAN be effective. :) ]
- options to use “mass” faxing exist
The Pros and Cons of Persistence During Telephone Recruitment for an Establishment Survey, Paul Weinfurter and Vasudha Narayanan, Westat
- half of restaurant issues are employees coming to work ill, new law was coming into effect regarding sick pay
- recruit 300 restaurants, and at each recruit 1 manager, 1 owner, and a couple of food preparers
- telephone recruitment and in person interviews, english, spanish, mandarin, 15 minutes, $20 gift card
- most of the time they couldn’t get a manager on the phone, and they received double the original sample of restaurants to contact
- it was assumed that restaurants would participate because the sponsor was the health inspectors, but participation was not mandatory and recruiters couldn’t say it was; there were many scams related to this, so people simply declined; also, not all of the health inspectors were even aware of the study
- 73% were unreachable after 3 calls; it was hard to get a person of authority during open hours
- increased the call limit to five attempts, but continued beyond that when recruitment seemed likely
- recruited 77 more from people who were called more than 5 times
- as a result, the data were not limited to a quicker-to-reach sample
- people called up to ten times remained noncommittal and never were interviewed
- there wasn’t an ideal number of calls that maximized recruits and minimized costs
- but the method wasn’t really objective – the focus was on restaurants that seemed like they might be reachable
- possibly more representative than if they had stopped all recruitment at five calls
- [would love to see results crossed by number of attempts]
Live blogged at #PAPOR in San Francisco. Any errors or bad jokes are my own.
Enhancing the use of Qualitative Research to Understand Public Opinion, Paul J. Lavrakas, Independent Consultant; Margaret R. Roller, Roller Research
- thinks research has become too quantitative because qual is typically not as rigorous, but this should and can change
- public opinion is not a number generated from polls; polls are imperfect and limited
- aapor has lost sight of this [you’re a brave person to say this! very glad to hear it at a conference]
- we need more balance; we aren’t a survey research organization, we are a public opinion organization, and our conference programs are extremely biased toward the quantitative
- there should be criteria to judge the trustworthiness of research – was it fit for purpose?
- credibility, transferability, dependability, confirmability
- all qual research should be credible, analyzable, transparent, useful
- credible – sample representation and data collection
- do qual researchers seriously consider non-response bias?
- credibility – scope deals with coverage design and nonresponse; data gathering deals with the information obtained, researcher effects, and participant effects
- analyzability – intercoder reliability, transcription quality
- transparency – thick descriptions of details in final documents
Comparisons of Fully Balanced, Minimally Balanced, and Unbalanced Rating Scales, Mingnan Liu, Sarah Cho, and Noble Kuriakose, SurveyMonkey
- there are many ways to ask the same question
- is it a good time or a bad time? – fully balanced
- is it a good time or not? – minimally balanced
- do you or do you not think it is getting better?
- are things headed in the right direction?
- [my preference – avoid introducing any balancing in the question, only put it in the answer. For instance: What do you think about buying a house? Good time, Bad time]
- results – effect sizes are very small, no differences between the groups
- in many different questions tested, there was no difference in the formats
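The “no difference between formats” finding is the kind of thing you can sanity-check with a simple two-proportion z-test. A minimal sketch, using made-up counts (not the presenters’ data): compare the share choosing “good time” under two wordings.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: is the share choosing 'good time'
    different between two question wordings?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: 520/1000 said "good time" under the fully
# balanced wording vs. 505/1000 under the minimally balanced one.
z = two_proportion_z(520, 1000, 505, 1000)
print(round(z, 2))  # prints 0.67 – well under 1.96, no format effect
```

Anything with |z| below about 1.96 is consistent with the presenters’ “effect sizes are very small” result.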
Conflicting Thoughts: The Effect of Information on Support for an Increase in the Federal Minimum Wage Level, Joshua Cooper & Alejandra Gimenez, Brigham Young University, First Place Student Paper Competition Winner
- used paper surveys for the experiment – 13,000 respondents, 25 forms
- Would you favor or oppose raising the minimum wage?
- Some were told how many people would increase their income, some were told how many jobs would be lost, some were told both
- those given negative info opposed a wage increase, those given positive info favored one, and people who were told both opposed a wage increase
- independents were more likely to say don’t know
- the negative strongly outweighs the positive across all types of respondents regardless of gender, income, religion, or party ID
- jobs matter, more than anything
Live blogged at the #PAPOR conference in San Francisco. Any errors or bad jokes are my own.
- now we can sample by individual, phone number, location, or transaction
- can reach people by an application, email, text, or IVR, but make sure you have permission for the method you use (TCPA)
- those 55+ prefer to dial an 800 number for a survey while younger people prefer an SMS contact; it’s important to provide as many methods as possible so people can choose the one they prefer
- mobile devices give you lots of extra data – purchase history, health information, social network information, passive listening – make sure you have permission to collect the information you need; give something back in terms of sharing results or hiding commercials
- over 25% of your sample is already taking surveys on a mobile device; you should check what device people are using and skip questions that won’t render well on small screens
- remove unnecessary graphics, background templates are not helpful
- keep surveys under 20 minutes [i always advise 10 minutes]
- use large buttons, minimal scrolling; never scroll left/right
- avoid using radio buttons, aim for large buttons instead
- for open ends, provide a large box to encourage people to use a lot of words
- mobile open ends have just as much content, although there may be fewer words, more acronyms, and more profanity
- be sure to include a back button if you use auto-next
- if you include Flash or images, be sure to ask whether people actually saw the image
- consider modularizing your surveys, ensure one module has all the important variables, give everyone a random module, let people answer more modules if they wish
- How to fill in missing data – data imputation or respondent matching [both are artificial data remember! you don’t have a sense of truth. you’re inferring answers to infer results. Why are we SOOOOO against missing data?]
- most people will actually finish all the modules if you ask politely
- you will find differences between modular and not but the end conclusions are the same [seriously, in what world do two sets of surveys ever give the same result? why should this be different?]
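The modularization-plus-matching idea above can be sketched in a few lines. This is only an illustration: the question names, the 10007 seed trick, and the first-match donor rule are all my inventions, not anything the speaker specified.

```python
import random

CORE = ["age", "region", "overall_satisfaction"]   # asked of everyone
MODULES = {"A": ["brand_q1", "brand_q2"],
           "B": ["usage_q1", "usage_q2"]}

def assign_module(respondent_id, seed=42):
    """Give each respondent the core block plus one random module."""
    rng = random.Random(respondent_id * 10007 + seed)  # reproducible per respondent
    module = rng.choice(sorted(MODULES))
    return CORE + MODULES[module]

def hot_deck_impute(respondent, donors, match_keys=("age", "region")):
    """Respondent matching: fill a respondent's missing answers from the
    first donor who matches on the core variables."""
    for donor in donors:
        if all(donor.get(k) == respondent.get(k) for k in match_keys):
            return {q: a for q, a in donor.items() if q not in respondent}
    return {}
```

The point of keeping the important variables in the core block is exactly so the matching step has something to match on.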
- Family Membership at your local art gallery or museum where you can appreciate the cultural diversity and wonders of our world as a family.
- Classes. Art or music or sign language or dancing or that embarrassing thing they’ve always been too scared to sign up for because people will laugh at them. Dungeons and Dragons sessions? Absolutely!
- Tickets to an event they won’t splurge on. A concert, a play, a weird-ass art installation, a sporting event.
- Charitable donations to a cause that is near and dear to your heart. For me, that means a mental health service and a crohns and colitis foundation. And a recent addition, a refugee settling organization.
- A really nice winter coat that isn’t their size. Or a really nice business suit in the wrong size. We all have coats and work clothes, don’t we? Bring these new ones to a homeless shelter, because those folks can’t just go out and buy whatever they feel like whenever they feel like it.
- Homemade gifts like pancake mixes, bread, ready-to-cook meals, bird houses, paintings, mittens, and hats.
- Homemade gift certificates for an on-demand neck massage, leaf raking, snow shoveling, home manicure, or some other chore that they always do for you.
- A middle-aged shelter dog. Do you have bunches of love left to give? Do you need to exercise more? Should you spend more time with the kiddies? Make it a three-for-one deal.
Bonus gift: Membership in an industry organization. In marketing research, there are literally hundreds of choices and as much as we’d like to join all of them, most of us only join two or three. Try the MRIA in Canada, or MRA/CASRO/TheARF in the USA, MRS in the UK, AMSRS or RANZ in the pacific region, or ESOMAR globally.
Why? Because most of us don’t NEED anything. We have food on the table, clothes in our closets, and warm homes to go to every day. We don’t need more stuff. Just more love.
It’s true that for the most part, leading questions are the sign of a poorly skilled, inexperienced survey writer. When it’s pointed out, most of us can see that these are terrible questions.
- Do you agree that sick babies deserve free healthcare?
- Should poorly constructed laws be struck down?
- Is it important to fund new products that improve the lives of people?
- Should products that cause rashes be pulled from stores?
- Should stores always have enough cashiers so that no one has to wait in a long line?
But are leading questions always bad? I think not. However, these are situations that only experienced researchers should attempt. Leading questions may be appropriate when you are trying to measure socially undesirable, embarrassing, unethical, inappropriate, or illegal activities. Consider these examples.
Would you say yes to this:
- Have you driven drunk in the past three months?
What about to this?
- Many people realize that they have driven after having too much to drink. Is this something you have done in the last three months?
Would you say no to this?
- Have you donated to charity in the past three months?
What about to this?
- Sometimes it’s hard to donate to charity even when you really want to. Have you donated to charity in the past three months?
In both cases, it is possible that the first question will cause people to give a more socially appropriate answer, but not necessarily the valid one. The second question might create a mindset where the respondent feels comfortable sharing a socially undesirable answer.
The next time you need to write a survey, consider whether you need to write a leading question. Consider your wording carefully!
Demand that your conferences be Diversity Approved! (Tweet this post!)
When Canada’s new Prime Minister, Justin Trudeau, was asked why his cabinet was 50% male and 50% female, his answer was simple. Because it’s 2015. Such a simple answer to a long-standing problem.
As I look back over 2015, I see that “because it’s 2015” didn’t apply to every market research conference. Some conferences had speaker lists that were 70% male. Some conferences had speaker panels that were 100% male. Yet no conference had an attendee list, and the industry doesn’t have a makeup, that is 100% male, let alone 70% male.
There are many reasons that men might be over-represented as speakers, but few that are acceptable.
- Random chance. As a lover of statistics, I accept that random chance will create some all-male panels. But since I’ve never seen an all-female panel, random chance is not what’s at play here. If you’d rather see the math, Greg Martin calculated the chance of having all-male speakers here. It’s not good.
- 70% or more of submissions were from men. That also is an acceptable reason. If women aren’t submitting, then they can’t be selected. So on that note, it’s up to you ladies to make sure you submit at every chance you get. And don’t tell me you’re not good enough to speak. I ranted on that excuse already.
- You haven’t heard of any women working in this area. This excuse is unacceptable. You can’t look for speakers only inside your own comfortable friend list. Get out of your box. Get online. There are tons of women talking about every conceivable industry issue. Find one woman and ask her for recommendations. You can start here: Data science, Marketing research, Statistics, Tech.
- The best proposals happen to be from men. This excuse is also unacceptable. It demonstrates that you believe men are better than women. You need to broaden your perception of what ‘better’ means. Men and women speak in different ways so you need to listen in different ways. It’s good for you. Try it.
- Women decline when we ask them to speak. It’s a real shame, particularly if women decline invitations more often than men. But any time a woman declines, ask her for a list of people she recommends, and then consider the women on that list. No women on the list? Then specifically ask her if she knows any women.
- It’s a paid talk and they only sent men. Know what? It’s okay to remind companies that their panel isn’t representative of the industry. You can suggest that they send a broader range of people.
- We didn’t realize this was a problem. Inexcusable. Diversity has been an issue for years. People have been pointing this out to market research conferences for years. The right time to fix things is always now.
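The random-chance math from the first point above is easy to check yourself. Assuming speakers are drawn at random from a pool with a given male share, the chance of an all-male panel is just that share raised to the panel size:

```python
def p_all_male(panel_size, male_share=0.5):
    """Chance that a randomly drawn panel is all male."""
    return male_share ** panel_size

for n in (3, 4, 5):
    print(n, p_all_male(n))  # 0.125, 0.0625, 0.03125

# even with a 70% male submission pool, a 4-person all-male panel
# should show up only about a quarter of the time:
print(round(p_all_male(4, 0.7), 2))  # prints 0.24
```

So a conference with several all-male panels and zero all-female panels is not looking at random chance.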
When was the last time you prepared a sampling matrix balanced on age, gender, and ethnicity and then were pleased when it was 70% female, 70% age 50+, and 90% white? Never, that’s when. You stayed in field and implemented appropriate sampling techniques until your demographics were representative. This is absolutely no different.
So, to every conference organizer out there, ESOMAR, CASRO, MRA, MRIA, ARF, MRS, AMSRS, ESRA, AAPOR, I challenge you to review and correct your speaker list before announcing it.
- What percentage of submissions are from men versus women? Only when submissions are far from balanced is it acceptable for the acceptance list to be unbalanced.
- Are there any all-male panels? Are there any all-female panels? (By the way, all-female panels talking about female issues do NOT count.)
- Are more than 55% of speakers male? Are more than 55% of speakers female?
- Is the invited speaker list well balanced? There is zero reason for invited speakers to NOT be representative.
- Did you actively ask companies to assist with ensuring that speakers were diverse?
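The checklist above is mechanical enough to automate. A rough sketch, assuming a simplified binary speaker list (real data would be self-reported and richer):

```python
def diversity_flags(panels):
    """Check a speaker list against the thresholds in the checklist.
    `panels` maps a panel name to a list of 'M'/'F' speaker genders."""
    speakers = [g for genders in panels.values() for g in genders]
    male_share = speakers.count("M") / len(speakers)
    return {
        # any panel where every speaker is the same gender
        "single_gender_panels": [name for name, genders in panels.items()
                                 if len(set(genders)) == 1],
        # more than 55% of speakers of one gender overall
        "over_55_pct_one_gender": male_share > 0.55 or male_share < 0.45,
    }
```

Run it on a draft program before announcing the speaker list, not after.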
If you can give appropriate answers to those questions, I invite you to publicly advertise your conference as Diversity Approved.
Will you accept this challenge for every conference you run in 2016? Will you:
- Post the gender ratio of submissions
- Post the gender ratio of acceptances
- Proudly advertise that your conference is “Diversity Approved”
Demand that your conferences be Diversity Approved! (Tweet this demand!)
I recently debated big data with a worthy opponent in Marc Alley at the Corporate Research Conference. He stood firm in his belief that big data is the best type of data whereas I stood firm in my position that traditional research is the only way to go. You can read a summary of the debate written by Jeffrey Henning here.
The interesting thing is that, outside of the debate, Marc and I seemed to agree on most points. Neither of us thinks that big data is the be-all and end-all. Neither of us thinks that market research answers every problem. But both of us were determined to present our side as if it was the only side.
In reality, the best type of data is ALL data. If you can access survey data and big data, you will be better off and have an improved understanding of thoughts, opinions, emotions, attitudes AND validated actions. If you can also access eye tracking data or focus group data or behavioural data, you will be far better off and have data that can speak to reliability or validity. Each data type will present you with a different view and a different perspective on reality. You might even see what looks like completely different results.
Different is not wrong. It’s not misleading. It’s not frustrating. Different results are enlightening, and they are indeed valid. Why do people act differently from what they say? Why do people present contradictory data? That’s what is so fascinating about people. There is no one reality. People are complex and have many contradictory motivations. No single dataset can describe the reality of people.
There is no debate about whether big data has anything to offer. Though Marc and I did our best to bring you to our dark side, we must remember that every dataset, regardless of the source, has fascinating insights ready for you to discover. Grab as much data as you can.