How the best research panel in the world accurately predicts every election result #polling #MRX
Forget for a moment the debate about whether the MBTI is a valid and reliable personality measurement tool. (I did my Bachelor’s thesis on it, and I studied psychometric theory as part of my PhD in experimental psychology, so I can debate forever too.) Let’s focus instead on the MBTI because tests similar to it can be answered online and you can find out your result in a few minutes. It kind of makes sense, and people understand the idea of using it to understand themselves and their reactions to our world. If you’re not so familiar with it, the MBTI divides people into groups based on four continuous personality characteristics: introversion/extroversion, sensing/intuition, thinking/feeling, and judging/perceiving. (I’m an ISTJ for what it’s worth.)
Now, in the market and social research world, we also like to divide people into groups. We focus mainly on objective and easy to measure demographic characteristics like gender, age, and region, though sometimes we also include household size, age of children, education, income, religion, and language. We do our best to collect samples of people who look like a census based on these demographic targets and oftentimes, our measurements are quite good. Sometimes, we try to improve our measurements by incorporating a different set of variables like political affiliation, type of home, pets, charitable behaviours, and so forth.
All of these variables get us closer to building samples that look like the census but they never get us all the way there. We get so close and yet we are always missing the one thing that properly describes each human being. That, of course, is personality. And if you think about it, in many cases, we’re only using demographic characteristics because we don’t have personality data. Personality is really hard to measure and target. We use age and gender and religion and the rest to help inform us about personality characteristics. Hence why I bring up the MBTI: the perfect set of research sample targets.
The MBTI may not be the right test, but there are many thoroughly tested and normed personality measurement scales that are easily available to registered, certified psychologists. They include tests like the 16PF, the Big 5, or the NEO, all of which measure constructs such as social desirability, authoritarianism, extraversion, reasoning, stability, dominance, or perfectionism. These tests take decades to create and are held in veritable locked boxes so as to maintain their integrity. They can take an hour or more for someone to complete and they cost a bundle to use. (Make it YOUR entire life’s work to build one test and see if you give it away for free.) Which means these tests will not and can not ever be used for the purpose I describe here.
However, it is absolutely possible for a psychologist or psychological researcher to build a new, proprietary personality scale which mirrors standardized tests, albeit in a shorter format, and performs the same function. The process is simple. Every person who joins a panel answers ten or twenty personality questions. When they answer a client questionnaire, they get ten more personality questions, and so on, and so on, until every person on a panel has taken the entire test and been assigned to a personality group. We all know how profiling and reprofiling works and this is no different. And now we know which people are more or less susceptible to social desirability. And which people like authoritarianism. And which people are rule bound. Sound interesting given the US federal election? I thought so.
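The piecemeal profiling process described above is easy to sketch. Here is a minimal Python simulation, assuming a hypothetical 40-item test split into blocks of 10, where each client survey serves a panelist their next unanswered block; the item names, block size, and answer scale are all invented for illustration:

```python
import random

# Hypothetical 40-item personality test, served in blocks of 10 questions
# appended to successive client surveys until each panelist has answered
# everything. Item names and block size are invented for illustration.
TEST_ITEMS = [f"q{i}" for i in range(1, 41)]
BLOCK_SIZE = 10

def next_block(answered):
    """Return the panelist's next unanswered block, or [] when the test is done."""
    remaining = [q for q in TEST_ITEMS if q not in answered]
    return remaining[:BLOCK_SIZE]

def simulate_panelist(n_surveys):
    """Simulate one panelist taking part in n_surveys client surveys."""
    answered = {}
    for _ in range(n_surveys):
        for q in next_block(answered):
            answered[q] = random.randint(1, 5)  # answers on a 5-point scale
    return answered

profile = simulate_panelist(n_surveys=6)
complete = len(profile) == len(TEST_ITEMS)  # ready to assign a personality group
```

Once `complete` is true for a panelist, their answers can be scored and the panelist assigned to a personality group, exactly like any other reprofiling variable.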
So, which company does this? Which company targets people based on personality characteristics? Which company fills quotas based on personality? Actually, I don’t know. I’ve never heard of one that does. But the first panel company to successfully implement this method will be vastly ahead of every other sample provider. I’d love to help you do it. It would be really fun. 🙂
The Mechanics of Election Polls #AAPOR
Live note taking at #AAPOR in Austin Texas. Any errors or bad jokes are my own.
Moderator: Lisa Drew, two.42.solutions
RAND 2016 Presidential Poll Baseline Data – PEPS; Michael S. Pollard, Joshua Mendelsohn, and Alerk Amin, RAND Corporation
- RAND is a nonprofit private company
- 3000 people followed at six points throughout the election, starting with a full baseline survey in December, before candidates really had an effect, opinions of political issues, of potential candidates, attitudes towards a range of demographic groups, political affiliation and prior voting, a short personality questionnaire
- Continuously in field at first debate
- Recruited via RDD, then offered laptops or Internet service if needed
- Asked people to say their chance of voting, and of voting for democrat, republican, someone else, out of 100%
- Probabilistic polling gives an idea of where people might vote
- In 2012 it was one of the most accurate popular vote systems
- Many respondents have been surveyed since 2006, providing detailed profiles and behaviors
- All RAND data is publicly available unless it’s embargoed
- Rated themselves and politicians on a liberal to conservative scale
- Perceptions of candidates have changed: Clinton, Cruz, and the average Democrat seem more conservative now, Trump more liberal; Sanders, Kasich, and the average Republican didn’t move at all
- Trump supporters more economically progressive than Cruz supporters
- Trump supporters concerned about immigrants and support tax increases for rich
- If they feel people like me don’t have a say in government, they are more likely to support trump
- Sanders now rates higher than Clinton on “cares about people like me”
- March – D was 52% and R was 40%, but we are six months away from an election
- Today – Clinton is 46% and Trump is 35%
- Didn’t support trump in December but now do – Older employed white men born in US
- People who were less satisfied with life in 2014 are more likely to support Trump now
- Racial resentment, white racists predict Trump support [it said white ethnocentrism but I just can’t get behind hiding racism in pretty words]
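RAND’s probabilistic polling, where each respondent reports a 0 to 100 chance of voting and of voting for each option, can be sketched roughly like this; the three respondents and their numbers are invented for illustration, not RAND’s data:

```python
# Each respondent reports subjective probabilities (out of 100) of voting
# at all and of voting for each option; the forecast weights candidate
# preference by turnout probability. The respondents below are invented.
respondents = [
    # (P(vote), P(Democrat), P(Republican))
    (90, 80, 20),
    (50, 30, 70),
    (10, 50, 50),
]

def forecast(respondents):
    dem = rep = expected_voters = 0.0
    for p_vote, p_dem, p_rep in respondents:
        expected_voters += p_vote / 100
        dem += (p_vote / 100) * (p_dem / 100)
        rep += (p_vote / 100) * (p_rep / 100)
    # report shares among expected voters, not among all respondents
    return dem / expected_voters, rep / expected_voters

dem_share, rep_share = forecast(respondents)
```

The appeal of this design is that a respondent who is 50% sure they’ll vote counts as half a voter, rather than being forced into an all-or-nothing likely-voter screen.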
Cross-national Comparisons of Polling Accuracy; Jacob Sohlberg, University of Gothenburg Mikael Gilljam, University of Gothenburg
- Elections are really great [ made me chuckle, good introduction 🙂 ]
- We’ve seen a string of failures in many different countries, but we forget about accurate polls; there is a lot of variability
- Are some elections easier than others? Is this just random variance? [well, since NO ONE uses probability sampling, we really don’t know what the MOSE and MONSE are.]
- Low turnout is a problem
- Strong civil society has higher trust and maybe people will be more likely to answer a poll honestly
- Electoral turnover causes trouble, when party support goes up and down constantly
- Fairness of elections, when votes are bought, when processes and systems aren’t perfect and don’t permit equal access to voting
- 2016 data
- Polls work better when turnout is high, civil society is strong, electoral stability is high, and vote buying is low [we didn’t already know this?]
- Only electoral turmoil is statistically significant in the multivariate analysis
Rational Giving? Measuring the Effect of Public Opinion Polls on Campaign Contributions; Dan Cassino, Fairleigh Dickinson University
- Millions of people have given donations, it’s easier now than ever before with cell phone and Internet donations
- Small donors have given more than the large donors
- Why is Bernie not winning when he has consistently outraised Hillary?
- What leads people to give money
- Wealthy people don’t donate at higher rates
- It’s like free to play apps – need to really push people to go beyond talking about it and then pay for it
- Loyalty-based givers give money to the candidate they like, and might give more to her if they see her struggling
- Hesitancy-based givers only give if they know they are giving to the right candidate, so they wait
- Why donate when your candidate seems to be winning
- Big donors get cold called but no one gets personal phone calls if you’re poor
- Horse race coverage is rational; coverage goes to people doing well, and we don’t really know about their policies
- Lots of coverage on Fox News doesn’t mean someone is electable
- People look at cues like that differently
- In 2012 sometimes saw 5 polls every day, good for poll aggregators not good for people wanting to publicize their poll
- You want a dynamic race for model variance
- Used data from a variety of TV news shows, Fox, ABC, CBS, NBC
- Don’t HAVE to report donations under $200, many zero dollar contributions – weirdness needed to be cleaned out
- Predict contributions will increase when Romney is threatened in the polls
- Predict small contributions will increase in response to good coverage on Fox News
- Fox statements matter for small contributors, doesn’t matter which direction
- Network news doesn’t matter for small contributors
- Big donors are looking for more electable candidates, so if Fox hates them then we know they’re electable and they get more money
- Romney was a major outlier though, the predictions worked differently for him
Panel: Public Opinion Quarterly Special – Survey research today and tomorrow #AAPOR #MRX #NewMR
Live note taking at #AAPOR in Austin, Texas. Any errors or bad jokes are my own.
Moderator: Peter V. Miller, U.S. Census Bureau
- He is accepting submissions of 400 words regarding these papers, to be published in an upcoming issue, due June 30, send to peter.miller@census.gov
Theory and Practice in Nonprobability Surveys: Parallels Between Causal Inference and Survey Inference; Andrew Mercer, Pew Research Center; Frauke Kreuter, University of Maryland; Scott Keeter, Pew Research Center; Elizabeth Stuart, Johns Hopkins University
Discussant: Jill DeMatteis, Westat
- Noncoverage – when people can’t be included in a survey
- Problem is when they are systematically biased
- Selection bias is not as useful in a nonprobability sample as there is no sampling frame, and maybe not even a sample
- Need a more general framework
- Random selection and random treatment assignment is the best way to avoid bias
- Need exchangeability – know all the confounding, correlated variables
- Need positivity – everyone needs to be able to get any of the treatments, coverage error is a problem
- Need composition – everyone needs to be in the right proportions
- You might know the percent of people who want to vote one way, but you also know you have too many of a certain demographic in your group. And it’s never just one demographic group; it’s ten or twenty or 100 important demographic and psychographic variables that might have an association with the voting pattern
- You can’t weight a demographic group up [pay attention!]
- We like to assume we don’t have any of these three problems, but you can never know if you’ve met them all; we hope random selection accomplishes this for us, or with quota selection we hope it is met by design
- One study was able to weight using census data a gigantic sample and the results worked out well [makes sense if your sample is so ridiculously large that you can put bigger weights on a sample of 50 000 young men]
- Using demographics and psychographics helps to create more accurate results, religion, political affiliation
- This needs to be done in probability and nonprobability samples
- You can never be certain you have met all the assumptions
- Think about confounding variables during survey design, not just demographics, tailored to the research question at hand
- Confounding is more important than math – it doesn’t matter what statistic you use; if you haven’t met the requirements first, you’re in trouble
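A standard way to make a sample’s margins match census targets on several variables at once, as discussed above, is raking (iterative proportional fitting). Here is a minimal sketch; the sample, the two target variables, and the target proportions are invented for illustration, not any particular study’s data:

```python
# Minimal raking (iterative proportional fitting) sketch: repeatedly adjust
# weights so the weighted margins match population targets for each variable.
# Assumes every target category appears in the sample at least once.
sample = [
    {"gender": "m", "age": "young"},
    {"gender": "m", "age": "old"},
    {"gender": "f", "age": "old"},
    {"gender": "f", "age": "old"},
]
targets = {
    "gender": {"m": 0.5, "f": 0.5},
    "age": {"young": 0.4, "old": 0.6},
}

def rake(sample, targets, iterations=50):
    weights = [1.0] * len(sample)
    for _ in range(iterations):
        for var, dist in targets.items():
            total = sum(weights)
            for category, share in dist.items():
                idx = [i for i, r in enumerate(sample) if r[var] == category]
                current = sum(weights[i] for i in idx)
                factor = (share * total) / current
                for i in idx:
                    weights[i] *= factor
    return weights

w = rake(sample, targets)
```

Note what this can and cannot do: raking fixes the margins you told it about, but says nothing about the confounders you never measured, which is exactly the exchangeability problem in the notes above.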
Apples to Oranges or Gala vs. Golden Delicious? Comparing Data Quality of Nonprobability Internet Samples to Low Response Rate Probability Samples
David Dutwin, SSRS; Trent Buskirk, Marketing Systems Group
Discussant: George Terhanian, NPD Group
- n > 80,000, 9% response rate for the probability sample [let’s be real here, you can’t have a probability sample with humans]
- The matching process is not foolproof; it uses a categorical match and a matching coefficient, with cases randomly selected when there was a tie
- Looked at absolute bias, standard deviation, and overall mean absolute bias
- Stuck with demographics variables, conditional variables, nested within gender, age, race or region
- Weighted version was good, but matched and raked was even closer, variability is much less with the extra care
- Nonprobability telephone surveys consistently had less variability in the errors
- Benchmarks are essential to know what the error actually is; you can’t judge the bias without a benchmark
- You can be wrong, or VERY wrong and you won’t know you’re wrong
- Low response rate telephone gets you better data quality, much more likely you’re closer to truth
- Cost is a separate issue of course
- Remember fit for purpose – in politics you might need reasonably accurate point estimates
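The categorical matching with random tie-breaking mentioned above can be sketched as a simple matching-coefficient search; the variables, cases, and frame below are invented for illustration, not the authors’ actual matching specification:

```python
import random

# Toy nearest-neighbour match on categorical variables: each nonprobability
# respondent is matched to the probability-sample case agreeing on the most
# variables, with ties broken by random selection, as described above.
VARS = ["gender", "age", "race", "region"]

def match_score(a, b):
    """Simple matching coefficient: share of variables on which two cases agree."""
    return sum(a[v] == b[v] for v in VARS) / len(VARS)

def best_match(case, frame, rng=random):
    scores = [match_score(case, f) for f in frame]
    top = max(scores)
    candidates = [i for i, s in enumerate(scores) if s == top]
    return rng.choice(candidates)  # random selection when there is a tie

frame = [
    {"gender": "f", "age": "18-34", "race": "w", "region": "south"},
    {"gender": "m", "age": "18-34", "race": "b", "region": "west"},
]
case = {"gender": "m", "age": "18-34", "race": "b", "region": "south"}
i = best_match(case, frame)  # the second frame case agrees on 3 of 4 variables
```

In a real study the frame would be the probability sample and the matching variables would include the conditional, nested demographics the presenters describe.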
Audience discussion
- How do you weight polling research when political affiliation is part of both equations? What is the benchmark? You can’t use the same variables for weighting and measuring and benchmarking or you’re just creating the results you want to see
- If we look at the core demographics, maybe we’ve looked at something that was important [love that statement, “maybe” because really we use demographics as proxies of humanity]
- [if you CAN weight the data, should you? If you’re working with a small sample size, you should probably just get more sample. If you’re already dealing with tens of thousands, then go ahead and make those small weighting adjustments]
Beyond the poll: responses to the failures of GE2015 #MRSlive @TweetMRS #MRX
Live blogged at MRS in London. Any errors, bad jokes, or comments in [] are my own.
The power of small data in understanding the unknowable by Cordelia Hay
- Used mobile Qual to understand voters, they were frustrated and disillusioned with politics
- Ethnographic approach helped understand what really was happening
- When so many important political events happen, things like a boy band can make people completely ignore the news
- People were more concerned with national issues over local issues
- People mattered much more than policy even though they say only policy matters
- People really only cared about the economy
- The winning party had people rallying around one single issue not many different issues
- Small data provides diagnosis, deep insight into specific audiences, Behavioral insight, vivid, co-creative and ethnographic
Notes from a pollster: how to move forward from GE2015 by Tom Mludzinski
- Narrative was driven by polling; the race was neck and neck between the top two parties
- Campaign rolling average had them identical
- Tried looking at difference between online polls and telephone polls
- 70% of telephone polls had a conservative lead but 56% of online polls had a labour lead
- We will have a new set of problems five years from now so need a broader more durable solution
- Using different methods of assigning unknown votes led to different results – squeeze questions, asking who they’d like to see as prime minister, who they related to
- Start by trying to get a national rep sample of voting population, but we don’t know who will actually vote, and people can’t predict their own behaviours in terms of whether they will vote
- They considered that past voting was a better predictor of future voting
- But this time, 12% more people than predicted said they would vote and did vote
- Older people are much more likely to vote, bottom ten turnout constituencies were labour constituencies
- Correlation extremely strong for social grade, higher affluence is higher turnout
- Maybe turnout was the biggest problem
- Online is more likely to want to remain in EU, maybe it’s also age and Internet access
Heuristics, hatred, and hair: forecasting elections the system 1 way by Tom Ewing and Orlando Wood
- Fame, feeling, and fluency
- Fame – if it comes readily to mind it must be a good choice
- Fluency – if I recognize a brand it must be a good choice
- Feeling – If I have a feeling about a brand it must be a good choice?
- Asked people to list as many political candidates as they can think of, ask how they feel about those candidates, and then ask whether the candidates have distinctive assets, whether personality, policy, or physical characteristics
- Trump is dominant and out in front of Hillary by a hair
- People named Clinton and Trump easily, but Donald Trump’s hair was more recognizable than other candidates’
- This election will be the lesser of two evils
- Hillary has much more “happiness” than trump but both trump and Hillary are hated by the electorate
- Trump has an advantage in fluency, most distinctive appearance, he owns the conversation
- People know all of his slogans
- Only #FeelTheBern is ahead of trump
- Republicans really hate Clinton but Democrats love her. Republicans really like Trump but they are far more frightened of him
- Hillary is more associated with the trappings of office and does better than Joe Biden
- When feeling is taken into account, they think Hillary will win
2016: The year of the outsider #PAPOR #MRX
live blogged at #PAPOR in San Francisco. Any errors or bad jokes are my own.
The Summer of Our Discontent, Stuart Elway, Elway Research
- regarding state of Washington
- it’s generally democratic
- between elections, more people are independents and then they drift to democrat
- independents are more socially liberal
- has become more libertarian
- don’t expect a rebellion to start in Washington state
- [sorry, too many details for me to share well]
Californians’ Opinion of Political Outsiders, Mark Baldassare, PPIC
- California regularly elects outsiders – Reagan, Schwarzenegger
- flavour is often outsider vs insider, several outsiders have run recently
- blog post on the topic – http://ppic.org/main/blog_detail.asp?i=1922
- they favour new ideas over experience
- 3 things are important – approval ratings of elected officials, people who prefer outsiders give officials lower approval, negative attitudes of the two party system
- majority think a third party is needed – more likely to be interested in new ideas over experience
- [sorry, too many details for me to share well]
Trump’s Beguiling Ascent: What 50-State Polling Says About the Surprise GOP Frontrunner, Jon Cohen & Kevin Stay, SurveyMonkey
- 38% of people said they’d be scared if trump is the GOP nominee
- 25% would be surprised
- 24% would be hopeful
- 21% would be angry
- 14% would be excited
- List is very different as expected between democrats and republicans, but not exactly opposite
- quality polling is scale, heterogeneity, correctable self-selection bias
- most important quality for candidates is standing up for principles, strong leader, honest and trustworthy – experience is lowest on the list
- Views on Trump’s Muslim statement change by the minute – at the time of this data: 48% approve, 49% disapprove, split as expected by party
- terrorism is the top issue for republicans; jobs AND terrorism are top for independents; jobs is top for democrats
- for republicans – day before paris 9% said terrorism was top, after paris 22%
- support for Cruz is increasing
- half of trump voters are absolutely certain they will vote for trump; but only 17% of bush voters are absolutely certain
- among republicans, cruz is the second choice even among trump voters
- trump has fewer voters who go to religious services weekly, least of all candidates; carson and cruz are on the high end
- trump voters look demographically the same but carson has fewer male voters and cruz has fewer female voters
- trump voters are much less educated, rubio voters are much more educated
Connected homes, political polls, and quicksand oh my! #CRC2015 #MRX
Live blogged at #CRC2015 in St. Louis. Any errors or bad jokes in the notes are my own.
Leveraging methodologies and optimizing your product: How NRG developed a connected home solution
- help consumers better understand how to use and control energy when we don’t really think about it unless it doesn’t work
- connect/smart homes can help you control anything from an app on your smart phone, lights on and off, check on kids, see what time people come home
- people will always want a $10 000 ferrari. stop asking about that. no one wants a $200 000 kia either. stop asking that.
- what is the most efficient product based on costs and preferences
- Learn about the efficient frontier – optimizing preferences and costs
- what features do people really like and which are undervalued
- must narrow down the features using max diff first
- people really wanted to be able to confirm that they had closed and locked the door
- price obviously had to be included
- there is no right answer but you get data to make decisions [totally agree. statistics never give you the right answer. they give you something to ponder]
Political polling 2016: what pollsters and corporate researchers can learn from each other
- “This election will finally prove that most market research is probably twaddle”
- research is used to find the idioms that people like and these words show up in speeches
- RDD has been the most preferred data collection mode, and in some cases still is
- compare a phone and online survey side by side, n=500 for both, 21 questions around 5 minutes
- housekeeping variables generally matched
- no major differences between the two groups for many variables on voting, immigration, economics
- study that appends voter data from registered voters
- questions were about favorability and support, e.g., i don’t really like them but i’m voting for them
- in single select questions, trump is favoured.
- trending data is more important than single point estimates
- brand liking is not always a good predictor of buying behaviour
- change in question wording can yield substantially different data
- online data can provide a reliable supplement, if not replacement, for phone surveys [in other words, there is NO perfect data collection method. be SMART in your data collection and interpretation]
Tiptoeing through innovation quicksand: methods to die for and methods that might kill you
- Gartner hype cycle for methods
- 100 responses so far, convenience sample – researchers, supply side, corporate; skewed quant
- asked about 34 techniques, no forced answers
- 3 people said they were fired for using a technique – virtual store research, prediction markets, A/B testing, emotion detection [i wouldn’t want to work there anyways!]
- if you did an online survey in 1996, it might have had a big impact on your career
- people said they were rewarded, promoted, or given a raise for using a new technique; it helped companies shift directions
- only 64% had used online surveys, half had used focus groups
- never use again – mail surveys, facial recognition
- Career builders, Career opportunities, Career investments, Career challenges
- Microsegmentation – can identify micromarketing action to take, but complex to implement
- Customer journey mapping – great holistic view of customer experience, too complex or externally focused
- Uplift modeling – trying to action individual, strong ROI,
- social media analytics – unprompted items of concern, difficult to decipher, unreliable data
- mobile intercept surveys – on the spot real data, hard to get participation
- neuromarketing – higher price, opinions vs actions
- microsurveys under ten questions – quick, better for niche audiences, low barrier of entry
- facial recognition – might feel it doesn’t give new information
The impact of social #ESOMAR #MRX
Live blogged from Esomar in Dublin. Any errors or bad jokes are my own.
When democracy fails to deliver by Ijaz Shafi Gilani and Jean-Marc Leger
- what explains satisfaction and dissatisfaction with democracy
- democracy is the worst form of government except for all the others – Winston Churchill
- Failed as a norm? no
- Failed in specific cases? yes
- 75% of people believe democracy is the best
- 50% believe they are ruled by the will of the people
- 35% of upper income americans believe a good way to govern is to have the army rule
- Nat rep, 52 countries, n=50 000, surveys 10 years apart
- countries who’ve practiced democracy the longest are most disillusioned
- correlates of dissatisfaction include:
- macroeconomic factors – economy, inequality, size of country
- demographic factors – gender, age, education
- identity factors – nationalism, patriotism, attitudes towards globalization
- Identity factors seemed to be most relevant for countries practicing democracy the longest
- political rights and civil liberties have taken a back seat, now its become flight of jobs and immigration
- linked to inability of govt to cope with “encroachment of globalization”; these people are most dissatisfied
- does democracy fail to deliver in a globalized world?
- democracy might need to reinvent itself
Ireland and same sex marriage by Eric Meerkamper and Aengus Carroll
- Bill Gates says he is struck by how important measurement is to the human condition
- we have a unique skillset and tools to measure
- we have relied too heavily on the same respondent for too long – Dan Foreman
- Random Domain Intercept Technology, based on people making typing errors in the browser bar
- 51 countries, 51 000 respondents
- should same sex marriage be legal
- seems like a safe question, but in many parts of the world this can mean the death penalty for you and even your family; people need anonymity to answer this question
- across 8 other countries with marriage equality, only about 50% of the population wanted it, so it is still risky
- about three quarters of people disagree with marriage equality in countries where sexual orientation can be a crime [naturally, you’ll be killed if you say otherwise!]
- yes campaign: what kind of country do you want to grow up in, it’s about human rights, inclusion
- no campaign wanted a civil partnership not marriage, said that kids need a mom and a dad
- 72% of young voters wanted same sex marriage which matched the campaign they used, focus on young people
- young people brought older people to come and vote
- marriage was not the issue, the issue was discrimination and exclusion
- this method allows safe measurement
Advancements of survey design in election polls and surveys #ESRA15 #MRX
Live blogged from #ESRA15 in Reykjavik. Any errors or bad jokes are my own.
I decided to take the plunge and choose a session in a different building this time. The bravery isn’t much to be noted as I’ve realized that the campus and buildings and rooms at the University of Iceland are far tinier than what I am used to. Where I’d expect neighboring buildings to be a ten minute walk from one end to the other, here it is a 30 second walk. It must be fabulous to attend this university where everything and everyone is so close!
I’m quite loving the facilities. For the most part, the chairs are comfortable. Where it looks like you just have a chair, there is usually a table hiding in the seat in front of you. There is instantly connecting and always on wifi no matter which building you’re in. There are computers in the hallways, and multiple plugs at all the very comfy public seating areas. They make it very easy to be a student here! Perhaps I need another degree?
Designing effective likely voter models in pre-election surveys
- voter intention and turnout can be extremely different. 80% say they will vote but 10% to 50% is often the number that actually votes
- democratic vote share is often over represented [social desirability?]
- education has a lot of error – 5% error rate, worst demographic variable
- what voter model reduces these inaccuracies
- behavioural models (intent to vote, have you voted; dichotomous variables) and resource-based models
- vote intention does predict turnout – 86% are accurate, also reduces demographic errors
- there’s not a lot of room to improve except when the polls look really close
- Gallup tested a two item measure of voting intention – how much have you thought about this election, how likely are you to vote
- the 2 item scale performed far better than the 7 item scale (error rates of 1.4% vs 4%)
- [just shown a histogram with four bars. all four bars look essentially the same. zero attempt to create a non-existent different. THAT’S how you use a chart 🙂 ]
- gallup approach didn’t work well, probability approach performed better
- best measure of voting intention = thought about election + likelihood of voting + education + voted before + strength of partisan identity
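The combined measure listed above could be operationalized as a simple additive likely-voter index. A minimal sketch follows; the weights, the 0 to 1 scaling, and the cutoff are purely illustrative assumptions, not values used by Gallup or the presenters:

```python
# Hypothetical likely-voter index combining the predictors named above.
# Weights and cutoff are invented for illustration only.
WEIGHTS = {
    "thought": 0.30,       # how much they've thought about the election
    "likelihood": 0.30,    # self-reported likelihood of voting
    "education": 0.10,
    "voted_before": 0.20,
    "partisanship": 0.10,  # strength of partisan identity
}

def likely_voter_score(thought, likelihood, education, voted_before, partisanship):
    """Each input is normalized to 0-1; returns a 0-1 propensity index."""
    return (WEIGHTS["thought"] * thought
            + WEIGHTS["likelihood"] * likelihood
            + WEIGHTS["education"] * education
            + WEIGHTS["voted_before"] * voted_before
            + WEIGHTS["partisanship"] * partisanship)

def is_likely_voter(score, cutoff=0.6):
    """Classify a respondent as a likely voter above an assumed cutoff."""
    return score >= cutoff

s = likely_voter_score(1.0, 0.9, 0.5, 1.0, 0.7)
```

In practice the weights would be estimated against validated turnout (e.g. by logistic regression) rather than assumed, which is the whole point of the validation work described in this session.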
polls on national independence: the scottish case in a comparative perspective
- [Claire Durand from the University of Montreal speaks now. Go Canada! 🙂 ]
- what happened in Quebec in 1995? referendum on independence
- Quebec and Scotland are nationalist in a British type system, proportion of non-nationals is similar
- referendum are 50% + 1 wins
- but polls have many errors; is there an anti-incumbent effect?
- “no” is always underestimated – whatever the no is
- are referendums on national independence different – ethnic divide, feeling of exclusion, emotional debate, ideological divide
- the No side has to bring together enemies and doesn’t have a unified strategy
- how do you assign non-disclosure?
- don’t know doesn’t always mean don’t know
- don’t distribute non-disclosures proportionally, they aren’t random
- asking how people would vote TODAY resulted in 5 points less nondisclosure
- corrections need to be applied after the referendum as well
- people may agree with the general demands of the national parties but not with the solution they propose. maintaining the threat allows them to maintain pressure for change.
- the Quebec newspapers reported the raw data plus the proportional response so people could judge for themselves
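The point above about not allocating nondisclosure proportionally can be made concrete with a toy example; the raw percentages and the 25/75 split are invented for illustration, not Durand’s actual corrections:

```python
# Sketch of the reporting choice described above: allocating the "don't know"
# group either proportionally (keeping the Yes/No ratio) or asymmetrically,
# reflecting the finding that the "no" side is always underestimated.
def allocate(yes, no, dk, yes_share_of_dk):
    """Split the don't-know percentage between Yes and No."""
    return yes + dk * yes_share_of_dk, no + dk * (1 - yes_share_of_dk)

yes, no, dk = 42.0, 46.0, 12.0                     # invented raw poll numbers
proportional = allocate(yes, no, dk, yes / (yes + no))
asymmetric = allocate(yes, no, dk, 0.25)           # assumed 25% of DKs break Yes
```

Publishing both versions side by side, as the Quebec newspapers did with the raw data, lets readers judge for themselves how much the nondisclosure assumption drives the headline number.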
how good are surveys at measuring past electoral behaviour? lessons from an experiment in a french online panel study
- study bias in individual vote recall
- sample size of 6000
- over-reporting of popular party, under-reporting of less popular party
- 30% of voter recall was inconsistent
- inconsistent respondents change their recall: changed parties, memory problems, concealing problems, said they didn’t vote, said they voted and then said they didn’t or vice versa
- could be any number of interviewer issues
- older people found it more difficult to remember but perhaps they have more voter loyalty
- when available, use vote recall from the pre-election survey
- using vote recall from the post-election survey underestimates voter transfers
- caution in using vote recall to weight samples
methodological issues in measuring vote recall – an analysis of the individual consistency of vote recall in two election longitudinal surveys
- popularity = weighted average % of electorate represented
- universality = weighted frequency of representing a majority
- used four versions of non/weighting including google hits
- measured 38 questions related to political issues
- voters are driven by political tradition even if outdated, or by personal images of politicians not based on party manifestos
- voters are irrational; the political landscape has shifted even though people see the parties the same way they were decades ago
- coalition formation aggravates the situation even more
- discrepancy between the electorate and the government elected
If math is hard, you can always do qualitative research #MRX
Yup, I heard that from a speaker on stage at the recent AAPOR conference. You see, because if you’re smart, then you’re probably doing quantitative research. Because quant is for smart people and qual is for dumb people. Because qual requires no skills, or at least the skills are basic enough for non-mathematical people to muddle through. Because qual isn’t really a valid type of research. Because nonprobability research is worthless (yup, I heard that too).
Perhaps I’ve taken the statement out of context or misrepresented the speaker. Perhaps. But that’s no excuse. Qual and quant both require smart people to be carefully trained in their respective methods. Each research method is appropriate and essential given the right research objective.
The marketing research industry has largely moved past this pointless debate over which method is better and right. (Neither.) Now it’s time for the polling industry to do the same.
Media Influence on Public Opinion #AAPOR #MRX
Prezzie #1: Do polls drive the news or vice versa
- used many popular news stories – ground zero mosque, occupy, gays in military, etc
- there was no clear relationship… it depends [so maybe that just means it’s random and we’re fishing for nothing]
Prezzie #2: perceptions of news coverage among blacks and hispanics
- survey done in 2014, pre-Ferguson, oversampled the minority groups
- the digital divide was not as expected, i.e. the assumption that minorities would have less internet access did not hold
- blacks use TV and cell phone more
- hispanics more likely to use cell phone, and less to use paper newspapers or computer/tablet
- 78% overall use smartphones and actually, more blacks use smartphones for news gathering
- the major difference is smartphone usage not race
- diversity of content on digital sources has not happened yet
- people do find it easier to keep up with news now compared to five years ago, same across all groups
- but the finding is not so good when trying to find news about their own community (racial community)
- 3 to 5% believe their community is not covered in the news
- a quarter believe their community is not reported accurately in the news
- the two groups go to different media to learn about their community: blacks go to local news organizations, hispanics go to ethnically focused media
Prezzie #3: political conspiracies
- for instance, the obama birth certificate, JFK assassination
- sharing information does not work, they are motivated by reasoning, people want to believe it, they aren’t motivated to get to THE answer, they are motivated to get to THEIR answer
- controversy debates may create the perceptions of a controversy even when there isn’t one
- counter arguments may lead people to hold their beliefs more strongly
- global warming coverage on Fox News is controversial in style, CNN uses a dominant style covering vaccines, birtherism was covered on NBC in a one-sided style
- there is a definite link between the types of news stations you watch and whether you believe in the hoaxes
- increased attention to current events leads people to be less likely to endorse hoaxes, except when people are exposed to controversial coverage, which creates controversy
- increased coverage by CNN saying that vaccines do not cause autism led to a 4% increase in people believing vaccines cause autism
Prezzie #4: how different types of voters respond to media reporting
- is there a media agenda effect and does it influence individual agendas
- used survey data over 4 years since 2009, surveys every 3 months
- the media agenda does affect individual agendas with regard to events but not for employment
- but individual characteristics do moderate this
- have to adapt model by event
Prezzie #5: survey literacy and poll interpretation
- how do people interpret polls in the media
- credibility of results depends on source characteristics, poll characteristics, and person characteristics – media source ideology, poll quality, reporting transparency, political interest and knowledge
- transparency initiatives – quality of polls being reported, sampling details
- used Amazon’s Mechanical Turk for data preparation
- 2 issues were gun control and abortion which weren’t in the policy agenda during the coding phase, and fairly close to 50/50 issues
- [more pages of huge regression results, sigh, come on AAPOR presenters, we can do better than this]
- credibility was affected by media source
- it also matters whether the general public is evenly split or more extreme
- education also had a significant effect
- motivated reasoning plays an important role in credibility, transparency matters for consumers and for experts, transparency increases credibility
Liked this session 🙂