Forget for a moment the debate about whether the MBTI is a valid and reliable personality measurement tool. (I did my Bachelor's thesis on it, and I studied psychometric theory as part of my PhD in experimental psychology, so I can debate forever too.) Let's focus instead on the MBTI because tests similar to it can be answered online and you can find out your result in a few minutes. It makes intuitive sense, and people understand the idea of using it to understand themselves and their reactions to our world. If you're not so familiar with it, the MBTI divides people into groups based on four continuous personality characteristics: introversion/extroversion, sensing/intuition, thinking/feeling, judging/perception. (I'm an ISTJ, for what it's worth.)
Now, in the market and social research world, we also like to divide people into groups. We focus mainly on objective and easy-to-measure demographic characteristics like gender, age, and region, though sometimes we also include household size, age of children, education, income, religion, and language. We do our best to collect samples of people who look like the census based on these demographic targets, and oftentimes our measurements are quite good. Sometimes we try to improve our measurements by incorporating a different set of variables like political affiliation, type of home, pets, charitable behaviours, and so forth.
All of these variables get us closer to building samples that look like the census, but they never get us all the way there. We get so close and yet we are always missing the one thing that properly describes each human being. That, of course, is personality. And if you think about it, in many cases we're only using demographic characteristics because we don't have personality data. Personality is really hard to measure and target, so we use age and gender and religion and the rest to help inform about personality characteristics. Hence why I bring up the MBTI: personality characteristics could be the perfect set of research sample targets.
The MBTI may not be the right test, but there are many thoroughly tested and normed personality measurement scales that are easily available to registered, certified psychologists. They include tests like the 16PF, the Big 5, or the NEO, all of which measure constructs such as social desirability, authoritarianism, extraversion, reasoning, stability, dominance, or perfectionism. These tests take decades to create and are held in veritable locked boxes so as to maintain their integrity. They can take an hour or more for someone to complete, and they cost a bundle to use. (Make it YOUR entire life's work to build one test and see if you give it away for free.) Which means these tests will not and cannot ever be used for the purpose I describe here.
However, it is absolutely possible for a psychologist or psychological researcher to build a new, proprietary personality scale which mirrors standardized tests, albeit in a shorter format, and performs the same function. The process is simple. Every person who joins a panel answers ten or twenty personality questions. When they answer a client questionnaire, they get ten more personality questions, and so on, and so on, until every person on the panel has taken the entire test and been assigned to a personality group. We all know how profiling and reprofiling works, and this is no different. And now we know which people are more or less susceptible to social desirability. And which people like authoritarianism. And which people are rule bound. Sound interesting given the US federal election? I thought so.
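To make the mechanics concrete, here is a minimal sketch of that split-administration process. Everything below is hypothetical: the item counts, trait names, and the high/low grouping rule are invented for illustration, not drawn from any real scale.

```python
# A minimal sketch of split-administration profiling: each panellist answers a
# few personality items per survey until the (hypothetical) full scale is done.
ITEMS_PER_SURVEY = 10   # assumed batch size per client questionnaire
TOTAL_ITEMS = 50        # assumed length of the short proprietary scale

def next_items(answered: set[int]) -> list[int]:
    """Return the next batch of unanswered item numbers for this panellist."""
    remaining = [i for i in range(TOTAL_ITEMS) if i not in answered]
    return remaining[:ITEMS_PER_SURVEY]

def assign_group(scores: dict[str, float]) -> str:
    """Toy assignment: label each trait high (H) or low (L) at a 0.5 midpoint."""
    return "".join("H" if v >= 0.5 else "L" for v in scores.values())

answered = {0, 1, 2}  # items this person has already completed
print(next_items(answered))  # the next ten outstanding items
print(assign_group({"social_desirability": 0.7, "authoritarianism": 0.2}))
```

Once every item has been administered, the panellist's group label becomes just another quota variable, the same way age or region is used today.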
So, which company does this? Which company targets people based on personality characteristics? Which company fills quotas based on personality? Actually, I don't know. I've never heard of one that does. But the first panel company to successfully implement this method will be vastly ahead of every other sample provider. I'd love to help you do it. It would be really fun. 🙂
Live note taking at #AAPOR in Austin Texas. Any errors or bad jokes are my own.
Moderator: Lisa Drew, two.42.solutions
RAND 2016 Presidential Poll Baseline Data – PEPS; Michael S. Pollard, RAND Corporation; Joshua Mendelsohn, RAND Corporation; Alerk Amin, RAND Corporation
- RAND is a private nonprofit company
- 3000 people followed at six points throughout the election, starting with a full baseline survey in December, before the candidates really had an effect: opinions of political issues and potential candidates, attitudes towards a range of demographic groups, political affiliation and prior voting, and a short personality questionnaire
- Continuously in field at first debate
- Recruited via RDD and then offered laptops or Internet service if needed
- Asked people to state their chance of voting, and of voting for the Democrat, the Republican, or someone else, out of 100%
- Probabilistic polling gives an idea of how people might vote
- In 2012 it was one of the most accurate popular vote systems
- Many respondents have been surveyed since 2006, providing detailed profiles and behaviors
- All RAND data is publicly available unless it’s embargoed
- Rated themselves and politicians on a liberal to conservative scale
- Perceptions of candidates have changed: Clinton, Cruz, and the average Democrat are seen as more conservative now, Trump more liberal; Sanders, Kasich, and the average Republican didn't move at all
- Trump supporters more economically progressive than Cruz supporters
- Trump supporters concerned about immigrants and support tax increases for the rich
- People who feel "people like me don't have a say in government" are more likely to support Trump
- Sanders now rates higher than Clinton on "cares about people like me"
- March – D was 52% and R was 40%, but we are six months away from the election
- Today – Clinton is 46% and Trump is 35%
- Didn't support Trump in December but do now – older employed white men born in the US
- People who were less satisfied in life in 2014 are more likely to support Trump now
- Racial resentment and white racism predict Trump support [it said white ethnocentrism but I just can't get behind hiding racism in pretty words]
Cross-national Comparisons of Polling Accuracy; Jacob Sohlberg, University of Gothenburg; Mikael Gilljam, University of Gothenburg
- Elections are really great [ made me chuckle, good introduction 🙂 ]
- We've seen a string of failures in many different countries, but we forget about the accurate polls; there is a lot of variability
- Are some elections easier than others? Is this just random variance? [well, since NO ONE uses probability sampling, we really don't know what MOSE and MONSE are]
- Low turnout is a problem
- Strong civil society has higher trust and maybe people will be more likely to answer a poll honestly
- Electoral turnover causes trouble, when party support goes up and down constantly
- Fairness of elections, when votes are bought, when processes and systems aren’t perfect and don’t permit equal access to voting
- 2016 data
- Polls work better when turnout is high, civil society is strong, electoral stability is high, and vote buying is low [we didn't already know this?]
- Only electoral turnover is statistically significant in the multivariate analysis
Rational Giving? Measuring the Effect of Public Opinion Polls on Campaign Contributions; Dan Cassino, Fairleigh Dickinson University
- Millions of people have given donations, it’s easier now than ever before with cell phone and Internet donations
- Small donors have given more than the large donors
- Why is Bernie not winning when he has consistently outraised Hillary?
- What leads people to give money
- Wealthy people don’t donate at higher rates
- It’s like free to play apps – need to really push people to go beyond talking about it and then pay for it
- Loyalty-based donors give money to the candidate they like, and might give more if they see her struggling
- Hesitancy-based donors only give if they know they are giving to the right candidate, so they wait
- Why donate when your candidate seems to be winning?
- Big donors get cold called, but no one gets personalized phone calls if you're poor
- Horse race coverage is rational: coverage goes to people doing well, and we don't really learn about their policies
- Lots of coverage on Fox News doesn't mean someone is electable
- People look at cues like that differently
- In 2012 sometimes saw 5 polls every day, good for poll aggregators not good for people wanting to publicize their poll
- You want a dynamic race for model variance
- Used data from a variety of TV news shows, Fox, ABC, CBS, NBC
- Donations under $200 don't HAVE to be reported, and there were many zero-dollar contributions – weirdness that needed to be cleaned out
- Predict contributions will increase when Romney is threatened in the polls
- Predict small contributions will increase in response to good coverage on Fox News
- Fox statements matter for small contributors, doesn’t matter which direction
- Network news doesn’t matter for small contributors
- Big donors are looking for more electable candidates, so if Fox hates them then we know they're electable and they get more money
- Romney was a major outlier though, the predictions worked differently for him
Prediction Science: How Well Can We Predict the Future? Jon Puleston, Lightspeed GMI #NetGain2015 #MRX
Live blogging from the Net Gain 2015 conference in Toronto, Canada. Any errors or bad jokes are my own.
Prediction Science: How Well Can We Predict the Future?
Jon Puleston, VP of Innovation at Lightspeed GMI
- [Jon had the room take a quiz on a random series of questions]
- What has he learned about predictions? We're better at predicting certain things, like behaviors, and not so good at predicting prices.
- You can isolate people who are good at predicting, perhaps find them and use their super power 🙂
- Prediction isn’t dependent on sample size. One person can be sufficient. It’s more about sample diversity and the intelligence of that sample.
- 16 is a crowd if they are all well-informed. That’s all you need.
- You could ask 1000 people or 5 people about the weather next week, but you really only need to ask the 1 person who saw a weather forecast.
- How do you aggregate crowd wisdom – mean/median/mode
- Crowds mean that errors get distilled out
- A crowd of people predicted the price of the ipad within 1%
- But without any knowledge, the crowd is ignorant. People NOW can’t predict the weight of a cow.
- Prediction is littered with cognitive biases – 68% of people will say that a coin will toss heads because we always hear ‘heads’ first
- People’s preferences for wine depend on whether you ask “Who prefers red wine?” vs “Who prefers white wine?”
- People who check their emails before breakfast are more likely to say people check their email before breakfast
- Emotions get in the way of making valid predictions. We are more positive about our own teams vs other teams when predicting score counts.
- Do you clean up after a meeting vs do people clean up after meetings? People say they do but they don't. [I do. Even when I wasn't in the meeting.]
- Read: http://www.amazon.com/Expert-Political-Judgment-Good-Know/dp/0691128715
- only 48% of stock market gurus' predictions were correct
- People who 'bet' 1 unit are not as good as those who bet 2, but betting 3 is a little better. People who bet against are really good.
- After 15 people predict and other people see those predictions, they start to predict the same way and the answers don't move.
- But in an independent voting method, larger surveys are better than 15 people
- Best predictive market situations allow sharing of information, let people discuss and debate
- e.g., in guess which mug is most popular, someone will suggest mugs are good gifts, someone will suggest lots of people garden, people decide that the gardening mug will be most popular
- Think of board room meetings where they didn’t discuss things before they vote on a decision. Stray comments are problematic.
- Try dividing up the herd and then recombining the groups back into one. This helps improve accuracy, just like how we run 3 focus groups, not 1.
- Let people change their opinions in surveys [We NEVER let people do that!]
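The mean/median/mode question raised in the talk can be illustrated in a few lines of Python. The guesses below are invented; the point is simply that the aggregation choice matters when a crowd contains outliers.

```python
# Aggregating crowd predictions three ways: mean, median, and mode.
# One wild outlier (700) drags the mean but leaves the median and mode alone.
from statistics import mean, median, mode

guesses = [480, 500, 510, 500, 495, 700, 500]  # invented price guesses

print(mean(guesses))    # pulled upward by the 700 outlier
print(median(guesses))  # robust to the outlier
print(mode(guesses))    # the most common single guess
```

A robust aggregate like the median is one simple way errors get "distilled out" of a crowd, as the talk put it.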
The Power Of Big Data: How We Predicted the World’s Largest Music Poll Using Social Media by Nick Drewe #CASRO #MRX
Live blogging from the #CASRO tech conference in Chicago. Any errors or bad jokes are my own.
The Power Of Big Data: How We Predicted the World’s Largest Music Poll Using Social Media by Nick Drewe, Creative Technologist, The Data Pack
- it used to be taboo to say your real name online
- there is no line between what is signal and what is noise. what is gold to me is trash to you
- when you know that hundreds of thousands of people have posted noise about a brand, it's no longer noise
- Radio station called Triple J, a nationally funded station, like NPR or the BBC, aimed at a younger audience. They post the hottest 100 songs as voted by listeners throughout the year. it's a national institution. 1.4 million votes cast last year.
- results used to be closely guarded right up until the number 1 song
- the station had people share what they voted for in hopes of getting more people to vote. every page was hosted on a unique URL, which suggests every vote has a page. other little bits of code with info were on the page too. if they could find and collect enough of these pages, they might be able to predict the results.
- used the twitter api and found 40,000 votes in a few minutes, a sample size of 3 or 4%
- created a list that seemed realistic but didn’t know what to do with it yet
- set up a website where people could see their predictions and play the songs
- turned the website into a disclaimer, people had to scroll way way way down to get to the number one song
- got a ton of traffic, more people saw it than people who voted
- made the front page of one of the biggest newspapers
- not yet sure how accurate they were
- a colleague ran a bootstrap of the 3.5% sample and concluded they'd get 90 songs with 100% accuracy or 95 songs with 90% accuracy, and the #1 song with 83% accuracy
- the next year, the station closed all the social sharing features
- found 400 votes that were posted as screencaps to twitter, their confirmation emails
- but photos are also posted on instagram, found 20 000 votes there after searching for them
- even if you really really really don’t want people to share something online, they will do it anyways
- predicted 82 out of 100 songs in the second year with half the amount of data
- it was an experiment in social data
- most networks have free APIs to share and use data, most networks don’t really know what to do with the data
- posts don’t have to sit in isolation online, we can turn these into insights
- people don’t post the same things in social media that they post on surveys
- 60 million posts on instagram every day, rich with metadata, a photo contains geolocation, 20 million photos a day have a location [i always turn off my geolocation, decline, decline, decline]
- can search on username, hashtag, and location – but it must be part of a hashtag not a description
- youtube is still the largest music sharing site
- can use youtube, twitter, facebook data to predict music you will like [try me – rankin family, leahy, michelle branch, vanessa carlton]
- a single message is rarely valuable but a group of messages is, particularly with all the metadata
- every link tells you something about the person who shared it – what they like, don’t like, know and don’t know, cat gifs too
- google’s page rank looks at links to your website, more websites gets you a higher page rank, and greater likelihood to appear in a search result – this is a social graph and can be done on a personal level, not just what they’ve shared about a specific topic but everything else they’re doing
- [Nick is wearing the same shirt today that is shown in his bio. LOVE that as I find matching people to photos very difficult]
- everyone should try a social api; it's not as difficult to use as you think it is. the point isn't to start writing code but to start thinking about big data and social data in a different way
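The bootstrap idea mentioned earlier (estimating chart accuracy from a ~3.5% vote sample) can be sketched roughly as follows. The vote data here is simulated with an invented popularity distribution, and the stability check is a simplification of whatever analysis the team actually ran.

```python
# A hedged sketch of bootstrapping a scraped vote sample: resample the votes
# with replacement and check how often the predicted #1 song stays the same.
import random
from collections import Counter

random.seed(0)
# Simulated popularity: song 0 most popular, with a long tail behind it.
songs = list(range(200))
weights = [1 / (rank + 1) for rank in range(200)]
votes = random.choices(songs, weights=weights, k=5000)  # the "scraped" sample

def top_n(sample, n=10):
    """Return the n most common songs in a vote sample."""
    return [song for song, _ in Counter(sample).most_common(n)]

baseline_top = top_n(votes, 1)[0]
resamples = 200
hits = sum(
    top_n(random.choices(votes, k=len(votes)), 1)[0] == baseline_top
    for _ in range(resamples)
)
print(f"#1 song stable in {hits / resamples:.0%} of bootstrap resamples")
```

The same resampling loop, run over the full top 100 instead of just the #1 slot, is how you would arrive at statements like "90 songs with 100% accuracy."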
I’ve talked about probability sampling before. Unless your population is your immediate family, your immediate colleagues, or your immediate classmates, chances are probability sampling is a theoretical idea. The premise behind probability sampling is that by generating a sample that approximates the population in important characteristics, you will be able to accurately predict population values with sample values.
So let me propose a better system. Forget probability sampling. Strive for predictive sampling. In this sense, select samples that consistently and accurately predict the phenomenon in which you are interested. If a bunch of twelve-year-olds standing outside the candy store accurately predicts the weekly cast-off on American Idol, then it is a GREAT predictive sample. If a bunch of eighty-year-old grandmas accurately predicts each state election, then it is a GREAT predictive sample. That is predictive sampling.
I truly don't care what a sample looks like as long as it reliably and accurately predicts future behaviour. Isn't that what you too are striving for? I think so. The trick is, though, what exactly is the predictive sample? Whoever discovers that will be a wealthy person.
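As a toy illustration of the idea, here is what scoring candidate samples purely by past predictive accuracy, rather than demographic fit, might look like. All the data is invented.

```python
# Predictive sampling, sketched: rank candidate samples by how often their
# past predictions matched actual outcomes, ignoring what the samples look like.

def accuracy(predictions: list[str], outcomes: list[str]) -> float:
    """Fraction of predictions that matched the realized outcome."""
    hits = sum(p == o for p, o in zip(predictions, outcomes))
    return hits / len(outcomes)

outcomes = ["A", "B", "A", "A", "B"]  # what actually happened (invented)
samples = {
    "census-matched panel": ["A", "B", "B", "A", "B"],
    "kids outside the candy store": ["A", "B", "A", "A", "B"],
}
best = max(samples, key=lambda name: accuracy(samples[name], outcomes))
print(best)  # the group with the best track record wins, whatever it looks like
```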
Welcome to my #Netgain6 MRIA live blogs. What happens at St. Andrews Conference Centre, gets blogged for all to read about. Each posting is published immediately after the speaker finishes. Any inaccuracies are my own. Any silly comments in [ ] are my own. Enjoy!
Adam de Paula – Managing Director, Sentis Market Research, Inc
Measuring Without Needing to Ask: Using Implicit Measures to Predict Choice
- It's not the consumer's job to know what they want – Steve Jobs
- People have no idea why they’re doing what they’re doing so they try to make something up that makes sense – Clotaire Rapaille [TOTALLY agree]
- Surveys are about measuring explicit attitudes and behaviours
- Explicit measures rely on conscious thought and have limited predictive value
- Implicit measures tap preferences and feelings indirectly
- You can predict divorce over 3 years with non-verbal measures – eye rolling, sneering, silence, monosyllabic mutterings
- Predict litigation with dominance and lack of concern
- Predict career based on people’s names – Dennis more likely to be a dentist, Louis more likely to move to St. Louis – Unconsciously, we associate things with ourselves. Dennis won’t admit it but statistics will prove it.
- BE [behavioural economics] is how people really make judgments and choices. The old model of people thinking through all the options is off the table now.
- Implicit associations between words – old/weak, beauty/youth. Activation of one word, automatically activates the second word.
- People are bad at accurately reporting on what has influenced them. We can prove people are influenced by the space or size but people won’t recognize that.
- Group task – tap your left knee or right knee to indicate whether a word belongs to a younger or older male [room full of tapping sounds now]. Now the task is good vs bad [quick tapping from everyone]. Now the task is young and good vs older and bad [woah …. tapping sounds are few and slow]. As tapping gets slower, people are having a harder time matching a picture to multiple words that may or may not represent a cohesive theme.
- Great process for stereotyping research because people don’t feel comfortable saying what they really feel. There’s no social desirability here.
- Useful conditions for implicit measures – quick judgments, many alternatives, early life attitudes
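The knee-tapping exercise above is essentially a reaction-time implicit association task. A simplified scoring sketch follows, with invented reaction times; real IAT scoring (the "D score") also standardizes by pooled variability within blocks and penalizes error trials.

```python
# Simplified implicit-association scoring: slower responses on incongruent
# pairings (e.g. young+bad) than congruent ones (e.g. young+good) indicate a
# stronger implicit association. Reaction times (ms) below are invented.
from statistics import mean, stdev

congruent = [540, 560, 530, 550, 545]    # categories paired "naturally"
incongruent = [720, 690, 750, 710, 730]  # categories paired against the bias

all_trials = congruent + incongruent
# Divide the mean latency difference by overall variability, roughly in the
# spirit of the IAT D score.
d_like = (mean(incongruent) - mean(congruent)) / stdev(all_trials)
print(f"association strength (D-like score): {d_like:.2f}")
```

Because the measure comes from timing rather than self-report, there is nothing for social desirability to act on, which is the point made above about stereotyping research.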