It’s been a long day full of short presentations, and a few things come to mind.
- I’ve advocated on behalf of people who participate in our surveys for so long that I sometimes feel like a broken record. But today, on numerous occasions, speakers specifically demanded that we treat people like human beings. We might think we’re already doing that, but the percentage of people who don’t know what being screened out means suggests otherwise. We still write really long surveys and we still write them as if we’re Charles Dickens, not J. K. Rowling. What I heard today is that more and more researchers are starting to think about and talk to research participants as if they are actual human beings. Strange concept. I look forward to seeing this theory become reality.
- The age of marketing products has ended. We are now listening to what people want and trying to respond to those needs. Brands that want to remain relevant and in demand also need to treat people like people. (Is this strange concept a trend?) Let’s remember that most people don’t want a relationship with most of the brands they use. Hey, I don’t even KNOW most of the brands I use. Out of the thousands of brands I use, I only have space to remember a few of them by name. Sorry carpets, tiles, shingles, shelving and more. Just because you want your brand to be everyone’s best friend doesn’t mean everyone wants to be friends with your brand.
- Did you know that “Garbage in, garbage out” comes from our good friend Charles Babbage, who lived in the 1800s? He actually said something closer to “to put in the wrong data and expect the right answers is absurd.” Well, is YOUR survey/focus group/big data putting in the wrong data? And are you still expecting the right answers? We’re so used to the “garbage in, garbage out” phrase that we automatically discard it as not being relevant to US. But is it? Maybe it’s time to think about it again.
- “Can’t say, won’t say” is a fun little problem for most surveys and traditional research methods. I would never say I’m racist or sexist or homophobic because I know those things are bad. I also can’t tell you why I like the colour pink and hate the colour black. I can’t and I won’t. These few words are a good reminder that the absolute best methodology is the multi-mode methodology. What can’t be measured with one method will be measurable with another. And don’t think otherwise.
- Please explain this to me. Why do we keep on saying that innovation isn’t coming from market research? Of course it is. If you are in the business of understanding consumer behaviour, you work in market research. I don’t care if you call yourself a techie or a programmer or some funky weird fad title. What is the real problem? Well, people who are in traditional market research paths have defined market research far too narrowly and can’t see the light for their blinders. Is a doctor someone who is skilled in the ancient art of bloodletting, or is it someone who is skilled in healing people? It’s no different with market research. Market researchers focus on consumer behaviour HOWEVER that is measured.
- I learned today that panel companies offer no value because anyone can go online and use DIY services. Well, if panel companies were simply DIY companies, I wouldn’t be interested in them either. In fact, I’d run very quickly from them. You see, I’ve worked on the panel side of full service research companies for quite a few years. I’m the person behind the scenes running data quality processes to evaluate individual responders and determine who is and isn’t earning their keep with engaged and honest answers. I’m the person figuring out new algorithms for generating more representative samples. I’m the person making sure your dataset isn’t a big pile of crap. DIY sampling? I’m all for it. But only if it’s DIY sampling of good quality panelists.
- Lastly, the best conference sales pitch is a great presentation. And a great presentation includes ZERO mentions of your company name. ZERO mentions like “Our company works hard to….” And ZERO videos about your great products. Great presentations DO include engaging, entertaining, personable research experts. Try it. You’ll like it.
Why. Tell me why.
I know it’s not because the speaker organizers didn’t try. I know it’s not because there aren’t qualified female speakers. So here is what that leaves:
- Women think they aren’t qualified (Sorry, there are plenty of qualified women)
- Women think they have nothing new to talk about (Sorry, women have plenty to talk about)
- Women are too busy (Sorry, you’re no busier than anyone else)
- Women are terrible speakers (Sorry, you’re no worse than anyone else)
- Women aren’t submitting speaker proposals… Well?
- Women are turning down speaker requests… Well?
So ladies, the next time a request for proposals comes around, submit a proposal! Think about that awesome project you just worked on and turn it into a presentation. Ask a great speaker to mentor you so you feel more comfortable as a speaker yourself. Make the time to do it. It’s good for you and your career. Diversity comes in all forms and you are one of them.
Submit. Speak. Make me proud. :)
Well, my little one, if you insist. Just one more bedtime story.
A long, long time ago, a bunch of people who really loved weather and biology and space and other areas of natural science noticed a lot of patterns on earth and in space. They created neato datasets about the weather, about the rising and setting of the sun, and about how long people lived. They added new points to their datasets every day because the planets always revolved and the cells always grew in the petri dish and the clouds could always be observed. All this happened even when the scientists were tired or hungry or angry. The planets moved and the cells divided and the clouds rained because they didn’t care that they were being watched or measured. And, the rulers and scales worked the same whether they were made of wood or plastic or titanium.
Over time, the scientists came up with really neat equations to figure out things like how often certain natural and biological events happened and how often their predictions based on those data were right and wrong. They predicted when the sun would rise depending on the time of year, when the cells would divide depending on the moisture and oxygen, and when the clouds would rain depending on where the lakes and mountains were. This, my little curious one, is where p-values and probability sampling and t-tests and type 1 errors came from.
The scientists realized that using these statistical equations allowed them to gather small datasets and generalize their learnings to much larger datasets. They learned how small a sample could be or how large a sample had to be in order to feel more confident that the universe wasn’t just playing tricks on them. Scientists grew to love those equations and the equations became second nature to them.
It was an age of joy and excitement and perfect scientific test-control conditions. The natural sciences provided the perfect laboratory for the field of statistics. Scientists could replicate any test, any number of times, and adjust or observe any variable in any manner they wished. You see, cells from an animal or plant on one side of the country looked pretty much like cells from the same animal or plant on the other side of the country. It was an age of probability sampling from perfectly controlled, baseline, factory bottled water.
In fact, statistics became so well loved and popular that scientists in all sorts of fields tried using them. Psychologists and sociologists and anthropologists and market researchers started using statistics to evaluate the thoughts and feelings of biological creatures, mostly human beings. Of course, thoughts and feelings don’t naturally lend themselves to being expressed as precise numbers and measurements. And thoughts and feelings are often not understood, or are actively misunderstood, by the very people who hold them. And thoughts and feelings aren’t biologically determined, reliable units. And worst of all, the measurements changed depending on whether the rulers and scales were made of English or Spanish, paper or metal, or human or computer.
Sadly, these new users of statistics grew to love the statistical equations so much that they decided to ignore that the statistics were developed using bottled water. They applied statistics that had been developed using reliable natural occurrences to unreliable intangible occurrences. But they didn’t change any of the basic statistical assumptions. They didn’t redo all the fundamental research to incorporate the unknown, vastly greater degree of randomness and non-randomness that came with measuring unstable, influenceable creatures. They applied their beloved statistics to pond water, lake water, and ocean water. But they treated the results as though they came from bottled water.
So you see, my dear, we all know where statistics in the biological sciences come from. The origin of probability sampling and p-values and margins of error is a wonderful story that biologists and chemists and surgeons can tell their little children.
One day, too, perhaps psychologists and market researchers will have a similar story about the origin of psychological statistics methods to tell their little ones.
If you haven’t yet seen the book “I can be a computer engineer” starring Barbie, you’re in for a disappointing treat. The title sounds great but when you read that Barbie needs two boys to actually do the programming for her, you’ll start shaking your head.
Fortunately, to the rescue comes Feminist Hacker Barbie, a website where you can correct all the words of the story, however you see fit. You’re welcome.
Given the backlash, some people have taken the meme to the complete other extreme, making the boys look stupid. Either way, the point has been made. Barbie CAN be a computer engineer. By herself. Without being rescued by someone else.
Follow the tweetstream here.
Do feel free to share the link to YOUR Hacker Barbie and I’ll include it here.
Well, it’s that time of year again!
Regardless of which holiday you celebrate and even if you celebrate the holiday of “I deserve a treat today”, you’re sure to find a statistics gift for yourself or your loved ones below. Just click on the image to go to the website and order. Go! Quickly before they run out! Shirts, cups, hats, toddler toys, and more, they’re all here.
Excel has a Venn diagram option in its SmartArt, but three pretty much identical circles don’t do it for me. When the sizes of the circles are supposed to mean something, I want them to look the part as much as I can.
So, instead of using SmartArt, I do it manually. I create three circles and then make them the right size. What this technique does NOT do is size the overlapping regions properly, but for me it’s still better than three identical circles. And it also doesn’t account for whether people perceive a circle’s size by its height or by its area.
Step #1: I’ve put my raw data into Excel though that step is completely unnecessary. For this technique, the data table can live solely in your brain. Using the ‘insert shape’ option, choose a circle. If you hold the ‘shift’ key down while you draw the circle, it will turn into a perfect circle.
Step #2: Copy the circle so that you have three circles, one for each circle of the Venn diagram. Now, select one of the circles and then right click. A menu will appear and you can choose the size and properties option. This option lets you choose the exact size of the circle. First, check the ‘Lock aspect ratio’ box. Second, in the height box, type in the number that represents the size of the circle. Don’t worry if the circle is way too big or small. Just get the number right. Repeat for each of the three circles.
Step #3: Now you should have 3 circles that represent the 3 sizes in the right proportions. You can now reposition the circles so that they overlap properly. If the circles are way too big or small, then use the + or – options in the bottom right of the screen. The circles will then fit into the screen.
Step #4: Now you can re-colour the circles or make the circle lines transparent. Just right click on the circle you want to revise and choose line and fill until you’ve got it the way you like. You can also hide the worksheet gridlines by clicking the cross-hatched box at the top left to select every cell and then filling the cells with white. Now you can treat the object as you might any other in Excel. Myself, I make the screen as large as possible, screen cap it, paste it into the Paint program on my machine, and then save it as a brand new jpg image. You’re done!
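If you’d rather let code do the sizing arithmetic, here’s a minimal sketch in Python. The group names and sizes are made up purely for illustration. The key point, tied to the height-versus-area caveat above: if you want the circle AREAS to be proportional to your values, the height you type into Excel’s size box should be proportional to the square root of each value, not the value itself.

```python
import math

# Hypothetical group sizes (illustrative only, not real data)
sizes = {"Group A": 400, "Group B": 225, "Group C": 100}

# Scale so the largest circle gets this height in Excel's size box
max_size = max(sizes.values())
max_diameter_cm = 8.0

# Area is proportional to diameter squared, so to make AREAS
# proportional to the values, diameters must scale with sqrt(value).
diameters = {
    name: max_diameter_cm * math.sqrt(value / max_size)
    for name, value in sizes.items()
}

for name, d in diameters.items():
    print(f"{name}: set height/width to {d:.2f} cm")
```

Typing the raw values straight into the height box, as in Step #2, makes the diameters proportional instead; pick whichever matches how you want readers to perceive the sizes.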
Statistics are boring. They’re hard. They’re useless. You’ll never use them in real life.
Oh, how wrong that is. I’ll agree that if you aren’t blessed with the genes that make math and statistics a piece of pie (mmmm, pie), then yes, statistics are hard. But there are innumerable real-life examples to show just how important it is to be comfortable with statistics.
Sports: If you’re a fan of sports, you no doubt are bombarded with statistics throughout the season just like these interpretations of statistics shared by James Conley on the Pensburgh. The headlines are exciting but the reality of each headline is simple – they mislead and even outright lie. If you understood statistics, you’d immediately see for yourself what the numbers really said.
HEADLINE! Pittsburgh’s penalty kill is going to be an Achilles’ Heel this season!
(The team has killed 18 straight chances over its last five games.)
BREAKING! The team is going to walk away with the Metropolitan Division again!
(They’re in second place, and half the division is within a point of catching them.)
THIS JUST IN! The Penguins just can’t put away teams late in the game!
(They’ve outscored their opponents 5-0 in the third period of two straight games, both wins.)
Medicine: How many commercials on TV and ads in magazines extol the virtues of amazing new drugs, perhaps even drugs that you are desperate to try to alleviate your own health issues? If you understood statistics, you would know right away when the ads were misleading. You’d spot when the sample sizes were too small to be reliable, when the effect size was too small to be meaningful, or when the lack of a test-retest design suggested insufficient testing.
Sometimes companies egregiously exaggerate how well their drugs work. In a brochure given to doctors and nurses last year, the Japanese drug company Eisai claimed that its Dacogen drug helped 38% of patients with a rare blood cell disorder in a clinical study. This figure was false, the FDA said in a November 2009 warning letter. In fact, the figure was taken from a tiny subgroup of patients who responded well to the drug. When all patients in the study were included, the real response rate was a much less impressive 20%, the FDA noted.
Read more on this and other misleading advertisements here.
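To see how subgroup cherry-picking works arithmetically, here’s a small Python sketch. The patient counts are hypothetical, chosen only to reproduce the 38% versus 20% gap described above; the real study sizes weren’t reported here.

```python
import math

# Hypothetical counts chosen to match the percentages in the
# Dacogen example; the actual study sizes are not given.
total_patients = 200
total_responders = 40          # 20% overall response rate

subgroup_patients = 50
subgroup_responders = 19       # 38% in the favourable subgroup

overall_rate = total_responders / total_patients
subgroup_rate = subgroup_responders / subgroup_patients

print(f"Overall response rate:  {overall_rate:.0%}")
print(f"Subgroup response rate: {subgroup_rate:.0%}")

# A small subgroup also carries wide uncertainty: the 95% interval
# on the subgroup rate spans roughly +/- 13 percentage points.
se = math.sqrt(subgroup_rate * (1 - subgroup_rate) / subgroup_patients)
print(f"95% CI half-width on subgroup rate: +/- {1.96 * se:.0%}")
```

Same drug, same study; the headline number depends entirely on who you decide to count.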
Politics: Political polling is becoming more and more prominent in the news. If you had a better understanding of statistics, you would know when to trust the polls. You would know why percentages don’t always add to 100, why polls ‘weight’ data, or why the margin of error is ridiculously important (even if you don’t have a random sample).
Seven hundred randomly selected New York likely voters were interviewed by landline and cell telephone between October 1 and November 1, 2014. The margin of sampling error is +/- 3.6 percent. The data have been weighted to adjust for numbers of adults and telephone lines within households, sex, age, and region. Due to rounding, percentages may not sum to 100%. Responder numbers in each demographic may not equal the total respondent number due to respondents choosing not to answer some questions.
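The quoted margin of error is easy to sanity-check yourself. Here’s the standard worst-case calculation for a simple random sample of 700, which lands right around the ±3.6 percent the pollster reported (small differences come from rounding and design adjustments).

```python
import math

# Worst-case margin of error for a simple random sample:
# MOE = z * sqrt(p * (1 - p) / n), which is largest at p = 0.5.
n = 700
z = 1.96   # 95% confidence
p = 0.5    # worst case

moe = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: +/- {moe:.1%}")
```

Note that this formula assumes a true random sample; weighting and non-random selection, as discussed above, make the real uncertainty larger than the formula admits.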
No matter how you look at it, statistics is one of the most important subjects you can study. It’s in your best interest to sign up for a class now.
- Data Tables: The scourge of falsely significant results #MRX (lovestats.wordpress.com)
- Proud to be a member of survey research #MRX (lovestats.wordpress.com)
Even though this infographic is out of date, having first been published in January 2014, the points it lays out are still relevant today. Jade Furubayashi from Simply Measured describes the Twitter practices of the top brands including how many times they tweet every day and how engagement is affected by the number of followers. But don’t misinterpret that correlation by buying yourself 100,000 followers. Paid followers won’t add to your engagement and they won’t love and adore your brand by sharing, tweeting, and retweeting. Only genuine brand love creates engagement.
- Interesting infographic: How your brain sees a logo (lovestats.wordpress.com)
- Missing Data: Whose problem is it anyways? (web.peanutlabs.com)
- 13 tips for giving the worst presentation ever (lovestats.wordpress.com)
- How women should ask for a raise if they don’t want to follow Microsoft’s CEO advice of Trust Karma (lovestats.wordpress.com)
On the Minitab Blog, Carly Barry listed a number of common and basic statistics errors. Most readers would probably think, “I would never make those errors; I’m smarter than that.” But I suspect that if you took a minute and really thought about it, you’d have to confess you are guilty of at least one. You see, every day we are rushed to finish this report faster, that statistical analysis faster, or those tabulations faster, and in our attempts to get things done, errors slip in.
Number 4 in Carly’s list really spoke to me. One of my pet peeves in marketing research is the overwhelming reliance on data tables. These reports are often hundreds of pages long and include crosstabs of every single variable in the survey crossed with every single demographic variable in the survey. Then, a t-test or chi-square is run for every cross, and every statistically significant difference is carefully flagged. Across thousands and thousands of tests, yes, a few hundred are statistically significant. That’s a lot of interesting differences to analyze. (Let’s just ignore the ridiculous error rates of this method.)
But tell me this, when was the last time you saw a report that incorporated effect sizes? When was the last time you saw a report that flagged the statistically significant differences ONLY if that difference was meaningful and large? No worries. I can tell you that answer. Never.
You see, pretty much anything can be statistically significant. With a 0.05 cutoff, about 5% of tests on pure noise come up significant by chance alone. Run your tests on a large enough sample and even trivial differences become significant. Tests of tiny percentage differences come up significant. Are any of these meaningful? Oh, who has time to apply their brains and really think about whether a difference would result in a new marketing strategy. The p-value is all too often substituted for our brains. (Tweet that quote)
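You don’t have to take the 5% figure on faith. This quick Python simulation, my own illustration rather than anything from Carly’s post, runs thousands of t-tests comparing two groups drawn from the SAME population, so there is no real difference to find, and still flags roughly 5% of them as significant.

```python
import random
import statistics

random.seed(42)

def null_t_test(n=100):
    """Compare two samples from the SAME population; any 'significant'
    result is a false positive. |t| > 1.96 approximates p < .05 here."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(t) > 1.96

tests = 2000
false_positives = sum(null_t_test() for _ in range(tests))
print(f"{false_positives} of {tests} null tests flagged significant "
      f"({false_positives / tests:.1%})")
```

Scale that up to a data-table report with thousands of crosses and you get those “few hundred” significant differences without a single real effect in the data.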
It’s time to redo those tables. Urgently.
Read an excerpt from Carly’s post here and then continue on to the full post with the link below.
Statistical Mistake 4: Not Distinguishing Between Statistical Significance and Practical Significance
It’s important to remember that using statistics, we can find a statistically significant difference that has no discernible effect in the “real world.” In other words, just because a difference exists doesn’t make the difference important. And you can waste a lot of time and money trying to “correct” a statistically significant difference that doesn’t matter.
- Are There Perils in Changing the Way We Sample our Respondents by Inna Burdein #CASRO #MRX (lovestats.wordpress.com)
- 11 signs that you don’t have a research objective #MRX (lovestats.wordpress.com)
- Do Google Surveys use Probability Sampling? #MRX #MRMW (lovestats.wordpress.com)