Ask Annielytics!

Got a research or survey question? I’ll try to answer any question you may have. Just leave your chatter below and I’ll chatter in return.

18 responses

  1. Hi Annie,

    Firstly, I just wanted to say that I love your blog, so thank you. It has helped me a lot, as I am interested in a career in research. I also wanted to ask you a question about social media research.

    I have recently been asked to conduct social media listening research despite having little experience in market research and none in social media analysis. However, I am struggling to articulate to my boss why this cannot be done on the fly. I get the sense that social media research is seen as nothing more than typing simple keyword searches into a social monitoring tool. What would your advice be?

    1. Here is some very quick advice. Get your boss into a discussion about driving cars and then, out of the blue, ask what “BP” means. Then explain that it means more than British Petroleum – blood pressure, basis points, Brad Pitt, Boston Pizza. Then ask whether the eff word is positive or negative – and follow up with “effing awesome” and “effing horrible.” Then ask them to use the word “new” in a sentence without meaning something that just launched or debuted – and follow up with “I new that” and “New York.”

      Typing words into a search box returns EXACTLY what you asked for, not exactly what you meant. Computers are not intuitive and can’t read your mind. Surprise!
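
      To make that concrete, here is a toy sketch in plain Python (the posts are entirely made up; no real monitoring tool works this crudely, but the failure mode is the same):

      posts = [
          "BP reported quarterly earnings today",     # British Petroleum
          "My BP was 140/90 at the checkup",          # blood pressure
          "Rates rose 25 bp after the announcement",  # basis points
          "BP was great in that heist movie",         # Brad Pitt
      ]

      # Naive keyword search: returns EXACTLY what you asked for
      matches = [p for p in posts if "bp" in p.lower()]
      print(len(matches), "matches")  # 4 hits, only one about the oil company

      Every record matches, yet only one is about the oil company – a human, or a much smarter model, still has to do the disambiguation.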

  2. Hi Annie,

    I learned about your work from a search through who @fivedirections is following on Twitter.

    There is a question directly related to research and statistics that has been on my mind.

    As part of building The Interfaith Peacebuilding and Community Revitalization (IPCR) Initiative (www.ipcri.net), I have made an effort to search for “indicator” statistics, which illustrate positive or negative trends impacting this field of activity. The first document I created as a compilation of these statistics (and observations) was “An Assessment of the Most Difficult Challenges of Our Times” (2007) (accessible from the IPCR homepage, all documents free). The most recent is “Recalibrating Our Moral Compasses”. The statistics and observations in this “Recalibrating…” document seem very compelling to me; so much so that in “IPCR Outreach 2011” efforts (see bottom of IPCR homepage), I am straightforward about saying that I see “many danger signs flashing now”.

    I concede that the 3 main features of “IPCR Outreach 2011” [“Recalibrating Our Moral Compasses”, “A Four Page Summary of The IPCR Initiative”, and The IPCR Journal/Newsletter (Winter 2010-2011 issue)] are not “polished” presentations; however, I do believe there is some significant content there.

    Maybe there is a way to present the material in “Recalibrating Our Moral Compasses” which is more accessible. Also, there may be other “constellations” of statistics which are more representative of the challenges ahead. These are my questions. They may seem like big questions to respond to in an informal way; however, I think the times are such that it is important for me to try asking when I believe there is a sincere person with relevant experience to ask. If there is anything you’d be willing to share in response, I would be grateful for your assistance.

    Kind Regards,

    Stefan Pasti, Founder and Outreach Coordinator
    The IPCR Initiative

    1. This sounds like a gigantic job if you want it done well. Perhaps there are a couple of graduate theses and/or dissertations in there. Are there any students reading this comment who want to take on this big job?

  3. Hi Annie,

    I recently launched a site that I think you (and your readers) would get a kick out of.

    It’s called Correlated (http://www.correlated.org), and it seeks to find unexpected correlations between seemingly unrelated things.

    Visitors to the site are asked to respond to a daily survey question, and at the end of the day, the answers are compared to the answers of all previous survey questions. The results are then sorted by how strongly correlated the various surveys are.
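
    For the statistically curious, the pairwise comparison might look something like this rough sketch (invented yes/no answers, and certainly not Correlated’s actual code; Pearson’s r on two binary vectors is the phi coefficient):

    import numpy as np

    # 1 = yes, 0 = no; rows are respondents who answered both questions
    q_today = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])
    q_earlier = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 1])

    # Pearson's r on binary data is the phi coefficient
    r = np.corrcoef(q_today, q_earlier)[0, 1]
    print(f"correlation: {r:.2f}")

    Repeating that for every past question and sorting by the absolute value of r is, in spirit, what the daily ranking does.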

    I hope you’ll check it out — and I especially hope you’ll find it interesting enough to share with your readers.

    Shaun Gallagher

  4. Hi Annie, congratulations on becoming Chief Research Officer at Conversition Strategies. I learned of it this week. Hope that you are doing well. It must be very cold there. By the way, I wrote about online ethnography (OE), or netnography, on my blog this week. Unfortunately, it is traditional ethnography (TE) that became popular for research this year in Japan. The Japan Marketing Research Association (JMRA) plans to hold a training session on it in March. Thus, we are much behind the US and Canada on this. There are few or no OE projects here. Someone asked me about the cost of OE in the US and Canada. I understand that OE has a cost advantage over TE. How much can we reduce costs by using OE instead of TE? Thank you for your time.

    1. Thanks for the congrats!

      I think OE is very similar to TE. The big difference is that we can now use automated procedures. Computer programs are able to handle a lot of the sentiment analysis and content analysis. This is how we can save costs. The automated methods mean that we can handle much more data than we used to. Instead of hundreds or thousands of records, OE processes can handle millions of records. I’m not sure you will save lots of money, because you must still create all of the data quality processes and you must still have qualified researchers using the data. But you will save lots of time.
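
      To give a flavour of what “automated” means here, this is a toy lexicon-based sentiment scorer (the word lists are invented for illustration; production tools are far more sophisticated):

      POSITIVE = {"love", "great", "awesome", "recommend"}
      NEGATIVE = {"hate", "awful", "broken", "horrible"}

      def score(text):
          words = text.lower().split()
          return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

      records = ["I love this brand", "customer service was awful"]
      for r in records:
          print(score(r), r)  # >0 positive, <0 negative, 0 neutral/unknown

      Crude as it is, something in this spirit can be run over millions of records in minutes, which is where the time savings come from.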

    2. Thank you very much for your kind reply. I am very impressed with your new company concept. Thus, I sent a message to Conversition yesterday. I hope that your team considers it positively. Thanks again for your time. –Shiggy

    3. You’re very welcome and thank you for the kind words!

  5. Hi Annie, thank you very much for your reply. ‘Survey research without surveys’ – it is an interesting definition, as you mentioned it is ‘free from the bias’ caused by surveys. By the way, ‘social media marketing research’ is a bit long as a term. I hope that I can learn from you. Thank you for your help. –Shiggy

  6. Hi Annie, hope that you are doing well. Thank you for reading my comments on your great blog.

    You use ‘social media research (SMR).’ I am afraid that someone might confuse it with ‘research for social media.’ Nobody uses the term in Japan so far. How about in the US? Is it a popular term? I love the term ‘SMR,’ which suggests new possibilities for marketing research to me. I would like to spread it in Japan. My questions are: 1) How would you define SMR? One example would be: a research method for finding consumer insights by using social media. 2) Would you please suggest books or materials related to SMR? I plan to write a book on SMR in the near future. Thank you for your help. –Shiggy

    1. I think it is so brand new that people haven’t decided what the word is going to be. I look at social media research as simply ‘survey research without surveys.’ Of course, it is not that simple, but it’s easier for people to understand. SMR is also so new that there really aren’t any books on it. There are a few out there that talk about the notion of SMR or the theory of SMR, but I haven’t seen one yet that was written by someone who actually does it and thoroughly understands all the intricate issues.

  7. Annie

    You on twitter > “Real people hate long surveys. And we keep doing it! RT @SueFontaine Just been stopped for the longest market research survey ever. Rubbish.”

    Me on twitter > “@LoveStats re your long surveys tweet. We reckoned that 8 minutes is all you need. Contentious natch but helps to focus on important stuff.”

    A bit more on this, as requested.

    The 8 minutes came from an attempt to focus the minds of our internal clients on what was really important. The danger with most studies was/is that of ‘scope creep,’ i.e., the tendency for clients to shoehorn additional and increasingly peripheral questions into a questionnaire/discussion guide.

    And trying to nail down the objectives for a Research Brief was always a nightmare, so we would get our retaliation in first and tell them up front at the briefing stage that they only had three ‘free questions’ to play with. So, what did they REALLY want to know? What would make a difference to their decision making?

    Now they began to focus on their objectives.

    The 8 minutes came from an illustration we used – “If you had the chance to walk from Nelson’s Column to Big Ben with a customer by your side, do you think that’s enough time to find out what you really need?” That’s an 8-minute walk.

    OK, so the reality check here is that my clients’ three free questions don’t preclude some profiling questions before and after the core set. Nor does it rule out the right and sensible addition of a 4th or 5th question should they be absolutely necessary in light of Q1, 2 and 3!

    And in the end we’d feel good about something that was about 15 mins long. But hopefully a good 15 mins because we’d started with real focus.

    In the end my internal clients appreciated the challenge and the rigour because it meant their customers were treated with respect, time wasn’t wasted and ultimately they got good value for money.

    Remember I was on the client side so these are conversations I could have without fear of ‘losing a deal’. It’s much harder for vendors to challenge like this.

  8. Thank you so much for responding and extending the conversation. Totally agree about the all-or-none arbitrariness of thresholds for statistical significance and the problem of ignoring Type I error likelihoods. Let me now discuss my “small sample sizes are invalid” problem in more detail.

    I have conducted small-N surveys and found statistically significant differences between different respondent groups, as well as between responses to different questions in the same survey, only to have someone declare that these results are “invalid” because the sample size was “too small” (i.e., the issue was not how it was sampled, but sample SIZE). If I find a large, meaningful, statistically significant difference within a small sample, I am more excited about the “truthiness” and usefulness of this difference than if I find a statistically significant difference within a very large sample – all other things being equal. In other words, if an “effect” is big enough/real enough to exhibit itself in a small sample, then the consideration given to acting upon it (e.g., using it in the design of a marketing/communications program) should go up – not drop to zero because of some mistaken conventional wisdom about “minimum sample sizes.”

    I understand that smaller sample sizes make Type II errors more likely, and that this is a very legitimate concern in drawing “no difference” conclusions from small sample sizes. What I am focusing on, however, is the legitimacy of concluding real differences/effects from small sample sizes that yield statistical significance (all other things being equal and done right). Does my argument make any sense to you, or have I lost my mind from hanging out with MBA “trained” marketing “researchers” for the last 20 years?

    1. It does make sense. The problem is that with small samples, you are far more likely to have selected an unrepresentative group – hence the skepticism. However! If I am reasonably satisfied that my sampling technique wasn’t completely ridiculous (i.e., twenty women waiting in line at the free Botox clinic) and I see a fairly large difference, I am absolutely going to pay attention even if the significance is p=0.15. I’d probably do a second and third test of my hypothesis (for reliability, or as a meta-analysis) and see if I get the same result again. Even if I got 2 out of 3, I would still be reluctant to disregard everything. I prefer to use my brains to interpret statistics rather than pledge blind allegiance to the queen. Statistics guide and inspire me.
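
      As a sketch of that test-it-again approach (fabricated scores, my illustration only; SciPy’s standard independent-samples t-test):

      from scipy import stats

      # three small independent replications of group A vs. group B
      replications = [
          ([7, 8, 9, 6, 8], [5, 6, 5, 7, 4]),
          ([8, 7, 9, 8, 6], [6, 5, 6, 4, 6]),
          ([7, 9, 8, 8, 7], [6, 6, 5, 7, 5]),
      ]

      for a, b in replications:
          t, p = stats.ttest_ind(a, b)
          print(f"t = {t:.2f}, p = {p:.3f}")

      A consistent direction across the replications tells you more than any single p-value does.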

  9. Okay, I’d like to have your opinion on this. Almost everyone I’ve ever run into/worked with/presented to in advertising/marketing has got this idea that there is a minimum acceptable sample size. You’ll actually hear people say stuff like “If the sample size is 200 or less, it’s invalid.” My training in statistics says “bullshit.” As long as you can draw a useful conclusion within the range of whatever the confidence limits are for a sample of size x, you’ve got a “valid” conclusion. I think I remember seeing a Psych Bulletin article once about research studies with N=1 (i.e., exceptions that disconfirm a universal assertion). Looking forward to your response (and hopefully to being able to cite it for the next “can’t have small sample sizes” bozo). -Allen Bukoff

    1. I can argue this one both ways. I’m always very disappointed when someone does a study that achieves p=.06 and concludes there are no significant effects. Are you kidding me? What makes .05 any better than .06? The fact that the sample size had ten more people? To me, this reflects an unintelligent use of statistics. In the same vein, let’s say a study is done with 100 people, and group A gets a score of 10% while group B gets a score of 20%. (I don’t have my significance cheat card with me, so) let’s assume that also is not significant. Again, are you kidding me? You’re going to ignore a 10-point, twice-as-large effect simply because someone told you to?
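
      For what it’s worth, here is a quick back-of-the-envelope check of that example using a standard pooled two-proportion z-test (the 50/50 split of the 100 people is my assumption):

      from math import sqrt
      from scipy.stats import norm

      n_a = n_b = 50             # assume an even split of the 100 respondents
      p_a, p_b = 0.10, 0.20      # observed proportions in each group

      p_pool = (p_a * n_a + p_b * n_b) / (n_a + n_b)
      se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
      z = (p_b - p_a) / se
      print(f"z = {z:.2f}, p = {2 * norm.sf(abs(z)):.3f}")

      Under those assumptions the doubled effect comes out around p = 0.16 – “not significant” by convention, which is exactly the absurdity being described.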

      On the other hand, I’ve seen a lot of research knowingly done with insufficient sample sizes. A budget is decided upon, and that budget dictates the sample size. Even when researchers know ahead of time that they are going to split the data into subgroups upon subgroups and perform unlimited numbers of post-hocs, they still stick with n=200. So, let’s see: a sample size of 200, and then 200 t-tests are done. And they completely forget one of the rules of statistics, which is that if you’re choosing the 5% error rate, 5% of all your findings are going to be spurious. Sigh.
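
      A small simulation makes that point vivid (pure chance data, made up for illustration; both groups are drawn from the same population, so every “significant” hit is spurious):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      hits = 0
      for _ in range(200):          # 200 t-tests, no real effect anywhere
          a = rng.normal(0, 1, 100)
          b = rng.normal(0, 1, 100)
          if stats.ttest_ind(a, b).pvalue < 0.05:
              hits += 1
      print(hits, "spurious 'findings' out of 200")  # expect about 10 (5%)

      Roughly ten of the 200 tests will come out “significant” even though no real effect exists anywhere.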

      I think many people take statistics courses because they are forced to, and once they pass the course, they stop trying to understand. I think statistics are there to guide you, to open your eyes where you might not have otherwise, to make you think twice when an unexpected result is obtained. And when that result becomes apparent, it’s up to YOU to personally decide whether the difference is meaningful, whether it is worth taking seriously, worth examining in a different light.

      I think this point of view is in the minority but I stand by it!

  10. hi Annie!

    Love the site! Are you OK with me passing your name to a colleague — she may want some of your special brand of advice/consulting …

    Let me know if you’re ever in the neighbourhood of Yonge/Bloor, would love to hear your latest adventures.

    Joe

Agree? Disagree? Share your thoughts!
