
Developing skill sets across a global organization: Corrine Moy #IJMR2012 #MRX


Welcome to this series of live blogs from the IJMR Research Methods Forum in London. Any errors, omissions, or silly side comments are my own.

Skills, competencies and working practices – Filling the gaps: How to develop consistent skill sets across a global organisation
Corrine Moy, Global Director of Marketing Sciences, GfK NOP

  • Clients want suppliers to meet objectives and deliver profit
  • We train researchers on methodology and then we send them on short sessions to learn the commercial side
  • The dilemma: researchers are methodologists, not commercial people; they are not used to working in complex matrix structures and they work in small practice areas
  • How to structure a training program
    • Face to face by experts
    • local culture and language must be taken into account
    • eLearning is for support, backup, and process training; it takes far longer and costs much more to do it that way
  • Africa is an emerging force in market research, we need people on the ground here. easy to underestimate the size of Africa: US + China + India + Europe fit into it.
  • Developed an MSc in MR in four African countries, collaborated with National Statistics Offices, and collaborated with other training institutes to offer interviewer and DP training including SPSS [how biased is my life view? I rarely think of Africa as having educated, working people. I still have the poor starving child in my head.]
  • Training the trainer issues
    • Things will not be how you think they will be. The teachers didn’t really know how to teach so that was part of the training.
    • When asked whether they understand statistics, people feel they must say yes even if they have never studied statistics.
    • Interviewers are poorly paid so cheating is rife and costs of checking are high. Accredited systems are hard to support due to costs of joining and maintaining.

Will big data eliminate the next national census? Keith Dugmore #IJMR2012 #MRX



Big data – Can the use of ‘big data’ eliminate the need for yet another traditional Census in 2021?
Keith Dugmore, Director, Demographic Decisions

  • About a third of people in the room answered the last census online [I’ve only answered on paper]
  • 94% completion rate even though it’s compulsory to answer the census
  • All the data is free [are you listening Canada and US, free!]
  • Census hasn’t really changed since 1961 – household forms on paper, some postal innovations in 2001, online options in 2011
  • Increasing costs and difficulties of traditional census means they need to consider alternatives
  • Options
    • full census to everyone
    • Rolling census over 5 or 10 year period
    • Short form to everyone and long form to a sample
    • and others
  • Data sources – patient data, election data, school data, DVLA, maybe even customer databases, loyalty card data
  • Each one might miss migrant worker dependants, international students, asylum seekers, expatriates, the deceased, newborn babies, some duplicates, home-schooled children. But across everything, nearly everyone should be covered.
  • Problems – updating may not be great, not representative samples, biases by region
  • Electricity has 100% coverage, gas 80%
  • Telecoms – only have address for 50%
  • Sharing of this data may be prevented due to reputational risk or the Data Protection Act
  • The quality of the census is great for a day and then gets worse and worse after that. An alternative method might have greater, longer lasting accuracy. It could be done annually or quarterly. We might even get new data like income. [um… now you’re starting to push it. that’s getting really big brother.]
  • We want an alternative method but is there one out there? A recommendation will be made by September 2014. [I think a recommendation on that data will be out of date in September 2013]
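The merging idea in the bullets above — several biased partial registers that together cover nearly everyone — can be sketched in a few lines. Everything here (the register names and the name/date-of-birth matching key) is a hypothetical illustration, not the actual ONS data model:

```python
# Three partial administrative registers, each covering a different
# (biased) subset of the population. All records are invented.
patient_register = [("ann", "1970-01-01"), ("bob", "1985-06-12")]
school_register = [("bob", "1985-06-12"), ("cal", "2004-03-30")]
dvla_register = [("ann", "1970-01-01"), ("dee", "1990-11-02")]

def combine(*registers):
    """Union the registers, deduplicating on the (name, dob) key."""
    population = set()
    for register in registers:
        population.update(register)
    return population

population = combine(patient_register, school_register, dvla_register)
print(len(population))  # 4 distinct people across three partial registers
```

In practice the hard part is exactly what the talk flags: real record linkage has no clean shared key, so probabilistic matching is needed, and the residual duplicates and gaps drive the quality concerns listed above.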

Big Data in the Music Industry: Richard Bowman #IJMR2012 #MRX



11.15 Big data – Making a difference with consumer insight-generated ‘big data’ in the music industry
Richard Bowman, Artist Development Insight Director, EMI Music

  • The old world – Hits paid for misses
  • Cassettes are dead, LPs are just about dead, CDs took over the world
  • People aren’t paying for music the way they once did
  • In music, it never works like this: consumer research to insight to decisions
  • They combine insight with skills and expertise of their people.  These are evidence based decisions that are owned by everyone. [THAT is how it should be done. Researchers are not specialists in your business.]
  • Insight is not a powerpoint presentation. It’s consulting with the internal experts.
    • Want relevant, valuable, and quick insights for all of their colleagues
    • It needs to be powerful, broad, global, and repeatable [there’s that annoying reliability thing again]
    • Must add to shared language and guide decision making
  • Can market research produce big data? They have spoken to 1 million people in 3 years. That comes from 100-person surveys and 6-person focus groups. At any point, they are interviewing 12 people somewhere in the world.
  • Inferior research is not an option [Question: are British researchers more concerned with validity and reliability or is it just this single conference? I liiiiiiike it!]
  • The key skill is the people, not the technology
  • They have ONE database per function containing ALL global data they’ve ever worked with [that’s exactly how it should be!]
  • New analysis technique: contest where anyone could analyze a set of data that they provided – e.g., July 2012 hackathon

The power of wisdom: Martin Boon #IJMR2012 #MRX



10.10 New methods – The power of wisdom: The crowd’s ability to predict things
Martin Boon, Director, ICM

  • We have a duty to challenge boundaries and come up with new ideas. Never apologize for coming up with new ideas, regardless of whether they work.
  • Under the right circumstances, groups can be smarter than the smartest people. The average guess of people at a fair guessing the weight of an ox was exactly correct.
  • For a crowd to be right
    • There must be diverse opinion where each person has some information even if it’s a strange interpretation of the information
    • Views are independent, not determined by the views of anyone around them
    • People can use their own specialized knowledge and experience
    • Some method of aggregation, for turning private judgements into a collective number
  • How has UK polling performed since 1992? 1992 wrong, 1997 wrong, 2001 right, 2005 right, 2010 ok.
  • ICM added two questions to their prediction poll – “For a bit of fun, tell me what % share of the vote you think …party…. will win in the forthcoming general election?”
  • Wisdom prompted predictions were the most accurate results. Wisdom outperformed conventional polling. Every poll overstated the Liberal Democrat share but the crowd wisdom got it exactly right.
  • Polls find it hard to predict parties that people know are unpopular to vote for – embarrassment to say you are voting for that party.
  • Wisdom approach produces a consistent share of vote compared to actual.
  • But in a Wales referendum and an AV referendum, the method didn’t work as well, with 9% error. Was the diversity-of-opinion condition present, and were respondents suitably informed to take a smart view? Most people said they didn’t have enough information to vote well. Ensure respondents know the key features of products and services if you’re going to try this.
  • Wisdom predicted 18 gold medals at the Olympic Games, but Team GB won 29. You just can’t predict how well athletes are going to perform.
  • Our job as researchers is to test things to destruction [i like that!]
  • Wisdom could be good for market sizing and product supply estimates giving marketers an advantage ahead of product releases
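The aggregation condition in Boon’s list can be sketched as follows. The guesses are invented, and the simple mean (with the median as a robust alternative) is the textbook aggregation rule, not necessarily the exact one ICM used:

```python
# "Wisdom of crowds" aggregation: turn many private guesses into one
# collective number. Guesses are invented for illustration.
from statistics import mean, median

guesses = [35, 41, 28, 44, 39, 31, 46, 38, 40, 33]  # e.g. "% vote share"

crowd_mean = mean(guesses)
crowd_median = median(guesses)  # less sensitive to a few wild guesses
print(crowd_mean, crowd_median)  # 37.5 38.5
```

The choice of aggregation rule matters: a mean can be dragged around by a handful of extreme guesses, which is why the median is often preferred when the diversity condition produces some strange interpretations of the information.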

How do we know which research to trust? Rachel Kennedy #IJMR2012 #MRX



09.25 Keynote address – How do we know which research to trust?
Rachel Kennedy, Associate Professor, Ehrenberg-Bass Institute, University of South Australia

[Great presentation!]

  • Would you trust EEG or eye tracking? The neuroscience technologies look cool. It’s easy to be bamboozled. How do you know what’s right for your research?
  • Galvanic skin measurements have been around since 1888 but we still don’t completely know what they mean or what to do with them
  • “This is old, therefore it’s good.” “This is new, therefore it is better.” – William Ralph Inge
  • #1 We need to get repeatable results among different researchers
  • #2 It must predict in-market behaviours of interest – not interest, not liking, but sales
  • #3 Providers must be transparent – what we measure, how it is being analyzed, and what we expect to see in given conditions
  • #4 It should be actionable
  • We’d like to know that the research in journals is the trustworthy research; we prefer peer-reviewed journals. BUT we can’t always trust it.
  • 10% of psychologists report they have falsified data – selectively reporting only studies that worked, not reporting all independent measures, excluding data post hoc
  • Most results from the TOP journals can’t be reproduced
  • Science has a lovely way of correcting itself over time. You can’t accept a single validation study. You need lots of validation.
  • Biometrics are exciting because they can give you an objective measure of truth – that’s the gut feel. But really?
  • Case study 1: They put a salmon into an fMRI scanner. It responded to the stimuli in the scanner. But the fish was dead. So what, then, when humans respond to ads? Are the researchers’ data being corrected for false positives?
  • Case study 2:  Study took the same data and put it through two different software systems. They got two different results. [Yikes, I hope SPSS and SAS don’t do this. 🙂 ]
  • Case study 3: What test-retest reliability would you want to see? [95% please] With fMRI, only one third to one half of the same neural activity recurs. [sooooo, chance]
  • Add one complex brain (experience, IQ) to one complex marketing stimulus (sound, colour, movement) – how can you get any reliability?
  • Hemispheric asymmetry – some favour the left side, some favour the right. Do you balance your research respondents on this? Should you? Does it matter? Asymmetry depends on time of day and time of year. How do you balance on that? We just don’t know which factors matter.
  • How do you analyze evoked potential? Patterns of response over the time of an event (an ad).
  • A skin conductance vendor claimed 98% rank-order correlations [as soon as you said rank-ordered, I could tell the researcher was hiding something.]. That study can’t be found now; the company doesn’t exist anymore.
  • In one study, at best, only biometrics OR traditional measure can be correct because they lead to different conclusions. They are measuring very different things.
  • Traditional measures such as pleasure, likeability don’t correlate highly with sales.
  • Don’t take the negativity as ‘we should not do neuroscience’, but as ‘know what you’re getting into’.
  • What about virtual shopping to understand real life? Market share results are different.
  • People are less likely to buy store brands and more likely to buy premium products in virtual testing. The penalties in fake life aren’t the same. Can we calibrate these differences? People know how to “play” shop.
  • Purchase rates and dollar estimates are inflated. Most people (the mode) actually buy only one item, but in a virtual environment most people buy three to six items. Maybe they are showing you the average or maybe it’s just wrong. We are missing standard research controls. [yeah baby!]
  • It’s really easy to get things completely wrong without rigorous controls.
  • What about a vending machine mini-shop study? How long at the machine, what they look at, how long they look at it. [never thought of that!]
  • Big data doesn’t necessarily mean NOT biased. e.g., a Facebook fan base can be huge, like Skittles’. But that’s a biased base. You need to understand the bias. Facebook skews heavily to brand fans, the people who already like you and don’t need to be convinced. Most people are actually non-buyers – the complete opposite.
  • You must have good research design and the right researchers for the job.
  • Ask suppliers about their validation techniques, what they predict, their knowledge of marketing, their knowledge of market research (e.g., experimental control)
  • What is trustworthy? Well grounded measures, validated technologies, passive/unobtrusive measurement, natural environment [hmmm…. do I hear social media listening research…]
  • [Thank you Rachel for reminding people that we need GOOD research with experimental design not just cool data and neato conclusions]
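The blogger’s suspicion about the 98% rank-order correlation claim above can be made concrete. Rank (Spearman) correlation only checks that two measures order the items the same way, not that they agree on magnitude, so a perfect rank correlation can hide large absolute disagreement. The numbers and the simple rank routine below are invented for illustration (no ties handled):

```python
# Pearson correlation from first principles, then Spearman as Pearson
# applied to ranks. Data are hypothetical ad scores with no ties.
from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def spearman(x, y):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    return pearson(ranks(x), ranks(y))

measure_a = [1, 2, 3, 4, 5]        # e.g. traditional ad scores
measure_b = [10, 11, 12, 13, 100]  # same ordering, wildly different scale
print(spearman(measure_a, measure_b))  # 1.0 – perfect rank agreement
```

So a vendor quoting only rank-order correlation is making a much weaker claim than one validating absolute agreement with sales, which is exactly the kind of transparency criterion (#3 above) worth probing.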