Is Facebook the only emotional manipulator? #MRX


If you haven’t heard of the Facebook ‘scandal’ by now then I’m jealous of the holiday you’ve just taken on a beautiful tropical island with no internet access. The gist of the scandal is that the feeds of 689,003 people were curated differently from everyone else’s to gauge the subsequent effect on their emotions.

While most people’s feeds are curated based on which friends they like, share, and comment on most often, the feeds of these people were additionally curated by considering the positive and negative words their friends’ posts included. In both conditions, Facebook chose which of your friends’ posts you would see, though in the test condition you might be offered a greater proportion of their positive or negative posts. The conclusion was that you can indeed affect people’s emotions based on what they read. You can read the published study here.

I honestly don’t know where I stand on the ethics of this study right now. Ethics interest me but I’m not an ethicist. So instead, let me think about this from a scientific point of view.

Do you deliberately manipulate emotions in the work you do? As a marketing researcher, your job is ONLY to manipulate emotions. You know very well that this brand of cola or that brand of chips or the other brand of clothing cannot boast better taste, feel, look, or workmanship. All of those features are in the eye, or taste buds, of the beholder. Through careful research, we seek to learn what makes different kinds of people happy about certain products so that marketers can tout the benefits of their products. But, at the same time, we also seek to learn what disappoints and makes people unhappy about the products and services they use such that those weaknesses can be exploited by marketers.

Through a strange twist of fate, a colleague and I recently conducted a tiny study. We found the results quite interesting and wrote a quick blog post about it. Then the Facebook news broke. Like Facebook, though on a much smaller scale, I will confess that I manipulated the emotions of about 300 people.

I had previously noticed in a number of studies that age breaks are inconsistent. Sometimes researchers create an 18 to 34 age break, and other times they create an 18 to 35 age break. In other words, sometimes you’re the youngest person in a group, and sometimes you’re the oldest person in a group. Would you rather be the oldest person in a young group, or the youngest person in an old group? What did we find? Well, people did indeed express greater happiness when they were part of the younger group, even though they were the oldest people in that group. I deliberately and knowingly manipulated happiness. Just like Facebook did. Do you hate me now? Do you think I’m unethical? You can read the post here.

As marketing researchers, every bit of research we do, every interaction we have with people, is intended to manipulate emotions. We collect data that marketers use to criticise our favourite products. We collect data so that politicians can directly criticise other politicians through their negative ad campaigns. Has that bothered you yet? Has that bothered you enough to warrant outcries in social media? Have you campaigned for an immediate ban of television, radio, and viewing products on the shelves at supermarkets knowing that those things are intended to manipulate our emotions?

Since you know that your research is intended to affect emotions, do you inform your research participants about the potential negative consequences of participating in your research? Do you tell them that seeing their age in the older age bracket may make them unhappy, that viewing critical ads may make them unhappy, that being asked to select up to five negative attributes might make them unhappy?

Given that we’ve done it this way for so long, have we become complacent about the ethics of the research we conduct? In this age of big data, is it time to take a fresh look at the ethics of marketing research?

[Originally published on Research Live]

 
