
What’s To “Like” About Facebook’s Experiment?

Facebook announced this week that they conducted a blind experiment on emotional triggers in 2012, testing psychological reactions to messaging on nearly 700K of its members. What do we like about the experiment, what do we think is problematic, and what does it mean for us?


 

Editor’s Note: When the news broke a few days ago about the Facebook Emotional Contagion Experiment, my initial reaction was focused more on the results: compelling proof of the virality of emotions in a social network. The implications for the social sciences, behavioral health, and yes, marketers are pretty astounding.

And then the outcry against the process emerged, and I scratched my head a bit.

My thinking was that this was covered under the ToS of Facebook, so it was OK. Caveat Emptor, yada yada yada…

And how was this different from live A/B testing, a standard and accepted practice in marketing and market research that also deals with large samples? Was it because of the “emotional manipulation” component of the study? Isn’t EVERY form of media designed to manipulate emotions for a desired effect and outcome? Billions of dollars a year are spent optimizing the emotional resonance of advertising, movies, TV, etc., money which flows into the coffers of the MR industry.

After all, as Alex Batchelor spoke about at IIeX just two weeks ago, the ultimate goal of MR is behavior change. And no, we don’t get informed consent from the targets of the results of our research, whose emotions are manipulated: it is implied because they choose to view it.

In short, I just didn’t get why all of the hullabaloo was being raised by anyone in the marketing or MR space: it struck me as short-sighted at best, hypocritical at worst.

But then I started to think about the broader implications. Things like Privacy. Corporate Social Responsibility. Ethics. Doing No Harm. In that regard the issue got very cloudy for me. My “daddyness” started to shine through: I have teen girls who are active Facebook users. Were they part of the experiment? Was one of the days when they were filled with angst and sadness that broke my heart for them (an admittedly common state for teens, at least mine) fueled by this manipulation? Not cool.

Finally, what about how the backlash here could impact MR? Would this be another straw on the camel’s back for the reactionary element who so often end up influencing legislation that is short-sighted and limits our industry?

Honestly, I am still not sure where I stand. Not so much regarding this particular experiment (I think they mishandled it on many levels, but the results are compelling), but about how our technologically connected social age may force us to rethink many sacred cows.

All of that was in my mind when I reached out to folks I trust and respect to see if they wanted to take a stab at writing a post from the MR perspective on this, and one of my all time favorite people, Melanie Courtright of Research Now, jumped in. Mel is a fantastic thinker, a wise leader in the industry, and a straight shooter. There is no one better to dive into this topic with the impact of MR in mind.

So, here is Mel’s take. I suspect this won’t be the last time this topic is addressed here, and I look forward to your comments!

 

By Melanie Courtright

Facebook announced this week that they conducted a blind experiment on emotional triggers in 2012, testing psychological reactions to messaging on nearly 700K of its members. News of the test has elicited both positive and negative reactions, from member “furor” to researcher intrigue. The pertinent questions are: what do we like about the experiment, what do we think is problematic, and what does it mean for us?

So what was the experiment?

Facebook wanted to test the theory that members going to its site and seeing negative content became more negative about their lives, while those seeing more positive content became more positive. So they created an algorithm that would automatically omit positive or negative word associations from users’ news feeds for one week. During that same time, they would score the users’ content and see whether those whose positive content was reduced became less positive in their posts, or whether those whose negative content was reduced became less negative in their posts.
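For readers who want to picture the mechanics, here is a minimal sketch, not Facebook’s actual code, of how a dictionary-based filter and post scorer of this kind might work (Tom Ewing notes in the comments below that Facebook relied on the LIWC dictionary). The word lists, function names, and scoring formula are illustrative assumptions only.

```python
# A minimal, hypothetical sketch of a LIWC-style dictionary filter and
# scorer (NOT Facebook's actual code; word lists and names are made up).

# Tiny illustrative lexicons; LIWC's real categories are far larger.
POSITIVE_WORDS = {"happy", "great", "love", "excited", "wonderful"}
NEGATIVE_WORDS = {"sad", "angry", "awful", "hate", "terrible"}

def contains_words(post, lexicon):
    """True if the post contains at least one word from the lexicon."""
    tokens = (t.strip(".,!?").lower() for t in post.split())
    return any(t in lexicon for t in tokens)

def filter_feed(feed, suppress):
    """Omit posts containing words from the suppressed lexicon,
    as the experiment's algorithm did for one week."""
    return [post for post in feed if not contains_words(post, suppress)]

def emotion_score(own_posts):
    """Share of a user's own posts with positive words minus the share
    with negative words; a crude stand-in for the study's outcome metric."""
    if not own_posts:
        return 0.0
    pos = sum(contains_words(p, POSITIVE_WORDS) for p in own_posts)
    neg = sum(contains_words(p, NEGATIVE_WORDS) for p in own_posts)
    return (pos - neg) / len(own_posts)

# Treatment: suppress positive posts from the feed, then compare the tone
# of what the user posts afterwards against a control group.
feed = ["What a wonderful day!", "Traffic was awful this morning", "Lunch at noon?"]
print(filter_feed(feed, POSITIVE_WORDS))                      # positive post removed
print(emotion_score(["Feeling sad today", "Meeting at 3"]))   # -0.5
```

As that sketch suggests, everything hinges on the quality of the lexicon and the scoring, which is exactly the methodological question raised in the comments.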

Now never mind that the results were determined based on what people chose to share as a result of any psychological change. Never mind the theories around the scoring of the words that were removed, or how they scored user-created content to determine whether people became more or less negative during that time. Never mind any other methodological concerns. Let’s even say never mind to the actual findings, though if you are interested, even those are open to interpretation. What’s really interesting is: what do we think about the test?

Should we “like” what they did, or “dislike” it? Should we applaud them or chastise them? Or both?

First, okay, I’ll say it… A psychographic test among 700,000 people whose data input was unbiased because the experiment was truly blind. Wow! That’s a huge data set, and it creates the potential for sweeping implications! Am I jealous of that data set? Maybe just a little.

But here it comes. They didn’t tell people what they were doing? And they removed positive content from some feeds to see if they felt more negative? And they removed negative content from feeds to see if they felt more positive? And they didn’t ask permission? They literally ran the risk of affecting people’s psychological state of mind without their approval?

Uh-oh. I think that might go against a few ethical principles we hold very dear.

• Get permission: This one is easy. Nope. No they did not.

• Be transparent: They said nothing, even afterwards, for years.

• Do No Harm: Were some people psychologically harmed? Slate.com is quoted as saying “Facebook intentionally made thousands upon thousands of people sad.” Facebook said the results showed only a small statistical difference in “sadness,” but they didn’t know going in that that would be the result. They did make some people more sad, and what if the results had been more dramatic? What if someone was already in a fragile emotional state? Were other people’s perceptions of a user impacted when their “positive” content was removed from friends’ feeds, leaving only the negative content?

You might say that people accept these risks in the Terms of Service. Okay. Maybe. But I have two issues with that. Reasonableness and Research. Is it reasonable to think, based on the terms, that FB would experiment with your moods? Most people I’ve spoken to would say no. And if you are going to call it research, shouldn’t it adhere to research standards? Most I’ve spoken to would say yes.

If you attended the CASRO session at IIeX on data privacy, you learned that 40% of US research participants have “very little” trust of the MR category with their personal data, and 51% have “very little” trust of social media companies. As a result, 97% say that getting their approval is a universal mandate. This is a classic example of their concern. People don’t want big brother affecting their content. They certainly don’t want to feel like human guinea pigs. And they won’t stand for feeling manipulated. So if you ask me, there’s nothing to “like” about this experiment.

I think FB owes an apology, not only to their members for violating their trust, but to the research industry as well for labeling this social experiment as Market Research.

What do you think?


5 Responses to “What’s To “Like” About Facebook’s Experiment?”

  1. Tom Ewing says:

    July 2nd, 2014 at 6:44 am

    Good piece! You’re right that this kind of shenanigans won’t reflect well on MR. I strongly recommend this piece on the experiment by danah boyd, who is probably the smartest social media commentator I know of http://www.zephoria.org/thoughts/archives/2014/07/01/facebook-experiment.html – she talks a bit about the ethics of the IRB rubber-stamp process, and situates the experiment in a wider context of a culture of emotional manipulation (which research obviously fully colludes in) but also echoes your points on how this study makes privacy and “big data” objections tangible.

    Meanwhile – a more technical question! How good were Facebook’s methods? I haven’t actually done that much sentiment/text analysis, and a little knowledge is a dangerous thing, but my encounters with methods based on LIWC – the dictionary FB used – haven’t been particularly fruitful. A lot of brute force categorisation and a pretty high error rate. 700k sample doesn’t mean a lot if the off-the-shelf dictionary you’re using is sub-optimal. So text analytics and sentiment people – how useful are its results likely to be anyway?

    (This is even before you get to the distinction between measuring emotion and measuring the public performance of emotion, of course!)

  2. Kevin Gray says:

    July 3rd, 2014 at 5:47 am

You are not alone in your sentiments, Melanie. Here, the Chronicle of Higher Education http://chronicle.com/article/In-Backlash-Over-Facebook/147447/ and MIT Technology Review http://www.technologyreview.com/news/528706/facebooks-emotional-manipulation-study-is-just-the-latest-effort-to-prod-users/ weigh in.

  3. john griffiths says:

    July 3rd, 2014 at 5:50 am

If there is a line that runs from consensual to totalitarian, then market research needs to be firmly on the consensual end, although it can never go all the way (I can’t give up observation/listening without having to ask permission first). Facebook’s experiment is right at the totalitarian end. It’s Facebook’s space. Facebook users don’t pay for it, so we have no leverage other than leaving. Humans have an appetite for the totalitarian as well as the consensual. It’s helpful to know what kind of space you’re in. And Facebook have at least told us they’re doing it. Now you know!

But it is even more important that MR is transparent and accountable to those who participate, because the large tech brands who are creating these spaces find it very difficult to be. And those busy embedding big data inside market research need to think about where it fits on the consensual-totalitarian line.

  4. Bob Lederer says:

    July 3rd, 2014 at 12:33 pm

Mel, in addition to the underhandedness of this study and manipulation, there really is no research-results benefit. The impact, positive and negative, was TINY: about 1 emotional word per every 1,000 other emotional words. The value of this research is really not open to debate.

  5. #FridayFive – Stories from Around the Web | QuestionPro Blog says:

    July 3rd, 2014 at 11:00 pm

    […] What’s To “Like” About Facebook’s Experiment? (GreenBook) […]
