What’s To “Like” About Facebook’s Experiment?
Editor’s Note: When the news broke a few days ago about the Facebook Emotional Contagion Experiment my initial reaction was focused more on the results: compelling evidence of the virality of emotions in a social network. The implications for the social sciences, behavioral health, and yes, marketers are pretty astounding.
And then the outcry against the process emerged, and I scratched my head a bit.
My thinking was that this was covered under the ToS of Facebook, so it was OK. Caveat Emptor, yada yada yada…
And how was this different from live A/B testing, a standard and accepted practice in marketing and market research that also deals with large samples? Was it because of the “emotional manipulation” component of the study? Isn’t EVERY form of media designed to manipulate emotions for a desired effect and outcome? Billions of dollars a year are spent optimizing the emotional resonance of advertising, movies, TV, etc…, money which flows into the coffers of the MR industry.
After all, as Alex Batchelor spoke about at IIeX just 2 weeks ago, the ultimate goal of MR is behavior change. And no, we don’t get informed consent from the targets of our research whose emotions are manipulated: it is implied because they choose to view it.
In short, I just didn’t get why all of the hullabaloo was being raised by anyone in the marketing or MR space: it struck me as short-sighted at best, hypocritical at worst.
But then I started to think about the broader implications. Things like Privacy. Corporate Social Responsibility. Ethics. Doing No Harm. In that regard the issue got very cloudy for me. My “daddyness” started to shine through: I have teen girls who are active Facebook users. Were they part of the experiment? Was one of the days they were filled with angst and sadness that broke my heart for them (an admittedly common state for teens, at least mine) fueled by this manipulation? Not cool.
Finally, what about how the backlash here could impact MR? Would this be another straw on the camel’s back for the reactionary element who so often end up influencing legislation that is short-sighted and limits our industry?
Honestly, I am still not sure where I stand. Not so much regarding this particular experiment (I think they mishandled it on many levels, but the results are compelling), but about how our technologically connected social age may force us to rethink many sacred cows.
All of that was in my mind when I reached out to folks I trust and respect to see if they wanted to take a stab at writing a post from the MR perspective on this, and one of my all time favorite people, Melanie Courtright of Research Now, jumped in. Mel is a fantastic thinker, a wise leader in the industry, and a straight shooter. There is no one better to dive into this topic with the impact of MR in mind.
So, here is Mel’s take. I suspect this won’t be the last time this topic is addressed here, and I look forward to your comments!
By Melanie Courtright
Facebook announced this week that it conducted a blind experiment on emotional triggers in 2012, testing psychological reactions to messaging among nearly 700,000 of its members. News of the test has elicited both positive and negative reactions, from member “furor” to researcher intrigue. The pertinent questions are: what do we like about the experiment, what do we think is problematic, and what does it mean for us?
So what was the experiment?
Facebook wanted to test the theory that members going to its site and seeing negative content became more negative about their lives, while seeing more positive content made them more positive. So it created an algorithm that would automatically omit positive or negative word associations from users’ news feeds for one week. During that same time, it would score the users’ content and see if those whose positive content was reduced became less positive in their posts, or if those whose negative content was reduced became less negative in their posts.
Now never mind that the results were determined based on what people chose to share as a result of any psychological change. Never mind the theories around the scoring of the words that were removed, or how they scored user-created content to determine whether users became more or less negative during that time. Never mind any other methodological concerns. Let’s even say never mind to the actual findings, though if you are interested, even those are open to interpretation. What’s really interesting is: what do we think about the test?
Should we “like” what they did, or “dislike” it? Should we applaud them or chastise them? Or both?
First, okay, I’ll say it… A psychographic test among 700,000 people, whose data input was unbiased because the experiment was truly blind. Wow! That’s a huge data set, and it creates the potential for sweeping implications! Am I jealous of that data set? Maybe just a little.
But here it comes. They didn’t tell people what they were doing? And they removed positive content from some feeds to see if they felt more negative? And they removed negative content from feeds to see if they felt more positive? And they didn’t ask permission? They literally ran the risk of affecting people’s psychological state of mind without their approval?
Uh-oh. I think that might go against a few ethical principles we hold very dear.
• Get permission: This one is easy. Nope. No they did not.
• Be transparent: They said nothing, even afterwards, for years.
• Do No Harm: Were some people psychologically harmed? Slate.com is quoted as saying “Facebook intentionally made thousands upon thousands of people sad.” Facebook said the results showed only a small statistical difference in “sadness,” but they didn’t know going in that that would be the result. They did make some people sadder, and what if the results had been more dramatic? What if someone was already in a fragile emotional state? Were other people’s perceptions of a user impacted when that user’s “positive” content was removed from friends’ feeds, leaving only the negative content?
You might say that people accept these risks in the Terms of Service. Okay. Maybe. But I have two issues with that. Reasonableness and Research. Is it reasonable to think, based on the terms, that FB would experiment with your moods? Most people I’ve spoken to would say no. And if you are going to call it research, shouldn’t it adhere to research standards? Most I’ve spoken to would say yes.
If you attended the CASRO session at IIeX on data privacy, you learned that 40% of US research participants have “very little” trust of the MR category with their personal data, and 51% have “very little” trust of social media companies. As a result, 97% say that getting their approval is a universal mandate. This is a classic example of their concern. People don’t want big brother affecting their content. They certainly don’t want to feel like human guinea pigs. And they won’t stand for feeling manipulated. So if you ask me, there’s nothing to “like” about this experiment.
I think FB owes an apology, not only to their members for violating their trust, but to the research industry as well for labeling this social experiment as Market Research.
What do you think?