(Updated) A Client-side Technologist’s Perspective On The MR Data Privacy Issue
(Updated) Editor’s Note: As I suspected, numerous folks have been posting their thoughts about the data privacy debate that occurred yesterday. A few to take a look at are:
Reg Baker: The maybe not-so-great privacy debate
Robert Bain: Time to rewrite the rules of research?
Tamara Barber: The Great Data Privacy Debate: A Summary
Tony Jarvis for MediaPost: Updated (Again!): The Data Privacy Debate
Finn Raben of ESOMAR: Reflections on the Privacy Debate
Tom Anderson: The Spectrum of Social Media Expertise
You can access the recording of the debate itself right here: http://www.greenbookblog.org/wp-content/uploads/2011/08/Privacy_Debate.mp3
Today’s guest post jumped out at me since Jason Anderson brings the unique perspective of a client-side researcher who works within the technology sector, specifically gaming. Gaming is a unique segment that is inherently focused on multi-channel data analysis, with much of that data coming through non-traditional research sources. As gamification (and game play platforms) becomes more and more integrated into technology, marketing, social media, and research, this model may well become the new standard we use to engage and communicate with each other.
This is an important perspective for the industry to hear, and I think you’ll enjoy it very much.
By Jason Anderson
Several marketing research trade groups have been exploring online research privacy issues, in the hopes of establishing common standards and a code of conduct to govern how “responsible” researchers should treat consumer data. An earlier debate on privacy practices in the social media age can be summed up (in my opinion) in two words: losing battle. This is particularly true as it relates to games research — console, handheld, casual, or any other.
It’s not difficult to understand the thinking that led the Marketing Research Society to draft its discussion paper on the risks and obligations of online data collection. There are many more laws in the US and EU related to the handling of “customer data” or “private data” than there were even 10 years ago. Extensive data mining and efforts to analyze the text streams found on Twitter and Facebook led to concerns about the legitimate ownership and usage rights related to that data. What is the legal distinction between a focus group transcript and a series of posts on Twitter? Or between survey data and the weight of data scrubbed from WordPress blogs?
The difference is ownership. Survey respondents intentionally complete a survey, as the result of some effort to market that survey. Focus group participants show up, either online or in person, and contribute their time. They own the rights to their opinions, but license them to the researcher for an explicit benefit. Tweeters and social media users engage with those platforms for reasons completely unrelated to the research process. They still own their opinions, but have already licensed them to Facebook or Twitter. Those companies, in turn, redistribute that content.
A research participant still directly owns their “content.” As such, it seems fair to expect some reasonable code of conduct be applied to the process so both parties (researcher and subject) have fair expectations. A social media participant does not “know” they are being studied, but they expect that their comments are being heard by people they do not know. Tweets and status updates are today’s Letters to the Editor, where the Editor-in-Chief is the Internet. We expect our public statements to be public. There are dozens of news stories about the consequences of publishing private content to Twitter or Facebook, as a result of a fundamental misunderstanding of how social media works.
If Twitter and Facebook were instituting policies in their Terms of Service to discourage usage of this data, I would be concerned. Instead, they publish APIs. Twitter has a robust devkit including advertising analytics; Facebook has single-handedly enabled Zynga to blossom with its own API suite.
Any future risk to social media analytics and any rules of conduct with regards to that data will not come from the courts, or from trade organizations; it will come from the social media network providers. And there is no incentive for these companies to restrict their developer communities. So for “marketing research” shops out there who are concerned about how to ethically treat social media data, just remember one rule: it’s not private data.
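Because the data in question is public by design, a basic social media listening pass amounts to little more than keyword tallies over a feed of posts. Here is a minimal sketch of that idea; the sample payload, field names, and keywords are all illustrative and do not reflect any real platform’s API response:

```python
import json
from collections import Counter

# Hypothetical sample of public posts, shaped loosely like a
# social-API JSON response. Field names are illustrative only.
sample_response = json.dumps({
    "results": [
        {"user": "player1", "text": "Loving the new co-op mode"},
        {"user": "player2", "text": "The co-op servers keep dropping"},
        {"user": "player3", "text": "Matchmaking feels faster this week"},
    ]
})

def keyword_counts(raw_json, keywords):
    """Tally how often each keyword appears across public posts."""
    posts = json.loads(raw_json)["results"]
    counts = Counter()
    for post in posts:
        text = post["text"].lower()
        for kw in keywords:
            if kw in text:
                counts[kw] += 1
    return counts

mentions = keyword_counts(sample_response, ["co-op", "matchmaking"])
```

In practice the `results` list would come from a platform’s published API rather than a hard-coded string, but the analytical step — counting public statements — is the same either way.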
This whole privacy debate is even less relevant to the games industry; we all have our own customer databases and behavioral data about our products:
- Sell content on Steam, Xbox Live, or PSN? Your sales data is more robust than it ever was under the NPD/GfK regime, plus your platform partners have rich data about who plays your games. If you designed your achievement systems well, you have a great analytics bench for understanding why they play.
- Still making traditional boxed product? That’s OK — you’re probably making downloadable content for it, through the same platforms mentioned above.
- Operate an MMO? You already know that you have the holy grail of customer data.
- Are you Zynga? Or a Zynga wannabe? You already do most of your research through social media platforms and probably find this whole topic quaint.
In fact, gamers expect that we know more about them (through their data) than we actually do. As a gamer, I always assume that everything I do in an online game is being recorded and used by the developers to learn more about how I game. And I want them to do this, because it makes it easier for game designers to create new games that I care about.
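The kind of always-on recording described above is, at its core, just event counting keyed by player and action. A minimal sketch of such a telemetry recorder follows; the class, player IDs, and event names are hypothetical, not any platform’s actual analytics API:

```python
from collections import defaultdict

class Telemetry:
    """Hypothetical in-game event recorder; names are illustrative."""
    def __init__(self):
        # Maps (player_id, event) -> occurrence count.
        self.events = defaultdict(int)

    def record(self, player_id, event):
        """Log one occurrence of an in-game event for a player."""
        self.events[(player_id, event)] += 1

    def count(self, player_id, event):
        """How many times has this player triggered this event?"""
        return self.events[(player_id, event)]

# Record a short play session for one player.
t = Telemetry()
t.record("p1", "level_complete")
t.record("p1", "level_complete")
t.record("p1", "achievement_unlocked")
```

Aggregated across a player base, counts like these are exactly the behavioral data that makes a well-designed achievement system an analytics bench.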
Leonard Murphy summarizes some of the ethical considerations well, but they’re no different from the ethics of research vs. marketing in general: do no harm, and don’t sell during the research process. Tom Anderson argues that privacy standards for social media research are as necessary as ISO certification for market research (i.e., not needed at all), and I would agree.