September 13, 2012

Garbage In Garbage Out Part Deux (aka Panels Suck)

What’s the difference between a $6 respondent and a $25 respondent? Is a $25 respondent better in terms of recruiting practices?

by Jason Anderson

Owner at Datagame


Editor’s Note: All I can say here is don’t shoot the messenger, folks! There is a reason that the two aspects of MR facing massive new competitive threats are sample access and survey-based research, and Jason explores part of that reason in this post.

In my role as a client-side researcher, I am blessed with an unbelievably large global customer database in the tens of millions of people. Not email addresses, but real people that I know exist in the real world by virtue of their purchase histories, credit card information, and behavioral metrics on our various online services. Because of this blessing, over 90% of the survey work I field is CRM-driven.

But there’s still that other 10%. And while I have a reasonable degree of confidence about what the sources of error and bias are in my CRM-based sampling efforts, my trust in panel recruiting erodes more and more every month. Consider, for example, a recent vendor selection experience:

  1. I decide to run a study in Country A, Country B, and Country C. I request bids from Company X, Company Y, and Company Z.
  2. I receive bids for $25 per complete, $20 per complete, and $6 per complete.
  3. I contact the local office for Company X in Country A and get an additional quote of $12 per complete.
  4. I tell Company Y that their $25 per complete bid has been beaten by a significant margin and get a “new” bid at $15 per complete.

Names obscured to protect the innocent. But nobody was “innocent” in this exchange, because from a distance it becomes obvious that the “value” of a completed survey is completely arbitrary and driven not by data quality or service quality but by a desire to win the bid.

Worse yet, I question whether that completed survey is even worth $6 to begin with. As an experiment, I joined one of the name-brand panels as a “panelist” under a pseudonym one month ago. I completed as little of the registration process as necessary to become qualified to take surveys. (Don’t worry, I haven’t polluted any of your actual work with fake responses. But I’ll come back to that in a moment.) Between August 3 and September 10, I received 25 survey invitations: roughly five per week. The panel’s own frequency-of-contact guidelines explicitly say no more than one invitation every two days and no more than 12 per month. Do the math: over those five and a half weeks, the panel violated both of its own limits.

“Oh, but it can’t be that bad! Most panelists are legitimate.” Let’s assume for a moment that this hypothesis is correct, and that panelists are recruited through completely legitimate efforts. For example, perhaps they were on Google, searched for “surveys,” and clicked one of the paid ads that turn up.

Hmm. (And by the way: I’ve never been offered $20 to complete a panel survey, whatever those ads promise. Which panel do I need to join?)

Creating a fake panelist account is fast and painless; identity verification on the Internet is nearly impossible. But it’s not just respondents committing fraud in this process; the panels themselves are complicit. The lack of transparency in downstream processes invites opportunism, if not straight-up rule-breaking.

Consider: What’s the difference between a $6 respondent and a $25 respondent? Is the $25 respondent substantially better in terms of recruiting practices, data quality, and policy integrity? Or was the $25 respondent simply a $6 respondent that had been purchased from another source and marked up? And how can you tell the difference?

Answer: You can only tell the difference in quality if you are told which panels are being used, and how those panels manage their database. I haven’t found full-service research agencies to be terribly eager to share that sort of information, because:

  1. it allows me (with a little bit of sleuthing) to determine the profit margin between the sample cost and the delivered work, and
  2. the agency harbors fears of being disintermediated (there’s that word again!).

So what’s a client to do? I have three rules for myself:

  1. Always know the source of the sample.
  2. Never communicate “statistical margin of error” to internal clients on panel-based surveys.
  3. Stay close to the source. Don’t tolerate unreasonable mark-up on panel data, particularly when it’s known to be a pass-through cost from a downstream supplier.
