November 8, 2017

Researchers: Is There Poop in Your Brownies?

While research companies push for ever-faster turnaround times, respondent and data quality are often sacrificed.

by Ron Sellers

Business solutions in 48 hours! Get your survey data overnight! Do agile research! Fast, faster, fastest!

Yes, it seems the insights world is moving faster and faster every day. Many companies are promising turnaround times that would have seemed absurd just a decade ago. Shorter questionnaires, automation, and DIY solutions all offer speed and more speed.

But there’s one big question with this race to be faster than everyone else: what’s getting sacrificed?

No matter how a questionnaire is designed or how data processing and reporting are automated, there’s still one essential component of any quantitative study: respondents. And while online research panels can give you access to thousands of respondents in just hours, panel quality ain’t gettin’ any better, folks.

As regular users of panels, we are also regular recipients of bad respondents mixed in with the good ones:

  • Research bots
  • Duplicate respondents
  • Straightliners
  • Speeders
  • Other kinds of obvious cheaters

But aren’t panel companies and field agencies screening out the bad respondents for you? Well, they’re trying, but many of their solutions are automated (again, in the interests of being cheaper and faster). For example, they’ll employ an algorithm that automatically tosses any respondent who answers a questionnaire in less than 50% of the average length, or one that catches straightliners in all your grids (that is, if you’re still using lots of grids).  
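
To make that concrete, here is a minimal sketch in Python of what those automated checks typically boil down to. The data layout (the "duration_secs" and "answers" keys) and the exact cutoffs are illustrative assumptions based on the description above, not any particular vendor’s implementation:

```python
# A minimal sketch of typical automated panel screening, assuming each
# respondent is a dict with hypothetical keys "id", "duration_secs", and
# "answers" (mapping each grid question to its list of row answers).
from statistics import mean

def automated_flags(respondents, grid_questions):
    """Flag speeders (under 50% of the average duration) and straightliners."""
    cutoff = 0.5 * mean(r["duration_secs"] for r in respondents)
    flagged = []
    for r in respondents:
        if r["duration_secs"] < cutoff:
            flagged.append((r["id"], "speeder"))
        # A straightliner gives the identical answer down every row of every grid.
        if grid_questions and all(
            len(set(r["answers"][q])) == 1 for q in grid_questions
        ):
            flagged.append((r["id"], "straightliner"))
    return flagged
```

Notice what a rule like this cannot see: a bot that paces itself, or a cheater who varies grid answers just enough to dodge the pattern.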

Frankly, they just miss a lot.  

Panel quality is atrocious today. Grey Matter Research has adopted the position that every respondent we get is a bad respondent, until we can demonstrate otherwise. This takes a lot more than digital fingerprinting or pre-programmed algorithms. Usually, it requires going line-by-line through the data to find and remove problem respondents. Just a few ways we do this:

  • We review every response to every open-end. Even after the field agency or panel has done its quality control checks, we regularly receive verbatims that just say “great,” give answers that have nothing to do with the question, or are outright copies of the question itself that a bot picked up from the questionnaire and inserted as the answer. (A rough sketch of such checks follows this list.)
  • We look hard for duplicates. Despite the claims of how digital fingerprinting removes this problem, we regularly find dozens of duplicates in a sample. The chances that a survey database of 600 respondents contains two 43-year-old Hispanic women from Iowa?  Possible. The chances that both are football fans who spelled their favorite team as the Pittsbergh Stellers? And that they just happened to complete the questionnaire 15 minutes apart? Not so possible.
  • We search for logical anomalies, which are different in every questionnaire. In various recent studies, we’ve thrown out people who claimed to have been in both the Boy Scouts and the Girl Scouts as kids, those who make under $30,000 annually but reported giving $40,000 to charity last year, those who supposedly live one mile away from four different local hospitals that are 75 miles apart, and those who belong to a non-existent organization (with a name that couldn’t be confused with a real one).
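
For illustration only, here is a rough Python sketch of the first two checks. The data layout (the "profile" and "verbatims" keys) is a hypothetical assumption, and in practice every flag still gets a human look before anyone is tossed:

```python
# A rough sketch of two line-by-line QC checks, assuming each respondent is a
# dict with hypothetical keys "id", "profile" (demographics), and "verbatims"
# (mapping each open-end question to its answer text).
def suspect_verbatims(respondent, question_texts):
    """Flag open-ends that are one-word filler or echoes of the question."""
    bad = []
    for q, answer in respondent["verbatims"].items():
        text = answer.strip().lower()
        if len(text.split()) < 2:                      # "great", "good", etc.
            bad.append(q)
        elif text in (t.strip().lower() for t in question_texts):
            bad.append(q)                              # bot echoed the question
    return bad

def likely_duplicates(respondents):
    """Pair respondents whose demographics AND verbatims match exactly."""
    seen, dupes = {}, []
    for r in respondents:
        # Identical profiles plus identically misspelled verbatims are the tell.
        key = (tuple(sorted(r["profile"].items())),
               tuple(sorted(r["verbatims"].items())))
        if key in seen:
            dupes.append((seen[key], r["id"]))
        else:
            seen[key] = r["id"]
    return dupes
```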

Of course, respondents do make mistakes or misread questions, so the decision to toss a respondent usually rests on a combination of factors. They straightlined the one short grid we included? Mark ‘em yellow. They completed the 12-minute questionnaire in 8 minutes? Downgrade to orange. And they answered the question “What are the main reasons you are not at all interested in learning more about this product?” with “I like this advertisement the best”? Buh-bye.
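
A toy version of that escalation, with the flag names and thresholds as purely illustrative assumptions:

```python
# A toy version of the yellow/orange/toss triage described above; the flag
# names and thresholds are illustrative assumptions, not a fixed standard.
def triage(flags):
    """Classify a respondent by the set of quality flags they accumulated."""
    # flags: e.g. {"straightliner", "speeder", "contradictory_openend"}
    if len(flags) >= 3 or ("contradictory_openend" in flags and len(flags) >= 2):
        return "remove"   # a pattern of independent problems: buh-bye
    if len(flags) == 2:
        return "orange"   # two soft flags together: strong suspicion
    if len(flags) == 1:
        return "yellow"   # one flag alone could be an honest mistake
    return "clean"
```

The thresholds are beside the point; what matters is that no single signal condemns anyone, and only a pattern of them earns the toss.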

So what does any of this have to do with speed? (Or with brownies…but I’ll get to that in a moment.) Simple: this cleaning process is not a fast one. It doesn’t have to take days, but it won’t be done in minutes, either. In the quest for getting your data faster, how many of the respondents you’re getting are bots, duplicates, satisficers, or those who just didn’t actually pay attention to the questions you were asking?

Do you have any idea how many respondents had to be replaced on your last study? Or what criteria your vendor used to identify fraudulent or poor-quality respondents?

Most importantly: Did your vendor even do anything beyond some basic, automated checks to ensure you got real, quality respondents?

Make no mistake: this is not just a problem with quick-turnaround surveys. I’ve seen plenty of databases delivered in no particular hurry that still lacked proper quality control. But going all-out for speed dramatically increases the chances that your data includes some bad respondents, because putting everyone on a rush basis makes it far less likely that there will be time available for quality control.

In a qualitative interview last month, I had a respondent object to a product concept, because she felt one small part of the statement was not true. When I probed for why this undermined the whole concept, she earthily explained, “Even a little bit of poop in the brownie batter means I’m not going to eat the brownies.”

So what proportion of bad respondents are you willing to accept in order to get your data faster: 2%? 5%? 10%? 20%?

Or, to paraphrase my favorite respondent of the year so far: How much poop will you accept in your batter in order to get your research brownies baked faster?


Tags: automation, data collection, data quality, respondent

