Research Methodologies

January 18, 2013

Sliders: Good for White Castle, Bad for Research

Using sliders in online surveys may increase engagement, but also introduce bias.

by Ron Sellers

Editor’s Note: Ron Sellers wades into the great “respondent user experience” debate in his usual down-to-earth and irascible way. Now this one should start a good debate!


If you have participated in a survey online, you have probably used sliders at some point. Survey designers often include sliders to enhance respondent engagement, or to make a larger scale (e.g., 1–100) feel more approachable and natural than asking people to type in a number.

Even if you’ve used sliders as a participant, I’m hoping you haven’t done so as a researcher. A new report from Grey Matter Research shows that if you did, your sliders may well have biased your data.

In an online survey of 1,700 adults (with a demographically representative general population sample from an online panel, conducted in English and Spanish), Grey Matter included a couple of question sets with sliders.  One used a seven-point scale, and one a five-point scale.

The problem with sliders is that, unlike radio buttons, a slider control must start somewhere on the scale – at the low point, at the mid-point, at the high point, or somewhere in between.

This creates a couple of problems.  First, let’s say you decide to start your slider at the mid-point of a seven-point scale (a 4).  What do you do if the respondent wants to select a 4 as her answer?  You can accept the lack of movement of the slider as a legitimate response, but then you can’t differentiate between people who purposely wanted to choose a 4 and those who just didn’t bother to move the slider.  Or you can force movement of the slider in order for the response to be recorded, but then someone who wants to choose a 4 must move it off the 4 and back on again.  (This is the approach we took in our study.)
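The forced-movement rule described above can be sketched in a few lines. This is a hypothetical illustration, not code from the study: the point is simply that an untouched slider is rejected rather than silently recorded as its starting value, so a deliberate choice of the default point requires moving the control off and back.

```python
# Illustrative sketch of the forced-movement rule: record a slider
# response only if the respondent actually moved the control, so an
# untouched default (e.g. a pre-set 4 on a 7-point scale) is never
# logged as a deliberate answer. Names here are hypothetical.

def record_slider_response(start, final, was_moved):
    """Return the respondent's answer, or None if the slider was never touched.

    start     -- the scale point where the slider was initially placed
    final     -- the scale point where it ended up
    was_moved -- True if the respondent dragged the control at all
    """
    if not was_moved:
        # Indistinguishable from a skipped question: reject and re-prompt
        # rather than silently storing the starting value.
        return None
    return final

# A respondent who wants the starting value must move off it and back:
# final == start, but was_moved is True, so the answer is accepted.
assert record_slider_response(4, 4, False) is None
assert record_slider_response(4, 4, True) == 4
```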

But a far more significant problem is that our research found that people’s answers depended significantly on where the slider started.  We randomized the starting point (one-third saw it at the bottom of the scale, one-third in the middle, and one-third at the top).  After about 500 completes, the data was evaluated.

Across five questions using the five-point scale and nine questions using the seven-point scale, we found pervasive bias tied to the starting point.  People whose slider started in the middle of the scale were more likely to choose a mid-point answer.  Those whose slider started at the top were more likely to choose a higher number.

But the effect was particularly strong at the bottom end of the scale.  People who saw their slider start at the bottom were strongly biased to choose a low number on the scale.  Up to three times more likely, in fact, than people who started elsewhere.  It doesn’t take a research genius to see the problem here, nor to realize how much worse it would be if we hadn’t randomized the starting points on our sliders.

All of the nasty details are in the report How Sliders Bias Survey Data, which is available upon request from Grey Matter Research.  You’ll read why Grey Matter no longer uses sliders in any survey.

Although our latest work focused specifically on sliders, there’s a bigger issue here – how much are attempts at respondent engagement corrupting the data we get?  When we move away from tried-and-true questionnaire design in quantitative studies and start using things such as drag-and-drop, gamification, cartoon icons that “guide” respondents through the questions, thermometer-style graphic measures, and other approaches, are we sure that we’re getting the benefits of respondent engagement without the downside of simply getting wrong data?

And even bigger than that is the issue of why we need respondent engagement in the first place – is it because people are bombarded with too many extremely long questionnaires, surveys that don’t really apply to them, repetitive question sets, lengthy and boring grids, and other things that are making participation tedious and causing respondents to lose interest?

How much better would it be if we simply design a good, simple, relatively brief questionnaire that respects our respondents and doesn’t require us to resort to tricks and gimmicks in order to keep them engaged?

Sliders may bring intense brand loyalty to White Castle, but they’re probably best left to the fast food industry rather than the research world.


Tags: online surveys, quantitative research, respondent engagement, survey design

Disclaimer

The views, opinions, data, and methodologies expressed above are those of the contributor(s) and do not necessarily reflect or represent the official policies, positions, or beliefs of Greenbook.
