Can Researchers Trust Online Access Panel Data?

 

 

Editor’s Note: Online panel quality is perennially a hot-button issue in our industry. Here on GreenBook Blog alone there are over 140 posts on the topic, many of them from this year. However, none are from the unique perspective of a recent client-side researcher who has now joined the “Dark Side” of suppliers, so today’s post from Edward Appleton is important on that front; it also raises some serious concerns that our industry may wish to sweep under the rug.

Before you dive in, though, I feel honor-bound to mention that I know many panel providers are already working very hard to ensure the highest level of quality in their panels, and doing a good job of it, so things may not be as dire as they appear.

It’s also important to note that there is true disruption happening in this industry that will resolve many of these issues. The emergence of single-source platforms, programmatic networks, river sampling and true random sampling technology, enhanced verification technologies, and yes, even big data frameworks, is revolutionizing how we engage, understand, and activate consumers across the entire marketing life-cycle, especially research. Just as panels disrupted CATI and drove the growth of online research, these new models will usher in a new era of effective research whose potential we are only beginning to realize.

Current sample providers need to address the concerns raised here, but let’s be cognizant of the fact that change is inevitably coming very soon that will solve many of these issues, while surely bringing up new ones to replace them.

 

By Edward Appleton

I recently returned from a two-day MRMW conference on mobile in market research in London. In one key respect, it was an eye-opener.

As a backdrop: the overriding focus was on how technology providers have ingenious plans to further “transform” market research – recruiting respondents differently through gaming, using beacons for “in-the-moment” insights, accessing emerging markets through low-tech mobile devices, reducing survey length… Some were more convincing than others, let’s say. Common denominator? Scalability…

Very few provided powerful examples – case studies* – of how mobile had actually helped surface insights that otherwise wouldn’t have been possible.

The session that had the strongest impact was one led by Survey Sampling International‘s Global Knowledge Director, Pete Cape. He had invited 12 “real respondents” into the conference room and asked them to give their honest views on market research.

What they said was both fascinating and horrifying:

    • money was often the first-mentioned motivator for MR participation
    • many openly confessed to multiple panel participation
    • not telling the truth was openly admitted
    • taking upwards of 30 surveys per month (one gent had done 80 in the last four weeks) was commonplace.

Many were also extremely critical of survey design – “most of your mobile surveys are total c**p” was one succinct statement; grids were torn into; many surveys simply didn’t work technically…

It was a sorry state of affairs all round – market research being gamed by savvy pros interested in the money and disdainful of the people responsible. Curiously, the audience loved it. Much laughter. A distinct lack of embarrassment – very odd.

I was aghast, and tried to seek out Mr. Cape at the SSI stand to understand his motivation – why on earth would he do such an apparently self-destructive thing? Sadly, he was nowhere to be seen, so I am guessing as to his strategic intent.

Despite not having hooked up with Mr. Cape – yet, and perhaps he’ll comment on this blog if he reads it – here’s my take:

Online Access Panels – Change Afoot?

I can only assume that SSI deliberately wish to disrupt the business model of online access panels.

Public confessions of misdeeds can be therapeutic, but in a business context they are likely (hopefully?) more purposeful. SSI effectively let the world see that response quality in online access panels is very poor indeed. Why should anyone wish to buy that sort of low quality – regardless of price?

If you are an end-client or a major MR agency, why on earth would you wish to continue simply purchasing “professional” responses? Or be associated with that in any way?

An Open Secret Isn’t Any Less Shocking

Many people in the audience were less shocked than I was – a shoulder-shrug, a “that’s the way it is”, seemed to be the most common reaction… “Good on Pete for outing it.” “You look shocked, Edward – are you going to write a blog about it?” “It’s true what the respondents said; I worked for years for an access panel…”

Does nobody care very much? Is there such a disregard for quality in large areas of our quant. sampling industry that the concept of disruption seems to leave people unmoved? Or are the financials so ugly that a shake-out is possibly even welcomed?

We often talk and write feverishly about the future of MR – but what about the present?

At a guess, I would imagine that at least 50% of all quant. survey work uses access panels. What about all the major MR companies with reputations to manage – TNS, Ipsos, GfK? Is this all so new to them?

Take it to the client level: what about all the multi-million-pound decisions taken on the basis of this… “c**p”? Do clients know about this slight quality issue?

UK retailer Gerald Ratner’s remarks confessing one of his jewelry products was “total c**p” had catastrophic consequences for his eponymous jewelry chain, as some in the UK might remember.

“It’s True…but Keep it Amongst Yourselves”….Beg Pardon??

I checked out social media – Twitter first and foremost – for mentions of the disruptive impact of the SSI piece, using the conference hashtag #MRMW. The result: very few mentions indeed.

Twitter of course isn’t representative of the MR universe; perhaps the topic is too explosive – or perhaps people would simply prefer to pretend the session hadn’t happened. Just shut up and “move on”. A conspiracy of silence?

So what’s my take-out?

Well, recruitment quality is and has been an issue in many areas of the research industry for decades.

But if what SSI’s “real respondent” session suggested is in any way representative of the online access panel universe, then it has broad implications – “quality of response” has been “outed” as very, very low. Online access panels aren’t robust.

Worse: many of the more sophisticated parts of the MR toolkit relating to attitudinal measures – derived importance, conjoint, predictive analytics, mixed-modal implicit/explicit measurement, Bayesian statistics… – are pretty irrelevant if the fundamentals are effectively rotten. As for mobile? It hardly matters if it’s “in the moment” if that’s a fake moment.

“The King is dead” is normally accompanied by the phrase “Long live the King” – so I will watch the SSI space to see whether the disruptive session was actually deliberate, launched with strategic intent, and what comes next.

If not – if no further explanation is forthcoming and my blog floats off into the ether unnoticed – then I would suggest that the legitimacy of such online access panels be seriously questioned, and that they should come with a clear health warning to all future users.

I do hope I was not the only person to be shocked by the session – naivety to me is preferable to cynicism.

Curious, as ever, as to others’ thoughts.


26 responses to “Can Researchers Trust Online Access Panel Data?”

  1. Quality online data is not easy. It takes hard work. And a commitment to investing long term and making the hard decisions. As I said in my last article, there are 20 things that must happen to ensure quality, and you can’t take shortcuts. Not all companies are the same. There is a road to success and confidence, but it’s the road less travelled. The link to the earlier article is below:

    http://www.greenbookblog.org/2015/08/24/getting-real-about-online-data-quality-best-practices/

  2. Great article, I’m totally with you in outing MR’s dirty secret. It’s been hanging around and not dealt with for too long.

    The only point I wanted to make is that in my world of mobile qual and communities, the benefit of ‘mobile’ is simply one of accessing the person rather than it surfacing new insight. If we didn’t try to connect via mobile devices, in a lot of cases we wouldn’t be able to connect at all. There’s no denying it gets the researcher closer to the action, but you’re right, if the action is fake then there’s no value.

  3. As a person who promoted and sold panel research for many years, one of the truisms of panels (client or third party) is that a panelist doesn’t see a question the same way a researcher does.

    When panels were first introduced the response was fantastic, with many people wanting to share ideas. When technology providers entered the arena, surveys became sloppy, respondents were often inundated with surveys, and inboxes were bulging with invites.

    The response base was often not representative (first in) and the data was often misleading. Clients and respondents alike didn’t trust surveys, and surveys were egregiously long. While we fixed the problem within research, we left respondents with a track record of poor and redundant surveys with no clear purpose and no clear impact.

    Then we made it worse by adding incentives.

    Research has changed a lot over the last few years and the heyday of trackers and large quant surveys has passed. What is still viable are surveys that are hybrid qual and quant, qualitative product-driven work, and quick, focused quantitative surveys often done with either client sample or Google Analytics.

    While many panelists are there for the money, there aren’t many of those who will qualify for innovation and product work and even fewer who will stick it out, even for a large incentive. The value proposition for panels is changing and panels don’t have to be as big, they just have to serve more individual purposes. Many of the large panels still boast big numbers but for the most part, those who participate are selected for much more specific purposes.

    As for generalists who are there for incentives, they exist because someone is willing to pay for their response. Clients are doing less general research these days, so the problem is self-limiting except in the sense that panelists are going to cost more in the future and will come from fewer sources. That is probably a good thing.

  4. Beyond all of the suggestions given, I would add that fielding studies in waves (sub-segments of an overall sample) can help stabilize results. As most know, online surveys are completed very quickly upon delivery. This means that there can be a lot of bias based on the time of day and day of the week of delivery. Large-scale web surveys should be administered in the same manner as telephone surveys, with fieldwork spread out over at least a week, probably more, depending on what your study is trying to represent. I would suggest that the surveys be conducted in waves with at least 4 sample replicates (with equal sample sizes). The final result would be the average of the replicates; this helps smooth the data by reducing distortions.
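
    In rough terms, the replicate-averaging idea described above might look like the following minimal sketch – purely illustrative, with hypothetical field names (the "wave" tag and the 0/1 "would_buy" measure are invented for the example, not drawn from any particular survey platform):

    ```python
    # Illustrative only: estimate a key measure per fielded wave (replicate),
    # then average the per-wave estimates to smooth out time-of-fielding bias.
    import random
    from collections import defaultdict
    from statistics import mean

    def smoothed_estimate(responses, measure="would_buy"):
        """responses: list of dicts like {"wave": 1, "would_buy": 0 or 1}."""
        by_wave = defaultdict(list)
        for r in responses:
            by_wave[r["wave"]].append(r[measure])
        # One estimate per equal-sized replicate (wave), then their average.
        per_wave = {w: mean(vals) for w, vals in sorted(by_wave.items())}
        return per_wave, mean(per_wave.values())

    # Example with four simulated waves of 500 respondents each
    responses = [{"wave": w, "would_buy": int(random.random() < 0.3)}
                 for w in range(1, 5) for _ in range(500)]
    per_wave, overall = smoothed_estimate(responses)
    print(per_wave)           # per-replicate estimates
    print(round(overall, 3))  # the smoothed, replicate-averaged result
    ```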

  5. This blog post triggered a thought exercise at our office this morning: what would we do if we woke up tomorrow morning and all clients said, “you can’t use online panel sample.” It’s an extreme outcome, obviously, but a useful conversation to have.

  6. OK, this is an important topic but it is not a secret. There have been good studies into panels published on a regular basis, with Ted Vonk’s paper “The Effects of Panel Recruitment and Management on Research Results” in 2006 being one of the first. There have been working parties, the ESOMAR questions, and quality standards. We have had numerous appearances by panel members at conferences and events. However, in recent years the panel companies have been almost the only people talking about the problem, and they are almost the only people who are still trying to fix the problem.

    The reason that people like Pete Cape from SSI and Melanie Courtright from Research Now write about and speak about this topic is that their companies really want/need panels to be better. If panels fall below a certain level then the panel companies will lose business to cheaper/faster alternatives.

    The main reason that panels are not better than they are is that the key driver of quality is the customer. Faced with the option of poor stuff for $X and good stuff for $2X or $3X, too many customers (people in the research agencies and people in client-side organisations) opt for the lower price. This problem is then massively compounded by these same customers (many agencies and many end clients) producing very, very poor surveys – badly worded, very long, and boring. At ESOMAR Congress today we heard that the median survey sent to SSI is 21 minutes (yes, that means half are longer) – almost any survey over 20 minutes will deliver some poor data and will erode the quality of future research. We know that about one-third of online surveys are attempted by people using a mobile device, but we also hear that more than 70% of the surveys that the customers of panels write are not really suitable for mobile, usually because of the length of questions and answers, and the total length.

    So, it is a problem, but not a new problem, and it will continue to get worse until customers of panels decide to stop sending out poor surveys and stop driving the price to the lowest possible level at the expense of quality #justSaying

  7. Great article! This is not something new for people who execute online projects often. There will always be ‘professional respondents’ in any online panel (be it a market leader or a new company delivering river sample). What researchers need to determine (in my view) is which studies would be least impacted by online sample and which would be highly impacted. Studies that might be highly impacted should not be run on an online panel; alternative methodologies should be used instead.

    The ‘issue’ with most alternative methodologies, such as telephone or face-to-face, is that they are costly and take more time, but they are more accurate (if executed ethically). Most of the time clients compare their cost and timeline with those of online methodologies, which one should not do. They even push suppliers to reduce both cost and timeline, which in turn motivates suppliers to take a wrong path or shortcut to reduce them.

    Sponsors of research should always remember that quality comes at a price (and ample time) and they should be ready to spend on it instead of buying data from professional respondents.

  8. @raypointer – Agree totally – The issue is that the lack of action may decide (or may have already mostly decided) how much companies are willing to fund research (not much), and ultimately a downward cycle is occurring – The time factor isn’t going to change because it is driven by the market. That means we have to change. I believe that was how IIEX initially gained traction.

    There are many incremental steps before we throw the baby out with the bath water, but doing the same thing over and over and expecting a different result isn’t one of them.

    We have spent a lot of time trapping spammers, gamers, etc., but we haven’t focused on a way to limit for-profit responses aside from limiting invitations. What would happen, for instance, if we created online surveys that were interactive, or ones where the respondent had a video “person” delivering the questions, the responses were open-ended, and the software knew how to probe?

    The bottom line is that easy, cheap and quick deliver what you would expect. That’s why so much quant is directional and census driven these days. Online research started out with great promise but it delivered a norm that no one wants and we as an industry own that on all fronts.

    Taking respondents outside the box is uncomfortable but allowing them to create their own definitions is pretty cool.

  9. Hi Edward,
    thanks for sharing! For me, the discussion goes far beyond the respondent’s perspective and his/her motivation and capability to participate in the intended way. It is – in the end – a question of transparency for the end-client who wants to derive the right conclusions and make good business decisions. I have been on the company side like you – and often heard those rumours that were spread from the agency side. I felt they were sometimes close to criminal behaviour when they talked, e.g., of questionnaire duplication – but on the company side we never knew the right questions to ask or had the right tools to get a final check on the true dimension of the issue, or whether we as end-client were affected. Are we talking about a few black sheep? Who is in control of the situation, and how big is the issue? Who is able to give the end client a “quality seal” – which would probably also impact the prices paid… good prices for good quality?
    We should think about how to re-establish the credibility of market research services – on both sides, agencies and buyers of market research services.
    Worth spending more thought on this!

  10. I agree with the commentary above. I believe the problem is less due to multiple panel participation and more to respondent cynicism at long, tedious “fill-in-the-form” interviews chock-a-block with badly worded or overly long and “correctly-worded” prompts, ambiguous or irrelevant answers, forced response when no response is appropriate and the dreaded grid questions. Worse, the respondent is neither given feedback as the in-person interviewer does with engaging head nods and “uh-huhs,” nor is the respondent asked for her opinion on whether this interview was enjoyable and captured her opinions accurately and fully. Most researchers don’t have the guts to ask it.

    My business is developing new products, not research. I need honest and accurate answers to relevant questions. I insist that every interview gives respondents feedback as they go, calculates evaluative scores real-time, shows respondents their score and what it means, asks for their agreement or disagreement and why, and why they rated as they did. And, at the end, asks for their wrap-up evaluation of the interview itself.

    A core problem is that researchers and clients are stuck in the paper-and-pencil paradigm for online research. Try moving “back” to the old oral-conversation paradigm of in-person interviewing. Respondents hate to “fill out forms” and love it when the interview “talks interactively” with them, eliciting their opinions in an engaging way. I know. I’ve been slammed by respondents when I did it stupidly, and cheered when we got it right. After enough real research on the research with the interviewee, you learn to keep interviews under six minutes, full of feedback, conversationally engaging and pleasant for the respondent.

    There is hope.

  11. @Ray Poynter. Ray, I believe that the continued use of panels has become the “Devil I know” in the research industry.

    For clients, they basically know what they are getting with panel research, and may not be willing to venture into unknown territory for their studies. They weigh the risk of getting something they don’t expect or don’t understand against the risk of getting poor quality from what they know.

    For vendors, everyone else is still offering panels (so you’re not offering anything worse than the competition), and many don’t know what else to offer, or how to make a compelling case to clients to try something different. The risk is offering something which may be better, and then losing out to a competitor offering panels.

    We’ve all made it “work” with panels in the past, and that inertia has been difficult to overcome for everyone in the industry.

  12. Great initiative from my former colleague Pete Cape to raise this issue again. Hopefully it will lead to some changes!

    @Ray Poynter, I fully agree with you and I believe you are spot on about the respondents using a mobile device (phone + tablet). Most of our Dutch panel members take surveys on their mobile device, and hence we receive a lot of complaints at our helpdesk because they are not able to participate at all. After clicking the participation button, they are directed to a screen-out page (based on device usage). 🙁 Yes, mobile is the future, we tell ourselves… I sort of get that moving a web survey to a mobile phone is difficult, but a tablet should be doable, I believe.

    At IIeX in Amsterdam, Eric Salama spoke about survey design and particularly about mobile (watch from 10:16, https://www.youtube.com/watch?v=VK7CAh_lCTA). And three weeks later, Nigel Hollis also published a blog post about survey design. So as a panel provider, I hope that this is the start of some change 🙂

    (http://www.millwardbrown.com/global-navigation/blogs/post/mb-blog/2015/09/07/why-researchers-need-to-ask-less-and-impute-more)

  13. @Edward, thanks for the great article. I believe the solution is all about understanding that people in the digital world have learned to become immune to anything that is not relevant to them, either an advertisement or an invitation to take a survey. As consumers, we have also started to subconsciously filter and tune out anything that provides us with poor experiences.

    The emergence of single source platforms, programmatic networks, river sampling and true random sampling technology will not solve the problem at all, unless clients, market research agencies, panels and technology companies understand that the ultimate goal should be to turn the experience of participating in any market research initiative into something relevant and engaging for people. Reflect on why online communities have emerged as one of the main market research tools adopted worldwide.

    A few days ago I wrote the article “Push X Pull Market Research” here on GreenBook’s blog (http://www.greenbookblog.org/2015/09/21/pull-vs-push-market-research/), and would love to know your thoughts on the topic. Also, having 15 years’ experience building online panels, and being someone who has fought to maintain the quality and survival of our own panels in Brazil and Latin America, I recently wrote an article on eCGlobal’s blog, “Let’s save the Online Panels!” (http://www.ecglobalsolutions.com/?p=1904), that discusses many of the issues on online panel quality commented on here.

  14. I like the commentary that Ray provided here, especially since it’s an unbiased view, i.e. he doesn’t work for one of us heinous panel companies!

    It’s time we took a serious look at the quality of the surveys we are being asked to field, and I think we need a trade body that stands with panel companies and supports us as we try to drive change upwards through the industry. It’s all very well pinning the blame on the respondents all the time, but many of our challenges would be solved if surveys were written for the 21st-century consumer. Fielding an average LOI of 20 minutes to humans who now have 10-second attention spans is only going to do one thing, which is the reason SSI are going on this crusade. For their bravery in speaking out and raising this issue so openly, I think they should be applauded rather than derided.

  15. I also believe there are some really significant methodological challenges here.

    If you think back to our roots, the scientific method, measuring genuine attitudinal shift, making million dollar decisions, the psychology of all this – THIS IS REALLY IMPORTANT.

    The mind-set of somebody telling us they rush through this in their lunch hour, or do it in the evening after a couple of whiskies – is this really how we want the data to be sourced for really important, company-changing, life-changing decisions?

    We really need to consider whether this approach is relevant any more.

    I know when panels were created they helped solve lots of challenges but now they are part of the challenge.

    Perhaps we need to go back to ‘random intercept’ which is possible through technology in so many different ways.

    I agree with Ben, let’s develop research which works today. Who in their right mind would genuinely invent a 25-minute survey with grids if they were writing their survey from scratch?

    We should “select all”, delete – and then start writing our research with original proposals and contemporary questions. Our profession would be a much better place for it.

  16. The situation is even worse than what has been written above. The consequence of poor surveys and too-low incentives (both monetary and social) is that (a) many segments of society have no interest whatsoever in participating, particularly young adults and minorities, and (b) the half-life of a panelist on a panel is typically measured in weeks. Demand for low quality at a very low price is high, creating a race to the bottom. Taken in concert with shortening attention spans, the value of behavioral data gathered as a by-product of millions of people making decisions and living their lives is increasing (the whole big data industry) while the relative value of survey data is decreasing.

  17. Excellent article Edward! I have come to largely distrust any and all online panel quant findings, as we all well know that only certain types of people are inclined to be panelists (i.e. typically lower income, who need the incentives). I was in on a new product category that launched in 2014. The research had indicated that the median HHI for the product’s purchaser would be around $86K (USD) – high even for the panel cohort – when in fact once the product launched it turned out that the actual median HHI was closer to $140K (and far fewer units of the product sold). The problem is, no one with an HHI of $140K is signing up for online panels… thus the data is totally skewed, and basically crap. So, I think if you are researching a low-price product for Wal-Mart, then yes, perhaps your panel – may – be representative, but for any high-ticket-price item (a considered purchase), I do not believe that online panels have any way of working in that space – at least not the way they are run today.

  18. Man, it’s tough to continue to watch people throw an entire business out as invalid based on some (occasionally valid) concerns. Tough to see people make sweeping, and often incorrect, statements. This part of our DNA as an industry continues to disappoint me. There are so many examples of great research done with online sampling. Pew just released a report showing that the data they get online has some differences, but overall little material difference from phone. Large corporations are making big investment decisions with online data, and enjoying success. As leaders in this space, we need to have better talk tracks. Fit for purpose. Strengths and weaknesses. Biases to watch for and account for. But not sound like reactionists or crazy people who set our own houses on fire.

    We CAN reach representative samples, including the affluent (they participate for reasons other than incentives). We CAN sample well. We CAN control for biases. We CAN talk about good use cases. We CAN consult and recommend in ways that advance our entire industry and add to our credibility. Let’s talk about that.

    1. Agreed 1000% Mel, thank you. The Pew results certainly should go far to put to bed the interminable “representative” argument and the business case for panels and online research from a cost, speed, and efficiency standpoint was made long, long ago. Now we should be discussing how to optimize data quality and enhance consumer engagement, not bemoaning the way the world has irrevocably changed.

  19. @MelanieCourtright makes some good points. It is time to quit thinking of panels as a giant group of people simply there to serve us. Research is changing and the way we use panelists, respondents and participants is changing. I have always had an issue with the reluctance to let researchers participate in panels (obviously not taking surveys about competitors), but they have opinions too! Blaming panel companies for an issue created by researchers is not the right way to look at this. Whatever the status of respondents, it is what we have to work with and we have to figure out how to use it effectively. Even a “professional” panelist has an interest in some categories that goes beyond the incentive.

    This whole discussion really centers more around large quantitative surveys, and few companies run the volume of trackers they ran even two years ago. That’s where the real impact is apparent. If you engage panelists at a higher level, the money seekers will disengage. Shorter surveys with more qualifiers (not demographic) will alleviate most of the problem. Panelists, respondents and participants are people, not data; we forget that.

  20. I wonder if Pew identified itself when it commissioned its comparative study. I know from experience that “benchmark” studies get special treatment in terms of sampling, weighting, project management and so on for obvious reasons.

    Asserting that in general an online survey will provide satisfactory data assumes (a) that all panels are equal, and (b) that all projects conducted on a particular panel will result in equal quality. Panel companies are particularly sensitive when their results are going to be compared with other interviewing modes (phone, robocalling), with their competitors, or with a true benchmark result (ex: an election). A given panel company can typically provide higher quality at a higher price. “1000 genpop respondents” can be anything from “the next 1000 people that click on a link in the next 15 minutes” (river) to a survey with 16-way interlocking quotas, multiple invitations for rare groups, a long field period, respondents that have not taken related surveys recently, respondents that haven’t taken many surveys at all, in-survey satisficing traps, improved respondent validation & de-duplication (especially if sample is outsourced to other panel companies, a very common practice), extra respondents for matching and weighting schemes that even NASA would judge to be too complicated.

    Some clients really don’t care; they just want data that supports their business case or their PR initiative (in Stephen Colbert fashion: “Is my product a great product or the greatest product?”). The survey industry is where I first heard the term “directionally correct.” Basically, if there are two choices, the survey should identify the winner accurately. This is often good enough for elections, where if you pick the winner correctly you claim victory even when you were outside the MOE. Imagine if you bought a car this way. You’d pay a low price and you’d know that if you put the car in drive it would go forward, but at some completely random speed.

  21. Really good article, and we’ve found, as you did, that the major panels definitely know and understand this challenge – which is a good thing, as it means that steps are being taken to tackle the issue, not just by SSI but by the others too.
    Whilst there are respondents who do game the system, on the other side there are still many really great respondents on the panels – so the challenge seems to be more about rewarding and engaging these respondents for being legitimate.

    As a video survey provider, we probably bring these challenges with the panels to light more than ever – thankfully video is very hard to game – and when we’re after 21-year-old females and see a 40-year-old male respond on camera, we can easily flag this to improve quality over time.

    There are a lot of other great technologies like Verbate.co that can build quality into existing panels, we do see video as one of the most effective ways to truly vet respondents though.
