Heal Thyself! When Will Market Research Get Serious About Sample Quality?


By Allan Fromen

It’s a really fun time to be in market research. As June’s IIeX conference demonstrated, there is a plethora of innovative start-ups shaking up the industry. I had the pleasure of introducing Jeff Bander from Sticky, a company that aims to bring the previously expensive methodology of eye tracking to the masses via a cloud-based solution. Really cool stuff. To read reviews of the IIeX conference, see these posts by Annie Pettit, Tom Ewing, and Dave McCaughan.

Amidst all the focus on new tools and techniques, I was struck by a presentation that compared research panelists with dynamically sourced respondents. A senior researcher went through slide after slide of data showing significant differences in the results based on whether the respondent was a panel member or recruited via dynamic techniques.

Someone in the audience aptly asked “So which respondent type reflects the truth?”

The answer was “We don’t know.”

This exchange left me both sad and reflective.

We all know that response rates are declining. More worrisome are the studies indicating that some panelists are professional respondents, jumping from survey to survey to goose their incentives. I recall hearing in one presentation that the average panelist belongs to 5-6 panels. Another presentation at a top conference years ago presented evidence that cheaters (speeders, straight-liners, etc.) changed the results by only a tiny amount (low single digits), so any concern about bad sample was overblown and misplaced.
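
If you want to pressure-test that claim on your own data, the flagging logic is simple to sketch. Here is a minimal version in pandas, assuming a hypothetical export with a completion-time column and a block of rating-grid items (all file and column names are made up):

```python
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical export
items = [f"q{i}" for i in range(1, 11)]   # rating-grid columns

# Speeders: finished in under a third of the median completion time.
speeders = df["duration_sec"] < df["duration_sec"].median() / 3

# Straight-liners: zero variance across the rating grid.
straightliners = df[items].nunique(axis=1) == 1

flagged = speeders | straightliners
print(f"Flagged {flagged.mean():.1%} of respondents")

# Compare toplines with and without the flagged cases to see how
# much the "cheaters" actually move the results.
print(df[items].mean())
print(df.loc[~flagged, items].mean())
```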

Panel companies likely follow the classic bell curve, with some at the top of their game and far superior to their peers. But when you speak to folks over a drink, they admit that panel companies all “borrow” respondents from each other, and that quality is an issue everyone knows about but no one wants to address.

At another conference recently, a senior researcher admitted to being an online panel member herself. Not in a nefarious way, but as a means to evaluate survey design, user experience, and how surveys are actually being delivered. The two panel companies she named have top-notch brands and are known as leaders in the field. But her experiences could not have been more different. One panel was exactly as you would hope – they only sent her surveys that were targeted based on her demographics and past surveys. From this panel, she received about 2-3 invitations per week. The other panel bombarded her with a steady stream of surveys – 40 or so per week (by her estimation) – with seemingly no connection to any data she had shared previously, either via registration or completed surveys.

Isn’t it ironic that we are so meticulous in our survey construction – “garbage in, garbage out,” we all say – and then throw our carefully constructed babies out into the unknown? In an effort to reduce order effects, we rightly focus on randomizing brands within the questionnaire, yet we have no idea what survey the panel member completed just before ours. Some of the very brands we were trying so hard to inoculate from bias might have been shown to our panelist a minute before our survey.
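
The within-survey randomization itself is the easy part; a sketch of per-respondent brand rotation (the brand list is illustrative):

```python
import random

BRANDS = ["Brand A", "Brand B", "Brand C", "Brand D"]

def brand_order(respondent_id: int) -> list[str]:
    # Seed on the respondent ID: reproducible per respondent,
    # varied across the sample.
    rng = random.Random(respondent_id)
    return rng.sample(BRANDS, len(BRANDS))

print(brand_order(1001))
```

The hard part is everything this code cannot see: whatever surveys, and whatever brands, the panelist encountered in the hour before ours.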

None of this is news, unfortunately. Ron Sellers of Grey Matter Research has done an impressive and laudable job detailing some of these issues. See here and here.

I think the reason this is boiling over now is that we as an industry maintain a mantra of innovation: embrace new methods, innovate or die, disrupt or be disrupted. And yet the foundation of our entire industry – the respondent – is somehow exempt from the conversation.

There are some noteworthy innovations in the sampling space, of course. Google Consumer Surveys and Survata catch people where they are browsing, in their natural environment, and don’t rely on opt-in sample. GfK’s KnowledgePanel is the gold standard of probability sampling, and RIWI is doing some really cool work in this space as well.

But these are exceptions to the rule. It seems most of us shrug our shoulders and think if everyone else is doing it, it must not be that bad after all. If the whole industry is sideways, at least I’m not upside down, we tell ourselves.

I am not calling out the panel companies any more than the researchers who use them, myself included. Researchers and clients alike have created the demand for faster and cheaper, and so panel companies have quite reasonably moved to fill that need in the market. We are all guilty of basking in the short-term high that comes from easy and cheap sample. What other course of action is there? Revert to the more expensive techniques that were the norm before the Internet came along? Even if they were viable, we’d have a hard time convincing clients to pay for such rigor now that the genie is out of the bottle.

So what is the solution? I am not sure, but I have one suggestion. Couldn’t organizations such as ESOMAR play the role of Consumer Reports, focused on the sample industry? Via mystery shops, surveys of buyers, and other methods, a report would be issued rating each company on a number of criteria: overall quality, number of survey invitations sent per panelist in a given period, quality of the “match” between survey and respondents, and so forth. The same Harvey Balls we’ve all seen a million times could be used to help buyers understand the strengths and weaknesses of various sample sources. Panels with high ratings would undoubtedly be able to charge a premium. Panels with lower scores could strive to improve, or position themselves as the low-cost provider. For every BMW, there is a Kia. This would not be a public shaming, but rather a guide to help select a sample provider.
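
To make the scorecard concrete, here is a toy sketch; the criteria, weights, and scores below are entirely invented for illustration:

```python
# Hypothetical criteria and weights for a sample-provider scorecard.
WEIGHTS = {
    "overall_quality": 0.35,
    "invite_volume": 0.20,    # pre-scored 1-5; fewer invitations scores higher
    "targeting_match": 0.25,
    "transparency": 0.20,
}

# Illustrative mystery-shop scores on a 1-5 "Harvey Ball" scale.
PROVIDERS = {
    "Panel X": {"overall_quality": 4, "invite_volume": 5,
                "targeting_match": 4, "transparency": 3},
    "Panel Y": {"overall_quality": 2, "invite_volume": 1,
                "targeting_match": 2, "transparency": 2},
}

for name, scores in PROVIDERS.items():
    overall = sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
    print(f"{name}: {overall:.1f} / 5")
```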

One of the challenges in using panels today is that such ratings do not exist. How are buyers supposed to evaluate sample sources, other than by reputation? Panels are notoriously reluctant to discuss their recruiting practices, participation rates, or the number of invitations sent (if they even track that data). As Ron Sellers told me, “There really is no way to evaluate the quality of different panels or compare options, other than actual user experience either as a buyer or as a respondent. That’s one reason so many researchers join panels. So lacking any objective measurements or personal experience, choices often come down to the only knowable factors: price and feasibility. That just exerts additional downward pricing pressure, which in turn further impacts quality. It’s a vicious cycle; one which is entirely undesirable for the industry. And I see no end in sight.”


32 responses to “Heal Thyself! When Will Market Research Get Serious About Sample Quality?”

  1. I consider this a very important article. I am not sure I agree with your trade association (“ESOMAR”) and ratings solutions (it would invite a lot of lawsuits), but I do think you define the sample problems very well. I fear our industry is caught between two worlds, “one dead and the other powerless to be born.”

  2. I wrote this in November 2011: http://www.greenbookblog.org/2011/11/30/the-7-bs-stories-of-the-market-research-business/

    “Because the end customer has zero visibility into online recruiting practices, the online sample business has devolved into a cesspool of incestuous outsourcing, excessive mark-ups, and rampant fraud.”

    Nobody cares. Truly, NOBODY CARES. The panel suppliers don’t care because they are making money. The agencies don’t care because today’s deflated pricing makes them more price-competitive, so they can continue making money. Clients care, but the information transparency is so laughable that they have little recourse. And so the cesspool continues to swirl, day after day. (Yes, there will be individual exceptions to the “nobody cares” rule. My point is that the system only cares enough to keep the treadmill running.)

    Before you can solve the problem, you have to make people care, and I think that shining a light into the dark corners of the survey pipeline would be tremendously healthy. At a past IIeX, a speaker from a panel supplier was lamenting survey length and how agencies continue to program 30-minute beasts. I asked if they had any data about the distribution of survey lengths across their survey portfolio. Crickets chirped.

    Of course they have the data, but there’s no incentive to publish or share it. SHINE THE LIGHT. Screw the lawyers, shine the goddamned light. Because if a trade organization doesn’t do it, by hell, somebody else eventually will and that’s not going to be a pleasant pill to swallow.

  3. Panel samples are flawed. Part of the “panel problem” is due to business-as-usual practices. Part of it is due to researchers failing to understand and address the problems with data they receive.

    Allan Fromen’s article — which I like, by the way — deals with panel bias. In that regard, one issue he doesn’t address is subcontracting.

    Panel companies sometimes will subcontract to other panel companies to meet client requirements for specific types of respondents. Some are upfront with the client about what they are doing, some aren’t. Some will tell you the partners they are using while some try to leave that as a black box. A vigilant and experienced project manager may detect what the vendor is doing. Newbies and non-researchers won’t.

    Obviously, the subcontracting system plays havoc with attempts to rate panel companies.

    However, for me, the 800-pound gorilla in the room is respondent fraud.

    Dr. Gallup warned years ago that paying respondents would introduce fraud into the survey process. He was right.

    A client warned me, years ago, that he had tossed over 50% of completed interviews on a study of office equipment as fraudulent. (Happily, it was not one of my studies.)

    More recently, I’ve seen studies of academicians, doctors, and consumers of certain medications where — after excluding speeders and other obvious cheaters — there was an additional 10% to 40% of respondents who were at best questionable.

    Panel companies will offer to replace fraudulent respondents, but if one is under a deadline, that offer often is of no value. Plus there is no assurance that the replacements will be better than the originals.

    Clients need the ability to audit surveys for quality control. We need access to phone numbers so that we can have a neutral party recontact respondents to validate their participation. Did the target do the survey, or did he/she give it to their 12-year-old? Was the respondent actually qualified for the survey?

    Panel companies don’t permit audits, and that has to change.

    I’m old school. I put the survey data file into Excel or SPSS and read what each respondent has written before running tabs or other analysis. It’s a good way to uncover problems, but many researchers don’t seem to do it. (A rough sketch of scripting this kind of review appears after this comment.)

    Frankly, one way to force panel companies to come to terms with quality issues is to reassess the appropriateness of web surveys for many research problems. There are alternatives that can be both cost effective and substantively better for particular situations. If we see a fundamental reduction in the volume of online panel studies, weaker vendors will be forced out of business, while stronger vendors will be forced to invest in their panels to maintain their business.
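
    The respondent-by-respondent read-through Victor describes can be scripted; a minimal sketch, assuming the open-ended answers sit in columns prefixed “oe_” (file and column names are hypothetical):

    ```python
    import pandas as pd

    # read_spss requires the pyreadstat package; CSV/Excel work too.
    df = pd.read_spss("study.sav")  # or pd.read_csv / pd.read_excel

    # Hypothetical naming: open-ended answers in columns prefixed "oe_".
    open_ends = [c for c in df.columns if c.startswith("oe_")]

    for rid, row in df.iterrows():
        print(f"--- respondent {rid} ---")
        for col in open_ends:
            print(f"{col}: {row[col]}")
        # Eyeball for copy-pasted web text, gibberish, or off-topic
        # answers before running any tabs.
    ```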

  4. Nice post, Scott. Looks like we share the I/O Psych background.

    Your point about survey length is of course part of the problem. “Why the long survey is dead” and similar posts have been generating much buzz these past few months. But it is a sad state of affairs when a) we still see 20-minute (and longer) surveys, and b) what passes for innovation these days are suggestions for shorter surveys. What a novel idea! I bet the next innovation will be surveys that can be viewed on a mobile device! Note sarcasm.

    It seems the long survey is like gorging on a high-fat, high-calorie dessert. We know it’s wrong but we do it anyway, because it won’t affect us in the long term – or so we hope.

  5. @Jason

    Good points. I have a slightly different view. I think many DO care, but we feel powerless to stop the treadmill you speak of. We have become anesthetized to the situation. We are living a form of learned helplessness, where the shock has been delivered so many times that we no longer try to avoid it.

    We need to somehow rise up and break free of the cage. If a trade organization does not take the lead here, I imagine some start-up will seize the opportunity and make a bundle of cash.

  6. Scott, Jason, and Victor: great insights into the problems of panel recruitment and respondents. You all raise issues which cut to the heart of data credibility. Thanks for taking the time to write your posts.

  7. Nice post, Allan. Sample is now so cheap and turnaround so quick that clients can easily field a few short surveys using different fieldwork suppliers on topics relevant to their industry and compare the results, and also how they stack up against other data (e.g., Nielsen sales figures, government stats). Some of the big MNCs used to do this, but I don’t know how often that is done now.

  8. This may surprise the readers of this post and subsequent comments, but despite working for one of these heinous panel companies (Instantly – I run our European arm) I absolutely agree with everything that’s been written here.

    I also wrote about this topic on this blog site here:
    http://www.greenbookblog.org/2015/03/25/online-research-in-2020-machines-will-take-your-surveys/

    In that article I surmised that we as an industry are relatively oblivious to the fact that “completes” come from respondents, human beings, and we treat them badly. The genie is indeed out of the bottle and fast/cheap is here to stay, so what incentive does a panel company have to invest in quality when our clients want cheap? Surely our business model should now be to invest in efficiency in order to survive? (Note: that’s not what we are doing; it’s a rhetorical statement to make a point.)

    I agree that there is a lack of transparency in general. Instantly has a policy of always advising when partners are used, and who those partners are, when a client requests it, and I know that others do too. (Some don’t, but I’m not naming names here.) Often, though, clients don’t care; they just want the sample done on time. Sad, but true, I’m afraid. And all panels are different, purely by the nature of the recruiting methodologies used, but it’s very hard for any of us to differentiate on that basis because, as has already been written, there’s a lack of general understanding about which methodologies are better or worse than others. The general perception of “river=bad” and “panel=good” is fundamentally incorrect.

    Victor, your point about auditing is a good one. The problem is that we then get into the world of data protection and PII distribution, which is challenging. In theory, technology such as RelevantID and TrueSample (which most of the major panel companies, including us, use) should be doing the digital fingerprinting, but every system is susceptible to somebody (or a bot, which is actually our biggest challenge today) determined enough to break it. Telephone validation also adds layers of cost at a time when cost is being stripped out of the data collection process, and it adds layers of complexity when you’re dealing with multiple countries and languages, which we do here in Europe on a regular basis.

    There are lots of folks in this industry that are vocal on this topic, myself included, and my hope is that at some point the industry will come to a critical juncture where enough of us make enough noise that the few can change the direction of the many. I certainly welcome such conversations / debates / topics, and encourage any and all researchers to challenge all panel companies much more rigorously than they do today, but I also challenge them to be prepared to pay a premium for a product that they then see is superior to another product.

    Ben
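
    The digital fingerprinting Ben mentions reduces, at its simplest, to hashing a set of device attributes and flagging repeats. A naive sketch for illustration only (this is generic deduplication, not how RelevantID or TrueSample actually work):

    ```python
    import hashlib

    seen: dict[str, str] = {}  # fingerprint -> first respondent ID

    def fingerprint(user_agent: str, screen: str, tz: str, fonts: str) -> str:
        # Hash a handful of browser/device attributes into a signature.
        raw = "|".join([user_agent, screen, tz, fonts])
        return hashlib.sha256(raw.encode()).hexdigest()

    def is_probable_duplicate(respondent_id: str, fp: str) -> bool:
        # The same signature under a second respondent ID suggests a
        # duplicate (or a bot farm cycling through identities).
        duplicate = fp in seen and seen[fp] != respondent_id
        seen.setdefault(fp, respondent_id)
        return duplicate
    ```

    A determined bot can randomize every one of these inputs, which is exactly the arms race Ben describes.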

  9. @ Ben

    Thank you for your comments. I had missed your post when it was first published but read it now. Excellent points.

    I have to say it is quite refreshing to see someone working for a panel company who is willing to discuss these issues frankly, warts and all. Some of your industry peers either play down the concerns and/or point out that there is just as much bias in traditional probability sampling. To me, that seems disingenuous, and a clear case of “The lady doth protest too much” to quote Shakespeare.

    Despite an obvious incentive to turn a blind eye, you’ve done just the opposite. We need more of that. Thank you!

  10. One of the other challenges is how quickly things change in the panel world. Back when Grey Matter originally did the research for the first Dirty Little Secrets of Online Panels report in 2009, some of the biggest names were Opinion Outpost, iSay, SurveySavvy, Survey Spot, Greenfield Online, Zoom Panel, and OTX. Where are they all now? Mergers, acquisitions, defunct companies, and new players have completely re-arranged the marketplace in just six years (and even going back to just 2012, when we did our second Dirty Little Secrets report, the landscape has changed significantly).

    Part of the problem is that you start to trust a particular panel company and they bring in new leadership or get sold to someone else or change their policies and suddenly it’s a whole new ball game. I have a terrible time keeping up with all the changes from year to year. What used to be a great option for panel sample is now dubious because of changes. Five new companies are trying to get my attention with claims that finally, unlike all their competitors, THEY got it right. Even as someone who has been beating the drum on this message for six years now, the temptation is enormous just to give up and accept questionable data as “the standard.” It’s exhausting.

  11. I feel your pain, Ron Sellers. One outcome of “faster and cheaper” is that sample rigor has gone out the window. Panel and sample companies are finding it harder to make a profit and are constantly taking new shortcuts.

  12. This has been an important post and discussion. Thanks again to Allan and to the others for your hard-hitting and perceptive comments.

    I do a lot of work in lesser-developed countries, and survey research has always been problematic in most of them. That aside, even in MR-developed nations I sense a decline in understanding of survey research basics among sellers and buyers. Too many distractions, perhaps, or survey research being short-changed in MR coursework and seminars? A couple of times in the past year or so, MR agencies in the US I’d been working with were surprised when the numbers changed in tracking studies…they had changed panel companies for cost reasons and had not even considered beforehand that this might impact the numbers.

  13. Having built online panels since 1999, I’ve seen all the various ways the sampling industry has evolved. In 2011, I called for a change to the pricing models for sample purchasing, arguing that the current CPI-based pricing model doesn’t support maintaining high-quality panels: http://www.research-live.com/news/news-headlines/usamp-founder-calls-for-change-in-sample-pricing-model/4006124.article. Re-reading that article four years later, it seems not much has changed.

    I’ve heard many people say that rewarding panelists is wrong, but what real human wants to spend 20, 30, 40 minutes with no reward just to take a boring online survey that’s styled like a 1995 web page? Rewards motivate people to action, even when the action isn’t very exciting. That’s the reason people still go to work, right?

    Most online panels have evolved into websites/databases of professional survey takers, primarily because only those people with the thickest skin survive the monotony of starting surveys, disqualifying, and then starting again. And, for the most part, the industry only rewards a user when they successfully complete a survey, not for an attempt. The issue isn’t just a sample company problem. Across the board we have to rethink how surveys are designed and reevaluate how we treat the real people who take our surveys. For the most part, sample is treated as a commodity as opposed to real humans, and then the industry wonders why we get robots taking surveys. When you’re at Thanksgiving dinner telling your family members to join an online panel, only then will the model be working.

    I think the real question is… how do we build a sustainable sample recruitment and retention model that broadens our reach into populations beyond the professional survey takers while still meeting the pricing pressure that exists in the market? In my opinion, that’s the Holy Grail.

  14. @ Matt

    Good points.

    It seems like a karmic case of ‘you get what you give.’ How could we possibly expect respondents to treat our surveys with respect and dignity, when we treat those same respondents so badly in the first place?

    In I/O Psychology and elsewhere, we talk a great deal about Employee Engagement. Maybe it’s time we start thinking about something akin to Respondent Engagement.

  15. It’s a chicken-and-egg issue between pricing and quality. Does quality stink because of the pricing pressure, or have prices gotten so low because there’s so little focus on quality? I would tend to argue there’s a lot of the latter. When you buy a BMW over a Dodge, you have an idea of why you’re paying a lot more. When you buy New Balance shoes rather than some off brand, you figure they’re going to last a lot longer. But what do panels compete on if the end customer doesn’t care much about quality or has no way to measure it? They compete on price, and in order to do that, quality falls.

    I have the same concerns with some of the next-gen research techniques – how many people are really investigating whether some of these things actually provide valuable insights, and how many people are just attracted to the new, shiny, and cool? For example, on social media monitoring, I’ve seen a whole lot of “you never have to pay for research again” and “you never have to travel for another focus group” as sales messages, and not a whole lot of “this is why it’s valid as a research tool.”

    When you can get 1,000 genpop respondents for $4,000 through a panel, can anyone seriously tell me that having to pay $8,000 instead would do much harm to GM or P&G or anyone else doing the research? No, especially when a decade ago we were paying far more than that to do it by phone, and when that $8,000 is small compared to the total cost of a full-service vendor project or the cost of maintaining in-house research staff. I’d be thrilled to pay double if I knew I wouldn’t have quality problems. But when there is no reliable comparison of panel quality, and/or when the researcher isn’t focused on quality, why pay more?

    1. One of the major trends we are seeing is the emergence of “cheaper, faster, good enough” research, and by cheaper we’re talking about as much as 80% less than traditional full-service projects. A big piece of this shift is driven by the power of automation, including API-based integration with sample providers who are simply providing a firehose of folks. This primarily impacts anything related to “testing/tracking”, which just so happens to be where a HUGE chunk of annual client spend is focused. It’s hard to argue for paying more when you can get almost the same data quality for 80% less than you are currently spending. That overall shift is just going to stay in play, and is a major revenue stream for the sample companies and marketplaces. I don’t see anything rolling that trend back.

      That said, respondent engagement is a whole other issue and I totally agree we need to tackle it. Research should be a fair value exchange, and today it’s generally not for panel companies (not all!). However, since personal data is now an asset class and as the “gig economy” continues to mature, I fully expect that consumers will become much more savvy about what their participation (data) is worth and will force incentive costs north, which will increase sample costs as well. It’s all supply and demand, and I think we’ll watch the market correct itself a few more times until we get to some level of equilibrium.

  16. When I began my MR career, mall intercepts, telephone, and postal mail were the main options for consumer research. Response rates have always been an issue, as has response quality, though many clients were not on top of this. I can recall clients telling me that suppliers had assured them that weighting by age and sex would make data from mall intercepts “representative”…and evidently believing it. For quick consumer reactions where you only need a rough read, we now have much better and lower-cost options. For more in-depth looks, I still find that I can get data of sufficient quality from established panel companies, provided the questionnaire has been reasonably well designed (which is often not the case). That said, I did begin to notice a decline in data quality (e.g., more straightlining) about 10 years ago (and privately predicted the demise of MR). To paraphrase Ron, a $4,000 premium for higher quality won’t break the backs of most clients, and sometimes even slightly higher quality can have a big impact on their decisions. I think there are opportunities for panel companies in this higher-end space.

  17. @Lenny I think we’re already seeing that consumers don’t see value in taking surveys. It’s becoming much more challenging to get users to join panels, and when they do, the response rates are very low. Across the board, I hear that most of the major sampling firms are seeing massively declining email response rates. In my opinion, it’s partly poor user experience – “fool me once, shame on you / fool me twice, shame on me”. It could also be that most people check their personal email on their phone, and while surveys might be formatted for mobile, they’re too long and not optimized for attention spans.

    So I could rant about issues for hours, but what’s the solution? Look back a few years at why LinkedIn got out of the sampling space. For a brief moment, the industry took a deep breath, thinking B2B sampling would change forever. Then LinkedIn realized that they could never provide a 100% perfect user experience, and that survey/sampling would tarnish their brand image. Why? Because they could never target users in their database to achieve 100% conversion (survey start to completion). If you want to build the largest, most satisfied panelist base globally, then each invitation to participate should end with a satisfied customer. Would we invite someone to a focus group, make them drive, park, and come in, only to tell them that they don’t qualify and to “go home”? Online, it’s less personal. I think the industry could come up with dozens of standard audience profiling questions that every sample firm maintains and that every MR firm could utilize to compare apples to apples. Clients could then request to target those specific users from any sample company. If the sample firm delivered that exact user, regardless of completion or disqualification, then the firm gets paid and the respondent gets rewarded. In my opinion, that would begin to solve the issue of respondent engagement and start to bring a much larger base of people into taking surveys.
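
    A minimal sketch of the standardized-profile, pay-on-delivery model Matt describes; the field names and matching rule are made up for illustration:

    ```python
    # Hypothetical standardized profile fields shared across sample firms.
    STANDARD_FIELDS = ("age_band", "gender", "country", "occupation")

    def matches(profile: dict, target: dict) -> bool:
        # Target only on fields every firm maintains, apples to apples.
        # Each target entry is a set of acceptable values.
        return all(profile.get(f) in target[f] for f in target)

    def settle(profile: dict, target: dict, started: bool) -> dict:
        # Pay the firm and reward the respondent whenever the delivered
        # person matched the target and started the survey, regardless
        # of whether they later qualified or completed.
        ok = started and matches(profile, target)
        return {"firm_paid": ok, "respondent_rewarded": ok}

    target = {"age_band": {"25-34", "35-44"}, "country": {"US"}}
    print(settle({"age_band": "25-34", "country": "US"}, target, started=True))
    ```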

  18. @Matt agree with you. “User experience” matters so much more now than it did 5 or 10 years ago (or at least we are finally acknowledging it). There’s a huge chasm between what a minute of my time is worth (e.g., Google Opinion Rewards app, 30-60 seconds, $0.10 to $1 guaranteed instant cash) and what half an hour of my time is worth (e.g., email-recruited panel survey, virtual points that are of questionable value). Part of it is accurate and non-redundant targeting, but part of it is accepting that if you want half an hour of someone’s time you need to pay them 10 bucks.

  19. I find you need to pay respondents something more akin to $1 a minute to get representative results, so $10 for a 10-minute survey, $20 for a 20-minute survey.

  20. Excellent article, one that I have been hoping would be written because there needs to be a discussion.

    I manually check every survey. The 10 percent panel respondent replacement allowance might have been based on reality five or ten years ago, but in my recent experience (surveys of working professionals) the percentage of questionable respondents is higher. Besides straightlining, speeding, giving the survey to a subordinate to take, and registering under different names, I’ve also noticed an increase in respondents filling in text boxes by copying/pasting text from the Internet.

    My feeling is that respondents are doing this for a couple of reasons. First, expectations are not set properly when they sign up for a panel. Second, respondents believe that no one is paying attention and they are usually correct. They have been rewarded for these behaviors in the past and have gotten away with it (and unlike consumer respondents, the incentives are high for working professionals).

    As a panel customer, I don’t think the responsibility for respondent quality should fall so heavily on me. In addition to taking a lot of time, weeding out respondents on the fly has the potential to introduce another source of bias.

    One suggestion would be for the panel companies to keep track of the number of times a respondent is replaced. The first time, the respondent is sent an email re-educating them in a nice way about their role as a panel member and telling them that all surveys are reviewed. The second time, a warning is sent, and the third time, they are removed from the panel (this escalation is sketched at the end of this comment). A more radical idea: remove them from all panels, by sharing information across companies, so that the panel companies aren’t simply recycling poor-quality respondents among themselves.

    Panel companies promote themselves on the size of their panels. I would rather have a smaller panel of qualified respondents, than a larger panel with a higher proportion of respondents who cheat and don’t belong on any panel.

    As someone else mentioned in the comments, the changes in panel ownership are also a source of frustration. One excellent healthcare panel was abruptly closed in early 2013 when they were purchased by a large physician services company. I was working through the details of a large proposal for sample at the time, and there was nary a peep from the acquirer, even after the news broke and I called them directly to inquire about the proposal – very unprofessional. I then had to scramble to find replacement sourcing.
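
    The three-strikes escalation Carol describes fits in a few lines; a sketch, assuming the panel company records replacement counts per respondent (the storage and wording are hypothetical):

    ```python
    # Hypothetical in-memory store: respondent ID -> times replaced.
    replacements: dict[str, int] = {}

    def record_replacement(respondent_id: str) -> str:
        replacements[respondent_id] = replacements.get(respondent_id, 0) + 1
        strikes = replacements[respondent_id]
        if strikes == 1:
            return "friendly re-education email; note that all surveys are reviewed"
        if strikes == 2:
            return "formal warning"
        # Third strike: removal, ideally propagated to a shared industry
        # blocklist so other panels stop recycling the same respondent.
        return "remove from panel"
    ```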

  21. Carol, thanks for your very accurate observations and possible solutions regarding the respondent replacement issue for panels. I don’t know why our trade associations like MRA, CASRO, ESOMAR, etc. aren’t having more discussions about sample and panel problems and trying to find solutions.

  22. Excellent discussion, especially the contributions by Ben, Matt and Lenny!
    A few points I would like to make:
    1. Much of the cost in sampling comes from administering projects rather than the sampling process itself. This can and will be fully automated, which will lead to cost savings of around 30% without sacrificing anything in terms of quality etc. vis-a-vis the status quo;
    2. Dynamic/river/real-time etc. sampling yields poor quality at relatively high costs with 20min+ surveys that look like they’ve been designed in the 90s. The reason: “professional” respondents are already annoyed by the poor quality of surveys and UX; regular respondents even more so.
    3. Mobile forces the whole industry to rethink how survey data collection is done. Shorter surveys with a better UX will solve most problems: once surveys are enjoyable, a further drop of 50%+ in the price of data collection is feasible. This allows faster data collection at a much greater scale (why not ask 10,000 people instead of 1,000?). Add in some passive (mobile) data plus a qualified data scientist and you’ll get a new realm of instant, high-quality insights.
    4. There will soon be over 4bn smartphone users around the world. The only two barriers against fast, cheap, and high-quality survey data collection are (a) outdated technologies and (b) a lack of respect by the research industry for its most valuable asset: the respondent.

  23. I’m writing as Editor of the International Journal of Market Research, sample quality being a topic we are as concerned about as Fromen and those who’ve already posted responses. We even hosted a debate at the UK Market Research Society’s (MRS) conference back in March, with Reg Baker, Doug Rivers, and Corrine Moy as speakers, and you’ll find full coverage of this within the May issue (Vol 57/3). What surprises me is that no one has yet mentioned the enquiry launched in the UK by the British Polling Council and the MRS following the poor showing by all the polling companies in predicting the result of the UK general election back in May. The jury is currently out on the causes, with a final report due next March, but sampling methods have already been raised as a possible factor, and the phone polls were no more accurate than the online ones. See my blog https://www.mrs.org.uk/ijmr_blog_archive/blog/1298 for a summary, with deeper coverage in my Editorial in the September issue of IJMR (57/5). I believe the implications of this review are likely to go way beyond the way we conduct political opinion polls in the UK.

  24. Nick Tortorello, thank you for your reply, and I agree with you about trade associations – not necessarily objective, which opens the door to other issues. Leonard M., I read Melanie’s post and I also agree; she has a number of good ideas that would work…if they were actually implemented. My concern is that the current trend is away from quality, unfortunately even among traditional sample providers (not “flavor of the month” as she aptly describes some sample sourcing, especially on the consumer side). This then transfers responsibility to the client. Peter M., similar issues in the U.S.; if you haven’t already, you could touch base with Nate Silver at http://fivethirtyeight.com/politics/.

  25. @ Lenny

    The comments here have been quite engaging and illuminating. Thankfully, I have not seen any “fear based language” or “rhetoric and misinformation” in this discussion.

    In fact, some folks have been quite brave and have written honestly about sample quality challenges, despite working in that industry. Fighting the current of self-preservation is not easy, and should be applauded.

    When the customer – users and buyers of online sample – articulate a rigorous defense, we will have finally turned a corner. I am sure many of us look forward to that day, if it ever arrives.
