Is Online Sample Quality A Pure Oxymoron?

Why is nobody here addressing the elephant in the room? It’s not just sample quality. It’s survey quality.

 

Editor's Note: It must be something in the air, because panels, online sample, and the interface of technology and quality have been hot topics lately. So far this year alone I have engaged in four different advisory conversations with investors on this topic, which has never happened before. It's no surprise though: online sampling is now the backbone of market research globally. Whether we are engaging respondents on mobile devices or PCs, the same principles apply: personal online access is ubiquitous globally, and programmatic buying for ad delivery, predictive analytics, and online panels/sampling are BIG business. REALLY BIG business, and it's only going to get bigger.

 

That being the case, issues around quality, and how we ensure it while the industry continues to maximize the mix of speed, cost, and value, will only grow in importance over the next few years. And that brings us to Scott Weinberg's call-to-action post today. Scott doesn't pull any punches, and his concerns hark back to Ron Sellers' post a few years ago on the "Dirty Little Secrets" of online panels. I believe we have made progress in this area and that some suppliers remain clear leaders on quality, but this is an issue we shouldn't take our eyes off of, and Scott reminds us why.
By Scott Weinberg 

 

I attended a CASRO conference in New Orleans back in late '08 or early '09. The topic was 'Online Panel Quality.' I've often thought about that conference: the speakers, the various sessions I attended. I recall attending the session about 'satisficing,' which at the time was being newly introduced into the MR space (the word itself goes back decades); I thought that was an interesting expression for a routine occurrence. Mostly, however, I remember the hand-wringing over recruitment techniques, removing duplicates, digital fingerprinting measures, and related topics du jour. And I remember thinking to myself, for 2 days non-stop: 'are you kidding me?' Why is nobody here addressing the elephant in the room? It's not just sample quality. It's survey quality.

Allow me to explain where I'm coming from. My academic training is in I/O Psychology. Part of that training involves deep dives into survey design. Taking a 700-level testing & measurements course for a semester is a soupçon more rigorous than hearing 'write good questions.' For example, we spent weeks examining predictive validity, both as a measurement construct and in how it has held up in courtrooms. More to the point, when you're administering written IQ tests, or psych evals, or (in particular) any written test used for employment selection, you are skating on thin ice, legally speaking. You open yourself up to all kinds of discrimination claims. Compare writing a selection instrument that will withstand a courtroom challenge with writing a CSAT or 'loyalty' survey. Different animals, perhaps, but both are Q & A formats. A question is presented, and a reply is requested. However, the gulf in training for constructing MR-type surveys is visible to anyone viewing the forest in addition to the trees.

An MR leader in a huge tech company said something interesting on a call I remember vividly. He asked: 'when is the last time you washed your rental car?' The context here pertained to online sample. And he was one of the few, very few really, that I've encountered in the last 12 years I've been in that space who openly expressed the problem. The problem is this: why would you ever wash your rental car? Why change the oil? Why care for it at all? You use it for a day, or a week, and you return it. Online respondents are no different. You use them for 5 minutes, or 20, and return them. If we actually cared about them, the surveys we offer them wouldn't be so stupefyingly poorly written. I've seen literally hundreds of surveys that have been presented to online panelists. I've been a member of numerous panels as well. Half of these surveys are flat-out laughable. Filled with errors. Missing a 'none of the above' option. Requiring one to evaluate a hotel or a restaurant they've never been to. Around a quarter consist of nothing but pages of matrices. Matrices are the laziest type of survey writing. Sure, we can run data reductions on them and get our eigenvalues to the decimal point. Good for us. And the remaining quarter? If you're an online panelist, they're simply boring. Do I really want to answer 30 questions about my laundry detergent? For a dollar? Ever think about who is really taking these surveys? Sidebar: do you know who writes good surveys? Marketing people using DIY survey software. Short and to-the-point surveys. 3 minutes. MR practitioners hate to hear it, or even think about it, but that's reality. I've seen plenty of these surveys by 'non-experts.' They're not only fine, but they get good, useful data from their quick-hit surveys.

Since you’ve made it this far, time to bring up the bad news. I’ve been accumulating a lot of stories the last 12 years. I’ll share a few. These all happened, and I’m not identifying any person or firm so please don’t ask.

  • Having admin rights to a live commercial panel, I found a person with 37 accounts (there was a $1 'tell a friend' recruitment carrot). I also found people with multiple accounts and a staggering number of points, to the point of impossibility.
  • The sales rep who claimed to be able to offer a 'bipolar panel' and sold a project requiring thousands of completes from respondents with a bipolar or schizophrenic diagnosis.
  • The other sales reps I know personally (at least 5) who make $20,000-$30,000 per month selling sample projects. Hey, Godspeed, right? Thing is, not one of them could tell you what a standard deviation is, let alone the rudimentary aspects of sampling theory. Don't believe me? Ask them. Clearly, knowing these things is not a barrier to success in this space. Just a pet peeve of mine.
  • Basically, this entire system works via highly paid deli counter employees. ‘We can offer you 2 lbs of sliced turkey, a pound and a half of potato salad, and an augment of coleslaw, for this CPI.’ Slinging sample by the pound, and let the overworked and underappreciated sample managers handle the cleanup and backroom topoffs.
  • The top 10 global MR firm who finally realized their years-long giant tracker was being filled largely with river sample, which was strictly prohibited.
  • Chinese hacker farms have infiltrated several major panels. I know this for a fact (as do many others). You can digital-fingerprint and whatnot all day long; they get around it. They get around encrypted URLs. Identity corroboration. You name it, they get around it.
  • The needle in a haystack b2b project that was magically filled overnight, the day before it was due.
  • Biting my tongue when senior MR execs explained to me their research team insists on 60 minute online surveys, and they’re powerless to flush out their headgear.
  • Biting my tongue when receiving 64-cell sampling plans. The myopic obsession with filling demographic cells to the exclusion of any other attributes, such as: who are these respondents? You're projecting them out to non-panelists as if they're one and the same?
  • A team of interns inside every major panel, taking the surveys, guessing the end client, and sharing that with the sales team in a weekly update.
  • Watching two big global panels merge and scrutinize for overlap/duplicates, stretching across 12 countries. USA had 18% overlap, the rest (mostly Europe) had 10%. Is this bad? No idea. Maybe it’s normal.
  • Most online studies are being at least partially filled with river sample (is anyone surprised by this?).
  • Infiltration of physician panels by non-physicians.
  • The origin of the original 'Survey Police' service.
  • Visiting the big end client for the annual supplier review and watching them (literally) high-five each other over who wrote the longest online survey. The 'winner's' was 84 questions. We had performed a drop-off analysis (the kind sketched just below), which fell on deaf ears.
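For readers who have never run one, a drop-off analysis is just a tally of where respondents abandon the questionnaire. Here is a minimal sketch, assuming you have, per respondent, the number of the last question they answered; the data shape and the toy numbers are illustrative, not from the project described above.

```python
# Minimal sketch of a survey drop-off analysis (toy data; field meaning assumed).
from collections import Counter

def drop_off_table(last_question_answered, total_questions):
    """For each question, how many respondents stopped right after it,
    and what share of starters were still active when it was shown."""
    n = len(last_question_answered)
    # Respondents who answered every question are completes, not drop-offs.
    stopped_after = Counter(q for q in last_question_answered if q < total_questions)
    active, rows = n, []
    for q in range(1, total_questions + 1):
        rows.append((q, stopped_after.get(q, 0), active / n))
        active -= stopped_after.get(q, 0)
    return rows

# Toy usage: 10 starters on an 84-question survey; most bail long before the end.
sample = [84, 12, 84, 30, 12, 5, 84, 12, 60, 84]
for q, dropped, share_active in drop_off_table(sample, 84):
    if dropped:
        print(f"Q{q:>2}: {dropped} dropped, {share_active:.0%} of starters saw this question")
```

Even a table this crude makes the case the client ignored: by the back half of an 84-question survey, only the most tolerant (or least attentive) respondents are still answering.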

Lastly, and for me the saddest of my observations, are the new mechanics of sample purchasing. The heat and light on sample quality that peaked about 4 years ago have been in steady decline. In the last couple of years, sample quality is simply assumed. End-client project sponsors assume their suppliers have it covered. The MR firms assume their suppliers have it covered. And the sad part? The sample buyers at MR firms, and I've seen this countless times, do not receive trickle-down executive support for paying a bit more for the sample supplier who actually is making an effort and an investment to boost sample quality, via validation measures for example. There are exceptions to this, or were, in the form of CPI premiums, but there is no widespread market acceptance of paying a buck or three more. In fact, the buying mechanics are simple: get 3-4 bids, line them up, and go with the cheapest CPI, assuming the feasibility is there. This happens daily, and has for years. And by cheaper, I'm talking 25 cents cheaper. Or 3 cents. That's what this comes down to. So chew on this: why would a sample supplier pour money down the quality rabbit hole? Quality is not winning them orders. Margin is. Anyone working behind the scenes has also seen this movie, many times. Incidentally, there's nothing wrong with buying on price; we all do this in our daily lives. The point is this: if you're going to enforce or even expect rigorous sample quality protocols from your suppliers, then give your in-house sample buyers the latitude to reduce your project margins. I won't hold my breath on this, but that's what it takes.

I could go on, but more is not necessarily better. This is the monster we've created: $2 and $3 CPIs have a ripple effect. How can a firm possibly invest in decent security architecture at prices like this? How can we expect them to? If you're buying $2 sample, why not go to the source and spend 50 cents?

Now that I've thoroughly depressed you, you may wonder: is there any good news? I remember telling my colleague 5 years ago, 'if a firm with a bunch of legitimate web traffic, like Google, ever got in this racket, they would upend this space.' I didn't think that would actually happen, but there you go (that one may still be depressing to some). I also believe that 'invite-only' panels give the best shot at good, clean sample. When you open your front door to anyone with a web connection and tell them there's money to be made, well, see above. More recently I've become a convert to smartphone-powered research. Many problems are removed. It has its own peculiarities, but from a data integrity perspective, it's hard to beat. Lastly, and I could do a whole other riff on this: when we design surveys with no open-end comment capture, we're hoisting an 'open for business' sign to fraudulent activity. Yes, you can add the 'please indicate the 4th option in this question' trap, but both bots and human pros spot red herrings like that. It's much more difficult to fake good, in-context open-ended verbiage. Yes, it takes a bit more work on the back end, and there are many solutions that can assist with this, one in particular. And the insights you can now share via this qual(ish) add-on are a nice change of pace relative to the presentation of trendlines and decimal points.
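To make the open-end point concrete, here is a minimal sketch of the sort of screen one might run on verbatims before trusting them; the word-count threshold and the gibberish heuristic are illustrative assumptions, not a recommended standard.

```python
import re

def flag_open_end(text, min_words=4):
    """Return the reasons a verbatim looks suspicious (empty list = probably fine)."""
    reasons = []
    words = re.findall(r"[A-Za-z']+", text)
    if len(words) < min_words:
        reasons.append("too short")
    if words and len(set(w.lower() for w in words)) <= len(words) // 3:
        reasons.append("heavy repetition")
    # A long run of consonants is a crude signal of keyboard-mashing gibberish.
    if re.search(r"[bcdfghjklmnpqrstvwxz]{6,}", text.lower()):
        reasons.append("possible gibberish")
    return reasons

for answer in ["good", "asdkfjhasdkjfh",
               "The checkout kept timing out on my phone so I gave up",
               "nice nice nice nice nice nice"]:
    print(repr(answer), "->", flag_open_end(answer) or "keep")
```

Even a screen this naive catches most low-effort verbatims; the point is that an open end gives you something to screen at all, which a page of radio buttons never does.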

That’s all for now. Thank you for reading.


18 Responses to “Is Online Sample Quality A Pure Oxymoron?”

  1. Chris Robinson says:

    January 20th, 2015 at 7:46 pm

    Hallelujah Scott, I thought I was alone in this wilderness. The real issue, Scott, is that skilled, disciplined market research professionals like you and me are simply dinosaurs in this online world. There is no amount of ranting and raving that will stop this train blundering down the tracks. It's too convenient, too low-cost, and a readily accepted con job. I agree with all of what you have seen and said in great detail about survey design, but my hobby horse is a much simpler issue: basic sample representativeness.

    Now why would anyone who isn't a product of the serious research industry of the pre-1990s query anything about online, when it seems to solve all the key concerns of old-style, legitimate market research: cost and timing? Bosses aren't interested in issues like sample representativeness, and there is a massive assumption that these newbie market researchers can even write a question, assuming they don't abuse basics like length of interview, and we all know that is becoming a given. Does anyone bother looking at response patterns in these long surveys? You should, and it would shock you. Mostly straight-lining answer patterns. In other words, "how the hell can I get this survey over with quickly and get my cash?" type of survey involvement.
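    A straight-lining check takes only a few lines to run. A minimal sketch, assuming a simple grid of 1-5 ratings per respondent (the data and the threshold here are purely illustrative):

```python
import statistics

def is_straightliner(ratings, min_items=5):
    """Flag a respondent who gives the identical answer to every item in a grid."""
    return len(ratings) >= min_items and statistics.pvariance(ratings) == 0

grid_answers = {
    "r001": [4, 4, 4, 4, 4, 4, 4, 4],   # classic straight-liner
    "r002": [5, 3, 4, 2, 5, 1, 3, 4],
    "r003": [2, 2, 2, 2, 2, 2, 2, 2],
}
flagged = [rid for rid, answers in grid_answers.items() if is_straightliner(answers)]
print("Flagged straight-liners:", flagged)   # -> ['r001', 'r003']
```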

    Now let's move on to panel representativeness. It is clear when you look at age and income profiles that panels simply cannot recruit people over 40 or with high incomes. Their representation is way below what could have been expected based on census numbers. Now the industry response is "oh, don't worry about that, you can simply weight quotas for representation". Sounds good in theory, until one starts asking why these older respondents and higher-income respondents aren't better represented. For a long time it was argued that they weren't online or smartphone users, which of course begged the question of why anyone would claim a panel could ever be representative. Now smartphone penetration is ubiquitous, so why are panels still biased? The answer is key to everything that is wrong with panels: basically, there are a whole lot of people out there who simply do not want their "private" social space intruded on by surveys. This is obviously a big issue for older and higher-income respondent targets, but we have to ask whether this is a phenomenon that suggests panel participants are different from the general population. There is enough empirical research at hand to suggest this is a real problem. My own experience in seeing a series of financial services trackers move to online without any sample changes was a major learning experience. It was clear that open-ended questions resulted in much lower brand mentions than in a personal interview. The other concern was those unthinking responses to imagery batteries.
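    For what it's worth, the "simply weight it" fix amounts to nothing more than cell weighting of this kind; a minimal sketch, with made-up age bands and target shares rather than real census figures:

```python
from collections import Counter

def cell_weights(sample_groups, target_shares):
    """Weight each cell so the weighted sample matches the target distribution."""
    n = len(sample_groups)
    counts = Counter(sample_groups)
    return {g: target_shares[g] * n / counts[g] for g in counts}

panel_ages = ["18-39"] * 70 + ["40-64"] * 25 + ["65+"] * 5       # panel skews young
targets = {"18-39": 0.35, "40-64": 0.42, "65+": 0.23}            # hypothetical population shares

print(cell_weights(panel_ages, targets))
# -> the 5 respondents aged 65+ each carry a weight of 4.6
```

    The arithmetic works, but the handful of older panelists end up standing in for several times their own number, which is exactly the question being dodged: do they resemble the non-panelists they are projected onto?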

    The truth is the industry is on a slip-slide into unprofessional behaviour that lacks the rigour I was subject to as a young market researcher. I have no expectations that this exchange will lead to much. Once DIY systems started to take off, you could see the dross on the surface of this industry. I am saddened by its demise.

  2. Kevin Gray says:

    January 20th, 2015 at 8:55 pm

    Thanks for this, Scott, and also Chris for your thoughts. I think there is still plenty of room for high-quality survey research at reasonable cost, but communicating "the difference" seems to be getting more challenging. Part of the reason, if I am correct, is that there are now more tools and methods competing for attention. That said, I began to become concerned about standards slipping more than 20 years ago. The MR industry was growing very rapidly even back then, and training and education were not keeping pace. I do not have a simple solution.

  3. Ryan Barry says:

    January 21st, 2015 at 9:27 am

    Hi Scott,

    Thank you for taking the time to write this post. Full disclosure: I worked in the panel business (GMI/LSR) for many years, as recently as July of last year, and so many of the dynamics you outlined are true.

    Most field directors are pushed to buy at the lowest CPI, and from spending the better part of the last year engaging with corporate researchers on the dynamics of panel quality and balancing, I found that less than 30% of those I spoke with even knew what the term 'river sample' meant.

    It's a sort of blind trust: people assume it is being taken care of. And while many agencies really do put thorough rigor behind sampling, it's not nearly a high enough percentage from what I have seen.

    I will say that many of the panels I work with now are putting an effort into diversifying their recruitment strategies and enhancing quality, but when we force 45-minute grid-heavy surveys and won't pay more than $5.00, we have to look at ourselves in the mirror as well. I do see this changing, because the days of consumers in panels sitting through these surveys, and of agencies making a living by marking up panel and sending clients glorified field-and-tabs, are quickly coming to an end.

    I also really agree with your point that the big data providers have a massive opportunity to take over the space, not only Google, but also massive telecom companies who not only have direct mobile access, but also have mines of behavioral data that we could directly tie into stated responses in surveys. I am hearing rumblings across many of these companies, but am also seeing the few companies who truly operate access panels putting a massive emphasis on pooling as much data about people as possible so we don’t have to ask age, gender, where do you shop, what do you buy 50,000 times to the same poor respondent. My worry is that it might be too late as we have already annoyed these people too much.

    I will say that in my current firm we don't ask any respondent to spend more than 10 minutes with us, using panel information we already know and keeping our surveys focused, and the quality of the open-ended responses and overall engagement is evident.

    There was a great piece of research that my former colleagues Mitch Eggers and Jon Puleston presented at ESOMAR a few years back, titled 'The Dimensions of Quality,' which essentially showed that everything from panel quality to balancing, quota management, respondent engagement, and survey length had a fairly equal impact on the quality of the results.

    It just cracks me up that the panel is the flour with which our industry bakes its bread (I have to credit Susan Griffin of Brainjuicer with that metaphor), yet we consider the topic stale and not of interest.

    Thanks again, Scott.

    Sincerely,
    Ryan

  4. John Sukup says:

    January 21st, 2015 at 7:18 pm

    Scott, Chris, Kevin–all interesting information you presented. I’m rather “new” to the MR industry (~7 1/2 years in) but I can agree 100% with your sentiments. The online survey methodology SHOULD be dying on two very noticeable fronts–poor panel quality and lack of attention to survey design best-practices. Unfortunately as I believe you would all agree, MR has begun to devolve into a sort of pseudoscience with very little rigor or scientific foundation. At the end of the day, all MR clients seem to want (and all the MR supplier-side client managers want for that matter) is THE NUMBER.

    When I say “the number” I’m referring to an absolute basic level of measure. “Well X% said they liked our product in the survey and X% said they did not. We are doing well/poorly.” That is the extent of it. MR does not exist to drive decisions and actions (as it always should — thank you AIM Process) but rather to put a pretty piece of “evidence” into the mix of decision-making. MR users do not necessarily care how you arrived at your “number” (and this can be on both client and supplier sides as I have personally witnessed), they just care that “the number” exists.

    Therefore, panels are probably safe for now, as are the online surveys that rely on them. The real issue, maybe even the root cause, is the lack of education and understanding by the clients and suppliers of MR; it goes both ways. Without a general understanding of best practices in MR (sample planning, methodology, statistics, data analysis, etc.), it will continue to be the blind leading the blind. I've seen it first hand and see it every day.

  5. Anonymous Researcher says:

    January 23rd, 2015 at 3:25 pm

    I thoroughly enjoyed this post and the comments following. I recently ran a study using a panel, and was excited about the results until a colleague pointed out that, statistically speaking, the demographics didn't play out, unless there is a large majority of master's-degree-bearing people in the US who join panels. Being a member of a few panels myself, I'm often bemused, and sometimes horrified, at the ridiculously low quality of surveys being administered. I worked through a 45-minute survey once or twice just to see what on EARTH was so important that it had to be that long. The answer? Nothing, but someone thought it would be great to ask me about 5 different topics in depth, with poorly written questions and poor sets of answers! I have been frustrated by the lack of "not applicable" options. I took one survey asking me about my pets. I have none. I answered such, and it STILL asked me what food I typically fed my pets. I wrote to the panel management team and told them the survey was broken, but the email was answered a full month after I submitted it, so I'm guessing other respondents without pets who just wanted the points went ahead and gave whatever answers would earn them. I agree that there are enormous issues on both sides of this coin: from panels and who they consist of, to the survey writers, to the research consumers who don't really care for in-depth analysis and just want the top-level numbers so they can move on.
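    That "the demographics didn't play out" check is easy to formalize. A minimal sketch of a chi-square goodness-of-fit comparison of a sample's education mix against a benchmark; the counts and benchmark shares below are invented purely for illustration:

```python
# Invented counts and benchmark shares, purely for illustration.
observed = {"HS or less": 180, "Some college/BA": 390, "Master's or higher": 430}
benchmark_share = {"HS or less": 0.38, "Some college/BA": 0.49, "Master's or higher": 0.13}

n = sum(observed.values())
chi_sq = sum((observed[k] - benchmark_share[k] * n) ** 2 / (benchmark_share[k] * n)
             for k in observed)

# The 5% critical value for chi-square with 2 degrees of freedom is about 5.99.
verdict = "does not match" if chi_sq > 5.99 else "is consistent with"
print(f"chi-square = {chi_sq:.1f}; the sample {verdict} the benchmark")
```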

    I am a member of one panel that offers next to nothing in terms of rewards for being a member, but I will nearly drop anything to take their surveys because they are ALWAYS less than 5 minutes to take, ALWAYS have open-ended text options, and, 98% of the time, the questions are well written, including the answer options (and, sidenote, their surveys are always fantastic on mobile devices, unlike other panels' surveys that are absolutely terrible on mobile!).

    So here’s a follow-up question – what do we do in the MR community to change any of this? How can we be sure that good surveys are being written? How can we be sure that those for whom we are doing the research actually pay attention to what the research is actually telling them? I think DIY researchers are doing better jobs with their surveys because they don’t have time to write super lengthy surveys that end up requiring super in-depth analyses. They have a specific need for the surveys they administer, so they come away with very focused surveys. How can we get those who are supposed to be experts in this to take the same approach? How can we convince customers that the survey experience will be better all around if we focus on a single purpose for our surveys instead of trying to ask too much of our respondents just because we think we have their attention and want to make the most of it?

  6. Saul Dobney says:

    January 26th, 2015 at 4:27 am

    It’s still a bit wild-west with regard to DIY research, but if the research stops producing valid insights and businesses make wrong decisions, then in the post-mortem someone will query why the forecast outcomes from the research don’t match the actual outcomes. In this context, cheap research starts to look expensive.

  7. Matt Dusig says:

    January 27th, 2015 at 5:09 pm

    Scott, great thoughts in this article. You know the game: we build the panels as fast as we can, and the lack of transparency between survey software quota requirements and sample targeting causes churn. Then, if the panelist actually does qualify, they frequently end up in a maze of long survey questions that only further makes people say "wow, I'm spending 30 minutes for a $1 reward". If you're lucky it's a $1 reward. Why $1? Well, as CPIs are driven down by the need to achieve greater profitability, the rewards that can be offered suffer. But $1 in China… that's still good money. So just like much of what used to be produced in the US and is now produced in China, it doesn't surprise me to see your comments about Chinese farms. What's the solution? In my opinion, standardized panelist targeting questions that map to standardized survey quotas, so that you can get to a true post-targeting incidence. You'll increase the conversion of starts to completes for each survey, drive up satisfaction for panelists, and create scenarios where more people actually want to take surveys instead of wasting their time. I could go on for hours about this one… Matt
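    A minimal sketch of that idea, with made-up profile fields and quota criteria: pre-target panelists against the survey's quotas using data you already hold, then measure incidence only among those you actually invited.

```python
# Made-up profile fields and quota criteria, purely to illustrate the mapping.
panelists = [
    {"id": 1, "age": 34, "has_dog": True},
    {"id": 2, "age": 61, "has_dog": False},
    {"id": 3, "age": 29, "has_dog": True},
    {"id": 4, "age": 45, "has_dog": True},
]
quota = {"age_range": (25, 54), "has_dog": True}

def matches(profile, quota):
    """True if stored profile data already satisfies every quota criterion."""
    lo, hi = quota["age_range"]
    return lo <= profile["age"] <= hi and profile["has_dog"] == quota["has_dog"]

targeted = [p for p in panelists if matches(p, quota)]   # invite only these people

# Hypothetical in-survey screener results among the targeted invites:
# some still fail, e.g. because the stored profile was stale.
qualified_ids = {1, 4}
incidence = len(qualified_ids) / len(targeted)
print(f"Invited {len(targeted)} pre-targeted panelists; post-targeting incidence = {incidence:.0%}")
```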

  8. Dieter Korczak says:

    February 4th, 2015 at 9:33 am

    Dear Scott, dear participants in the discussion,

    For me, the way out of the problematic situation described here is evidence-based consumer research.
    For that we need the old tools again, like representativeness, validity, and reliability, and a new MR ethic.
    Additionally, we have to educate our customers/clients that if you pay peanuts, you get monkeys.

  9. John Coldwell says:

    February 4th, 2015 at 9:50 am

    Hi Scott,

    There’s another elephant in the room – survey response rates.

    Over the past couple of days I've been researching the general subject of survey response rates. My normal interest in the subject became elevated when I ran across an article having to do with the ACSI.

    What caught my attention in terms of this discussion was the following quote:
     
    ———————————–
    “The American Customer Satisfaction Index found that response rates for paper-based surveys were around 10% and the response rates for e-surveys (web, wap and e-mail) were averaging between 5% and 15% – which can only provide a straw poll of the customers’ opinions.” 
    ————————————
     
    While it’s not news that electronic survey response rates have been steadily eroding for the past twenty years, I was very surprised to read that, in at least some cases, they were now performing similarly to the long-maligned paper survey.  After reading the statement, various additional questions sprang to mind; chief among them, is that response range indicative of the entire industry, or is it a product of something that ACSI is doing?  I must have visited 75 web sites in a search for the answer to that question.  The results were decidedly mixed. 
     
    First off, of the thousands of web survey providers out there, I ran across quite a few claiming they had achieved some pretty lofty response rates. In support of supposedly "proprietary techniques", I found companies claiming that they had achieved 30%, 50%, even 80% response rates. One company even claimed to have hit 100%, more than once. Most of those assertions, however, seemed to have caveats attached to them, both openly stated and inferred; qualifiers like "a survey of a very small body of very closely intertwined customers". In other words, many of the high response rates were probably based on having sent out ten survey invitations. After discounting those sorts of claims, and after reading between the lines on other sites, it was clear that no one anywhere was claiming they can consistently hit numbers anywhere near those kinds of totals. In fact, no one anywhere seemed to make any kind of claim at all as to what they can consistently hit. No averages, no medians, no realistic expectations or long-term histories of any kind.
     
    Secondly, in my travels I ran across quite a number of academic and research company generated articles which, though presenting another fairly broad range of results, seemed to average out to a reasonable expectation of something in the 10% to 15% range, and probably trending closer to the lower number.  I was not able to locate a definitive voice of the industry on the matter, but will continue looking when or as time allows.  
     
    One opinion that I did run into over and over again was that response rates to surveys in general (and of course most references on the web are to paper, internet or telephone) have ALL been declining over the past ten years. I find that eminently believable given our own history, which has been consistent with that trend. In the mid and late '90s, we consistently came in at 75% to 80%. By the mid to late '00s, though, we dropped to closer to 70%, sometimes less. In the current decade we are so far running closer to 68%, and sometimes less.

    (Perhaps at this stage I should explain, for those of you who are unaware of InfoQuest, what it is we do and how we’re rather different. Back in 1989 a couple of guys developed a white plastic box with five compartments and a deck of cards that we use for customer satisfaction surveys. Nowadays the box would probably fall into the ‘gamification’ camp. It would probably also be condemned as old-fashioned and clunky. Well, here’s the thing. Our clients are all in business-to-business (B2B). Typically they’ll have between 100 and 500 customers (who are all different in their personalities, needs and wants, and profitability). Using the clunky old-fashioned box we are able to pose up to 60 questions and statements in any language from Afrikaans to Farsi and get them a response rate they can work with.)

    There has been plenty of teeth gnashing and navel gazing around here in recent years as we have repeatedly tried to figure out why we are no longer hitting the kinds of numbers we used to routinely enjoy.  We’ve reviewed our operations, our validation procedures, the content of advance notification letters, the callers we use, anything we could think of that might be having an impact on response rates and, with rare exceptions, we found nothing.  The simple truth seemed to be that what worked like a charm in 1995 is simply not working as well in 2015.     
     
    There are two factors, however, that are difficult to escape.  First, in 1995, customer satisfaction surveys were still a relatively new phenomenon.  Companies and people were just starting to understand the value of surveys, and we had the clearly better mousetrap.  In the intervening twenty years, however, everybody, and I mean EVERYBODY, has jumped onto the proverbial bandwagon.  In 1995, surveys were an interesting novelty, an intriguing idea.  Today they are everywhere.  We are bombarded by them wherever we turn, often unable to avoid them, even when we’d prefer to.  You can’t conduct business online, can’t buy something in a department store, can’t buy a light bulb at Home Depot without being asked to participate in a survey.  It’s become a near glut, and like the trees in a forest, after a while you no longer even see them.    
     
    The second factor is the growth of informational incompetence among our clients.  In the early 90’s we dealt with generally small companies, often “mom and pop” operations who generally knew their customers pretty well.  Today we are mainly dealing with multi-billion dollar, multi-national conglomerates who have decrepit CRM systems, who take every informational shortcut they can when assembling a customer list, and who consistently have us trying to validate former employees, former customers, non-decision makers, and the dearly departed.  In other words, a big part of our problem is application of the theory of garbage in, garbage out.  

    But we’ve still got an average response rate that is six times higher than a web-based survey.

    So, back to the 5% to 15% response issue.

    The problems are:

    1. A low response rate will tend to garner feedback from the two ends of the spectrum – totally satisfied through to totally dissatisfied – in a ratio of 3 dissatisfieds to 1 satisfied. Those in the middle are the least likely to respond.

    2. The unbalanced response will end up becoming an add-on to the company complaints procedure.

    3. If you have only, say, 100 most important customers, then to hear back from only 5 to 15 of them can be depressing.

    4. And NOT hearing back from the other 85 to 95 can be seriously dangerous if the CEO was thinking of making strategic decisions based on the feedback (a quick back-of-envelope illustration follows this list).
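    To put rough numbers on points 1, 3 and 4, a minimal sketch; the counts, the 3:1 split, and the view of the silent middle are illustrative assumptions only:

```python
# Back-of-envelope illustration of points 1, 3 and 4 (all numbers are assumptions).
customers = 100
responses = 12                       # roughly a 12% response rate on a 100-customer base
dissatisfied, satisfied = 9, 3       # the ~3:1 skew among responders from point 1

print(f"The report says {dissatisfied / responses:.0%} of responders are dissatisfied.")

# But 88 customers said nothing. Assume, for illustration, that only a modest
# share of the silent middle is actually unhappy.
silent_dissatisfied = 15
true_rate = (dissatisfied + silent_dissatisfied) / customers
print(f"Under that assumption, real dissatisfaction is closer to {true_rate:.0%}.")
```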

    So for me it's been the elephant in the room for years. No one talks about response rates, and yet, particularly in the B2B arena where the typical organisation has only a few hundred customers, a good, high response rate is a key component of having feedback that is 'useful' rather than merely 'interesting'. And data based on low response rates is downright dangerous and should carry a health warning.

    If you’re interested in the box then there’s a short 90-second video on http://www.infoquestcrm.com. And, to help, we have some advice on how to increase response rates – http://infoquestcrm.co.uk/how-to-increase-response-rates/

  10. Brett Watkins says:

    February 4th, 2015 at 10:50 am

    Interesting read, Scott. Being in the qual world, there are several parallels: databases (panels) over-used, with repeater/cheater issues; screeners 30 minutes long without any reward unless you're one of the lucky 20 who qualify for the research. Otherwise, "I'm sorry, but you don't qualify for this study, but we'll call you again soon for another chance to spend 30 minutes on the phone with us." The consumer one really wants? The person you're most likely to alienate with these arcane processes. Sadly, the same obstacles exist: disconnect on declining cooperation rates, penny-wise and pound-foolish decisions over a few hundred dollars (or an extra day of travel), etc.

    The part that resonates with me from your message, and what I believe is at the origin of the issues, is the lack of educational curriculum and accreditation. Very few undergraduate and graduate programs have a focus on market research. "Expert" can be established with what… a certification with little academic rigor (a one-week course? Being a member of an organization? Come on down!). It's not surprising DIY has taken hold, as anyone can hang a shingle saying they are an expert or consultant. If clients make poor choices based upon budgets, then get burned, why be surprised when they feel they can write a questionnaire or moderate a focus group just as well (and why would they ever trust that there's truly a difference in the right hands)?

    Knowing that education is a long-term problem with numerous parallels throughout our world today, the key, in my opinion, in the short term is finding more ways our industry can coalesce, with all elements of the research process in the same room. There are close to a dozen different organizations that represent different facets of the industry; while each serves its purpose, there's a definitive need for them to work in concert. Best practices will evolve, all layers (end client to field) will better understand their roles and strengths to do the proper needs assessment and deliver the best solutions (think technology today: software companies don't sell storage and vice versa, they work together as channel partners to deliver the best solutions), awareness of the industry will rise in the eyes of consumers and its importance will be understood, and true experts and innovation will emerge.

  11. Jaimie Korody says:

    February 4th, 2015 at 1:17 pm

    Thanks for the spot-on commentary, Scott. I was trained in the world of graphic design, where user interface rules. It’s not so surprising that the best surveys – brief and personable – are produced on DIY platforms by marketers attuned to interface. Marketers disrespect consumers at their peril.

    The research industry needs to recognize that it has both the toolset and reach to produce actionable data – through smartphones. A creative, secure and respectful interface in everyone’s pocket, worldwide.

  12. Scott Weinberg says:

    February 4th, 2015 at 2:22 pm

    Thanks all for the comments. Much obliged.

    Also, I'm working on a follow-up, motivated by the many who have connected with me privately since this was published. If I paraphrase those chats, their identities will remain anonymous.

  13. Has Research Quality Really Gone Downhill? | GreenBook says:

    February 6th, 2015 at 8:45 am

    […] post actually started as a reply to Scott Weinberg’s terrific Greenbook Blog post Is Online Sample Quality A Pure Oxymoron?  After doing a little writing in the reply box, I realized my comments were lengthy enough to […]

  14. Melanie says:

    February 6th, 2015 at 10:36 am

    Just to be clear – it’s certainly NOT every major panel company that has a group of interns. There is NOT a group of interns at Research Now doing that. Period. We lay our heads on our pillows in peace each night.

  15. Scott Weinberg says:

    February 6th, 2015 at 4:03 pm

    Melanie – that was interpreted incorrectly (or perhaps I didn’t write it coherently). The ‘every major panel’ passage = they registered *as panelists.* Not that every panel company is using interns for this purpose.

    Major difference of course, and glad I could clarify.

  16. Scott Weinberg says:

    March 21st, 2015 at 8:58 pm

    I added a brief follow-up to my original post in case of interest: http://tablamobile.com/blog/

  17. Hispanic Sample in the Commodity Age - says:

    January 31st, 2016 at 3:47 pm

    […] margins, quality in the sample industry has been compromised (as discussed in great lengths in many blog […]

  18. Piper says:

    May 17th, 2016 at 9:11 pm

    Touché. Great arguments. Keep up the great work.
