Online Research in 2020: Machines Will Take Your Surveys

By Ben Leet 

I was recently asked to present at Insight Innovation Exchange (IIEX) in Europe on the topic of what research will look like in 2020. Since the company I work for, Instantly, is known as an innovative company, my initial thought was that this would be an easy task, but it was made harder by the mandate of not turning it into a sales pitch! Since Instantly is predominantly in the business of supplying audiences for online surveys, I decided to talk about some of the industry trends surrounding our part of the MR world, and what I think will happen in the future.

Trend 1 – Respondent behaviour

We ran comprehensive fieldwork across our panels to assess the state of mind of our respondents. The results were predictable, if a little scary. Almost 60% of panelists spend more than 15 minutes on our sites at a time, and almost 80% would consider taking a survey longer than 15 minutes. However, we found that most admit to speeding through surveys at least occasionally, with boring or repetitive surveys and low incentives as the top reasons given.

Trend 2 – Researcher behaviour

The surveys that we are asked to field are becoming increasingly long, complex, and repetitive. The average survey is now over 20 minutes long, and many are still not mobile optimised. To capture high-quality data and counter this trend of speeding through surveys, incentives need to be increased to compensate respondents for the time and inconvenience of taking them.

Trend 3 – Commoditised market

Research agencies believe that all panels are built the same (they are not; see trend 4), so they care only about the price at which they can buy sample. Of course, they will “say” that quality is important, but they never ask the right quality questions and instead make buying decisions based on price. For a panel company that has already optimised its efficiency, further reductions in price have to come from above the gross profit line, i.e. from the incentives paid out to respondents.

Research agencies are writing long, boring surveys on the assumption that respondents care about the answers they give, and then paying pennies to those same respondents. Respondent behaviour is changing as a result: they speed through surveys as fast as they can to reach their meagre payout. Overall, the industry has a misalignment of objectives, and it means that any longitudinal data will naturally drift over time, even though the methodology “must stay the same to preserve consistency”.

So what next for panel companies?

Trend 4 – Not all panels are built the same

There’s a common misconception in the industry that all panels are built the same, and that panel = good quality while river = bad quality. Even the ESOMAR 28 questions allude to this. Consider question #2:

“Please describe and explain the type(s) of online sample sources from which you get respondents. Are these databases? Actively managed research panels? Direct marketing lists? Social networks? Web intercept (river) samples?” Context: “The description of the types of sources a provider uses for delivering an online sample will provide insight into the quality of the sample.”

Firstly, the context is completely incorrect. There is absolutely no way a buyer of sample can gain any insight into its quality simply by knowing whether it is sourced from a panel or a river. As I presented in Amsterdam, a good analogy is fishing. Would you prefer to eat a fish caught fresh today from a lake, or a fish caught in bulk a week ago from the Ganges and packaged up? The lake fish will taste better, but if we know nothing about where our fish actually came from, we may well end up eating the disease-ridden pre-packed catch. The label alone tells us nothing.

The quality of online sample depends on the actual source of that sample. Panel or river is not a source; it is a methodology. Many affiliate networks and publishers drive traffic to either a survey router (river) or a panel, and often the same source feeds both, so it all comes down to the sourcing of the traffic.

The right questions that really define sample quality are:

  1. What is your recruitment methodology? Describe the different sources and blends used to build the research panels/drive the survey routers.
  2. Does your company have the ability to track respondent quality back to source and therefore actively manage those sources? How is this done?
  3. What are average drop-out rates, disqualification rates and recall rates across the panels that you manage?
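The metrics in question #3 are straightforward to compute from session logs. As a minimal sketch, here is one hypothetical way to derive drop-out and disqualification rates; the field names and `Session` record are illustrative assumptions, not any real panel platform's schema.

```python
# Hypothetical sketch: computing panel-quality rates from survey sessions.
# Field names are illustrative, not from any real panel platform.
from dataclasses import dataclass

@dataclass
class Session:
    panelist_id: str
    started: bool       # panelist opened the survey
    screened_out: bool  # disqualified by screener questions
    completed: bool     # reached the end of the survey

def quality_rates(sessions):
    """Return (drop-out rate, disqualification rate) over started sessions."""
    started = [s for s in sessions if s.started]
    if not started:
        return 0.0, 0.0
    disqualified = sum(s.screened_out for s in started)
    dropped_out = sum(not s.completed and not s.screened_out for s in started)
    return dropped_out / len(started), disqualified / len(started)

sessions = [
    Session("a", True, False, True),   # completed
    Session("b", True, True, False),   # screened out
    Session("c", True, False, False),  # abandoned mid-survey
    Session("d", True, False, True),   # completed
]
dropout, disqual = quality_rates(sessions)
print(f"drop-out {dropout:.0%}, disqualification {disqual:.0%}")
# drop-out 25%, disqualification 25%
```

A provider that cannot produce numbers like these, broken down by traffic source, probably cannot answer question #2 honestly either.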

Trend 5 – Panel companies care more about efficiency than quality

Of course they do. In a commoditised market, where quality counts for nothing in the sales cycle, companies have no choice but to drive efficiency to maintain margins. All businesses have shareholders looking to create earnings from profits, which means cutting costs when prices are in decline.

How most panel companies cut costs:

  • The most obvious area is to make sure that respondents see as many surveys as possible, and can complete as many as possible. This means pre-conditioning them via routing. It optimises the panels, but causes data skews. Long gone are the days of “one invite for one survey”.
  • Reduced incentives. Mentioned above, incentives are at an all-time low just when surveys are at an all-time high for poor quality and length.
  • Source cheaper. It’s no coincidence that the cheaper sources of web traffic also convert into higher profit margins, as those sources drive very active, aggressive survey takers who are only in it for the money. But since “all panels are built the same” and our clients don’t care about this, why not…!

At my company, for instance, we’re working hard at building technology that drives efficiency in our panels whilst not compromising on quality. Our sampling today is focussed around our Adaptive Profiling platform, which uses machine learning and some clever data science “stuff” (algorithms, apparently!) to predict which surveys would be relevant for which audiences without actually asking them. That drives a better respondent experience without the pre-conditioning that a traditional router creates, and it means we can still afford to close down poor-quality traffic sources without compromising our bottom-line targets.
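To make the idea concrete, here is a toy stand-in for the kind of prediction described above: guessing whether a survey is relevant to a panelist from behavioural signals rather than re-asked profiling questions. The features and the nearest-centroid approach are my illustrative assumptions, not Instantly’s actual Adaptive Profiling implementation.

```python
# Toy sketch of relevance prediction from behaviour, not profiling questions.
# Features and method are assumptions for illustration only.
import math

# (feature vector, was the survey relevant?) for past invitations:
# past completion rate, average session length (minutes), share of mobile sessions
history = [
    ([0.9, 12.0, 0.8], True),
    ([0.8, 10.0, 0.9], True),
    ([0.2, 30.0, 0.1], False),
    ([0.3, 25.0, 0.2], False),
]

def centroid(rows):
    """Mean feature vector of a group of past invitations."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

relevant = centroid([f for f, label in history if label])
irrelevant = centroid([f for f, label in history if not label])

def predict_relevant(features):
    """Nearest-centroid guess: is this survey worth routing to the panelist?"""
    return math.dist(features, relevant) < math.dist(features, irrelevant)

print(predict_relevant([0.85, 11.0, 0.7]))  # True: resembles past good matches
```

A production system would use far richer features and a real model, but the point stands: the targeting decision is made from data the panel already holds, so the respondent never sees the screener.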

What will our world look like in 2020? Considering the speed at which big data and machine learning are taking hold, I can confidently predict that we could build algorithms to replace respondents, and our research agency clients would not notice any difference in the data. Indeed, we are already driving our respondents to think like machines. If we want to stop this trend, we have to be better at how we communicate with them:

  • Keep it short. Time is precious. We already know so much about them, why ask the same damn questions over and over again? We can be way smarter in this area.
  • Think mobile at all times. Nobody wants to sit at a desktop for 30 minutes taking your survey just for fun! Cut it to 10 minutes, send it to their mobile device, and the insights gained will be richer and more accurate.

Change is happening in this industry faster than at any other time in our history, and now is the time to embrace it before machines take it over completely.


6 responses to “Online Research in 2020: Machines Will Take Your Surveys”

  1. Would “bot” respondents be all bad? As a research consultant, I find the possibility of running surveys virtually instantaneously and at low cost exciting. Niche audiences would no longer be a problem. It would also let us perform additional iterations, i.e. go back with the new questions that arise from the first survey, something we rarely do now and something that can only help to refine our analyses and improve our interpretation.

    I can see why panel providers and fieldwork agencies would be worried but would this really be a bad thing for the industry as a whole? It would place the emphasis on analysis and interpretation and away from data collection.

    The “bots” would have, of course, to be proven to reflect the opinions and behaviours of real people but they would open up a new world of possibilities.

  2. Very interesting concept, and I am excited to see how this works out for both panels and researchers. Having done business abroad and worked internationally, we have run into cultural and wording issues when conducting surveys. Would these bots make it easier? Would they be able to understand cultural differences? As stated above, this needs to be proven, but it would be a game changer for marketers if successful.
