
Science is Dead – Long Live Marketing Research

Research-on-research is a good thing, but not if we build our biases into the design.


Dr. Stephen Needel

There are some days when I think I should just stay in bed. But today I’m working off jet lag in Shanghai, so I’m up at 4am, catching up on my RBDR Daily News Report. There’s Bob Lederer’s smiling face, on his 28 May broadcast (free plug, Bob), extolling the importance of a study done by Instantly. This study purports to show interesting differences between responding to online surveys on a mobile device and on a PC. I believe research-on-research is a good thing for us to do, but this study isn’t a very good exemplar. Because it is getting talked about in a number of blogs, it deserves some skewering.

Instantly begins by claiming, “This research was… designed with an open mind to prove or disprove that mobile gives more accurate insights than online.” First, nobody who isn’t selling mobile research claims or believes that mobile necessarily gives more accurate insights, so proving or disproving this isn’t a burning issue in researchers’ lives. As you read the report, you’ll notice that the writing either favors mobile or apologizes for mobile’s shortcomings; at least the latter are reported.

To run this study, Instantly recruited two panels of shoppers, one participating via mobile and one responding on a PC (the online sample). It’s a three-part test. The first part is a shopability study for Lays potato crisps/chips – Prawn Cocktail in the UK and Cheesy Garlic Bread in the US. Prawn Cocktail has been around as long as I’ve been doing research internationally – since 1995 at least. Cheesy Garlic Bread was an in-and-out product for Lays in 2013. Shoppers were asked to go to the store and buy the product, then answer some questions about it – how long it took them to find it and where it was located on the shelf. Mobile shoppers could do this “in the moment” (a phrase I’m coming to abhor for its misuse and overuse), while the online panel had to wait to get back home to respond (at the earliest).

You, dear reader, should be banging your head on the nearest hard surface, asking who, in their right mind, would do a shopability study with an online sample? You’ll be shocked to learn that the study finds major differences between mobile and online responses – shocked, I say! (with apologies to Claude Rains). Mobile users were much more accurate in recalling the location of the product and claimed much shorter shopping times. This is a blinding flash of the obvious. Moreover, any researcher who did a shopability study like this online deserves the bad data they get. NOBODY should ever, ever, ever do this – there is just no excuse. Mobile, on the other hand, is perfect for this type of work.

The next phase of the test was an in-home usage test. Now, I’ve always thought an IHUT was pretty simple. You ask the participant to try your product and then respond to some questions about it. In the case of a one-time trial, like a snack product, you might say something like, “When you’re ready, we’d like you to try the [Prawn Cocktail/Cheesy Garlic Bread] product you bought, then log in to take a quick survey.” Apparently, this was too sophisticated an instruction set. 25% of the mobile users and over half of the online users answered the questions three or more hours after trying the product. I’m thinking that if this were my study, I’d delete, as part of data cleaning, anyone who didn’t taste the product just before answering the sensory questions.
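
That cleaning rule is easy to automate, for what it’s worth. A minimal sketch in Python, assuming hypothetical tasted_at and responded_at timestamps – Instantly’s actual data layout is unknown to me:

```python
import pandas as pd

# Hypothetical respondent-level data; the column names and values
# are assumptions for illustration, not Instantly's actual schema.
responses = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "tasted_at": pd.to_datetime(
        ["2015-05-01 12:00", "2015-05-01 09:00", "2015-05-01 08:00"]),
    "responded_at": pd.to_datetime(
        ["2015-05-01 12:30", "2015-05-01 14:00", "2015-05-02 08:00"]),
})

# Drop anyone who answered the sensory questions more than
# three hours after tasting the product.
lag = responses["responded_at"] - responses["tasted_at"]
clean = responses[lag <= pd.Timedelta(hours=3)]

print(clean)  # respondent 1 survives; 2 and 3 are cleaned out
```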

Online has a higher Purchase Intent score than does mobile, which the authors believe is another argument in favor of mobile (remember – they are claiming objectivity). The authors state, “…product owners following the online data would over-invest on positioning and product supply”. They do not state the converse – that believing the lower mobile-generated PI scores could lead to under-investment and product out-of-stocks. They do not do the obvious – tell us which version gives the more accurate sales forecast. Actual sales were a known quantity – they could have told us whether online over-projected or mobile under-projected. In claiming big differences, they also ignore the fact that UK top 2 box scores are identical; this would be the likely case for an existing product. They make no mention of the demographics of the two groups of panelists – if the groups differ, this could very well account for the PI differences. Salty snacks are one of those categories where flavor preferences have an age profile – different ages, different responses. And we would expect an older online sample compared to a younger mobile sample (I note that space limitations in a promotional piece may have kept them from telling us about the sample).
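
If the two samples do skew by age, the textbook fix is to reweight both panels to a common age profile before comparing PI scores. A minimal sketch, with entirely made-up numbers – the report gives us none of the demographics:

```python
import pandas as pd

# Made-up respondent data; ages, top-2-box flags, and the target
# age distribution are all assumptions for illustration.
panel = pd.DataFrame({
    "source": ["mobile"] * 4 + ["online"] * 4,
    "age_band": ["18-34", "18-34", "35-54", "55+",
                 "18-34", "35-54", "55+", "55+"],
    "top2box": [1, 1, 0, 0, 1, 1, 1, 0],  # 1 = top-2-box PI
})

# Target age profile (e.g., census or category-buyer distribution).
target = {"18-34": 0.3, "35-54": 0.4, "55+": 0.3}

def weighted_top2box(df: pd.DataFrame) -> float:
    # Weight each age band so the panel matches the target profile.
    observed = df["age_band"].value_counts(normalize=True)
    weights = df["age_band"].map(lambda band: target[band] / observed[band])
    return (df["top2box"] * weights).sum() / weights.sum()

for source, df in panel.groupby("source"):
    print(source, round(weighted_top2box(df), 3))
```

If the mobile/online PI gap shrinks after a reweight like this, the “mobile is more accurate” story evaporates into a sampling artifact.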

Finally, they want to claim that the diagnostic data one gets from mobile is much richer. Mobile panelists use an average of eight words per diagnostic, while online users employ only seven. Such a big difference should overwhelm us? A test may show this to be statistically significant, but I’m not sure how meaningful the difference is. But then, I’m a quant type of guy, not a qual expert. I do note that they quickly gloss over the fact that sensory ratings show little difference between the panels and are directionally inconsistent, suggesting nothing is there.
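
For the quant-minded: with panel-sized samples, a one-word gap will test as “significant” almost by default, while the effect size stays modest. A quick illustration – the means match the report’s eight versus seven, but the distributions and sample sizes are my assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated word counts per diagnostic; the means match the report
# (8 vs 7), but Poisson spreads and n = 1,000 per cell are assumptions.
mobile = rng.poisson(lam=8, size=1000)
online = rng.poisson(lam=7, size=1000)

t, p = stats.ttest_ind(mobile, online)

# Cohen's d: mean difference scaled by the pooled standard deviation.
pooled_sd = np.sqrt((mobile.var(ddof=1) + online.var(ddof=1)) / 2)
d = (mobile.mean() - online.mean()) / pooled_sd

print(f"p = {p:.2g}")  # tiny p-value: "statistically significant"
print(f"d = {d:.2f}")  # roughly 0.35: a modest effect at best
```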

While working hard to appear unbiased, they do mention that mobile had a significantly higher drop-out rate, took twice as long to run, and cost 55% more than the online study. But, remaining unbiased, they ask, “Is it wise to spend money on an online study for in-store work when it is proven to be flawed and subject to inaccurate data?” No, it’s not wise to do an online study for in-store work like this. But choosing this topic and technique to compare mobile and online research is at best a straw-man game with a foregone conclusion, and not much of an addition to our body of knowledge about the differences between the tools.


2 Responses to “Science is Dead – Long Live Marketing Research”

  1. Chris Robinson says:

    June 15th, 2015 at 9:26 am

    Steve, couldn’t you leave something for me to gnaw at? Brilliant demolition of a piece of unprofessional work all packaged to sell guess what?

  2. Vic Crain says:

    June 15th, 2015 at 4:36 pm

    Steve, I very much enjoyed your piece. I’m old enough to remember when disguising sales as research was considered unethical; now it seems quite common.

    My take-aways from this are that, quite unsurprisingly,

    (1) Humans have trouble following instructions regardless of method.
    (2) There is no one perfect one-size-fits-all research method.

    Was there any effort to collect product codes, or do we have a sense of how many respondents simply made up their answers in order to get whatever the incentive was? If there was a difference by method, that would be interesting.

    Again, thank you for a fun read.
