This Is A Test. This Is Only A Test.
By Steve Needel, Ph.D.
Words to live by if you grew up in America during the Cold War (and anyone over 40 is reminded of this in our current political climate). However, this is not about politics but about understanding a simple fact of a researcher’s life – sometimes you have to test an idea to see if it will work. Unfortunately, in our line of work we don’t have a lot of “facts.” Here are some common things we don’t know:
- Does social media make a difference? There’s some research that says yes and some research that says no. Worse, some studies say a social media push is effective only in specific ways for specific things. Not very helpful, is it? But we want to believe that social/digital media matters, so we keep on trying.
- How should we advertise? Never mind the newer question of digital versus traditional, which is a whole other issue. What should our ads look like? How should they be delivered (reach, frequency)? On what device(s)? How often should they be changed? You would think that with all the years of research on this topic we would have a pretty good blueprint by now.
- Does eye-tracking matter? Beyond the simple “if you don’t see it, you don’t buy it” axiom, we haven’t been able to show that attracting more attention to a product increases its sales. Again, we all believe it; we just haven’t proved it.
- Aren’t all the answers in Big Data? Gee, if I had a dollar for every time someone says this, I could retire soon. Sometimes the answer is in Big Data, sometimes it’s not. Sometimes a database has a great covariance story: every time a brand does this, here’s what happens. That’s a pretty good indicator that if you try it, you’ll come up with the same outcome. More often, you find the data is equivocal, usually because there are additional factors you aren’t considering. And sometimes you want to try something new, something no one else has tried before; Big Data won’t be much use in that case.
- Should brands be stocked horizontally or vertically? This is actually something I know about, and the answer, of course, is that it depends on the category. Again, not helpful.
Experimentation needs to be in our research toolbox. It has always been a hallmark of the scientific method, just as observation (think ethnography or data mining) and hypothesis formation are. Whether you are doing a simple online A/B test, a virtual reality test, a controlled store test, or a live test market, sometimes the only way to know whether a marketing idea will work is to test it. Over the 25 years we’ve been doing our research, we’ve found that, contrary to expectations:
- You can charge more for your product than you may think.
- Making packages more convenient for consumers may not improve sales.
- SKU reductions can improve sales for the brand and for the retailer.
- Shelf signage and displays are not always a good thing – they can actually hurt your sales.
Well-designed experiments need not be costly or time-consuming. They can:
- Provide a causal, rather than correlation-based, answer to your question; you know the differences you see are due to the test variable.
- Reduce the risk associated with a marketing action. How many new products would have performed better had they been test marketed?
- Resolve disagreements on how to market a product when competing ideas exist: test them both and see which wins.
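To make the "test them both and see which wins" idea concrete, here is a minimal sketch of how a simple online A/B test might be read out, using a standard two-proportion z-test. The conversion counts and sample sizes below are purely hypothetical, invented for illustration; they are not from any study mentioned in this article.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates.

    conv_a, conv_b: number of conversions in each group
    n_a, n_b: number of visitors (trials) in each group
    Returns the z statistic and the two-sided p-value.
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: version A converts 120 of 2,400 visitors,
# version B converts 156 of 2,400 visitors.
z, p = two_proportion_ztest(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Because visitors are randomly assigned to A or B, a significant difference here is causal, not merely correlational, which is exactly the advantage of experimentation described above.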
Harvard economist Sendhil Mullainathan has a great quote:
No one would say, “Hey, I think this medicine works, go ahead and use it.” We have testing, we go to the lab, we try it again, we have refinement. But you know what we do on the last mile? “Oh, this is a good idea. People will like this. Let’s put it out there.”