ARF re:Think 2011: The Great Neuromarketing Debate
Without a doubt one of the most buzzed-about topics at ARF re:Think 2011 (and probably in market research in general) was Neuromarketing. All of the major vendors were there: Emsense, Innerscope, NeuroFocus, & Sands Research (a special shout out to Emsense for having a great exhibit space!), and several sessions were devoted to the topic, including the much anticipated “Neurostandards” presentation – Improving Neuromarketing Measurement: Ground-Breaking Results. In case you have not been following this effort, here is the crux of the matter:
The ARF NeuroStandards Collaboration is the first study of its kind to conduct an independent peer review of neuro- and biometric market research. In this session we will describe the applications for various methods, explain which methods are suited to particular research needs, and present best practices for evaluating neuromarketing methods.
So how did it go? Well, that depends on who you ask.
Considering that the two largest players in the industry (Emsense and NeuroFocus) declined to participate, I have to question how effective the whole effort will be. Emsense objected on methodological grounds, arguing that the project was flawed because it compared many different techniques against a single standard, while NeuroFocus claims that its standards are already the best and that the industry should adopt its method as the quality benchmark.
I lean towards Emsense having the right of it, since the study looked at facial coding, biometrics, electroencephalography (EEG), quantitative EEG, facial electromyography (fEMG), steady-state topography (SST), and functional magnetic resonance imaging (fMRI). These are all very distinct methods, and although some firms combine multiple approaches to deliver what they consider the best results, it’s extremely difficult (maybe impossible?) to develop standards in an emerging space like Neuromarketing while practitioners are still all over the map in defining processes, and even the underlying science itself. Add to that the business issues of competitive espionage, operational scalability, and marketing communications around a complex topic, and it seems to me that the ARF may have its heart in the right place but is biting off more than anyone can chew right now.
As if all that isn’t enough, since Neuromarketing is still very much tied to academia and experimental labs around the world (all of the major players tout their Science Advisory Boards as validation for their approach), there is a high level of competitiveness, and even vitriol, between the firms, as only academics can muster. This actually came to the fore during the convention, with many attendees witnessing heated debates between eminent scientists over rather obscure issues like dry vs. wet electrode conductivity and muscle-noise filtering. To us casual observers it seemed like so much Sturm und Drang, although to the scientists who have devoted their lives to this work, it was as serious as serious gets.
Several outlets have already written about their take on the ARF initiative; here is a sample of what folks are saying:
These new technologies raise as many new questions as they answer. For example, ARF’s report said, “Reactions to one scene within a commercial are likely to be influenced by the preceding content. Interactions between the images, sounds and words need to be untangled to pinpoint causes of viewer response.” – Susan Kuchinskas | ClickZ
The first batch of information from the Advertising Research Foundation (ARF) NeuroStandards Collaboration Project has been released, and, perhaps unsurprisingly, the main conclusion seems to be that more research is needed. A draft of a summary document shows equivocal results. On one hand, the committee found that neuromarketing techniques can “provide important, valuable new insights for the evaluation of commercials and other visual stimuli.” On the other hand, the report notes, neuromarketing studies “should not be regarded as providing conclusive scientific evidence about research objectives—specifically, such concerns as which of several commercials or which elements of a commercial will sell more product.” – Roger Dooley | Neuromarketing
I couldn’t help but think about the field of text analytics. There are some similarities for sure. While both have been around for a while, both are also on the cutting edge. With so much growth recently there’s undoubtedly a lot of smoke and mirrors, or “magic” rather than science.
So while I’m all for eliminating the bad apples from the good, I can also see why a company which is operating in a growing/cutting edge field with a lot of new intellectual property would not be interested in participating in anything like this for various reasons. – Tom H.C. Anderson
“In neuromarketing there is no E = mc squared equation”
Richard Thorogood, Director of Strategic Insights & Analytics, Colgate-Palmolive US Company
I had the opportunity to meet with the senior leadership of Emsense, NeuroFocus, and Sands Research and discuss all of these issues. These are some of the brightest folks working in our industry today, and each was surprisingly forthcoming about the state of their businesses, the challenges facing this nascent industry, and the opportunities for further innovation available to them. Each has a different business model and utilizes a different approach, but they all agreed that their growth prospects are significant. Based on the level of interest at the sessions devoted to neuromarketing-related topics, as well as the traffic at their booths, I would have to agree.
During my conversations with these firms, I developed a few conclusions about the space. Ultimately I think all of the debate boils down to a few key points:
- Pragmatic business realities often force science to adapt. Whether wet or dry electrodes are technically superior, ultimately the solution that is scalable, cost-effective, and delivers valid results within an acceptable margin of error wins.
- For Neuromarketing to go mainstream, it has to go mobile to allow for real world observations. That means the devices have to be simple, intuitive, and yes, even aesthetically appealing for broad consumer use. It’s up to the supplier to figure out a method to account for artifacts in the data as a result of this business need.
- Neuromarketing can be used as a quantitative measure, just like heat maps or click data, or it can be directional, like sensory or qualitative concept testing. Both are valid approaches, and both fit within the traditional realms of market research. It’s just more data to help guide decisions; by itself it does not replace other methods.
- Neuromarketing helps us understand the difference between what people say and what they do, but it needs to be applied at a macro-sample level to be a projective indicator of broad behavior. Certainly the data supports the conclusion that the science is valid, but small sample sizes don’t support broadly projectable results, which is why most of the suppliers in the space are focused more on testing projects.
- Hybrid models may increase effectiveness. Combine EEG, eye-tracking, and a prediction market and you get better results. Combine sensory testing designs with biometric measurements and you get greater insights. Experimentation along these lines is going to be the key to the long-term success of the segment.
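To make the sample-size point above concrete, here is a rough back-of-the-envelope sketch (my own illustration, not from the ARF study; the participant counts are assumptions). Using the standard survey-statistics formula, the 95% margin of error for a proportion shrinks only with the square root of the sample size, which is why a lab panel of a few dozen people cannot project the way a 1,000-respondent quantitative survey can:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p observed in a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical panel sizes for illustration: a small lab-based neuro
# study (~30 participants) vs. a projectable quantitative survey (~1,000).
small_panel = margin_of_error(0.5, 30)    # roughly +/- 18 points
large_panel = margin_of_error(0.5, 1000)  # roughly +/- 3 points
```

In other words, to cut the margin of error by a factor of six you need roughly thirty-six times the sample, which is exactly the economic barrier that keeps most neuro suppliers in the testing business rather than the projection business.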
I think we’ll see each of the major players settle into the niches best suited to their particular approach. Each of them offers some unique capabilities and resources, and the market is already self-segmenting into sweet spots for each firm. For instance, Sands is probably ideally suited to sensory research, NeuroFocus takes an on-site lab approach, and Emsense follows a macro-sampling model appropriate for more quantitative studies.
Each of these players is, in my opinion, of equal quality in terms of scientific rigor, technology, and internal talent. Rather than trying to compare apples and oranges within a rapidly evolving and highly fragmented field, perhaps we should just focus on the results being delivered? There is no question that Neuromarketing works; now we just need to understand how best to use it. All of their models have been validated in hundreds of real-world studies already, and clients don’t spend the kind of money these projects cost without getting bang for their buck. In my mind that ends the debate and opens the door to a much more constructive and practical dialogue instead: how do we maximize the value of this technology?
Let’s put our energy there rather than debating minutiae like EEG headset design, shall we?