
January 19, 2017

Neuromarketing: Identifying the Fact From the Fiction

Neuromarketing has seen its fair share of pseudoscience. How can you determine the real from the fake?

By Michelle Niedziela

One of the striking narratives that plagued 2016 was the emergence of fake news. With the decline of the newspaper and the growth of viral news, more people are getting their information from social media rather than from older, more reliable news sources. Many are quick to accept what they read online as fact, and even more don’t read past the headline or check the source before accepting the message. The growth in fake news has been so large that it may even have influenced the 2016 presidential election.

Fake news, however, is not the only problem. There has also been a surge in the spread of pseudoscience. Pseudoscientific news ranges from the hilariously ridiculous (Bigfoot sightings) to the dangerous (homeopathic cures for cancer).

Unfortunately, the persistence of fake news and pseudoscience doesn’t just affect our entertainment; it can also have legal ramifications (http://www.telegraph.co.uk/technology/11834670/Woman-who-claimed-she-was-allergic-to-Wi-Fi-gets-disability-allowance-from-French-court.html).

Neuromarketing has seen its fair share of pseudoscience. There are no easy-to-use gadgets that can “read consumers’ minds.” The human brain is far too complicated to be reduced to a simple piece of plastic sitting on top of your head (that’s not to say that physiological measures can’t tell us something about consumers’ reactions to products and communications, but that’s not the same as mind reading).

If you are looking for a simple solution like that and are not interested in its legitimacy, let me redirect you here: https://www.google.com/search?q=mood+ring

Perhaps a great New Year’s resolution for 2017 is to be sure to think critically and dismiss fake news and pseudoscience.

But how can you identify neuro-fact from neuro-fiction?

My first piece of advice is to know that there is no ONE perfect tool for studying human response. Different research questions and settings require different methodologies and technologies. So if your research provider is suggesting that their widget can do everything anywhere, you are dealing with a widget salesperson who will only ever sell you a widget, not a scientist helping you to understand your consumer. And to that point, if your research provider cannot tell you the limitations of their widget, then they are not being honest with you.

But when you see “scientific” news about neuromarketing, here are a few steps to help you sort through the muck:

1. The use of psychobabble

Psychobabble is the use of words that sound scientific but are not. Neuromarketers have a habit of tacking the word “neuro” onto the front of anything to make it sound like real neuroscience. The use of these neuro-words or neuro-brands is really no more than “neuro-hype.” Often these words are just a marketing scheme to get you to believe in a product or company.

And while it’s just a name, this is why we at HCD prefer to use the term “Applied Consumer Neuroscience.” We believe this better describes the process of using a combination of neuroscience, psychology and traditional consumer research methods to better understand the consumer experience. Sure, it’s just a name, but we don’t believe that neuro- measures are meant to replace traditional research; rather, adding neuro- measures is an evolution and advancement of the existing field of market research.

2. Reliance on anecdotal evidence

In place of published studies, many neuromarketing companies offer case studies, and most do not validate their tools or methods with any scientific research. If you are not paying for a validated tool, then what exactly are you paying for?

Many people become interested in using physiological or neuropsychological measures because they believe they will be more accurate than traditional measures. They believe that participants won’t be able to lie as they might on a survey, or that difficult-to-articulate emotional reactions may be revealed through neuropsychological measures. And while that may be true, anecdotal evidence is not evidence. Any new measure, or new application of a neurological tool, must be validated before being used (and sold).

While case studies can be very informative and lead to great research ideas, thoughtful research must still be done to validate a methodology. In the world of pharmaceuticals, for example, this is very important. Just because one participant improved more on a drug than on a placebo does not mean that the drug should be approved. It still needs to be thoroughly tested; otherwise you risk relying on a false positive result.
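To make that false-positive risk concrete, here is a minimal simulation (my own illustration, not drawn from any actual trial or from HCD’s methods) of a drug with no real effect: a single participant still looks better on the drug than on placebo about half the time, while a properly analyzed trial crosses the conventional p < 0.05 threshold only about 5% of the time.

```python
# Hypothetical simulation: a "drug" with zero true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_participants = 10_000, 30

single_hits = 0  # times one participant "improves" on the useless drug
trial_hits = 0   # times a full trial reaches p < 0.05 purely by chance

for _ in range(n_sims):
    drug = rng.normal(0, 1, n_participants)     # no real effect
    placebo = rng.normal(0, 1, n_participants)
    single_hits += drug[0] > placebo[0]                          # the anecdote
    trial_hits += stats.ttest_ind(drug, placebo).pvalue < 0.05   # the proper test

print(f"Single-participant 'improvement' rate: {single_hits / n_sims:.2f}")  # ~0.50
print(f"Full-trial false-positive rate:        {trial_hits / n_sims:.2f}")   # ~0.05
```

The anecdote is a coin flip; the statistical test keeps the error rate where it belongs.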

Many neuromarketers provide anecdotal evidence as proof that their tool works. However, if your research provider cannot provide you with real evidence (published peer-reviewed papers, or at least blinded case studies with real statistical analyses), then you had best be cautious. Buyer beware.

3. Extraordinary claims in the absence of extraordinary evidence

The human brain is a complicated organ, so complicated that it can’t be duplicated and many aspects of it are still not understood. Academic neuroscience, for example, is still trying to explain even simple, vital, everyday things we do, such as eating (see recent publications here: https://www.ncbi.nlm.nih.gov/pubmed/?term=food+intake; at the time of writing, 187,055 publications still can’t tell us why we eat or stop eating).

So when I see a claim that this or that tool or approach can “read the subconscious” or predict something as complicated as consumer behavior, I raise an eyebrow. Unless a provider can show you evidence that the measure is linked to the behavior, the measure is not predictive. It is imperative that neuromarketers do the background research to prove that their tools can be used in the specific ways they claim, rather than in whatever ways sound interesting.
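What would such evidence look like? Here is a minimal sketch, with entirely hypothetical data and variable names (not an HCD method), of one common way to test whether a measure is linked to a behavior: fit the relationship on one group of participants, then check whether it predicts the behavior of participants the model has never seen.

```python
# Hypothetical out-of-sample check of a link between a neuro measure and behavior.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100
neuro_measure = rng.normal(size=n)                    # e.g., some arousal index (made up)
purchases = 0.4 * neuro_measure + rng.normal(size=n)  # behavior with an assumed weak link

# Fit on the first 70 participants, evaluate on the held-out 30.
train, test = slice(0, 70), slice(70, None)
slope, intercept, *_ = stats.linregress(neuro_measure[train], purchases[train])
predicted = intercept + slope * neuro_measure[test]

# If the measure is genuinely linked to the behavior, its predictions should
# correlate with the held-out behavior better than chance.
r, p = stats.pearsonr(predicted, purchases[test])
print(f"Out-of-sample correlation: r={r:.2f}, p={p:.3f}")
```

The specific model doesn’t matter; what matters is that the link to behavior is demonstrated on data the tool was not already fit to.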

4. Claims that cannot be proven false

When making claims about neuro- methodologies, researchers often fall into the trap of hindsight bias. Hindsight bias (https://en.wikipedia.org/wiki/Hindsight_bias) is the research mistake of asserting, after the event has occurred, that your finding is true and predictive. It’s the act of seeing the final score of the Super Bowl and then telling everyone you predicted it beforehand. No one can prove that you didn’t, and it can make you seem very smart. But it hinders the scientific process of moving the neuromarketing field forward. If we are not using real findings and making real discoveries, then we are not really accomplishing anything of value.

But more importantly, this doesn’t help our clients. The problem with this approach is that it doesn’t give credit to what applied consumer neuroscience is best used for: helping us to better understand the consumer. It’s not a replacement for current market research methodologies, so being directly “predictive” of something that could simply have been asked is not helpful. But when used as an addition to, rather than a replacement for, traditional measures, applied neuroscience can be a valuable complement to current research.

The question, then, is not whether neuromarketing could have predicted liking. If we want to know whether someone liked something, we can simply ask them. The better research question for applied neuroscience is “Why do they like it?”

5. Claims that counter scientific fact

Again, it’s not currently possible to “read the mind” with any tool. However, a recent academic study got close (sort of). Researchers ran fMRI scans on participants as they viewed a movie, and the participants watched the same movie repeatedly for three months. After three months of training on the same movie, the researchers were able to identify which movie a participant had viewed by recognizing a similar pattern in brain activity. But this is not the same as “reading the mind”: they trained people to exhibit a response and then identified that response in testing. Further, it is known that some of these identifiable patterns are synchrony rhythms in brain activity driven by blinking, which is often triggered by the way scenes are cut together. Definitely not mind reading.

Brains are really complicated (neuro-understatement of the year). They control our breathing, eating, standing, walking, and so on: everything. So there’s a lot going on up there even when we don’t appear to be doing anything but sitting quietly and still. Now imagine the amount of activity happening while you are walking through a store, and how different your brain activity might look from another person’s as they walk through the same store. You might hear different sounds or notice different people. Your experiences would be different, and so the activity in your brain would also be different.

This makes studying this sort of behavior with neuroscience tools very difficult. The acts of walking, breathing, and staying upright (balance) are very complicated things we do without having to consciously think about them. But they require a significant amount of brainpower, which creates a lot of noise in the data if what you are interested in is not how well someone is walking but what they are seeing in a store. Real-time, naturalistic experiences are not well suited to neuro-measures and require a great deal of attention to proper research design. This is the fact of the situation, and if your research provider ignores these facts, again, buyer beware.

6. Absence of adequate peer review

One of the biggest problems in neuromarketing is the absence of peer review (though some are trying to correct this problem). The scientific method is clearly about testing hypotheses. But even further, it’s about replicating results and presenting your research to the larger scientific audience for critique. However, criticism is not something that many in the neuromarketing community encourage, and the lack of a legitimate scientific peer review process for proposed methodologies has allowed many companies to get away with peddling non-validated widgets unchecked.

Because neuromarketing companies don’t provide the key details of the analysis techniques they use, it’s hard to evaluate them objectively.

7. Claims that are repeated despite being refuted

If it sounds too good to be true, it probably is.

While it would be amazingly convenient to measure neuro- responses while a consumer walks through a store, this simply is not a valid methodology. And while it would be great if we could really read the mind, it’s just not that simple. As discussed earlier, the brain is complicated, so when we measure it we need to do so using validated tools and thoughtful research design. It is possible to use applied neuroscience to better understand consumer response.

Making claims from brain response is highly difficult. Labeling a set of brain data as a signal of attention or anxiety based on one set of data is like saying “tomatoes are red; this apple is red; therefore this apple is a tomato,” and then continuing to insist that the apple is a tomato despite evidence to the contrary.

We see this in neuromarketing frequently, probably due to the lack of a strong peer-reviewed scientific process and the drive to sell methodologies. For example, while academic research has found that social setting (whether in the presence of another person or alone; see research: http://psycnet.apa.org/journals/dev/32/2/367/, http://psycnet.apa.org/journals/emo/1/1/51/) can influence facial emotional response, many neuromarketers use facial coding in group settings such as focus groups.

Unfortunately, there is a tendency among neuromarketers to keep methods secret, thereby hampering serious evaluation. This does not, however, mean that all the data is bad. With a properly designed study, it is possible to look for meaningful (statistical) changes between stimuli or products, as well as meaningful changes from baseline measures, as sketched below. And it’s possible to make inferences from those changes in a well-designed study, but those claims need to be made cautiously and be backed up by research.
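As a simple illustration of what “meaningful (statistical) changes” could look like in practice, here is a minimal sketch using made-up per-participant values (the measure and the numbers are hypothetical, not HCD data): paired tests of each stimulus against a baseline, and of the two stimuli against each other.

```python
# Minimal sketch with hypothetical data: paired comparisons of a physiological
# measure (e.g., skin conductance) against baseline and between two stimuli.
import numpy as np
from scipy import stats

baseline   = np.array([2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3, 2.1])
stimulus_a = np.array([2.6, 2.4, 2.9, 2.5, 2.8, 2.2, 2.7, 2.6])
stimulus_b = np.array([2.2, 2.0, 2.5, 2.1, 2.3, 1.9, 2.4, 2.2])

# Within-participant (paired) t-tests: change from baseline for each stimulus,
# plus a direct comparison between the two stimuli.
print("A vs baseline:", stats.ttest_rel(stimulus_a, baseline))
print("B vs baseline:", stats.ttest_rel(stimulus_b, baseline))
print("A vs B:       ", stats.ttest_rel(stimulus_a, stimulus_b))
```

The statistics themselves are simple; the point is that any claim about a stimulus should rest on a designed comparison like this rather than on a single striking trace.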

So let’s all resolve to do better in 2017.


