The One Thing Most Consumer Research Companies Are Missing

Many companies love the idea of co-creation, but don't know how to implement it.

By Nicholas Licitra

Most leaders of consumer research companies are aware of co-creation, whereby consumers, technology experts and other outsiders closely collaborate with a company for a certain length of time.

Together, they imagine, design, build and market a new product or service. This hands-on process goes far beyond traditional marketing research methods, which primarily involve customer surveys and other forms of polling.

As with any research or design process, co-creation is only as good as its implementation. Unfortunately, many consumer research companies aren’t quite making the most of this revolutionary system. What’s the main problem? What’s the one missing element that’s weakening their co-creation efforts?

Defining Terms

To start with, some brands mix up the words “co-creation” and “crowdsourcing,” which is understandable since these two methods do have some qualities in common.

When you crowdsource, you typically ask people to give your company ideas. Perhaps they’re suggestions for new items to sell, ways to improve your existing offerings or concepts for new commercials. Most likely, you’d request those submissions online. People could send them to you via email, social media or your website. Every so often, you or one of your employees could sift through those ideas and try to find some that seem workable.

In some cases, crowdsourcing entails more than just taking in ideas. For example, internet fundraising can be a form of crowdsourcing. And, once in a while, consumer research companies use this technique to find people who are willing to perform various tasks for a fee. Such assignments could include translating catalogs into other languages and filling out questionnaires.

For their part, consumers who engage in co-creation take on roles that are much more extensive. Co-creators are immersed in practically every stage of product development; their recommendations shape all aspects of the process. They supply initial concepts, and they test prototypes. Afterward, they let companies know how an item could be altered to make it easier to use and more beneficial.

Another key difference between crowdsourcing and co-creation is the number of people who are involved. With crowdsourcing, anyone can make suggestions. By contrast, with co-creation, a company will select relatively few individuals to provide assistance.

What’s more, co-creators are usually selected according to their qualifications. Brands frequently look for those with particular expertise and a flair for technology. Plus, they might seek longtime, enthusiastic buyers of their goods or people who have relevant educational and professional backgrounds.

Automating Your Co-Creation (Mobile Optimized)

Many consumer research companies have discovered that seamless automation tools can greatly enhance co-creation. That’s because, whenever co-creators come up with a great idea ― whether they’re at home watching a football game, out jogging by the lake or anyplace else ― they can instantly whip out a mobile device and describe that stroke of genius.

Otherwise, if your co-creators have to wait for an upcoming meeting to share their inspiration, they might not remember important details or they might forget about the idea entirely. Not only that, but people sometimes have second thoughts about their concepts. If they allow an idea to roll around in their heads for too long, they might lose confidence in it and ultimately keep it to themselves.

Gamification

Adding a layer of gamification and structure greatly enriches the experience. Begin with a well-designed challenge outline, clearly indicate how you want members to participate, and define their roles. Next, make it a fun competition: add a leaderboard, assign status levels via badges, and promote a collaborative experience.

Information at Your Fingertips

With your customer co-creation process in place and mobile tools set to go, you’re ready to start co-creating.

Who knows? Maybe you and your team will devise a breakthrough item, one that will be renowned throughout the marketplace for its serviceability and the sheer brilliance of its design.

Who is Doing Cocaine: Estimating Bad Behavior

The Item Count method can be used to reduce the effects of social desirability distortion.

By Michael Lieberman

Marketing researchers face a number of measurement challenges when interviewing their audiences. Certain factors limit subjects’ ability to provide accurate or reasonable answers to all questions: they may not be informed, may not remember, or may be unable to articulate certain types of responses. And even if respondents are able to answer a particular question, they may be unwilling to disclose, at least accurately, sensitive information because this may cause embarrassment or threaten their prestige or self-image.

Research projects that investigate socially risky behavior (for example, “Have you used illegal drugs in the past week?”) or, conversely, socially expected behavior (e.g., voting, religious attendance, charitable giving, etc.) are subject to what are referred to as social desirability pressures. These sorts of questions, we advise, will almost certainly yield inaccurate results if not administered correctly.

Social desirability distortion is the tendency of respondents to answer questions in a more socially desirable direction than they would if the survey were administered anonymously. A form of measurement error, it is often referred to as bias, socially desirable responding, or response distortion.

Over the past few years, we have successfully employed a questionnaire technique called the Item Count method to reduce the effects of social desirability distortion. With this method, the respondent reports only the number of behaviors on a list in which he or she has engaged, not which ones. If the average number of non-stigmatizing behaviors is known for the population, one can estimate the rate of the sensitive behavior by the difference between the average number of behaviors reported for the population including and excluding the stigmatized behavior.

Item Count Method

The item count method allows survey respondents to remain anonymous when reporting a sensitive behavior. This is accomplished by including the sensitive behavior of interest in a list of other relatively non-stigmatizing behaviors. It can also be used to estimate socially desirable behaviors, such as voting.

Example

In the example below, the researcher is attempting to identify the percentage of college students who tried or used cocaine in the past.

One of the groups received a set of five items. The participants were told that this questionnaire was designed to encourage honest responding, and were asked not to respond directly to whether any particular item was true; rather, they were asked to report how many of the five items were true.

How many of the following have you done in the past six months? [list of five relatively non-stigmatizing behaviors]

By indicating only how many items were true, respondents never directly endorse any particular item. Someone who responded 3, for instance, was indicating that three of the five items were true for him or her.

Another group of respondents is given the same list, plus one additional behavior—the one we are interested in measuring.

How many of the following have you done in the past six months? [the same five behaviors, plus the sensitive behavior of interest]

Subtracting the average number of behaviors reported by the first group from the average number reported by the second group estimates the proportion of people who have engaged in the sensitive behavior.

There are a few guidelines when designing an item-count method experiment.

  • The behaviors used on the item-count list should be such that few respondents have performed all or none of them; reporting all (or none) of the items reveals the respondent’s answer to the sensitive item and negates the anonymity.
  • Behaviors should be within the same ‘category’. For example, if one is investigating risky sexual behavior, then other risky behaviors should be included on the list. If the goal is to estimate voter turnout, other civic activities should be included.
  • A separate sample for direct reporting may be included for comparison.
  • Larger samples enhance estimate stability and accuracy.

Below are preliminary results, by gender. [table of mean item counts by group and gender not reproduced]

The base rate estimate for the behavior of interest is found by subtracting the two means: mean(Group 2) - mean(Group 1). In this example, Group 1 reported an average of 2.35 behaviors and Group 2 an average of 2.72. Thus, the base rate estimate for cocaine use in this population is 2.72 - 2.35 = 0.37: roughly 37% of our college student sample has used cocaine at one time or another.
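
For readers who want to replicate the arithmetic, here is a minimal Python sketch. It is my own illustration, not code from the study; only the two group means (2.35 and 2.72) come from the example above, and the standard-error step is an added, standard piece of survey statistics.

```python
# Minimal sketch of the item-count estimator (illustrative, not from the study).
import math
import statistics

def item_count_estimate(control_counts, treatment_counts):
    """Base-rate estimate for the sensitive behavior, with a standard error.

    control_counts   -- item counts from the group shown the short list
    treatment_counts -- counts from the group whose list also includes
                        the sensitive behavior
    """
    estimate = statistics.mean(treatment_counts) - statistics.mean(control_counts)
    # Standard error of a difference between two independent means
    se = math.sqrt(
        statistics.variance(control_counts) / len(control_counts)
        + statistics.variance(treatment_counts) / len(treatment_counts)
    )
    return estimate, se

# Shortcut using just the published group means:
base_rate = 2.72 - 2.35
print(f"Estimated base rate: {base_rate:.2f} (~37% of the sample)")
```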

Broken out by gender, men have slightly higher means of risky behavior—not surprising, given that, for example, riding a bike without a helmet or walking through a dangerous neighborhood is viewed as less risky by male students than by female students. Men also had a slightly higher estimated cocaine usage.

Closing Thoughts

Reducing measurement error is an ongoing challenge for marketing researchers. Respondents are generally unwilling to respond, or to respond truthfully, to questions they consider inappropriate for the given context, that they do not see as serving a legitimate purpose, or that are sensitive, embarrassing, or threatening to their self-image. Utilizing the item-count measure can be an effective way of reducing the misreporting caused by social desirability pressures associated with interviewer administration.

Revenue Weighted NPS Scores – All customers are equal, but some are more equal than others

Very few companies and models actually take into account the customer profile when reporting out NPS Scores.

By Vivek Bhaskaran

Net Promoter Score – An Introduction

Net Promoter Scores – we have all heard of them, and most of us have drunk the Kool-Aid. NPS Scores are a measure of customer loyalty, and thereby a reflection of profitability. Most companies measure NPS by asking customers, after a recent purchase, how likely they are to recommend the company via a simple transactional survey.

However, very few companies and models actually take the customer profile into account when reporting NPS Scores. In our experience and modeling, most companies are not leveraging the power of NPS to perfect their service delivery.

Let’s take a small example to illustrate a point:

ID   Customer Type   NPS Score
1    SMB             10
2    Enterprise      2
3    SMB             7
4    SMB             8
5    Enterprise      6

Now, let’s apply the standard NPS formula, which is the % of customers who are promoters minus the % of customers who are detractors.

In the above example:

Promoters 40%
Passive 40%
Detractors 20%

Finally, we arrive at the cumulative NPS Score:

NPS Score: 20

Note – the NPS Score is always a score that can take values between -100 and +100.

Customer Lifetime Value

Now – let’s bring in actual revenue. Let’s propose that the Lifetime Value (LTV) of an Enterprise Customer is much higher than that of an SMB Customer. The point of loyalty programs – and of rewarding customers who spend more – is to stratify customers and design programs for each spend category.

For conversation’s sake, let’s say the LTV of an Enterprise User vs. an SMB User is as follows:

Customer Type   LTV
Enterprise      $8,000
SMB             $2,000

We will see this in a minute, but the _absolute_ numbers ($8,000 and $2,000) don’t matter. What matters is that we are saying the Enterprise Customer is worth 4X more to the company than the SMB Customer from a revenue perspective. The metric in this example is revenue, but as you can imagine, you can replace revenue with profit margins; that is, customers can be stratified either by spend or by margins/profitability.

Revenue Weighted NPS Score

Now, since we have the NPS survey response at an individual level, we can compute the NPS Score taking into account the spend/revenue on a customer level.

Let’s repeat the same NPS Score calculation, but this time bring the Customer LTV into the equation:

In the above example, we have 2 promoters – but both of them happen to be SMB customers – and 1 detractor, who happens to be an Enterprise customer. Without taking LTV and Customer Tier into account, we would conclude that we have 40% (⅖) Promoters and 20% (⅕) Detractors in the system. But this would be misleading from a pure business and economic perspective: we run the risk of losing $8,000 in value while we pat ourselves on the back for keeping $4,000 in value as promoters.

To prevent this, we can “weight” the NPS Scores based on the LTV of the Customer Tier:

             Share   Revenue   Revenue Weighted
Promoters    40%     $4,000    18%
Passive      40%     $10,000   45%
Detractors   20%     $8,000    36%

How did we compute the Revenue Weighted NPS Score?

If you notice, in our sample dataset we have 2 Promoters with a cumulative revenue of $4,000 (2 x $2,000). The dataset contains 3 SMBs and 2 Enterprise customers, for a total cumulative revenue of $22,000 (3 x $2,000 + 2 x $8,000). The two Passives (one SMB, one Enterprise) account for $10,000, and the single Enterprise Detractor accounts for $8,000.

Revenue Weighted Promoters:  $4,000 / $22,000  = 18%
Revenue Weighted Passive:    $10,000 / $22,000 = 45%
Revenue Weighted Detractors: $8,000 / $22,000  = 36%

Thus, the Revenue Weighted NPS Score: 18 - 36 = -18

Revenue Weighted NPS Score:  -18
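
To make the arithmetic concrete, here is a short Python sketch – my own illustration, not part of the original model – that reproduces both scores from the five-customer example, taking the promoter/passive/detractor assignments used above as given:

```python
# Standard vs. revenue-weighted NPS for the five-customer example above.
# Per-ID segment assignment is inferred from the worked example;
# LTV is per Customer Tier.
customers = [
    ("Promoter",  "SMB"),         # ID 1
    ("Detractor", "Enterprise"),  # ID 2
    ("Passive",   "SMB"),         # ID 3
    ("Promoter",  "SMB"),         # ID 4
    ("Passive",   "Enterprise"),  # ID 5
]
LTV = {"SMB": 2000, "Enterprise": 8000}

def nps(customers, weight=lambda tier: 1):
    """% promoters minus % detractors, with an optional per-tier weight."""
    total = sum(weight(tier) for _, tier in customers)

    def share(segment):
        return sum(weight(tier) for seg, tier in customers if seg == segment) / total

    return 100 * (share("Promoter") - share("Detractor"))

print(round(nps(customers)))                  # 20  (one customer, one vote)
print(round(nps(customers, weight=LTV.get)))  # -18 (weighted by LTV)
```

The same function gives a margin-weighted score if you swap the LTV table for per-tier gross margins, as the next section suggests.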

Compare this to our non-revenue-weighted score of +20: in effect, we went from an NPS Score of +20 to -18 because we applied revenue weighting. Without this insight, most companies will continue to make decisions that run directly counter to their overall goals – which are to increase revenue and profitability.

Revenue Vs. Margins/Profit

In the example above, we focused on Revenue as the metric to anchor and weight the NPS Score. Over their lifecycle, companies are generally interested in two broad financial metrics – Top Line and Bottom Line, i.e., Revenue and Profit. If you are interested in a profit-based NPS model, the formula and the model are exactly the same: just replace revenue with profit and the weighting applies automatically. Instead of plugging in the LTV (Lifetime Customer Value), you would use the Customer Gross Margin – the margin you expect to make on each Enterprise or SMB Customer.

Revenue At Risk

If we assume that the customers who are actively NOT willing to recommend you are at risk of leaving you for a better product or service, then with the revenue weighting model we can now ascribe a clear dollar value to the revenue at risk of leaving.

In our example above, our Revenue At Risk: $8,000

Revenue & Operational Metrics

In our experience, showcasing revenue metrics to line managers has a _direct_ impact on strategy and behaviour. Line managers relate to revenue and can understand the model. As Peter Drucker said, you manage what you measure – if we want revenue and/or profitability to go up, we need to provide the tools and the underlying model to help increase them. The Revenue Weighted NPS Score allows managers to take their NPS Scores seriously, since all the metrics are tied to real-world measures like revenue and profit margins.

Flying blind does not help!

Sentiment Analysis is Simple (a Trump’s Tweets Example)

Sentiment analysis allows you to quickly gauge the mood of the responses in your data.

By Chris Facer

Social media provides a sea of information, and it can be hard to know what to do with it all. When people post their ideas and opinions online, we get messy, unstructured text. Whether it’s comments, Tweets, or reviews, it is costly to read them all. Sentiment analysis allows you to quickly gauge the mood of the responses in your data. This article takes a brief look at what sentiment analysis is, and applies some simple sentiment analysis to Donald Trump’s tweets.

What is sentiment analysis?

At its simplest, sentiment analysis quantifies the mood of a tweet or comment by counting the number of positive and negative words. By subtracting the negative from the positive, the sentiment score is generated. For example, this comment generates an overall sentiment score of 2, for having two positive words:
[Image: positive sentiment analysis example]

You can push this simple approach a bit further by looking for negations, or words which reverse the sentiment in a section of the text:

[Image: negative sentiment analysis example]

The presence of the word “don’t” before “like” produces a negative score rather than a positive one, giving an overall sentiment score of -2.
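
As an illustration of this counting approach, here is a toy Python scorer. It is my own sketch rather than the method behind the article’s examples, and the tiny word lists are placeholders for real sentiment lexicons, which run to thousands of entries:

```python
# Toy word-counting sentiment scorer with naive negation handling.
# The word lists below are tiny placeholders for a real lexicon.
POSITIVE = {"good", "great", "like", "love", "happy", "best"}
NEGATIVE = {"bad", "awful", "hate", "sad", "worst", "dishonest"}
NEGATORS = {"not", "don't", "never", "no"}

def score_sentiment(text):
    """Positive-word count minus negative-word count; a word's polarity
    is flipped when it is immediately preceded by a negator."""
    words = text.lower().replace(",", " ").split()
    score = 0
    for i, word in enumerate(words):
        polarity = (word in POSITIVE) - (word in NEGATIVE)
        if i > 0 and words[i - 1] in NEGATORS:
            polarity = -polarity
        score += polarity
    return score

print(score_sentiment("I don't like it"))  # -1: "like" is flipped by "don't"
```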

The process of reducing an opinion to a number is bound to have a level of error. For example, sentiment analysis struggles with sarcasm. But when the alternative is trawling through thousands of comments, the trade-off becomes easy to make.  A little sentiment analysis can get you a long way. This is especially true when you compare the sentiment scores with other data that accompanies the text.

Trump’s tweets

During the election campaign of 2016, much discussion revolved around who was sending out Donald Trump’s Tweets. A number of articles described how the tone of Trump’s tweets is more positive when they come from an iPhone than when they come from an Android device. The hypothesis is that Trump tweets from an Android device, and that he employs social media assistants who tweet from an iPhone. But how do you work that out?

You add the sentiment scores to a data set, and then compare the sentiment scores for the different devices. You can try this example out for yourself in Displayr.
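
A minimal sketch of that comparison with pandas, assuming a hypothetical data frame with text and device columns and re-using the toy score_sentiment function from the sketch above:

```python
import pandas as pd

# Hypothetical stand-in tweets; the real data set held 1,512 of them.
tweets = pd.DataFrame({
    "text": [
        "What a great night, the best crowd",
        "Bad and dishonest coverage again",
        "Happy to be here, thank you",
    ],
    "device": ["iPhone", "Android", "iPhone"],
})

# score_sentiment() is the toy scorer defined in the earlier sketch
tweets["sentiment"] = tweets["text"].apply(score_sentiment)

# Mean sentiment per device, mirroring the Android-vs-iPhone comparison
print(tweets.groupby("device")["sentiment"].mean())
```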

In a data set containing 1,512 tweets from @realDonaldTrump sent during the primaries, there is a small but positive average sentiment score of 0.3, with scores ranging from -5 to 6. This means that the average tweet has slightly more positive language than negative. The magnitude of the scores is small as the length of a tweet is restricted.

The power of sentiment arises when considering other variables in the data. Think of the now-famous example of the Trump sentiment gap between Android and iPhone. The mean sentiment score of Tweets from Android, 0.1, is significantly lower than the overall average of 0.3:

If these mean scores don’t sway you, then you may find the shape of the distribution more convincing:

The iPhone has a greater proportion of neutral (0) and slightly-positive (1) tweets. The Android has fewer such tweets, and a greater proportion of tweets with a negative score.

Engagement

The data from Twitter includes the number of times each tweet has been Favorited. This is used as a proxy for engagement. For this data set, the average is around 19,000. By considering how the average number of favorites varies with the sentiment, we discover another interesting pattern.

Those tweets which have a negative sentiment (scoring -2 or fewer) garner a significantly higher number of favorites on average. It would seem that Trump’s followers are noticeably more engaged by negative content.

A little sentiment analysis can reveal patterns in the data which would be difficult to discern by reading through the sea of content.

Acknowledgements

Emojis in this article come from the open-source emojione.com. Thanks to David Robinson for his blog post which inspired my recent thinking on sentiment analysis and text analysis.


Jeffrey Henning’s #MRX Top 10: AI, EQ, and Data Sets Visualized, Breached, and Perfected

Of the 5,822 unique links shared on the Twitter #MRX hashtag over the past two weeks, here are 10 of the most retweeted...

By Jeffrey Henning

Of the 5,822 unique links shared on the Twitter #MRX hashtag over the past two weeks, here are 10 of the most retweeted…

  1. GfK Strengthens Its Membership with ESOMAR – GfK has licensed ESOMAR content and services for all its
  2. ESOMAR Webinars – ESOMAR has updated their free webinars with events in March for the Young ESOMAR Society, market research in Russia, and identifying fraudulent participants in online
  3. Concern about the NHS Jumps to the Highest Level since 2003 – 27% of Britons surveyed by Ipsos MORI report that Brexit is the most important issue facing the United Kingdom, with 17% saying the NHS is the most important issue.
  4. Artificial Intelligence in Market Research – InSites Consulting presents the results of a study using predictive analytics to predict disengagement from or negative behavior in an online community.
  5. The Rise of AI Makes Emotional Intelligence More Important – Writing for Harvard Business Review, Megan Beck and Barry Libert argue that AI will displace workers whose jobs currently involve gathering, analyzing and interpreting data, and recommending a course of action – whether those jobs are doctors or financial advisors or something else. To thrive, professionals in affected industries must cultivate their emotional intelligence and how they work with others.
  6. How Interest-Based Segmentation Gets to the Heart of Consumers – Hannah Chapple, writing for RW Connect, argues that a better way to understand social-media users is to study who they follow, not what they say. Who they follow shows their interests, but what they say often must make it past self-imposed
  7. Behavioral Economics: Three Tips To Better Questionnaires – Chuck Chakrapani of Leger Marketing, writing for the Market Research Institute International’s blog, offers three quick tips for applying lessons from behavioral economics to questionnaires: beware the subliminal influence of numbers on subsequent questions, consider the issue of framing when wording questions, and ask for preferences before
  8. One Dataset, Visualized 25 Ways – Nathan Yau visualized life expectancy data by country in 25 different ways, to demonstrate there’s no one right way to visualize a dataset.
  9. How a Data Breach Battered Yahoo!’s Reputation – Emma Woollacott, writing for Raconteur, discusses how Yahoo! failed to take even minimal steps to notify and support its users after it became aware of its data breach.
  10. Big or Small: No Data Set is Perfect – Kathryn Korostoff of Research Rockstar argues that big data, survey research, ethnographic studies, focus groups, and customer analytics require business users to better understand the strengths and weaknesses of the resultant data sets.

Note: This list is ordered by the relative measure of each link’s influence in the first week it debuted in the weekly Top 5. A link’s influence is a tally of the influence of each Twitter user who shared the link and tagged it #MRX, ignoring retweets from closely related accounts. Only links with a research angle are considered.

Edward Appleton’s Impressions of #IIeX EU 2017

Edward Appleton offers a review of last week's IIeX Europe

By Edward Appleton

February isn’t the best time to visit Amsterdam, but it’s IIeX Europe time – so: grey skies, wind, rain… here we come!

This was the fourth time I attended – and I was impressed. There were apparently well over 500 attendees, a huge rise over 2016 (which was, I believe, under 400), and 30% were client-side researchers. Wow.

It was energizing as ever, a great place to network, and with multiple parallel tracks and competitions going on, it requires careful planning.

What made this IIeX different? Two things stood out:

  1. The voice of qual was well represented, with presentations from Acacia Avenue, Ipsos MORI, Northstar and, yes, our good selves at Happy Thinking People. The AQR was present, very ably represented by chairperson Simon Patterson. People from QRCA were there too. It was great to juxtapose qual and “tech” – suggesting a complementary, rather than a competitive, relationship. It reminded me of the Big Data/Qual juxtaposition at the ESOMAR conference in Berlin in 2016.
  2. The New Speakers Track. Led by Annie Pettit, people who had never been on stage before presented their own market research innovations. The talks I saw were impressive and made me wonder if my own presentation skills needed an urgent refresh! I applaud this – it’s something we should do more of: giving a platform to unheard voices.

Thematically, what stood out? I by no means have an overview – it’s impossible to “do” all of IIeX, there’s simply too much, but here’s what stuck:

  • “Crowd wisdom” – whether you’re looking for a freelance creative team to move your idea along, or want to get a first-hand understanding of real-life behaviours in unknown markets, there were a number of companies (e.g. Streetbees & Mole in a Minute) linking up different sorts of crowds directly with budget owners. Automated, real-time, fast, and, I imagine, relatively cost-efficient.
  • Automation – Zappistore continues to be a major presence at IIeX, propagating the benefits of full automation at a fraction of the cost of traditional methods. A company to continue to watch, it seems.
  • Stakeholder Engagement – a demo of the smart video software from TouchCast stunned me. Paul Field’s live demo of how a presentation could be whizzed up fast and made to look as if a professional TV studio had created it – amazing.
  • Non-conscious/implicit methods – Sentient Decision Science were a familiar and welcome presence; other, newer faces also suggested different ways (strength of attitudinal response, courtesy of Neuro HM) of accessing more authentic, dare I say System 1, responses with higher predictive validity.
  • Artificial Intelligence was a strong theme – allowing companies to mine and access knowledge in their reports much more easily, for example, or eliminating low-value, time-consuming tasks, e.g. during recruitment, by automatically identifying potentially relevant audiences.

I was interested to see the likes of big-hitting conjoint experts Sawtooth there, in the person of Mr. Aaron Hill – IIeX is getting noticed far and wide, it seems.

Overall, IIeX shows the humble visitor that “Market Research” (whatever you call it) is vibrant, but already very different to what it was just a few years ago.

Major client-side companies are already showcasing their new MR approaches – CPG giant Unilever being the stand-out company doing that at IIeX, but Heineken and the fragrance and flavor company IFF also hosted a showcase track.

The human aspect is still central – tech can help us concentrate on that, automating and removing repetitive, low-interest, non-value-added tasks.

If you do visit in future (which I would recommend), I suggest you come with a mind-set that looks to join the dots, rather than be overwhelmed by “breakthrough” or “step-change” developments.

For more on my thoughts, as well as many of my colleagues, here is a video blog we did last week:

Impressions from IIEX Europe 2017 from Happy Thinking People on Vimeo.

Tech can enable and disrupt, but it’s up to us to link things up, be imaginative, and find the sweet spot of application in whatever part of the MR arena we play in.

Curious, as ever, as to others’ views.

John Kearon Unveils System1 Group

John Kearon, CEO of BrainJuicer, unveils their new brand and explains their thinking behind the rebrand, what it means for the company and their clients, and his view on the industry over the next few years.

This morning BrainJuicer announced to the investment community their decision to rebrand as System1 Group, an integrated insights and creative agency that incorporates System1 Research (formerly BrainJuicer) and System1 Agency, their already established creative agency.

Shareholders are being asked to approve the Company’s proposed change of name from BrainJuicer Group PLC to System1 Group PLC.

Here is a summary from the release:

Over the last 16 years BrainJuicer has built an international business by applying Behavioural Science to predicting profitable marketing. At the heart of Behavioural Science is the notion that people use instinct, intuition and emotion to make most decisions.  This is known as “System 1” thinking.  Having adopted the System 1 approach to market research and successfully launched our System1 advertising agency (‘System1 Agency’), we believe the company’s growth will be better served by adopting the System1 name across the group. Within the System1 Group, we will have System1 Agency to produce profitable marketing and System1 Research to predict it. As the ‘System1’ name becomes synonymous with ‘profitable growth’, the business will be in a great position to help clients move towards 5-star marketing and the exponential growth that comes with it.

Why is this worth covering on the blog? Because BrainJuicer has been recognized as the “Most Innovative Supplier” in the GRIT 50 list for 5 straight years; they arguably have more established brand equity than almost any other research company, and certainly more than any of the “next gen” companies that have emerged in the past decade. They are masterful marketers who practice what they preach and have had an extraordinarily successful history in a short period of time. They have also been a primary driver in bringing the industry’s attention to behavioral science in all its many forms, taking the ideas of behavioral economics and applied neuroscience from a niche to a very mainstream topic.

For a company with all those claims to fame to make a shift in its branding and double down on a very specific direction is newsworthy indeed, perhaps even inspirational.

I had the opportunity to sit down with John Kearon, the Chiefjuicer himself (I forgot to ask if his new title will simply be #1 Guy) to dig deeper into their thinking behind the rebrand, what it means for the company and their clients, and his view on the industry over the next few years.

As always, John is a joy to chat with; he’s funny, smart, and provocative with that innate British coolness we Americans are secretly deeply jealous of. I hope you enjoy listening to our conversation as much as I enjoyed having it.

Neuroscience and Marketing

Marketing scientist Kevin Gray asks Professor Nicole Lazar to give us a brief overview of neuroscience.

By Kevin Gray and Nicole Lazar

KG: Marketers often use the word neuroscience very loosely and probably technically incorrectly in some instances.  For example, I’ve heard it used when “psychology” would have sufficed. Can you tell us what it means to you, in layperson’s words?

NL: Neuroscience, to me, refers loosely to the study of the brain.  This can be accomplished in a variety of ways, ranging from “single neuron recordings” in which electrodes are placed into the neurons of simple organisms all the way up to cognitive neuroimaging of humans via methods such as functional magnetic resonance imaging (fMRI), positron emission tomography (PET), electroencephalography (EEG), and others.  With single neuron recording of the brains of simple organisms, we get a very direct measure of activity – you can actually measure the neuronal activity over time (in response to presentation of some stimulus, for instance).  Obviously we can’t typically do this type of recording on human beings; fMRI, PET, etc. give indirect measures of brain activation and activity.  These are maybe the two extremes of the neuroscience spectrum.  There is overlap between “neuroscience” and “psychology” but not all people involved in what I think of as neuroscience are psychologists – there are also engineers, physicists, applied mathematicians, and, of course, statisticians.

KG: Many marketers believe that unconscious processes or emotion typically dominate consumer decision-making. Marketing will, therefore, be less effective if it assumes that consumers are purely rational economic actors. System 1 (“fast thinking”), as popularized by Daniel Kahneman, Dan Ariely and others, is commonly used by marketers to describe this non-rational aspect of decision-making. “System 1 thinking is intuitive, unconscious, effortless, fast and emotional. In contrast, decisions driven by System 2 are deliberate, conscious reasoning, slow and effortful”, to quote Wikipedia. “Implicit” is also sometimes used synonymously with System 1 thinking.  Is there now consensus among neuroscientists that this is, in fact, how people actually make decisions?  Or is it still controversial?

NL: I don’t think there is consensus among neuroscientists about anything relating to how people process information, make decisions, and the like!  “Controversial” is perhaps too strong a word, but these things are very complex.  Kahneman’s framework is appealing, and it provides a lens for understanding many phenomena that are observable in the world.  I don’t think it’s the whole story, though.  And, although I’ve not kept up with it, I believe that more recently there have been some studies that also disrupt the System 1/System 2 dichotomy.  Clearly we have some way to go before we will reach deeper understanding.

KG: Neuromarketing can mean different things to different people but, broadly-defined, it attempts to measure unconscious responses to marketing stimuli, i.e., fast thinking/implicit response. fMRI, EEG, MEG, monitoring changes in heart and respiration rates, facial coding, Galvanic Skin Response, collages and the Implicit Association Test are perhaps the most common tools used. Based on your expertise in neuroscience, are any of these tools out of place, or are they all, in one way or another, effective methods for measuring implicit/fast thinking?

NL: First, I don’t consider all of these measures to be “neuroscientific” in nature, at least not as I understand the term.  Changes in heartbeat and respiration, galvanic skin response – these are physiological responses, for sure, but even more indirect measures of brain activation than are EEG and fMRI.  That’s not to say that they are unconnected altogether to how individuals are reacting to specific marketing stimuli.  But, I think one should be careful in drawing sweeping conclusions based on these tools, which are imperfect, imprecise, and indirect.  Second, as for fMRI, EEG, and other neuroimaging techniques, these are obviously closer to the source.  I am skeptical, however, of the ability of some of these to capture “fast thinking.”  Functional MRI for example has low temporal resolution: images are acquired on the order of seconds, whereas neuronal activity, including our responses to provocative stimuli such as advertisements, happens much quicker – on the order of milliseconds.  EEG has better time resolution, but its spatial resolution is poor.  Reaching specific conclusions about where, when, and how our brains respond to marketing stimuli requires both temporal resolution and spatial resolution to be high.

KG: Some large marketing research companies and advertising agencies have invested heavily in neuromarketing. fMRI and EEG, in particular, have grabbed a lot of marketers’ attention in recent years.  First, beginning with fMRI, what do you feel are the pros and cons of these two methods as neuromarketing techniques?

NL: I’ve mentioned some of this already: resolution is the key.  fMRI has good spatial resolution, which means that we can locate, with millimeter precision, which areas of the brain are activating in response to a stimulus.  With very basic statistical analysis, we can localize activation.  That’s an advantage and a big part of the popularity of fMRI as an imaging technique.  It’s harder from a statistical modeling perspective to understand, say, the order in which brain regions activate, or if activation in one region is leading to activation in another, which are often the real questions of interest to scientists (and, presumably, to those involved in neuromarketing as well).  Many statisticians, applied mathematicians, and computer scientists are working on developing methods to answer these more sophisticated questions, but we’re not really there yet.

The major disadvantage of fMRI as a neuroimaging research tool is, again, its relatively poor temporal resolution.  It sounds impressive when we tell people that we can get a scan of the entire three-dimensional brain in two or three seconds – and if you think about it, it’s actually quite amazing – but compared to the speed at which the brain processes information, that is too slow to permit researchers to answer many of the questions that interest them.

Another disadvantage of fMRI for neuromarketing is, I think, the imaging environment itself.  What I mean by this is that you need to bring subjects to a location that has an MRI machine, which is this big very powerful magnet.  They are expensive to acquire, install, and run, which is a limitation even for many research institutions.  You have to put your test subjects into the scanner.  Some people have claustrophobia, and can’t endure the environment.  If you’ve ever had an MR scan, you know that the machine is very noisy, and that can bother and distract as well.  It also means that the research (marketing research in this case, but the same holds true for any fMRI study) is carried out under highly artificial conditions; we don’t usually watch commercials while inside a magnetic resonance imaging scanner.

KG: How about EEG?

NL: The resolution issues for EEG and fMRI are the opposite of each other.  EEG has very good temporal resolution, so it is potentially able to record changes in neuronal activity more in real-time.  For those who are attempting to pinpoint subtle temporal shifts, that can be an advantage.  In terms of the imaging environment, EEG is much friendlier and easier than fMRI in general.  Individuals just need to wear a cap with the electrodes, which is not that cumbersome or unnatural.  The caps themselves are not expensive, which is a benefit for researchers as well.

On the other hand, the spatial resolution of EEG is poor for two reasons.  One is that the number of electrodes on the cap is not typically large – several hundred spaced over the surface of the scalp.  That may seem like a lot at first glance, but when you think about the area that each one covers, especially compared to the millimeter-level precision of fMRI, localization of activation is very amorphous.  In addition, the electrodes are on the scalp, which is far removed from the brain in terms of the generated signal.  All of this means that with EEG we have a very imprecise notion of where in the brain the activation is occurring.

KG: As a statistician working in neuroscience, what do you see as the biggest measurement challenges neuroscience faces?

NL: The data are notoriously noisy, and furthermore tend to go through many stages of preprocessing before the statisticians even get to see them.  This means that an already indirect measure undergoes uncertain amounts of data manipulation prior to analysis.  That’s a huge challenge that many of us have been grappling with for years.  Regarding noise, there are many sources, some coming from the technology and some from the subjects.  To make it even more complex, the subject-driven noise can be related to the experimental stimuli of interest.  For example, in fMRI studies of eye motions, the subject might be tempted to slightly shift his or her entire head while looking in the direction of a stimulus, which corrupts the data.  Similarly, in EEG there is some evidence that the measured signal can be confounded with facial expressions.  Both of these would have implications on the use of imaging for neuromarketing and other trendy applications.  Furthermore, the data are large; not “gigantic” in the scale of many modern applications, but certainly big enough to cause challenges of storage and analysis.  Finally, of course, the fact that we are not able to get direct measurements of brain activity and activation, and possibly will never be able to do so, is the largest measurement challenge we face.  It’s hard to draw solid conclusions when the measured data are somewhat remote from the source signal, noisy, and highly processed.

KG: Thinking ahead 10-15 years, do you anticipate that, by then, we’ll have finally cracked the code and will fully understand the human mind and what makes us tick, or is that going to take longer?

NL: I’ll admit to being skeptical that within 10-15 years we will fully understand the human mind.  That’s a short time horizon and the workings of our mind are very complex.  Also, what is meant by “cracking the code”?  At the level of neurons and their interconnections I find it hard to believe that we will get to that point soon (if ever).  That is a very fine scale; with billions of neurons in the human brain, there are too many connections to model.  Even if we could do that, it’s not evident to me that the exercise would give us true insight into what makes us human, what makes us tick.  So, I don’t think we will be building the artificial intelligence or computer that exactly mimics the human brain – and I’m not sure why we would want to, what we would learn from that specifically.  Perhaps if we think instead of collections of neurons – what we call “regions of interest” (ROIs) and the connections between those, progress can be made.  For instance, how do the various ROIs involved in language processing interact with each other to allow us to understand and generate speech?  Those types of questions we might be closer to answering, although I’m still not sure that 10-15 years is the right frame.  But then, statisticians are inherently skeptical!

KG: Thank you, Nicole!

______________________

Kevin Gray is president of Cannon Gray, a marketing science and analytics consultancy.

Nicole Lazar is Professor of Statistics at the University of Georgia and author of The Statistical Analysis of Functional MRI Data. She is an elected fellow of the American Statistical Association and editor-in-chief of The American Statistician.

4 Reasons Survey Organizations Choose On-Site Hosting

Why is a portion of the industry sticking with in-house data hosting?

By Tim Gorham

Most organizations across the market research industry have chosen cloud hosting for their survey data storage. For them, it’s easier to manage, easier to budget for, and secure enough for their needs.

But a core group is not prepared to jump to the cloud. They choose instead to physically control survey data centers located on company property. And we’re very familiar with their rationale, since Voxco offers one of the few professional survey software platforms available on-premise.

So why is this portion of the industry sticking with in-house data hosting? Here are the four reasons we hear over and over again:

1. Complete Data Control

Market research organizations manage the kind of sensitive data that is commonly protected by strict privacy regulations. That means they want to know exactly where their data is at all times, so they choose to be in total control of storage and avoid third-party suppliers.

Many of our healthcare and financial services clients need to prove conclusively that their data is protected to the letter of the law. In some situations, Canadian and European clients need to prove that data is stored within their own borders. It’s not always clear how offsite data is being stored, who maintains ownership, and who else might have access to it. That can be a real worry.

On-premise set-ups often make it easier to prove total compliance. Even when cloud companies can guarantee compliance, some IT managers feel more comfortable absorbing the risk and controlling the data storage themselves. 

2. Infrastructure Costs

Cost is always a deciding factor. It often boils down to prioritizing fixed capital expenditures over monthly operational expenditures. Monthly hosting fees can be significant for large organizations with huge data requirements and numerous users. At some point, the economics tip in favor of a fixed capital investment.

This is especially true for organizations with existing infrastructure in place for data storage in-house. It’s a very easy decision for them to select on-premise hosting for their survey software.

3. Physical Server Customization

Cloud hosting providers have existing server structures. In-house hardware, however, is custom-tailored to an organization’s specific needs. This offers levels of local control, visibility, and auditability that are unattainable from cloud providers.

Retaining infrastructure control internally also allows instant fixes and improvements to how data storage is structured. The larger the cloud provider, the harder it is to request fixes or customization.

4. Internet/Bandwidth Restrictions

We’re spoiled in most of the western world with high bandwidth and uninterrupted internet connectivity. But many parts of the world are still catching up; internet and bandwidth can be slow or spotty. For these situations, hardwired internal databases are often the most productive and efficient solution available.

Sound familiar?

Do you choose to host your survey data in-house? Let us know why YOU have made that choice in the comments section below.

The Most In Demand Suppliers At IIeX Europe ’17

An analysis of the 154 private Corporate Partner meetings that took place at IIeX Europe this week and what that tells us about the commercial interests of research buyers.

IIeX Europe 2017 is happening right now, and it’s been an amazing event. With 560 attendees, it’s grown massively from previous years, and this year the client-side attendance has been especially strong, with 30% of registrants being research buyers. It seems as if the event has fully become a part of the European MR event calendar, and word has spread that IIeX is THE event to come to in order to find new partners, be inspired and challenged, and embrace innovation. Considering that has always been our mission, it’s incredibly gratifying to see our message being embraced.

Although registration metrics can tell us a lot about how we are doing, what we pride ourselves on across all GreenBook and Gen2 Advisors initiatives is how we create impact by connecting buyers and suppliers, and one of the best ways we have to measure that is via our Corporate Partner program. The premise is simple: research buyers (and in a few cases, investors) tell us which attending companies they are interested in meeting with, and we coordinate those meetings for them in private meeting rooms during the event. It doesn’t cost either party anything extra to participate; it’s a value add for all stakeholders at the event.

In addition to being a great benefit to IIeX attendees, it also gives us great data on what clients are looking for so we can continue to refine our events to meet their needs and give the industry some useful perspective!

This year at IIeX EU we scheduled 154 unique meetings for 26 different client groups from 19 brands (some brands sent teams interested in different things – P&G, for instance, has different teams for CMK and PR), with 84 different suppliers being asked to meet (many received multiple requests). That is A LOT of meetings!

The brands that joined as Corporate Partners at this event were:

City Football Group
Alpro
E.ON Energy
Facebook
Heineken
HERE
IFF
Instagram
Inter IKEA Systems
Kantar
McKinsey & Co
Mintel
Northumbrian Water Group
P&G
Reckitt Benckiser
Red Bull
Strauss Water
Test-Aankoop
Unilever

Now, I’m not going to divulge the names of the suppliers who were asked to participate, but I did a quick analysis of them by assigning each to either a Service or a Tech segment and then categorizing them by their core offering.

First, 62% of all meetings requested were with Tech companies. The definition of Tech that I used was that the primary offering was a technology solution that was either DIY or offered limited service options beyond basic project support.

38% of the meetings requested were with companies that fall into the Service category, meaning that they provide full service, although it may be confined to a specific area of focus such as nonconscious measurement or brand strategy, as opposed to a more traditional Quant/Qual full-service agency. In fact, only 11 companies fall into the traditional “Full Service” bucket, with the rest positioned more as niche consultancies focused on specific business issues or methods.

It’s instructive to look at the types of companies that clients were interested in, so I assigned each to a “specialty” category based on their positioning. A few notes on my thinking here:

  1. Nonconscious is any company that is focused on using methods related to nonconscious measurement as their primary approach. This includes facial coding, implicit, EEG, fMRI, etc… and includes those just offering technology and those who have built full consulting organizations around these approaches.
  2. Mobile includes anything that is “mobile first”, regardless of use case. If a company has built its offering around mobile devices as the primary means of collecting data – whether qual, quant, crowd-sourcing, behavioral data, etc. – it fits into this bucket.
  3. Full Service MR are companies that fit the traditional definition: they offer a range of methods and focus areas across the methodological spectrum and engage with clients with a full complement of service solutions.
  4. Data Collection is only those companies who license data collection platforms as their core business.

I think the rest of the categories are self-explanatory, and while some companies could fit into multiple ones, I tried to capture each company’s claim to fame as its primary selling point.

As you can see in the chart below, anything related to Nonconscious Measurement continues to be of high interest. This is a phenomenon we have seen at every IIeX event since the beginning. Although we are not seeing major adoption by share of projects, client-side interest in anything related to understanding the motivations of consumers outside of cognitive processes is intense. My belief is that when a validated, scalable, mobile-friendly and inexpensive solution hits critical mass, we will see the share of projects for these approaches skyrocket, just as we have seen with DIY quant and now with automation.

Surprisingly, mobile-only solutions were almost as hot, which tells me that, yes indeed, clients reached the tipping point a while ago and are now aggressively looking for new mobile-centric capabilities to augment or replace traditional approaches.

Another surprise was the number of Video Analytics meetings occurring. I suspect a symbiotic relationship between this and the other top approaches, which also use video combined with social media data: with so much video being produced by consumers as part of their daily lives and in response to research projects, there is a real need for solutions that make the analysis and curation of that video efficient and affordable. Look for this to continue to be a priority.

Finally, let’s look at the number of companies in each category that were asked to meet with clients. Remember that some suppliers were asked to meet with multiple clients, so of course there is a high correlation with the previous chart, which showed meetings by category. This serves as a nice snapshot of the types of offerings clients are looking for, as well as the types of companies that do well at IIeX in terms of business development.

IIeX has always been a bellwether for the rest of the industry, indicating what clients are looking for today to build the insights organizations of tomorrow. We’re privileged to have a first-hand view of what that looks like via our Corporate Partner program, and we’re glad we can share it with the industry as a whole in this way.