
Defining ‘Value’ to Understand What Makes People Tick

Knowing what people value most provides us with the information to create as much value as possible.


By Anouar El Haji

The most fundamental concept in the social sciences is value. Value determines what we do. Value is that ‘thing’ that needs to be created, preserved, maximized, communicated, transferred and shared. Value is ever-present and ever-changing. The pervasiveness of value can easily lead to the conclusion that our lives revolve around value. But what is value exactly? Value is often described using its synonyms, such as importance, benefit, quality, usefulness or appreciation. This isn’t very helpful because these notions are as vague as the common understanding of value. There are some who would argue that such a grand concept as value cannot be defined; accordingly, value is supposed to be something that can only be recognized. Fortunately, it turns out that there’s something special about value. A very precise and useful definition of value exists that is barely known:

Value: the maximum amount that you’re prepared to give up

This definition has four interesting properties: (1) value is quantifiable (“the maximum amount”), (2) value is about potential (“maximum … prepared”), (3) value is subjective (“you’re”) and (4) value is expressed in terms of sacrifice (“to give up”). This combination of properties provides the foundation to understand, measure and analyze value.

For example, how far would you be prepared to walk to get freshly baked bread from a friend? Whatever the distance, that is your valuation of getting that bread. Some might literally go very far to get it. Maybe your friend’s bread is known to be really good, you promised someone you’d get it or there is no more bread at home. Whatever the reasons, your willingness to walk a distance of x to get it reflects the bread’s value to you. This is (1) quantifiable, (2-3) it’s about what you’re potentially willing to do and (4) it’s expressed in walking distance, which requires effort.

The more valuable the bread becomes to you, the greater the distance you’re prepared to walk. Of course, not everyone is willing to walk the same distance. This shows that there’s no such thing as objective or intrinsic value. Value cannot be separated from people; people determine value—without people there’s no value.

In the bread example I didn’t need to use money to express value. There is something edible (the object of value) and to get it you have to walk (the sacrifice). In fact, most decisions in life don’t involve any money. How much time are you willing to spend on reading that article? How much effort are you willing to exert to fix that nasty hole in the wall? How much noise are you willing to endure to stay in a bar? These are all ways to express value.

The easiest way to express value is in terms of money. Specifically, your monetary valuation of anything is simply the maximum amount that you’re willing to pay for it. Knowing people’s willingness to pay is powerful because it allows you to predict what people will purchase.

The price of a product shouldn’t be confused with one’s maximum willingness to pay. The price of a product doesn’t necessarily reflect its value. The moment of purchase does, however, reveal a bit of information about one’s valuation. If you order a cup of espresso for $3, that decision implies that your valuation of a cup of espresso at that moment is at least $3. Your valuation might be much higher than $3, in which case you got a great deal. Or your valuation might be only a little higher than $3, meaning that if the cup were slightly more expensive you wouldn’t have bought it. So knowing how much people spent can only reveal what they were at least willing to pay, not their actual valuation.
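To make this logic concrete, here is a minimal sketch (in TypeScript, with invented prices purely for illustration) of how observed purchase decisions bound, but never pin down, a person’s valuation:

```typescript
// Each observed decision bounds a person's unknown valuation:
// buying at price p implies valuation >= p; declining at price p
// implies valuation < p. Prices here are invented for illustration.
interface Decision { price: number; bought: boolean; }

function valuationBounds(decisions: Decision[]): { lower: number; upper: number } {
  let lower = 0;        // the valuation is at least this much
  let upper = Infinity; // ...and strictly less than this
  for (const d of decisions) {
    if (d.bought) lower = Math.max(lower, d.price);
    else upper = Math.min(upper, d.price);
  }
  return { lower, upper };
}

// Bought an espresso at $3, declined the same cup at $5:
console.log(valuationBounds([
  { price: 3, bought: true },
  { price: 5, bought: false },
])); // -> { lower: 3, upper: 5 }: the true valuation lies somewhere between
```

Each additional decision at a new price narrows the interval, which is exactly why observing choices at varied price points reveals more about willingness to pay than repeated observations at a single price.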

Having a precise understanding of what value is isn’t merely a philosophical exercise. It provides a logical framework to understand what makes people tick. Knowing what people value most provides us with the information to create as much value as possible. The more value we create, the more value we can capture in return, tangible or intangible. To be aware of what value is, is in and of itself valuable.


Fame, Feeling and Fluency: When Brand Tracking Meets Behavioral Science

Instead of trying to marry outmoded metrics to modern data sources, why not reinvent brand research from the ground up?



By Tom Ewing

Brand tracking: in need of change

The great thing about science is that it simplifies and clarifies things. And if there’s one thing that needs simplifying and clarifying, it’s brand tracking. When we at BrainJuicer decided to tackle brand tracking, we took our inspiration from the behavioral sciences – psychology and behavioral economics – and came up with a model rooted in the science of how people really make decisions.

Before we look at that, we need to identify where current tracking goes wrong. Traditional brand tracking doesn’t do much to guide or predict brand growth. Marketers have often criticized it for being backward-looking, not predictive. And modern frustration with brand tracking is also rooted in its inability to get to grips with the myriad new data sets available to marketers – more online data, more customer preference data, social media data, mobile data… the list goes on.

The obvious solution – and the one most suppliers are promising – is to try and integrate new and traditional data sources. But this has been a painful process. The truth is, brand tracking has never been brilliant at taking behavioral data into account, even before the digital revolution. Its fit with CRM and sales data has been messy and required serious work.

But why has brand tracking had these problems? The major issue is that brand research has tried to do two things at once – set strategic goals and inform tactical decisions – while being a good fit for neither.

Brand trackers have always looked to give a big-picture view of a brand’s fortunes and market position. At the same time they have looked to provide tactical advice based on movements in the market. In an ideal world the big-picture metrics would let marketers make strategic decisions, and the trackers would show how they are paying off.

But it hasn’t worked out that way. Neither piece of the puzzle – strategic or tactical – has been served well by brand research.

Most of the criticism has focused on the tactical piece. Brand trackers always delivered a delayed view of data – sometimes months behind actual events. This was never ideal, but in the era of up-to-the-minute sales and social data it feels farcical. The gap in speed is a big reason why simply integrating the instant data of social media into traditional trackers hasn’t worked.

This is well understood. What’s less understood is that the strategic element of brand tracking is also deeply flawed. Put simply, we’ve been asking the wrong things. And this is where behavioral science comes in.

Fame, Feeling And Fluency: How people buy brands

Strategic branding research tends to assume that people make considered decisions about brands, becoming aware, then experienced, then finally loyal – and that the innate differences and attributes of brands guide them in this decision making. But Behavioral Science shows that this is very rare.

What the Behavioral Sciences tell us is that we humans are fast and frugal in our decision-making. The truth is that people think much less about brands than we, as an industry, previously believed. People don’t evaluate options carefully, but instead rely on mental shortcuts – rules of thumb – to help them decide between options quickly and effortlessly.

There are three key mental shortcuts that help people decide between brands. At BrainJuicer we call them Fame, Feeling and Fluency. To consumers’ fast-thinking, System 1 minds:

  • If a brand comes readily to mind, it’s a good choice (Fame).
  • If a brand feels good, it’s a good choice (Feeling).
  • If a brand is recognizable, it’s a good choice (Fluency).

These rules of thumb are what behavioral scientists call the ‘availability heuristic’, the ‘affect heuristic’ and the ‘processing fluency heuristic’.

Why should these fancy terms be of any interest to a CMO, a CEO or a company shareholder? Because large brands have created these shortcuts in spades and are beneficiaries of them; small brands haven’t (yet) but need to if they are to grow. Taken together, these three heuristics explain market share across categories and regions with an average correlation of +0.9 – a remarkably strong relationship. People hate thinking too hard about which brand to buy and avoid it whenever they can; a truly successful brand is one that people will buy without careful evaluation or consideration. The marketer’s task, expressed at its simplest, is therefore to create Fame, Feeling and Fluency shortcuts for their brand, such that it becomes the obvious, automatic, default choice.
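As a rough illustration of what an “average correlation of +0.9” means mechanically, here is a sketch that computes a Pearson correlation between a composite 3F score and market share. The brand numbers are invented, and the simple average is a stand-in: the article does not disclose BrainJuicer’s actual weighting of the three heuristics.

```typescript
// Pearson correlation between a composite Fame/Feeling/Fluency score
// and market share. All numbers are hypothetical.
interface Brand { fame: number; feeling: number; fluency: number; share: number; }

const brands: Brand[] = [
  { fame: 0.9, feeling: 0.8, fluency: 0.85, share: 32 },
  { fame: 0.6, feeling: 0.7, fluency: 0.60, share: 18 },
  { fame: 0.4, feeling: 0.5, fluency: 0.45, share: 9 },
  { fame: 0.2, feeling: 0.3, fluency: 0.25, share: 4 },
];

function pearson(xs: number[], ys: number[]): number {
  const n = xs.length;
  const mx = xs.reduce((a, b) => a + b, 0) / n;
  const my = ys.reduce((a, b) => a + b, 0) / n;
  let cov = 0, vx = 0, vy = 0;
  for (let i = 0; i < n; i++) {
    cov += (xs[i] - mx) * (ys[i] - my);
    vx += (xs[i] - mx) ** 2;
    vy += (ys[i] - my) ** 2;
  }
  return cov / Math.sqrt(vx * vy);
}

const composite = brands.map(b => (b.fame + b.feeling + b.fluency) / 3);
const shares = brands.map(b => b.share);
console.log(pearson(composite, shares).toFixed(2)); // close to +1 for these invented data
```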

The 3Fs in Action

Each of the three Fs plays an important role for brands. Fame is the dominant indicator of current market share. Feeling, meanwhile, predicts a brand’s future market share. If a brand has greater positive Feeling than its size would suggest (we call this ‘surplus Feeling’), then it will grow the following year. If a brand is coasting on its Fame and neglects its Feeling so that it drops below the required minimum for its size, then it stands to lose share the following year. We saw this with UK supermarket Tesco – the dominant player in its market, we tracked its Feeling and saw it drop sharply. The next year, it issued its first ever profit warning and has been in decline ever since, losing market share to newer, cheaper rivals – competitors who also enjoy a surplus of Feeling.

So in rethinking brand tracking, we take how a brand performs on the three heuristics and translate the scores into a 1-5 star rating. A 1-star brand will have low levels of fame and low market share; a 5-star (famous) brand will have high market share and be the most obvious choice for most people – an automatic, default choice that requires no deliberative thought. In addition, we assign a star rating prediction for the future, based on the brand’s surplus or deficit amount of Feeling.
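The exact scoring rules behind the star ratings aren’t disclosed in the article, but the mechanics might look something like the following sketch, in which invented thresholds map a Fame score to a current rating, and surplus or deficit Feeling nudges the predicted future rating up or down:

```typescript
// Hypothetical star-rating logic: the thresholds and adjustments are
// invented for illustration, not BrainJuicer's actual model.
function currentStars(fame: number): number {    // fame in [0, 1]
  const thresholds = [0.2, 0.4, 0.6, 0.8];       // hypothetical cut points
  return 1 + thresholds.filter(t => fame >= t).length; // 1-5 stars
}

function predictedStars(fame: number, feeling: number,
                        expectedFeelingForSize: number): number {
  const stars = currentStars(fame);
  const surplus = feeling - expectedFeelingForSize;   // 'surplus Feeling'
  if (surplus > 0.05) return Math.min(5, stars + 1);  // poised to grow
  if (surplus < -0.05) return Math.max(1, stars - 1); // at risk of decline
  return stars;
}

// A dominant brand coasting on Fame but carrying a Feeling deficit:
console.log(currentStars(0.85));              // 5 stars today
console.log(predictedStars(0.85, 0.55, 0.7)); // 4 stars predicted: share loss ahead
```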

Above all else, the marketer’s task is to make as many people as possible feel something positive towards their brand. Feeling simplifies and guides decision-making; it provides a ‘lift’ that helps people decide in favor of your brand over another. Besides share growth, there are many other benefits of having a surplus of Feeling – it provides a buffer against PR problems (think VW), it gives brands permission to extend into new areas, and it lets brands charge more. Feeling is what economists might call ‘demand’. If you feel nothing, you’ll do nothing; if you feel more, you’ll buy more.

As for Fluency, the early indications from our database are that it enjoys a strong relationship with price. Strong, clear brand assets make a brand easy to buy, and more powerful as a means of signalling status. The ability to charge a price premium is a brand’s most valuable asset, so brands with high Fluency find themselves in a strong position. An example is Santander, one of Spain’s biggest banks but a relatively new entrant to the UK market. It has enjoyed a remarkable rise in Fame and scores strongly on Feeling and Fluency, thanks to its distinctive branding and simple offering. It’s now felt able to raise the prices on its offers by 150%.

Strategic brand tracking in the analytics age

Fame, Feeling and Fluency offer a way out of the impasse that brand research has found itself in. The problem was that brand tracking tried to mix the strategic and the tactical, and failed on both counts. The tactical element was out of date by the time it was charted. The strategic element was based on a misconception about how people choose brands. The result has been tools that fail to predict the future and can’t even handle the present!

Instead of trying to marry outmoded metrics to modern data sources, why not reinvent brand research from the ground up? Stop trying to ride two horses at once, and separate the tactical and strategic roles. Use behavioral data analytics to build a monitoring tool that gives you just-in-time tactical insight on what your customers are really doing and saying. But then use Fame, Feeling and Fluency as navigational aids to set broad strategic priorities. They let you create easy-to-understand, top-level performance metrics that give you a way to focus and clarify the day-to-day frenzy of brand activity. Modern branding is just too fast and complex for one-size-fits-all solutions at the tactical level: marketers need analytics. But alongside them you need something more high-level, human, and predictive, which gives you the opportunity to take a big-picture view, set goals, and breathe a little.


Is Path to Purchase the Road to Perdition?

Is there anything like a shopper journey? The answer is sometimes yes and sometimes no.



By Dr. Stephen Needel

Many researchers and technology suppliers have become captivated by the process by which the shopper comes to select a product to purchase. This process goes by many names – the “path to purchase” and the “shopper journey” are common labels.  They all involve the basic formulation of awareness, consideration, preference, intention, purchase, and loyalty. As a picture is worth a thousand words, everyone who plays in this game has their own graphical depiction of what this path looks like. There are straight paths, crooked paths, funnels, circles; I’m waiting for more creative researchers to embrace other, more exotic geometric forms. I was happy to ignore all this until I came across a website that explained the path to purchase via Taylor Swift’s dating advice.

Having conceptual models is wonderful for marketers. It imposes a big-picture perspective on the daily minutiae of marketing that can’t help but be useful. Those who embrace this in their daily work may well think they are on that yellow brick road leading to the Land of Oz. My view as a researcher is different; assuming a “path to purchase” actually exists is more likely to lead the researcher down the highway to hell than to the merry old Land of Oz. There are any number of problems with a path to purchase concept from a research perspective, some of them solvable, some of them unsolvable, and some we can just ignore.

Is there anything like a shopper journey? The answer is sometimes yes and sometimes no. When we need toilet paper, my wife tells me, I go to Walmart, and I buy the product I’ve been using for as long as I can remember (hint – I don’t squeeze it). The only journey part is getting in my car. On the other hand, when we bought a new TV this past year, I talked with friends, I looked at the ads, I went to the store, I read reviews, and then made a purchase. Then, I couldn’t help myself – I completed the circle, writing an online review a month after buying. When it comes to CPG products, I’m not sure a shopper journey happens very often.

Even when we are considering a model that appropriately describes how shopping is done in a given product domain, the question of whether we can estimate that model becomes relevant. Statistically, there are tools such as structural equation modeling that would permit this. The reality, however, is that the data needed from any one person to estimate such a model is unlikely to exist.

To date, our research approach has been siloed: we worry about the individual links in a path to purchase model rather than the model as a whole. Advertising researchers worry about awareness, and how to increase the likelihood of going from awareness to consideration. Packaging researchers worry about going from consideration to preference to intention – indeed, most packaging studies use purchase intention as their KPI. Shopper marketers assume that the intention is there and worry about how to raise intention’s salience in order to make the product the shopper’s choice. Little of this gets us to a model that is useful.

Accepting that we might only be able to understand a piece of the model at any one time does not release us from the need to validate our tools against the end of the process. A tool that reliably measures consumers’ emotional reactions to advertising, no matter how slickly, is useless if it can’t be shown that differences in emotional reaction are related to differences in purchasing. We often fall into the intelligence-test trap – intelligence is what intelligence tests measure. When we test pricing or packaging, we don’t really care if purchase intent improves – purchase intent is just what that 5-point scale measures (and lately we’ve begun to think the same about NPS). We care about whether that price or that package generates more sales than another price or another package.

I’ve argued before, in this forum, that we need a model that describes purchase behavior and provides a theoretical framework for our brands. The model/theory should guide us in our primary purpose as researchers, which is to help our marketers change shoppers’ behavior. In the time-honored tradition of behavioral economics, I suggest stealing from old time social psychology.

Fishbein and Ajzen’s (1975) summary of 40 years of research on attitude formation and change offers a very simple model. A person forms beliefs about a product and about behavior related to that product. Those beliefs combine with social norms about the product and its related behaviors to form an attitude – a predisposition to like or dislike. That attitude combines with the shopping situation at the time for that person to form an intent, which leads to [purchasing] behavior. This adaptation works for any type of product, for any type of shopping trip, and focuses on behavior as the outcome. When the proper measures have been taken, we can understand what needs to be adjusted to generate the behavior we want.
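For readers who want the formal skeleton, the Fishbein–Ajzen model is usually written in expectancy-value form; the belief strengths, evaluations and weights below are all estimated empirically for a given product and population:

```latex
% Attitude toward the behavior: beliefs b_i weighted by evaluations e_i
A_B = \sum_i b_i e_i
% Subjective norm: normative beliefs n_j weighted by motivation to comply m_j
SN = \sum_j n_j m_j
% Behavioral intention, the proximal predictor of behavior B
BI = w_1 A_B + w_2 SN, \qquad B \approx BI
```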

Don’t spend time going down the highway to hell – work smarter.


For Research Suppliers, Actions Speak Louder than Advertising

A recent search for a vendor showed just how difficult some research vendors make it for clients to want to work with them.



By Ron Sellers

Everyone says they want more business.  But how many research vendors make it easy for clients to give them business?

Grey Matter Research partners with telephone field centers when we have a phone survey to complete.  Normally we like to have three or four vendor options, but for various reasons we recently found ourselves down to one.  So I decided to send out a couple of live RFPs to explore some different potential vendors.

Because we sometimes have very large projects (e.g. 3,500 completes), we only targeted field centers that promoted at least 80 phone stations, going through an industry resource listing to find likely targets.  We decided to start with price competitiveness, and then explore a small subset of potential vendors more carefully once we were comfortable that their pricing would be reasonable.  We gave vendors about a week to respond to the two RFPs.

We looked at the following 16 companies:

  • Bernett Research Services
  • Clearwater Research
  • Customer Research International
  • Directions in Research
  • Galloway Research Service
  • Harmon Research Group
  • I/H/R Research Group
  • Issues and Answers Network
  • McMillion Research Service
  • NORS Surveys
  • Precision Opinion
  • Research America Market Research Solutions
  • Survey Technology & Research Center
  • Thoroughbred Research Group
  • VuPoint Research
  • Wiese Research Associates

What happened next was a fascinating exercise.  Since we have found that some company listings in research directories are not updated frequently, the first stop was each firm’s website to find a contact person.  You would think research vendors would make it easy to submit an RFP, wouldn’t you?

One company that supposedly operated 100 CATI stations had no mention of telephone research at all on their website, and no e-mail or phone number for anything other than a panel help desk.  Other companies made it difficult to find a real person to contact, giving only “info@” or “bids@” generic e-mail addresses.  I’ve had a lot of experience with e-mails to those generic addresses being ignored for days at a time, which didn’t give me a confident start, but they got included in the bid process anyway.

Two firms provided no e-mails at all, giving only an online contact form to be filled out.  Since the RFPs were fairly detailed and already in PDF form, I wanted to attach the documents.  One vendor had no way to attach any files – everything would have to be re-typed into their form (which I was not about to do).  I used the form to explain the situation, asking for an e-mail to which I could send the RFPs.

They never responded.

The second one did have a way to include attachments, but with each attempt I was informed that the file type was not valid (both were simple PDFs).  After multiple attempts I gave up.

This eliminated three companies from our original list of 16, leaving RFPs going out to 13 firms.  Of these 13:

  • One did not respond at all – no bid, no acknowledgement, no polite declination of the opportunity, nothing.
  • One responded a day later to thank us for the opportunity, and promised a response by our deadline. They never did follow up with an actual bid and we never heard from them again.
  • One gave their first response (which also included their bids) three days after the deadline (we were told the person we sent the e-mail to was on vacation, but wouldn’t you think they’d be checking that in-box a little more frequently?).
  • One acknowledged our e-mail two days later, although they did provide a bid one hour before the deadline.
  • One responded three days later, although they did provide a bid well before the deadline.
  • Only nine acknowledged our RFPs on the same day they were sent.

In all, we received 10 bids (although one company bid on only one project, completely ignoring the second RFP without explanation).  Interestingly, although we provided almost a week for responses, one of the bids came in three days late, and five more arrived within two hours of the deadline (including one that considered 5:30 pm to be “end of day” – really stretching things a bit).  All told, just five of the 13 vendors acknowledged the RFP the same day they received it and sent a response that wasn’t bumping right up against the deadline, and only eight of the 13 ended up fully responding to both RFPs by the deadline.

Some of the vendors called or e-mailed with additional questions, or just to introduce themselves.  One who did so asked such basic questions it was obvious he hadn’t even really read the RFPs (e.g. “You want this done by phone?”).  It was no surprise when their bid came in five times as expensive as anyone else’s, and required two months to complete 400 interviews at 80% incidence.

Based on this exercise and on many years in the industry, I offer some suggestions for all research vendors (not just field centers):

  • Make it easy for me to find a contact person on your website, rather than generic e-mail addresses or contact forms. Vendor relationships are all about relationships with people, not with companies.  I can’t build a relationship with a corporate office.
  • Make it easy for me to submit an RFP. Make sure the e-mail addresses on your website and in directory listings are correct.  If you do use an online contact form, make sure I can attach files, and be certain the form actually works.  If you do provide an e-mail address, see that the inbox is checked frequently or forwarded to someone who’ll respond in a timely manner.
  • Show your interest by acknowledging my RFP within a reasonable time period.
  • Don’t be afraid to ask me questions. It impresses me when someone thinks through my project enough to have questions or make suggestions, rather than just making assumptions which may turn out to be wrong.
  • But if you do ask me questions, be sure they’re relevant. If the RFP specifies a methodology and sample size, please don’t waste my time by asking me the methodology and sample size.
  • If you’re not interested or unable to take the project, fine – but have the courtesy and professionalism to let me know. Do that and I may consider you for other projects.  Ignore me and I’ll gladly return the favor.  Or worse yet, I’ll let colleagues at other companies know what my experience was with you.

This last suggestion may raise some vendor objections, but I’ll make it anyway.  If possible, don’t wait until the last second to submit your bid.  I know you may be busy and I know that 4:30 pm qualifies as “end of day” and therefore you have met the requirements of the RFP, but if on our very first contact you’re sneaking things in under the wire after nearly a week to work on it, it suggests to me that this is what our relationship will be like if you get the project.  It’s one thing if I give you a tight timeline to respond to an RFP, but I give you four or five days and you’re still barely hitting the deadline?

When I have a variety of vendor options, do you want me to know you met the requirements of the RFP, or do you want me to view you as someone going the extra mile and not waiting until the last minute?  Would you rather your bid stand out as the only one I’m reviewing a day before the deadline, or one of five I get in at the same time an hour before they’re due?  Do you want to meet the requirements or do you want to impress me?

Some of these vendors did impress me with their responsiveness, their cordiality, their insightful questions, their detailed responses, their follow up, and occasionally even their pricing.  I’ve identified a couple I want to learn more about, and we may end up doing business together.  But I also had confirmed for me that no matter how much research vendors schmooze at trade shows, run expensive advertising to claim they want my business, create detailed listings in various research directories, and have expensive websites touting their capabilities, many don’t really demonstrate that they want my business by their actions.

And as Mom always pointed out, actions speak much louder than words.


Want To Win $25,000? Submissions & Voting Are Open for the Insight Innovation Competition at IIeX Europe 2016!

Submissions and voting are now open for the latest round of the Insight Innovation Competition (IIC), to be held at the Insight Innovation eXchange (IIeX) March 3-4 in Amsterdam, Netherlands.




Imagined and organized by GreenBook, the Insight Innovation Competition helps entrepreneurs bring disruptive ideas to life while connecting brands to untapped sources of competitive advantage through deeper insights.

The Competition works as follows:

  1. Innovators submit a great idea that will change the future of marketing research and consumer insights.
  2. The market research industry votes on the ideas that have merit.
  3. Five finalists with the most votes (and possibly a couple of wildcard entrants selected by the judging committee) are invited to pitch their ideas to a panel of judges at Insight Innovation eXchange Europe 2016.
  4. The winner gets $25,000, mentoring, fame and exposure to potential funding partners.
  5. The industry benefits from a great new solution that improves how companies understand consumers.

The Competition has been a huge success story. It’s a win for everybody: entrepreneurs with great ideas for improving the business of insights, investors looking for new opportunities in the insights space, and the corporate-side end-users of market research who are looking for new solutions.

Past winners have gone on to great success.



Stephen Phillips, CEO of past IIC winner ZappiStore, said winning “not only helped us feel great about what we were doing but also helped us attract both clients and potential investors.”

Similarly, according to David Johnson of Decooda, winning the IIC “helped accelerate the entrance of our company into the marketplace, gave us massive visibility to potential partners and clients, and led very directly to new business.”

Additionally, the influx of new thinking and new technology helps further the evolution of the market research community as a whole. The finalists have also experienced great benefits, and many have achieved increased growth as a result of IIC participation.



The $25,000 cash prize is sponsored by Kantar, Lowe’s Innovation Lab and Vision Critical, and of course another benefit of participation is the ability to connect with and explore possible relationships with these three (and other) potential partners, including the many IIeX Corporate Partners who will be in attendance at the conference.


Submissions and voting take place on the Insight Innovation Competition website at http://www.iicompetition.org until 5:00 Eastern Time January 1, 2016.


Trick or Treat, IIeX Style!

Posted by Leonard Murphy, Friday, October 30, 2015, 1:00 pm
All of our events are 30% off for Halloween weekend only.



Mark Earls on Building and Using a Better (Insights) Map

One of the great beauties of the new map of human behavior is that it is evidence-based rather than textbook doctrine: the insights into human behavior it relies on are all based on sound, repeatable experiments.





Editor’s Note: Mark Earls, “the Herdmeister”, is one of the world’s leading thinkers on brands and behavior. He is the best-selling and prize-winning author of “Welcome to the creative age”, “Brand New Brand Thinking”, “Herd: how to change mass behavior by harnessing our true nature”, “I’ll Have What She’s Having – Mapping Social Behavior” (with Profs Alex Bentley and Mike O’Brien) and “Copy Copy Copy”.

He also plays in a Ska band and is in general one of the nicest folks in the world.

We’re privileged to have him as part of the GreenBook “extended family” through his involvement with our IIeX event series, and today via  a guest post here on the blog.


By Mark Earls

In 1798 the English explorer and cartographer James Rennell returned to London from a long expedition to West Africa, charting the length of the mighty river Niger. Among the treasures he brought home was a carefully surveyed and beautifully composed map of the region, which included a striking new geological feature – the Mountains of Kong. Just as he and his contemporaries expected, the source of the mighty river was – so the map suggested, at least – amongst these towering peaks.

For nigh on a hundred years, all the maps of the region – whether made by German, French, Dutch or English hands – featured this memorably named range of mountains. Everyone accepted that they were real. Until, that is, a pesky Frenchman (yes, a Frenchman!) bothered to go to the place on the map where the mountains were supposed to be. Instead of towering peaks, he found a plateau at 1,000 metres: no peaks, no snow, no ravines, no lofty views.

Eventually, after a bizarre legal wrangle with the British Royal Geographical Society, the facts of the matter were agreed and the maps changed. So if you find yourself with a vintage map of West Africa in which the mountains appear, you can be sure it dates from the 19th Century (or is based on a map made at that time – in certain encyclopedias of the 1920s, the mountains lived on).

The moral here, of course, is that the utility of the maps we use depends to a great extent on their accuracy – the facts of the landscape are important. However plausible an idea may be – and the notion of a mighty range of mountains being one of the sources of the great river Niger very much fitted the expectations of the age – it’s essential that we check the facts of the matter.

This is what the explosion of behavioral and cognitive science in recent years has done for us in the insights community: it’s challenged our underlying assumptions about how people do what they do (and therefore how we might better understand and influence them); it’s given us better rules of thumb such as Kahneman & Tversky’s “Lazy mind” model (thinking is to humans as swimming is to cats – we can do it if we really have to…) or the role of short-hands and heuristics that we use to think “fast” (as Kahneman dubs it). Rather than be understood as a race of Star Trek Spocks (logical, considered and evidence-based deliberative decision-makers) we are mostly more like the Vulcan’s friend, Captain Kirk (intuitive, impulsive and emotional). And rather than being utility maximisers (seeking out the best or the best value), it turns out we are approximate “satisficers”, happy to go with “good enough” in most circumstances.

All of which paints a very different picture of behavior from the one we inherited from previous generations of insights professionals (just consider what lies behind the AIDA model of communication…).

One of the great beauties of the new map of human behavior this starts to paint is that it is evidence-based rather than textbook doctrine: the insights into human behavior it relies on are all based on sound, repeatable experiments.

All of this presents great opportunities to insights professionals as well as challenges: for example, what are the implications of the new map for traditional research methodologies? What if – to use a widespread analogy for the relative importance of the conscious and unconscious mind – it turns out we are researching only the White House Press Office’s view of policy and policy making, and not POTUS’ own account? Do we need new research methods, or simply new frames through which to interpret the output of the old methods?

Similarly, it’s worth asking whether we have got all of the detail right. Is the bit we have readily embraced – behavioral economics (BE), with its interest in individual cognitive biases – all we need to know? Many, like myself, argue that it isn’t, because it excludes or at least underestimates the social side of human nature, which other disciplines (like anthropology and evolutionary economics) emphasize. Equally, are the “experimentally derived” insights really universal, or are they, for example, culturally based? (Prof Joe Henrich has made a career from playing BE-type games in different cultures and producing strikingly different findings from those shown for the US-undergraduate samples which still dominate most psychology research.)

And of course, how do we turn these new insights into things – products, services, practices – that can be commercially as well as theoretically useful? And how do we engage those inside and outside our organisations with this stuff – people who are by and large less interested in the details of the map and just want to know how to get from A-to-B?

These are some of the things we’ll be exploring on the 9th November in NYC at the IIeX Behavioral Marketing Forum. I’ll be reviewing the day and seeking to provide a synthesis of the map at least (based on work I’ve been doing in recent years). If you’re at all involved in insights and the new map, you should be there.


The Modern Research Respondent: Holding Their Attention with Dynamic Questions

While there are many points that must be addressed to significantly increase engagement, Focus Vision recently studied the specific impact of dynamic questions. Here is what they learned.



By Aaron Jue

It’s a constant struggle in the world of market research – keeping an increasingly demanding respondent engaged. With attention spans at an all-time low, and technology innovations across the board, people expect and feel entitled to a certain level of interactivity while online. This includes the interface that they encounter when responding to market research outreach.

While there are many points that must be addressed to significantly increase engagement, we recently studied the specific impact of dynamic questions. These questions are more compelling, providing a graphical and interactive way for researchers to capture respondent data in online surveys. Standard HTML inputs, such as traditional radio buttons or select boxes, may not be the most user-friendly or engaging forms for today’s research respondent. Using JavaScript or HTML5, dynamic questions enable more flexible and customized design elements to hold respondent attention and address the need for greater responsiveness.

Dynamic element examples include:

  • Drag and drop
  • Sliders
  • Card sort
  • Rank sort
  • Button select

The enhanced and flexible design capabilities of dynamic questions allow researchers to create question types that are suitable for different platforms. Traditional radio button designs may be too small for mobile device screens. We can use dynamic questions to create larger, mobile-friendly forms, such as ATM-style boxes.  Visual cues – like the illusion of a depressed button when it is selected – also enhance usability and help increase survey participation among the mobile population.
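As a sketch of what such a form might look like under the hood (plain DOM scripting; any real survey platform has its own API, so treat the structure and styling here as illustrative), the snippet below renders large, touch-friendly answer buttons and gives the chosen one a “pressed in” visual state:

```typescript
// Minimal sketch of a mobile-friendly button-select question.
// Element ids and styles are illustrative, not any platform's actual API.
function renderButtonSelect(container: HTMLElement, options: string[],
                            onAnswer: (choice: string) => void): void {
  for (const label of options) {
    const btn = document.createElement('button');
    btn.textContent = label;
    // Large touch targets instead of small radio buttons
    btn.style.cssText =
      'display:block;width:100%;padding:16px;margin:8px 0;font-size:18px;';
    btn.addEventListener('click', () => {
      // Visual cue: only the chosen button looks 'pressed in'
      container.querySelectorAll('button')
        .forEach(b => (b.style.boxShadow = ''));
      btn.style.boxShadow = 'inset 0 3px 6px rgba(0,0,0,0.4)';
      onAnswer(label);
    });
    container.appendChild(btn);
  }
}

// Usage: renderButtonSelect(document.getElementById('q1')!,
//   ['Daily', 'Weekly', 'Monthly', 'Never'], choice => console.log(choice));
```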

Instead of a pick list using checkbox forms, the survey researcher may employ a shelf test to better simulate a real-world shopping experience. Using a drag and drop interface, respondents can physically rank order a set of cards rather than typing in a rank number in a text box. In both of these instances the dynamic forms establish an intuitive and user-friendly way for survey respondents to express their opinions about a brand or product.

But dynamic questions executed poorly can increase dropout rates, reduce respondents’ tendency to read questions, and even cause confusion.  We have seen designs where a respondent had to navigate a spaceship and “shoot” an item to indicate a response, or was timed moving objects through goal posts. In these instances, an additional layer of cognitive processing involved hand-eye coordination and manipulation of the computer mouse. The resulting data was erratic, as respondents focused more attention on completing the task at hand than on thinking about the question asked by the researcher.

There are common sense ways to avoid pitfalls when using dynamic questions:

  • Pre-testing: This is a critical step for any elements that stray away from standard question structures. Does the question work on multiple platforms (PC, mobile, tablet)? Is it time intensive for the respondent? Is it self-explanatory so the respondent knows what to do?
  • Teamwork: Good design also requires specialized expertise. While many market researchers may direct hired programmers on how they want their dynamic questions customized or set up, this task is better done in collaboration with a visual designer or usability specialist.
  • Standardization: Data collected using dynamic questions may not be directly comparable to data collected using traditional formats. Any comparisons with previous norms should be made with careful consideration. When employing dynamic questions, develop standards and maintain consistency in how you customize them. That way you can properly interpret the relative value of your data without worrying that the question format is biasing results.
  • Clear Benefit: Your dynamic question may require extra clicks, additional mouse movement, or dragging and physically manipulating objects across the screen. Is there a clear and substantial benefit to having respondents spend this extra time? In other words is your dynamic question that much better than a traditional HTML form?

Dynamic questions can improve the survey experience. As researchers build surveys for multiple platforms and screen sizes, dynamic questions give the designer greater flexibility to meet this challenge. They offer additional functionality for online surveys, where graphical and interactive displays allow the researcher to capture information in more engaging ways that are not possible in a traditional format. But understand, too, that if not done properly, new question types can create confusion as respondents encounter something unfamiliar and use it incorrectly or not in the intended manner.


Gain Deeper Insights from Networks PLUS Content

Seth Grimes interviews Preriit Souda on merging network and content analysis approaches to drive deeper, more impactful insights.
By Seth Grimes 

The research & insights industry — that’s market research and consumer insights — is having a hard time coming to grips with social media: chaotic, unreliable, hard to quantify… and yet an incredibly rich source of unscripted conversation. As a researcher (or a research client), how do you make sense of social, particularly when you’re accustomed to methods that allow you to ask direct questions (via surveys) and guide conversations (in focus groups) and observe and measure reactions in controlled settings? We have yet to crack construction of scientific samples of social-platform users, lacking which we can’t report statistically significant findings.


Nonetheless, research & insights professionals are working to modernize methods, to accommodate social insights. TNS data scientist Preriit Souda — 2011 ESOMAR Young Researcher of the Year — is on the front lines of this work.

Preriit graciously submitted to an interview — hard for him to find time, given a grueling schedule — in the run-up to the LT-Accelerate conference, taking place November 23-24 in Brussels. Preriit and other insights, customer experience, media & publishing, and technology leaders will be presenting on applications of language technologies — text, sentiment, and social analytics — to meet everyday business challenges.

Here, then, is Preriit Souda’s explanation of how to obtain —

Deeper Insights from Networks PLUS Content

Seth Grimes> You have remarked that too much of today’s social media analytics relies on antiquated methods, on little more than counting. So you have advocated studying networks and content in order to derive deeper insights. Let’s explore these topics.

To start, could you please describe your social-conversation mapping work, the goals and the techniques you use, the insights gained and how you (and your clients) act on them?

Preriit Souda> Networks give structure to the conversation while content mining gives meaning to that structure.

People talk about structures of conversation styles based on network analysis. I have used networks to better understand conversations on Twitter, Facebook, Tumblr, Twitter + YouTube, Weibo, etc. While these are good analyses, if you look only at a graph, often the patterns formed don’t make sense. Unless you add content mining to understand these structures, you get wrong interpretations. When you use content analysis to guide network analysis, a complete picture emerges.

In addition, clients get excited when seeing the networks (because they look cool), but then they ask why/what/how. To answer, you need content mining. For any significant insight, you need both.

For example, I worked on a campaign analysis. The campaign was handled by a big ad agency and its success was reported in a big advertising magazine. The network graph showed a decent amount of volume. But certain patterns raised questions about the conversations between certain tweeters. We looked at our text-mined data and found that these guys were artificially inflating the tweets and hence the impressions. Using both network and text mining together helped us uncover that the actual volumes reported were much less.

Further, we use text mining to understand sources of negativity or positivity. We use text mining to measure volume of brand imagery and perception changes with time and then use network graphing to see spread.

Seth> Alright, so networks plus content. Any other insight ingredients?

Preriit> Apart from studying networks and content together, the use of social meta-data in combination is quite important. Also, the idea of analysing different social networks differently (because each has a different character) and then merging the “findings” is important but missing today.

Finally, clients need to use social data in conjunction with other sources of insights — survey, CRM, store data, e-commerce etc. — to get the complete picture. When social is understood in conjunction with all these pieces of the jigsaw puzzle, true impact is realized. Social media analytics needs to up its game to be a part of a larger overall picture.

We need insight-oriented analytics and not simply counting of likes and shares.

You referred to “sources of negativity or positivity.” What role does sentiment analysis play for you and for TNS clients?

I will try to answer this question using a broader term — content analysis — and then delve into opinion mining. (I like calling it tonality analysis).

[Chart: themed attributes are linked to corresponding tones]

Content mining is the most important part of any social media analysis we do. If you do the conversion of unstructured data accurately and insightfully, subsequent analyses will make more sense and be quite robust. If your content mining is crap, all your following analysis is better not done! The basic pillar of any analysis is data. Unstructured data can’t be used directly. It has to be converted into structured data, and hence your text-mined data becomes the data feeding your models. Nowadays, I have seen people in analytical/consulting firms building econometric models based on social data. When I question them on their content mining, I realize that I can’t rely much on their analysis, because the very conversion of unstructured to structured data is faulty.

If you don’t spend time in being creative, insightful, comprehensive and accurate at this stage, I doubt your analysis.

Coming to your question on sentiment analysis: we look at sentiment as a part of content analysis. In some cases clients need a simple +/-, while in other cases clients are more insight focused and need to understand different shades of opinion with respect to different entities (brand, product, services, etc.), and some want to go further and understand those shades with perceived linkages to different attributes and imageries.

We create customized opinion mining algorithms for every project, client, and sector because every situation is different. Machines can’t understand the difference between someone speaking about nuclear topics from a political angle vs. a scientific angle vs. an educational angle.

Clients expect insights as robust as those from traditional research methods like surveys or focus groups. While in a survey or focus group you are explicitly asking people questions, in social you are mining what people say in a natural environment. So we have to understand context and how what people say can be linked to the explicit questions otherwise answered via a survey. For example, in a survey people are asked questions like “Do you associate Brand X with trustworthiness?”, while in social no one will use that lingo. So I have to find the ways people refer to such concepts, and then link them up to quantify opinions. So for us, opinions are not simply +/- but much more than that. These things make our life difficult but also exciting.

You advocate use of text mining for meaning discovery, to get at explicit, implicit, and contextual meaning in customer conversations. Could you please give an example of each type?

Well, different people use these words in different ways. Some people might disagree with my definitions or call them something different, but what I am referring to is as follows.

Explicit meaning: Say, people using the word Barclays and talking about its bad service.

Implicit meaning may be broken out as —

  • Referential Implicit: People don’t use the word Barclays but share a URL (about Barclays) and express their opinion with respect to Barclays.
  • Operational implicit: Saying something after seeing a YouTube video or in reaction to a Facebook post.
  • Conversational implicit: Talking to people who have a very high probability of being linked only to the topic you are mining for. They might not use the words you are looking for, but there is a very high probability that they are talking about things of your interest.
  • Using images to express: Sharing pictures with minimal words to express their opinion.

Contextual meaning may also be broken out —

  • By Geography: Certain words mean different things in different geographies, and hence the importance you give to them, in order to understand intensity, varies. Plus, we often need to tweak our algorithms to take into consideration the different lingo styles of people from different origins within a given geography.
  • By Sector: Certain phrases or words mean different things in different subjects and contexts. When interpreting those words or phrases, the context has to be properly understood by our algorithms.
  • By Time: The meanings of certain words/phrases change over time or are influenced by ongoing events. So an algorithm that is right at certain times can be wrong at others. For example, when people say positive things about Lufthansa airline staff, that translates to goodwill for the airline. But during adverse times — when most negatives are expressed against management or the brand in totality — staff may be misperceived negatively.

What text analytics techniques should forward-looking researchers master, whether for social or survey research or media analysis?

I think I am using up a lot of your time, so I will try to keep it short. Without going into any technical details, I think linguistic-library-based techniques are useful along with machine learning techniques, so someone trying to enter this area should be aware of both and be ready to use both. I feel that nowadays a lot of people have a bias towards ML, which is right in some cases, but in others I don’t feel it gives the desired results. So I believe a more combinative approach should be used.
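A toy illustration of such a combinative approach is sketched below; the word lists, the neutral ML stub and the 50/50 blending weight are all invented for illustration, whereas a real system would use curated linguistic resources and a trained classifier:

```typescript
// Blend a lexicon-based tonality score with a machine-learned one.
// The word lists and the 0.5 blending weight are purely illustrative.
const positive = new Set(['good', 'great', 'love', 'helpful']);
const negative = new Set(['bad', 'awful', 'hate', 'rude']);

function lexiconScore(text: string): number {   // returns a score in [-1, 1]
  const words = text.toLowerCase().match(/[a-z]+/g) ?? [];
  let hits = 0, score = 0;
  for (const w of words) {
    if (positive.has(w)) { score += 1; hits += 1; }
    if (negative.has(w)) { score -= 1; hits += 1; }
  }
  return hits === 0 ? 0 : score / hits;
}

// Stand-in for a trained classifier; a real system would call a model here
function mlPositiveProbability(_text: string): number {
  return 0.5; // neutral placeholder probability
}

function combinedTonality(text: string, weight = 0.5): number {
  const ml = 2 * mlPositiveProbability(text) - 1; // map [0,1] -> [-1,1]
  return weight * lexiconScore(text) + (1 - weight) * ml;
}

console.log(combinedTonality('Great service and helpful staff')); // 0.5
```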

What best practices can you share for balancing or tempering automated natural language processing, including sentiment analysis, with human judgment?

Different people look at this problem in different ways. I can talk about certain overarching steps which involve humans at different points to improve results.

Start with good desk research by the content analyst, followed by inputs from a subject matter expert. At both stages, create and refine your mining resources. Bring in social data and then refine further. Create your model and get it checked by a linguist along with the subject matter expert. Both will give their own perspectives, and sometimes the differences between them can help you refine your model. Test with new data across different times. (Social data is often influenced by events — some known and some unknown.) Monitor your performance until you reach around 70-90% agreement with the agreed model outputs.
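The monitoring step reduces, at its core, to measuring agreement between the model’s labels and a human-coded validation sample. A minimal sketch follows (the 80% target is illustrative, sitting inside the 70-90% range mentioned above):

```typescript
// Percent agreement between model output and human coding on a
// validation sample; the 0.8 target is illustrative.
function agreement(modelLabels: string[], humanLabels: string[]): number {
  if (modelLabels.length !== humanLabels.length) throw new Error('length mismatch');
  const matches = modelLabels.filter((l, i) => l === humanLabels[i]).length;
  return matches / modelLabels.length;
}

const score = agreement(
  ['pos', 'neg', 'neutral', 'pos'],   // model labels
  ['pos', 'neg', 'pos', 'pos'],       // human coder labels
);
console.log(score >= 0.8 ? 'ship it' : `keep refining (${(score * 100).toFixed(0)}%)`);
```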

You’ll be speaking at the LT-Accelerate conference, topic “Impact and Insight: Surveys vs. Social Media.” What are the key challenges your presentation will address, and could you hint at key take-aways?

I have been using social media data alongside surveys for almost three years. It’s been a challenging ride and continues to present new challenges.

I will talk about some of the things that I have discussed in the questions above. I will talk about my personal experiences using social to answer client questions and the solutions that I have found to work nicely in my context. I will also talk about some of the problems I face. I will try to use examples while protecting client confidentiality.

People can look at my past work to get a sense of my approaches and challenge me or make suggestions. My talk will be informal and I would prefer the audience be open in sharing thoughts.

Finally, what’s on your personal agenda to learn next?

Learning Econometric Modeling and sharpening my skills in certain scripting languages.

Thanks Preriit!

Again, meet and hear from TNS researcher Preriit Souda — and research/insights leaders from Ipsos, DigitalMR, Deloitte, Xerox, and other organizations — at the LT-Accelerate conference, 23-24 November in Brussels.


Toto, we’re not in Kansas anymore!

How quickly is change happening and what are corporate clients saying about the rate of change and their role in this new world?



Editor’s Note: Our colleagues at Cambiar have just released their 4th Annual Future of Research Report, which, along with our own GreenBook Research Industry Trends Report and the ESOMAR Annual Report, serves as one of the three key strategic planning reports for much of the industry. Each report provides insight into specific business questions, and one of the key areas that FoR explores well is the perception of change occurring on the client side of the industry.

In this post Beth Rounds pulls out some of the results from FoR that indicate that perhaps the pace of change we have been discussing for many years here on GreenBook has not just accelerated, but has passed the tipping point and the changes in the industry are both rapid and massive at all levels.

We’re hard at work on analyzing the latest data from the GRIT survey and will be releasing the new report at the end of November. We see corroborating trends in GRIT as well, and the gist certainly seems to be that 2016 (and beyond) will be interesting years for the industry indeed.


By Beth Rounds

It’s great to be back working with the Cambiar team as we assist clients, both corporate MR teams and global agencies, with their transformation challenges and, in many cases, their opportunities.

If you are embracing the notion that we aren’t in Kansas anymore, it is likely that you are re-thinking your business model.  At Cambiar, we see three models needing transformation, driven by three existential questions that manifest themselves in three meta-trends:



So how quickly is change happening and what are corporate clients saying about the rate of change and their role in this new world?

In our 4th Annual Future of Research Survey, when asked to evaluate the pace of change in the last two years (on a 5-point scale where 1 was “minor” and 5 was “massive”), fully 49% of suppliers and 37% of clients opted for the top two boxes. This would suggest that suppliers, downstream of changes in client organizations, are experiencing that change fairly viscerally in their own organizations and business models. But it does not end there: when asked to evaluate the likely rate of change over the next five years, clients and suppliers alike expected a significant increase in the pace.



The good news, in the midst of all this change, is that fully two-thirds of clients feel they are having a greater impact on the decision-making of their business partners than they did two years ago. Only half of suppliers felt the same, a figure open to numerous interpretations that will need to be the subject of further study. Interestingly, these figures are even higher when the researchers involved are integrating numerous sources of data and/or using social media analysis as a complement to more traditional methods.

And who are the key partners upon whom corporate researchers are having greater impact?

  • Brands and marketing (80%)
  • Senior management (C-Suite) (64%)
  • Customer marketing (51%)
  • Corporate strategy (43%)
  • Sales (35%)

It is probable that we would not have seen such strong identification of the C-Suite and Corporate strategy folks as key business partners for research even five years ago.

It is also clear that clients are excited about their future role. When asked to identify the expected differences in their roles five years from now, participants stressed higher-order aspects of the job, including strategic thought partnership, acting as consultants and opportunity identifiers, and providing business recommendations.


This chart also highlights a number of other findings on the client side of our profession:

  • Today’s roles remain relatively limited compared to the aspirations we saw in the 1st Annual Future of Research Report. Primarily, the roles of corporate market research and insights functions are still seen as voice of the customer and insights generation (with little thought as to how those insights are then taken through the rest of the business).
  • The proportion of the clients saying that they play the role of strategic thought partner is identical today to what it was in our first annual report (37%). Given that the aspiration to become a strategic thought partner was as strong then as it is now, it is clear that progress in the client world is uneven at best.
  • While clients are aspiring to higher order roles, it is also clear that they expect to be taking on more roles in the future—even ones which had largely been confined to the back room, such as “risk reducer.” Will they have the capacity to fulfill all these roles? Will there be the corporate vision to let this be so? Disturbingly, only 44% of clients rate the role of market research in their own organization as being “clear” or “very clear.”

So what is holding us back? What are the barriers to success in market researchers’ quests to have more impact and play a greater role? When broken down, it seems to be a story either of reduced budgets or of research not being able to be in the right place (structurally or tactically) at the right time to influence a decision.




Interestingly, clients tend to emphasize the tendency for senior management to make decisions on gut instinct, a factor that the Corporate Executive Board brought to the forefront two years ago when it found that only 5% of customer-centric decisions actually involved research. Suppliers tended to emphasize market research departments not having enough influence among senior management. This would suggest that suppliers have less confidence in their clients than perhaps they ought!

Perhaps most intriguing of all is that large clients (those with corporate revenues in excess of $10 billion) are much more likely to emphasize the influence of large consulting firms as usurping the research or insights department as thought leaders within the organization. Are the years of researchers shouting, “The consultants are coming!” really now starting to come true?

Key Takeaways From Our Research

When the 1st Annual Future of Research Report was released, clients and suppliers alike forewarned of major change to come. That change has arrived, and its pace is only expected to quicken in the next five years.

Among clients there is a genuine feeling of having made progress in terms of the impact that they are having on their internal business partners, whom they identify as being much higher up in the organization than in the past.

There is both anticipation and aspiration in the roles that clients believe they will be playing in the future, but this is not quite matched by the progress we have seen in their roles in the last five years. Will aspirations be fulfilled or does disappointment lie ahead?

Some of the barriers that may lead to disappointment lie in reduced budget and headcount, as well as market research just not being in the room when decisions are made. In larger client companies, the consultants may be there ahead of them.

Next Blog – How MR Agencies Are Winning In the Change Game.