
Technology is Not the Answer to Better Insights

Instead of moving towards passive/reactive listening options, let’s pivot towards a proactive relationship with the public.


By Kevin Lonnie

Whether you were happy with the recent election results or are currently searching for affordable Canadian housing options, odds are you were surprised by them.

Well, how did the polls get it so wrong this time? Of course, there is no single factor involved. But a recent Ad Age article went so far as to place part of the blame on voters themselves.

“…part of the shame belonged to Trump voters, many of them unwilling to admit, particularly to live human beings on the other end of the phone, their plans to vote for the president-elect.”

          – Ad Age (Nov. 11, 2016)

When we start blaming the public for our inaccuracies, we have gone off the rails.

Clearly, there is a whole litany of possible solutions better integrated than our current reliance on phone polling. As the quote often attributed to Einstein goes, the definition of insanity is “doing the same thing over and over again and expecting a different result.”

All right, how do we replace/augment our current polling tactics? The general consensus seems to be rallying around better modeling, algorithms and passive listening. In other words, if consumers/voters won’t tell us the truth, we’ll figure it out ourselves.

Emblematic of that trend, IIeX this week is hosting “The Forum on Nonconscious Consumers” in Chicago.

Apparently, since the public is not to be trusted, our only option is to surreptitiously deduce the hidden truth via technology.

I think we’re moving in the wrong direction.

Of course, there’s a place for behavioral economics/neuromarketing/eye tracking/text analytics, but they all stink at answering the fundamental question “why?”

The role of MR is to help companies make better decisions (e.g. launch a product, kill a bad concept, etc.). We only get there by acting as a conduit for the consumer.

And that’s why I feel we should find ourselves more in touch with 21st-century social mores and allow for a more interactive/reciprocal relationship. Did we really ever understand the disgruntled Trump voter’s journey? Did we put them in a position to drive the car and tell us how they arrived at their viewpoint on election day? Will greater reliance on nonconscious technology answer any of those fundamental questions?

If we are to arrive at the right answer, we have to double down on our commitment to the public. We need new techniques that empower the customer, so they can feel comfortable in sharing their viewpoints.

In other words, instead of moving towards passive/reactive listening options, let’s pivot towards a proactive relationship with the public.

Our relationship with the customer should be the epicenter of finding the right answer. Technology remains a means to that end, but by no means is technology the answer in itself.


UAlbany Emoji Study: My Smiley May be Your Smirk

There are differences in the choices of emoticons across languages, just as there are differences in the choice of words that people of different languages use.


By Debra Caruso Marrone 

Emojis. The smiley face. The angry face. The wow! face. We think we all know what they mean.

Psychology professor Laurie Beth Feldman knows differently.


Posters for Feldman’s colloquium in the Women’s College at the University of Qatar in Doha last week.

“In fact, there are differences in the choices of emoticons across languages, just as there are differences in the choice of words that people of different languages use,” she said. “The sense of a word can differ by context, and it is almost certain that the same applies to emoticons. Emotions are universal, but their expression is not always.”

Feldman gave a well-attended colloquium last week in the Women’s College at the University of Qatar in Doha, where she was warmly welcomed by faculty and students.

She recently asked students whose first language is Arabic to give an example of an emoticon that does not translate well. They would not use this smiley 🙂 to indicate happiness or joy. They said they use it for something more superficial and maybe even to hide anger or sarcasm.

Feldman, a cognitive psychologist interested in language, joined the UAlbany faculty 26 years ago. She examines language processing (speaking and reading) in both native and non-native speakers of a language, with special attention to word structure, such as how the past tense of verbs is formed (e.g., walked takes an “-ed” but ran does not).

“The smiley has multiple meanings, even within a language. The classic example is to indicate that you intend to be humorous or sarcastic by including a smiley, just as one might smile if conversing person-to-person,” she said. “People also use emoticons to gain consensus. It would be the analog of saying, ‘Right?’ or ‘You know what I mean?’ in a spoken conversation.”

Feldman reported about the behaviors of adult scientists who do not speak the same first language and who worked together for four years, examining how they communicated remotely to control a telescope. She found they alter their use of emoticons and vocabulary depending on whom they are talking to.

“This style of coordination across speakers has been documented for vocabulary, grammar, emotions, gestures and now emoticons in bilingual speakers,” Feldman said.

The paper is now under review at Bilingualism: Language and Cognition, with co-authors Cecilia R. Aragon, Nan-Chen Chen and Judith F. Kroll.


Feldman in front of a wall of emojis in the Women’s College at the University of Qatar.

Some of Feldman’s work on emoticons across different languages is used in the Psychology of Languages courses she teaches at UAlbany to demonstrate how we learn about behavior by analyzing patterns in big data. She also directs the undergraduate honors program in psychology.

Feldman was invited to Qatar by her host, Yousri Marzouki, who is from Aix-Marseille University in France and has a visiting teaching position at the Women’s University. She knew him from a panel she organized last year at the American Association for the Advancement of Science’s (publisher of the journal Science) annual meeting in San Jose, Calif.

The two researchers have begun to collaborate on analyzing tweets from the Paris attacks in 2015.

“We collected 250,000 tweets from the Arab attack in February 2015,” Feldman said. “We analyze patterns, such as whether people who tweet with an ‘I’ pronoun (I, me, my, mine) use more words associated with excitement (like praise, freedom, abuse and betrayal) than people who tweet with a ‘we’ pronoun (we, our, ours).” Marzouki, Feldman, her former student Samira Shaikh and current graduate student Eliza Barach are analyzing those tweets.
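The kind of pronoun-based pattern analysis Feldman describes can be sketched in a few lines. This is a hypothetical illustration, not the team's actual pipeline: the word lists and sample tweets below are invented for demonstration, and a real study would use much larger validated lexicons.

```python
import re

# Hypothetical word lists for illustration only; the study's actual
# lexicons are not reproduced here.
I_PRONOUNS = {"i", "me", "my", "mine"}
WE_PRONOUNS = {"we", "us", "our", "ours"}
AROUSAL_WORDS = {"praise", "freedom", "abuse", "betrayal"}

def tokenize(text):
    """Lowercase a tweet and split it into simple word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def arousal_rate_by_pronoun(tweets):
    """Compare the share of arousal-related words in 'I'-tweets vs 'we'-tweets."""
    counts = {"I": [0, 0], "we": [0, 0]}  # group -> [arousal words, total words]
    for tweet in tweets:
        words = tokenize(tweet)
        wordset = set(words)
        if wordset & I_PRONOUNS:
            group = "I"
        elif wordset & WE_PRONOUNS:
            group = "we"
        else:
            continue  # tweet uses neither pronoun category
        counts[group][0] += sum(w in AROUSAL_WORDS for w in words)
        counts[group][1] += len(words)
    return {g: (a / t if t else 0.0) for g, (a, t) in counts.items()}

# Invented sample tweets, purely to exercise the function.
tweets = [
    "I feel the betrayal and abuse of my freedom",
    "We stand together with our city",
]
rates = arousal_rate_by_pronoun(tweets)
print(rates)  # the "I" tweet carries a much higher arousal-word rate
```

The same group-then-compare structure extends to the other coordination signals the article mentions (vocabulary, grammar, emoticons) by swapping in different category sets.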

About the University at Albany

Educationally and culturally, the University at Albany-SUNY puts the world within reach for its more than 17,300 students. A comprehensive public research university, UAlbany offers more than 120 undergraduate majors and minors and 125 master’s, doctoral, and graduate certificate programs. UAlbany is a leader among all New York State colleges and universities in such diverse fields as atmospheric and environmental sciences, business, criminal justice, emergency preparedness, engineering and applied sciences, informatics, public administration, social welfare, and sociology, taught by an extensive roster of faculty experts. It also offers expanded academic and research opportunities for students through an affiliation with Albany Law School. With a curriculum enhanced by 600 study-abroad opportunities, UAlbany launches great careers.


It’s Time For Facebook To Assume Responsibility

As a media juggernaut, a leading news source for people and an effective advertising vehicle for marketers, Facebook has both a corporate and a social responsibility.



By Doron Wesly

First, a few facts to level set the conversation:

  • I love Facebook
  • I spend too much time on Facebook
  • I love staying up to date on family and friends thanks to Facebook
  • I love advertising on Facebook (it’s effective)
  • I am amazed that 1.8 BN humans are connected via Facebook
  • I am in awe of their business growth and operational efficiency
  • I salute FB & GOOG: together they captured 75% of all NEW digital ad spend and 85% of all digital spend in Q1 ’16.
  • FB accounts for 43% of US digital ad revenue growth in 2016 (GOOG accounts for 60%)
  • I love being reminded of past special moments (10 year memory lane)
  • Thank you Facebook

All of that said, as a media juggernaut, a leading news source for people and an effective advertising vehicle for marketers, Facebook has both a corporate and a social responsibility, especially when it is one of only two companies in the space still growing. This responsibility falls into two basic categories: advertising and news.


Mistakes happen. Miscalculations happen. Life happens.

They happen less when a structured independent audit is enforced and in place. It will help FB and all of us. Audits help uncover issues. Audits promote processes. Audits uncover the small things that you may overlook. Audits bring the best (minds) of the best to solve issues – collaboratively. Audits enable transparency, which promotes trust. Audits yield innovation. Audits support co-creation. Audits applaud collaboration. Audits reduce mistakes and identify them more quickly.

Internal audits were a start.

External audits are the way forward.

Facebook today initiated the publication of a new “Metrics FYI” blog. On it, the company has identified several new errors – none of which directly affects the company’s billings – including the following:

  • On one of its dashboards, “one summary number showing 7-day or 28-day organic (not paid) page reach was miscalculated as a simple sum of daily reach instead of de-duplicating repeat visitors over those periods”. On average, de-duplication will reduce 7-day reach by 33% and 28-day reach by 55%. The “bug” has been live since May and will be corrected in the next few weeks

  • Organic reach metrics will now refer to viewable impressions, which will reduce reported reach by 20% on average

  • Measurement of time spent on Instant Articles (relevant to publishers rather than to advertisers) has on average been over-reported by 7-8% since August of 2015, caused by a calculation error

  • Measurement of clicks intended to capture clicks from posts to apps or websites (“referrals”) has been including clicks to view photos or videos on Facebook. 30% of the measured clicks occurred on Facebook instead (although only 6% on average for the top apps that look at this data most frequently)
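The de-duplication error in the first bullet is easy to reproduce. As a minimal sketch (the visitor IDs and daily logs are invented for illustration), summing daily reach counts a repeat visitor once per day, while true multi-day reach is the size of the union of the daily visitor sets:

```python
# Hypothetical daily reach logs: sets of (anonymized) visitor IDs per day.
daily_visitors = [
    {"u1", "u2", "u3"},        # day 1
    {"u2", "u3", "u4"},        # day 2
    {"u1", "u3", "u5", "u6"},  # day 3
]

# Buggy multi-day reach: a simple sum of daily reach counts repeat
# visitors once per day they appear.
buggy_reach = sum(len(day) for day in daily_visitors)

# Corrected reach: de-duplicate by taking the union of the visitor sets.
true_reach = len(set().union(*daily_visitors))

print(buggy_reach, true_reach)  # 10 vs 6 unique visitors
```

The heavier the overlap between days – typical for a page’s loyal audience – the larger the overstatement, which is consistent with the 33% and 55% average reductions reported above.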

I implore the leadership @facebook – Mark Zuckerberg, Sheryl Sandberg (@sherylsandberg), Carolyn Everson (@ceverson) and Daniel Slotwiner (@DanielSlotwiner) – to take ownership of this and submit @facebook to full @MRC audits of relevant metrics. As a person who loves our industry, understands the power of marketing and supports its growth, I want to ensure responsible & sustainable growth.

All marketers on Facebook, from small mom & pop shops that spend $1,000 to Fortune 500 companies that spend $100,000,000, and everyone in between, deserve accountability. The video metrics & newly discovered mistakes may not affect billing – they do affect perceived and calculated effectiveness & efficiency. It is absolutely false to claim that “The miscalculations did not impact commercial transactions between Facebook and its partners,” since I know from my own return on marketing investment calculations that the mistakes directly impact my conclusions on the actual investment justification – not just for my company, but for scores of fellow CMOs’ investments, including my wife’s.


According to PEW Research Center:

  • In 2004, only 12% of US adults got their news from digital platforms.
  • In 2016, 81% of US adults get their news from digital platforms.
  • 62% of US adults get their news from social platforms like Facebook.

That brings us to Facebook’s news feed. You may not (yet) call yourself a news organization.

You are.

Facebook and its news feed algorithms are just that: a news organization powered by Artificial Intelligence & editors (people).

The professionalism, reliability and public accountability of a news organization are three of its most valuable assets.

Journalists in the U.S. and E.U. have led in the formulation and adoption of these standards, and such codes can be found in news reporting organizations in most countries with freedom of the press.

The Society of Professional Journalists  in its Code of Ethics states:

“…public enlightenment is the forerunner of justice and the foundation of democracy. The duty of the journalist is to further those ends by seeking truth and providing a fair and comprehensive account of events and issues. Conscientious journalists from all media and specialties strive to serve the public with thoroughness and honesty. Professional integrity is the cornerstone of a journalist’s credibility.”

The Radio Television Digital News Association Code of Ethics centers on: public trust, truthfulness, fairness, integrity, independence and accountability.

While various existing codes have some differences, according to the “Canons of Journalism” (a.k.a. the code of ethics) the common elements include the principles of truthfulness, accuracy, objectivity, impartiality, fairness and public accountability.

Over the past election cycle we have witnessed how fake news has taken over the news feeds on Facebook. This must stop.

Facebook, as a de facto news organization, has the responsibility under the Radio Television Digital News Association Code of Ethics to put a mechanism in place to reduce, and work towards eliminating, fake news in its news feed and to ensure truthfulness and integrity. This mechanism may include enhanced artificial intelligence, human editors and a filter of news organizations that adhere to the code of ethics.

Free press is a critical pillar of democracy. We must protect it. We must promote it.

Our collective future depends on it regardless of your political beliefs.


Have We Forgotten What Research Is?

Posted by Steve Needel, PhD Wednesday, November 16, 2016, 11:59 am
Posted in category General Information
Clearly we are not doing enough to educate our client base when experts, marketers, and industry publications don’t think our work can make an important contribution.



Dr. Stephen Needel 

Researchers are not, by and large, an overly emotional group of people. The personality traits that make someone a good researcher do not lend themselves to over-reaction. So I read two posts/stories that got my blood boiling and put that amazed look on my face usually reserved for things like the Cubs winning the World Series.

The first comes from a post on LinkedIn by a woman who is a self-styled shopper marketing expert (because we have no shame or humility anymore). The post goes through the reasons why you shouldn’t worry about metrics for shopper marketing activities: tedious and costly data collection and preparation, incomplete data, and analyses that are inconsistent and slow to produce. Her advice could easily be taken as “forgo PEA (post-event analysis) because of these problems.” Of course, should you decide you do want to understand the value, or lack of value, in your shopper marketing efforts, she’ll sell you software that may or may not help.

The second comes from a manufacturer via the early November 2016 issue of the CPGMatters newsletter. A senior director of shopper marketing wanted to make his company’s marketing efforts “more strategic and coordinated”. Who could argue with this? Not me. In a workshop at one of our industry’s self-styled institutes, looking at where his company was at the time, he noted four problems:

  • Its marketing and branding plans were more tactical and not strategic. “We need one idea everyone can rally around,” he said.
  • Consumer and shopper communications were not consistent. “If consumers of [one of his brands] visited a store, a website, checked an app, looked at a billboard or a print ad, would they get the same cohesive story? No.”
  • The cycle for the sales/retailer/marketing plan was not aligned.
  • The brand department would build the program that other departments would later bring to light.

In short, he figured out that they need a strategy, integrated communications, delivered at the right time, with company-wide buy-in. There’s nothing new here, but there’s nothing bad here either. Right up until he volunteered that, in trying to fix the problems, “We have not yet seen an increase in sales, but I am convinced that we will. And we will see ROI go up.”  Let me translate this for you in case you are confused – they have no data that says any of their actions will actually work! Moreover, they are not expecting to see anything for a year or so.

I pause here while I clean up my laptop because my head has exploded.

Okay, I’m back.

On the one hand, we have a person claiming to be a shopper marketing expert, writing on LinkedIn marketing research sites, saying measurement may not be important because it’s hard to do. On the other hand, you have a senior executive from a major company [with a good-sized research department] telling us they are making major marketing changes and haven’t tried or haven’t been able to measure any impact.

We have lots of tools for measuring the impact of shopper marketing initiatives that are quick and cost-effective – some are free, some are inexpensive (from my company and lots of other companies), especially given the marketing dollars we are talking about. We, as an industry, have the ability to test changes in marketing strategy, tactics, and organizations. Clearly we are not doing enough to educate our client base when experts, marketers, and industry publications don’t think our work can make an important contribution. We need to adopt Faber College’s motto, “Knowledge is good”.


Predicting Election 2016: What Worked, What Didn’t and the Implications for Marketing & Insights

Posted by Leonard Murphy Tuesday, November 15, 2016, 11:25 am
On November 29th, GreenBook & the ARF are jointly presenting a hybrid virtual/in person event to analyze the implications on marketing & analytics of the 2016 Election predictive wins and misses.



Almost everyone failed to predict the outcome of the 2016 U.S. election, and the winner came as a shock to many pollsters, the media, and people in the U.S. and around the world. How did we get it so wrong, and what does this mean for marketing and insights?

On November 29th, we’ll be exploring that very topic at our upcoming event, Predicting Election 2016: What Worked, What Didn’t and the Implications for Marketing & Insights, brought to you by GreenBook and the ARF.

The event will take place from 8:30am to 11am. We’ll start with a webinar featuring four short presentations on new thinking about predicting election results, then transition to a live-streamed panel with key thought leaders and experts for a lively discussion of what we can learn from this election cycle about tools for predicting outcomes. The agenda is still coming together, so look for an update on specific presenters soon – but trust us, it’s going to be very, very good.

For those in New York, we’d love to have you join us live at the ARF Headquarters in New York, but the event will be available to join virtually as well.

Register here: http://thearf.org/event/nov-29-2016-predicting-election-2016/

During this event, we won’t rehash the polls or the outcome of the election; rather, we’ll explore the implications of this polling failure for commercial research and analytics on the things that are important to our industry: trust in research (especially surveys!), new tools and techniques, predicting & modeling behavior and trends, implicit vs. explicit data sources, the application of cognitive & behavioral psychology, and more.

Now is the time to have meaningful conversations about the lessons learned from this election cycle and to apply those learnings not only to political polling, but to public policy and commercial research in all of their many forms. Arguably, approaches using experimental polling methods, social media analytics, behavioral economics-based analysis, “big data”, meta-analysis and data synthesis, and text analytics were more predictive of the results than traditional polling, and the implications of that for other forms of research should not be ignored. Conversely, are some of the approaches pioneered in commercial research for ad testing, forecasting, attribution modelling, etc. applicable to increasing the accuracy of polling?

We’ll be tackling all of these topics and more during this joint program with the ARF, so we hope you’ll join us virtually or in person for the discussion!


MR Realities: New MR Tech We Can’t Ignore

Marketing Research is being flooded with new technology.  If your head is swimming, you're not alone. Which new developments should we pay most attention to?



MR Realities is a series of podcasts in which Dave McCaughan and I discuss a wide range of topics important to marketing researchers with special guests. We ask our guests the questions you would want to ask, the way you would ask them, not just what we’re interested in. And we let our guests speak for themselves even when we disagree.

For this installment of MR Realities we’ve invited a true authority on MR innovation, Lenny Murphy, the Executive Editor & Producer of GreenBook Marketing Media and Senior Partner at Gen2 Advisors. Lenny is well-known to most of you and someone all marketing researchers should pay attention to. Never a dull moment, either!

We think you will find the podcast insightful, useful and fun. It’s audio only and no registration is necessary – just follow this link:

“New MR Tech We Can’t Ignore” (Leonard Murphy, GreenBook, Gen2 Advisors)

Below are links to the other podcasts we’ve done so far:

“Data, Analytics and Decisions: Rhetoric versus Reality” (Professor Koen Pauwels, Ozyegin University and University of Groningen)

“Data Science Uses, Excuses and Abuses” (Eric King, The Modeling Agency)

“Will you still need me? Marketing to Seniors” (Professor Florian Kohlbacher, Xi’an Jiaotong-Liverpool University)

“The Coming Deluge of Analytics Malpractice” (Randy Bartlett, Blue Sigma Analytics)

“Semiotics: The Problem Child of Qualitative Research” (Sue Bell, Susan Bell Research)

“Tips for Marketing Researchers, Young and ‘Old'” (Professor John Roberts, University of New South Wales)

“When Bringing Technology To MR Is No Longer About Being MR Driven” (Greg Armshaw, marketing technologist)

“Thinking Mistakes Marketers Make” (Terry Grapentine, Grapentine Company)

“When Everyone is a Single Child??” (Kevin Lee, China Youthology)

“Is There Too Much Gloom and Doom About MR?” (David McCallum, Gordon & McCallum)

“AI: Reality, Science Fiction and the Future” (Mei Marker, ai-datascience.com)

“Conjoint Analysis: Making It Work For You” (Part 1) (Terry Flynn, TF Choices LTD)

“Conjoint Analysis: Making It Work For You” (Part 2) (Terry Flynn, TF Choices LTD)

“Social Media: Promises, Challenges and the Future” (Professor Raoul Kübler, Ozyegin University)

“How to Choose the Right Online Qual Method?” (Jennifer Dale, InsideHeads)

“Market Research in 2025” (Ray Poynter, The Future Place)

“Where Behavioral Economics Fits in the Customer Insight Landscape” (Bri Williams, People Patterns)


What’s Really Wrong With Polling

Posted by Tom H.C. Anderson Friday, November 11, 2016, 8:40 am
What can researchers learn from yet another major polling fail?

Editor’s Note: There is much to be learned from the recent election on many different levels, but perhaps most relevant to our readers are the implications for MR of the hit-or-miss predictive accuracy of polling and various analytical approaches. Nate Silver’s FiveThirtyEight began the post-mortem with the post “The Polls Missed Trump. We Asked Pollsters Why.”, and there have been hundreds of articles dissecting the data, casting blame, and suggesting changes. The same is happening in groups, on forums, on private listservs (yes, they do still exist!) and in individual discussion threads on social media. We think the topic is so important that today GreenBook and the ARF are meeting to discuss collaborating on a webinar with a panel of key stakeholders on the topic, so look for more on that in the days ahead.

Here’s my take: some polling did better than others, as this post by Investor’s Business Daily rightfully shows: “IBD/TIPP’s final numbers put Trump up by 1.6 points in a four-way race. As of 2 a.m. Wednesday morning, Trump was up by about 1 point in the popular vote. (The actual vote outcome will likely change as votes continue to be counted over the next several weeks.)

Not one other national poll had Trump winning in four-way polls. In fact, they all had Clinton winning by 3 or more points. For the entire run of the IBD/TIPP poll, we showed the race as being far tighter than other polls. This isn’t a fluke. This will be the fourth presidential election in a row in which IBD/TIPP got it right.

The Los Angeles Times, which had employed a panel of people who were queried about their choice (and which had been ridiculed throughout the election), showed Trump up in a two-way race by 3 points.”

So the outliers and innovators in polling did better than traditional methods. But so did other approaches using social media analytics, behavioral economics-based analysis, “big data”, meta analysis and data synthesis, and the focus of today’s post, text analytics.  Tom Anderson posted on election day  what text analytics was suggesting as an outcome, and in today’s follow-up (reposted from the OdinText blog) he takes a clear eyed view on how he did.

The key takeaway here is that some polling approaches work, but so do many other approaches and we’d do well to apply those lessons to political polling, public policy research, and commercial research.


By Tom H. C. Anderson

Whatever your politics, I think you’ll agree that Tuesday’s election results were stunning. What is now being called an historic upset victory for Donald Trump apparently came as a complete shock to both campaigns, the media and, not least, the polling community.

The question everyone seems to be asking now is how could so many projections have been so far off the mark?

Some pretty savvy folks at Pew Research Center took a stab at some reasonable guesses on Wednesday—non-response bias, social desirability bias, etc.—all of which probably played a part, but I suspect there’s more to the story.

I believe the real problem lies with quantitative polling itself. It is just not a good predictor of actual behavior.

Research Told Us Monday that Clinton Was In Trouble

On Monday I ran a blog post highlighting responses to what was inherently a question about the candidates’ respective positioning:

“Without looking, off the top of your mind, what issues does [insert candidate name] stand for?”

Interestingly, in either case, rather than naming a political issue or policy supported by the candidate, respondents frequently offered up a critical comment about his/her character instead (reflecting a deep-seated, negative emotional disposition toward that candidate). [See chart below]




Our analysis strongly suggested that Hillary Clinton was in more trouble than any of the other polling data to that point indicated.


  1. The #1 most popular response for Hillary Clinton involved the perception of dishonesty/corruption.
  2. The #1 and #2 most popular responses for Donald Trump related to platform (immigration, followed by pro-USA/America First), followed thirdly by perceived racism/hatemongering.

Bear in mind, again, that these were unaided, top-of-mind responses to an open-ended question.

So for those keeping score, the most popular response for Clinton was an emotionally-charged character dig; the two most popular responses for Trump were related to political platform.

This suggested that not only was Trump’s campaign messaging “Make America Great Again” resonating better, but that of the two candidates, the negative emotional disposition toward Hillary Clinton was higher than for Trump.

Did We Make a Mistake?

What I did not mention in that blog post was that initially my colleagues and I suspected we might have made a mistake.

Essentially, what these responses were telling us didn’t jibe with any of the projections available from pollsters, with the possible exception of the highly respected Nate Silver, who was actually criticized for being too generous in weighting Trump’s poll numbers up (giving him about a 36% chance of winning, or slightly better than the odds of flipping tails twice in a row).

How could this be? Had we asked the wrong question? Was it the sample*?

Nope. The data were right. I just couldn’t believe everyone else could be so wrong.

So out of fear that I might look incompetent and/or just plain nuts, I decided to downplay what this data clearly showed.

I simply wrote, “This may prove problematic for the Clinton camp.”

The Real Problem with Polls

Well, I can’t say I told you so, because what I wrote was a colossal understatement; however, this experience has reinforced my conviction that conventional quantitative Likert-scale survey questions—the sort used in every poll—are generally not terrific predictors of actual behavior.

If I ask you a series of questions with a fixed set of answers or a ratings scale, I’m not likely to get a response that tells me anything useful.

We know that consumers (and, yes, voters) are generally not rational decision-makers; people rely on emotions and heuristics to make most of their decisions.

If I really want to understand what will drive actual behavior, the surest way to find out is by allowing you to tell me unaided, in your own words, off the top of your head.

“How important is price to you on a scale of 1-10?” is no more likely to predict actual behavior than “How important is honesty to you in a president on a scale of 1-10?”

It applies to cans of tuna and to presidents.



[*Note: N=3,000 responses were collected via Google Surveys 11/5-11/7 2016. Google Surveys allows researchers to reach a validated (U.S. general population representative) sample by intercepting people attempting to access high-quality online content – such as news, entertainment and reference sites – or who have downloaded the Google Opinion Rewards mobile app. These users answer up to 10 questions in exchange for access to the content or Google Play credit. Google provides additional respondent information across a variety of variables, including source/publisher category, gender, age, geography, urban density, income, parental status and response time, as well as Google-calculated weighting. All 3,000 comments were then analyzed using OdinText to understand the frequency of topics, emotions and key topic differences. Of the 65 total topics identified using OdinText, 19 were mentioned significantly more often for Clinton and 21 were mentioned significantly more often for Trump. Results are accurate to +/- 2.51% at the 95% confidence level.]
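For readers curious where a margin of error like the one in the note above comes from: the textbook half-width of a 95% confidence interval for a proportion is z·sqrt(p(1−p)/n), at the worst case p = 0.5. A quick sketch (the inference that the published +/- 2.51% also folds in a weighting design effect is our assumption, not stated in the note):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion
    under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

n = 3000
moe = margin_of_error(n)
print(f"+/- {moe * 100:.2f} pts")  # about +/- 1.79 points for an unweighted sample

# The note reports +/- 2.51%; the gap between 1.79 and 2.51 would be
# consistent with a weighting design effect inflating the variance
# (deff ~ (2.51 / 1.79)^2 ~ 1.97), though that is an inference here.
```

Weighted samples like Google Surveys' are routinely less precise than the simple-random-sampling formula suggests, which is why reported margins often exceed the textbook figure.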


Interview with Pat LaPointe of MR Investment Fund Growth Calculus

My interview with Pat LaPointe, Managing Partner of Growth Calculus, the growth capital investment and advisory firm.



About two weeks ago, Gregg Archibald posted here on the GBB about the launch of a new type of investment fund for the MR industry, the Service Evolution Fund. Gregg and I are working with that fund through our Gen2 Advisors consultancy to identify potential candidates for discussions, because we’re very excited to finally have an option to help more service-based businesses grow. Since I think this is a very positive development for our industry, I wanted to dive deeper so everyone could learn more. With that in mind, today I’m posting my interview with Pat LaPointe, Managing Partner of Growth Calculus, the growth capital investment and advisory firm.

Before we get into the interview, here is a bit from the website on the entrepreneur perspective and why the Growth Calculus play is different:

It’s frustrating when you have built a strong business that delivers great value to customers, but can’t find the capital to help you take it to the next level. It seems most investors are so focused on finding that next unicorn, they can’t see the potential for attractive returns you might offer.

At Growth Calculus, we are entrepreneurs. We have founded companies and generated substantial growth. We have built teams, developed new products, sold over a billion dollars in solutions, and created enduring customer loyalty along the way. And based on those foundations, we were able to deliver significant returns to our investors.

Today, we combine two of the things most growth companies need: capital to expand and explore new avenues, and experienced advisors to work with you, week-by-week, to help you make it happen. Together, we help you take your company to the next level and create exceptional value for you, your customers, and your stakeholders.

We also firmly believe in gender equality in access to capital. So we make every effort to ensure at least 50% of our invested dollars are going to women-owned businesses.

The Service Evolution Fund 1 is a highly targeted fund which invests behind a specific strategy of finding great marketing services companies and helping them unlock their “product” potential through better data asset development, application of technologies for front or back-room improvement, sales effectiveness, and marketing impact.

The Fund seeks to uniquely combine investment capital in the $1M to $3M range with expert (and active) consultative support in companies with a 5+ year track record of creating great value for blue-chip clients, stable annual revenue streams in the range of $5M to $25M, and great ideas for how they might take their knowledge and talents and evolve them into a more “productized” offering to create more value for clients and shareholders alike.

They are particularly interested in companies in areas like:

  • Customer experience consulting and analytics
  • Specialty research and data/reporting
  • Marketing effectiveness consulting and analytics
  • Forecasting and behavioral economics
  • Social media data collection and analysis
  • Mobile marketing data/analytics and app development
  • Advertising production and talent cost management
  • Media auditing
  • Agency compensation management
  • CRM and customer loyalty consulting and analysis
  • Sales operations effectiveness and training

Here is my interview with Pat. I think you’ll find it interesting, engaging, and helpful.


Who Are You Voting Against?

Text Analytics Shows Dislike May Decide Presidential Election


By Tom H.C. Anderson

Exit pollsters today will ask thousands of Americans “Who did you vote for?” when they probably should be asking “Who did you vote against?”

A survey we just completed suggests that the outcome of the 2016 U.S. Presidential Election may hinge on which candidate is disliked more intensely by the other side.

In fact, for many, a vote for either candidate may be primarily about preventing the other candidate from being elected.

More than Just the Lesser of Two Evils

They’re both unpopular. We knew that already.

A slew of polls dating back to the start of the general election, most recently from the Washington Post/ABC News, has repeatedly indicated that Hillary Clinton and Donald Trump are the two least popular candidates for U.S. president in the history of political polling.

But it appears this election will not just be a matter of holding one’s nose and voting for the lesser of two evils.

Unaided responses to one open-ended question in respondents’ own words suggest that what may drive many voters to cast their ballots for either candidate today is an intense distaste for the alternative.

People’s distaste for each candidate is so intense that when asked to tell us what he or she stands for, respondents didn’t name a policy issue, they named a character flaw.

Top of Mind: The Crook and the Hatemonger

We took a general population sample* of 3,000 Americans via Google Surveys, split it in half randomly, and asked each half the same single question, substituting only the candidate’s name:

“Without looking, off the top of your mind, what issues does [insert candidate name] stand for?”

The comments—presumably the issues that are truly top of mind for people in this election—were analyzed with OdinText and are captured in the chart below.




You’ll note that for each candidate, respondents frequently offered a negative character perception instead of naming a political issue or policy supported by the candidate.

Indeed, the most popular response for Hillary Clinton involved the perception of dishonesty/corruption and the third most popular response for Donald Trump was perceived racism/hatemongering.

In both cases, the data tell us that people are unusually fixated on perceived problems they have with the candidates personally.
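OdinText’s topic-identification method is proprietary, so purely as a rough illustration of this kind of analysis, here is a minimal keyword-tagging sketch of how open-ended comments can be bucketed into topics and counted. The topic lexicon below is invented for the example, and naive substring matching is used only for brevity:

```python
from collections import Counter

# Hypothetical topic lexicon -- OdinText's actual topics and matching
# rules are not public; these keywords are illustrative only.
TOPICS = {
    "dishonesty/corruption": ["crooked", "liar", "lies", "corrupt", "dishonest"],
    "immigration": ["wall", "immigration", "border", "deport"],
    "women's rights": ["women", "equal pay", "abortion"],
}

def tag_topics(comment: str) -> set:
    """Return the set of topics whose keywords appear in a comment."""
    text = comment.lower()
    return {topic for topic, words in TOPICS.items()
            if any(w in text for w in words)}

def topic_frequencies(comments: list) -> Counter:
    """Count how many comments mention each topic (at most one count per comment)."""
    freq = Counter()
    for c in comments:
        freq.update(tag_topics(c))
    return freq

comments = ["She's a corrupt liar", "Build the wall", "Equal pay for women"]
print(topic_frequencies(comments))
```

With tagged comments from each candidate’s half-sample, comparing per-topic counts between the halves is then a straightforward frequency comparison.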

Higher Level Emotions

Though the comments tend to be rather short and direct, it can still be interesting to look at the words used to describe the candidates on a higher ‘emotional’ level.

The OdinText visualization below shows the biggest emotional differences between Clinton and Trump are in the area of Joy and Anger. [See OdinText Emotions Plot Below, Trump Red, Clinton Blue]




While both candidates’ descriptions contain a lot of anger, the proportion of anger in comments about Clinton is significantly higher (16.4% vs. 12.3% for Trump).

The higher ‘Joy’ score is partly due to Trump’s positive campaign slogan, “Make America Great Again,” which has significantly higher recall than Clinton’s: 33 people in our sample referred to the Trump slogan, while only one person referenced Clinton’s slogan, “Stronger Together,” a notable difference in percentage terms (2.2% vs. 0.07%, respectively).
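The “significantly higher” comparisons here are standard two-proportion tests. As a sketch, assuming per-candidate halves of roughly N=1,500 (implied by the 3,000-comment split but not stated explicitly), the slogan-recall gap can be checked like this:

```python
from math import sqrt

def two_prop_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-proportion z-statistic using a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # pooled standard error
    return (p1 - p2) / se

# 33 of ~1,500 respondents recalled "Make America Great Again" vs.
# 1 of ~1,500 recalling "Stronger Together" (half-sample sizes assumed).
z = two_prop_z(33, 1500, 1, 1500)
print(round(z, 2))  # well beyond the 1.96 cutoff for 95% significance
```

Any |z| above 1.96 corresponds to significance at the 95% level, so a gap this large easily clears the bar.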

More Effective Messaging for Trump

In terms of actual issues identified by respondents, Clinton was most often associated with championing women and civil rights, while Trump was identified with immigration and a pro-America, protectionist platform.

Here one could argue that the Trump campaign has actually done a more effective job of establishing a signature issue for the candidate.

Neither campaign has done a significantly better job than the other of educating voters on its candidate’s policies (8.2% of Trump respondents and 8.6% of Clinton respondents answered “I don’t know”), but the simple message of “Make America Great Again” may have been clearer than Clinton’s “Stronger Together.”

Indeed, the top issue identified for Trump was immigration (12.8% vs. 2.3% for Clinton), while the number one “issue” for Clinton was the negative trait “corruption/lies” (12.5% vs. 1.4% for Trump).

This may prove problematic for the Clinton camp.

When voters don’t like their choices, they tend to stay home. If voter turnout is high today, it won’t be because people are unusually enthusiastic about the candidates; it will be because one of these candidates is so objectionable that people can’t in good conscience abstain from voting.

[*Note: N=3,000 responses were collected via Google Surveys, 11/5-11/7/2016. Google Surveys allows researchers to reach a validated (U.S. general population representative) sample by intercepting people attempting to access high-quality online content, such as news, entertainment and reference sites, or who have downloaded the Google Opinion Rewards mobile app. These users answer up to 10 questions in exchange for access to the content or Google Play credit. Google provides additional respondent information across a variety of variables, including source/publisher category, gender, age, geography, urban density, income, parental status and response time, as well as Google-calculated weighting. All 3,000 comments were then analyzed using OdinText to understand frequency of topics, emotions and key topic differences. Of the 65 total topics identified using OdinText, 19 were mentioned significantly more often for Clinton and 21 significantly more often for Trump. Results are accurate to +/- 2.51% at the 95% confidence level.]
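For reference, the quoted margin of error follows from the standard formula for a proportion. The +/- 2.51% figure is consistent with a per-candidate half-sample of roughly 1,500 respondents (an assumption, since the note reports only the 3,000 total):

```python
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Maximum sampling margin of error, in percentage points, at ~95% confidence.

    p=0.5 gives the worst case; z=1.96 is the 95% normal critical value.
    """
    return 100 * z * sqrt(p * (1 - p) / n)

# Half-sample of ~1,500 per candidate (assumed): close to the quoted +/- 2.51%.
print(round(margin_of_error(1500), 2))
```

The full N=3,000 would give a tighter bound (about +/- 1.8 points), which is why the half-sample reading fits the quoted figure better.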

(previously posted on the OdinText blog)


Implicit Testing in Consumer Product Research

Consumer product research is critically important to win market share.


Editor’s Note: Daniel will be presenting “Not Just What We See, But Also What We Smell” at the IIeX Forum on Nonconscious Consumers. Join Daniel and other industry experts November 14-15 in Chicago to learn how the world’s biggest, most successful brands are turning to the behavioral sciences and nonconscious measurement tools to better understand, measure, and predict consumer behavior. Find out more here

By Daniel Blatt

Product research is too often neglected by consumer product companies, which tend to invest disproportionate time and money in marketing and communication research. At Q Research Solutions (Q) we believe that consumer product research is critically important to winning market share. Great product research allows you to understand consumer preferences and drivers and to develop superior products that consumers love and buy again and again.

Traditional product research is excellent for understanding what consumers can verbalize, namely what they like and don’t like, and to a lesser extent why. However, we know that purchase and repurchase behavior is not always directly predicted by what consumers tell you. This disconnect between what consumers say and what they do has sparked a lot of discussion and research.

In Thinking, Fast and Slow, Kahneman wrestles with flawed ideas about decision making and the impact of System 1 & System 2. System 1 “is the brain’s fast, automatic, intuitive approach”, System 2 is “the mind’s slower, analytical mode, where reason dominates.” Kahneman says “System 1 is…more influential…guiding…[and]…steering System 2 to a very large extent.”

Traditional product testing has focused on System 2, or what the consumer can, or chooses to, explicitly verbalize; moreover, consumers are often unaware of why they bought something. This makes understanding the implicit, or System 1, critical. Market researchers understand this and are starting to develop and include implicit association measurement techniques in their toolboxes.

At Q we understand that although System 1 has been well studied, and to some extent applied, in communication and marketing research, it has been largely ignored in consumer product testing, or even worse, misapplied through the study of reaction time. Studying a consumer’s reaction time to an explicit idea is not an implicit test but rather a “fast explicit” test.

With this paradigm in mind, we partnered with Emotive Analytics, which has adapted a true implicit technique not based on reaction time: the Affect Misattribution Procedure (AMP). Developed by Dr. Keith Payne at the University of North Carolina, it has amassed impressive credentials over more than 10 years of use.
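In the published AMP, respondents briefly see a prime (in a sensory application, this could be a fragrance cue), then judge an ambiguous target, classically a Chinese pictograph, as pleasant or unpleasant; the implicit measure is simply the proportion of “pleasant” judgments following each prime, compared against a neutral-prime baseline. A minimal scoring sketch, with invented trial data (Q’s actual procedure may differ):

```python
from collections import defaultdict

def amp_scores(trials: list) -> dict:
    """Proportion of 'pleasant' target judgments following each prime.

    trials: (prime_label, judged_pleasant) pairs. A proportion above the
    neutral-prime baseline suggests positive implicit affect toward the prime.
    """
    counts = defaultdict(lambda: [0, 0])  # prime -> [pleasant count, total]
    for prime, pleasant in trials:
        counts[prime][1] += 1
        counts[prime][0] += int(pleasant)
    return {prime: pleasant / total
            for prime, (pleasant, total) in counts.items()}

# Hypothetical trials: fragrance_A draws more 'pleasant' judgments than neutral.
trials = [("fragrance_A", True), ("fragrance_A", True), ("fragrance_A", False),
          ("neutral", True), ("neutral", False)]
print(amp_scores(trials))
```

Because the judgment is about the ambiguous target rather than the prime itself, the score is not gated by what respondents can or will verbalize, which is the point of the procedure.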

Next month at the IIeX forum in Chicago we will be presenting brand-new findings in this very area. In particular, we’ll be demonstrating how real implicit testing can be applied to the sensory and consumer product research field. In a first-of-its-kind experiment, we demonstrate how combining traditional System 2 testing with System 1 implicit testing (the Affect Misattribution Procedure) can aid product development.

We show how emotions and feelings can be triggered, and measured, by activating the sense of smell. And we answer two important questions:

  1. Are different fragrances implicitly associated with different emotions and feelings?
  2. If so, how do these implicit associations compare to explicit associations with the same emotions and feelings?

What we discovered is that implicit testing complements rather than replaces traditional methods. In traditional testing, respondents will occasionally provide overly positive opinions; implicit testing lets us look at the real underlying, subconscious reactions to the stimuli. Implicit won’t replace traditional testing, but by layering implicit associations onto explicit data collection we have the opportunity to learn key information that can drive product development.

For the full details of our experiments in the subconscious, come visit us at IIeX in Chicago next month.