Empathy2 – An Innovation Catalyst

Get a behind the scenes look at how P&G's Sion Agami makes innovation happen in this new series with Jeffrey Resnick.

By Jeffrey Resnick


This blog represents the kickoff of a new series, one that explores the opposite side of my Transformation IQ eBook and blog series. While the Transformation IQ series focused on CEOs of research supplier-side firms tackling disruption in the industry, this new series focuses on individuals within client-side organizations who help make innovation happen. These are individuals who have that coveted ‘seat at the table’ with executives. In this first article, Sion Agami, a Research Fellow at P&G, shares his thoughts on innovation and how to be a catalyst that makes it happen.

Empathy2 – An Innovation Catalyst

Sion brings the experience of 25+ years in research at P&G, and that experience leads him to extol consumer-centricity as the core of innovation. Sion and his team sit at the intersection of technology and the consumer. When creating a product and package that delights consumers, they also need to find the right midpoint among what is profitable, manageable, and producible on time. From his perspective, understanding a consumer’s struggle is where innovation begins. Sion believes innovation is driven less by a sexy new tool or methodology and much more by fully understanding the business question and finding the optimal approach to answer it. New research methodologies play an important role but are rarely the silver bullet. Additionally, a researcher’s active participation is a crucial element of successful innovation.

Sion draws a distinction between revolutionary innovation – truly disrupting a market with a product or service that has not previously existed – and evolutionary innovation, where the goal is to better address consumer needs through modifications or improvements to an existing product. Our discussion focused on the evolutionary end of the spectrum. Within this context, he holds a deep conviction on two points:

  1. The insight professional must actively engage in the research process. He or she knows best the product and the value proposition it intends to deliver to the target market. It is through the lens of this understanding that product innovation must occur. This doesn’t eliminate the potential for powerful co-creation between a trusted research partner and his insights team. In his world, research conducted in the absence of a member of his team who is in touch with both the product technology and the consumer is largely a non-starter.
  2. He subscribes to ‘lean’ methodologies, where agile research plays a critical role. Explore a key business question using a rudimentary concept, learn from the results, persevere or pivot, and move on to the next required insight – rapidly. Failure should be fast and cheap. This is a very different approach to product development from the one historically followed, where the goal was to minimize risk to the organization behind a fully developed product concept that required large-scale investment, often over several years. Simply stated – innovation is easier to achieve if you approach it in steps, with each step forward driven by a deep understanding of the consumer’s dilemma.

Augmenting these two basic principles, Sion reflected on several additional themes.

Understanding the holistic customer experience is required to drive customer empathy.
Knowing part of the customer’s story is insufficient. Sion believes we have yet to reach the ceiling on empathy. He believes the future holds achieving “empathy-squared (Empathy2)”: deeply understanding the patterns and habits of individuals and how these affect their daily choices. He believes we too often research consumers in silos – the toothpaste bought, the hair products used, etc. Understanding the consumer holistically requires approaching him or her as a user of multiple brands with multiple connections, going beyond what they say and observing what they do.

Innovation requires transforming data into knowledge to drive action.
Empathy helps create the story behind the numbers, and consumer insights convince organizations to act on the data. Capturing those insights is much easier with a consumer model, which links data with theory: “Data without theory is just trivia, and theory without data is just an opinion.” Consumer models leverage both data and theory, formalizing the consumer story and highlighting the relative importance of product experiences and consumer reactions.

Don’t be afraid to go out on a limb – it won’t always break.
Sion’s current primary area of focus is feminine care. He relayed a story that predates today’s ubiquitous mobile phones and demonstrates what going out on a limb with a new approach means when answering a pressing business question – how could P&G improve the placement of a pantyliner? A timid researcher might hand female respondents a pair of women’s underwear in a focus group or research lab and ask them to place a pantyliner on it. Sion took a less timid approach: he asked women to take a picture with their smartphone (when smartphone penetration was just 15%) once they had placed the pantyliner on the underwear they were wearing. The ability to see the placement in a realistic environment led to a deeper, better understanding of how P&G could improve the process.

Co-create research solutions with passionate partners.
Sion sees leveraging the intelligence of strong research partners as accretive to the innovation process. He identifies research partners suitable for co-creation as those who bring not only new tools but also passion and the ability to innovate – the conviction that their solution will provide insights other solutions cannot, penetrating deeper into consumers’ minds. Again, the operative word is co-creation. The researcher must be actively involved in the process, not simply a bystander waiting for results.

Harness the power of AI/machine learning.
In Sion’s view, machines won’t replace humans, but they will be able to see patterns across large-scale, multi-source information that will enable the generation of unique insights. He gets visibly excited when contemplating the impact machine learning can have on our understanding of the consumer. From his perspective, leveraging this technology will create new frontiers in insights.

Challenge yourself to always learn – or become extinct.
Driving yourself to continually learn is a core principle to which Sion holds himself accountable. In this fast-evolving industry, where technology is growing exponentially, you need to take action. The reason is simple: he fully believes that if you fail to continually learn, “you can become obsolete in the blink of an eye.”

Sion’s wisdom and experience permeated the interview. My primary takeaway, however, is that active participation by the researcher is the secret sauce – and that will be very difficult for a machine to replace.



Sion Agami – Research Fellow, Procter & Gamble (Feminine Care) – “Insight Alchemist”

He is a Research Fellow at P&G but likes the “Insight Alchemist” description, as it reflects what he does.

He has 25 years of experience in product research and product innovation, inventing and launching new products that have left a mark on consumers’ minds and on businesses across the globe, transforming knowledge into action.

He started in Latin American R&D, with assignments in Detergents and Fabric Softeners. In US R&D, he worked in Air Care, Snacks, and Feminine Care. He is well known for unlocking business-building insights with cutting-edge product research methodologies.

  • A change agent, developing new methodologies and consumer-relevant test methods, and creating consumer/technical models.
  • Modernized how product research is done by establishing contact with consumers in real time and at relevant moments.
  • Recognized as a master at translating consumer insights into product innovation and creating holistic product propositions.

“If I Had Asked People What They Wanted…” & Other Elitist Myths

Customer-led inspiration, while often dismissed, is an essential part of a company's success.

By Kevin Lonnie

A popular rebuke to customer-led inspiration is attributed to the great 20th-century industrialist Henry Ford.

If you’re a senior executive who believes in going with his gut, you can trot out (no pun intended) the Ford quote about faster horses as a reproach to customer-led inspiration.

Well, our story could end there, but it’s actually where it starts getting interesting.  

Turns out, Ford never uttered those famous words.

In fact, the first dated reference to Ford’s quote doesn’t appear until roughly 2003, according to a Harvard Business Review article written by Patrick Vlaskovits. Further research on the topic indicates the quote was originally used in the third person, to describe how Ford would have responded to critiques that his designs were missing the mark.

So why would this very popular 21st-century business quote find itself attributed to a man whose success came 100 years ago? Very simple: it’s highly effective at shutting down the idea of customer-led inspiration. Heck, even Steve Jobs often cited that quote.

You could, of course, choose to position yourself as the next Steve Jobs and convince yourself that your own intuition and instinct will successfully chart the firm’s future course. This is what Jobs disciple Ron Johnson did when he took the helm at J.C. Penney. Despite colleagues’ concerns that Johnson was making radical changes without consulting customers, he responded, “We didn’t test at Apple.” Apparently, the key independent variable is having a visionary like Steve Jobs at the top. Unfortunately, for every Steve Jobs, we have thousands of Ron Johnsons ready to crash and burn at the feet of customer displeasure.

As MR evolves from the reactive tools of the past century to proactive insight generation, customer-led insights will prove essential to a firm’s ideation engine. Anyone who feels they can map this strategy relying solely on gut instinct does so at their own risk.

At the same time, researchers shouldn’t expect a welcome party. The incumbent creative elitists will look to hold their ground by downplaying the value of customer-led inspiration.

At least you’ll be ready to debunk their favorite Henry Ford quote.  

Corporate Research Buyers Speak: Keys to Engaging, Selling and Becoming Our Preferred Partner

How do market research buyers decide which suppliers to engage? This upcoming study from Collaborata on clients’ “path to purchase” brings insights to the industry.

By David Harris

Do marketing-research and insights suppliers really understand the daily life of corporate research buyers? Do sellers of research have the requisite deep insights they need to address clients’ unmet needs and to know what moves buyers to engage and hire a new vendor? Are there different segments of buyers that are looking for different benefits? How are the buying dynamics changing given the advent of DIY tools, new technologies, social media, and the overall shifting landscapes in the marketplace?

If the marketing-research and insights industry were our client and came to us to improve sales, guide new-product development, or just about anything else, what would we advise? We would likely counsel the industry to start with deep qualitative research. We would want to know the emotional and practical factors driving corporate research buyers’ decision processes. We would want to explore unmet needs and to uncover the factors and triggers that pull them towards new suppliers, or drive them away.

When has the marketing-research industry, or your company, done deep qualitative research with corporate research buyers? We don’t always practice what we preach!

No question: One of the barriers to doing this caliber of research with corporate research buyers is cost, which would be substantial for a small to mid-sized supplier.

So… what if several companies chipped in to share the costs and the insights? This way, your company would pay only a fraction of the total cost but would still be buying a big competitive advantage – one that could be leveraged toward new-product development, messaging, sales, and overall performance. Sure, 80 percent of what we’d find might not really surprise you. But as we tell our clients, it is that 20 percent that makes all the difference. If this study helps you get one additional client, or keeps you from losing one, it would more than pay for itself.

We have decided to offer such a study to the industry through Collaborata, a cost-sharing market-research platform. We’ve titled the project: Corporate Research Buyers Speak: Keys to Engaging, Selling and Becoming Our Preferred Partner. We’re also partnering with Aha! and Reality Check on executing the research and with GreenBook in promoting it. For a relatively small investment, a limited number of research companies can collectively chip in and gain deep insights on how to engage, sell, and become a better, more valued research partner.

Given the changing landscape, we need to take a step back and listen to our clients, so that we understand their journey and path to purchase much more deeply. Please join us in sponsoring this research.

For all the details, click here.

Using Bubble Charts to Show Significant Relationships and Residuals in Correspondence Analysis

Learn how to utilize bubble charts for clear data visualization in correspondence analysis.


By Tim Bock

While correspondence analysis does a great job of highlighting relationships in large tables, a practical problem is that it shows only the strongest relationships, and sometimes the weaker relationships are of more interest. One of our users (thanks, Katie at JWT!) suggested a solution: format the chart to highlight key aspects of the data (e.g., standardized residuals).

Case study: travelers’ concerns about Egypt

The table below shows American travelers’ concerns about different countries (I have analyzed this before in my Palm Trees post). There is too much going on with this table for it to be easy to understand. I have used arrows and colors to highlight interesting patterns based on the standardized residuals, but too many things are highlighted for this to be particularly helpful. This is the classic type of table where correspondence analysis is perfect.

The correspondence analysis of the data is shown below. The two dimensions explain 93% of the variance, which tells us that the map shows the main relationships. However, the map is not doing a good job of explaining the relationships between Egypt and China and the concerns of travelers. Both countries are close to the center of the map. Adding more information to the visualization can enhance it further. In the rest of the post I focus on improving the view of Egypt.
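For readers curious about the mechanics, the map and its variance-explained figure come from a singular value decomposition of the standardized table. Here is a minimal Python sketch using made-up counts, not the survey data from the post:

```python
import numpy as np

# Illustrative concern-by-country counts (hypothetical numbers,
# not the travelers' survey data discussed in the post).
counts = np.array([
    [70.0, 20.0, 30.0],   # Safety
    [25.0, 40.0, 35.0],   # Cost
    [15.0, 10.0, 55.0],   # Language
])

P = counts / counts.sum()              # correspondence matrix
r = P.sum(axis=1)                      # row masses
c = P.sum(axis=0)                      # column masses

# Standardized deviations from independence.
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))

# The singular value decomposition yields the map's dimensions.
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

inertia = sv ** 2                      # variance carried by each dimension
explained = inertia / inertia.sum()

# Principal coordinates used to plot the row points on the map.
row_coords = (U * sv) / np.sqrt(r)[:, None]
```

For a 3×3 table there are at most two dimensions, so a two-dimensional map captures all of the variance; with larger tables, `explained[:2].sum()` gives the share quoted in figures like the 93% above.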

Plotting positive standardized residuals

The standardized residuals are shown below. Remembering that positive numbers indicate a positive correlation between the row and column categories, we can see that there are a few “positive” relationships for Egypt, with Safety being the strongest relationship. As the data is about travelers’ concerns, a positive residual indicates a negative issue for Egypt.
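Concretely, a standardized (Pearson) residual compares each observed count with the count expected under independence, scaled by the square root of the expected count. A minimal sketch with hypothetical counts (not the post's data):

```python
import numpy as np

# Hypothetical concern-by-country counts for illustration only.
observed = np.array([
    [70.0, 20.0, 30.0],
    [25.0, 40.0, 35.0],
    [15.0, 10.0, 55.0],
])

row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
grand_total = observed.sum()

# Counts expected if rows and columns were independent.
expected = row_totals @ col_totals / grand_total

# Positive residual: the row/column pair co-occurs more than expected.
residuals = (observed - expected) / np.sqrt(expected)
```

Positive values mean the row and column co-occur more often than independence predicts, which for a table of concerns flags a weakness for that country.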

Bubbles represent the positive standardized residuals in the plot below. The area of the bubble reveals the strength of the association of the concern with Egypt. This is a lot easier to digest than the residuals. We can easily see that “Safety” stands out as the greatest concern. “Not being understood” and “Friendliness”, the next most important issues, appear trivial relative to “Safety”.

Adding the raw data to the chart

A limitation of plotting standardized residuals is that they show only the strength of association, which can be misinterpreted if the analysis includes attributes that are widely held or obscure. A simple remedy is to plot the raw data for the brand of interest in the labels. This clears up a likely misinterpretation encouraged by all the earlier charts: you could read the previous visualizations as implying a lack of relationship between “Cost” and Egypt. Yet 44% of people express concern about the cost of visiting Egypt. There is no positive correlation only because people are much more concerned about costs for the European countries (you can see this by looking at the original data table, earlier in the post).

Showing positive and negative relationships

The following visualization also shows the negative standardized residuals, drawing the circles in proportion to their absolute values. Blue represents the negative residuals and pink the positive ones. In a more common application, where the correspondence analysis is of positive brand associations, reversing this color-coding would be appropriate.

Showing only significant relationships

The final visualization below shows only the significant associations with Egypt. I think it is the best of the visualizations in this post! If you want to understand the data as it relates to Egypt, it is much more compelling than the original table. We can quickly see that “Cost” represents a comparative advantage, and that Egypt shares its main weaknesses with Mexico. If you want to encourage visitors to Egypt, you could consider positioning it as a competitor to Mexico. (This data comes from a survey done in 2012, and thus may be a poor guide to the market’s mind today.)
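The filtering behind a significant-only chart is simple to sketch: under the usual normal approximation, standardized residuals beyond about ±1.96 are significant at the 5% level. The labels and values below are invented for illustration:

```python
# Hypothetical standardized residuals for one column (e.g., one country).
residuals = {
    "Safety": 3.9,
    "Cost": -2.5,
    "Friendliness": 1.2,
    "Food": -0.4,
}

CUTOFF = 1.96  # two-sided 5% significance under a normal approximation

# Keep only associations strong enough to be worth plotting.
significant = {k: v for k, v in residuals.items() if abs(v) > CUTOFF}

print(significant)  # {'Safety': 3.9, 'Cost': -2.5}
```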


To see the underlying R code used to create the visualizations in this post, click here, log in to Displayr, and open the relevant document. You can click on any of the visualizations in Displayr and select Properties > R CODE in the Object Inspector to see the underlying code.

I have also written other posts that describe how to create these visualizations and the differences in the R code between the plots. One describes how to create them in Displayr, and another describes how to do it in Q.



How IIeX Competition Winners Delvv.io Deliver Qual Insights At Scale

Delvv.io won the Insight Innovation Competition at IIeX EU 2017, and in this follow-up interview CEO and Co-founder Trevor Wolfe discusses the genesis of the platform, their business focus, and how they intend to change the role of internal teams in early stage ideation and iterative design.

Qualitative research is now undergoing the same tech-driven transformation that quantitative research started years ago. Building on trends such as communities, crowdsourcing, and automation, many platforms have emerged in recent years to deliver cost, speed, and impact improvements to the traditionally slower and more expensive qual category.

Delvv.io is the latest example of how entrepreneurs are getting creative in reverse engineering the pain points of qual research, applying tech to change the game and aligning the new solutions with specific business issues. Their approach is to use a global panel of creatives to rapidly test concepts and/or to engage them for ideation on new products or campaigns.


The growth of next generation qualitative platforms that can function as a single solution for a range of needs, while often plugging in to larger knowledge management systems, is a boon for end-client insights organizations, agencies, and research partners who need rapid-turnaround qualitative insights. Delvv.io has built its business around this need, and the resulting acclaim has been well earned.


Delvv.io is based in Johannesburg, South Africa, and launched in March 2016. In that short time it has gained international traction with clients like Unilever, General Motors, L’Oreal, Pernod Ricard, and GSK by helping these brands adapt their global creative concepts across multiple markets. SenseCheck, its flagship concept testing and development product, is at the heart of these successful research projects and has proven to be an impactful tool for helping brands achieve creative efficiencies.

The team has not slowed in its goal of revolutionizing the traditional qualitative research space. Through the power of innovation and technology, it has continued to introduce its unique platform to brands and their creative partners globally. Delvv.io is committed to radically reshaping what it means to get consumer insight, and continues taking the creativity of the crowd to the cloud as it aims to disrupt the market research sector worldwide.

Like all IIeX Competition winners, Delvv.io is poised to become a major player in the insights space and to further accelerate a rapidly changing global marketplace.

Learn More

Delvv.io has recently launched its self-serve feedback platform, BigTeam, which allows brands and agencies to request feedback from internal employees, customers, and external stakeholders (partners, resellers), and is looking for beta users. They have invited GreenBook blog readers to trial the platform at www.BigTeam.co.

Reaching for the Stars While Standing on a Garbage Pile

Posted by Ron Sellers Monday, July 24, 2017, 10:55 am
There are so many different techniques and approaches available to the consumer insights professional today.  But have we simply lost the ability to do good research, even with all these new options?


By Ron Sellers


I try to stay up on what’s happening within the research industry by participating as a research respondent whenever I can.  I just spent about an hour today responding to a few survey opportunities through a couple of online panels of which I am a member.


It fascinates me how many intense debates there are about the future of the marketing research industry, and about fine points of the research process.  Is big data the future?  Will microsurveys revolutionize research?  Will mobile make online panels obsolete?  Is gamification of surveys good or bad for the industry?

Step away from these high level debates for a moment and try being a participant for a few projects.  You’ll quickly forget the big picture and begin to wonder whether anyone can design something as basic as a moderately decent questionnaire anymore.  The big picture starts to become moot when you realize how poor the little picture is in many instances.

Follow along with me as I attempt to respond to a few surveys.  First of all, once again my inbox was filled with requests for survey participation – five, six, seven a day from some panel companies.  Not a great start (and possibly not great panel companies to rely on for sample – but that’s another topic for another day’s rant).

Then, of course, I hit the survey invitation sent just a few hours ago, where I give them my gender and age (information the panel company already has), and promptly get told that the survey is now closed.  After four hours?  Really?

Let’s not forget the variety of studies for which I don’t qualify, wasting my time as a respondent without any compensation for my efforts, and answering the same five or six demographic questions over and over.  The panel company keeps redirecting me to another survey, and I have to answer those questions again.  Just how many times do I need to give my age and race?  (And yes, as a researcher I understand why this happens – but respondents won’t.)

So I finally qualify for a study.  But before I did, the panel company sent me through their portal, where they asked me pre-qualifying questions for a bunch of different projects.  One of those questions was how often I drink vodka, gin, whiskey, and rum.

Now I’m answering a full questionnaire, and I find it’s about alcohol.  One of the first questions is what types of alcohol I can think of.  Hmmm…I’ve just read a specific question about vodka, gin, whiskey, and rum in the portal two minutes ago.  What are the chances I can immediately think of vodka, gin, whiskey, and rum?  Amazing!

Half of this survey’s response options are in all capitals, and the other half in upper and lower case.  Oh, and the questionnaire was obviously written for the phone and simply programmed into an online version.  How do I know?  The box for “don’t know” actually says “Doesn’t know/Is not aware of any,” and one of the questions starts with, “Now I’m going to read you a few things other people have said after seeing this advertisement.”  I was actually expecting some type of voice reader to give me the options, until I clicked “next” and found it was just that someone was too lazy or incompetent to note that phone surveys and online surveys need different wording.

I finish that survey, and click on an invitation from a different panel.  After a few attempts, I again qualify for a study, but I’m quickly shaking my head when I’m faced with non-exclusive categories to a simple demographic question.  I can say that I have no children of any age, that I have children under 18 in the household, or that I have children 18 or older no longer in the household.  It’s not multiple response.  What do I answer if I have adult children living in my household (as so many people do today)?  Or if I have a teenager at home, but also a 20-year-old away at college?  Fortunately, I only have a ten-year-old, so I, at least, can answer accurately and move on.

It’s a brief questionnaire about flossing my teeth.  First, I’m asked how often I floss (you’ll be so pleased to know it’s daily).  Then, I’m given a number of questions that ignore that answer.  I’m asked where I floss, and one of the options is that the question is irrelevant to me because I don’t floss at all.  I’m also asked why I don’t floss, and one of the options is that the question is irrelevant to me because I actually do.  At this point, I’m wondering whether I’ll get a question about whether I flossed when I was pregnant.

I would love to sit down with the survey designer and introduce him or her to this fabulous new development called “skip patterns” – they mean not everyone has to see every question or option when some don’t apply to them!  (What wonders we now have available to us in research!)

Oh, I almost forgot – on one questionnaire, I was asked to report what state I live in.  I had a lot of trouble with this one, because it was very difficult to find “Arizona.”  You see, someone had apparently read a research primer and learned that randomization of responses can be a good thing, so they actually randomized the order of the states.  I finally found Arizona about three-quarters of the way down the list, right between Oregon and North Dakota.

At this point, I should probably apologize for my sarcasm and snarkiness.  It’s just that I’ve spent enough years in the consumer insights world that I actually care about the industry, and it literally hurts me to see such lack of competence on the very basics of survey design.  My hour spent trying to respond to surveys in a thoughtful, accurate way felt like a massive waste of time.  These aren’t fine points of whether the questionnaire should use a five-point or a seven-point scale or whether the methodology should incorporate discrete choice – these were mistakes that shouldn’t be made after taking just one market research class in college.  These mistakes shouldn’t be made by any professional researcher…yet they are what I see all too frequently in research.

I might even be willing to chalk this up to inexperienced people trying their hands at DIY, except for too many personal experiences and reviews of questionnaires and projects that I know to have been conducted by “professional” researchers.

I often recall the long-time corporate research manager who thought focus group respondents were all employees of the focus group facility who were simply assigned to each night’s groups, or the other corporate research manager who, after 18 months handling primary research for his company, asked a colleague what a focus group was.

I think of the car rental companies and hotels that give me satisfaction surveys and tell me I’m supposed to give them the highest marks possible.

I think of the major research report released by a consulting company claiming that young adults are far more educated than previously thought, then learning that their sample frame was alumni association lists from eight specific Midwestern colleges.  I think of another consulting company releasing a big report about how senior adults give to charities online much more than previously thought, then learning that the entire study was done online.

I think of being asked to tabulate the screeners from a set of focus groups.  I think of the company we did a major study for (to the tune of about $750,000), that couldn’t believe we forgot to put percent signs on all the numbers in the presentation, so they added them and presented it to their customers (even though we clearly explained in the report that those “percentages” were actually R-squared figures).  These are not just mistakes (which everyone makes), but serious competence issues.

In all honesty, I have trouble feeling excited about the “new frontier” of research when so much is being done so poorly on the methods that have been around for decades.  The fundamentals of good research still apply to the new methods just as they did to the old: knowing how to choose a methodology, understanding how to ask a question, knowing what people can and cannot reasonably answer, knowing what statistical methods to use on a database, knowing the validity of the data and how it was gathered, understanding that qualitative research is not statistically projectable, and knowing what a good sample is.

Maybe it’s time to take a step back for a moment in the debate about the future of the industry.  Maybe we need to discuss issues related to basic quality and competence a little bit more, with less of a focus on whether something is new and exciting.  Because if a researcher doesn’t grasp the difference between writing questions for an online survey and a phone survey, what are the chances that researcher will handle facial analysis or galvanic skin response competently?

Why Capability Trumps Character for Supporters of the US President

What do supporters of Donald Trump value in a President? Tim Bock breaks down the data.

Trump delivers an address

By Tim Bock

American supporters of Donald Trump believe that financial skills are more important in a president than decency and ethics, a new survey shows.

Data science app Displayr and survey company Research Now questioned 1,015 adult Americans in July 2017 about their preferences among 16 different characteristics and capabilities relevant to judging the performance of a president. Supporters of Mr. Trump consider an understanding of economics, success in business, and Christianity to be important. People who do not approve of Mr. Trump place much greater store in decency, ethics, concern for the poor, and global warming.


What type of President do most Americans want?

For most people, the most important characteristic in a president is being decent and ethical. This is closely followed by crisis management. An understanding of economics comes in at a distant third place, only half as important as decency and ethics. These three characteristics are collectively as important as the other 13 included in the study (shown in the visualization below).

Capabilities Trump Character

The survey found that people who approve of Trump as president place value on different traits than most people do. This is illustrated in the visualization below, which compares the preferences of people broken down by whether they Approve, have No Opinion, or Disapprove of President Trump’s performance as President.

While most people regard decency and ethics as the most important trait in a president, this characteristic falls into third place for Trump approvers, who instead regard having an understanding of economics and crisis management as more important. For supporters, capabilities trump character.

The largest difference relates to being successful in business. This is the 4th most important characteristic among people who approve of President Trump, but only the 11th most important among disapprovers. In absolute terms, success in business is 11 times more important to Trump approvers than to disapprovers.

The data shows the reverse pattern for experience in government, concern with poverty, concern for minorities, and concern about global warming. All are characteristics that are moderately important to most people but unimportant to those who approve of President Trump.

Finally, there is evidence for the view that those who support President Trump prefer a Traditional American (which was a dog whistle for white), male, Christian, and entertaining president. However, these differences are all at the margin relative to the other differences.


Explore the data

The findings from this study can be explored in this Displayr Document.



Displayr, the data science app, conducted this study. Data collection took place from 30 June to 5 July 2017 among a cross-section of 1,015 adult Americans. Research Now conducted the data collection.

The state-of-the-art max-diff technique was used to measure preferences. This technique asks people to choose the best and worst of five of the characteristics, as shown below. Each person completed 10 such questions, each using a different subset of the 16 characteristics. The data was analyzed using a mixed rank-ordered logit model with ties.
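The mixed rank-ordered logit model used in the study is beyond a short example, but the intuition behind max-diff scoring can be sketched with a simple best-minus-worst counting approximation. The tasks and responses below are hypothetical, and counting is only a rough stand-in for the model actually fitted:

```python
from collections import Counter

# Hypothetical max-diff tasks: each shows five characteristics and the
# respondent picks one "best" and one "worst".
tasks = [
    {"shown": ["Decent/ethical", "Understands economics", "Male",
               "Entertaining", "Multilingual"],
     "best": "Decent/ethical", "worst": "Male"},
    {"shown": ["Decent/ethical", "Good in a crisis", "Christian",
               "Healthy", "Plain-speaking"],
     "best": "Good in a crisis", "worst": "Christian"},
]

best = Counter(t["best"] for t in tasks)
worst = Counter(t["worst"] for t in tasks)
shown = Counter(c for t in tasks for c in t["shown"])

# Best-minus-worst score, normalized by how often each item was shown.
# Ranges from -1 (always picked worst) to +1 (always picked best).
scores = {c: (best[c] - worst[c]) / shown[c] for c in shown}
```

A proper analysis would instead estimate utilities with a choice model, but the counting scores usually rank items in a similar order.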



The percentages shown in the visualizations are importance scores. They add to 100%. Higher values indicate characteristics are more important.

All the differences between the approvers and the rest of the sample are statistically significant, other than for “Good in a Crisis” and “Multilingual”.

The table below shows the wordings of the characteristics used in the questionnaire. The visualizations use abbreviations.

Decent/ethical
Good in a crisis
Concerned about global warming
Entertaining
Plain-speaking
Experienced in government
Concerned about poverty
Male
Healthy
Focuses on minorities
Has served in the military
From a traditional American background
Successful in business
Understands economics
Multilingual
Christian


Jeffrey Henning’s #MRX Top 10: Best Practices for Information Security, Digital Marketing, Incentives, and Predictions

Posted by Leonard Murphy Wednesday, July 19, 2017, 7:19 am
All the news fit to tweet, compiled by Jeffrey Henning and curated by the research community itself.


Of the 2,831 unique links shared on the Twitter #MRX hashtag over the past two weeks, here are 10 of the most retweeted…


  1. The Future Consumer: Households in 2030 – Euromonitor expects 120 million new single-person households to be added over the next 14 years, driven by delayed relationships and the elderly outliving their spouses. Couples with children will be the slowest-growing segment.
  2. Beyond Cyber Security: How to Create an Information Security Culture – Louisa Thistlethwaite of FlexMR offers five tips for market researchers to create an “information security culture”: 1) have senior execs take the lead; 2) include security in corporate objectives; 3) provide creative training; 4) discuss security frequently; and 5) promote transparency not fear.
  3. Best Practices for Digital Marketing in the Market Research Space – Nicole Burford of GutCheck discusses the importance of segmenting your audience to provide the right content for the right people at the right time in their path to purchase.
  4. Top 5 Best Practices for Market Research Incentives – Writing for RWConnect, Kristin Luck discusses best practices for incentives, including tailoring them to the audience being surveyed and delivering them instantly.
  5. 5 Ways B2B Research Can Benefit from Mobile Ethnography – Writing for ESOMAR, Daniel Mullins of B2B International discusses five benefits of mobile ethnography: 1) provide accurate, in-the-moment insights; 2) capture contextual data; 3) develop real-life stories; 4) capture survey data, photos, and videos; and 5) more efficiently conduct ethnographies.
  6. MRS Reissues Sugging Advice in Wake of Tory Probe – The MRS encourages the British public to report “traders and organizations using the guise of research as a means of generating sales (sugging) or fundraising (frugging).”
  7. The Future of Retail Depends on the Millennial Consumer – Writing for TMRE, Jailene Peralta summarizes research showing that increasing student debt and rising unemployment for Millennials are reducing expendable income and decreasing retail shopping by this generational cohort.
  8. Prediction Report Launched by MRS Delphi Group – The Market Research Society has issued a new prediction report, “Prediction and Planning in an Uncertain World,” containing expert takes on the issue and case studies on integrating research into forecasting.
  9. 6 Keys for Conveying What Participants Want to Communicate – Mike Brown of Brainzooming untangles the complexity of reporting employee feedback “comments EXACTLY as they stated them.”
  10. Sampling: A Primer – Kevin Gray interviews Stas Kolenikov of Abt Associates about keys to sampling. On sampling social media, Stas says, “[It’s] a strange world, at least as far as trying to do research goes. Twitter is full of bots and business accounts. Some people have multiple accounts, and may behave differently on them, while other people may only post sporadically. One needs to distinguish the population of tweets, the population of accounts, its subpopulation of accounts that are active, and the population of humans behind these accounts.”

Note: This list is ordered by the relative measure of each link’s influence in the first week it debuted in the weekly Top 5. A link’s influence is a tally of the influence of each Twitter user who shared the link and tagged it #MRX, ignoring retweets from closely related accounts. The following links are excluded: links promoting RTs for prizes, links promoting events in the next week, pages not in English, and links outside of the research industry (sorry, Bollywood).


Jeffrey Henning is the president of Researchscape International, providing custom surveys at standard prices. He volunteers with the Market Research Institute International.

WIRe 2017 Gender Diversity Study

Take the WIRe 2017 Gender Diversity Study!


It’s been three years since WIRe released the results of our first global survey on gender and diversity in the MR industry. In order to track against our baseline data, and measure progress in our industry, we need your help once again.

Please help us out by taking 10-15 minutes to participate in our 2017 Gender Diversity Study. Your feedback is truly invaluable to WIRe and our industry! This survey is mobile optimized and can also be stopped and restarted if more time is needed to submit your feedback. 

Take The Survey

In this update to the 2014 study, we’ll once again be digging into understanding the diversity of work and people in our field—with the ability to measure against the baseline data we previously collected. We will also look to illuminate what progress has been made on improving and providing diverse and supportive work environments in our industry.

Many, many thanks to Lieberman Research Worldwide for their survey design and analytical support and to FocusVision for their programming prowess, as well as to our corporate sponsors (Confirmit, Fieldwork, LinkedIn, Facebook, Hypothesis, Lightspeed, FocusVision, Research Now and Kantar) for their support of this research.

We’ll be sharing the results of the survey in the Fall, including a presentation at ESOMAR Congress.

Please forward! Sharing this survey with others in the industry (both women AND men) will ensure we collect diverse points of view.

Thank you!

Kristin Luck
Founder, WIRe

Causation: The Why Beneath The What

Can market research predict what consumers will do next? Find out in this interview with Kevin Gray and Tyler VanderWeele on causal analysis.

By Kevin Gray and Tyler VanderWeele


A lot of marketing research is aimed at uncovering why consumers do what they do and not just predicting what they’ll do next. Marketing scientist Kevin Gray asks Harvard Professor Tyler VanderWeele about causal analysis, arguably the next frontier in analytics.

Kevin Gray: If we think about it, most of our daily conversations invoke causation, at least informally. We often say things like “I dropped by this store instead of my usual place because I needed to go to the laundry and it was on the way” or “I always buy chocolate ice cream because that’s what my kids like.” First, to get started, can you give us nontechnical definitions of causation and causal analysis?

Tyler VanderWeele: Well, it turns out that there are a number of different contexts in which words like “cause” and “because” are used. Aristotle, in his Physics and again in his Metaphysics, distinguished between what he viewed as four different types of causes: material causes, formal causes, efficient causes, and final causes. Aristotle described the material cause as that out of which the object is made; the formal cause as that into which the object is made; the efficient cause as that which makes the object; and the final cause as that for which the object is made. Each of Aristotle’s “causes” offers some form of explanation or answers a specific question: Out of what? Into what? By whom or what? For what purpose?

The causal inference literature in statistics and in the biomedical and social sciences focuses on what Aristotle called “efficient causes.” Science in general focuses on efficient causes and perhaps, to a certain extent, material and formal causes. We only really use “cause” today to refer to efficient causes and perhaps sometimes final causes. However, when we give explanations like “I always buy chocolate ice cream because that’s what my kids like,” we are talking about human actions and intentions, which Aristotle referred to as final causes. We can try to predict actions, and possibly even reasons, but again the recent developments in the causal inference literature in statistics and the biomedical and social sciences focus more on “efficient causes.” Even such efficient causes are difficult to define precisely. The philosophical literature is full of attempts at a complete characterization (e.g. a necessary and sufficient set of conditions for something to be considered “a cause”), and we arguably still are not there yet.

However, there is relative consensus that there are certain sufficient conditions for something to be “a cause.” These are often tied to counterfactuals: if an outcome would have occurred had a particular event taken place, but would not have occurred had that event not taken place, then this is a sufficient condition for that event to be a cause. Most of the work in the biomedical and social sciences on causal inference has focused on this sufficient condition of counterfactual dependence in thinking about causes. This has essentially been the focus of most “causal analysis”: an analysis of counterfactuals.

KG: Could you give us a very brief history of causal analysis and how our thinking about causation has developed over the years?

TV: In addition to Aristotle above, another major turning point was Hume’s writing on causation which fairly explicitly tied causation to counterfactuals. Hume also questioned whether causation was anything except the properties of spatial and temporal proximity, plus the constant conjunction of that which we called the cause and that which we called the effect, plus some idea in our minds that the cause and effect should occur together. In more contemporary times within the philosophical literature David Lewis’ work on counterfactuals provided a more explicit tie between causation and counterfactuals and similar ideas began to appear in the statistics literature with what we now call the potential outcomes framework, ideas and formal notation suggested by Neyman and further developed by Rubin, Robins, Pearl and others. Most, but not all, contemporary work in the biomedical and social sciences uses this approach and effectively tries to ask if some outcome would be different if the cause of interest itself had been different.

KG: “Correlation is not causation” has become a buzz phrase in the business world recently, though some seem to misinterpret this as implying that any correlation is meaningless. Certainly, however, trying to untangle a complex web of cause-and-effect relationships is usually not easy – unless a machine we’ve designed and built ourselves breaks down, or some analogous situation. What are the key challenges in causal analysis? Can you suggest simple guidelines marketing researchers and data scientists should bear in mind?

TV: One of the central challenges in causal inference is confounding: the possibility that some third factor, prior to both the supposed cause and the supposed effect, is in fact what is responsible for both. Ice cream consumption and murder are correlated, but ice cream probably does not itself increase murder rates. Rather, both go up during summer months. When analyzing data, we try to control for such common causes of the exposure (or treatment, or cause) of interest and the outcome of interest. We often try to statistically control for any variable that precedes and might be related to the supposed cause or the outcome we are studying, to try to rule this possibility out.
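The ice cream example can be simulated to show what controlling for a common cause does. The sketch below uses made-up coefficients, not data from any real study: temperature drives both variables, so they correlate strongly, but the partial correlation after regressing out temperature is near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Temperature is the confounder: it drives both ice cream sales and murders.
temp = rng.normal(20, 8, n)
ice_cream = 2.0 * temp + rng.normal(0, 5, n)
murders = 0.5 * temp + rng.normal(0, 5, n)

# Raw correlation makes it look as if ice cream "causes" murder.
raw_r = np.corrcoef(ice_cream, murders)[0, 1]

def residuals(y, x):
    """Residuals of y after a simple linear regression on x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Control for temperature: correlate the residuals (a partial correlation).
partial_r = np.corrcoef(residuals(ice_cream, temp),
                        residuals(murders, temp))[0, 1]
```

With these coefficients the raw correlation comes out around 0.6 while the partial correlation is essentially zero, which is exactly the confounding story VanderWeele describes.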

However, we generally do not want to control for anything that might be affected by the exposure or cause of interest because these might be on the pathway from cause to effect and could explain the mechanisms for the effect. If that is so, then the cause may still lead to the effect but we simply know more about the mechanisms. I have in fact written a whole book on this topic. But if we are just trying to control for confounding, so as to provide evidence for a cause-effect relationship then we generally only want to control for things preceding both the cause and the effect.

Of course, in practice we can never be certain that we have controlled for everything that precedes and might explain them both; we are never certain that we have controlled for all confounding. It is thus important to carry out sensitivity analysis to assess how strongly an unmeasured confounder would have to be related to both the cause and the effect to explain away a relationship. A colleague and I recently proposed a very simple way to carry this out. We call it the E-value, and we hope it will supplement, in causal analysis, the traditional p-value, which is a measure of evidence that two things are associated, not that they are causally related. I think this sort of sensitivity analysis for unmeasured or uncontrolled confounding is very important in trying to establish causation, and it should be used with much greater frequency.
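The E-value VanderWeele mentions has a simple closed form (VanderWeele & Ding, 2017): for an observed risk ratio RR ≥ 1, E = RR + √(RR × (RR − 1)), with protective effects inverted first. A minimal implementation:

```python
import math

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio: the minimum strength of
    association (on the risk-ratio scale) that an unmeasured confounder
    would need with both the exposure and the outcome to fully explain
    away the observed relationship."""
    if rr < 1:
        rr = 1 / rr  # protective effects: take the reciprocal first
    return rr + math.sqrt(rr * (rr - 1))
```

For example, an observed risk ratio of 2 gives an E-value of about 3.41, meaning a confounder would need roughly 3.4-fold associations with both cause and effect to account for the finding.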

KG: Many scholars in medical research, economics, psychology and other fields have been actively developing methodologies for analyzing causation. Are there differences in the ways causal analysis is approached in different fields?

TV: I previously noted the importance of trying to control for common causes of the supposed cause and the outcome of interest. This is often the approach taken in observational studies in much of the biomedical and social science literature. Sometimes it is possible to randomize the exposure or treatment of interest and this can be a much more powerful way to try to establish causation. This is often considered the gold standard for establishing causation. Many randomized clinical trials in medicine have used this approach and it is also being used with increasing frequency in social science disciplines like psychology and economics.

Sometimes, economists especially, try to use what is called a natural experiment, where it seems as though something is almost randomized by nature. Some of the more popular such techniques are instrumental variables and regression discontinuity designs. These techniques require different types of data, assumptions, and analysis approaches. In general, the approach used will depend on the type of data that is available and whether it is possible to randomize, and this will of course vary by discipline.
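The instrumental variables idea can be illustrated with a toy simulation (synthetic data, made-up coefficients): an instrument that shifts the exposure but affects the outcome only through it lets us recover the causal effect even when an unmeasured confounder biases the naive regression.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

u = rng.normal(size=n)  # unmeasured confounder
z = rng.normal(size=n)  # instrument: moves x, has no direct path to y
x = 1.0 * z + 1.0 * u + rng.normal(size=n)
y = 2.0 * x + 1.0 * u + rng.normal(size=n)  # true causal effect of x is 2

# Naive OLS slope of y on x is biased upward by the confounder u.
cxy = np.cov(x, y)
ols = cxy[0, 1] / cxy[0, 0]

# Simple IV (Wald) estimator: cov(z, y) / cov(z, x) recovers ~2.
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]
```

Here the OLS slope lands near 2.33 because u pushes x and y in the same direction, while the IV estimate sits close to the true effect of 2.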

KG: In your opinion, what are the most promising developments in causal analysis, i.e., what’s big on the horizon?

TV: Some areas that might have exciting developments in the future include causal inference with network data, causal inference with spatial data, causal inference in the context of strategy and game theory, and the bringing together of causal inference and machine learning.

KG: Do Big Data and Artificial Intelligence (AI) have roles in causal analysis?

TV: Certainly. In general, the more data we have, the better off we are in our ability to make inferences. Of course, the amount of data is not the only thing that is relevant. We also care about the quality of the data and the design of the study that was used to generate it. We also must not forget the basic lessons on confounding in the context of big data. I fear many of the principles of causal inference we have learned over the years are sometimes being neglected in the big-data age. Big data is helpful, but the same interpretive principles concerning causation still apply. We do not just want lots of data; rather, the ideal data for causal inference will still include as many possible confounding variables as possible, quality measurements, and longitudinal data collected over time. In all of the discussions about big data we really should be focused on the quantity-quality trade-off.

Machine learning techniques also have an important role in trying to help us understand which variables, of the many possible, are most important to control for in our efforts to rule out confounding. I think this is, and will continue to be, an important application and area of research for machine learning techniques. Hopefully our capacity to draw causal inferences will continue to improve.

KG: Thank you, Tyler!



Kevin Gray is president of Cannon Gray, a marketing science and analytics consultancy.

Tyler VanderWeele is Professor of Epidemiology at Harvard University. He is the author of Explanation in Causal Inference: Methods for Mediation and Interaction and numerous papers on causal analysis.