Using Bubble Charts to Show Significant Relationships and Residuals in Correspondence Analysis

Learn how to utilize bubble charts for clear data visualization in correspondence analysis.


By Tim Bock

While correspondence analysis does a great job of highlighting relationships in large tables, a practical problem is that it emphasizes only the strongest relationships, and sometimes the weaker relationships are of more interest. One of our users (thanks, Katie at JWT!) suggested a solution to this: format the chart to highlight key aspects of the data (e.g., standardized residuals).

Case study: travelers’ concerns about Egypt

The table below shows American travelers’ concerns about different countries (I have analyzed this before in my Palm Trees post). There is too much going on with this table for it to be easy to understand. I have used arrows and colors to highlight interesting patterns based on the standardized residuals, but too many things are highlighted for this to be particularly helpful. This is the classic type of table where correspondence analysis is perfect.

The correspondence analysis of the data is shown below. The two dimensions explain 93% of the variance, which tells us that the map captures the main relationships. However, the map does a poor job of showing how travelers' concerns relate to Egypt and China: both countries sit close to the center of the map, where associations are hard to read. We can do better by adding more information to the visualization. In the rest of the post I focus on improving the view of Egypt.
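Under the hood, the map positions and the "93% of variance" figure come from a singular value decomposition of the table's standardized residuals. Here is a minimal sketch in Python (the post's own computations are done in R inside Displayr, and the small table below is hypothetical, not the survey data):

```python
import numpy as np

# Hypothetical 3x4 contingency table of counts. The survey table in the
# post is larger, but the algebra is identical.
N = np.array([[20, 35, 10, 12],
              [18, 30, 25,  5],
              [ 7, 11, 30, 22]], dtype=float)

P = N / N.sum()            # correspondence matrix
r = P.sum(axis=1)          # row masses
c = P.sum(axis=0)          # column masses

# Standardized residual matrix: S = D_r^{-1/2} (P - r c') D_c^{-1/2}
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))

U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Share of inertia ("variance") explained by each dimension
explained = sv**2 / (sv**2).sum()
print(f"First two dimensions explain {explained[:2].sum():.0%}")

# Principal coordinates: the positions plotted on the map
row_coords = (U * sv) / np.sqrt(r)[:, None]
col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]
```

Points whose coordinates are near the origin on the first two dimensions (like Egypt and China in the post's map) have weak associations with those dimensions, which is exactly why the map struggles to say anything about them.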

Plotting positive standardized residuals

The standardized residuals are shown below. Remembering that positive numbers indicate a positive association between the row and column categories, we can see a few "positive" relationships for Egypt, with Safety being the strongest. As the data is about travelers' concerns, a positive residual indicates a negative issue for Egypt.

Bubbles represent the positive standardized residuals in the plot below. The area of the bubble reveals the strength of the association of the concern with Egypt. This is a lot easier to digest than the residuals. We can easily see that “Safety” stands out as the greatest concern. “Not being understood” and “Friendliness”, the next most important issues, appear trivial relative to “Safety”.
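The residuals behind such a chart are straightforward to compute: (observed minus expected) divided by the square root of expected, with expected counts built from the row and column totals. A sketch with made-up counts (rows are concerns, columns are countries, Egypt first; the numbers are illustrative only). Note that because the bubble's area encodes the residual, the radius must scale with its square root:

```python
import numpy as np

# Made-up counts: rows are concerns, columns are countries (Egypt
# first). Illustrative numbers only, not the survey data from the post.
concerns = ["Safety", "Not being understood", "Friendliness", "Cost"]
counts = np.array([[120,  80],
                   [ 60,  55],
                   [ 50,  48],
                   [ 90, 140]], dtype=float)

total = counts.sum()
expected = np.outer(counts.sum(axis=1), counts.sum(axis=0)) / total
residuals = (counts - expected) / np.sqrt(expected)  # standardized residuals

# Bubble AREA encodes the residual, so the radius scales with its
# square root; negative residuals are clipped out, as in the first
# bubble chart in the post.
egypt = residuals[:, 0]
radius = np.sqrt(egypt.clip(min=0))
for name, res, rad in zip(concerns, egypt, radius):
    print(f"{name:22s} residual={res:+.2f} radius={rad:.2f}")
```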

Adding the raw data to the chart

A limitation of plotting standardized residuals is that they show only the strength of association, which can mislead when some attributes in the analysis are widely held or obscure. A simple remedy is to include the raw data for the brand of interest in the labels. This clears up a likely misinterpretation encouraged by all the earlier charts: they could be read as implying no relationship between "Cost" and Egypt. Yet 44% of people are concerned about the cost of visiting Egypt. There is no positive association only because people are even more concerned about costs for the European countries (you can see this in the original data table, earlier in the post).

Showing positive and negative relationships

The following visualization also shows the negative standardized residuals, drawing the circles in proportion to their absolute values. Blue represents the negative residuals and pink the positive ones. In a more common application, where the correspondence analysis is of positive brand associations, it would be appropriate to reverse this color-coding.

Showing only significant relationships

The final visualization below shows only the significant associations with Egypt. I think it is the best visualization in this post! If you want to understand the data as it relates to Egypt, it is much more compelling than the original table. We can quickly see that "Cost" is a comparative advantage and that Egypt shares its main weaknesses with Mexico. If you want to encourage visitors to Egypt, you could consider positioning it as a competitor to Mexico. (The data comes from a survey run in 2012, so it may be a poor guide to the market today.)
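Filtering to significant associations can be done by treating the standardized residuals as approximately standard normal and keeping only those beyond plus or minus 1.96, the usual 5% two-sided cutoff. A minimal sketch with illustrative residual values (not the survey's):

```python
import numpy as np

# Illustrative standardized residuals for Egypt (not the survey values)
concerns = ["Safety", "Cost", "Friendliness", "Not being understood"]
residuals = np.array([3.4, -2.6, 1.1, 0.8])

# Treating standardized residuals as approximately standard normal,
# values beyond +/-1.96 are significant at the 5% (two-sided) level.
significant = np.abs(residuals) > 1.96
for name, res, sig in zip(concerns, residuals, significant):
    if sig:
        # For concerns data, a positive residual is a weakness
        direction = "weakness" if res > 0 else "strength"
        print(f"{name}: significant {direction} (residual {res:+.1f})")
```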


To see the underlying R code used to create the visualizations in this post, click here, log in to Displayr, and open the relevant document. You can click on any of the visualizations in Displayr and select Properties > R CODE in the Object Inspector to see the underlying code.

I have also written other posts describing how to create these visualizations and the differences in the R code between the plots. One describes how to create them in Displayr, and another describes how to do it in Q.



How IIeX Competition Winners Deliver Qual Insights At Scale

In this follow-up interview with the winner of the Insight Innovation Competition at IIeX EU 2017, CEO and Co-founder Trevor Wolfe discusses the genesis of the platform, the company's business focus, and how they intend to change the role of internal teams in early-stage ideation and iterative design.

Qualitative research is now undergoing the same tech-driven transformation that quantitative research started years ago. Building on trends such as communities, crowdsourcing and automation, many platforms have emerged in recent years to deliver cost, speed, and impact improvements to the traditionally slower and more expensive qual category. This company is the latest example of how entrepreneurs are getting creative in reverse-engineering some of the pain points of qual research, applying tech to change the game, and aligning the new solutions with specific business issues. Their approach is to use a global panel of creatives to rapidly test concepts, and/or to engage them for ideation on new products or campaigns.


The growth of next-generation qualitative platforms that can function as a single solution for a range of needs, while often plugging into larger knowledge management systems, is a boon for end-client insights organizations, agencies, and research partners who need rapid-turnaround qualitative insights. The company has built its business around this need, and the resulting acclaim has been well earned.
The company is based in Johannesburg, South Africa, and launched in March 2016. In that short time it has gained international traction with clients like Unilever, General Motors, L'Oreal, Pernod Ricard and GSK by helping these brands adapt their global creative concepts across multiple markets. SenseCheck, its flagship concept testing and development product, is at the heart of these successful research projects and has proven to be an impactful tool for helping brands achieve creative efficiencies.

They have not slowed in their goal of revolutionizing the traditional qualitative research space. Through innovation and technology, they have continued to introduce their platform to brands and their creative partners globally. They are committed to radically reshaping what it means to get consumer insight, and they continue taking the creativity of the crowd to the cloud as they aim to disrupt the market research sector globally.

Like all IIeX Competition winners, the company is poised to become a major player in the insights space and will further accelerate a rapidly changing global marketplace.

Reaching for the Stars While Standing on a Garbage Pile

Posted by Ron Sellers Monday, July 24, 2017, 10:55 am
There are so many different techniques and approaches available to the consumer insights professional today.  But have we simply lost the ability to do good research, even with all these new options?


By Ron Sellers


I try to stay up on what’s happening within the research industry by participating as a research respondent whenever I can.  I just spent about an hour today responding to a few survey opportunities through a couple of online panels of which I am a member.


It fascinates me how many intense debates there are about the future of the marketing research industry, and about fine points of the research process.  Is big data the future?  Will microsurveys revolutionize research?  Will mobile make online panels obsolete?  Is gamification of surveys good or bad for the industry?

Step away from these high level debates for a moment and try being a participant for a few projects.  You’ll quickly forget the big picture and begin to wonder whether anyone can design something as basic as a moderately decent questionnaire anymore.  The big picture starts to become moot when you realize how poor the little picture is in many instances.

Follow along with me as I attempt to respond to a few surveys.  First of all, once again my inbox was filled with requests for survey participation – five, six, seven a day from some panel companies.  Not a great start (and possibly not great panel companies to rely on for sample – but that’s another topic for another day’s rant.)

Then, of course, I hit the survey invitation sent just a few hours ago, where I give them my gender and age (information the panel company already has), and promptly get told that the survey is now closed.  After four hours?  Really?

Let’s not forget the variety of studies for which I don’t qualify, wasting my time as a respondent without any compensation for my efforts, and answering the same five or six demographic questions over and over.  The panel company keeps redirecting me to another survey, and I have to answer those questions again.  Just how many times do I need to give my age and race?  (And yes, as a researcher I understand why this happens – but respondents won’t.)

So I finally qualify for a study.  But before I did, the panel company sent me through their portal, where they asked me pre-qualifying questions for a bunch of different projects.  One of those questions was how often I drink vodka, gin, whiskey, and rum.

Now I’m answering a full questionnaire, and I find it’s about alcohol.  One of the first questions is what types of alcohol I can think of.  Hmmm…I’ve just read a specific question about vodka, gin, whiskey, and rum in the portal two minutes ago.  What are the chances I can immediately think of vodka, gin, whiskey, and rum?  Amazing!

Half of this survey’s response options are in all capitals, and the other half in upper and lower case.  Oh, and the questionnaire was obviously written for the phone and simply programmed into an online version.  How do I know?  The box for “don’t know” actually says “Doesn’t know/Is not aware of any,” and one of the questions starts with, “Now I’m going to read you a few things other people have said after seeing this advertisement.”  I was actually expecting some type of voice reader to give me the options, until I clicked “next” and found it was just that someone was too lazy or incompetent to note that phone surveys and online surveys need different wording.

I finish that survey, and click on an invitation from a different panel.  After a few attempts, I again qualify for a study, but I’m quickly shaking my head when I’m faced with non-exclusive categories to a simple demographic question.  I can say that I have no children of any age, that I have children under 18 in the household, or that I have children 18 or older no longer in the household.  It’s not multiple response.  What do I answer if I have adult children living in my household (as so many people do today)?  Or if I have a teenager at home, but also a 20-year-old away at college?  Fortunately, I only have a ten-year-old, so I, at least, can answer accurately and move on.

It’s a brief questionnaire about flossing my teeth.  First, I’m asked how often I floss (you’ll be so pleased to know it’s daily).  Then, I’m given a number of questions that ignore that answer.  I’m asked where I floss, and one of the options is that the question is irrelevant to me because I don’t floss at all.  I’m also asked why I don’t floss, and one of the options is that the question is irrelevant to me because I actually do.  At this point, I’m wondering whether I’ll get a question about whether I flossed when I was pregnant.

I would love to sit down with the survey designer and introduce him or her to this fabulous new development called “skip patterns” – they mean not everyone has to see every question or option when some don’t apply to them!  (What wonders we now have available to us in research!)

Oh, I almost forgot – on one questionnaire, I was asked to report what state I live in.  I had a lot of trouble with this one, because it was very difficult to find “Arizona.”  You see, someone had apparently read a research primer and learned that randomization of responses can be a good thing, so they actually randomized the order of the states.  I finally found Arizona about three-quarters of the way down the list, right between Oregon and North Dakota.

At this point, I should probably apologize for my sarcasm and snarkiness.  It’s just that I’ve spent enough years in the consumer insights world that I actually care about the industry, and it literally hurts me to see such lack of competence on the very basics of survey design.  My hour spent trying to respond to surveys in a thoughtful, accurate way felt like a massive waste of time.  These aren’t fine points of whether the questionnaire should use a five-point or a seven-point scale or whether the methodology should incorporate discrete choice – these were mistakes that shouldn’t be made after taking just one market research class in college.  These mistakes shouldn’t be made by any professional researcher…yet they are what I see all too frequently in research.

I might even be willing to chalk this up to inexperienced people trying their hands at DIY, except for too many personal experiences and reviews of questionnaires and projects that I know to have been conducted by “professional” researchers.

I often recall the long-time corporate research manager who thought focus group respondents were all employees of the focus group facility, simply assigned to each night's groups, and another corporate research manager who, after 18 months handling primary research for his company, asked a colleague what a focus group was.

I think of the car rental companies and hotels that give me satisfaction surveys and tell me I’m supposed to give them the highest marks possible.

I think of the major research report released by a consulting company claiming that young adults are far more educated than previously thought, then learning that their sample frame was alumni association lists from eight specific Midwestern colleges.  I think of another consulting company releasing a big report about how senior adults give to charities online much more than previously thought, then learning that the entire study was done online.

I think of being asked to tabulate the screeners from a set of focus groups.  I think of the company we did a major study for (to the tune of about $750,000) that couldn't believe we forgot to put percent signs on all the numbers in the presentation, so they added them and presented it to their customers (even though we clearly explained in the report that those "percentages" were actually R-squared figures).  These are not just mistakes (which everyone makes), but serious competence issues.

In all honesty, I have trouble feeling excited about the "new frontier" of research when so much is being done so poorly with methods that have been around for decades.  The fundamentals of good research still apply to the new methods just as they did to the old: knowing how to choose a methodology, understanding how to ask a question, knowing what people can and cannot reasonably answer, knowing what statistical methods to use on a database, knowing the validity of the data and how it was gathered, understanding that qualitative research is not statistically projectable, and knowing what a good sample is.

Maybe it’s time to take a step back for a moment in the debate about the future of the industry.  Maybe we need to discuss issues related to basic quality and competence a little bit more, with less of a focus on whether something is new and exciting.  Because if a researcher doesn’t grasp the difference between writing questions for an online survey and a phone survey, what are the chances that researcher will handle facial analysis or galvanic skin response competently?

Why Capability Trumps Character for Supporters of the US President

What do supporters of Donald Trump value in a President? Tim Bock breaks down the data.

Trump delivers an address

By Tim Bock

American supporters of Donald Trump believe that financial skills are more important in a president than decency and ethics, a new survey shows.

Data science app Displayr and survey company Research Now questioned 1,015 adult Americans in July 2017 on their preferences among 16 different characteristics and capabilities relevant to judging the performance of a president. Supporters of Mr. Trump consider an understanding of economics, success in business, and Christianity to be important. People who do not approve of Mr. Trump place much greater weight on decency, ethics, and concern for the poor and for global warming.


What type of president do most Americans want?

For most people, the most important characteristic in a president is being decent and ethical. This is closely followed by crisis management. An understanding of economics comes in at a distant third place, only half as important as decency and ethics. These three characteristics are collectively as important as the other 13 included in the study (shown in the visualization below).

Capabilities Trump Character

The survey found that people who approve of Trump as president value different traits than most people do. This is illustrated in the visualization below, which compares the preferences of people broken down by whether they Approve, have No Opinion, or Disapprove of President Trump's performance as President.

While most people regard decency and ethics as the most important trait in a president, this characteristic falls into third place for Trump approvers, who instead regard having an understanding of economics and crisis management as more important. For supporters, capabilities trump character.

The largest difference relates to being successful in business. It is the 4th most important characteristic among people who approve of President Trump, but only the 11th most important among disapprovers. In absolute terms, success in business is 11 times as important to approvers as to disapprovers.

The data shows the reverse pattern for experience in government, concern with poverty, concern for minorities, and global warming: all are characteristics that are moderately important to most people but unimportant to those who approve of President Trump.

Finally, there is evidence for the view that those who support President Trump prefer a Traditional American (which was a dog whistle for white), male, Christian, and entertaining president. However, these differences are all marginal relative to the other differences.


Explore the data

The findings from this study can be explored in this Displayr Document.



Displayr, the data science app, conducted this study. Data collection took place from 30 June to 5 July 2017 among a cross-section of 1,015 adult Americans. Research Now conducted the data collection.

The state-of-the-art MaxDiff technique was used to measure preferences. It asks people to choose the best and worst of five characteristics, as shown below. Each person completed 10 such questions, each using a different subset of the 16 characteristics. The data was analyzed using a mixed rank-ordered logit model with ties.
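As a rough intuition for how MaxDiff responses turn into preference scores, here is a much simpler counting analysis (times chosen best minus times chosen worst, divided by times shown). This is a stand-in for illustration only, not the mixed rank-ordered logit model actually used in the study, and the two example tasks are invented:

```python
from collections import defaultdict

# Toy MaxDiff data: each task shows 5 of the 16 characteristics and the
# respondent picks the best and the worst. The two tasks are invented.
tasks = [
    {"shown": ["Decent/ethical", "Understands economics", "Male",
               "Entertaining", "Healthy"],
     "best": "Decent/ethical", "worst": "Entertaining"},
    {"shown": ["Good in a crisis", "Decent/ethical", "Christian",
               "Multilingual", "Plain-speaking"],
     "best": "Good in a crisis", "worst": "Multilingual"},
]

best = defaultdict(int)
worst = defaultdict(int)
shown = defaultdict(int)
for t in tasks:
    for ch in t["shown"]:
        shown[ch] += 1
    best[t["best"]] += 1
    worst[t["worst"]] += 1

# Counting score: (times best - times worst) / times shown
scores = {ch: (best[ch] - worst[ch]) / shown[ch] for ch in shown}
for ch, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{ch:22s} {s:+.2f}")
```

A logit model generalizes this idea: instead of raw counts, it estimates each characteristic's utility from the full pattern of best and worst choices, allowing preferences to vary across respondents.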



The percentages shown in the visualizations are importance scores. They add to 100%, and higher values indicate more important characteristics.

All the differences between the approvers and the rest of the sample are statistically significant, other than for “Good in a Crisis” and “Multilingual”.

The table below shows the wordings of the characteristics used in the questionnaire. The visualizations use abbreviations.

Decent/ethical
Good in a crisis
Concerned about global warming
Entertaining
Plain-speaking
Experienced in government
Concerned about poverty
Male
Healthy
Focuses on minorities
Has served in the military
From a traditional American background
Successful in business
Understands economics
Multilingual
Christian


Jeffrey Henning’s #MRX Top 10: Best Practices for Information Security, Digital Marketing, Incentives, and Predictions

Posted by Leonard Murphy Wednesday, July 19, 2017, 7:19 am
All the news fit to tweet, compiled by Jeffrey Henning and curated by the research community itself.


Of the 2,831 unique links shared on the Twitter #MRX hashtag over the past two weeks, here are 10 of the most retweeted…


  1. The Future Consumer: Households in 2030 – Euromonitor expects 120 million new single-person households to be added over the next 14 years, driven by delayed relationships and the elderly outliving their spouses. Couples with children will be the slowest-growing segment.
  2. Beyond Cyber Security: How to Create an Information Security Culture – Louisa Thistlethwaite of FlexMR offers five tips for market researchers to create an “information security culture”: 1) have senior execs take the lead; 2) include security in corporate objectives; 3) provide creative training; 4) discuss security frequently; and 5) promote transparency not fear.
  3. Best Practices for Digital Marketing in the Market Research Space – Nicole Burford of GutCheck discusses the importance of segmenting your audience to provide the right content for the right people at the right time in their path to purchase.
  4. Top 5 Best Practices for Market Research Incentives – Writing for RWConnect, Kristin Luck discusses best practices for incentives, including tailoring them to the audience being surveyed and delivering them instantly.
  5. 5 Ways B2B Research Can Benefit from Mobile Ethnography – Writing for ESOMAR, Daniel Mullins of B2B International discusses five benefits of mobile ethnography: 1) provide accurate, in-the-moment insights; 2) capture contextual data; 3) develop real-life stories; 4) capture survey data, photos, and videos; and 5) more efficiently conduct ethnographies.
  6. MRS Reissues Sugging Advice in Wake of Tory Probe – The MRS encourages the British public to report “traders and organizations using the guise of research as a means of generating sales (sugging) or fundraising (frugging).”
  7. The Future of Retail Depends on the Millennial Consumer – Writing for TMRE, Jailene Peralta summarizes research showing that increasing student debt and rising unemployment for Millennials are reducing expendable income and decreasing retail shopping by this generational cohort.
  8. Prediction Report Launched by MRS Delphi Group – The Market Research Society has issued a new prediction report, “Prediction and Planning in an Uncertain World,” containing expert takes on the issue and case studies on integrating research into forecasting.
  9. 6 Keys for Conveying What Participants Want to Communicate – Mike Brown of Brainzooming untangles the complexity of reporting employee feedback “comments EXACTLY as they stated them.”
  10. Sampling: A Primer – Kevin Gray interviews Stas Kolenikov of Abt Associates about keys to sampling. On sampling social media, Stas says, “[It’s] a strange world, at least as far as trying to do research goes. Twitter is full of bots and business accounts. Some people have multiple accounts, and may behave differently on them, while other people may only post sporadically. One needs to distinguish the population of tweets, the population of accounts, its subpopulation of accounts that are active, and the population of humans behind these accounts.”

Note: This list is ordered by the relative measure of each link’s influence in the first week it debuted in the weekly Top 5. A link’s influence is a tally of the influence of each Twitter user who shared the link and tagged it #MRX, ignoring retweets from closely related accounts. The following links are excluded: links promoting RTs for prizes, links promoting events in the next week, pages not in English, and links outside of the research industry (sorry, Bollywood).


Jeffrey Henning is the president of Researchscape International, providing custom surveys at standard prices. He volunteers with the Market Research Institute International.

WIRe 2017 Gender Diversity Study

Take the WIRe 2017 Gender Diversity Study!


It’s been three years since WIRe released the results of our first global survey on gender and diversity in the MR industry. In order to track against our baseline data, and measure progress in our industry, we need your help once again.

Please help us out by taking 10-15 minutes to participate in our 2017 Gender Diversity Study. Your feedback is truly invaluable to WIRe and our industry! This survey is mobile optimized and can also be stopped and restarted if more time is needed to submit your feedback. 

Take The Survey

In this update to the 2014 study, we'll once again dig into the diversity of work and people in our field, with the ability to measure against the baseline data we previously collected. We will also look to illuminate what progress has been made on improving and providing diverse and supportive work environments in our industry.

Many, many thanks to Lieberman Research Worldwide for their survey design and analytical support and to FocusVision for their programming prowess, as well as to our corporate sponsors (Confirmit, Fieldwork, LinkedIn, Facebook, Hypothesis, Lightspeed, FocusVision, Research Now and Kantar) for their support of this research.

We’ll be sharing the results of the survey in the Fall, including a presentation at ESOMAR Congress.

Please forward! Sharing this survey with others in the industry (both women AND men) will ensure we collect diverse points of view.

Thank you!

Kristin Luck
Founder, WIRe

Causation: The Why Beneath The What

Can market research predict what consumers will do next? Find out in this interview with Kevin Gray and Tyler VanderWeele on causal analysis.

By Kevin Gray and Tyler VanderWeele


A lot of marketing research is aimed at uncovering why consumers do what they do and not just predicting what they’ll do next. Marketing scientist Kevin Gray asks Harvard Professor Tyler VanderWeele about causal analysis, arguably the next frontier in analytics.

Kevin Gray: If we think about it, most of our daily conversations invoke causation, at least informally. We often say things like “I dropped by this store instead of my usual place because I needed to go to the laundry and it was on the way” or “I always buy chocolate ice cream because that’s what my kids like.” First, to get started, can you give us nontechnical definitions of causation and causal analysis?

Tyler VanderWeele: Well, it turns out that there are a number of different contexts in which words like "cause" and "because" are used. Aristotle, in his Physics and again in his Metaphysics, distinguished between what he viewed as four different types of causes: material causes, formal causes, efficient causes, and final causes. Aristotle described the material cause as that out of which the object is made; the formal cause as that into which the object is made; the efficient cause as that which makes the object; and the final cause as that for which the object is made. Each of Aristotle's "causes" offers some form of explanation or answers a specific question: Out of what...? Into what...? By whom or what...? For what purpose...?

The causal inference literature in statistics and in the biomedical and social sciences focuses on what Aristotle called "efficient causes." Science in general focuses on efficient causes and perhaps, to a certain extent, material and formal causes. We only really use "cause" today to refer to efficient causes and perhaps sometimes final causes. However, when we give explanations like "I always buy chocolate ice cream because that's what my kids like," we are talking about human actions and intentions, which Aristotle referred to as final causes. We can try to predict actions, and possibly even reasons, but the recent developments in the causal inference literature in statistics and the biomedical and social sciences focus more on efficient causes. Even efficient causes are difficult to define precisely. The philosophical literature is full of attempts at a complete characterization, and arguably we are still not there yet (e.g., a necessary and sufficient set of conditions for something to be considered "a cause").

However, there is relative consensus that there are certain sufficient conditions for something to be "a cause." These are often tied to counterfactuals: if an outcome would have occurred had a particular event taken place, but would not have occurred had that event not taken place, then this counterfactual dependence is a sufficient condition for the event to be a cause. Most of the work on causal inference in the biomedical and social sciences has focused on this sufficient condition of counterfactual dependence in thinking about causes. This has essentially been the focus of most "causal analysis": an analysis of counterfactuals.

KG: Could you give us a very brief history of causal analysis and how our thinking about causation has developed over the years?

TV: In addition to Aristotle, another major turning point was Hume's writing on causation, which fairly explicitly tied causation to counterfactuals. Hume also questioned whether causation was anything beyond spatial and temporal proximity, plus the constant conjunction of that which we call the cause and that which we call the effect, plus some idea in our minds that the cause and effect should occur together. In more contemporary times, within the philosophical literature, David Lewis's work on counterfactuals provided a more explicit tie between causation and counterfactuals, and similar ideas began to appear in the statistics literature as what we now call the potential outcomes framework: ideas and formal notation suggested by Neyman and further developed by Rubin, Robins, Pearl and others. Most, but not all, contemporary work in the biomedical and social sciences uses this approach and effectively tries to ask whether some outcome would be different if the cause of interest had been different.

KG: “Correlation is not causation” has become a buzz phrase in the business world recently, though some seem to misinterpret this as implying that any correlation is meaningless. Certainly, however, trying to untangle a complex web of cause-and-effect relationships is usually not easy – unless a machine we’ve designed and built ourselves breaks down, or some analogous situation. What are the key challenges in causal analysis? Can you suggest simple guidelines marketing researchers and data scientists should bear in mind?

TV: One of the central challenges in causal inference is confounding: the possibility that some third factor, prior to both the supposed cause and the supposed effect, is in fact responsible for both. Ice cream consumption and murder are correlated, but ice cream probably does not itself increase murder rates; rather, both go up during the summer months. When analyzing data, we try to control for such common causes of the exposure (or treatment, or cause of interest) and the outcome of interest. We often try to statistically control for any variable that precedes, and might be related to, the supposed cause or the outcome we are studying, to try to rule this possibility out.

However, we generally do not want to control for anything that might be affected by the exposure or cause of interest, because such variables might be on the pathway from cause to effect and could constitute the mechanisms for the effect. If that is so, then the cause may still lead to the effect; we simply know more about the mechanisms. I have in fact written a whole book on this topic. But if we are just trying to control for confounding, so as to provide evidence for a cause-effect relationship, then we generally only want to control for things preceding both the cause and the effect.

Of course, in practice we can never be certain we have controlled for everything that precedes and might explain them both. We are never certain that we have controlled for all confounding. It is thus important to carry out sensitivity analysis to assess how strongly an unmeasured confounder would have to be related to both the cause and the effect to explain away a relationship. A colleague and I recently proposed a very simple way to carry this out. We call it the E-value, and we hope it will supplement, in causal analysis, the traditional p-value, which is a measure of evidence that two things are associated, not that they are causally related. I think this sort of sensitivity analysis for unmeasured or uncontrolled confounding is very important in trying to establish causation. It should be used with much greater frequency.
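For readers who want to compute it, the E-value for an observed risk ratio RR has a simple closed form (VanderWeele and Ding, 2017): for RR greater than 1, it is RR + sqrt(RR * (RR - 1)); protective risk ratios below 1 are inverted first. A minimal sketch (the function name here is ours, not from any particular package):

```python
import math

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio.

    The minimum strength of association, on the risk-ratio scale, that an
    unmeasured confounder would need with both the exposure and the outcome
    to fully explain away the observed association.
    """
    if rr < 1:               # protective effects: invert first
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

# An observed risk ratio of 2 yields an E-value of about 3.41: a
# confounder would need risk-ratio associations of roughly 3.4 with both
# the cause and the effect to explain the association away.
print(round(e_value(2.0), 2))
```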

KG: Many scholars in medical research, economics, psychology and other fields have been actively developing methodologies for analyzing causation. Are there differences in the ways causal analysis is approached in different fields?

TV: I previously noted the importance of trying to control for common causes of the supposed cause and the outcome of interest. This is often the approach taken in observational studies in much of the biomedical and social science literature. Sometimes it is possible to randomize the exposure or treatment of interest and this can be a much more powerful way to try to establish causation. This is often considered the gold standard for establishing causation. Many randomized clinical trials in medicine have used this approach and it is also being used with increasing frequency in social science disciplines like psychology and economics.

Sometimes, economists especially, try to use what is called a natural experiment, where it seems as though something is almost randomized by nature. Some of the more popular such techniques are instrumental variables and regression discontinuity designs. There are a variety of these techniques, and they require different types of data, assumptions, and analysis approaches. In general, the approach used is going to depend on the type of data that is available and whether it is possible to randomize, and this will of course vary by discipline.

KG: In your opinion, what are the most promising developments in causal analysis, i.e., what’s big on the horizon?

TV: Some areas that might have exciting developments in the future include causal inference with network data, causal inference with spatial data, causal inference in the context of strategy and game theory, and the bringing together of causal inference and machine learning.

KG: Do Big Data and Artificial Intelligence (AI) have roles in causal analysis?

TV: Certainly. In general, the more data we have, the better off we are in our ability to make inferences. Of course, the amount of data is not the only thing that is relevant. We also care about the quality of the data and the design of the study that was used to generate it. We also must not forget the basic lessons on confounding in the context of big data. I fear many of the principles of causal inference we have learned over the years are sometimes being neglected in the big-data age. Big data is helpful, but the same interpretive principles concerning causation still apply. We do not just want lots of data; rather, the ideal data for causal inference will still include as many potential confounding variables as possible, quality measurements, and longitudinal data collected over time. In all of the discussions about big data, we really should be focused on the quantity-quality trade-off.

Machine learning techniques also have an important role in trying to help us understand which variables, of the many possible, are most important to control for in our efforts to rule out confounding. I think this is, and will continue to be, an important application and area of research for machine learning techniques. Hopefully our capacity to draw causal inferences will continue to improve.

KG: Thank you, Tyler!



Kevin Gray is president of Cannon Gray, a marketing science and analytics consultancy.

Tyler VanderWeele is Professor of Epidemiology at Harvard University. He is the author of Explanation in Causal Inference: Methods for Mediation and Interaction and numerous papers on causal analysis.


6 “Back to Basics” Steps Researchers Should Practice

With all the current buzz topics in MR, it's also important to focus on strong fundamentals including sample and data quality.


By Brian Lamar

I’ve been fortunate to work in research for over 20 years and have seen the industry evolve in so many ways. I started at a small market research company in Lexington, Kentucky as a telephone interviewer while I was an undergrad at the University of Kentucky. Each day I came in for work, my manager would brief me on the studies I was going to work on that day, ensuring I knew the questionnaire as well as possible. We’d go through the questionnaire thoroughly, we’d role-play, and she’d point out areas where the client wanted additional focus. The amount of preparation seemed like overkill to me, but I played along and occasionally would have my feedback incorporated into the questionnaire. These meetings went on for a couple of years – every single day. Every single study. And not just with me – all of us interviewers had to go through this process. I think I still have most of a utility questionnaire memorized.

Later, I managed a telephone tracking study at a different company in New York as a project manager. Like most project managers, I found most days very hectic with all of the different tasks you do to support clients. About once a month I would go over to the phone center and monitor the telephone interviews. I would sit in a briefing similar to the ones I went through in my initial role as a telephone interviewer. This briefing was at an entirely different level, though. The supervisor would have an entire room full of interviewers, and they’d review the questionnaire(s) much as I once did, only these interviewers were far more detail-oriented and critical and did a very thorough QC of every study before it launched. They’d not just offer suggestions; they’d campaign for changes and talk about how important it would be to make them. Each month I would receive a lengthy list of changes, and it would be frustrating to go through them and determine which suggestions were important enough to bring to the client’s attention. Typically one or two changes would be made: making the language more consumer-friendly (rather than researcher-friendly), improving the logical flow, and other refinements. Looking back on it, the process that was created long before me added a lot of value to the research.

In 2001, like most clients, this client decided to transition their research from telephone to online, including the tracking study I managed. When we initially moved the work online, we had an entire team of people review the questionnaire and offer insights on its design, keeping new technologies in mind as well as making improvements to the respondent experience. We had internal experts discuss the advantages and disadvantages of online research, and we implemented their recommendations. We did a side-by-side test for over a year and were in constant communication with the client from a questionnaire/design standpoint. Rest assured, we made a lot of mistakes back then and were far from perfect. Sweepstakes as an incentive seem ridiculous nowadays. We transitioned phone surveys to online without pushing back on interview length and didn’t think long-term as much as we should have. But we had a large, diverse group of people who focused on the quality of the research and who advocated for the respondents. Nearly all companies did back then, and the client was very involved in these discussions and decisions. They had transparency throughout the entire process, which made the research more successful.

From 2001 to 2013 (when online research moved from infancy to maturity), I held a variety of positions almost exclusively in online research, from project management to sales to analysis, and was somewhat removed from the quality assurance processes. I know the processes still existed and were important, but I wasn’t as involved. One of my current roles is to assist clients in data quality review. I review data; I review questionnaires; I review research designs at a much broader level than I did early in my career. Instead of managing or seeing 5-10 studies per week, I have the opportunity to review many more than that, across a wide range of objectives and topics. Perhaps it’s the nature of this role, but I feel like the systems we, and lots of companies, put in place back in my telephone and early online research days are now non-existent. I also take a lot of surveys from non-clients, as I’m a member of numerous panels, just to see what new types of innovation and research are in the marketplace. According to a recent GRIT report, about half of all surveys are not designed for mobile devices, which is completely unacceptable. I can personally testify to how frustrating these surveys are. Online research has made a lot of technology investments in the past few years, and many of these innovations have brought improvements. But we certainly haven’t figured out how to best use this technology to improve survey design and the respondent experience – at least not yet.

Unfortunately, I see a lot of bad research, both in my day job evaluating data quality and when I take surveys in my spare time. I see screeners so transparent that any respondent can qualify for the survey, and even surveys with no screeners at all. I’m not sure all researchers understand the importance of a “none of these” option any longer. Respondents routinely answer the same question over and over as they’re routed from sample provider to sample provider. And this bad research isn’t just from companies you would expect – it comes from names all of you have heard of: big brands and big market research companies, along with small businesses and individuals using DIY tools.

At some point along the way, I feel like we’ve lost scrutiny over questionnaire and research design, and that is the point of my writing this. A lot of other people have written similar blogs, and while nothing I say may be unique, it needs to be said over and over again until things improve. I recently heard someone say that “the market has spoken” when discussing sample and data quality, meaning that clients, market research firms, and researchers have accepted lower quality in so many areas. Perhaps as an industry we have, but I feel like a lot of the driving principles of research I described above are now non-existent. Do companies still have a thorough QC process? Do clients review online surveys? How many people are involved in questionnaire design? Just last week I led a round-table discussion on data quality, and multiple brands admitted to not reviewing surveys and not looking at respondent-level data. Honestly, it makes me sad, and if you’ve read this far, you should be sad or angry as well.

Perhaps these data quality controls exist at some companies – I bet at the successful ones they do. I’d love to hear from you, as data quality, and ultimately clients making better business decisions because of survey research, is a goal of mine.

Having said all of this, I can’t discuss all of these challenges without a few words of advice for researchers:

  1. I urge you to take your online surveys. All of them. Have your team take them as well. Have someone not associated with the study or even market research test it as well. I think you’ll be amazed at what you find.
  2. Use technology to assist. Programming companies have done a great job at implementing techniques to help with the data quality process. They can identify/flag speeders. They can summarize data quality questions. They can provide respondent scores based upon open-ended questions. Become familiar with them and utilize them!
  3. Everyone else in the process should take your survey. The client shouldn’t be the only person expected to take the survey; so should the market research firm, the sample team, the analyst, a QC team – everyone involved. Believe me, you’ll make a lot of recommendations around LOI and mobile design if you do this. Join a panel and take a few surveys each week, and odds are, you’ll want to write a blog like this as well.
  4. Know where your sample is coming from and demand transparency. Most sample providers are transparent and will answer any question, but you have to ask. How do they recruit? How does the survey get to the respondent? Are respondents ever routed? Do they prescreen? These are just a few things you should understand about the respondents to your survey.
  5. Ask for respondent satisfaction and feedback. Are you getting feedback from respondents about the survey design? Insights can be obtained this way as well.
  6. Don’t remove yourself from the quality assurance role like I did for so many years. Regardless of where you are in the market research process, make sure you understand the quality steps throughout the entire process and ensure there are no gaps.


A New View On The MR Landscape: RFL 2017 “Global Top 50 Research Organizations”

RFL Communications, Inc. has released its third annual “Global Top 50 Research Organizations” (GT50) ranking based on 2016 revenue results.

For many years the AMA Gold Top 50 Report (formerly the Honomichl Report) and the variation of it used in the ESOMAR Global Market Research Report have been the default view of the size of the research industry. These reports have evolved over the years to encompass an ever-expanding definition of what constitutes market research, but have left some critical gaps by not including sample companies, technology platforms, and organizations such as Google, Facebook, and Equifax – companies that fit within other categories yet have active research divisions that are players in the market. So although incredibly useful and important, I think they are incomplete views of the industry.

Over the past few years a few alternative views have circulated, mostly through the work of organizations such as the MRS, Cambiar Consulting, and even here at GreenBook. However, industry legend Bob Lederer may have gotten closer to what such a list should look like with his own “Top 50 Research Organizations” report. While still somewhat incomplete in my view (for instance, Research Now, SSI, and Google are not listed and should be), it is a more comprehensive ranking and presents an interesting alternative structure.

The new RFL Report is out, and below is the press release with a link to download the report. You can download the AMA Gold report here and the ESOMAR report here.

RFL Communications, Inc. has released its third annual “Global Top 50 Research Organizations” (GT50) ranking based on 2016 revenue results. This unique tabulation first broke industry norms for such a list in 2015 by extending inclusion beyond only dedicated research agencies, and giving consideration to many of the most important research suppliers, plus dynamic and unorthodox research businesses.

Key examples on the 2016 list include Acxiom, Dunnhumby, Tableau Software, Experian, Harte-Hanks and Twitter, among others.

Revenues reported in the “GT50” are sourced from financial filings by public companies, published and confidential sources, and RFL’s own internal estimates.

The third annual edition of the “GT50” contains a surprising shuffling at the top of the industry hierarchy. Optum, a subsidiary of UnitedHealth Group, is listed as the industry’s number one player, displacing the Nielsen Company, the longtime standard-bearer on every research industry ranking.

Click Here for Your PDF Copy

“We were surprised to find Optum’s $7.3 billion revenues from its Data and Analytics division in 2016 were $1 billion larger than Nielsen in the same period of time,” says Bob Lederer, RFL Communications’ President and Publisher. “Optum’s business activities, notably its operations’ research work, validated its presence on the GT50 and its revenues consequently led to their supplanting Nielsen as the largest company in the research industry today.” The Nielsen Company comes in at number two on this year’s list.

There are eight new research organizations in the GT50 this year, led by Optum, Rocket Fuel, and Simon Kucher & Partners. Six research organizations on last year’s GT50 are not on this year’s list. Some of those are due to mergers with other GT50 companies, notably Quintiles and AlphaImpactRx, both of which merged with IMS Health in 2016.

One conspicuous company missing from this year’s ranking is comScore, whose final 2016 financial figures are part of a multi-year auditing and were not available as this report went to press. Two other companies, IBM and Omnicom, were dropped from the 2016 list. In spite of their existence inside public companies, a lack of transparency made it impossible to calculate their research revenues.

RFL’s “Global Top 50 Research Organizations” is available now on the company’s official website. Existing RFL newsletter subscribers should have already received their copies by mail.

Hats off to Bob for continuing to expand our understanding of the market!


Old-School Crosstabs: Obsolete Since 1990, but Still a Great Way to Waste Time and Reduce Quality

Crosstabs have historically been a part of DIY analysis tools; however, researchers can now use statistical tests to automatically screen tables.


By Tim Bock

Difficult to read old-school crosstab

The table above is what I call an old-school crosstab. If you squint, and have seen one of these before, then you can probably read it. The basic design of these has been around since the 1960s.

Matrix printer paper

Originally, these old-school crosstabs were printed on funny green-and-white paper with a landscape orientation, shown to the right. The printers were surprisingly slow, and the ink and paper were expensive. The data processing experts responsible for creating them tended to be very busy. So, these crosstabs were designed with the goal of fitting as much information on each sheet as possible, with multiple questions shown across the top.

Advances in computing have led to a change in work practices. Some researchers still create these tables, but have them in Excel rather than print them. Other researchers have taken advantage of advances in computing and stopped using old-school crosstabs altogether. This post is for researchers who are still using old-school crosstabs. It reviews how three key innovations made these old-school crosstabs obsolete:

  • Improvements in printing and monitors, which permit you to show statistical tests via formatting.
  • Automatic screening of tables based on statistical tests.
  • DIY crosstab software.

At the end of the post I discuss the automation of such analyses using Q, SPSS, and R.

Improvements in printers and monitors

When the old-school crosstabs were invented, printers and computer screens were very limited in their capabilities. Formatting was in black and white – not even grey was possible. The only characters that could be printed in less than a few minutes were those you could find on a typewriter. With such constraints, using letters to highlight significant differences between cells, as done in old-school crosstabs, made a lot of sense.

However, these constraints no longer exist. Sure, an experienced researcher becomes faster at reading tables like the one above, but the process never becomes instinctive. You do not have to take my word on this. Can you remember the key result shown on the table above? My guess is you cannot. Nothing in the table attracts your eye. Rather, the table is something that requires concentration and expertise to digest. For example, to learn that the 18 to 24 year olds are much less likely than older people to feel they are “Extremely close to God”, you need to know that, rather than looking in the column for 18 to 24s, you instead need to scan along the row and look for other columns where the relevant significance letters appear.

Now, contrast this to the table below. Even if you have never seen a table quite like this before, you can quickly deduce that the 18 to 24 year olds are less likely to be “Extremely close to God” than the other age groups. And, you will also quickly work out that we are more likely to feel close to God the older we get, and that the tipping point is around 45. You will also quickly work out that females and poorer people are more likely to think themselves close to God.


Crosstab with sig testing displayed using colors and arrows

The difference between the two tables above is not merely about formatting. The first table is bigger because it includes a whole lot of needless information (I return to this below). The second table is easier to read because it contains less data. It also uses a different style of statistical testing – standardized residuals – which lends itself better to representation via formatting than the traditional statistical tests (the length of the arrows indicates the degree of statistical significance).
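For readers who want to reproduce this style of testing, standardized (Pearson) residuals can be computed directly from any contingency table as (observed − expected) / sqrt(expected). A sketch with a made-up table (the row and column labels are hypothetical, not the survey data above):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical crosstab: rows = closeness to God, columns = age groups
observed = np.array([[30, 45, 60, 80],   # Extremely close
                     [70, 65, 55, 40],   # Somewhat close
                     [40, 30, 25, 20]])  # Not close

chi2, p, dof, expected = chi2_contingency(observed)

# Pearson (standardized) residuals: the sign gives the direction of the
# relationship and the magnitude its strength; |residual| > ~2 is the
# usual rule of thumb for significance, and maps naturally onto arrow
# length or cell shading.
residuals = (observed - expected) / np.sqrt(expected)
```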

We can improve this further still by using a chart instead of a table. The chart below is identical in structure to the table above, except that it uses bars, with these bars shaded according to significance. The key patterns from the previous tables are still easy to see, but they are now easier to spot as they are represented by additional information (i.e., the numbers, the arrow lengths, the bar lengths, and the coloring). We can also now readily see a pattern that was less obvious before: with the exception of people aged 65 or more, all the other demographic groups are more likely to be “Somewhat close” than “Extremely close” to God.

Cross tab with bars


Using statistical tests to automatically screen tables

When people create old-school crosstabs, they never create just one. Instead, they create a deck. Typically, they will crosstab every variable in the study by a number of key variables (e.g., demographics). Many studies have 1,000 or more variables, and usually 5 or so key variables, which means that it is not unusual for 5,000 or more tables to be created. Old-school crosstabs actually consist of multiple tables pushed together, so these 5,000 tables may only equate to 1,000 actual crosstabs. Nevertheless, 1,000 is a lot of crosstabs to read. To appreciate the point, look at the crosstab below. What do we learn from such a table? Unless we went into the study with specific hypotheses about the variables shown in it, it tells us precisely nothing. Why, then, should we even read it? Even glancing at it and turning a page is a waste of time.

Difficult to read old-school crosstab.


However, it is tables like the one below that I suspect are most problematic. How easy would it be to skim this table and fail to see that people with incomes of less than $10,000 are more likely to have no confidence in organized religion? You wouldn’t make that mistake? Imagine you are skim-reading 1,000 such tables late at night with a debrief the next morning.

If you instead use automatic statistical tests to scan through the tables and identify tables that contain significant results, you will never experience this problem. Instead, you can show the user a list of tables that contain significant results. For example, the viewer could be told that “Age” by “Organized Religion” and “Household income” by “Organized religion” are significant, and given hyperlinks to these tables.

Hard-to-read old-school crosstab
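The automatic screening described above takes only a few lines. This is a sketch, not production code: the data frame and column names are hypothetical, and it uses a plain chi-square test as the screening criterion (a real implementation would also want to correct for multiple comparisons across thousands of tables):

```python
import pandas as pd
from scipy.stats import chi2_contingency

def screen_tables(df, banner_vars, alpha=0.05):
    """Crosstab every other variable against each banner (key) variable
    and return only the pairs whose chi-square test is significant."""
    survey_vars = [c for c in df.columns if c not in banner_vars]
    significant = []
    for row_var in survey_vars:
        for col_var in banner_vars:
            table = pd.crosstab(df[row_var], df[col_var])
            if table.shape[0] > 1 and table.shape[1] > 1:
                _, p, _, _ = chi2_contingency(table)
                if p < alpha:
                    significant.append((row_var, col_var, p))
    # The reader gets a short list of tables worth looking at,
    # instead of 1,000 crosstabs to skim.
    return significant
```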


DIY crosstab software

In addition to cramming together too much data and using painful-to-read statistical tests, the old-school crosstabs also show too many statistics. Look at the table above. It shows row percentages, column percentages, counts (labeled as n), averages, column names, and the column comparisons as well.

The reason that such tables are so busy is that in the olden days there was a bottleneck in the creation of tables. There were typically lots of researchers wanting tables, and only a few data processing folk servicing all of them. This meant that when we created our table specs, we tended to create them with all possible analyses in mind. While most of the time we wanted only column percentages, we knew that from time to time it was useful to have row percentages, so we had them put on all our tables. Similarly, having counts on the table was useful if we wanted to compute percentages after merging tables. And averages were useful “just in case” as well.

Creating tables in this way comes at a cost. First, when people ask for tables because they might need them, somebody still spends time creating them: time that is often wasted. And, because the tables contain unnecessary information, reading them requires more work. Then, there is the risk that key results are missed and quality declines. There are two much more productive workflows. One is to give the person doing the interpretation DIY software, leaving them to their own devices. This is increasingly popular, and tends to be how most small companies and consulting-oriented organizations work today. Alternatively, if the company is still keen to have a clear distinction between the people who create the tables and those who interpret them, then the table-creators can create two sets of tables:

  1. Tables that contain the key results that pertain to the central research question.
  2. Tables that contain significant results that may be interesting to the researcher.

If the user of such tables still wants more data, they can create it themselves using the aforementioned DIY analysis tools.
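As an illustration of how little effort DIY crosstabs take today, here is a sketch using pandas (the data and column names are invented). The point is that the person interpreting the data can generate exactly the statistic they need, when they need it:

```python
import pandas as pd

# Invented survey responses
df = pd.DataFrame({
    "closeness": ["Extremely", "Somewhat", "Somewhat", "Extremely",
                  "Not close", "Somewhat"],
    "age_group": ["18-24", "18-24", "25-44", "45-64", "25-44", "45-64"],
})

# Column percentages only: the statistic most analyses actually need
col_pct = pd.crosstab(df["closeness"], df["age_group"],
                      normalize="columns") * 100

# Counts, row percentages, etc. can be generated on demand later,
# rather than crammed onto every table "just in case"
counts = pd.crosstab(df["closeness"], df["age_group"], margins=True)
```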


Nothing in this post is new. (Sorry.) Using formatting to show statistical tests has been around since the late 1980s. The first DIY crosstab tool that I know of was Quantime, launched in 1978. And laser printers have been widely available since the mid-1990s.

Giving up old-school crosstabs is just a case of breaking a habit. A good analogy is smoking: it is a hard habit to kick, but life gets better when you do. I am mindful, though, that it is more than 100 years since the father of time and motion studies, Frank Bunker Gilbreth Sr., worked out that bricklayers could double their productivity with a few simple modifications (e.g., putting the bricks on rolling platforms at waist height), and many of these practices are still not in common use. Of course, most laborers get paid by the hour, so they do not need to improve their productivity.