Bias and the Election: What No One is Talking About*
By Katja E. Cahoon
Since the November 2016 election, much has been written and said about the discrepancy between polling results and the actual outcome. But one topic is almost completely ignored, and I personally realized it the hard (or shall I say, unpleasant) way.
There have been important discussions about sample size and representativeness, and about sampling and non-sampling error. About using better methodologies and approaches that bypass the rational mind and what people say (e.g., Implicit Association Testing). About simpler ways of getting data (e.g., text analytics), and about accounting for respondents' inability or unwillingness to answer a question truthfully (e.g., because the answer might not be politically correct). The latter brings us back to better methodologies and asking unbiased rather than biasing questions. Even the challenges of understanding probabilities and different models have been discussed.
During the excellent ARF / Greenbook Election 2016 Debrief – Research & Analytics Event, many of these topics were discussed by the speakers and panel members. The event offered insightful perspectives on what went right and what went wrong. It is also relevant for market research in general, especially given high product failure rates (the oft-cited 80% figure is an urban myth, but the real rate is still high enough).
One topic was mentioned only in passing by a few of the knowledgeable speakers and panelists, notably Melanie Courtright, EVP at Research Now, and it brings me back to my uncomfortable wake-up call:
Two days after the election, still reeling from the surprise, I had the pleasure of conducting an IDI (in-depth interview) with a delightful man in his forties from the Midwest about a topic related to work and finances. He had a Master’s degree, was a manager at a small company, and was happily married with two young children. He was the kind of dream participant qualitative researchers hope for: open about his life, finances, fears, and hopes; generous with his time; and both thoughtful and able to discuss his emotions. His main concern, reiterated over and over, was providing for his wife and children, coupled with the fear that rising health care and living expenses would make that impossible. My heart went out to him. And then he revealed that he voted for Trump.
It is rare to experience undiluted cognitive dissonance, but that is exactly what I felt. For months on end I had considered Trump voters to be largely confederate-flag-waving poor white males, the uneducated, the disenfranchised: a frame shaped by the media. This man was none of these things (except white and male). I am grateful for my training as a psychotherapist and my experience as a researcher, which helped me catch myself. I was able to keep listening to his challenges with real empathy. After the interview ended, I reflected on the experience and realized: it is not just about having the right data (there were indications that Hillary was not the clear-cut winner most pollsters and publications made her out to be; some, like Michael Moore, correctly called the election for Trump). It is just as much about being able to see all the data and its implications, as opposed to being blinded by our own frames, cognitive biases, and media-generated impressions.
And that is precisely what is not talked about enough: bias impacts ALL of us — not just voters and consumers, but also pollsters, researchers, and of course decision makers. Melanie Courtright put it best: “they were all wrong in the same direction, that indicates a bias.” A “democratic bias,” as Raghavan Mayur, President of TechnoMetrica Market Intelligence, points out. This bias is reflected in the media. It certainly impacted me, and I suspect I am not the only one. This is the “echo chamber.” I should know better as a psychotherapist and highly trained researcher, but as the wonderful Daniel Kahneman puts it in Thinking, Fast and Slow, neither intelligence nor experience protects us from falling prey to cognitive errors.
And decision makers are impacted as well. Overconfidence in the Clinton camp, especially toward the end, points to that. This is certainly not limited to politics; it happens all the time in business as well. Back in the 1970s, Irving Janis wrote his ever-relevant book on groupthink. Without being privy to what exactly led to the systematically faulty forecasting and overconfidence, I suspect aspects of groupthink played a role (indeed, overconfidence is one of its hallmarks).
So, what can we do, in addition to becoming better and better at collecting data? It is well accepted at this point that consumers (and especially voters) are not rational beings and are affected by cognitive errors and biases. But have we – marketers, researchers, and decision makers – truly accepted that we are as well? That acceptance is precisely the first of the three steps to overcoming cognitive bias: truly understanding and accepting that we have biases as much as any other human being. A Mistakes Were Made (But Not by Me) kind of attitude does not serve us here. In the next article, I will dive deeper into the three steps of overcoming cognitive bias.
*The use of hyperbole is deliberate. Of course, some researchers – though very few – are talking about this explicitly.