August 30, 2020

Can Political Polls Really Be Trusted?

When political polls fail to predict the exact outcome of an election, maybe they’re not wrong…maybe we are.

by Ron Sellers

Over the past few election cycles, we’ve witnessed great wailing and gnashing of teeth regarding the supposed inability of the polls to predict the winner and winning margin correctly.  There have been hypotheses that phone research is no longer valid because of low response rates, claims that everyone simply lies in surveys (and therefore that surveys have never been valid), and a variety of other statements to the effect that if political polling is no longer reliable, then maybe no polling or research is reliable.

Perhaps these concerns say more about a failure to understand how research works and how it can be applied appropriately than they do about the actual ability of research to continue to play a critical role in today’s world.

Certainly there are some political polls that are flat-out conducted poorly, just as there is some business research that is misleading garbage.  But many of the supposed “problems” with political polls arise because pollsters, pundits, and media are trying to make research do something it simply cannot do.  Let’s take a quick look at some of the issues, and consider how they also apply to research your own organization may conduct.


Failing to Research the Right People

Recently I saw a front-page newspaper article about a national poll that put Hillary Clinton two points ahead of Donald Trump.  Wait – did we start having a nationwide popular election and no one told me?  If that were the case, my daughter would be learning about Presidents Al Gore and Samuel Tilden in school.  Four separate times, a candidate has won the nationwide popular vote but lost in the Electoral College, which means these national polls are simply measuring voters in the wrong way.

In business, the same issue applies.  A customer satisfaction study is great, but not if it only includes long-term customers.  What about all those former customers who no longer do business with you, or those occasional customers who haven’t been with you that long?  Incorporating all of them is a much truer measure of your customer service.  Careful thought about who you’re researching is just as important as the techniques you’ll use or the questions you’ll ask in the research.


Ignoring Basic Statistics

How many headlines have you read about one candidate “leading” another by two points?  Buried in the article is the fact that the survey’s margin of error is ±3.8 points.  Well, guess what – that “lead” isn’t a lead at all, no matter what the headlines proclaim.  And then we’re surprised when the “trailing” candidate ends up winning by two points?

This problem arises all the time in business:  trying to position one product name as the “clear winner” because it outpolled the alternative 38% to 35%, or building complex statistical models in which a new product launch will succeed at 18% consumer acceptance (the number from the research), but not at 16% acceptance (well within the study’s margin of error).  There are plenty of reports that provide extensive data on subsets of 30 people, or that show all findings with decimal points (as if somehow 53.6% brand awareness is more accurate or relevant than 54%).

All research is subject to some margin of potential sampling error.  In addition, there’s a difference between statistical significance and practical significance.  If two potential logo designs are at 62% and 59% favorability, that difference may be statistically significant – but is it significant enough on a practical level that the ratings alone dictate which one to choose?
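To make the statistics concrete, here is a minimal sketch in Python (my illustration, not any pollster’s actual method) of how a margin of error relates to an apparent lead.  The sample size of 665 is hypothetical, chosen because it yields roughly the ±3.8-point margin mentioned above:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

def lead_is_significant(p1, p2, n, z=1.96):
    """Rough two-proportion check: is the gap between p1 and p2 larger than
    sampling noise?  Simplified: it treats the two shares as independent,
    which slightly understates the error for shares from the same poll."""
    se = math.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n)
    return (p1 - p2) / se > z

n = 665  # hypothetical sample size
print(f"Margin of error at 50%: ±{margin_of_error(0.50, n) * 100:.1f} points")

# The 38%-to-35% "clear winner" example from above:
print("Is 38% vs. 35% a real lead?", lead_is_significant(0.38, 0.35, n))  # False
```

With a sample that size, a three-point gap is well within sampling noise, which is exactly why the 38%-to-35% “clear winner” is no winner at all.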


Believing People Can Forecast Their Future Behavior

Consumers are pretty good at telling us what they think and believe.  They’re not so good at predicting their future behavior.  They may fully intend to buy Crest next time at the grocery store – then they get a great coupon for Colgate, or they see an intriguing package or promotion for a new brand, or they find the store just raised the price on Crest.  Suddenly, their behavior no longer matches their prediction.

Similarly, some voters fully intend to vote but don’t get around to it, forget to mail the absentee ballot on time, or have a sick child on election day.  Even if they do vote, they may lean towards one candidate for months and then decide to switch to the other based on the last attack ad they saw, the latest candidate misstep, or the most recent leak of damaging information.

Pre-election polls are consumer predictions of future behavior, and they must be viewed with an understanding of the limitations of this type of measurement.  Particularly when there are so many independents and swing voters (and, in the 2016 presidential election, so many voters with a negative view of both major candidates), people can change their minds multiple times between the last poll they answered and pulling that lever on the voting machine.

In business, the same understanding of limitations is critical.  When you test advertising, asking people “Will this ad make you more likely to buy the product?” is not a valid measurement; people simply cannot answer this question accurately.  Designing questions that try to box people into a yes/no “Are you going to take this future action?” format often results in unreliable data.  That’s one reason Grey Matter usually measures interest or willingness to consider rather than likelihood to buy; respondents can answer the former far more accurately than they can predict the latter.


Confusing Correlation and Causality

Political polls often make distinctions and predictions by various voting blocs:  women, Latinos, Millennials, evangelicals, etc.  The problem comes when it is assumed that being a member of one of these blocs is actually the factor that determines a person’s voting decisions.

There may be a correlation in the data regarding which groups are supporting which candidates, but can it be determined that being part of a specific group is actually influencing who those people support?  And what happens when different blocs overlap (which they typically do)?  Let’s say women and Millennials are voting more liberal, while evangelicals and Caucasians are voting more conservative.  What happens with the predictions about the vote of a White, evangelical, 26-year-old woman?

Correlation and causality are confused in business research all the time.  Research is wonderful at discovering correlations; for example, we can clearly see a connection between lower incomes and lower levels of education.  It’s the causality that’s a challenge.  Sociologists have been arguing for decades over whether lower-income people lack the resources to achieve higher levels of education or whether less-educated people lack the resources to earn higher incomes.

If your data shows that people who pay more for your product are also more loyal to your brand, is it because some people are willing to pay more because they are more loyal, or is it that people who paid more feel stronger loyalty because they want to justify to themselves how much they paid?  Or is the driving factor something else entirely?

With this data, it could be easy to recommend raising prices to drive stronger brand loyalty, but that could also be incredibly wrong, because you would have inferred causality where it’s entirely possible there is none.
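A tiny simulation makes the trap visible.  In this sketch (purely illustrative; the variables and numbers are invented), a hidden “enthusiasm” factor drives both what people pay and how loyal they are, so price and loyalty correlate strongly even though neither one causes the other:

```python
import random

random.seed(1)
n = 10_000
rows = []
for _ in range(n):
    enthusiasm = random.gauss(0, 1)                    # hidden third factor
    price_paid = 50 + 10 * enthusiasm + random.gauss(0, 5)
    loyalty = 5 + 2 * enthusiasm + random.gauss(0, 1)  # never reads price_paid
    rows.append((price_paid, loyalty))

# Pearson correlation, computed by hand to stay dependency-free
mean_x = sum(x for x, _ in rows) / n
mean_y = sum(y for _, y in rows) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in rows) / n
sd_x = (sum((x - mean_x) ** 2 for x, _ in rows) / n) ** 0.5
sd_y = (sum((y - mean_y) ** 2 for _, y in rows) / n) ** 0.5
print(f"price/loyalty correlation: {cov / (sd_x * sd_y):.2f}")  # about 0.8
```

The data alone cannot distinguish this world from one in which price genuinely drives loyalty, which is exactly what the raise-prices recommendation would get wrong.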


Oversimplifying Complex Issues

Polls often show one candidate “leading” another by something like 45% to 39%.  What happened to the other 16%?  Are they firmly in the camp of a third-party option, or still undecided?

For that matter, how solid are the supposedly decided voters?  There’s a big difference between “definitely voting for Carl Wilson” and “probably voting for Carl Wilson.”  The former is likely pretty well locked in; the latter may well change.

Oversimplifying also applies to defining subgroups.  Political pollsters often prefer quick questions, such as “Are you Catholic?” or “What is your religious preference?”  Problem is, there’s a huge difference between a committed Catholic who attends Mass regularly and someone who was baptized Catholic but hasn’t been to Mass in twenty years.

Gathering and analyzing data is not simple, and treating it as simple leads to misleading results.  Unfortunately, too often qualitative research is considered “art” while quantitative research is considered “science.”  There is plenty of science to good qualitative research, and plenty of art to good quantitative research.  Failing to account for the undecided in a political poll is a good example of oversimplifying research, but there are also many ways to oversimplify while exploring critical business decisions.


Dealing with Reporting Spin

Let’s say Bill Ward led Teresa Davis 48% to 35% last month; this month Ward’s lead is 45% to 40%.  Consider four possible headlines about the polls:

  • Ward Continues to Hold Big Lead over Davis
  • Ward’s Support Declines
  • Davis Makes Big Gains, Closes Gap
  • With 15% Undecided, Race Is Too Close to Call

Each of these headlines would be technically correct, yet each puts an entirely different spin on the findings.  And given that what’s happening in the polls can influence voters, the interpretation of the data can actually impact the election.

Business research is no different.  If 60% are highly satisfied and 40% aren’t, what’s the story – the six out of ten people who are highly satisfied with your product, or the four out of ten who are not?  If your brand awareness has gone from 3% to 6%, is the story that brand awareness has only increased three points, or that it has doubled?  How data is interpreted and reported makes a big difference in how it is ultimately used by your organization.


Trying to Make Research Do What It Can’t

Research is a crucially important tool, but as with any tool, it is only valuable if utilized skillfully.  The best electric drill in the world won’t be much help if you’re trying to use it to paint a wall.

People naturally crave certainty, but research is not a discipline that provides absolute certainty.  Instead, it provides guidance.  That’s not to downplay the value of research in any way; the guidance research provides is vital.  It’s just that too many people expect research to make decisions for them.  Research should never make decisions – it should inform and guide them.  It should make you better at making business decisions.

The same is true of political polls.  They can inform us how people are thinking right now, and why they’re thinking that way.  They can show us trends over time for each candidate’s support.  They can give us guidance for what may happen in the actual election.  But they are not iron-clad predictions of which candidate will win and by exactly how much.  Viewing them as such and then criticizing them when they’re not unerringly right is more an indictment of our desire for a tool that will forecast the future with full certainty than it is a statement on the validity or value of research.


Photo by Andy Feliciotti on Unsplash
