Highlights from the Greenbook/ARF Webinar: Predicting Election 2016 – What Worked, What Didn’t and the Implications for Marketing & Insights
Editor’s Note: Last week’s “pop-up event” that we put on with our friends at the ARF generated a ton of interest, and rightfully so: the election results and the narrative around polling misses leading up to them have thrust research front and center into the national spotlight, and not in a good way. The potential blowback on commercial research, especially survey-based research, has already begun, and it’s imperative that we as an industry get ahead of the story and show the great work we do. It’s also an opportunity for commercial research to help our colleagues in political and public opinion research learn that the researcher toolbox consists of many tools that can deliver greater nuance, depth and accuracy to the good work they do.
That is why we’re devoting two days of coverage to the event, starting with yesterday’s post by Tony Jarvis and wrapping up with today’s review by Jeni Lee Chapman and Larry Friedman. This is the start of an ongoing dialogue and we think the industry as a whole needs to be kept fully in the loop on our efforts to move this forward.
If you weren’t able to join the webcast of the event, you can watch the entire thing here.
By Jeni Lee Chapman & Larry Friedman, Ph.D.
Right before the election, all (or nearly all) of the leading polling organizations predicted that Hillary Clinton would defeat Donald Trump, putting her likelihood of victory anywhere from 70% to 99%. The primal scream of pain from the industry since those predictions were proven false is still resonating, and will likely do so for quite some time.
The post-mortems have begun, and ARF and Greenbook should be commended for producing a program on Nov. 29 in NYC with leading lights in the polling and broader market research world to start examining what went wrong, what it might mean for market research more generally, and what needs to be changed for the future.
The session was in two parts. The first part featured brief individual presentations from market researchers who argued that combining the survey polling data with other kinds of data showed that Trump was actually in better position than polls alone indicated. The argument is similar to what many in the broader industry have been arguing for some time now, that traditional survey research by itself isn’t enough, and other kinds of data need to be used as well. This is a theme that runs through conferences like IIeX.
Ironically enough, a recurring theme across these presentations was that the old Bill Clinton campaign mantra, “It’s the economy, stupid,” captured one of the key factors in Hillary Clinton’s losing bid for the presidency.
Jared Schreiber – Co-Founder & CEO of InfoScout – shared data combining attitudinal responses with the actual purchase behavior InfoScout tracks from the same people. Trump voters and undecideds reported at much higher rates than Clinton voters that “they were spending more on groceries in the last 6 months,” even though the actual shopping data showed their grocery spending had dropped 6%. This finding suggests that undecideds shared the same economic anxieties as Trump voters, and were more likely to break Trump’s way when it came time to actually vote – which it seems they did.
InfoScout’s key takeaways – lessons polling companies and campaigns can learn from examining shopping data linked to attitudes:
- Market to the masses, not only your base.
- Brand distinctiveness – stand clearly for something. This came up in Tom Anderson’s presentation, where he shared data collected from an open-ended question and analyzed with text analytics. The results showed which key issues and traits were associated with each candidate, revealing Trump’s stronger association with economic issues and clearer positioning among both urban and rural voters. The article and data were shared in an earlier Wonk / Greenbook post.
- Grab attention and get noticed: Much data has shown that through his use of Twitter and a direct media strategy, Trump controlled the conversation and was mentioned far more often than Hillary Clinton. He defined the issues of the campaign.
Aaron Reid from Sentient Decision Science shared how techniques that measure implicit attitudes can get you closer to what people will actually do, rather than just what they say they will do. In the Wisconsin primary in particular, there was a huge gap: many who said they would vote for Clinton then did not.
Tom Anderson from Odin Analytics showed how much insight a single question can deliver when text analytics are applied to open-ended responses. He looked not only at what was said about each candidate but also at how long respondents took to write in their responses. Tom made the case that text analytics could help address non-response bias: the approach scales to large samples from rural and non-rural areas alike, and identifies people’s real voting intentions through their unfiltered answers to open-ended questions.
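To make the approach concrete, here is a minimal sketch of the kind of term-frequency analysis that text analytics tools apply to open-ended responses. The responses, stopword list, and `top_terms` helper are all invented for illustration; they do not represent Tom Anderson’s actual data or methodology.

```python
from collections import Counter
import re

# Hypothetical open-ended responses, grouped by the candidate discussed
# (invented examples, not real survey data)
responses = {
    "Trump": ["jobs and the economy", "trade deals cost us jobs", "economy first"],
    "Clinton": ["experience in government", "foreign policy experience", "steady leadership"],
}

STOPWORDS = {"and", "the", "in", "us", "cost"}

def top_terms(texts, n=3):
    """Count non-stopword terms across a set of open-ended responses."""
    words = [w for t in texts for w in re.findall(r"[a-z]+", t.lower())
             if w not in STOPWORDS]
    return Counter(words).most_common(n)

for candidate, texts in responses.items():
    print(candidate, top_terms(texts))
```

Even this toy version surfaces the pattern the presentation described: economic terms cluster around one candidate, experience-related terms around the other.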
Taylor Schreiner – VP, Research, from TubeMogul, which focuses on programmatic buying, shared the benefits of testing in real time: with Facebook, for example, you can run a whole series of A/B tests to determine which content strategies are working.
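As an illustration of the kind of A/B comparison described above, here is a generic two-proportion z-test in plain Python. The click counts are hypothetical, and this is a standard statistical sketch, not TubeMogul’s or Facebook’s actual tooling.

```python
from math import sqrt, erf

def ab_test(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-test: is variant B's click rate different from A's?"""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided p-value
    return z, p_value

# Hypothetical creative test: 120 clicks of 2,000 impressions vs 170 of 2,000
z, p = ab_test(120, 2000, 170, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these invented numbers the difference is statistically significant, which is the signal a real-time testing pipeline would act on by shifting spend toward the winning creative.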
Main Takeaways – Panel Discussion
Chris Bacon from the ARF did a wonderful job moderating the panel. But regardless of moderating talent, it was hard not to feel the pain on stage. At some moments it was defensive – “I was the only one to get it right,” as Raghavan Mayur, President of TechnoMetrica Market Intelligence, stated – and at other moments confessional, as when Gary Langer, former Director of Polling at ABC News, read out a detailed explanation of all the value the polls did provide. The panel session ultimately highlighted a number of important points about the future of presidential election polling and its role in predicting results. In a moment almost reminiscent of the Frost/Nixon interviews after Watergate, it wasn’t until nearly the very end of the session that Cliff Young, President of Ipsos Public Affairs, became the first panelist brave enough to flat out say, “We got it wrong.”
So what are the polling firms thinking about doing differently?
- De-emphasizing the national horse race numbers. For the second time since 2000, the winner of the national popular vote did not win the Electoral College. The media needs to place much more emphasis on understanding individual battleground states before making pronouncements about who is ahead during the campaign. Further, as Gary Langer argued, so much of the value of survey research lies in understanding what is behind voters’ choices.
- Cliff Young from Ipsos made some great points about the need to improve predictive voter turnout models. Presidential elections are unique in that they happen only once every four years, and only something like 50-60% of eligible voters actually turn out. You never know who is actually coming out until they do. If your assumptions about the composition of the turnout differ from the actual turnout (as seems to have happened this year), your “final” poll numbers will be off. As discussed in the panel, perhaps incorporating surrogate measures of candidate “enthusiasm” and “inspiration” would improve the models.
- Research Now’s Melanie Courtright – EVP, Global Client Services – made the important point that it all starts with the right sample frame and sample design. If you are going to predict elections, state-level data is critical, and within each state you need a proper representation of the populations there – including rural areas, minority communities, etc. Given that these groups tend not to respond to phone or even online surveys, she encouraged the polling companies to really start planning and implementing multi-mode techniques to achieve solid sample sizes that represent the correct sample frame for that state and the counties within it. This discussion linked to the points around the impact of non-response rates: when you get 90% non-response from a sample frame, those who don’t answer become more interesting than those who do answer the survey.
- From inside the Trump campaign, Matthew Oczkowski – Head of Product, Cambridge Analytica – shared that campaigns have access to far more data than outside pollsters do. The key takeaway: for each candidate they work with, they always build the model from scratch. Their job is to get candidates elected, and you cannot rely on what happened last time around, as each candidate brings unique qualities to the election process. Trump is nothing if not unique.
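Cliff Young’s point about turnout models can be made concrete with a toy weighting calculation (all numbers are invented for illustration, not actual 2016 figures): the same within-group support levels produce different topline results depending on the assumed turnout composition.

```python
# Hypothetical within-group candidate support, by education (invented numbers)
support = {"college":     {"Clinton": 0.60, "Trump": 0.40},
           "non_college": {"Clinton": 0.40, "Trump": 0.60}}

def topline(turnout_shares):
    """Weight within-group support by an assumed turnout composition."""
    return {cand: sum(turnout_shares[g] * support[g][cand] for g in support)
            for cand in ("Clinton", "Trump")}

# Pollster's assumed electorate vs. a higher non-college turnout on election day
assumed = topline({"college": 0.55, "non_college": 0.45})
actual  = topline({"college": 0.45, "non_college": 0.55})
print("assumed model:", assumed)   # Clinton ahead
print("actual turnout:", actual)   # Trump ahead
```

Nothing about voters’ stated preferences changed between the two scenarios; only the turnout composition did, yet the predicted winner flips. This is exactly why a mistaken turnout model can leave the “final” poll numbers off.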
Great session; some spirited discussion, but overall an opportunity for knowledge sharing that was thoughtful and productive. No doubt there will be other such sessions in the coming months. Academic political scientists, once they get a chance to really dig into the data, will be able to make valuable contributions – how about the companies that presented and sat on the panel making their data available to a consortium of academics? The issues are too important for proprietary reasons to get in the way.
Larry Friedman, Ph.D. – Senior Advisor, Larry Friedman Market Research Advisory Services, LLC
Dr. Friedman has had a 35-year career in market research, working on both the client and supplier sides of the business. He spent the last 25 years of his career at TNS (and various predecessor companies) in various positions before retiring in 2015, including Global Head of Brand and Communications Research and Chief Research Officer for North America. He has led large divisions with P&L responsibility, developed a number of cutting-edge techniques, and has consulted widely with senior-level marketing executives on the strategic business implications of market research, which he continues to do. He has published widely, and has spoken before many leading industry conferences. He holds a Ph.D. in Social Psychology from Columbia University.
Jeni Lee Chapman
Jeni Lee Chapman is a veteran executive in the brand research and PR / communications area, having worked, along with her phenomenal team, building businesses in the syndicated and custom brand research space. Jeni spent over a decade at Kantar, first at NFO and then leading the Brand and Communications practice at TNS, where she established an early-stage framework for the creation of a cross-media measurement platform. Most recently, Jeni managed the US business for a global, SaaS-based media intelligence firm and was part of the executive team that took it to a successful sale. She started her career in Madrid, Spain, heading up the international practice of the leading market research and polling company in Spain – Demoscopia. One of her claims to fame is having run with the bulls – not by choice – in a small town in Spain!