Judging 2016 Predictions: Only 3 Out of 125 Were Right!

Dan Foreman judges the 125 predictions for 2016 we published last year. How did our soothsayers do? Not very well!

 

Editor’s Note: As 2016 comes to a close, we’re going to participate in the time-honored tradition of making predictions for the year ahead (we started with David Sackman’s views yesterday). Before we launch this year’s full crop, though, we thought it would be interesting to look back at the predictions for 2016. Dan Foreman tackled that fun challenge and now unveils the less-than-auspicious results. It turns out that 2016 was just a bad year for the predictive accuracy of most researchers, not just pollsters!

Look for our collection of predictions soon; maybe we’ll do a better job predicting 2017!

 

By Dan Foreman

In my opinion, our collection of thought leaders and soothsayers got it right 3 out of 125 times. By any standard, that’s pretty shocking. But why?

Well, it turns out that, in general, people overstated their positions: predicting things would happen faster than they did, and writing what they hoped would happen rather than what they actually expected to happen.

In his preview last year, Lenny offered a few caveats:

  • None of the contributors are time-travelers or psychics
  • All contributors are inherently biased
  • Some are promoting their own agendas
  • Lenny will put his money on most of them!

Lenny, I hope you didn’t actually put your money on them, or you are going to be out of pocket this season.

Overall, my interpretation is that people wrote down their optimism, or hope, rather than an objective, fact-based analysis leading to a prediction. I suspect most people wrote their predictions in a rush, without applying the analytical and predictive techniques they would bring to their professional judgments.

Napoleon Bonaparte said, “A leader is a dealer in hope.” But I don’t think he was a market researcher.

My own biases are inherent here too. I read each prediction and gave it a straightforward verdict: Yes, that’s true, or No, that didn’t happen. Many of those that fell into the No category will be argued by others to have happened, or to have nearly happened … but somebody has to decide where they go, and that somebody happens to be me today.

Therefore, I publish, in full, each of the 3 predictions that I believe got it right. Congratulations to Jason, Gregg, and Ron. To the rest of you, better luck in 2017!

 

Jason Anderson, President, Insights Meta: 

I expect 2016 to be the year that political polling in the US goes off the rails. A major prediction “miss” would not be shocking, with political campaigning, social media, phone-based polling, and general angst all swirling together in turbulent ways. In the aftermath of what will almost certainly be an endless parade of polling data, the average person is going to question even more the usefulness and accuracy of survey data. Some of those people are clients and collaborators; I expect to be explaining and defending the virtues of our work for most of the year.

 

Gregg Archibald, Managing Partner, Gen2 Advisors:

Legislators and the industry will continue to talk about the need for privacy, and consumers will continue not to care – as evidenced by their behavior. As we have seen since the advent of e-commerce (and even before), convenience trumps privacy.

 

Ron Sellers, President, Grey Matter Research & Consulting, LLC:

The next year will see much more of the same – Luddites fighting any new approach or technology; True Believers telling us how each new approach or technology (largely the ones they sell) will replace surveys/focus groups/IDIs etc. and completely eliminate them from use within a year or two. The reality will be somewhere in-between: good researchers carefully evaluating the new techniques with an eye towards adding them to the tool box while continuing to conduct lots of valuable surveys, focus groups, IDIs, and other traditional approaches, and bad researchers falling in love with some approach (new or traditional) and wanting to use it for everything (but doing it poorly).


3 Responses to “Judging 2016 Predictions: Only 3 Out of 125 Were Right!”

  1. Kevin Jenne says:

    January 4th, 2017 at 11:20 am

    Dan, thanks for doing this; these three gentlemen certainly nailed it. I wonder if you could share a few of the predictions that didn’t pan out (without attribution, of course). It sounds like there are plenty from which to choose. 🙂

  2. Jeff Adler says:

    January 4th, 2017 at 3:27 pm

    I think Ron got it right. I think this will apply to 2017 as well:

    The reality will be somewhere in-between: good researchers carefully evaluating the new techniques with an eye towards adding them to the tool box while continuing to conduct lots of valuable surveys, focus groups, IDIs, and other traditional approaches, and bad researchers falling in love with some approach (new or traditional) and wanting to use it for everything (but doing it poorly).

  3. Julian says:

    January 5th, 2017 at 1:51 am

    I think it’s interesting that, even within these three correct predictions, only the first, Jason Anderson’s, really predicted something novel. The other two were certainly true for 2016, but arguably were equally true for many other past years (both reference an element of ‘continuity’) and will probably be true for 2017 too.
