
First, Psychology Studies – Is #MRX Next?


By Zontziry Johnson

On August 27, an article was published in the New York Times detailing the efforts of a team called the Reproducibility Project to replicate findings from psychology studies published in reputable journals (and by reputable, I’m referring to peer-reviewed journals like Science). In short, the results of a number of those studies could not be recreated, casting something of a pall of doubt on any study done by anyone in any field.

Should we really worry?

The first time I read through the article, I worried for anyone doing any type of research and trying to get it published. I’ve worked at a scientific research institution and am familiar with the varying levels of trustworthiness of scientific journals. There’s a reason studies take so long to be published in the most credible journals: they go through a rigorous peer-review process to make sure the study was conducted according to sound scientific principles, a scientific “sniff test,” if you will.

However, closer scrutiny made me wonder a bit about the way the studies were being reproduced. This quote in particular bothered me: “…there could be differences in the design or context of the reproduced work that account for the different findings.” One example cited was a study that was reproduced using women from the United States instead of women from Italy. The findings of the reproduced study were weaker than those of the original; a closer look suggests that cultural differences can certainly play a role in the findings.

What’s the real issue?

I think there are two real issues at play here. The first is how we talk about original studies. Are global inferences being drawn from studies focused on one particular culture? For example, in the study on how women rated men’s attractiveness depending on where the women were in their fertility cycle, which used a sample of women primarily from Italy, are generalizations being made without taking factors such as cultural biases into account? Recently, another study made headlines for finding that, as the headlines put it, “Having children is one of the crappiest things that can happen to an adult.” An actual reading of the study showed, first, that it was about German parents’ experiences with parenthood, and second, that it was looking at why German parents were more likely to have only one child, even if they had expected to have two when they first thought about how many children they wanted. The idea explored was how well supported parents were by their peers and families, and what they expected the parenting experience to be. Those who didn’t have good support in place when they had their first child, and whose experiences didn’t match their expectations, were less likely to have a second child.

So, we need to stop generalizing results, misinterpreting them, and misrepresenting them when talking about them in the media – from well-known media outlets to our own blogs and social media shares.

Second, when a study is being reproduced, well, it should be reproduced, not approximately reproduced. I understand that doing such a thing takes significant time, effort, and money – much like the original studies did, I’m sure. But to be credible, you can’t say you’re going to recreate an apple, end up with a jicama instead (if you haven’t eaten a jicama, the texture and flavor are close to those of some apples), and then declare that the apple wasn’t an apple after all.

Implications for market research

What does this mean for the field of market research? I’ve been thinking about this since reading the NYT article a couple of weeks ago. Here are some of my conclusions.

  • Be sure we’re using sound methodology for our studies. Be up-front when reporting the results, specifically identifying the sample used (again, cultural biases play a role in results) and whether the results are representative of the population being studied. Remember to publish the sample size and the confidence interval for your results (a minimal sketch of that arithmetic follows this list). I think in the current push for faster studies and visual reports, the rigor behind some of the research can be lost, and we can end up with poorly run projects and misleading results.
  • When talking about other studies, be careful of making broad generalizations or misrepresenting the original data.
  • If something you see reported seems a bit outlandish, or very surprising, go check the original source of data.
  • Don’t just re-share a headline because it seemed interesting or because it’s gone viral. Read the source material. Too often, items are reshared on social media or commented on without anyone taking the time to read the original source, and conclusions are drawn from others’ comments rather than from the item that was actually shared.
  • Some studies in market research won’t be reproducible simply because we are often measuring changing perceptions among audiences. Thanks to a variety of factors – marketing campaigns, market influences, and so on – those perceptions are likely to have changed by the time the same study is conducted again, even if it’s run among the exact same respondents as the original. I don’t think even trackers could be reproduced, for this very reason; they typically track changes in an audience, from changes in satisfaction to changes in perception to changes in behavior.
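
As a minimal sketch of the sample-size and confidence-interval arithmetic mentioned in the first bullet: the numbers below are hypothetical, the helper function is mine, and it uses the standard normal (Wald) approximation for a proportion. It is meant only to illustrate the kind of detail worth publishing alongside a result.

    import math

    def proportion_confidence_interval(successes, sample_size, z=1.96):
        # Normal-approximation (Wald) confidence interval for a survey proportion.
        # z defaults to 1.96, i.e. a 95% confidence level.
        p_hat = successes / sample_size
        margin = z * math.sqrt(p_hat * (1 - p_hat) / sample_size)
        return p_hat, max(0.0, p_hat - margin), min(1.0, p_hat + margin)

    # Hypothetical example: 312 of 600 respondents agree with a statement.
    p, low, high = proportion_confidence_interval(312, 600)
    print(f"{p:.1%} agree, 95% CI roughly {low:.1%} to {high:.1%}")  # about 48.0% to 56.0%

Reporting the 600 and the roughly ±4-point interval, not just the “52% agree” headline, is exactly the kind of transparency that makes a result checkable.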

In short, do good research and take the time to review claims before passing them along. Let’s be good stewards of our own and of others’ data.


4 responses to “First, Psychology Studies – Is #MRX Next?”

  1. Dear Zontziry:

    Thank you for a good article and a better warning. The idea of credible studies that can stand the “sniff” test is becoming a real problem. In the past, organizations like CASRO and AAPOR published rules and regulations for publicly releasing surveys. Although these organizations had no legal power to enforce these standards and guidelines, they could expel an individual or member firm for falsifying study results. I remember when AAPOR censured a pollster in the NY Times, who is now a regular commentator on CBS, for refusing to describe the methods he used on a political poll. Unfortunately, with online panel and cell phone studies the ability to lie or mislead has gotten even worse. One reason polls are more and more suspect is that many fall into the category of “push-poll” surveys with a pre-determined set of results in mind. There are also polls that are really selling under the guise of objective research, and fund-raising under the guise of actual research. Unfortunately, probably fearing lawsuits, no organization or trade association seems to enforce values, ethics, or its own standards and guidelines anymore. This has resulted in most people not believing in the objectivity of poll results.

  2. One needs to be careful here. Most of the really bad stuff out there in academia and medical research was outright fraudulent. It’s hard to see how that carries over into market research, where reproducibility is rarely ever practiced, given the professional expectations placed on suppliers and fairly transparent processes.

  3. I’ve been concerned about this hitting MR for quite some time. Thank you for raising this issue publicly! Reproducibility is a tough topic and I personally favor (well-conducted) meta-analyses. Meta-analysis itself is complex, and here is a brief overview of it:

    https://www.linkedin.com/pulse/article/meta-analysis-marketing-research-kevin-gray

    By chance 🙂 I’d written a comment on this post about reproducibility:

    “Some of the commentary I’ve read in the media regarding studies failing to replicate suggests a lack of understanding of meta-analysis and inferential statistics. Most population effect sizes are small, and sample sizes are often small, thus many results will be “insignificant” even with non-zero effect sizes. Small effect sizes can have big consequences, however, and I don’t think they should be ignored out of hand. In our line of work, as we know, small differences can translate into big bucks (though not always).”
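
    To put the commenter’s point about small effects and small samples in concrete terms, here is a rough back-of-the-envelope sketch. The effect size and sample size are hypothetical, and it uses a standard normal approximation for two-sample power; it is not taken from the linked article.

        import math

        def normal_cdf(x):
            # Standard normal CDF via the error function
            return 0.5 * (1 + math.erf(x / math.sqrt(2)))

        def approx_power_two_sample(effect_size, n_per_group, alpha_z=1.96):
            # Normal-approximation power for a two-sided, two-sample test at alpha = .05
            noncentrality = effect_size * math.sqrt(n_per_group / 2)
            return normal_cdf(noncentrality - alpha_z) + normal_cdf(-noncentrality - alpha_z)

        # Hypothetical: a true but small effect (d = 0.2) studied with 50 people per group
        print(f"{approx_power_two_sample(0.2, 50):.0%}")  # roughly 17%

    In other words, even when a small effect is real, only around one in six such small-sample studies would come back “significant,” which is exactly the replication scenario described above.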
