Hard Hat Stats: Some Common and Uncommon Sense (Part 2)
By Kevin Gray
I’m not a scholar – just a lunch pail guy – but I do have more than 30 years’ experience as a marketing researcher and statistician. I’d like to share some tips I’ve learned on my journey.
Don’t confuse the possible with the plausible, and the plausible with fact.
Learn how to integrate and use many kinds of data, not just what we collect ourselves. This can go a long way toward helping us help our clients. These data may include government and industry statistics as well as clients’ own internal data.
Try to prove yourself wrong! Don’t just go with your gut when making decisions, and check your thinking to see if it is internally consistent and supported by empirical evidence (not cherry-picked data). Longitudinal and time-series data are better suited than cross-sectional data for making causal inferences (e.g., what has worked and what hasn’t). Use experimentation when you can. Causal analysis is a hot topic among researchers and academics in many fields these days and I’ve listed a number of references I’ve found helpful here.
Create standard questions and questionnaire templates for studies that will be repeated frequently in the future. Constantly re-inventing the wheel is inefficient and leads to inconsistent quality.
Don’t ask consumers questions regarding their purchase behavior that are so detailed that no human could be expected to answer them accurately. Don’t ask consumers to rate long lists of values, attitudes and lifestyle statements (“psychographics”) that are unrelated to past, current or future consumer behavior. Check the literature for attitudinal scales that have been shown to work.
Be aware that response patterns in survey research differ by national culture. A 50% top 2 box score might be pretty good in some countries but pretty lousy in others. Employee and customer satisfaction research and NPS can easily fall victim to these cultural differences.
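To make the point concrete, here is a minimal sketch of a top-2-box calculation on invented data for two hypothetical markets. The data and market labels are illustrative only; the takeaway is that the same product can earn very different top-2-box scores purely because of differing response styles.

```python
def top2box(ratings, scale_max=5):
    """Share of ratings in the top two scale points (e.g., 4-5 on a 1-5 scale)."""
    return sum(r >= scale_max - 1 for r in ratings) / len(ratings)

# Invented example data: the same product rated in two markets
market_a = [5, 5, 4, 4, 3, 2, 4, 5, 3, 4]  # more acquiescent response style
market_b = [4, 3, 3, 2, 4, 3, 5, 3, 2, 4]  # more reserved response style

print(top2box(market_a))  # 0.7
print(top2box(market_b))  # 0.4
```

Comparing the two scores directly, without a cultural norm or benchmark, would be misleading.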
Don’t confuse statistical significance with importance. Different beasts. On the other hand, p-values, etc., should not simply be dismissed as meaningless…be mindful of the human tendency to think dichotomously!
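A quick illustration of the significance-versus-importance distinction, using a standard two-proportion z-test on hypothetical numbers: with enormous samples, even a half-point difference in preference is wildly “significant” while being practically trivial.

```python
import math

def two_prop_z(p1, p2, n1, n2):
    """Two-sided z-test for a difference of proportions (pooled standard error)."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    pval = math.erfc(abs(z) / math.sqrt(2))       # two-sided normal p-value
    return z, pval

# Hypothetical: a 0.5-point brand-preference gap measured on huge samples
z, pval = two_prop_z(0.505, 0.500, 1_000_000, 1_000_000)
print(round(z, 1), pval < 0.001)  # highly "significant", practically trivial
```

The p-value here answers “could chance alone produce this gap?”, not “does this gap matter to the business?” – two different questions.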
Understand that a large number of measurements made on a small sample is not the same as having a large sample. While many measurements on a respondent may (or may not) improve measurement precision for that respondent, they do not increase the number of respondents in our sample.
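A small sketch of why this matters, using the textbook standard-error formula on made-up numbers: the precision of a sample mean is driven by the number of respondents, not by how many questions each respondent answers.

```python
import math

def se_of_mean(between_respondent_sd, n_respondents):
    """Standard error of a sample mean: it shrinks with the number of
    RESPONDENTS. Adding more items per respondent may steady each
    person's individual score, but between-person variation still
    limits precision, and that depends on n_respondents."""
    return between_respondent_sd / math.sqrt(n_respondents)

# Hypothetical: between-respondent standard deviation of 1.0 scale point
print(se_of_mean(1.0, 100))  # 0.1
print(se_of_mean(1.0, 400))  # 0.05 -- halving the SE takes 4x the respondents
```

Whether respondents answer 5 items or 50, the denominator that matters for population estimates is still 100.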
Don’t confuse the sampling methodology with the sample. For instance, a polling company may select a sample via a probability sampling method but unless non-response is trivial, the respondents will not be a true probability sample.
Appreciate how important chance events are in our work (and daily lives). I can wholeheartedly recommend David Hand’s book The Improbability Principle to anyone, marketing researcher or not.
Don’t be overawed by the opinions of “thought leaders” or other self-proclaimed authority figures in the business world. In other contexts, Albert Einstein counseled against this. And he was a real Einstein.
Don’t become overly specialized. Any method works well for some kinds of projects but not for others. A true marketing researcher doesn’t just sell canned methodologies and knows how to tailor research to fit their client’s real needs…not just their own infrastructure and sales targets. This requires a broad skill set, though, and we need to identify our weak points and work on them. “Be a jack of all trades and a master of at least two,” to quote one of my mentors.
Look for new ideas outside of MR, not just within. The methods we use today nearly all originated in other disciplines and have diffused into our industry, sometimes very slowly. Outside reading never hurts and there is now a lot online and freely available. There are also many professional associations you might consider joining.
Remember that there are better and worse ways to do the same thing. MR is not consistently best-in-class (let’s be honest!) and this is another reason to look to other disciplines for ideas and guidance and not just rely on our own gurus.
Don’t confuse potential with performance. A new methodology may show great promise but we shouldn’t spend our precious budgets on promise alone. A lot of claims are made these days that turn out to be complete nonsense. Here are a few tips on how to ferret them out.
Hope you find this helpful!