Biasing Your Research by the Act of Doing Research
Longitudinal studies can influence how people respond to your questions simply by the fact that you have researched them before. And if you’re not careful, this problem can come about even when you’re not doing a longitudinal study.
Editor’s Note: GreenBook’s own resident curmudgeon Ron Sellers offers up a great “meat and potatoes” post today on the inherent dangers of bias in longitudinal research, which raises questions about panels and communities as well.
Although Ron doesn’t get into it here (hopefully he or one of our other authors will soon), it leads to the idea that perhaps a truly unbiased sample is impossible in this era of over-surveyed populations, regardless of the recruitment method. That factor can and should be accounted for in our discipline, but an even bigger question is this: in an always-on digital society where “reality TV” dominates broadcast media, YouTube creates viral sensations, and individuals strive to brand themselves via myriad social media channels, does the whole principle of the observer effect have to be rethought? In effect, our society expects to be observed, and that very expectation changes behavior, thus introducing bias. Perhaps behavioral economics and virtual ethnography hold the key here, since it certainly seems we need to rethink our assumptions about the possibility of achieving a sample free of bias from the observer effect.
Until we have that debate, Ron brings up some great points about what to watch for in more traditional approaches. As always, reading Ron’s musings is well worth the time.
By Ron Sellers
One of the fundamental tenets of research is not to affect the research subjects (and therefore the results) by the simple act of doing the research. For instance, anthropologists often worry that by observing their subjects, they are impacting the behaviors of those subjects.
This is often given as a criticism of focus groups: people may react in unnatural ways when they’re in a room surrounded by microphones, a big mirror, and a professional moderator who’s asking them about their last purchase of bathroom tissue.
Yet a greater – and often overlooked – danger applies to longitudinal studies (in which the exact same respondents are tracked over time).
Years ago, I participated in a mail panel (remember those?). Every month, a new set of questions would arrive in my mailbox. One day, I received a set of questions about automobile advertising – which brands I had ad recall for, what the message of each ad was, etc. I completed it without any problems.
The next month, I got the same questions again. And then again the next month, and the next. At some point, I knew what was coming each month – amongst the questions about pet ownership, allergy medication, and other forgettable issues would be the same set of questions about automotive advertising.
Before long, my awareness of automotive advertising was heightened considerably. I would think, “Oh, there’s a new Pontiac ad – now I can say I saw something for Pontiac this month.” In other words, my advertising recall rose substantially simply because of the research in which I was participating. The researchers were no longer getting real-world responses from me because I had been impacted by the act of completing the research.
This has serious implications for any longitudinal research. Let’s say I’m completing a survey about tea. I’m asked if I’m aware of Lipton, Bigelow, Stash, Tazo, and other brands. I’ve never heard of Stash, and because it piques my curiosity, I look it up. Maybe even buy a box. Maybe even start drinking it regularly (it is pretty good tea).
Six months later, I complete another questionnaire about tea. I can now tell the researchers that not only am I aware of the Stash brand, but I am a regular user of the brand. Because I like Stash, I also have a heightened awareness of the brand’s advertising, so I recall a number of their ads.
Would this impact the research findings? It certainly would if Stash commissioned the research to find out whether their new advertising had the ability to reach people who were unaware of the brand and convert them to product buyers.
While something like this may not happen with very many respondents, the earlier example of tracking my advertising awareness for the same product category month after month very easily could impact a lot of the research participants.
And while it doesn’t involve longitudinal studies, there’s another way this issue can arise when using an online access panel. There are companies that use the same methodology and questions with multiple clients. This is common in advertising research, for example – particularly since each client’s ads can be compared to a set of norms maintained by the research company.
But if that company does a lot of this testing and returns to the same panel respondents over and over, you could be a victim of this priming effect without even conducting a longitudinal study. This happened a few times while Grey Matter Research was evaluating panels for our More Dirty Little Secrets of Online Panel Research report. Our panelists were asked to review advertisements in separate surveys. But the same research company was using the same measurements to test different ads – the only problem was that the same panelists kept getting opportunities to complete these studies. So after reviewing ads for candy, and then for investments a few days later, our panelists knew exactly what would be asked (and therefore what to look for in the ads) when they were asked to evaluate automotive advertising later that week. By returning to the same people over and over for this testing, the research company was influencing their behavior, which in turn influenced the research.
Like most other tools, longitudinal research has its place in the research toolbox. But it should be used only with the understanding that the act of conducting the research may well influence the research itself. If there is no way to avoid or control for that possibility, another methodology may be a better bet for the project.