
Getting Back To Basics: Mixed Modal Is Garbage

Mixing data collection modes is bad research and no amount of convenience or expediency will change that.

 

By Paul Richard McCullough

Mixed modal data collection, that is, using two or more data collection methods (e.g., phone and online) to collect data from the same sample population, is not merely bad research, it's gag-awful research.  Anyone who tells you differently is selling something.

Commercial marketing research is an interesting business.  Like most businesses, we focus almost all our time and energy on execution, on implementation.  We’re all about getting the job done.  But we’re also about finding truth.  Or as close to truth as we can afford.

Marketing researchers are like data engineers.  We work hard to build our bridges and our skyscrapers as fast and as economically as possible.  But it's also important that they don't fall down.  Fast and cheap is important, but it's not enough.  In marketing research, bridges and skyscrapers are built with accurate, valid data.

Mixed Modal is tempting, especially when you’re faced with a difficult field job.  You’ve already sold the project in, perhaps, and then found out you can’t get all the completes you need online.  If you switch completely to phone or face-to-face, your budget will blow sky high.  Maybe you can supplement your cheaper online data with a minimum number of more expensive phone interviews.  This is done all the time, right?  Right.  Besides, nobody will even know.  WRONG!  Your data will know.  And your data will suck.

The problem is we do this all the time.  One of the largest sample vendors in the world proudly boasts of "reaching respondents … via Internet, telephone, mobile/wireless, and mixed-access offerings" (emphasis added).  Mixed mode is a standard solution to difficult field problems.  I understand the temptation.  But the need for mixed mode to be a valid solution does not make it a valid solution.  It does not work.  Find another solution.

I recently had a client call me to ask my advice on a problem he was facing.  He had collected brand imagery data online and supplemented it with some phone interviews.  His problem was that the data from the two modes were completely different.  Not close.  His question to me was two-part: 1) had I encountered this before, and 2) did I know of a valid way to adjust the data so that the two data sets could be justifiably pooled?  I had and I didn't.

I am privileged to know personally some of the brightest marketing scientists in the world.  So I sent an email out to my little community of brainiacs and asked them the two-part question my client had asked me.  The responses I received were uniform, surprising and disappointing.  Yes, they had each encountered such problems and no, they had no idea how to deal with them.  No idea!  Smartest people in research.   No idea.  Where does that leave you and me?

For a recent Customer Sat Study, my client suggested supplementing the online sample with telephone interviews of some larger customers.  The idea was to ensure that key accounts would be represented in the final analysis.  The phone data were so divergent from the online results that our only option was to report them completely separately.  A rationale could reasonably be constructed to explain the differences based on the differences between key accounts and regular customers.  An equally credible rationale could be constructed to explain the differences based on the impact of a personal interview versus an anonymous one.  In fact, the differences in sample profile are confounded with the differences in data collection mode, so it is impossible to know why the data are different.  We couldn't, in good conscience, pool the data.  We were forced to report them separately, as if they were two separate studies.  More work for us and less satisfying for the client.

Another problem is that, more often than not, the researcher chooses to pool the data from the two (or more) data collection modes, so no one ever knows how much bias has been introduced into the data, because no one has bothered to look.  Pooling mixed mode data is like the little kid who puts his hands over his ears and starts yelling so he doesn't have to hear his mother telling him to clean up his room.
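
Looking is not even hard.  As a minimal sketch of what a pre-pooling check could look like, in Python, with purely illustrative data, thresholds, and variable names (a real check should be built around the study's own key measures):

    # Pre-pooling check: compare the two mode samples on a key measure and
    # only pool if they are statistically and practically indistinguishable.
    # All thresholds and data below are illustrative, not a standard.
    import numpy as np
    from scipy import stats

    def safe_to_pool(online, phone, alpha=0.05, max_effect=0.2):
        """True only if the two mode samples look interchangeable.

        online, phone: 1-D arrays of responses on the same scale.
        alpha:         significance level for the two-sample t-test.
        max_effect:    largest tolerable standardized mean difference.
        """
        t_stat, p_value = stats.ttest_ind(online, phone, equal_var=False)
        pooled_sd = np.sqrt((online.var(ddof=1) + phone.var(ddof=1)) / 2)
        cohens_d = abs(online.mean() - phone.mean()) / pooled_sd
        return p_value > alpha and cohens_d < max_effect

    # Example: a one-point mode gap on a 10-point scale fails the check.
    rng = np.random.default_rng(0)
    online = rng.normal(6.0, 2.0, 400)  # satisfaction collected online
    phone = rng.normal(7.0, 2.0, 60)    # same question, interviewer on the line
    print(safe_to_pool(online, phone))  # False: don't pool, report separately

If a check like this fails, and in my experience it usually will when a mode effect is present, the honest options are the ones above: report the modes separately or don't mix them at all.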

Mixing data collection modes is bad research and no amount of convenience or expediency will change that.  Either don’t do it or put your hands over your ears and start yelling.


12 Responses to “Getting Back To Basics: Mixed Modal Is Garbage”

  1. Scott Weinberg says:

    March 13th, 2014 at 12:43 pm

    Thank you for writing this. Thank you for having the courage to write it. I nodded throughout as I recalled witnessing this I don't know how many times. At a minimum, the method bias inherent in every methodology alone makes the idea of data blending absurd. Why do MR pros, fieldwork firms, etc. believe they can throw it all into the blender and then bake uniform brownies? This scenario is one of the elephants in the room I hope more of us shine a spotlight on.

  2. Ariel Spigelman says:

    March 13th, 2014 at 7:39 pm

    I'm afraid the author of this article is introducing a false dichotomy and rather bluntly insisting on the unqualified evils of mixed mode surveys without considering the inherent nuances of the issue.

    The first problem is that no theoretical reasoning is produced to show why MM is ALWAYS problematic as the author asserts. And cherry-picking empirical evidence to back up his point doesn’t help: for sure there will be MM studies with discrepancies between each mode’s data, but I have also come across many studies in the literature and in my career where the differences are nonexistent or negligible. Are these latter studies equally unacceptable in spite of this?

    Another problem is that the author considers all modes, and more importantly all mode mixing options, to be equal (and equally bad). But studies have shown that certain mixes work better than others, and that certain modes are conceptually and practically closer to each other than to others: Web + CATI ≠ Web + CAPI ≠ CATI + CAPI etc.

    Finally, the author fails to acknowledge the potential biases and errors inherent in relying on single-mode methodology, including coverage error and non-response bias. What’s worse: excluding from the sample frame everyone in a target population who only has a mobile and is thus unreachable by fixed line, thereby skewing the sample; or including those unreachable subsample respondents using web or face-to-face? That is an empirical question, one that will hinge in part on the content and nature of the study as well as the methodology used to administer it.

    These considerations and more regarding MM have been covered in academic as well as non-academic studies, and the author would do well to look into them before producing blunt, blanket, black and white assertions that summarily fail to incorporate the unavoidable subtleties of this issue.

  3. Paul Stirr says:

    March 14th, 2014 at 5:23 am

    The author does not speak at all about the sources of sample frames and treats all online surveys as alike. If these are all opt-in Web panels (i.e., non-probability) then it is not at all surprising they might produce different results. This does not sound like scientific research, and the author appears to display an ignorance of the basics of survey sampling.

  4. Jeffrey Henning says:

    March 14th, 2014 at 10:08 am

    Plenty of successful mixed-mode studies have been implemented by our friends in the public opinion research industry. Such studies have been carefully designed from the start to be mixed mode. Heck, the 2010 Census supported multiple modes. I’d encourage the author to join the American Association of Public Opinion Research or attend its annual conference to learn more.

  5. Rebecca Colwell Quarles says:

    March 14th, 2014 at 12:25 pm

    Mixed-mode is sometimes the only methodology that will produce a sample that includes enough respondents for meaningful analysis and/or weighting across the full range of demographic and ethnic groups in the population. For example, it would be futile to try to get enough young Hispanic males using only landline telephones. Many of the problems with mixed-mode will wash out if you examine the demographic/ethnic differences in response and weight to population parameters for the groups where significant differences exist.
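
    To make the weighting concrete, here is a minimal sketch of the kind of cell weighting I mean; the cells, counts, and population shares are invented for illustration:

        # Post-stratification sketch: weight each demographic cell so the
        # sample matches known population shares. All numbers are invented.
        population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
        sample_counts = {"18-34": 60, "35-54": 240, "55+": 200}  # young adults underrepresented

        n = sum(sample_counts.values())
        weights = {
            cell: population_share[cell] / (count / n)
            for cell, count in sample_counts.items()
        }
        print(weights)  # {'18-34': 2.5, '35-54': 0.833..., '55+': 0.75}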

  6. Paul Richard McCullough says:

    March 14th, 2014 at 6:25 pm

    Thank you to all the commenters to this blog. There are many valid points raised, particularly by Ariel Spigelman.

    A few general comments. I purposely wrote the blog in a black and white, mixed mode sucks, kind of way, to stimulate discussion. It worked! Now to dig myself out of the hole I am buried in.

    Ariel points out that I claim mixed mode never works. Absolutes are always wrong, as they say, so I must be, too. I agree. However, in the context in which I work, which is not public opinion polling, but commercial marketing research, mixed mode is generally used not for good theoretical reasons but for expediency. To think otherwise is naïve.

    In my experience, mixed mode usually occurs when a certain sub-population is difficult to reach with the primary data collection mode. Then, when differences occur, one cannot know if the differences are due to differences in populations or differences due to mode. In the blog I mention a study I did where the differences were so dramatic we had to report the data separately. It was a B2B study. The client wanted telephone surveys with key accounts and online with all others. The key accounts were dramatically more positive than other accounts. Was this due to the fact that the client company treated key accounts differently and better or was this due to the fact that customer satisfaction data were being collected by a human instead of anonymously? For what it’s worth, based on the results and industry knowledge, the client felt that the latter explanation was more likely.

    Of course, as pointed out, some modes may work better together than others. But in the marketing research practitioner's world, what usually occurs is that we try to complete as many interviews as possible online, because it's cheapest and fastest. Then we struggle to complete some quotas and we jump instinctively to mixed mode. My empirical evidence suggests this is extremely dangerous. Yet it happens all the time.

    I strongly disagree with the notion that difficulty in sampling alone justifies a mixed mode methodology. The fact that a study is not feasible when done properly is no justification for doing it badly. Erroneous information is not better than no information. And if mode has significantly tainted the results, demographic weighting will not wash away the taint.

    I appreciate the comments and I’m glad to see some people treat mixed mode data collection seriously. I don’t think you are the norm. Mixed mode data collection in marketing research is commonplace and most of the time there is little consideration for the dangers that exist. And those dangers are substantial.

  7. Benjamin Messer says:

    March 15th, 2014 at 12:30 pm

    “Mixed Modal is tempting, especially when you’re faced with a difficult field job. You’ve already sold the project in, perhaps, and then found out you can’t get all the completes you need online. If you switch completely to phone or face-to-face, your budget will blow sky high. Maybe you can supplement your cheaper online data with a minimum number of more expensive phone interviews. This is done all the time, right? Right. Besides, nobody will even know. WRONG! Your data will know. And your data will suck.

    The problem is we do this all the time…”

    Well, you should stop doing this all the time. I do work in marketing as well, and we have used mixed-mode data collection very successfully, but we plan and design the methods before going into the field. No survey research I'm aware of would suggest simply adding a few phone surveys onto a web survey (or vice versa) to meet quotas or enhance representativeness. What would make you think this is possible or reasonable to do? Mixing modes requires careful and thoughtful planning in advance of collecting data; it is not, and should not be considered, a panacea for the data collection problems of a single mode, as you seem to suggest is the case in your field. P.S. I'm not selling anything.

  8. Saul Dobney says:

    March 17th, 2014 at 8:20 am

    With scale-based questions, I would expect different modes to give different answers, because this is an inherent problem with scale-based questioning. Even keeping the same mode, but with a different agency, can affect scale-based results. The answer is to be wary about using or relying on scales in research; find other, more precise methods of measurement.

    Secondly, different modes giving different answers raises the question of whether a single-mode survey is reliable if it can't be replicated in a different mode. If the answers are tied to the mode of survey, that doesn't help in decisions about which results are reliable.

    I'd also expect key accounts to be completely different from non-key accounts, since they have a much deeper relationship. Key account interviews are also wasted if they are aimed at producing satisfaction scores rather than focused on developing account plans.

  9. Brad Rucker says:

    March 17th, 2014 at 10:08 pm

    Saul gets to the point when he says different modes give different results, which leads one to ask: which mode is best for which audience? With membership surveys, the core of what I do, it appears that the more active and engaged members are in the client's organization, the more likely they are to respond to an online survey. As you can imagine, results from them are different from those from the "average" member. I rely on phone surveys because I believe they provide a truer representation of the membership as a whole... but only after persistent dialing.
    There is increasing pressure to move online. Even worse, many organizations rely entirely on online surveys, believing that even though the response rate is low, they can accumulate enough completes online to represent their membership.
    This is a mess... thanks, Paul, for raising the issue.

  10. Melanie says:

    March 19th, 2014 at 4:22 pm

    I like where the thread has taken this conversation. The core premise behind mixed mode has always been to reduce the coverage error of any given methodology. I remember, in the phone world, adding in-person mall recruits to get coverage among minority groups. Or adding mail to phone to offset participation issues. Or adding mail to in-person to increase coverage. In every methodology there have been issues around getting the broadest coverage in the most efficient way. I've seen strong work in the service, product, academic, and government sectors that has enabled strong decision making. I think the key to the posed question (which mode is right?) is the client's objectives, budget, and where the audience being studied lives! And beyond that, to employ only the best practices when combining methodologies, and to understand the differences each method brings. At the end of the day it's about Fit for Purpose. And that's a decision responsible researchers make in concert with well-informed buyers.

  11. Sandeep Das says:

    May 26th, 2014 at 4:03 pm

    My perspective on mixed-modal would be a balanced one. I agree that practitioners shouldn't resort to mixed-modal only to ensure sample coverage. But, having said that, mixed-modal definitely has its advantages for hard-to-reach sample groups, etc. Another aspect we definitely need to keep in mind, given the increasingly global nature of research work, is the accessibility and maturity of methodologies in different regions of the world. If I have an engagement that requires me to conduct research in the EU and multiple South Asian countries, I would invariably need to resort to mixed-modal due to the nascent stage of development of online panels in many South Asian countries.

    Another important factor is the legacy of a methodology in a country. If, for long periods of time, online has been the primary mode of conducting research in, let's say, Germany, then the reliability of findings from Germany will always be higher when we have online data. The same will be true for Indonesia, but for the face-to-face methodology, because that has been the legacy technique.

  12. Paul Richard McCullough says:

    May 28th, 2014 at 5:10 pm

    I understand the practical need for mixed mode. But need is not justification. The problem, as I see it, is this: there are two populations, one surveyed with one method, the other surveyed with a second method. I think we all agree that different data collection modes can potentially generate very different responses. We all certainly agree that different populations can generate different responses. So we simply have confounded variables. Mode and population are perfectly correlated. It is analytically impossible to know if the different responses are due to different opinions in the two populations or are due to different data collection modes. This commonly occurs in commercial marketing research, is commonly ignored and is bad research.
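
    A toy simulation makes the confound concrete. The effect sizes below are invented, but they show why no analysis of the collected data alone can split the observed gap into its two causes:

        # Key accounts surveyed by phone, everyone else online: mode and
        # population are perfectly correlated. All numbers are invented.
        import numpy as np

        rng = np.random.default_rng(1)
        population_effect = 0.8  # key accounts really are happier
        mode_effect = 0.7        # a live interviewer inflates scores

        key_phone = rng.normal(6.0 + population_effect + mode_effect, 1.5, 80)
        other_online = rng.normal(6.0, 1.5, 500)

        gap = key_phone.mean() - other_online.mean()
        print(f"observed gap: {gap:.2f}")  # roughly 1.5 = 0.8 + 0.7
        # Any split of that gap (0.0 + 1.5, 0.8 + 0.7, 1.5 + 0.0, ...) fits
        # the data equally well; the two effects are not identifiable.

    That is why, in the study I described in the blog, our only defensible option was to report the two samples as separate studies.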
