A Debate Between Survey Length and Data Quality
By Zontziry Johnson
Here’s the scenario for you: a new panel has been identified that has high-quality, pre-qualified respondents for a survey you have fielded a few times in the past using other panels. The original survey hovers near the 30-minute mark (taken online), and, because it has been fielded a few times already, a number of stakeholders use the data to inform various decisions and efforts and are interested in keeping trending intact. The idea is that the survey should be shortened to about a fifth of its original size using this new panel, so it’s up to you to trim it. As a side note, this particular panel is made up of heavy mobile users: previous surveys done with this panel have shown that the majority take them via mobile devices.
The first issue is that the panelist information being used to validate responses can only be matched to results after the survey is complete, meaning there is a risk of ending up with a smaller sample size than desired once the data has been cleaned.
The second issue is that the panelist information being used doesn’t contain all of the information needed, so at least some of the questions used to determine how to pipe respondents to the rest of the survey (to ask about product usage) need to remain in the survey.
The third issue is that the survey is being used for multiple objectives. While the objectives have some overlap, it’s not enough overlap that we can measure both with the same set of questions. Instead, we need to find a way to add the minimum number of questions possible to the core set in order to achieve both objectives with this single study.
The give and take
Ultimately, this combination of issues makes it very difficult to trim a survey to the desired length. At first pass, we trimmed only ten minutes from the survey. Between the fact that most of the questions were matrix questions and the number of “must-haves” being included, it grew more and more difficult to see where questions could be cut. In the end, it took multiple discussions about trimming the number of objectives behind the study, and all interested parties sitting together to ask, “Do we actually need this in the study?” for each question in the survey, to get us down to a study roughly one-third the size of the original.
The debate between length and data accuracy
This exercise caused me to reflect quite a bit on the debate between the length of the survey and the data accuracy that can be gained from a super-short survey.
For this particular scenario, the desired end state was a survey no longer than five minutes. I get it: our attention spans are shrinking dramatically (as seen in Canada, per a Microsoft study), response rates are getting more and more difficult to achieve, and so the shorter the survey, the higher the likelihood of achieving the desired response rates. But I’m not entirely convinced that five-minute studies can meet the same needs that longer studies do.
Please note: I am not advocating for a 30-minute online survey.
Instead, I’m calling for a need to examine this push for ever-shorter surveys from a few different angles. First, how rigorous do you need to be about respondent and data quality (before applying data-cleaning processes)? Depending on the panel being used, you may need more respondent qualification questions up front. For example, say you want opinions from doctors: if you field the survey with a medical panel full of professionals involved in the medical industry, you will need something more than a quick “Are you a doctor?” to be certain you’re getting responses from the group you need. And while high-quality panels can help (i.e., a panel that consists only of doctors to begin with), some surveys may still need to home in on the desired audience (are they family practice physicians or podiatrists?).
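To make the idea of layered qualification concrete, here’s a minimal sketch of what a screener like this might look like in code. The field names, specialties, and rules are hypothetical assumptions for illustration, not anything prescribed in this article:

```python
# Hypothetical screener for a physician survey. Field names and
# qualification rules are illustrative assumptions only.

TARGET_SPECIALTIES = {"family practice", "internal medicine"}

def qualifies(respondent: dict) -> bool:
    """Return True if the respondent passes the screener.

    A single "Are you a doctor?" check is not enough on a broad
    medical panel: it may include nurses, administrators, or doctors
    outside the target specialty, so we layer several checks.
    """
    if respondent.get("role") != "physician":
        return False
    if respondent.get("specialty", "").lower() not in TARGET_SPECIALTIES:
        return False
    # Also screen out respondents no longer in active practice.
    return bool(respondent.get("currently_practicing", False))
```

The point of the sketch is that each added check trades survey length for confidence that the responses come from the group you actually need.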
Second, what are the actual data needs? Sure, there are many, many questions that would be “great to know.” But when you need to make the most of the short time you have with your respondents, you have to stick with what you need to know and discard the rest for another study. The reality, though, is that for surveys that are routinely fielded, that list of need-to-know items inevitably gets longer and longer as the group of stakeholders gets wider and wider.
Third, while response rates might be fantastic with five-minute studies, when dealing with studies that need an extra level of rigor around respondent qualification, I think expanding to ten minutes to increase confidence in the data and reduce the amount of data that has to be discarded is just fine. Ultimately, it can come down to this: do we create a five-minute study without a rigorous respondent qualification process that leaves only 200 of 500 responses usable, or a ten-minute study with a rigorous qualification process that leaves 200 of 200 responses usable?
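The trade-off above can be stated as simple arithmetic. The response counts come from the scenario just described; the calculation itself is just a back-of-the-envelope illustration:

```python
# Back-of-the-envelope comparison of the two designs described above.
# The completes/usable counts come from the scenario; the framing as a
# "yield" is illustrative.

def usable_yield(completes: int, usable: int) -> float:
    """Fraction of completed responses that survive data cleaning."""
    return usable / completes

short_survey = usable_yield(completes=500, usable=200)  # 5-min, loose screening
long_survey = usable_yield(completes=200, usable=200)   # 10-min, rigorous screening

print(f"Five-minute survey: {short_survey:.0%} of completes usable")
print(f"Ten-minute survey:  {long_survey:.0%} of completes usable")
```

Both designs yield the same 200 usable responses, but the ten-minute design wastes none of the respondents’ (or the fielding budget’s) effort on data that gets thrown away.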
Why mobile-first is still better
The scenario I’ve described involves trimming an existing study rather than starting from scratch. I know that’s an approach that will be taken often, especially for existing studies with existing stakeholders. But I still think that, whenever possible, it’s better to take a mobile-first approach rather than an adjust-it-for-mobile approach. That means starting with the information already available about panelists that can pre-qualify them for the study, writing questions that are mobile-friendly from the beginning (rather than trying to figure out how to make that long, wide matrix into a more mobile-friendly question), and naturally keeping the study focused on a single objective, knowing that drop-out rates climb quickly as the length of the survey increases. So, the next time you’re thinking of fitting an existing survey to a mobile experience, try starting fresh with a mobile-first approach.