I was recently tasked with cutting up a survey to fit a mobile audience. I’ve written before about the need to design surveys for mobile, as opposed to writing mobile surveys. This experience has sealed the deal for starting survey development with mobile in mind first.
Here’s a scenario for you: the original survey hovers near the 30-minute mark (taken online) and has been fielded a few times already, meaning a number of stakeholders use its data to inform various decisions and efforts. Some of the information could be gathered by non-survey means, which would allow the survey to be shortened dramatically. And by dramatically, we’re talking about cutting the survey to one-fifth or one-sixth of its current size.
The first issue is that the survey’s target audience requires a rigorous qualification process in order to be certain the data are as accurate as possible. After all, the data are informing some company-wide strategies.
The second issue is that non-survey qualification information can only go so far toward ensuring the data are accurate. On top of that, the information can only be matched to respondents after the survey results are back, meaning there is a risk of ending up with a smaller sample size than desired once the data have been cleaned.
The third issue is that the secondary information set being used doesn’t contain all of the information needed. In this case, the data set involves product usage; unfortunately, the products measured don’t include some of the products asked about in the original study.
The fourth issue is that the set of questions used to qualify respondents is substantial and can take up to half the time a respondent spends on the survey!
The give and take
Ultimately, retrofitting a survey in this case involves a lot of compromise: easing up on the rigor behind the qualification process, taking a risk with the data-cleaning process, and removing nearly all of the “nice-to-know” items. It can feel like the resulting trimmed version of the study could ultimately be discarded for any number of reasons. And looking at this from a “can we switch methods” perspective for the future, there is also the risk that the information won’t be as accurate.
The debate between length and data accuracy
For this particular scenario, the desired end-state for the survey is that the survey is no longer than five minutes. I get it: our attention spans are shrinking dramatically (such as seen in Canada, per a Microsoft study), response rates are getting more and more difficult to achieve, and so the shorter the survey, the higher the likelihood of achieving the desired response rates. But I’m not entirely convinced that 5-minute studies can meet the same needs that longer studies achieve.
Please note: I am not advocating for a 30-minute online survey.
Instead, I’m calling for a need to examine this from a few different angles. First, what are the business needs? In this case, rigor around the data gathered is crucial. The data need to come from qualified respondents, meaning this study really looks for a certain level of knowledge in a respondent before they can continue with the rest of the study. It’s like saying you want opinions from doctors, but you’re going to field the survey with a medical panel full of professionals involved in the medical industry without being certain that the respondents are actually doctors. Somehow, you’re going to need to scrub that data, because simply asking, “Are you a doctor?” will not assure you that your respondents really are all doctors.
Second, what are the actual data needs? Sure, there are many, many questions that would “be great to know.” But when faced with making the most of the short time you have from your respondents, you need to stick with what you need to know and save the rest for another study. The problem, though, is that for surveys that are routinely fielded, the list of items we “need to know” inevitably grows longer as the group of stakeholders gets wider. And any market research professional is going to be excited to learn about another group of stakeholders that finds the data valuable.
Third, while response rates might be fantastic with five-minute studies, when dealing with studies that need an extra level of rigor around the respondent qualification process, I think expanding that to 10 minutes to increase the confidence in the data and reduce the amount of data that has to be discarded is fine to do. Ultimately, it can come down to this: do we create a five-minute study without a rigorous respondent qualification process that results in only 200 of 500 responses being usable, or do we create a ten-minute study with a rigorous respondent qualification process that results in 200 of 200 responses being usable?
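That last question can be made concrete with a little arithmetic. The sketch below is purely illustrative; the qualification pass rates and per-survey minutes are assumptions for the sake of the comparison, not figures from either study. It shows how the ten-minute version can yield the same usable sample while demanding less total respondent time:

```python
# Hypothetical comparison of the two fielding options described above.
# All numbers (completes, usable rates, minutes) are illustrative assumptions.

def usable_responses(completes: int, usable_rate: float) -> int:
    """Usable sample remaining after data cleaning."""
    return int(completes * usable_rate)

def respondent_minutes(completes: int, minutes_per_survey: int) -> int:
    """Total respondent time spent across all completes."""
    return completes * minutes_per_survey

# Option A: 5-minute survey, loose qualification -> 40% survives cleaning
a_usable = usable_responses(500, 0.40)    # 200 usable
a_minutes = respondent_minutes(500, 5)    # 2500 respondent-minutes

# Option B: 10-minute survey, rigorous qualification -> everything survives
b_usable = usable_responses(200, 1.00)    # 200 usable
b_minutes = respondent_minutes(200, 10)   # 2000 respondent-minutes

print(a_usable, a_minutes)  # 200 2500
print(b_usable, b_minutes)  # 200 2000
```

Under these toy numbers, the rigorous ten-minute study reaches the same 200 usable responses with 20% less total respondent time, and without discarding 300 people’s effort in cleaning.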
Why mobile-first is still better
The scenario I’ve described involves trimming an existing study rather than starting from scratch. I know that’s the path that will often be taken, especially for existing studies with existing stakeholders. But I still think that, whenever possible, it’s time to start with the mobile-first approach instead of the adjust-it-for-mobile approach. This doesn’t just mean using questions that can be answered best on a mobile device; it also means using available secondary sources of information to assist in the respondent qualification process. It means really focusing on what is needed versus what is a “nice to have” in the survey. People seem more willing to let go of the “nice to have” items when you’re building a study that is truly mobile-friendly rather than one that is merely computer-friendly. And last, it means keeping a keen eye on the length of the survey and being realistic about the fact that as the survey gets longer, response rates are likely to drop.
What have been your experiences with either trimming a survey to fit a new time limit or using a mobile-first approach? Have you been able to convince a client or a stakeholder to start over with a study that has been in the field for a while? How did you convince them to start fresh?
— zontziry (@zontziry) September 15, 2015