3 ways we fail our audiences and how to turn the ship around

Before I get going, I want to start by defining “audiences.” In this context, the audiences in question are those who we are asking to participate in our market research studies. Typically, the failure is happening with quantitative studies, not qualitative studies (at least, in my experience, this is the case). In quantitative surveys, we find bloated surveys, biz-speak language, long grids at the end of long surveys that we expect our audience to be alert enough to answer thoughtfully, and more.

In my last post, I cited two blog posts from other market researchers who posed a question about how we’re treating our audiences and opined that the reason this is happening is because we’ve forgotten that our audiences are made up of humans.

In this post, I want to explore some of the ways I’ve seen myself failing my own audiences.

Asking too many dang questions

This is the easiest and possibly most common way that I have found myself failing my audiences. I try to keep the survey short, sweet, and to the point, but then need to add more questions because other stakeholders need to ask them of the same audience I’m intending to talk to, which means the survey gets longer.

Sometimes, because of the stakeholder asking for the question, I feel like I can’t say no. For example, if my manager tells me I need to include a particular question in the survey, I’m 99% likely to say, “Okay,” and sigh about how long the survey is getting.

I don’t have a clear answer for this one, other than pushing back when we can and fighting constantly to keep the survey fit and trim. Let’s face it, there will be times when we can’t push back easily or at all. If your VP tells you to include a question because an executive needs the answer and your survey is the next available tool for asking it, well…there’s only so much pushing back you can do before it becomes, as one friend puts it, career-limiting. But there are other times when we can push back, ask for the business objective, and offer alternatives for getting the answer. We might still find that the answer is, “Just gonna have to do it,” but the more we stand up for our research participants, the better off our audiences will be.

Not using the data we already collect

In one of the roles I’ve held, I remember griping about our study’s length being ridiculous. I wanted something to take back to my client to make the case for shortening the study. Suddenly, I remembered: panels always seem to include the “would you take this survey again” question at the end of every study. However, I’d never seen the resulting data. When I did ask for it, I was met with a surprised, “Oh! Let me see if we can still get that from the panel providers.”

Why are we collecting information we’re not going to use? Why are panel providers collecting data on what types of surveys people enjoy taking, or at least are likely to take again, but not sharing it with those of us conducting those surveys? As tough as it might be to see, I wonder if we would change the way we create our surveys if we got a report card at the end of each study: what percentage of those who took the study said they’d take it again (or, conversely, would NEVER take anything like it again), let alone what percentage bothered to answer those last couple of panel-provided questions at all. Panel providers could proactively deliver this data along with the results from the study itself, but we should be equally proactive in seeking it out.

Now please wait a moment while I go ask my market research partner for this data from the last study I conducted…

Not standing up for our research participants

I alluded earlier to the idea of being our research audiences’ advocates, and I stand by that. I have a colleague who constantly uses the time cap on her study as her rationale for refusing more questions than necessary. People know they can expect to hear, “Sure, but what question should I drop to stay within my time frame?” And you know what? That’s fine. She’s advocating for her research participants. So far, I’m finding that when I hammer on keeping a survey to a certain length, I end up with a survey that’s five minutes over that limit. When I hammer on a survey being five minutes, I end up with a ten-minute survey.

What can I do, then? Drop my published and advocated time limit by five minutes!

I have one study whose screener questions, combined, take up seven minutes. It’s a long screener, and it’s long because I genuinely need questions that thoroughly screen my research participants. Since this study uses panelists, though, a colleague suggested that I go back to the panel providers and tell them who passed the screeners, so that they can flag those people as already known to meet certain requirements. Over time, that means I can start shrinking my screener because enough people across the various panel providers are already effectively pre-screened. Will that take time? Yes. But ultimately, am I helping my audience? Absolutely.

Ignoring the fact that pretty much everyone’s on a mobile phone

Mobile devices are ubiquitous. So why am I still creating surveys meant to be taken on a desktop machine? If I’m more likely to take any given survey on my smartphone or tablet, why don’t I apply that perspective to my research audience? And if I do try to make my surveys device-agnostic, am I testing to be sure the survey actually works on any device? Here, I can certainly do better. Much better.

With all these examples in mind, I’m curious: what have you done to help your research participant audience?
