In the past year, two main items have come forward that have caused me, at least, to step back and reflect a bit on the market research industry as a whole.
The first item: an article about reproducing psychology studies and how those replicated studies produced different results than the originals.
The second item: an article about a bug in fMRI software that calls into question some 40,000 papers – and the results those papers espoused.
Are we in market research taking note?
Market research is, in my opinion, an industry where our research can be very difficult to replicate. After all, we are often researching the likelihood of potential customers choosing one thing over another.
However, much of market research relies on psychological and sociological studies to inform how we will be conducting our own research, and the types of biases and assumptions we should be making as we interpret the research results.
We are also facing calls for faster studies and faster results, all with the goal of producing deeper insights.
That means we don’t have time to wait for studies to be replicated and results to be verified when we are adopting new market research methodologies. This is particularly troubling when we adopt methods such as neuroscience to get some of those deeper, better insights, only to find out 15 years later that some of the studies we based our methods on reached the wrong conclusions.
The first thing that comes to mind is that we need to acknowledge the fact that science is constantly evolving. If that weren’t the case, we’d still be stuck thinking the earth was flat, or that the earth was the center of the universe around which the sun orbited. The beauty of science is that it is a field of constant exploration. What looks true today might end up being disproved tomorrow as we learn a bit more, dig a bit deeper, and keep testing various theories.
To me, this means that when it comes to market research, we need to be sure that we are acknowledging the various caveats around our research. This could be everything from the way a study was fielded and the potential biases that brings, to the sample makeup, to events around the time the study was fielded that may have impacted the way people were thinking about the topics being studied.
This also makes me question the whole idea of trending. Trending seems to inherently assume behaviors and audiences don’t change, when, it turns out, people’s very personalities change based on a variety of circumstances (listen to the below podcast for a fascinating exploration into personalities). When we present or interpret results from tracking studies, then, should we be expanding beyond what the study showed and looking at a bigger picture that includes qualitative and quantitative information? I think the answer is yes. It would provide a more robust story, and it would show that we’re doing our due diligence, taking into account as many of the influences on how respondents answered the study as we can.
I also think we all could benefit from slowing down a bit, in both the scientific research communities and the market research community. We need to take time to check the data, look at other sources for factors that might be impacting audience behaviors, and stay open to constant discovery that could change the baseline premises on which we’ve built our conclusions.
I’d love to hear what you think about the potential impact of the neuroscience research issue and the study replication issues on market research. Leave a comment or reach me on Twitter @zontziry!