
I have built my castle on a statistically significant foundation – why is it crumbling?

Updated: Feb 27, 2023

You could say qualitative and quantitative marketing research are like the artist and the engineer.

On one hand, qualitative research deals with the softer, human side and aims to give marketers and decision makers a broad perspective on how people relate to organizations and to the world.

On the other hand, the purpose of quantitative marketing research is to provide hard data – cold, hard numbers on which you can build a solid foundation for business decisions.


Of course, this is a simplification:

· At one end of the spectrum are data like brand image / brand perception results, which try to quantify rather fuzzy feelings and perceptions. You always have to ask yourself carefully what the results really mean, in a somewhat qualitative way.

· At the other end of the spectrum are pure cold hard data like retail audit and market share figures, which are taken at face value – of course you can argue (as often happens) about the margin of error, the quality of data collection and so on, but unless you completely reject the results, their meaning is clear and simple. You start to look at other things related to these data, like why you sold as much (or as little) as you did, who's buying you and why, who's not buying you and why not, but you never have to wonder what "I've sold 5,000 bottles" or "I have a 35% market share in bottled red wine" really means.


And the vast majority of cases sit in between, where pure cold hard data and quantitative data about people's feelings, attitudes and behavior mix to various degrees, and where things can be pretty clear and mind-boggling at the same time.


Imagine a clothing store chain running a survey to compare itself with a direct competitor (to keep it simple, let's assume both have identical store traffic, identical visitor profiles and commercials with identical impact).


The results are in and, lo and behold:

- Product range diversity perception – Our company is ahead by a large, statistically significant gap (see the sketch after this list for how such a gap is typically tested)

- Product quality perception – Our company is at the same level as the competition

- Price satisfaction – Our company is ahead, not by much, but the difference is still statistically significant

- Store design & layout satisfaction – The competitor is a bit ahead, but not significantly so

- In-store customer service satisfaction – Again, the competitor is a bit ahead, but not significantly so

- Finally, sales – The competitor is clearly ahead
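
For the curious, here is a minimal sketch of how such a "statistically significant gap" is typically established – a two-proportion z-test in Python. All sample sizes and percentages below are hypothetical, purely to illustrate the mechanics:

import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 1 - math.erf(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical wave: n = 500 respondents per brand; 62% call our product
# range "diverse" vs. 48% for the competitor.
z, p = two_proportion_z_test(310, 500, 240, 500)
print(f"z = {z:.2f}, p = {p:.1e}")  # p well below 0.05 -> significant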


After a while, they repeat the survey – same results. And then again, a third wave – same results.

How could this be? You are leading in two areas by significant margins, one of them large, while your competitor seems a bit ahead in two other areas (a lead which will most likely be doubted, since it is not 'statistically significant') – surely you can offset their puny advantage, if any, with your strong performance in the other two areas.


Back to our example, the explanation is:

- The competitor's product range diversity is good enough, so the big advantage our company has in this area doesn't count for much

- Price satisfaction: although the competitor is 'significantly' behind, its prices are considered acceptable

- Store design & layout + in-store customer service satisfaction – These two factors complement each other, and the small but consistent advantages the competitor has in each area amplify each other, leading to a perceptibly better overall store experience, which helps close sales more easily.


Simply put, a statistically significant difference tells you for sure that something is going on in that area, but it tells you nothing about the importance of that area or the impact it has on other areas.
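
One way to make this concrete is to attach importance weights to each area – in practice such weights might come from a key-driver analysis; here they are simply assumed. All scores and weights below are invented purely to illustrate the arithmetic of how two small, non-significant deficits in high-weight areas can outweigh a large, significant advantage in a low-weight one:

# Hypothetical mean satisfaction scores (0-10) and assumed importance weights.
areas = {
    #                  (us,  them, weight)
    "range diversity": (8.1, 7.2, 0.10),  # large, significant gap, low weight
    "quality":         (7.5, 7.5, 0.20),
    "price":           (7.8, 7.5, 0.10),  # small but significant gap
    "store design":    (7.0, 7.3, 0.30),  # small, non-significant gaps...
    "service":         (7.0, 7.3, 0.30),  # ...in the areas that matter most
}

us = sum(score_us * w for score_us, _, w in areas.values())
them = sum(score_them * w for _, score_them, w in areas.values())
print(f"weighted experience score: us = {us:.2f}, them = {them:.2f}")
# -> us = 7.29, them = 7.35: the competitor wins despite losing every
#    'statistically significant' comparison.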


People looking at the data should always keep in mind that the data are about living, breathing people, not nuts and bolts, and interpretation should rely on multiple perspectives:

· One based on researchers' experience in dealing with research results;

· One based on marketers' knowledge about the market they operate in and the brand approach they want to build and project to customers;

· One based on operational managers' knowledge about the market they operate in (partially overlapping with, partially different from, and ultimately complementing marketers' own knowledge) and about operational capabilities and limitations;

· One based on top decision makers' knowledge about the market they operate in (again, overlapping with and supplementing others' knowledge) and the strategic rationale (while others focus on their own department's needs and ways of working, top management must always keep the whole picture in mind);

· One based on front-line workers' experience, which could bring to the table some surprisingly relevant pieces of the puzzle; unfortunately, these "foot soldiers" are often overlooked and their input deemed unworthy of attention.


Ideally, everybody – researchers, marketers and managers of all levels – should be involved in looking at and understanding the data, in a dynamic way, with as many feedback loops as necessary between any of the parties involved. In such an ideal world, the stakeholders from our company would be willing to accept from the start that the results are telling them more than it initially seems, and they would start in earnest to brainstorm and dig for the deeper meaning.


The main barriers to this ideal approach:

· Lack of time (it’s difficult to bring all of them together in the same room for as much time as necessary, especially top management);

· A high degree of mental inflexibility and risk avoidance ("the way we've always done it has worked so far, why go on a wild goose chase?") – this approach requires a high degree of openness to any idea, no matter how unbelievable it seems at first;

· Reluctance to share the organization's internal information (sometimes marketers and middle & top managers are reluctant to share certain information with each other or with researchers – usually because it is considered too sensitive or not relevant, or simply because they don't feel like it) – information which could shed a different light on things and help make sense of the research results.


Thus, in real life the process is usually more linear: researchers create the report and present it to marketers and/or operational managers; after a bit of feedback and a quick revision, the report is presented to top management, with or without the researchers' involvement. As a result, our company will probably acknowledge something's going on only after three or more waves with similar results – either when a clear pattern becomes visible even though the individual differences are never statistically significant, or when they finally get their beloved statistically significant results by looking at combined data from multiple waves.
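
To see why pooling waves can 'finally' deliver significance, here is a small hypothetical illustration using the same two-proportion z-test sketched earlier – all counts are invented:

import math

def z_test_p_value(x1, n1, x2, n2):
    # Two-sided p-value of a two-proportion z-test (see the earlier sketch).
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 1 - math.erf(abs(z) / math.sqrt(2))

# Hypothetical: in each of 3 waves (n = 200 per brand), 54% of the competitor's
# customers are satisfied with service vs. 47% of ours.
print(f"single wave:    p = {z_test_p_value(108, 200, 94, 200):.3f}")   # ~0.16 - not significant
print(f"3 waves pooled: p = {z_test_p_value(324, 600, 282, 600):.3f}")  # ~0.015 - significant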


Thus, a lot of time which could have been used to fix things and catch up with the competitor will instead be wasted arguing and waiting for the Holy Statistical Significance.


And that's a pity, because some pieces of the puzzle are thus destined to fall into hidden cracks. Those same pieces, brought to the table at the right time, with the right people, could make or break a project – turning a mediocre or good one into a very good or even great one.


