How can we avoid data disruptions when making changes to brand trackers?
Steph Clapham
Now and then, our clients require changes to their brand tracking setups. This usually means adding or removing brands from their trackers when the competitive landscape changes, or amending questions such as brand image statements to better fit their brand positioning and marketing efforts. Whenever we implemented these changes in the past, we saw that the data for the remaining brands, or list items, could shift, sometimes only slightly and other times quite significantly.
We often wondered where these changes came from, and suspected that they were due to the format of the question. As with most brand trackers, we asked our brand questions in a list format, in which respondents, for example, select the brands they are aware of from a longer list. Lists are highly effective at saving space within the survey, meaning you can ask about lots of brands on one screen. However, because changing the composition of these lists often resulted in changes to the data, we wanted to find out why this was happening.
To test the impact of changing list composition more systematically, we set up several surveys in the UK, asking respondents about their awareness of household cleaning brands when paired with other larger or smaller brands. Here’s what we found.
In our first set of tests, we paired Mr Muscle, a very widely known detergent brand in the UK, firstly with 4 other large brand names (Persil, Fairy, Domestos and Flash), and then with small brands (Splosh, Seventh Generation, Cheeky Panda and Miniml):
The results showed clear differences in the awareness levels of Mr Muscle between the two list compositions. When Mr Muscle was paired with other large brands like Persil, 73% reported being aware of the brand. However, when Mr Muscle was paired with small brands, like Seventh Generation, only 64% of respondents reported knowing the brand - a drop of 9 percentage points (which in the UK represents 4.6 million adults).
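The population figures in this post follow from simple arithmetic on the awareness shares. Here is a minimal sketch in Python, assuming an adult population of roughly 51 million in the UK (this population figure is our assumption for illustration, not a number stated in the post):

```python
# Convert an awareness drop into an estimated number of adults.
# Assumption (not stated in the post): roughly 51 million UK adults.
UK_ADULTS = 51_000_000

aware_among_large_brands = 0.73  # Mr Muscle listed with other large brands
aware_among_small_brands = 0.64  # Mr Muscle listed with small brands

drop_pp = aware_among_large_brands - aware_among_small_brands  # 9 points
adults_affected = drop_pp * UK_ADULTS

print(f"{drop_pp * 100:.0f} percentage points ≈ "
      f"{adults_affected / 1e6:.1f} million adults")
# prints: 9 percentage points ≈ 4.6 million adults
```

The same calculation, with the relevant shares and population swapped in, produces the Splosh and IKEA estimates below.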
To also understand the impact on small brands, we looked at Splosh, a niche brand not very well known in the UK. Firstly, we paired it with other small brands (Seventh Generation, Smol, Cheeky Panda and Miniml), and then with large brands (Persil, Flash, Domestos and Mr Muscle):
When Splosh was paired with other small brands like Seventh Generation, 4% reported being aware of the brand. However, when Splosh was paired with well-known brands, like Persil, only 3% of respondents reported knowing the brand - a drop of 1 percentage point, which in the UK represents around 400,000 adults.
We wanted to explore this list composition bias further, to see if the same dynamics take place with other list-based items aside from brands.
To test this we looked at brand image statements for the IKEA brand in Germany, firstly pairing the image statement “good value for money” with 4 other functional brand image statements (“sustainable”, “innovative design”, “high quality” and “easy to use”), then pairing it with 4 emotional image statements (“warm”, “likeable”, “comforting” and “trendy”):
When people are asked whether they see IKEA as providing “good value for money”, 36% agree when the other listed options include functional statements like “high quality” or “sustainable”. However, when we ask them whether they see IKEA as “good value for money” alongside other more emotional statements, like “warm” and “comforting”, only 29% agree - a drop of 7 percentage points (which in Germany represents 4.8 million adults).
The awareness levels of the brands we tested showed a clear halo effect from the composition of the list. When a brand is paired with other similar-sized brands, the reported awareness levels are higher than if the brand is placed among a list of brands either smaller or larger than it. This could make comprehensive industry benchmarking a challenge, given that most categories consist of a wide array of brand sizes. Large brands could find it difficult to track newer or smaller competitors, while small brands might miss vital insights on their larger competitors. Similarly, brand image statement levels were also affected by the composition of the list, making it difficult for brands to keep their questions closely tied to their most recent brand marketing and positioning values.
Given that the list format is the key reason for the “halo effect” bias, the only viable solution to correct for it is to “silo” brand questions, asking respondents about one brand or item at a time. This approach does come with a downside: it adds considerable length to the survey. A list question with 10 items is still only one question, whereas siloing them means 10 individual questions, one for each item.
However, considering the substantial benefits, we decided to opt for a siloed approach as a default. Since implementing this, we have not only seen an elimination of the “halo” bias, but also a range of additional benefits:
1 - Increased survey engagement
After implementing a siloed brand question format, we found that respondents spent on average 38% longer answering the question, indicating that they are less likely to rush through the surveys.
2 - Maximum flexibility to make changes to the survey
Siloing questions makes us completely flexible in our tracker setups - we can easily add or remove brands, amend or extend image statements or other association questions without any disruption to the existing data.
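One way to picture why siloing decouples brands is as a storage sketch: one record per brand per wave, so adding or removing a brand never touches another brand’s series. The structure and numbers below are purely illustrative (this is not Latana’s actual data model):

```python
# Hypothetical sketch of siloed tracker storage: one record per brand
# per wave, so each brand's series is independent of the others.
from dataclasses import dataclass

@dataclass
class AwarenessRecord:
    wave: str
    brand: str
    aware_share: float  # share of respondents aware of the brand

records = [
    AwarenessRecord("2024-Q1", "Mr Muscle", 0.73),  # illustrative values
    AwarenessRecord("2024-Q1", "Splosh", 0.04),
]

# Adding a brand just appends new records; no existing row changes.
records.append(AwarenessRecord("2024-Q1", "Smol", 0.12))

# Removing a brand filters out its rows without touching the rest.
records = [r for r in records if r.brand != "Splosh"]
```

With a list-format question, by contrast, every brand’s figures are tied to the one question they were asked in, so any change to the list can shift the whole series.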
3 - Capturing uncertainty
By siloing the question format, we were also able to introduce a “not sure” option, which reduces acquiescence bias and eliminates other undesirable effects that come with binary answer options on, for example, image statements.
Discover how Latana's advanced data collection and data processing techniques can elevate your brand strategy.