Trackers, Brand Tracking Surveys and Usage & Attitude (U&A) Surveys are synonymous terms for research intended to measure changes in consumer behavior towards a brand, product or service over a period of time. Individual rounds are known as waves because, like waves, they recur every X months or years, generally using the same questions and metrics. Brand Tracking Research differs from a one-off consumer survey in that it relies on measuring metrics, and changes in those metrics, from wave to wave.
Before you run a wave or tracker, it is important to have a clear target segment and an agreed brand positioning in mind; it must be clear why you intend to measure and how it will be measured. The research agency undertaking the fieldwork can help you establish HOW to do this, but it is your task as the client to determine WHY you are running a tracker.
The frequency of the tracker is also important. Run it too often and saturation can result; conversely, you don't want to measure too sporadically. Generally, you want to align waves with the launch, change or testing of new advertisements, products or services, and to ensure the data comes back in time for your company strategy meeting. As a rule of thumb, global brands run trackers every month, big brands once a quarter, medium brands twice a year and smaller brands yearly.
The logic: unless your brand is omnipresent, it will be difficult to find a qualifying sample for monthly or quarterly studies, and it is unlikely that attitudes and usage would change quickly enough to justify them.
A ubiquitous brand, by contrast, is subject to far more exposure and external factors, so its usage and attitude metrics can shift fairly often, which is why such brands monitor more frequently. Smaller brands are advised to run trackers yearly, or twice a year at most. With a smaller consumer pool to draw from, constructing a decent-sized fresh sample is difficult, so repeat sample may need to be used. In this case, respondents must be given adequate time before being re-invited to a tracker so that metrics are measured objectively, without bias from the previous wave. This gap is called an exclusion period.
This raises the question: do we use a fresh sample for each wave or a repeat sample? The answer is never straightforward.
Tracker studies must be conducted with a sample that is brand aware; otherwise it would not be possible to measure changes in attitudes (a respondent who has never heard of a brand cannot rate or rank its attributes). For well-known brands it is easy to secure fresh samples, but for smaller ones it is often a challenge, which means samples must be re-used. Allowing enough time between waves ensures answers are not affected by previous ones.
Reasons for using a fresh sample each wave include avoiding bias, over-exposure, fatigue and subjectivity in the answers. The trade-off is that a fresh sample rules out true longitudinal correlation in metric variations: a true correlation across a time interval should be measured against the same subjects, minimizing respondent-dependent variation (asking the same question each year for 10 years to the same person, versus asking it each year to different people).
The middle way uses a split sample of 50% fresh and 50% repeat respondents, which supports further correlations between U&A changes. However, this is not always an affordable luxury.
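A minimal sketch of how such a 50/50 split might be drawn, assuming two hypothetical pools (fresh respondents and past-wave respondents eligible for re-invitation); the pool names and sizes here are invented for illustration:

```python
import random

def split_sample(fresh_pool, repeat_pool, n, seed=0):
    """Draw a 50/50 wave sample: half fresh respondents, half repeats."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    half = n // 2
    fresh = rng.sample(fresh_pool, half)
    repeat = rng.sample(repeat_pool, n - half)
    return fresh, repeat

fresh_pool = [f"f{i:03d}" for i in range(500)]   # hypothetical fresh panelists
repeat_pool = [f"r{i:03d}" for i in range(300)]  # hypothetical past respondents
fresh, repeat = split_sample(fresh_pool, repeat_pool, 200)
print(len(fresh), len(repeat))  # 100 100
```

Keeping the two halves labelled lets you analyse them separately, so you can check whether repeat respondents answer systematically differently from fresh ones.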
To measure trackers correctly, it is important to rely on rating and ranking scales. For more in-depth insight, card-slot style questions are recommended. To avoid subjectivity and misread questions, the tracker should consist of rated questions with a clearly balanced scale. Ranking questions are used to establish the relative importance of items within a group.
Card slots are hybrid questions in which respondents assign a series of attributes to specific products; they are more complex than MaxDiff and Conjoint studies.
MaxDiff questions are used to obtain importance (or preference) scores for different items; the technique is also known as best-worst scaling.
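To illustrate where those scores come from, the simplest way to summarise best-worst data is a counting score: the number of times an item is chosen as "best" minus the number of times it is chosen as "worst". The example below uses made-up task data; real MaxDiff analysis typically fits a logit or hierarchical Bayes model rather than raw counts:

```python
from collections import Counter

# Hypothetical best-worst tasks: each records the item picked as "best"
# and the item picked as "worst" from the set shown to the respondent.
tasks = [
    {"best": "price", "worst": "packaging"},
    {"best": "quality", "worst": "packaging"},
    {"best": "price", "worst": "brand"},
    {"best": "quality", "worst": "price"},
]

def best_worst_scores(tasks):
    """Counting score per item: times chosen best minus times chosen worst."""
    best = Counter(t["best"] for t in tasks)
    worst = Counter(t["worst"] for t in tasks)
    items = set(best) | set(worst)
    return {item: best[item] - worst[item] for item in sorted(items)}

print(best_worst_scores(tasks))
# {'brand': -1, 'packaging': -2, 'price': 1, 'quality': 2}
```

Here "quality" ends up most preferred and "packaging" least, matching an intuitive read of the four tasks.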
Conjoint analysis, like MaxDiff, presents a group of attributes, but the attributes do not need to be opposites of each other and can be similar.
Ultimately, trackers, or wave studies, are a cost-effective quantitative research option for measuring usage and attitudes. They provide insightful business analytics, support an understanding of consumer trends, help correct misguided business decisions and generate valuable material for marketing, R&D and advertising departments.
As Senior Research Analyst, Tim focuses on the quantitative services of the company, overseeing the successful delivery of all projects for clients ranging from Microsoft and Skyscanner to Waitrose. His areas of speciality include panel management, online communities (both creating and fostering them), surveys, conjoint, MaxDiff, data analysis, SPSS, moderation, online focus groups, semiotics and ethnography. Before joining Atomik Research, Tim worked as a Community Manager and Researcher for companies including Cision, Allegra Strategies and Channel 4.