New Concepts Benefit From Wider Testing
Measuring Performance Beyond Purchase Intent Alone
It is common in early-phase concept screening to rely on a handful of consumer interest questions, such as purchase intent, preference, rank order, liking, uniqueness, and brand fit. Over the years, we have found that the greatest emphasis is usually placed on purchase intent or a weighted purchase intent score.
We agree, of course, that purchase intent is important. It's where the 'rubber meets the road'. However, placing too much research emphasis on purchase intent can be problematic, especially when comparing concepts across a range of prices, brands, or categories. Why? For one thing, purchase intent ratings are volatile. Consider, for instance, that a good purchase intent score for a product at $19.99 may not be - and usually is not - a good score for a product at $5.99.
In validations of our forecasting system - where we compare pre-launch research scores with in-market performance - we have found that purchase intent alone is not the best predictor of market success. Weighted purchase intent is also problematic. Under that approach, the analyst assigns a 'discount factor' to each point on the purchase intent scale. While this paints a somewhat more realistic picture of product penetration potential, it does not help in comparing scores between concepts. That is, the discount factors would need to change for different price points, different customer/prospect groups, different categories, branded versus unbranded concepts, and so on.
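To make the weighted approach concrete, here is a minimal sketch of how a discount-factor calculation typically works. The five-point scale labels and the discount factors shown are illustrative assumptions for demonstration, not calibrated values from any actual study.

```python
# Illustrative weighted purchase intent calculation.
# The response shares and discount factors below are hypothetical.

# Share of respondents choosing each point on a 5-point scale.
responses = {
    "definitely would buy":     0.18,
    "probably would buy":       0.32,
    "might or might not buy":   0.25,
    "probably would not buy":   0.15,
    "definitely would not buy": 0.10,
}

# Hypothetical discount factors: the assumed fraction of each
# response group expected to actually purchase.
discount_factors = {
    "definitely would buy":     0.75,
    "probably would buy":       0.25,
    "might or might not buy":   0.10,
    "probably would not buy":   0.03,
    "definitely would not buy": 0.00,
}

# Weighted purchase intent: sum of (share x discount factor).
weighted_pi = sum(
    share * discount_factors[label] for label, share in responses.items()
)
print(f"Weighted purchase intent: {weighted_pi:.1%}")  # roughly 24%
```

The example also shows why the method breaks down across concepts: because the factors are fixed, the same weighted score means different things at $5.99 and $19.99, and the factors themselves would need to shift with price, category, and branding to stay comparable.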
What follows is an example of purchase intent distributions across a number of our studies. There are clear systematic differences across price points and categories. For example, durable goods over $200 (the blue line) operate on a different curve from products under $200 (the pink line). This illustrates the danger of treating raw purchase intent as the sole predictor of success.
A second pitfall in relying solely on purchase intent is its sensitivity to methodology. We have tested the same idea using different data collection approaches: mail surveys, mall-intercept surveys, pre-recruited or central location test (CLT) interviews, and Internet surveys. Purchase intent scores differ notably across these approaches.
The graph below illustrates this dynamic. Note, for example, that mail and online surveys elicit lower scores than in-person methodologies.
A more reliable way to test new products is to blend, or weight together, several measures. We have found that value and liking ratings, in combination with purchase intent, greatly enhance the accuracy of research results. Moreover, value and liking are more stable measures than purchase intent when comparing concepts across brands, prices, categories, and data collection methods.
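As a rough illustration of what blending can look like, the sketch below combines purchase intent, value, and liking into a single index. The specific weights and the 0-100 rescaling are assumptions chosen for demonstration; in a real forecasting system, the weights would be derived from in-market validation, not set by hand.

```python
# Illustrative blended concept score. The weights are hypothetical;
# in practice they would be estimated by relating pre-launch
# measures to observed in-market results.

def blended_score(purchase_intent: float, value: float, liking: float) -> float:
    """Combine three ratings into one index.

    All three inputs are assumed to be rescaled to 0-100 so that
    the weights are comparable across measures.
    """
    weights = {"purchase_intent": 0.40, "value": 0.35, "liking": 0.25}
    return (
        weights["purchase_intent"] * purchase_intent
        + weights["value"] * value
        + weights["liking"] * liking
    )

# Two hypothetical concepts: B trails A on purchase intent but
# wins on the more stable value and liking measures.
concept_a = blended_score(purchase_intent=62, value=48, liking=55)
concept_b = blended_score(purchase_intent=55, value=66, liking=64)
print(f"Concept A: {concept_a:.1f}")  # roughly 55
print(f"Concept B: {concept_b:.1f}")  # roughly 61
```

The design point is that a blended index dampens the volatility of any single measure: a concept that looks weaker on raw purchase intent can still surface as the stronger candidate once value and liking are weighted in.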
Our experience has shown that blended measures overcome some of the limitations of relying on purchase intent alone. The approach significantly improves new product development and, ultimately, leads to more wins in the marketplace.