Making Advanced Analytics Better Reflect the Real Marketplace
Intimidating terms to those not fully immersed in the language of marketing research, conjoint and discrete choice modeling are nonetheless widely used and widely debated techniques that have been the subjects of many a conference and paper, and for good reason. Essentially, they consist of several sets of independent purchasing scenarios; in each one, the respondent is asked what they would do (i.e., how they would behave). For example, one set might present a product characterized by (a) a low price and (b) little promotion. The next might present the same product characterized by (a) a high price and (b) a good deal of promotion. At the end of the exercise, the idea is that you have a mathematical model that reflects how consumers would decide about a purchase in the real world when confronted with similar choices.
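To make the mechanics concrete, here is a minimal sketch in Python of how a fitted choice model turns attribute levels such as price and promotion into predicted choice shares. The multinomial logit form shown is a standard one, but the coefficient values, alternative names, and scenarios are invented purely for illustration; a real study would estimate the coefficients from respondents' choices across many scenarios.

```python
import numpy as np

# Hypothetical part-worths from a fitted multinomial logit choice model.
# (Illustrative values only -- a real study estimates these from the
# choices respondents make across many purchasing scenarios.)
BETA_PRICE = -0.04      # utility lost per dollar of price
BETA_PROMO = 0.60       # utility gained when the product is heavily promoted
ASC = {"our_product": 0.2, "competitor": 0.0}  # brand-specific constants

def utility(alternative, price, promoted):
    """Linear utility for one alternative in one scenario."""
    return ASC[alternative] + BETA_PRICE * price + BETA_PROMO * (1 if promoted else 0)

def choice_shares(scenario):
    """Multinomial logit: share of each alternative = exp(U) / sum(exp(U))."""
    utils = np.array([utility(name, price, promo) for name, price, promo in scenario])
    expu = np.exp(utils)
    return expu / expu.sum()

# Scenario 1: our product at a low price with little promotion.
scenario_1 = [("our_product", 20.0, False), ("competitor", 25.0, False)]
# Scenario 2: our product at a high price with heavy promotion.
scenario_2 = [("our_product", 35.0, True), ("competitor", 25.0, False)]

print(choice_shares(scenario_1))  # predicted shares under scenario 1
print(choice_shares(scenario_2))  # predicted shares under scenario 2
```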
As attractive as these techniques are, given their ability to mimic consumers' real behavior of trading off the benefits and drawbacks of one product against those of another, they do draw criticism unless they include some measure of calibration. Skeptics complain that the exercises exaggerate price sensitivity, or that a new feature draws too much (hypothetical) market share relative to the alternatives, and that this would not be so in the actual marketplace. Hence the need for calibration.
The problem is, it can be difficult to find something to calibrate to. We don't always work in an existing category with existing products, good category data is often hard to come by, and if we're working with a new product, it's unreasonable to assume, as most choice models do, that consumers are 100% aware of your product and therefore able to compare it to competitive alternatives. What to do?
Using a Virtual Marketplace to Enhance the Value of Choice Modeling
An approach Ipsos-Vantis has been developing for the past several years involves combining the results of choice modeling with the results from simulated test marketing (STM). STM takes a new product concept and tests it in a market research setting to understand how appealing it will be to consumers in the real marketplace. (Mathematical models bridge the virtual-to-real gap by translating what consumers say in a research setting to what they will do in the marketplace once they know about the product.)
A typical STM model will take into account, among other things:
- The price of the product
- What the competitive landscape looks like
- How much of a need the product fulfills for the consumer
- How quickly the consumer would act on buying the product.
But of course, estimating demand must also recognize that different types of marketing will influence how the consumer reacts to the new product. So, again using mathematical models that calibrate test results to in-market validations, we can estimate the impact specific marketing activities will have on consumer demand.
Using the simulated environment allows the research team to generate multiple forecasts at relatively low cost compared to running actual test markets.
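As a rough, hypothetical illustration of the kind of adjustment this involves (the actual models in an STM are considerably more elaborate), stated demand from the research setting can be scaled by the awareness and distribution a given marketing plan is expected to achieve:

```python
def adjusted_demand(trial_intent, awareness, distribution, market_size):
    """Scale stated trial intent by the share of consumers a marketing plan
    actually reaches. A deliberately simplified, hypothetical form of the
    research-setting-to-marketplace adjustment an STM performs."""
    return trial_intent * awareness * distribution * market_size

# Two hypothetical marketing plans for the same concept.
light_plan = adjusted_demand(trial_intent=0.30, awareness=0.25,
                             distribution=0.60, market_size=1_000_000)
heavy_plan = adjusted_demand(trial_intent=0.30, awareness=0.55,
                             distribution=0.85, market_size=1_000_000)
print(light_plan, heavy_plan)  # demand forecasts under the two plans
```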
One Case In Point
This approach was useful when we were helping our client, a major technology company, with their launch of a product into different markets. The product was unique because it bundled two-way communication (also known as "direct connect") with cell phones. A tradeoff exercise helped the client understand how much consumers would prefer this new technology to walkie-talkies, cell phones, and pagers.
At the time there were no category data to calibrate to, as the product really straddled categories. We used the STM approach to understand the demand for this product for three configurations:
- Relatively low handset price and monthly fee
- Relatively high handset price and monthly fee
- A combination in between the first two.
Potential buyers evaluated one of these three scenarios, and we used their feedback as the basis for the simulated test market. These three data points then served as calibration points for a tradeoff (choice) exercise that followed later. They were chosen because they bracket the full range of configurations in the choice exercise, which tested thousands of variations in handset pricing, service-fee pricing, and product capabilities. We then calibrated the choice model using the results from the STM; the net effect was to more closely replicate the price sensitivity that would occur in-market once the product was launched.
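As a simplified sketch of what such a calibration step can look like, the snippet below rescales the price coefficient of a hypothetical choice model so that its predicted shares at the three tested configurations come as close as possible to the STM demand estimates. All numbers are invented, and real calibrations typically involve more parameters than a single scaling factor:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical STM demand estimates (share of market) for the three
# configurations: low price/fee, high price/fee, and in-between.
stm_shares = np.array([0.12, 0.05, 0.08])
handset_prices = np.array([99.0, 249.0, 169.0])

BETA_PRICE = -0.015   # price coefficient from the uncalibrated choice model
BASE_UTILITY = 0.5    # all non-price utility, collapsed for illustration

def predicted_share(price, price_scale):
    """Binary-logit share of choosing the product vs. not buying."""
    u = BASE_UTILITY + price_scale * BETA_PRICE * price
    return 1.0 / (1.0 + np.exp(-u))

def calibration_loss(price_scale):
    """Squared error between model shares and STM shares at the 3 points."""
    preds = predicted_share(handset_prices, price_scale)
    return float(np.sum((preds - stm_shares) ** 2))

# Find the scaling of the price coefficient that best matches the STM.
result = minimize_scalar(calibration_loss, bounds=(0.1, 5.0), method="bounded")
print("calibrated price-coefficient scale:", result.x)
```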
After a period of time, we could compare the results from the model to in-market results. From the live data, we found that had we used only the uncalibrated choice model, our results would have indicated an exaggerated degree of price sensitivity among consumers. The information the STM added enhanced the overall predictiveness of the results. The following chart shows that the elasticity predicted by the STM-calibrated portion of our study came much closer to the elasticity actually observed in the marketplace than the elasticity predicted by the straight choice model did.
Combining traditional discrete choice and STM, then, looks like a promising approach that will give companies putting new products into the marketplace (always a risky venture) a much better sense of what they can expect from different options. The approach takes an excellent technique and brings it closer to a real-world environment, which can only lead to better predictions.