In this paper, we compare the standard, single-response choice-based conjoint (CBC) approach with three extended CBC procedures in terms of their external predictive validity and their ability to realistically capture consumers’ willingness to pay: (1) an incentive-aligned CBC mechanism (IA-CBC), (2) a dual-response CBC procedure (DR-CBC), and (3) an incentive-aligned dual-response CBC approach (IA-DR-CBC). Our empirical study features a unique sample of 2,679 music consumers who participated in a conjoint choice experiment prior to the market entry of a new music streaming service. To assess predictive accuracy, we contacted the same respondents 5 months after the launch and compared the predictions with their actual adoption decisions. The results demonstrate that IA-CBC and DR-CBC both increase predictive accuracy. This result is promising because IA-CBC is not applicable in every research context; DR-CBC thus provides a viable alternative. While we do not find an additional improvement in external validity from combining both extensions, the IA-DR-CBC approach yields the most realistic willingness-to-pay estimates and should therefore be preferred when incentive alignment is feasible.
Previous applications that evaluate external validity are scarce (see Toubia et al. 2003 for an overview).
See, however, the discussion in Carson and Groves (2007) of (strategic) incentives in survey research in general, which may influence the results.
We only assigned respondents to the IA conditions if we had access to their names and addresses, which is why these conditions exhibit fewer observations.
Note that we ultimately refrained from carrying out the buying obligation after the survey. Rewarding only participants under incentive alignment with access to the analyzed service would have biased the external validation. Instead, all respondents were appropriately debriefed and rewarded equally for their participation. While this was the only way to keep the experimental groups comparable in the present study, we emphasize that the procedure would raise ethical concerns if applied regularly.
We also checked measures that are more sensitive to prediction errors on large and small shares, i.e., root-mean-square error (RMSE) and chi-square (χ2). In addition, we calculated measures of relative entropy, i.e., Kullback-Leibler (KL) divergence (Ding et al. 2011; Kullback and Leibler 1951), and uncertainty explained U2 (based on individual-level RLH values; Kalwani et al. 1994). The ranking of the procedures remains largely unaffected, so we discuss only MAE and hit rates. Please refer to Appendix 3 for details.
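For readers who wish to reproduce such aggregate fit measures, a minimal sketch with illustrative predicted and observed market shares (not the study’s data):

```python
import math

def mae(pred, actual):
    """Mean absolute error between predicted and observed shares."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(pred)

def rmse(pred, actual):
    """Root-mean-square error; penalizes large deviations more heavily."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred))

def kl_divergence(actual, pred):
    """Kullback-Leibler divergence D(actual || pred); all shares must be > 0."""
    return sum(a * math.log(a / p) for a, p in zip(actual, pred))

# Illustrative shares for three alternatives plus the no-choice option
predicted = [0.30, 0.25, 0.15, 0.30]
observed  = [0.28, 0.22, 0.20, 0.30]
print(round(mae(predicted, observed), 4))  # -> 0.025
```

RMSE and KL divergence rank procedures similarly to MAE when prediction errors are spread evenly, but weight large deviations on individual shares more strongly.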
Our WTPS measure should not be confused with the WTP for individual product features (i.e., the amount by which the price can be raised if a feature is added), which can be calculated without the no-choice option as the ratio of utility estimates.
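The feature-level WTP mentioned here divides a feature’s part-worth difference by the (negative) price coefficient. A minimal sketch with hypothetical part-worths from a linear-in-price choice model (the numbers are illustrative, not estimates from the study):

```python
# Hypothetical part-worth utilities (illustrative values only)
u_feature_present = 1.4    # utility of a concept including the feature
u_feature_absent  = 0.9    # utility of the same concept without it
price_coefficient = -0.25  # utility change per euro of price (negative)

# Feature-level WTP: how far the price can rise before total utility
# returns to its level without the feature
wtp = (u_feature_present - u_feature_absent) / -price_coefficient
print(wtp)  # roughly 2 euros
```

Because this ratio compares two product profiles, it requires no no-choice alternative, whereas the WTPS measure does.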
We thank an anonymous reviewer for suggesting this step.
To ensure a high degree of comparability, we used the same wording in the non-incentive-aligned groups, stressing that service concepts should only be selected if respondents would subscribe to the respective services under real-world conditions.
Allenby, G., Fennell, G., Huber, J., Eagle, T., Gilbride, T., Horsky, D., Kim, J., Lenk, P., Johnson, R., Ofek, E., Orme, B., Otter, T., & Walker, J. (2005). Adjusting choice models to better predict market behavior. Marketing Letters, 16(3/4), 197–208.
Anderson, C. (2010). Free: how today’s smartest businesses profit by giving something for nothing. New York: Hyperion.
Becker, G. M., Degroot, M. H., & Marschak, J. (1964). Measuring utility by a single-response sequential method. Behavioral Science, 9(3), 226–232.
Brazell, J. D., Diener, C. G., Karniouchina, E., Moore, W. L., Séverin, V., & Uldry, P.-F. (2006). The no-choice option and dual response choice designs. Marketing Letters, 17(4), 255–268.
BVMI (2012). Jahreswirtschaftsbericht 2011. Berlin.
BVMI (2013). Musikindustrie in Zahlen 2012. Berlin.
Carson, R. T., & Groves, T. (2007). Incentive and informational properties of preference questions. Environmental and Resource Economics, 37(1), 181–210.
Dhar, R. (1997). Context and task effects on choice deferral. Marketing Letters, 8(1), 119–130.
Dhar, R., & Simonson, I. (2003). The effect of forced choice on choice. Journal of Marketing Research, 40(2), 146–160.
Diener, C. G., Orme, B., & Yardley, D. (2006). Dual response “none” approaches: theory and practice. Proceedings of the Sawtooth Software Conference, 157–167.
Ding, M. (2007). An incentive-aligned mechanism for conjoint analysis. Journal of Marketing Research, 44(2), 214–223.
Ding, M., Grewal, R., & Liechty, J. (2005). Incentive-aligned conjoint analysis. Journal of Marketing Research, 42(2), 67–82.
Ding, M., Hauser, J., Dong, S., Dzyabura, D., Yang, Z., Su, C., & Gaskin, S. (2011). Unstructured direct elicitation of decision rules. Journal of Marketing Research, 48(1), 116–127.
Dong, S., Ding, M., & Huber, J. (2010). A simple mechanism to incentive-align conjoint experiments. International Journal of Research in Marketing, 27(1), 25–32.
Gilbride, T. J., & Allenby, G. M. (2004). A choice model with conjunctive, disjunctive, and compensatory screening rules. Marketing Science, 23(3), 391–406.
Haaijer, R., Kamakura, W., & Wedel, M. (2001). The ‘no-choice’ alternative in conjoint choice experiments. International Journal of Market Research, 43(1), 93–106.
Hauser, J., Dong, S., & Ding, M. (2014). Self-reflection and articulated consumer preferences. Journal of Product Innovation Management, 31(1), 17–32.
Huber, J., & Zwerina, K. (1996). The importance of utility balance in efficient choice designs. Journal of Marketing Research, 33(3), 307–317.
Kalwani, M. U., Meyer, R. J., & Morrison, D. G. (1994). Benchmarks for discrete choice models. Journal of Marketing Research, 31(1), 65–75.
Karty, K. D., & Yu, B. (2012). Taking nothing seriously or “much ado about nothing”. Proceedings of the Sawtooth Software Conference, 129–151.
Kullback, S., & Leibler, R. A. (1951). On information and sufficiency. The Annals of Mathematical Statistics, 22(1), 79–86.
Louviere, J. J., Hensher, D. A., & Swait, J. D. (2000). Stated choice methods: analysis and applications. New York: Cambridge University Press.
Miller, K. M., Hofstetter, R., Krohmer, H., & Zhang, Z. J. (2011). How should consumers’ willingness to pay be measured? An empirical comparison of state-of-the-art approaches. Journal of Marketing Research, 48(1), 172–184.
Moore, W. L., Gray-Lee, J., & Louviere, J. J. (1998). A cross-validity comparison of conjoint analysis and choice models at different levels of aggregation. Marketing Letters, 9(2), 195–207.
Papies, D., Eggers, F., & Wlömert, N. (2011). Music for free? How free ad-funded downloads affect consumer choice. Journal of the Academy of Marketing Science, 39(5), 777–794.
Rossi, P. E., Allenby, G. M., & McCulloch, R. (2005). Bayesian Statistics and Marketing. New York: Wiley.
Toubia, O., Simester, D. I., Hauser, J. R., & Dahan, E. (2003). Fast polyhedral adaptive conjoint estimation. Marketing Science, 22(3), 273–303.
Wertenbroch, K., & Skiera, B. (2002). Measuring consumers’ willingness to pay at the point of purchase. Journal of Marketing Research, 39(2), 228–241.
The authors contributed equally to this research. The authors thank Christine Eckert, Dominik Papies, Edlira Shehu, and three anonymous reviewers for their constructive comments on previous versions of this manuscript. Nils Wlömert acknowledges financial support of the German Academic Exchange Service (D/12/45493).
Incentive alignment instructions (IA-CBC)
The following part of the questionnaire contains an auction, giving you the chance to subscribe to a real music streaming service. Your understanding of the auction process is crucially important for the remainder of the questionnaire. So please read the instructions provided on the following pages very carefully.
Selection decisions: Subsequent to the instructions, you will be shown different service configurations in 12 consecutive selection decisions. On each page of the questionnaire, please select, among the three alternatives, the configuration you prefer most. Please only select service concepts you would also subscribe to under real-world conditions. If none of the options appeal to you, please select “None of these.” You may view a short explanation of the service attributes by moving the mouse pointer over the texts (a screenshot was provided as a visual aid).
Calculation of your maximum auction bid: After you have completed the 12 choice tasks, we are able to calculate the maximum amount of money you would pay for a specific service configuration based on your selection decisions. This amount will be in the range between €0.00 and €12.49, depending on your choices, and constitutes your maximum bid for the subsequent auction.
Auction process: After the survey, a random price will be drawn from a uniform distribution of values in the range between €0.00 and €12.49.
If this random price is lower than your inferred maximum bid, you may subscribe to the music service for 1 month (and possibly longer, if you wish). Note, however, that in this case you are obliged to pay the randomly drawn price, which gives you access to the service for 1 month.
If this random price is higher than your inferred maximum bid, you may not subscribe to the service.
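As an editorial illustration only (not part of the original questionnaire), the BDM draw described above can be sketched as follows; the €12.49 ceiling and the uniform draw follow the instructions, while the bid value and the fixed seed are hypothetical:

```python
import random

def bdm_outcome(max_bid, price_ceiling=12.49, rng=None):
    """One round of the BDM mechanism: draw a uniform random price;
    the participant subscribes at that price iff it is below the bid."""
    rng = rng or random.Random(42)  # fixed seed for a reproducible sketch
    price = rng.uniform(0.0, price_ceiling)
    if price < max_bid:
        # The participant pays the random price, not the bid, which makes
        # truthful bidding the optimal strategy.
        return ("subscribe", round(price, 2))
    return ("no subscription", None)

# Hypothetical inferred maximum bid of EUR 7.00
print(bdm_outcome(7.00))  # -> ('no subscription', None): drawn price ~7.99 exceeds the bid
```

The key incentive property is visible in the payoff rule: since the payment equals the random price rather than the stated bid, overbidding risks paying more than the service is worth, while underbidding risks losing a subscription the participant values above the drawn price.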
Wlömert, N., Eggers, F. Predicting new service adoption with conjoint analysis: external validity of BDM-based incentive-aligned and dual-response choice designs. Mark Lett 27, 195–210 (2016). https://doi.org/10.1007/s11002-014-9326-x