Including Opt-Out Options in Discrete Choice Experiments: Issues to Consider


Providing an opt-out alternative in discrete choice experiments (DCEs) is often important for presenting realistic choice situations in different contexts, including health. However, insufficient attention has been given to how best to address choice behaviours relating to this opt-out alternative when modelling discrete choice experiments, particularly in health studies. The objective of this paper is to demonstrate how to account for different opt-out effects in choice models. We aim to contribute to a better understanding of how to model opt-out choices and to show the consequences of addressing these effects incorrectly. We present our code, written in the R statistical language, so that others can explore these issues in their own data. In this practical guideline, we generate synthetic data on medication choice and use Monte Carlo simulation. We consider three different definitions of the opt-out alternative and four candidate models for each definition. We apply a frequentist-based multimodel inference approach and use performance indicators to assess the relative suitability of each candidate model in a range of settings. We show that misspecifying the opt-out effect has repercussions for marginal willingness-to-pay estimation and the forecasting of market shares. Our findings also suggest a number of key recommendations for DCE practitioners interested in exploring these issues. There is no unique best way to analyse data collected from discrete choice experiments: researchers should consider several models so that the relative support for different hypotheses about opt-out effects can be explored.



Data availability statement

For this paper, the data have been synthetically generated. Full details of the data-generating process, and the code required to replicate our analysis, are given in Appendix A of the electronic supplementary material (ESM).


  1.

    We note that random utility maximisation is not the only framework for modelling choices. Indeed, for certain decisions, other choice axioms may be better suited, such as regret minimisation. In this paper, we utilise the most widely used framework to analyse opt-out effects.
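The random utility idea underpinning this framework is easy to illustrate numerically. The paper's own code is written in R (see the ESM); the sketch below is an independent Python toy with made-up deterministic utilities, showing that adding standard Gumbel noise to utilities and choosing the maximum reproduces multinomial logit choice probabilities:

```python
import math
import random

random.seed(1)  # reproducible draws

def simulate_rum_choices(v, n_draws=100_000):
    """Simulate repeated choices under random utility maximisation:
    total utility = deterministic part + standard Gumbel error, and the
    alternative with the highest total utility is chosen."""
    counts = [0] * len(v)
    for _ in range(n_draws):
        # Standard Gumbel draw via inverse CDF: -ln(-ln(U)), U ~ Uniform(0, 1)
        u = [vj - math.log(-math.log(random.random())) for vj in v]
        counts[u.index(max(u))] += 1
    return [c / n_draws for c in counts]

def logit_probs(v):
    """Closed-form multinomial logit probabilities implied by Gumbel errors."""
    expv = [math.exp(vj) for vj in v]
    total = sum(expv)
    return [e / total for e in expv]

v = [0.5, 0.0, -0.5]  # hypothetical deterministic utilities
shares = simulate_rum_choices(v)
exact = logit_probs(v)
```

With enough draws, the simulated choice shares match the analytical logit probabilities to within simulation noise.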

  2.

    Note, however, that the derivation of the nested logit model does not necessarily imply that participants make choices in this hierarchical manner.

  3.

    While this design ensures that all attribute levels can be estimated independently of each other, we recognise that a more efficient experimental design could have been used to minimise the variance of the parameters. However, in a Monte Carlo experiment with specified parameters it may be more appropriate to show that the results stand up in cases where the experimental design is not tailored too closely to the data-generating parameters. Indeed, this would be the case in a real-life empirical application.

  4.

    This is sufficient for the purpose at hand since idiosyncratic simulation errors are not found to be large, as will be shown in Tables 3 and 4.

  5.

    In this paper, we use the Bayesian information criterion. We derive this for each estimated model m in treatment t and replication r as follows: \(\text {IC}_{m_\mathrm{tr}}= \ln \left( N\right) K_{m_\mathrm{tr}} - 2\ln \left( \hat{\mathcal {L}}_{m_\mathrm{tr}}\right)\), where N is the number of choice observations, \(\hat{\mathcal {L}}_{m_\mathrm{tr}}\) is the maximised value of the likelihood function for model m in treatment t and replication r, and \(K_{m_\mathrm{tr}}\) is the number of estimated parameters associated with this model.
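The criterion is straightforward to compute from standard estimation output. As an illustration only (Python rather than the paper's R code; the model names, log-likelihood values and sample size below are hypothetical):

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: ln(N) * K - 2 * ln(L-hat).
    Lower values indicate stronger support for the model."""
    return math.log(n_obs) * n_params - 2.0 * log_likelihood

# Hypothetical output from two fitted models:
# (name, maximised log-likelihood, number of estimated parameters)
candidates = [("MNL", -1520.4, 5), ("nested logit", -1498.7, 6)]
n_obs = 2000  # hypothetical number of choice observations

# Rank candidate models from most to least supported
ranked = sorted(candidates, key=lambda m: bic(m[1], m[2], n_obs))
best = ranked[0][0]
```

Here the nested logit's gain in log-likelihood outweighs the penalty for its extra parameter, so it attains the lower BIC.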

  6.

    As noted when describing the independent availability logit model in Sect. 2.2.3, the alternatives taken into account by a (real or simulated) participant cannot be established with certainty. For the sake of comparison, we assume an alternative is not in a participant’s consideration set if they never choose it in any of their eight choice tasks.
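This never-chosen rule can be stated compactly in code. The sketch below is an illustrative Python version with a hypothetical participant; it is not the paper's own R implementation:

```python
def consideration_set(choices, alternatives):
    """Infer a participant's consideration set from observed choices:
    an alternative is deemed outside the set only if it is never chosen
    across any of the participant's choice tasks."""
    chosen = set(choices)
    return {alt for alt in alternatives if alt in chosen}

# Hypothetical participant facing eight choice tasks over three alternatives
alternatives = {"medication A", "medication B", "opt-out"}
observed = ["medication A", "opt-out", "medication A", "medication A",
            "opt-out", "medication A", "medication A", "medication A"]
inferred = consideration_set(observed, alternatives)
```

Since "medication B" is never chosen in the eight tasks, it is inferred to be outside this participant's consideration set.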




Acknowledgements

We thank the editor, Christopher Carswell, for his invitation to write this paper. We also thank four anonymous reviewers for their helpful comments and suggestions on previous versions of this paper. Any remaining errors or misinterpretations are solely the authors’ responsibility.

Author information




DC and SE contributed equally to all aspects of this paper, including the conceptualisation, data generation, analysis and drafting of the manuscript.

Corresponding author

Correspondence to Seda Erdem.

Ethics declarations


Funding

The study was not supported by any external sources of funding.

Ethical approval

The study did not involve the collection of primary data or the use of secondary data sources; therefore, ethical approval was not required.

Informed consent

Participants were artificially generated as part of the Monte Carlo simulation; therefore, informed consent is not applicable.

Conflict of interest

Danny Campbell and Seda Erdem declare no conflicts of interest relevant to the content of this manuscript.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 95 kb)


About this article


Cite this article

Campbell, D., Erdem, S. Including Opt-Out Options in Discrete Choice Experiments: Issues to Consider. Patient 12, 1–14 (2019).
