Abstract
Stated-choice (SC) surveys, such as conjoint analysis, present problems for researchers that are not addressed in the traditional survey-development literature. While the constraints imposed by preference theory, the experimental design of the choice sets, and the statistical methods used to analyze choice data all pose challenges for researchers new to SC methods, they also open up techniques that are not possible with more traditional survey methods. In this article, we focus on issues of preference heterogeneity (variation in preferences across subjects by observable and non-observable covariates) and attribute dominance to illustrate the synergistic roles that survey-design and analytical strategies play in SC research. We show how advanced analytical techniques are likely to be more important than survey-design solutions when addressing preference heterogeneity. Good practice supports the use of mixed-logit and similar modeling approaches to mitigate the problem of unobserved preference or variance heterogeneity. However, if the sample size is too small, or the survey instrument does not ask about important subject characteristics, then the source of heterogeneity cannot be identified and the problems caused by heterogeneity will be magnified.
In contrast, minimizing and testing for attribute dominance rely on careful survey design rather than on more complex analysis. In general, survey design needs careful attention from researchers: no amount of complex analysis can compensate for a poor survey design that can generate only flawed SC data.
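The heterogeneity problem described above can be illustrated with a short simulation. This is a hypothetical sketch (not from the original study, and all parameter values are invented): subjects belong to two unobserved groups with different attribute weights, and a pooled logit fitted to all choices returns a single coefficient that matches neither group, which is the kind of bias that mixed-logit models are designed to address.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two latent subject groups with different preference weights for one
# attribute; group membership is unobserved by the analyst.
n_subj, n_choices = 200, 20
beta_groups = np.array([0.5, 2.5])        # true attribute weights by group
group = rng.integers(0, 2, n_subj)        # unobserved group membership
beta_i = beta_groups[group]               # each subject's true weight

# Each choice task offers two alternatives; x_diff is the attribute
# difference (alternative A minus alternative B) in each task.
x_diff = rng.normal(size=(n_subj, n_choices))
p_choose_A = 1.0 / (1.0 + np.exp(-beta_i[:, None] * x_diff))
y = (rng.random((n_subj, n_choices)) < p_choose_A).astype(float)

# Pooled logit (one coefficient for everyone) fitted by Newton's method.
x, yv = x_diff.ravel(), y.ravel()
b = 0.0
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-b * x))
    grad = np.sum((yv - p) * x)
    hess = -np.sum(p * (1 - p) * x * x)
    b -= grad / hess

# The pooled estimate lies between the two group-level weights, so it
# misrepresents the preferences of both groups.
print(round(b, 2))
```

A mixed-logit model would instead treat the coefficient as randomly distributed across subjects and recover its distribution, rather than a single averaged value.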
Notes
Figure 1 in the supplementary material (see ‘ArticlePlus’ at http://thepatient.adisonline.com) illustrates the problem of disentangling preference from scale.
Differences in the scale parameter across attributes could also produce response patterns that might look like dominant preferences. For example, if respondents are more familiar with one attribute, this could result in a smaller random utility component for that attribute, which would lead to tighter estimates of the coefficient.
If the dominance pattern results from differences in variance (scale) rather than means (preference parameters), the standard errors on the dummy variables will be incorrect.
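The scale point made in these notes can be demonstrated with a small simulation. This is an illustrative sketch with invented numbers, not an analysis from the article: in a logit model only the product of the preference parameter and the scale parameter is identified, so halving or doubling the error dispersion changes the estimated coefficient even when the underlying preference weight is fixed.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_logit(x, y, iters=25):
    """One-variable logistic regression fitted by Newton's method."""
    b = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-b * x))
        grad = np.sum((y - p) * x)
        hess = -np.sum(p * (1 - p) * x * x)
        b -= grad / hess
    return b

beta, n = 1.0, 50_000                 # true preference weight, sample size
x = rng.normal(size=n)                # attribute difference between options

est = {}
for lam in (0.5, 2.0):                # scale parameter (inverse dispersion)
    # Choice probabilities depend only on the product lam * beta, so the
    # fitted coefficient recovers that product, not beta itself.
    p = 1.0 / (1.0 + np.exp(-lam * beta * x))
    y = (rng.random(n) < p).astype(float)
    est[lam] = fit_logit(x, y)
    print(lam, round(est[lam], 2))
```

The higher-scale (lower-variance) condition yields a larger, more precisely estimated coefficient for the same true preference weight, which is why an apparently "dominant" attribute can reflect familiarity-driven differences in variance rather than stronger preferences.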
Acknowledgements
No sources of funding were used to assist in the preparation of this article. The authors have no conflicts of interest that are directly relevant to the content of this article.
Cite this article
Johnson, F.R., Mansfield, C. Survey-Design and Analytical Strategies for Better Healthcare Stated-Choice Studies. The Patient: Patient-Centered Outcomes Research 1, 299–307 (2008). https://doi.org/10.2165/1312067-200801040-00011