The Patient - Patient-Centered Outcomes Research

Volume 12, Issue 1, pp 113–123

“I Was Trying to Do the Maths”: Exploring the Impact of Risk Communication in Discrete Choice Experiments

  • Caroline Vass
  • Dan Rigby
  • Katherine Payne
Original Research Article



Abstract

Risk is increasingly used as an attribute in discrete choice experiments (DCEs). However, risk and probabilities are complex concepts that are open to misinterpretation, potentially undermining the robustness of DCEs as a valuation method. This study aimed to understand how respondents made benefit–risk trade-offs in a DCE and whether these were affected by the communication of the risk attributes.


Female members of the public were recruited via local advertisements to participate in think-aloud interviews while completing a DCE eliciting their preferences for a hypothetical breast screening programme described by three attributes: probability of detecting a cancer; risk of unnecessary follow-up; and cost of screening. Women were randomised to receive risk information as either (1) percentages only or (2) percentages and icon arrays. Interviews were digitally recorded and then transcribed to generate qualitative data for thematic analysis.
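The icon arrays shown to the second arm are a standard risk-communication graphic: a grid of (typically 100) icons with the affected share highlighted, so that "7%" reads as "7 people out of 100". As a purely illustrative sketch (the `icon_array` helper and its text rendering are assumptions for exposition, not the study's actual stimuli), a 10 × 10 grid can be generated like this:

```python
def icon_array(percent, rows=10, cols=10, hit="X", miss="."):
    """Render a percentage as a rows x cols grid of icons.

    With the default 10 x 10 grid, each cell represents 1 person in 100;
    'hit' cells mark affected individuals and 'miss' cells the remainder.
    """
    total = rows * cols
    n_hits = round(percent / 100 * total)
    cells = [hit] * n_hits + [miss] * (total - n_hits)
    # Join cells row by row into a printable grid
    return "\n".join(" ".join(cells[r * cols:(r + 1) * cols]) for r in range(rows))

# e.g. a 7% risk of unnecessary follow-up as a 100-person grid
print(icon_array(7))
```

In practice, published icon arrays use person-shaped pictograms rather than text characters, but the mapping from a percentage to a count of highlighted icons is the same.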


Nineteen women completed the interviews (icon arrays n = 9; percentages n = 10). Analysis revealed four key themes: women made reference to (1) the nature of the task; (2) their feelings; (3) their experiences, for instance drawing analogies to similar risks; and (4) economic phenomena such as opportunity costs and discounting.


Most women completed the DCE in line with economic theory; however, violations were identified. Women appeared to visualise risk whether they received icon arrays or percentages only. Providing clear instructions and graphics to aid the interpretation of risk, together with qualitative piloting to verify understanding, is recommended. Further investigation is required to determine whether the process of verbalising thoughts changes the behaviour of respondents.



Acknowledgements

The authors are grateful for feedback received at the Society for Medical Decision Making’s 36th and 37th annual meetings. We thank Professor Gareth Evans and Professor Tony Howell for their clinical input on framing the choice question for a hypothetical breast screening programme in the UK; Dr Michelle Harvie for her assistance in specifying the relevant background questions; and Ms Paula Stavrinos for her feedback on the introductory materials and video used in the survey. We also thank Professor Stephen Campbell of the Centre for Primary Care at The University of Manchester for his comments on the interview schedule and his feedback as the themes developed.

Author contribution

DR and KP conceptualised the research question, helped develop the interview schedule, read transcripts, provided feedback on the analysis of the data and contributed to the writing of the manuscript. CV arranged and conducted all interviews, completed the analysis of the qualitative data, and prepared the first draft of the manuscript.


Ethics approval

Ethical approval for this study was granted by The University of Manchester’s Research Ethics Committee. All participants provided consent.


Funding

Preparation of this manuscript was made possible by a grant awarded by The Swedish Foundation for Humanities and Social Sciences (Riksbankens Jubileumsfond) for a project entitled ‘Mind the Risk’. Caroline Vass was in receipt of a National Institute for Health Research (NIHR) School for Primary Care Research (SPCR) Ph.D. Studentship between October 2011 and 2014. The views expressed in this article are those of the authors and not of the funding bodies.

Conflict of interest

Caroline Vass, Dan Rigby and Katherine Payne declare that they have no conflicts of interest.

Supplementary material

Supplementary material 1 (DOCX 580 kb)



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Manchester Centre for Health Economics, The University of Manchester, Manchester, UK
  2. Department of Economics, The University of Manchester, Manchester, UK