
Right to Contest AI Diagnostics

Defining Transparency and Explainability Requirements from a Patient’s Perspective

Reference work entry in Artificial Intelligence in Medicine

Abstract

The transparency and explainability of AI decision-making have attracted considerable attention in recent years. In this chapter, we argue that patients have a right to contest AI medical decisions and that the transparency requirements for AI decision-making in health care should be guided by this right. We define the right to contest AI medical decisions both formally and substantially. Formally, the right to contest AI medical decisions must be a right that (i) is grounded in moral values, (ii) is effective in protecting patients’ rights, (iii) is proportional to the potential costs to others, and (iv) is an application of a more general right to contest medical decisions. Substantially, the right to contest AI medical decisions should enable patients to contest (I) the AI system’s use of personal and sensitive data, (II) the system’s potential biases, (III) the system’s performance, and (IV) the division of labor between the system and healthcare professionals. We justify and define 14 specific informational requirements, i.e., transparency requirements, that follow from the substantial notion of the right to contest AI medical decisions. Finally, we briefly discuss the patient-centered approach taken in this chapter in relation to alternative approaches that ground transparency requirements in considerations of democracy and the interests of science.



Author information

Correspondence to Thomas Ploug.


Copyright information

© 2022 Springer Nature Switzerland AG

About this entry


Cite this entry

Ploug, T., Holm, S. (2022). Right to Contest AI Diagnostics. In: Lidströmer, N., Ashrafian, H. (eds) Artificial Intelligence in Medicine. Springer, Cham. https://doi.org/10.1007/978-3-030-64573-1_267


  • DOI: https://doi.org/10.1007/978-3-030-64573-1_267

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-64572-4

  • Online ISBN: 978-3-030-64573-1

  • eBook Packages: Medicine, Reference Module Medicine
