Abstract
The use of artificial intelligence in healthcare is highly controversial because of the (bio)ethical conundrums it creates. One of the main problems arising from its implementation is the opacity of machine learning algorithms, which is thought to impede patients’ autonomous choice in medical decisions. If patients cannot clearly understand why and how an AI algorithm reached a certain medical decision, their autonomy is undermined. However, there are ways to prevent the negative impact of AI opacity on shared (healthcare professional–patient) decision-making while still benefiting from the high accuracy of such systems. Despite these ethical issues, this paper advocates for the use of AI in healthcare and defends the positive role it can play in enhancing patients’ autonomy. I argue for a system in which the implementation of AI is only partial and humans remain in control, overseeing the algorithmic interpretation of medical images and patient data.
Data availability
Not applicable.
Notes
The argument defended in the paper advocating for the use of AI systems that are opaque does not preclude further research in explainable AI.
Following Mittelstadt et al. (2016, pp. 2–3), the use of “algorithm” here includes both the mathematical construct and its broader societal use, which includes the algorithm’s implementation and configuration for a specific task.
However, local explanations of opaque AI outputs, in the area around a given query, are becoming easier to produce (Edwards and Veale 2018).
It might be further argued that AI could be perceived as another relational agent in the medical encounter, besides HCPs, patients, and family. However, I believe that the current state-of-the-art AI technologies in healthcare must be understood as tools HCPs use for specific purposes, hence lacking the attributes to be considered agents (to the same extent that we do not consider other devices currently used by HCPs). Thanks to an anonymous reviewer for pointing this out.
AI is only involved in the decision-making to the extent that it offers the different treatment alternatives and takes part in the diagnosis process.
For a detailed analysis of the prioritization of accuracy over explainability cf. (Hatherley et al. 2022).
For an alternative perspective on the trade-offs between explainability and accuracy cf. (Rudin 2019).
Shared decision-making shall not become a stronghold for medical paternalism.
Although their chosen example applies to the risk of driving accidents, it could be easily extended to the use of AI in healthcare.
Cf. (National Institute for Health Research 2019): public juries favor accuracy over explanation in two scenarios involving AI in health.
See Ethics Guidelines for Trustworthy AI (High-Level Expert Group on Artificial Intelligence 2019).
Thanks to an anonymous reviewer for bringing this to my attention.
Cf. (Rueda et al. 2022), who argue that explainability is indispensable from a procedural justice perspective.
For the difficulties in the proper implementation of these technologies that would allow us to benefit from their clinical, economic, and quality positive outcomes cf. (Belard et al. 2017).
e.g., CoDoC, DeepMind’s new AI system, is applied alongside current diagnostic AI tools to help healthcare professionals identify when to trust the AI output and when to defer to a human expert (Dvijotham et al. 2023).
During the presentation of this paper at a conference, a member of the audience rightly pointed to a scenario where the use of AI could be potentially established as a standard of care, thus being impossible to opt out from its usage as a patient. While acknowledging the ethical significance of this case, addressing it is beyond the scope of this paper.
Moreover, in legal terms, full disclosure about the use of AI is not always mandatory; only when the AI is front-end does patients’ autonomy require clear notification that they are dealing with a machine (Schönberger 2019).
Additionally, we need to be aware of and concerned about the incorporation of personal health and biometric data into surveillance capitalist structures whose objective is to monetize that information and predict our behavior. Cf. (Zuboff 2019).
Thanks to an anonymous reviewer for useful comments on these cases.
The authors discuss AI that predicts treatment preferences for incompetent patients, but then extend it to competent patients with the idea of the introspection tool.
References
Astromskė K, Peičius E, Astromskis P (2021) Ethical and legal challenges of informed consent applying artificial intelligence in medical diagnostic consultations. AI & Soc 36(2):509–520. https://doi.org/10.1007/s00146-020-01008-9
Beauchamp TL, Childress JF (2009) Principles of biomedical ethics, 6th edn. Oxford University Press, Oxford
Belard A, Buchman T, Forsberg J, Potter BK, Dente CJ, Kirk A, Elster E (2017) Precision diagnosis: a view of the clinical decision support systems (CDSS) landscape through the lens of critical care. J Clin Monit Comput 31(2):261–271. https://doi.org/10.1007/s10877-016-9849-1
Berner ES, La Lande TJ (2007) Overview of clinical decision support systems. In: Berner ES (ed) Clinical decision support systems: theory and practice. Springer, New York, pp 3–22. https://doi.org/10.1007/978-0-387-38319-4_1
Binns R, Van Kleek M, Veale M, Lyngs U, Zhao J, Shadbolt N (2018) “It’s reducing a human being to a percentage”: perceptions of justice in algorithmic decisions. In: Proceedings of the 2018 CHI conference on human factors in computing systems. https://doi.org/10.1145/3173574.3173951
Braun M, Hummel P, Beck S, Dabrock P (2021) Primer on an ethics of AI-based decision support systems in the clinic. J Med Ethics 47(12):E3. https://doi.org/10.1136/medethics-2019-105860
Burrell J (2016) How the machine ‘thinks’: understanding opacity in machine learning algorithms. Big Data Soc. https://doi.org/10.1177/2053951715622512
Chan B (2023) Black-box assisted medical decisions: AI power vs. ethical physician care. Med Health Care Philos. https://doi.org/10.1007/s11019-023-10153-z
Chin-Yee B, Michael S, Upshur R (2019) Three problems with big data and artificial intelligence in medicine. Perspect Biol Med 62(2):237–256
de Fine Licht K, de Fine Licht J (2020) Artificial intelligence, transparency, and public decision-making: why explanations are key when trying to produce perceived legitimacy. AI & Soc 35(4):917–926. https://doi.org/10.1007/s00146-020-00960-w
Di Nucci E (2019) Should we be afraid of medical AI? J Med Ethics 45(8):556–558. https://doi.org/10.1136/medethics-2018-105281
Doran D, Schulz S, Besold TR (2017) What does explainable AI really mean? A new conceptualization of perspectives. http://arxiv.org/abs/1710.00794
Dvijotham K, Winkens J, Barsbey M, Ghaisas S, Stanforth R, Pawlowski N, Strachan P, Ahmed Z, Azizi S, Bachrach Y, Culp L, Daswani M, Freyberg J, Kelly C, Kiraly A, Kohlberger T, McKinney S, Mustafa B, Natarajan V, Karthikesalingam A (2023) Enhancing the reliability and accuracy of AI-enabled diagnosis via complementarity-driven deferral to clinicians. Nat Med 29(7):1814–1820. https://doi.org/10.1038/s41591-023-02437-x
Edwards L, Veale M (2018) Enslaving the algorithm: from a “right to an explanation” to a “right to better decisions”? IEEE Secur Priv 16(3):46–54. https://doi.org/10.1109/MSP.2018.2701152
Esteva A, Robicquet A, Ramsundar B, Kuleshov V, DePristo M, Chou K, Cui C, Corrado G, Thrun S, Dean J (2019) A guide to deep learning in healthcare. Nat Med 25(1):24–29. https://doi.org/10.1038/s41591-018-0316-z
Ferrario A, Gloeckler S, Biller-Andorno N (2023) AI knows best? Avoiding the traps of paternalism and other pitfalls of AI-based patient preference prediction. J Med Ethics 49(3):185–186. https://doi.org/10.1136/jme-2023-108945
Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C, Madelin R, Pagallo U, Rossi F, Schafer B, Valcke P, Vayena E (2021) An ethical framework for a good AI Society: opportunities, risks, principles, and recommendations. Philosophical studies series, vol 144. Springer Nature, Cham, pp 19–39. https://doi.org/10.1007/978-3-030-81907-1_3
Hatherley J, Sparrow R, Howard M (2022) The virtues of interpretable medical AI. Camb Q Healthc Ethics (forthcoming). Accepted 10 June 2022
Henin C, Le Métayer D (2022) Beyond explainability: justifiability and contestability of algorithmic decision systems. AI & Soc 37(4):1397–1410. https://doi.org/10.1007/s00146-021-01251-8
High-Level Expert Group on Artificial Intelligence (2019) Ethics guidelines for trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
Holzinger A, Biemann C, Pattichis CS, Kell DB (2017) What do we need to build explainable AI systems for the medical domain? http://arxiv.org/abs/1712.09923
Jennings B (2016) Reconceptualizing autonomy: a relational turn in bioethics. Hastings Cent Rep 46(3):11–16. https://doi.org/10.1002/hast.544
Kapeller A, Loosman I (2023) Empowerment through health self-testing apps? Revisiting empowerment as a process. Med Health Care Philos. https://doi.org/10.1007/s11019-022-10132-w
Klugman CM (2021) Black boxes and bias in AI challenge autonomy. Am J Bioethics 21(7):33–35. https://doi.org/10.1080/15265161.2021.1926587
Kreitmair KV (2023) Mobile health technology and empowerment. Bioethics. https://doi.org/10.1111/bioe.13157
Lamanna C, Byrne L (2018) Should artificial intelligence augment medical decision making? The case for an autonomy algorithm. AMA J Ethics 20(9):902–910. https://doi.org/10.1001/amajethics.2018.902
London AJ (2019) Artificial intelligence and black-box medical decisions: accuracy versus explainability. Hastings Cent Rep 49(1):15–21. https://doi.org/10.1002/hast.973
Mazoué JG (1990) Diagnosis without doctors. J Med Philos 15(6):559–579. https://doi.org/10.1093/jmp/15.6.559
McDougall RJ (2019) Computer knows best? The need for value-flexibility in medical AI. J Med Ethics 45(3):156–160. https://doi.org/10.1136/medethics-2018-105118
Meier LJ, Hein A, Diepold K, Buyx A (2022) Algorithms for ethical decision-making in the clinic: a proof of concept. Am J Bioeth. https://doi.org/10.1080/15265161.2022.2040647
Mittelstadt B (2019) The ethics of biomedical ‘Big Data’ analytics. Philos Technol 32(1):17–21. https://doi.org/10.1007/s13347-019-00344-z
Mittelstadt BD, Allo P, Taddeo M, Wachter S, Floridi L (2016) The ethics of algorithms: Mapping the debate. Big Data Soc 3(2). https://doi.org/10.1177/2053951716679679
National Institute for Health Research (2019) Involving the public in complex questions around artificial intelligence research. https://www.nihr.ac.uk/blog/involving-the-public-in-complex-questions-around-artificialintelligence-research/12236
Nong P (2023) Demonstrating trustworthiness to patients in data-driven health care. Hastings Cent Rep 53(S2):S69–S75. https://doi.org/10.1002/hast.1526
Obermeyer Z, Emanuel EJ (2016) Predicting the future—big data, machine learning, and clinical medicine. N Engl J Med 375(13):1216–1219. https://doi.org/10.1056/NEJMp1606181
Ploug T, Holm S (2020a) The four dimensions of contestable AI diagnostics—a patient-centric approach to explainable AI. Artif Intell Med. https://doi.org/10.1016/j.artmed.2020.101901
Ploug T, Holm S (2020b) The right to refuse diagnostics and treatment planning by artificial intelligence. Med Health Care Philos 23(1):107–114. https://doi.org/10.1007/s11019-019-09912-8
Popa EO, van Hilten M, Oosterkamp E, Bogaardt MJ (2021) The use of digital twins in healthcare: socio-ethical benefits and socio-ethical risks. Life Sci Soc Policy. https://doi.org/10.1186/s40504-021-00113-x
Ribeiro MT, Singh S, Guestrin C (2016) “Why should I trust you?” Explaining the predictions of any classifier. In: Proceedings of the ACM SIGKDD international conference on knowledge discovery and data mining, pp 1135–1144. https://doi.org/10.1145/2939672.2939778
Ross A (2022) AI and the expert; a blueprint for the ethical use of opaque AI. AI & Soc. https://doi.org/10.1007/s00146-022-01564-2
Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1(5):206–215. https://doi.org/10.1038/s42256-019-0048-x
Rueda J, Rodríguez JD, Jounou IP, Hortal-Carmona J, Ausín T, Rodríguez-Arias D (2022) “Just” accuracy? Procedural fairness demands explainability in AI-based medical resource allocations. AI Soc. https://doi.org/10.1007/s00146-022-01614-9
Schaefer GO, Kahane G, Savulescu J (2014) Autonomy and enhancement. Neuroethics 7(2):123–136. https://doi.org/10.1007/s12152-013-9189-5
Schönberger D (2019) Artificial intelligence in healthcare: a critical analysis of the legal and ethical implications. Int J Law Inf Technol 27(2):171–203. https://doi.org/10.1093/ijlit/eaz004
Schubbach A (2021) Judging machines: philosophical aspects of deep learning. Synthese 198(2):1807–1827. https://doi.org/10.1007/s11229-019-02167-z
Topol EJ (2019) High-performance medicine: the convergence of human and artificial intelligence. Nat Med 25(1):44–56. https://doi.org/10.1038/s41591-018-0300-7
Véliz C (2021) Privacy is power. Melville House, Brooklyn
Wachter S, Mittelstadt B, Russell C (2018) Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv J Law Technol 31(2):841–888
Watson DS, Krutzinna J, Bruce IN, Griffiths CEM, McInnes IB, Barnes MR, Floridi L (2019) Clinical applications of machine learning algorithms: beyond the black box. BMJ. https://doi.org/10.1136/bmj.l886
Zerilli J, Knott A, Maclaurin J, Gavaghan C (2019) Transparency in algorithmic and human decision-making: is there a double standard? Philos Technol 32(4):661–683. https://doi.org/10.1007/s13347-018-0330-6
Zuboff S (2019) The age of surveillance capitalism. Profile Books, London
Acknowledgements
I would like to thank my colleagues at the Applied Philosophy & Ethics department for valuable comments and suggestions on previous versions of this paper. I would also like to thank an anonymous reviewer for helpful comments that contributed to the final version.
Funding
Funding was provided by the Czech Academy of Sciences (AV ČR), Grant no. LQ300092001.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Guerrero Quiñones, J.L. Using artificial intelligence to enhance patient autonomy in healthcare decision-making. AI & Soc (2024). https://doi.org/10.1007/s00146-024-01956-6