
Using artificial intelligence to enhance patient autonomy in healthcare decision-making

Open Forum · Published in AI & SOCIETY

Abstract

The use of artificial intelligence in healthcare contexts is highly controversial because of the (bio)ethical conundrums it creates. One of the main problems arising from its implementation is the lack of transparency of machine learning algorithms, which is thought to impede patients’ autonomous choice in their medical decisions. If a patient cannot clearly understand why and how an AI algorithm reached a certain medical decision, their autonomy is undermined. However, there are ways to prevent the negative impact of AI opacity on shared (healthcare professional–patient) decision-making processes while still benefiting from the high accuracy of such systems. Despite these ethical issues, this paper advocates for the use of AI in healthcare and defends the positive role it could play in enhancing patients’ autonomy. I advocate for a system in which the implementation of AI is only partial and humans remain in control, overseeing the algorithms’ interpretation of medical images and patient data.


Data availability

Not applicable.

Notes

  1. The argument defended in the paper advocating for the use of AI systems that are opaque does not preclude further research in explainable AI.

  2. Following Mittelstadt et al. (2016, pp. 2–3), the use of “algorithm” here includes both the mathematical construct and its broader societal use, which includes the algorithm’s implementation and configuration for a specific task.

  3. However, opaque AI outputs are becoming easier to explain locally, in the area of a particular query (Edwards and Veale 2018).

  4. It might be further argued that AI could be perceived as another relational agent in the medical encounter, besides HCPs, patients, and family. However, I believe that current state-of-the-art AI technologies in healthcare must be understood as tools that HCPs use for specific purposes, hence lacking the attributes required to be considered agents (just as we do not consider other devices currently used by HCPs to be agents). Thanks to an anonymous reviewer for pointing this out.

  5. AI is only involved in the decision-making to the extent that it offers the different treatment alternatives and takes part in the diagnosis process.

  6. For a detailed analysis of the prioritization of accuracy over explainability cf. (Hatherley et al. 2022).

  7. For an alternative perspective on the trade-offs between explainability and accuracy cf. (Rudin 2019).

  8. Shared decision-making must not become a stronghold of medical paternalism.

  9. Although their chosen example applies to the risk of driving accidents, it could be easily extended to the use of AI in healthcare.

  10. Cf. (National Institute for Health Research 2019): public juries favor accuracy over explanation in two scenarios involving AI in health.

  11. See Ethics Guidelines for Trustworthy AI (High-Level Expert Group on Artificial Intelligence 2019).

  12. Thanks to an anonymous reviewer for bringing this to my attention.

  13. Cf. (Rueda et al. 2022), who argue that explainability is indispensable from a procedural justice perspective.

  14. On the difficulties of properly implementing these technologies so as to benefit from their positive clinical, economic, and quality-of-care outcomes, cf. (Belard et al. 2017).

  15. E.g., CoDoC, DeepMind’s new AI system, is used alongside current diagnostic AI tools to help healthcare professionals identify when to trust the AI output and when to defer to a human expert (Dvijotham et al. 2023).

  16. During the presentation of this paper at a conference, a member of the audience rightly pointed to a scenario where the use of AI could be potentially established as a standard of care, thus being impossible to opt out from its usage as a patient. While acknowledging the ethical significance of this case, addressing it is beyond the scope of this paper.

  17. Moreover, in legal terms, full disclosure informing the patient about the use of AI is not always mandatory; only when the AI is front-end (patient-facing) does respect for patients’ autonomy require clearly notifying them that they are dealing with a machine (Schönberger 2019).

  18. Additionally, we need to be aware of and concerned about the incorporation of personal health and biometric data into surveillance-capitalist structures whose objective is to monetize that information and predict our behavior. Cf. (Zuboff 2019).

  19. Thanks to an anonymous reviewer for useful comments on these cases.

  20. The authors discuss AI that offers predictions of treatment preferences for incompetent patients, but they then extend it to competent patients through the idea of the introspection tool.

References


Acknowledgements

I would like to thank my colleagues at the Applied Philosophy & Ethics department for valuable comments and suggestions on previous versions of this paper. I would also like to thank an anonymous reviewer for helpful comments that contributed to the final version.

Funding

Funding was provided by the Czech Academy of Sciences (AV ČR), Grant no. LQ300092001.

Author information

Corresponding author

Correspondence to Jose Luis Guerrero Quiñones.

Ethics declarations

Conflict of interest

The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Guerrero Quiñones, J.L. Using artificial intelligence to enhance patient autonomy in healthcare decision-making. AI & Soc (2024). https://doi.org/10.1007/s00146-024-01956-6
