If mistakes are made in clinical settings, patients suffer. Artificial intelligence (AI) generally — and large language models specifically — are increasingly used in health settings, but the way that physicians use AI tools in this high-stakes environment depends on how information is delivered. AI toolmakers have a responsibility to present information in a way that minimizes harm.
Acknowledgements
M.G. is a CIFAR AI Chair, CIFAR Azrieli Global Scholar, Herman L. F. von Helmholtz Career Development Professor and Jameel Clinic Affiliate, and acknowledges support from these programmes.
Ethics declarations
Competing interests
The author declares no competing interests.
Peer review information
Nature Human Behaviour thanks Sanmi Koyejo, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Cite this article
Ghassemi, M. Presentation matters for AI-generated clinical advice. Nat Hum Behav 7, 1833–1835 (2023). https://doi.org/10.1038/s41562-023-01721-7