Abstract
This paper considers the ethical risks and opportunities presented by generalist medical artificial intelligence (GMAI), a kind of dynamic, multimodal AI proposed by Moor et al. (2023) for use in health care. The research objective is to apply widely accepted principles of biomedical ethics to analyze the possible consequences of GMAI, emphasizing the distinctions between GMAI and current-generation, task-specific medical AI. The principles of autonomy and health equity in particular offer useful guidance on the ethical risks and opportunities of novel AI systems in health care. The ethics of two applications of GMAI are examined: decision aids that inform and educate patients about particular treatments and conditions, and expanded AI-driven diagnosis and treatment recommendation. Emphasis is placed on the potential of GMAI to improve shared decision-making between patients and providers, which supports patient autonomy. A second focus is health equity: the reduction of health and access disparities facing underserved populations. Although GMAI presents opportunities to improve patient autonomy, health literacy, and health equity, premature or inadequately regulated adoption could compromise both equity and autonomy. Conversely, significant risks to equity and autonomy may arise from declining to adopt GMAI that has been thoroughly validated and tested. If GMAI is ever employed at scale, a careful balancing of these risks and benefits will be required to secure the best ethical outcome.
Notes
One of the most powerful and distinctive features of such foundation models is their capacity for transfer or multimodal learning [13, 14], which involves taking “knowledge learned from one task (e.g., object recognition in images) and applying it to another task (e.g., activity recognition in videos)” [13]. In theory, a foundation model capable of both multimodal inputs and outputs could be fed certain radiologic images and output a detailed diagnostic report, even if the images were not obviously similar to previous images or were in a different format [4].
Of course, what counts as “accurate” may not always be perfectly clear-cut, especially with respect to treatment recommendations. One means of assessing accuracy in treatment recommendations is to look at the degree of convergence between the medical AI system and a panel of experts; this method has been used to assess the quality of treatment recommendations provided by IBM’s Watson for Oncology, for example [60]. But there is at least a theoretical possibility that GMAI might eventually make treatment recommendations that deviate from the standard of care yet nonetheless benefit certain patients.
GMAI models might also promote adherence to a designated treatment plan by sending verbal and visual reminders in the patient’s preferred format, and by answering patients’ non-urgent questions about self-obtained readings (e.g., of blood pressure or glucose) through an integrated chatbot.
References
Topol, E.J.: High-performance medicine: the convergence of human and artificial intelligence. Nat. Med. 25(1), 44–56 (2019). https://doi.org/10.1038/s41591-018-0300-7
Shaheen, M.Y.: Applications of artificial intelligence (AI) in healthcare: a review. ScienceOpen Preprints (2021). https://doi.org/10.14293/s2199-1006.1.sor-.ppvry8k.v1
Challen, R., Denny, J., Pitt, M., Gompels, L., Edwards, T., Tsaneva-Atanasova, K.: Artificial intelligence, bias and clinical safety. BMJ Qual. Saf. 28(3), 231–237 (2019)
Moor, M., Banerjee, O., Abad, Z.S.H., Krumholz, H.M., Leskovec, J., Topol, E.J., Rajpurkar, P.: Foundation models for generalist medical artificial intelligence. Nature 616(7956), 259–265 (2023). https://doi.org/10.1038/s41586-023-05881-4
Rajpurkar, P., Chen, E., Banerjee, O., Topol, E.J.: AI in health and medicine. Nat. Med. 28(1), 31–38 (2022). https://doi.org/10.1038/s41591-021-01614-0
Kung, T.H., et al.: Performance of ChatGPT on USMLE: potential for AI-assisted medical education using large language models. PLOS Digit. Health 2(2), e0000198 (2023)
Manickam, P., Mariappan, S.A., Murugesan, S.M., Hansda, S., Kaushik, A., Shinde, R., Thipperudraswamy, S.P.: Artificial intelligence (AI) and internet of medical things (IoMT) assisted biomedical systems for intelligent healthcare. Biosensors 12(8), 562 (2022). https://doi.org/10.3390/bios12080562
Matheny, M., Israni, S.T., Ahmed, M., Whicher, D.: Artificial intelligence in health care: The hope, the hype, the promise, the peril. National Academy of Medicine, Washington, DC (2019). https://doi.org/10.1001/jama.2019.21579
Hashimoto, D.A., Rosman, G., Rus, D., Meireles, O.R.: Artificial intelligence in surgery: promises and perils. Ann. Surg. 268(1), 70–76 (2018). https://doi.org/10.1097/sla.0000000000002693
Johnson, K.B., Wei, W.Q., Weeraratne, D., Frisse, M.E., Misulis, K., Rhee, K., Snowdon, J.L.: Precision medicine, AI, and the future of personalized health care. Clin. Transl. Sci. 14(1), 86–93 (2021). https://doi.org/10.1111/cts.12884
Mai, G., Huang, W., Sun, J., Song, S., Mishra, D., Liu, N., Lao, N.: On the opportunities and challenges of foundation models for geospatial artificial intelligence. arXiv preprint arXiv:2304.06798 (2023). https://doi.org/10.1145/3557915.3561043
Qin, Y., Hu, S., Lin, Y., Chen, W., Ding, N., Cui, G., Sun, M.: Tool learning with foundation models. arXiv preprint arXiv:2304.08354 (2023)
Bommasani, R., Hudson, D.A., Adeli, E., Altman, R., Arora, S., von Arx, S., Liang, P.: On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258 (2021)
Fei, N., Lu, Z., Gao, Y., Yang, G., Huo, Y., Wen, J., Wen, J.R.: Towards artificial general intelligence via a multimodal foundation model. Nat. Commun. 13(1), 3094 (2022)
Alkaissi, H., McFarlane, S.I.: Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus (2023). https://doi.org/10.7759/cureus.35179
Sallam, M.: ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Healthcare (2023). https://doi.org/10.3390/healthcare11060887
Santhoshkumar, S.P., Susithra, K., Prasath, T.K.: An Overview of Artificial Intelligence Ethics: Issues and Solution for Challenges in Different Fields. J. Art. Intellig. Caps. Netw. 5(1), 69–86 (2023)
Duraipandian, M.: Review on artificial intelligence and its implementations in digital era. J. Inf. Technol. Digit. World 4(2), 84–94 (2022). https://doi.org/10.36548/jitdw.2022.2.003
Beauchamp, T.L., Childress, J.F.: Principles of biomedical ethics. Oxford University Press (2019)
Evans, J.H.: A sociological account of the growth of principlism. Hastings Cent. Rep. 30(5), 31–39 (2000)
Sepucha, K., Atlas, S.J., Chang, Y., et al.: Patient decision aids improve decision quality and patient experience and reduce surgical rates in routine orthopaedic care: a prospective cohort study. J. Bone. Jt. Surg. Am. 99(15), 1253–1260 (2017)
Stacey, D., Taljaard, M., Dervin, G., et al.: Impact of patient decision aids on appropriate and timely access to hip or knee arthroplasty for osteoarthritis: a randomized controlled trial. Osteoarth. Cartil. 24(1), 99–107 (2016)
Jayakumar, P., Moore, M.G., Furlough, K.A., Uhler, L.M., Andrawis, J.P., Koenig, K.M., Bozic, K.J.: Comparison of an artificial intelligence–enabled patient decision aid vs educational material on decision quality, shared decision-making, patient experience, and functional outcomes in adults with knee osteoarthritis. JAMA Netw. Open (2021). https://doi.org/10.1001/jamanetworkopen.2020.37107
Emanuel, E.J., Emanuel, L.L.: Four models of the physician-patient relationship. JAMA 267(16), 2221–2226 (1992)
Stiggelbout, A.M., Van der Weijden, T., De Wit, M.P., Frosch, D., Légaré, F., Montori, V.M., Elwyn, G.: Shared decision making: really putting patients at the centre of healthcare. BMJ 344(7842), 28–31 (2012)
Gaube, S., Suresh, H., Raue, M., Merritt, A., Berkowitz, S.J., Lermer, E., Ghassemi, M.: Do as AI say: susceptibility in deployment of clinical decision-aids. NPJ Digit. Med. 4(1), 31 (2021)
Astromskė, K., Peičius, E., Astromskis, P.: Ethical and legal challenges of informed consent applying artificial intelligence in medical diagnostic consultations. AI & Soc. 36, 509–520 (2021)
Nadarzynski, T., Miles, O., Cowie, A., Ridge, D.: Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: a mixed-methods study. Digit. Health 5, 1–12 (2019). https://doi.org/10.1177/2055207619871808
McCradden, M.D., Joshi, S., Anderson, J.A., Mazwi, M., Goldenberg, A., Zlotnik Shaul, R.: Patient safety and quality improvement: Ethical principles for a regulatory approach to bias in healthcare machine learning. J. Am. Med. Inform. Assoc. 27(12), 2024–2027 (2020)
Tait, A.R., Hutchinson, R.J.: Informed consent training in pediatrics—are we doing enough? JAMA Pediatr. 172(3), 211–212 (2018). https://doi.org/10.1001/jamapediatrics.2017.4088
Malhotra, N.K.: Information load and consumer decision making. J. Cons. Res. 8(4), 419–430 (1982). https://doi.org/10.1086/208882
Phillips-Wren, G., Adya, M.: Decision making under stress: The role of information overload, time pressure, complexity, and uncertainty. J. Decis. Syst. 29(sup1), 213–225 (2020)
Lipkus, I.M., Samsa, G., Rimer, B.K.: General performance on a numeracy scale among highly educated samples. Med. Decis. Making 21(1), 37–44 (2001). https://doi.org/10.1177/0272989x0102100105
Schwartz, P.H.: Questioning the quantitative imperative: decision aids, prevention, and the ethics of disclosure. Hastings Cent. Rep. 41(2), 30–39 (2011). https://doi.org/10.1353/hcr.2011.0029
Holzinger, A., Langs, G., Denk, H., Zatloukal, K., Müller, H.: Causability and explainability of artificial intelligence in medicine. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 9(4), e1312 (2019)
Beil, M., Proft, I., van Heerden, D., Sviri, S., van Heerden, P.V.: Ethical considerations about artificial intelligence for prognostication in intensive care. Intensive Care Med. Exp. 7(1), 1–13 (2019). https://doi.org/10.1186/s40635-019-0286-6
Wachter, S., Mittelstadt, B., Russell, C.: Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harv. JL & Tech. 31, 841 (2017)
Samek, W., Montavon, G., Vedaldi, A., Hansen, L.K., Müller, K.R. (eds.): Explainable AI: interpreting, explaining and visualizing deep learning, vol. 11700. Springer Nature (2019)
Funnell, M.M., Anderson, R.M., Arnold, M.S., Barr, P.A., Donnelly, M., Johnson, P.D., White, N.H.: Empowerment: an idea whose time has come in diabetes education. Diab. Educ. (1991). https://doi.org/10.1177/014572179101700108
Schulz, P.J., Nakamoto, K.: “Bad” literacy, the internet, and the limits of patient empowerment. In: 2011 AAAI Spring Symposium Series (2011)
Wilson, P., Risk, A.: How to find the good and avoid the bad or ugly: a short guide to tools for rating quality of health information on the internet. BMJ 324(7337), 598–602 (2002). https://doi.org/10.1136/bmj.324.7337.598
Holone, H.: The filter bubble and its effect on online personal health information. Croat. Med. J. 57(3), 298 (2016). https://doi.org/10.3325/cmj.2016.57.298
Ryan, A., Wilson, S.: Internet healthcare: do self-diagnosis sites do more harm than good? Expert Opin. Drug Saf. 7(3), 227–229 (2008). https://doi.org/10.1517/14740338.7.3.227
Robertson, N., Polonsky, M., McQuilken, L.: Are my symptoms serious Dr Google? A resource-based typology of value co-destruction in online self-diagnosis. Australas. Mark. J. 22(3), 246–256 (2014). https://doi.org/10.1016/j.ausmj.2014.08.009
Wilson, P.M.: A policy analysis of the expert patient in the United Kingdom: self-care as an expression of pastoral power? Health Soc. Care Commun. 9(3), 134–142 (2001)
Fox, N.J., Ward, K.J., O’Rourke, A.J.: The ‘expert patient’: empowerment or medical dominance? The case of weight loss, pharmaceutical drugs and the Internet. Soc Sci Med 60(6), 1299–1309 (2005). https://doi.org/10.1016/j.socscimed.2004.07.005
Centers for Disease Control and Prevention: Prevalence of multiple chronic conditions among U.S. adults, 2018 (2020). https://www.cdc.gov/pcd/issues/2020/20_0130.htm
Amann, J., Blasimme, A., Vayena, E., Frey, D., Madai, V.I.: Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med. Inform. Decis. Mak. 20(1), 1–9 (2020)
Čartolovni, A., Tomičić, A., Mosler, E.L.: Ethical, legal, and social considerations of AI-based medical decision-support tools: A scoping review. Int. J. Med. Informat. 161, 104738 (2022)
Braveman, P.: Health disparities and health equity: concepts and measurement. Annu. Rev. Public Health 27, 167–194 (2006)
Berkman, N.D., Sheridan, S.L., Donahue, K.E., Halpern, D.J., Crotty, K.: Low health literacy and health outcomes: an updated systematic review. Ann. Intern. Med. 155(2), 97–107 (2011)
Fleary, S.A., Ettienne, R.: Social disparities in health literacy in the United States. HLRP Health Liter. Res. Pract. 3(1), 47–52 (2019)
Gooberman-Hill, R., Sansom, A., Sanders, C.M., Dieppe, P.A., Horwood, J., Learmonth, I.D., Donovan, J.L.: Unstated factors in orthopaedic decision-making: a qualitative study. BMC Musculoskel. Disord. (2010). https://doi.org/10.1186/1471-2474-11-213
Youm, J., Chan, V., Belkora, J., Bozic, K.J.: Impact of socioeconomic factors on informed decision making and treatment choice in patients with hip and knee OA. J. Arthroplasty 30(2), 171–175 (2015)
McDougall, R.J.: Computer knows best? The need for value-flexibility in medical AI. J. Med. Ethics 45(3), 156–160 (2019). https://doi.org/10.1136/medethics-2018-105118
Ross, N., Herman, B.: Denied by AI: how Medicare Advantage plans use algorithms to cut off care for seniors in need. Stat News (2023). https://www.statnews.com/2023/03/13/medicare-advantage-plans-denial-artificial-intelligence
Meiring, C., Dixit, A., Harris, S., MacCallum, N.S., Brealey, D.A., Watkinson, P.J., Ercole, A.: Optimal intensive care outcome prediction over time using machine learning. PLoS ONE (2018). https://doi.org/10.1371/journal.pone.0206862
McWilliams, C.J., Lawson, D.J., Santos-Rodriguez, R., Gilchrist, I.D., Champneys, A., Gould, T.H., Bourdeaux, C.P.: Towards a decision support tool for intensive care discharge: machine learning algorithm development using electronic healthcare data from MIMIC-III and Bristol UK. BMJ Open 9(3), e025925 (2019)
Di Nucci, E.: Should we be afraid of medical AI? J. Med. Ethics 45(8), 556–558 (2019)
Jie, Z., Zhiying, Z., Li, L.: A meta-analysis of Watson for Oncology in clinical application. Sci. Rep. 11(1), 5792 (2021). https://doi.org/10.1038/s41598-021-84973-5
Strickland, E.: IBM Watson, heal thyself: How IBM overpromised and underdelivered on AI health care. IEEE Spectr. 56(4), 24–31 (2019)
Acosta, J.N., Falcone, G.J., Rajpurkar, P., Topol, E.J.: Multimodal biomedical AI. Nat. Med. 28(9), 1773–1784 (2022)
Divya, S., Indumathi, V., Ishwarya, S., Priyasankari, M., Devi, S.K.: A self-diagnosis medical chatbot using artificial intelligence. J. Web Dev. Web Des. 3(1), 1–7 (2018)
Greene, A., Greene, C.C., Greene, C.: Artificial intelligence, chatbots, and the future of medicine. Lancet Oncol. 20(4), 481–482 (2019). https://doi.org/10.1016/s1470-2045(19)30142-1
VanBuskirk, K.A., Wetherell, J.L.: Motivational interviewing with primary care populations: a systematic review and meta-analysis. J. Behav. Med. 37, 768–780 (2014)
Shi, L.: The impact of primary care: a focused review. Scientifica 2012, 1–22 (2012). https://doi.org/10.6064/2012/432892
Chokshi, D.A.: Income, poverty, and health inequality. JAMA 319(13), 1312–1313 (2018)
Chetty, R., Stepner, M., Abraham, S., Lin, S., Scuderi, B., Turner, N., Cutler, D.: The association between income and life expectancy in the United States, 2001–2014. JAMA (2016). https://doi.org/10.1001/jama.2016.4226
Nanayakkara, S., Fogarty, S., Tremeer, M., Ross, K., Richards, B., Bergmeir, C., Kaye, D.M.: Characterising risk of in-hospital mortality following cardiac arrest using machine learning: A retrospective international registry study. PLoS Med. (2018). https://doi.org/10.1371/journal.pmed.1002709
Langlotz, C.P.: Will artificial intelligence replace radiologists? Radiol Artific Intellig 1(3), 190058 (2019)
Contractor, D., McDuff, D., Haines, J.K., Lee, J., Hines, C., Hecht, B., Li, H.: Behavioral use licensing for responsible AI. In: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 778–788 (2022)
Lantos, J., Matlock, A.M., Wendler, D.: Clinician integrity and limits to patient autonomy. JAMA 305(5), 495–499 (2011). https://doi.org/10.1001/jama.2011.32
Ploug, T., Holm, S.: The right to refuse diagnostics and treatment planning by artificial intelligence. Med. Health Care Philos. 23(1), 107–114 (2020)
Currie, G., Hawk, K.E.: Ethical and legal challenges of artificial intelligence in nuclear medicine. Semin. Nucl. Med. 51(2), 120–125 (2021)
Price, W.N., Gerke, S., Cohen, I.G.: Potential liability for physicians using artificial intelligence. JAMA 322(18), 1765–1766 (2019). https://doi.org/10.1001/jama.2019.15064
Maliha, G., Gerke, S., Cohen, I.G., Parikh, R.B.: Artificial Intelligence and Liability in Medicine. Milbank Q. 99(3), 629–647 (2021)
Obermeyer, Z., Emanuel, E.J.: Predicting the future—big data, machine learning, and clinical medicine. N. Engl. J. Med. 375(13), 1216 (2016). https://doi.org/10.1056/nejmp1606181
Beam, A.L., Kohane, I.S.: Big data and machine learning in health care. JAMA 319(13), 1317–1318 (2018). https://doi.org/10.1001/jama.2017.18391
Ohm, P.: Broken promises of privacy: responding to the surprising failure of anonymization. UCLA L. Rev. 57, 1701 (2009)
Price, W.N., Cohen, I.G.: Privacy in the age of medical big data. Nat. Med. 25(1), 37–43 (2019). https://doi.org/10.1038/s41591-018-0272-7
Ethics declarations
Conflicts of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Sass, R. Equity, autonomy, and the ethical risks and opportunities of generalist medical AI. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00380-8
DOI: https://doi.org/10.1007/s43681-023-00380-8