Abstract
Purpose of Review
This paper provides an overview of generative artificial intelligence (AI) and its possible implications for the delivery of mental health care.
Recent Findings
Generative AI is a powerful and rapidly evolving technology. As psychiatrists, we need to understand generative AI and how it may affect our patients and our practice of medicine.
Summary
This paper aims to build this understanding by focusing on GPT-4 and its potential impact on mental health care delivery. We first introduce key concepts and terminology, describing how the technology works and some of its novel uses. We then examine key considerations for GPT-4 and other large language models (LLMs) and conclude with suggested future directions and initial guidance for the field.
Data Availability
The authors confirm that they did not analyze or generate any datasets in this work.
Acknowledgements
The editors would like to thank Dr. Steven Richard Chan for taking the time to review this manuscript.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of Interest
Darlene R. King, Guransh Nanda, Allison Dempsey, Sarah Hergert, and Jay H. Shore each declare no potential conflicts of interest. Joel Stoddard has received grants from the NIH, the Brain and Behavior Research Foundation, the Colorado Office of Economic Development, and the Children's Hospital Colorado Foundation. Dr. Stoddard also has family equity in AbbVie, Merck, CVS, Bristol Myers Squibb, Johnson & Johnson, Abbott Labs, and Pfizer. In addition, Dr. Stoddard has a patent 63/489,517 pending. John Torous is a scientific board member of Precision Mental Wellness.
Human and Animal Rights and Informed Consent
This article does not contain any studies with human or animal subjects performed by any of the authors.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
King, D.R., Nanda, G., Stoddard, J. et al. An Introduction to Generative Artificial Intelligence in Mental Health Care: Considerations and Guidance. Curr Psychiatry Rep 25, 839–846 (2023). https://doi.org/10.1007/s11920-023-01477-x