Abstract
Purpose
The market for artificial intelligence and its range of applications are currently growing at high speed, and AI is increasingly finding its way into gynecology. While the physicians' perspective is well represented in the current literature, the patients' perspective still lags behind. The aim of this study was therefore to have experts evaluate ChatGPT's recommendations in response to patient inquiries about possible therapies for leading gynecological symptoms in a palliative situation.
Methods
Case vignettes were constructed for 10 common concomitant symptoms of gynecologic tumors in a palliative setting, and patient queries regarding the therapy of these symptoms were formulated as prompts for ChatGPT. Five experts in palliative care and gynecologic oncology rated the responses with respect to guideline adherence and applicability and identified advantages and disadvantages.
Results
The overall rating of the ChatGPT responses averaged 4.1 on a 5-point scale (5 = strongly agree; 1 = strongly disagree). The experts rated the guideline conformity of the therapy recommendations at an average of 4.0. ChatGPT sometimes omits relevant therapies and does not assess the suggested therapies individually, but it does point out that an additional physician consultation is necessary.
Conclusions
Language models such as ChatGPT can provide valid and largely guideline-compliant therapy recommendations in their freely available version, which is in principle accessible to our patients. For a complete therapy recommendation, however, a medical expert's opinion remains indispensable: to weigh the suggested therapies, to adjust them to the individual patient, and to filter out possible wrong recommendations.
Data availability
All data are available within the manuscript and the appendix.
Funding
The authors declare that no funds, grants, or other support were received concerning the preparation of this manuscript.
Author information
Contributions
All authors contributed to the study conception and design. EMB, EFS, IJB, CK and VH rated and commented on the LLM therapy recommendations. Material preparation, data collection, and analysis were performed by EMB and BJB. The first draft of the manuscript was written by EMB, BJB, and DT, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Ethics declarations
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Ethical approval
Only fictitious case vignettes were used; no actual patient data were analyzed or used in any way. All experts consented to participate in the study and to the publication of their answers. No ethics committee approval was obtained.
Consent to participate
All experts consented to participate in the study.
Consent to publish
All experts consented to the publication of their answers.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Braun, EM., Juhasz-Böss, I., Solomayer, EF. et al. Will I soon be out of my job? Quality and guideline conformity of ChatGPT therapy suggestions to patient inquiries with gynecologic symptoms in a palliative setting. Arch Gynecol Obstet 309, 1543–1549 (2024). https://doi.org/10.1007/s00404-023-07272-6