To the Editor,
We are writing to address a specific point raised in the article titled ‘The Use of ChatGPT to Assist in Diagnosing Glaucoma Based on Clinical Case Reports’ by Delsoz et al., published in Ophthalmology and Therapy [1]. The article is a timely and insightful contribution to the dynamic field of ophthalmology, particularly in the age of large language models. While we commend the authors for their pioneering effort in exploring the potential of AI in medical diagnosis, we find it necessary to point out a subtle yet significant misunderstanding in the paper regarding the capabilities of OpenAI’s ChatGPT.
One of the key sentences of the article, specifically “As ChatGPT may learn from previous interactions, we recorded all responses based on our first enquiry of provisional and differential diagnosis”, implies that ChatGPT, developed by OpenAI, can learn and retain information from individual interactions with users. However, this interpretation does not align with the actual functionality and design of ChatGPT, including its latest GPT-4 iteration [2].
It is crucial to understand that ChatGPT, as it currently operates, does not remember or learn from past interactions with users. This is a deliberate design choice, reflecting OpenAI’s commitment to user privacy and confidentiality. Although ChatGPT does not learn directly from individual users in real time, OpenAI does refine future models by analyzing aggregated user interactions: the training data are updated periodically, and human feedback and demonstrations are used to improve responses over time. Each individual interaction with ChatGPT, however, is treated as independent and discrete, so no personal data shared during one session are retained or used in subsequent sessions.

The model’s responses are generated from a comprehensive and diverse range of data sources, including licensed datasets, data curated and created by human trainers, and publicly available information. It is important to note, however, that all of these data were compiled and integrated into the model only up to the cut-off point of its last training update, which, to our knowledge, was April 2023. Because of this training cut-off, ChatGPT’s responses may not reflect the most current research or data, particularly in fast-evolving fields like medicine and technology, and its output should not be used as the sole basis for critical decision-making. Moreover, in academic settings, any claims or data provided by ChatGPT should be independently verified against current, peer-reviewed research, as the model can produce biased answers and can generate hallucinations, or fabricated information, when it has no answer [3, 4].
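For readers who interact with such models programmatically, this statelessness can be made concrete with a short sketch. The function and message names below are purely illustrative stand-ins, not OpenAI’s actual API: the point is that a stateless chat endpoint sees only the messages included in the current request, so any conversational “memory” must be resent by the client each time.

```python
# Hypothetical stand-in for a stateless chat endpoint: the "model"
# can only use what appears in the messages of this single call.
def fake_chat_completion(messages):
    user_text = " ".join(m["content"] for m in messages if m["role"] == "user")
    if "glaucoma" in user_text.lower():
        return "Provisional diagnosis: glaucoma (based on the text you sent)."
    return "I have no record of any earlier conversation."

# Session 1: the user supplies a case report mentioning glaucoma.
first = fake_chat_completion(
    [{"role": "user", "content": "Case report: elevated IOP, suspect glaucoma."}]
)

# Session 2: a brand-new request with no history attached.
# Nothing from session 1 was retained server-side, so it is "forgotten".
second = fake_chat_completion(
    [{"role": "user", "content": "What did I ask you before?"}]
)

# To continue a conversation, the client must resend the full history itself.
third = fake_chat_completion(
    [
        {"role": "user", "content": "Case report: elevated IOP, suspect glaucoma."},
        {"role": "user", "content": "What did I ask you before?"},
    ]
)

print(first)
print(second)
print(third)
```

Under this (simplified) model, the authors’ precaution of recording only first-enquiry responses guards against a memory mechanism that does not, in fact, exist within or across sessions.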
It is important to clarify this aspect of ChatGPT’s functionality, especially in the context of scientific and academic discussions where precision is paramount. Misconceptions about its design and capabilities could lead to inaccurate expectations or interpretations of its utility, particularly in sensitive fields like medical diagnosis and treatment.
In conclusion, while the exploration of large language models like OpenAI’s ChatGPT in medical fields such as ophthalmology is indeed promising and exciting, it is important to have a clear and accurate understanding of the tool’s capabilities and limitations. ChatGPT, including its GPT-4 iteration, does not retain or learn from individual user interactions, ensuring user privacy and data confidentiality. Its knowledge base is extensive but static, limited to the data available up to its last training update. Consequently, while ChatGPT can be helpful in academia, it should not be relied upon as the sole source for critical decision-making in medicine or other rapidly evolving fields. Any information provided by ChatGPT must be corroborated with up-to-date, peer-reviewed research to avoid relying on outdated or potentially biased information. Understanding these limitations is crucial to ensuring that the use of AI in academic and medical contexts remains responsible, accurate, and beneficial.
Data Availability
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
References
Delsoz M, Raja H, Madadi Y, et al. The use of ChatGPT to assist in diagnosing glaucoma based on clinical case reports. Ophthalmol Ther. 2023;12(6):3121–32. https://doi.org/10.1007/s40123-023-00805-x.
OpenAI. GPT-4. https://openai.com/gpt-4. Accessed 14 Dec 2023.
Azamfirei R, Kudchadkar SR, Fackler J. Large language models and the perils of their hallucinations. Crit Care. 2023;27(1):120. https://doi.org/10.1186/s13054-023-04393-x.
Ferrara E. Should ChatGPT be biased? Challenges and risks of bias in large language models. First Monday. 2023;28(11). https://doi.org/10.5210/fm.v28i11.13346.
Funding
No funding or sponsorship was received for the publication of this article.
Author information
Contributions
Antonio Yaghy: concept and design, drafting the manuscript, reviewing and editing. Jacqueline R. Porteny: drafting the manuscript, reviewing and editing.
Ethics declarations
Conflict of Interest
Antonio Yaghy and Jacqueline R. Porteny have nothing to disclose.
Ethical Approval
This article is based on previously conducted studies and does not contain any new studies with human participants or animals performed by any of the authors.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/.
Cite this article
Yaghy, A., Porteny, J.R. A Letter to the Editor Regarding “The Use of ChatGPT to Assist in Diagnosing Glaucoma Based on Clinical Case Reports”. Ophthalmol Ther 13, 1813–1815 (2024). https://doi.org/10.1007/s40123-024-00934-x