To the Editor,

We are writing to address a specific point raised in the article titled ‘The Use of ChatGPT to Assist in Diagnosing Glaucoma Based on Clinical Case Reports’ by Delsoz et al., published in Ophthalmology and Therapy [1]. The article is a timely and insightful contribution to the dynamic field of ophthalmology, particularly in the age of large language models. While we commend the authors for their pioneering effort in exploring the potential of AI in medical diagnosis, we find it necessary to point out a subtle yet significant misunderstanding regarding the capabilities of OpenAI’s ChatGPT as presented in the paper.

One of the key sentences of the article, “As ChatGPT may learn from previous interactions, we recorded all responses based on our first enquiry of provisional and differential diagnosis”, implies that ChatGPT, developed by OpenAI, can learn and retain information from individual interactions with users. However, this interpretation does not align with the actual functionality and design philosophy of ChatGPT, including its latest GPT-4 iteration [2].

It is crucial to understand that ChatGPT, as it currently operates, does not have the capability to remember or learn from past interactions with users. This is a deliberate design choice, reflecting OpenAI’s commitment to user privacy and confidentiality. Although ChatGPT does not learn directly from individual users, OpenAI may analyze aggregated interactions, feedback, and human demonstrations to refine future versions of the model, and the training data are updated from time to time. Each conversation, however, is treated as independent and discrete, so no personal data shared during an interaction are retained or used in subsequent sessions.

The model’s responses are generated from a comprehensive and diverse range of data sources, including licensed datasets, data curated and created by human trainers, and publicly available information. It is important to note, however, that all of these data were compiled and integrated into the model only up to the cut-off point of its last training update, which, to our knowledge, was April 2023. Because of this training cut-off, ChatGPT’s responses may not reflect the most current research or data, particularly in fast-evolving fields like medicine and technology, and its output should not be used as the sole basis for critical decision-making. Moreover, in academic settings, any claims or data provided by ChatGPT should be independently verified against current, peer-reviewed research, as the model can generate hallucinations (fabricated information) when it lacks an answer, and can also produce biased responses [3, 4].
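To make this statelessness concrete, consider the following minimal sketch using OpenAI’s Python SDK at the API level. The model name, prompts, and variable names here are purely illustrative (they are not those used by Delsoz et al.): each request is independent, and any conversational context must be explicitly resent by the caller.

# A minimal sketch, assuming OpenAI's official Python SDK (pip install openai).
# Model name and prompts below are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# First request: the model sees only the messages supplied in this call.
first = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Provide a provisional diagnosis for this case: ..."}],
)

# A second request carries no memory of the first. To continue the exchange,
# the caller must resend the earlier turns as part of the new request.
follow_up = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Provide a provisional diagnosis for this case: ..."},
        {"role": "assistant", "content": first.choices[0].message.content},
        {"role": "user", "content": "Now list the differential diagnoses."},
    ],
)
print(follow_up.choices[0].message.content)

Nothing from such an exchange is folded back into the model’s weights; a new session started the next day would begin from the same fixed state.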

It is important to clarify this aspect of ChatGPT’s functionality to prevent misunderstandings, especially in the context of scientific and academic discussions where precision is paramount. Misconceptions about its design and capabilities could lead to inaccurate expectations or interpretations of its utility, particularly in sensitive fields like medical diagnosis and treatment.

In conclusion, while the exploration of large language models like OpenAI’s ChatGPT in medical fields such as ophthalmology is indeed promising and exciting, a clear and accurate understanding of the tool’s capabilities and limitations is essential. ChatGPT, including its GPT-4 iteration, does not retain or learn from individual user interactions, thereby preserving user privacy and data confidentiality. Its knowledge base is extensive but static, limited to the data available up to its last training update. Consequently, while ChatGPT can be helpful in academia, it should not be relied upon as the sole source for critical decision-making in medicine or other rapidly evolving fields. Any information provided by ChatGPT must be corroborated with up-to-date, peer-reviewed research to avoid relying on outdated or potentially biased information. Understanding these limitations is crucial to ensuring that the use of AI in academic and medical contexts remains responsible, accurate, and beneficial.