Abstract
Purpose
The use of artificial intelligence (AI) in medical decision-making has once again come to the forefront with the rise of natural language processing (NLP). In this exploratory article, we tested one such model, ChatGPT, for its ability to identify vestibular causes of dizziness.
Methods
Eight hypothetical scenarios, comprising varying clinical pictures and prompt types, were presented to ChatGPT. The responses were evaluated for coherence, clarity, consistency, accuracy, appropriateness, and recognition of limitations.
Results
ChatGPT provided coherent and logical responses, accurately offering differentials for both vestibular and non-vestibular causes of dizziness. The correct diagnosis was presented first in six of the eight cases, albeit with important limitations.
Conclusion
As an AI tool, ChatGPT lacks the ability to process certain nuances in clinical decision-making, both in identifying atypical presentations of dizziness and in recommending further examination steps to elucidate a clearer diagnosis. We believe that AI will continue to forge ahead in the medical field. Merging the immense knowledge base of AI programming with the nuances of clinical assessment and knowledge integration will surely enhance patient care in the years to come.
Data availability
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
Author information
Authors and Affiliations
Contributions
JC and GX designed the study. All authors were involved in drafting, revising, and approving the manuscript. JC and GX agree to be accountable for all aspects of the work.
Corresponding author
Ethics declarations
Conflict of interest
The authors declare no funding sources or other conflicts of interest.
Ethical approval
Ethics board approval was not applicable for this study.
Informed consent
No participant consent for publication is necessary.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Chee, J., Kwa, E.D. & Goh, X. “Vertigo, likely peripheral”: the dizzying rise of ChatGPT. Eur Arch Otorhinolaryngol 280, 4687–4689 (2023). https://doi.org/10.1007/s00405-023-08135-1