Abstract
Chatbots such as ChatGPT have the potential to change researchers’ lives in many ways. Despite all the advantages of chatbots, many challenges to using chatbots in medical research remain. Incorrect content presented by chatbots is a major potential disadvantage. The authors’ credibility could be tarnished if wrong content is presented in medical research. Additionally, ChatGPT, as currently the most popular generative AI, does not routinely present references for its answers. Double-checking references and resources used by chatbots might be challenging. Researchers must also be careful not to violate copyright law or cause plagiarism issues when using applications such as ChatGPT. Chatbots are trained on publicly available sources on the internet, increasing the risk of copyright or plagiarism issues. Therefore, chatbots such as ChatGPT should not be used routinely for professional medical research for now. However, further developments could make chatbots usable in medical research in the near future.
Introduction
ChatGPT (OpenAI, USA) is an artificial intelligence (AI)-based chatbot that was introduced recently and attracted millions of users soon after its release (Sedaghat, 2023a, b, c). ChatGPT and other AI-based chatbots like Bard (Google Inc., USA) are based on generative AI and have the potential to change researchers’ lives in many ways, such as changing the way medical research is conducted and literature is retrieved. Despite all the advantages of chatbots like ChatGPT, many challenges in medical research, such as plagiarism issues and wrong content, remain and need to be taken seriously by medical researchers.
Wrong Content
The most apparent disadvantage of chatbots such as ChatGPT is that the information provided could be incorrect (Wen & Wang, 2023; OpenAI TB, 2022). Chatbots are not routinely used in medical practice, which shifts the spotlight toward chatbots in medical research. So, what happens if ChatGPT creates content for an author and the content is false? Of course, this will tarnish the authors’ credibility (Wen & Wang, 2023). Although chatbots like ChatGPT have the potential to assist in medical literature search (Sedaghat, 2023a, b, c), ChatGPT is rejected by many scientists and journals, as the application additionally lacks critical thinking (Arif et al., 2023).
Lack of References
Lubowitz stated in a recently published Editorial that ChatGPT did not present any references for its findings and provided redundant paragraphs that could quickly be deleted (Lubowitz, 2023). That is especially important for medical researchers dealing with references and sensitive scientific content. References must be requested separately from ChatGPT. However, there is still uncertainty about those references (Sedaghat, 2023a, b, c; Homolak, 2023). Double-checking all those resources and references could cause more work instead of saving time.
Risk of Plagiarism
Copyright and plagiarism pose further challenges that have not yet been discussed in this article (Biswas, 2023; Kitamura, 2023; Sedaghat, 2023a, b, c). Medical researchers must be careful not to violate copyright law or cause plagiarism issues when using applications such as ChatGPT. The application is trained on publicly available sources on the internet (Biswas, 2023; Kitamura, 2023), increasing the risk of copyright or plagiarism issues, as chatbots like ChatGPT could produce texts similar to already published work. Another open question is how chatbots will deal with restricted access to databases such as PubMed and non-open-access literature (Arif et al., 2023). Consequently, chatbots like ChatGPT could automatically exclude such databases and literature from their search, eventually leading to a biased selection of sources presented to the medical researcher. Plagiarism could also cause severe problems for authors in the long run, as allegations of plagiarism could arise years later. Therefore, avoiding plagiarism and wrong content in medical research is of high importance.
Non-Native English Writers
Another issue with using chatbots like ChatGPT is that the common scientific language, especially in medical research, is English. Many medical researchers are non-native English speakers who may be inclined to use ChatGPT primarily to write abstracts and scientific manuscripts in order to improve their English. Chatbots could indeed be advantageous for non-native English speakers seeking to improve the language of their abstracts and scientific manuscripts. However, this could create a false sense of security when chatbots’ corrections and answers are not double-checked, potentially amplifying the abovementioned issues, such as fabricated research with missing, wrong, or fake content and references (Eggmann et al., 2023; Else, 2023; Sallam, 2023; Shen et al., 2023).
How to Avoid Plagiarism and Wrong Content Using Chatbots
1. For now, the best way to avoid plagiarism and wrong content in medical research is to use chatbots only for gaining overall information on various medical topics without any further research purpose (e.g., not using them for writing scientific abstracts and articles). As references are not provided by default and double-checking facts is time-consuming, authors are encouraged to perform their searches conventionally. However, this could change with further improvements of ChatGPT and other chatbots, as the applications could be helpful for medical researchers in the future, especially for tasks like automatic fact checks or improving answers and the quality of manuscripts.
2. If medical researchers insist on using ChatGPT or other chatbots for their professional research, they should ask for references and resources for the provided information and facts. Those references and resources should be double-checked very carefully. However, double-checking references, facts, and resources could be challenging for authors.
3. In the future, many more studies on plagiarism and wrong content using chatbots in medical research should be conducted, as only a few studies on this issue have been performed until now. These studies could show how chatbots perform in real-world medical research scenarios, where authors often deal with restricted and sensitive data.
4. As developments in the field of chatbots emerge, there is hope that research-tailored chatbots could be introduced one day. Waiting for research-adapted alternatives or improvements of current chatbots could be another strategy for avoiding plagiarism and wrong content for now.
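The manual double-checking of chatbot-provided references recommended above could, in principle, be partly automated. The following is a minimal sketch, not an established workflow: it assumes each reference includes a DOI and uses the public Crossref REST API (api.crossref.org) to check that the DOI exists and that its registered title resembles the title the chatbot reported. The function names and the 0.8 similarity threshold are illustrative assumptions.

```python
# Sketch: sanity-checking a chatbot-supplied reference against Crossref.
# Assumptions: the reference includes a DOI, and network access is available
# for the lookup step. A match here does not replace careful human review.
import json
import urllib.request
from difflib import SequenceMatcher

CROSSREF_API = "https://api.crossref.org/works/"

def title_similarity(claimed: str, registered: str) -> float:
    """Rough similarity score between the title a chatbot reported
    and the title registered for the DOI (1.0 = identical)."""
    return SequenceMatcher(None, claimed.lower(), registered.lower()).ratio()

def verify_reference(doi: str, claimed_title: str, threshold: float = 0.8) -> bool:
    """Return True if the DOI resolves in Crossref and the registered
    title closely matches the title the chatbot provided.
    Raises urllib.error.HTTPError (404) for DOIs Crossref does not know,
    which may indicate a fabricated reference."""
    with urllib.request.urlopen(CROSSREF_API + doi, timeout=10) as resp:
        record = json.load(resp)
    registered = record["message"]["title"][0]
    return title_similarity(claimed_title, registered) >= threshold
```

Such a check only catches references whose DOI is missing, non-resolving, or attached to a different paper; it cannot verify that the reference actually supports the claim the chatbot made, so careful reading remains necessary.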
Conclusion
Plagiarism and wrong content could cause severe problems for medical researchers using chatbots like ChatGPT for their research. Therefore, ChatGPT and other AI-based chatbots should not be used routinely for professional research purposes for now. With further developments and studies on the reliability of chatbots for medical research, chatbots could become reliably usable in research in the near future. For now, it is still too early to use chatbots at their full capacity in medical research.
References
Arif, T. B., Munaf, U., & Ul-Haque, I. (2023). The future of medical education and research: Is ChatGPT a blessing or blight in disguise? Medical Education Online, 28, 2181052.
Biswas, S. (2023). ChatGPT and the future of medical writing. Radiology, 223312.
Eggmann, F., Weiger, R., & Zitzmann, N. U. (2023). Implications of large language models such as ChatGPT for dental medicine. Journal of Esthetic and Restorative Dentistry.
Else, H. (2023). Abstracts written by ChatGPT fool scientists. Nature, 613, 423.
Homolak, J. (2023). Opportunities and risks of ChatGPT in medicine, science, and academic publishing: A modern Promethean dilemma. Croat Med J, 64, 1–3.
Kitamura, F. C. (2023). ChatGPT is shaping the future of medical writing but still requires human judgment. Radiology, 230171.
Lubowitz, J. H. (2023). ChatGPT, an artificial intelligence Chatbot, is impacting medical literature. Arthroscopy, 39, 1121–1122.
OpenAI TB. (2022). ChatGPT: Optimizing language models for dialogue. OpenAI.
Sallam, M. (2023). ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns. Healthcare (Basel), 11.
Sedaghat, S. (2023a). Early applications of ChatGPT in medical practice, education, and research. Clin Med (Lond).
Sedaghat, S. (2023b). Success through simplicity: What other artificial intelligence applications in medicine should learn from history and ChatGPT. Ann Biomed Eng.
Sedaghat, S. (2023c). Future potential challenges of using large language models like ChatGPT in daily medical practice. J Am Coll Radiol.
Shen, Y., Heacock, L., Elias, J., et al. (2023). ChatGPT and other large language models are double-edged swords. Radiology, 307, e230163.
Wen, J., & Wang, W. (2023). The future of ChatGPT in academic research and publishing: A commentary for clinical and translational medicine. Clin Transl Med, 13, e1207.
Acknowledgements
None.
Funding
None.
Open Access funding enabled and organized by Projekt DEAL.
Ethics declarations
Competing Interests
The author declares that there are no financial or non-financial interests to disclose.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Sedaghat, S. Plagiarism and Wrong Content as Potential Challenges of Using Chatbots Like ChatGPT in Medical Research. J Acad Ethics (2024). https://doi.org/10.1007/s10805-024-09533-8