Introduction

ChatGPT (OpenAI, USA) is an artificial intelligence (AI)–based chatbot that was introduced recently and attracted millions of users shortly after its release (Sedaghat, 2023a, b, c). ChatGPT and other AI-based chatbots such as Bard (Google Inc., USA) are built on generative AI and have the potential to change researchers’ lives in many ways, for example by transforming how medical research is conducted and how resources are retrieved. Despite all the advantages of chatbots like ChatGPT, many challenges for medical research, such as plagiarism issues and wrong content, remain and need to be taken seriously by medical researchers.

Wrong Content

The most apparent disadvantage of chatbots such as ChatGPT is that the information they provide may be incorrect (Wen & Wang, 2023; OpenAI TB, 2022). Because chatbots are not yet routinely used in medical practice, the spotlight falls on their use in medical research. So what happens if ChatGPT creates content for an author and that content is false? This will, of course, tarnish the author’s credibility (Wen & Wang, 2023). Although chatbots like ChatGPT have the potential to assist in medical literature searches (Sedaghat, 2023a, b, c), ChatGPT is rejected by many scientists and journals, not least because the application lacks critical thinking (Arif et al., 2023).

Lack of References

Lubowitz stated in a recently published editorial that ChatGPT did not present any references for its findings and produced redundant paragraphs that could quickly be deleted (Lubowitz, 2023). This is especially important for medical researchers, who deal with references and sensitive scientific content. References must be requested separately from ChatGPT, and even then there is uncertainty about their accuracy (Sedaghat, 2023a, b, c; Homolak, 2023). Double-checking all these sources and references could therefore create more work instead of saving time.

Risk of Plagiarism

Copyright and plagiarism are further challenges (Biswas, 2023; Kitamura, 2023; Sedaghat, 2023a, b, c). Medical researchers must be careful not to infringe copyright law or cause plagiarism issues when using applications such as ChatGPT. The application is trained on publicly available sources on the internet (Biswas, 2023; Kitamura, 2023), which increases the risk of copyright or plagiarism problems, as chatbots like ChatGPT could produce texts similar to already published work. Another open question is how chatbots will deal with restricted access to databases such as PubMed and with non-open-access literature (Arif et al., 2023). Consequently, chatbots like ChatGPT could automatically exclude such databases and literature from their searches, eventually leading to a biased selection of sources presented to the medical researcher. Plagiarism could also become a severe problem for authors in the long run, as plagiarized passages may surface years later. It is therefore of high interest to avoid both later plagiarism and wrong content in medical research.

Non-Native English Writers

Another issue with using chatbots like ChatGPT is that the common scientific language, especially in medical research, is English. Many medical researchers are non-native English speakers who may primarily use ChatGPT to improve the English of their abstracts and scientific manuscripts. Chatbots could indeed be advantageous for non-native English speakers in polishing the language of their abstracts and manuscripts. However, this could create a false sense of security when the chatbots’ corrections and answers are not double-checked, potentially amplifying the issues mentioned above, such as fabricated research with missing, wrong, or fake content and references (Eggmann et al., 2023; Else, 2023; Sallam, 2023; Shen et al., 2023).

How to Avoid Plagiarism and Wrong Content Using Chatbots

  1.

    For now, the best way to avoid plagiarism and wrong content in medical research is to use chatbots only for gaining an overall impression of various medical topics, without any further research purpose (e.g., not for writing scientific abstracts and articles). As references are not provided by default and double-checking facts is time-consuming, authors are encouraged to perform their literature searches conventionally. However, this could change with further improvements to ChatGPT and other chatbots, as these applications could become helpful for medical researchers in the future, especially for tasks such as automatic fact checks or improving the quality of manuscripts.

  2.

    If medical researchers insist on using ChatGPT or other chatbots for professional research, they should ask for the references and sources behind the provided information and facts, and then verify them very carefully. This double-checking of references, facts, and sources can, however, be challenging for authors.

  3.

    In the future, many more studies on plagiarism and wrong content arising from chatbot use in medical research should be conducted, as only a few studies on this issue have been performed to date. Such studies could show how chatbots perform in real-world medical research scenarios, where authors often deal with restricted and sensitive data.

  4.

    As developments in the field of chatbots emerge, there is hope that research-tailored chatbots will one day be introduced. Waiting for research-adapted alternatives or improvements of current chatbots could be another strategy for avoiding plagiarism and wrong content for now.

Conclusion

Plagiarism and wrong content could cause severe problems for medical researchers using chatbots like ChatGPT in their research. Therefore, ChatGPT and other AI-based chatbots should not yet be used routinely for professional research purposes. With further development and more studies on the reliability of chatbots for medical research, they could become reliably usable in research in the near future. For now, it is still too early to use chatbots at their full capacity in medical research.