Abstract
Introduction
ChatGPT, a recently released chatbot from OpenAI, has found applications in various aspects of life, including academic research. This study investigated the knowledge, perceptions, and attitudes of researchers towards using ChatGPT and other chatbots in academic research.
Methods
A pre-designed, self-administered survey using Google Forms was employed to conduct the study. The questionnaire assessed participants’ knowledge of ChatGPT and other chatbots, their awareness of current chatbot and artificial intelligence (AI) applications, and their attitudes towards ChatGPT and its potential research uses.
Results
Two hundred researchers participated in the survey. A majority were female (57.5%), and over two-thirds belonged to the medical field (68%). While 67% had heard of ChatGPT, only 11.5% had employed it in their research, primarily for rephrasing paragraphs and finding references. Interestingly, over one-third supported the notion of listing ChatGPT as an author in scientific publications. Concerns emerged regarding AI’s potential to automate researcher tasks, particularly in language editing, statistics, and data analysis. Additionally, roughly half expressed ethical concerns about using AI applications in scientific research.
Conclusion
The increasing use of chatbots in academic research necessitates thoughtful regulation that balances potential benefits with inherent limitations and potential risks. Chatbots should not be considered authors of scientific publications but rather assistants to researchers during manuscript preparation and review. Researchers should be equipped with proper training to utilize chatbots and other AI tools effectively and ethically.
Introduction
ChatGPT, the OpenAI chatbot released in November 2022, has ignited significant academic and media interest. While chatbot technology existed before, recent advances in AI, driven by substantial resource investment, have ushered in a paradigm shift. ChatGPT leverages advanced deep learning techniques and extensive data training to generate human-like responses to user inputs [1]. The OpenAI website describes it as capable of engaging in dialogue, answering follow-up questions, recognizing errors, and rejecting inappropriate requests [2].
Having attracted over 100 million users in its first two months, ChatGPT has generated excitement about its potential across diverse societal domains, encompassing academia, commerce, professional settings, and personal life [4]. A multitude of applications are being explored, spanning marketing, advertising, consulting, customer service, and finance [10]. ChatGPT’s transformative potential lies in its ability to enhance data accuracy and analysis, support diverse languages, and automate repetitive tasks [5].
While ChatGPT shares a lineage with previous AI models in terms of its language processing architecture and deep learning training methodology, its distinguishing feature lies in its open accessibility as a conversational chatbot. The underlying architecture, GPT-3, released in 2020, exhibited impressive capabilities for generating human-like text, writing essays, translating and summarizing lengthy texts, and answering complex questions [6]. However, ChatGPT differentiates itself by prioritizing conversational interaction, enabling users to engage in open-ended dialogue and receive dynamic responses tailored to their specific queries and context. While GPT-3 boasts a significantly larger parameter count of 175 billion compared to ChatGPT’s 20 billion, the latter exhibits specialized optimization for human-like conversational text generation. This design choice, in conjunction with its open availability and accessibility, has fueled ChatGPT’s rapid global dissemination and swift rise in popularity [7].
ChatGPT’s emergence has precipitated a surge of media, social media, and academic discourse, prompting calls to reevaluate academic practices in light of its capabilities. Notably, demonstrations have shown ChatGPT’s ability to convincingly generate facsimiles of research abstracts, even deceiving some scientists [8]. An early example of ChatGPT’s academic application can be found in an article published in February 2023 [9]. While the generated text demonstrated impressive fluency and minimal grammatical errors, showcasing clarity through its use of simple language, it revealed a certain superficiality and robotic quality. Notably, the text lacked the depth of analysis and stylistic diversity typically associated with human authorship. Nevertheless, advancements in ChatGPT’s training are anticipated to mitigate these limitations in the near future [9].
ChatGPT’s capabilities extend beyond mere text generation, as demonstrated by its successful integration into the drafting of editorials and preprints that bypassed plagiarism detection measures [10,11,12]. However, within the dynamic landscape of scientific knowledge, it is imperative to leverage technological advancements while safeguarding the essential human element. The software’s potential for rapid and accurate output offers the prospect of enhancing efficiency and freeing researchers to engage in more critical tasks [13]. This confluence of factors has catalyzed calls for reimagining the research process. Our present study aims to investigate the knowledge, perceptions, and attitudes of Egyptian researchers towards the utilization of ChatGPT and other chatbots within the research context.
Methods
Study Design and Target Population
The present study employed a cross-sectional design, conducted entirely online. Our target population encompassed researchers affiliated with diverse universities and academic institutions across Egypt. To ensure a representative and heterogeneous sample, we implemented a multi-pronged recruitment strategy to identify, and then contact, potential participants.
Leveraging Online Research Networks
- Targeted Searches on Scholarly Platforms: We utilized keyword-based searches on Google Scholar, Microsoft Teams, and ResearchGate to identify researchers actively publishing in fields relevant to the study. We then filtered results based on institutional affiliation and research focus to curate a targeted pool of potential participants.
- Professional Social Media Groups: We disseminated study invitations and concise descriptions within pertinent research discussion groups on platforms like Facebook and LinkedIn. These communities often require academic credentials or institutional affiliation for membership, enhancing the likelihood of reaching qualified researchers.
Collaboration with Universities and Research Institutions
- Faculty Liaison and Research Office Outreach: We established contact with key individuals, such as faculty liaisons and research office personnel, at selected Egyptian universities. We collaborated with them to disseminate the study invitation through departmental email lists or internal communication channels, maximizing reach within academic structures.
- Utilizing Online Institutional Directories: We leveraged university websites that maintain comprehensive researcher profiles, including contact information and research areas. This facilitated the identification of potential participants based on their expertise and institutional affiliation.
Snowball Sampling
- As we enrolled initial participants, we encouraged them to refer colleagues who might be interested in the study. This snowball sampling technique proved advantageous in expanding our reach and recruiting additional researchers from diverse institutions.
Selection Criteria
Throughout the recruitment process, we adhered to specific criteria to ensure the participation of qualified individuals:
- Academic Background: Only researchers with demonstrable research experience and affiliation with accredited universities or academic institutions were invited.
- Publication History: While not an exclusionary factor, prioritizing researchers with recent publications helped ensure active engagement within their respective fields.
Having identified the target population, we implemented a multi-pronged recruitment strategy to maximize reach and ensure a representative sample. This strategy encompassed two key avenues:
Utilizing Social Media Networks
- Targeted Outreach through Facebook Groups: We sought participation by posting study invitations in relevant research-focused Facebook groups boasting high membership across various universities and disciplines. To further refine our reach, we leveraged targeted advertising within Facebook, enabling us to connect with researchers whose specific interests closely aligned with our study topic.
- Leveraging Trusted Networks via WhatsApp Groups: We cultivated fruitful collaborations with faculty members and research coordinators at several universities. Through their established WhatsApp groups for researchers, we disseminated study information, capitalizing on trusted personal networks within academic communities.
Dissemination through Established Email Channels
- Collaboration with University Mailing Lists: We secured permission from multiple universities to send concise email invitations to their faculty and research staff listservs. This approach broadened our reach, ensuring inclusion of researchers from diverse departments and institutions.
Participant responses were recorded consecutively until the desired sample size was attained.
Study Tool
We employed a self-administered, pre-designed questionnaire hosted on Google Forms to gather data. The questionnaire investigated participants’ knowledge of ChatGPT and other chatbots, their awareness of current chatbot and AI applications, and their attitudes towards ChatGPT and its potential future uses. The complete questionnaire instrument is available in supplementary file 1.
To ensure a targeted and relevant sample, we adopted a non-probability purposive sampling approach. We began by identifying key academic databases and research communities frequented by our target population (researchers). Subsequently, we implemented a snowball sampling technique, leveraging the recommendations of initial participants to reach additional researchers within these communities.
Sample Size
Using OpenEpi software [14] for sample size calculation, and assuming a 50% proportion of good knowledge about ChatGPT among researchers, the minimum sample size required was 196 subjects at a 95% confidence level and a 7% margin of error. A preliminary stage was conducted to assess the validity and reliability of the questionnaire before its wider use. Initially, three Egyptian experts in the field of AI research were asked to evaluate whether the questionnaire items were appropriate and correctly measured researchers’ knowledge, perceptions, and attitudes towards the use of ChatGPT in research; minimal corrections were then made. The next step was pre-testing of the questionnaire: 20 participants were asked to fill out the questionnaire twice, three weeks apart. The collected data were used to assess internal consistency reliability using Cronbach’s alpha as well as test-retest reliability using the intra-class correlation coefficient. The results showed adequate internal consistency reliability (Cronbach’s α = 0.81). Additionally, the intra-class correlation coefficient was 0.96.
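The calculation above can be reproduced with the standard single-proportion sample-size formula (which OpenEpi implements) together with the usual item-variance form of Cronbach’s alpha. A minimal, dependency-free sketch; the function names are ours:

```python
import math
import statistics as st

def sample_size_proportion(p=0.5, margin=0.07, z=1.96):
    """Minimum n for estimating a proportion: n = z^2 * p * (1 - p) / d^2."""
    raw = z ** 2 * p * (1 - p) / margin ** 2
    return math.ceil(round(raw, 6))  # round first to guard against FP noise

def cronbach_alpha(items):
    """Internal consistency; `items` holds one list of scores per questionnaire item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(st.variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / st.variance(totals))

# p = 0.5, 95% confidence (z = 1.96), 7% margin of error:
print(sample_size_proportion())  # 196, matching the reported minimum
```

With p = 0.5 the formula is at its maximum, so assuming 50% prevalence of good knowledge is the conservative choice when no prior estimate exists.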
Statistical Analysis
The statistical analysis for this study was conducted using Minitab 17.1.0.0 for Windows (Minitab Inc., 2013, Pennsylvania, USA). Data normality was assessed using the Shapiro-Wilk test. Continuous data were summarized as mean ± standard deviation or median (interquartile range), while categorical data were presented as frequencies and percentages. Associations between the degrees of disagreement/agreement and demographic characteristics were assessed using an independent t-test or the Pearson correlation coefficient. All tests were two-tailed, with a significance level set at p < 0.05.
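In Python these tests are one-liners in SciPy (`scipy.stats.shapiro`, `ttest_ind`, `pearsonr`); to keep the sketch dependency-free, here is the Pearson coefficient written out explicitly, assuming complete paired responses:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Perfectly linear agreement and disagreement:
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
print(pearson_r([1, 2, 3], [3, 2, 1]))        # -1.0
```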
IRB Approval
Approval of Institutional Review Board (IRB) at Faculty of Medicine, Alexandria University was obtained before starting the study (IRB No: 00012098).
Ethical Considerations
Prior to accessing the questionnaire, participants were required to provide online informed consent to participate in the study. All data were collected and presented anonymously, ensuring participant confidentiality throughout the study and beyond.
Results
A total of 512 researchers were invited to participate in the study through the aforementioned communication tools. Data were collected between June and August 2023. Of those invited, 320 agreed to participate, giving a response rate of 62.5%. However, only 200 (62.5%) of these individuals ultimately completed the survey. We used the “limit to 1 response” option in Google Forms to ensure participants could only submit the form once. Table 1 presents the demographic characteristics of these final respondents. The majority of participants were female (57.5%), with a median age of 35 years (interquartile range: 30–42 years). Notably, over two-thirds (68%) belonged to the medical sector.
Table 2 details our participants’ familiarity with ChatGPT and other chatbots, as well as their engagement with AI tools in their research endeavors. While a substantial majority (67%) had encountered ChatGPT, its uptake within the research setting remained limited, as indicated by the relatively low proportion (11.5%) who reported using it. Similarly, familiarity with other chatbots exceeded usage, with 32% acknowledging awareness and 27.5% reporting prior use. Notably, while nearly half (44%) of the participants identified as familiar with the broad concept of AI, only 20% actively integrated AI applications into their research activities.
Figure 1 illustrates the distribution of ChatGPT usage among participants. Rephrasing paragraphs (10%) and searching for references (7%) emerged as the most prevalent applications, while data analysis and translation assignments constituted the least frequently employed tasks (2.5% and 2%, respectively).
The survey revealed a positive attitude among participants regarding the potential benefits of ChatGPT in research, with a majority agreeing on its usefulness and applicability. Notably, over one-third endorsed the idea of listing ChatGPT as a contributing author in publications. However, concerns surfaced surrounding the potential displacement of researchers’ roles, particularly in tasks like language editing, statistics, and data analysis.
Interestingly, approximately half the participants acknowledged the utility of ChatGPT for paraphrasing, resource retrieval, and data analysis, albeit with reservations about the accuracy of its output. They further recognized the potential for enhanced data collection, research efficiency, and productivity if ChatGPT’s capabilities were expanded. Nevertheless, ethical concerns related to AI integration in scientific research resonated with roughly half of the respondents, highlighting the need for further consideration of these implications (Table 3 and Fig. 2 detail these findings further).
Factors Influencing ChatGPT and Chatbot Utilization
Table 4 presents findings from univariate analyses investigating factors related to the use of ChatGPT and other chatbots by the participants. A prominent revelation is the markedly higher inclination towards these tools among younger researchers. Their median age of 34 years (IQR: 28–38) stands in contrast to 37 years (IQR: 30–45) for non-users (p = 0.01). Furthermore, awareness of the existence of ChatGPT and other chatbot models significantly influenced their propensity to utilize them (p = 0.001). Notably, no statistically significant associations were observed between the use of these technologies and variables like sex, specialty, academic affiliation, degree level, or publication count.
The multivariate analysis presented in Table 5 identified two key factors influencing the use of ChatGPT and similar models: researcher age and prior familiarity with these models. Younger researchers were slightly more likely to adopt these models, with the odds of use increasing by roughly 10% for each year younger, while those with prior knowledge were 17 times more likely to do so (odds ratios = 1.1 and 17; p-values = 0.05 and < 0.001, respectively).
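Reading the age effect as a per-year odds ratio (our interpretation of the reported OR = 1.1), the effect compounds multiplicatively across an age gap. A hypothetical sketch of how a modest per-year ratio accumulates:

```python
def compounded_odds_ratio(or_per_unit, units):
    """Odds multiplier implied by a per-unit odds ratio over several units."""
    return or_per_unit ** units

# An OR of 1.1 per year compounds over a ten-year age gap:
print(round(compounded_odds_ratio(1.1, 10), 2))  # 2.59
```

So an OR of 1.1 per year, seemingly small, implies roughly 2.6-fold higher odds of chatbot use for a researcher ten years younger, all else equal.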
We also correlated the attitude and future uses of ChatGPT with different academic characteristics of participants. The analysis of correlations between participants’ academic background (specialty, affiliation, publications) and their attitudes and future uses of ChatGPT, presented in Supplementary Tables 1–3, yielded no statistically significant relationships.
Discussion
This study investigated the perceptions of Egyptian researchers regarding the utilization of ChatGPT in academic research. The findings indicate that ChatGPT adoption remains in its nascent stages within this cohort. Despite this, awareness of its potential benefits is burgeoning, with many researchers expressing interest in leveraging it to enhance their work.
Despite the presence of earlier chatbot releases, ChatGPT has sparked a notable surge of interest and engagement within academic circles. This is reflected in the relatively high degree of familiarity with ChatGPT among our study participants compared to other chatbots. Intriguingly, this awareness did not necessarily translate into widespread research utilization. Currently, the primary identified applications of ChatGPT in research involve paragraph rephrasing and reference retrieval. Notably, data analysis emerged as a potential function, although concerns surrounding the accuracy and reliability of outputs were expressed by many participants. Interestingly, our data revealed an association between younger age and chatbot use, with younger researchers exhibiting a higher likelihood of engagement. This association can potentially be attributed to the increased technological fluency and comfort often observed in younger generations, which is further supported by the observed higher usage among participants with prior familiarity with chatbots.
While ChatGPT and other language models present potential benefits for research endeavors, inherent limitations require critical consideration. One primary concern lies in their restricted comprehension of the complex nuances within the published literature. This inadequacy can lead to erroneous analyses and potentially misleading conclusions drawn from the processed information. Furthermore, the absence of robust citation mechanisms within these models creates a significant risk of perpetuating misinformation [15, 16].
This concern is demonstrably illustrated by an anecdotal incident encountered during the present study’s preparation. The first author of this manuscript engaged ChatGPT in a query regarding a specific research topic of interest. While the model provided a seemingly plausible explanation, its attempts at referencing relevant sources proved demonstrably unreliable. Of the two purported complete references (authors, year, journal, and DOI) offered by ChatGPT, one was entirely fabricated, with a DOI linked to an unrelated article; the other pointed to an existing publication whose content bore no connection to the initial query and whose accompanying DOI was inaccurate.
While the identified limitations of ChatGPT, particularly its limited understanding of literature and unreliable citation mechanisms, pose significant challenges to its widespread adoption in research, we believe that advancements in AI technology and refined training methodologies hold the potential to mitigate these concerns over time. Nevertheless, until such advancements materialize, researchers and students should exercise caution when utilizing ChatGPT. Rigorous verification of the quality and accuracy of generated outputs is essential, and its application should be restricted to tasks that require minimal literary analysis or citation accuracy. Currently, tasks such as summarizing existing literature, enhancing written content, and conducting basic statistical analyses appear to be more suitable for ChatGPT’s capabilities.
More than one third of participants in our study believed ChatGPT could be designated as an author on scientific publications, provided it made a meaningful contribution to the research work. However, roughly half expressed concerns regarding the ethical implications of integrating AI applications into scientific research. Additionally, concerns pertaining to ethical, legal, and social issues (ELSI) surrounding chatbot implementation in research have been raised, despite existing examples where ChatGPT has been listed as an author in several articles and preprints [10, 17].
Major publishers have adopted a range of responses to the question of ChatGPT’s potential authorship, with some implementing restrictions on listing it as a co-author and others opting for a complete prohibition of its use. Similarly, the use of text generated by ChatGPT within research manuscripts incurs varying degrees of scrutiny, with some publishers imposing outright bans and others permitting its use for stylistic improvements under specific conditions, such as excluding critical tasks like data analysis and interpretation and mandating transparent disclosure of its involvement [18]. A leading plagiarism detection software company has unveiled a novel technology capable of recognizing AI-assisted writing, encompassing texts produced by ChatGPT [19].
The diverse array of responses to ChatGPT’s utilization in research necessitates closer examination. In this context, it is pertinent to consider the International Committee of Medical Journal Editors (ICMJE) guidelines, which stipulate four essential criteria for authorship in scientific publications and academic works: (1) conceptualization and design, (2) data collection, analysis, and interpretation, (3) substantial contribution to writing, drafting, or critical revision of the intellectual content, and (4) final approval of the version intended for publication [20, 21]. Individuals who do not meet the aforementioned criteria should be recognized solely in the acknowledgments section of the publication [21]. Furthermore, all authors bear the collective responsibility to ensure that any concerns regarding the accuracy or integrity of any aspect of the work are appropriately investigated and satisfactorily addressed [22]. Ethical and responsible authorship hinges upon three fundamental pillars: truthfulness, ensuring no falsity or misrepresentation is present; trustworthiness, demanding authors diligently strive to minimize bias; and fairness, upholding objectivity and impartiality throughout the research process. Accountability, ethical conduct, and independence are further requisites for authors to fulfill their obligations [22, 23].
Based on the established criteria for authorship outlined above, two primary arguments preclude the listing of ChatGPT as an author on scientific publications. Firstly, its capabilities do not align with the aforementioned requirements. Secondly, and more importantly, ChatGPT lacks the capacity to be held accountable for the presented work, a fundamental characteristic for authorship as stipulated by The ICMJE guidelines, which state that “Chatbots (such as ChatGPT) should not be listed as authors because they cannot be responsible for the accuracy, integrity, and originality of the work, and these responsibilities are required for authorship.” [21].
In line with the ICMJE authorship criteria, the World Association of Medical Editors (WAME)’s recent recommendations on AI-assisted writing explicitly deny chatbots the status of author. This exclusion stems from their inability to fulfill crucial authorship responsibilities, such as approving the final version before publication, ensuring the work’s integrity and accuracy, and comprehending and legally signing the conflict-of-interest statement. Consequently, WAME emphasizes that authors bear the ultimate responsibility for the accuracy of any material generated by a chatbot and included in their publications [24].
As technological advancements continue, chatbots may gradually acquire the capability to perform more complex research tasks and, consequently, raise the question of their accountability for their actions. This potential scenario necessitates a critical re-examination of authorship guidelines to address the issue and formulate clear recommendations regarding the attribution of authorship in such situations.
The potential for widespread utilization of ChatGPT in research paper drafting also raises significant ethical concerns surrounding the potential for text similarity within papers addressing the same field. This could manifest as high rates of plagiarism flagged by plagiarism detection software. Additionally, the potential designation of ChatGPT as a co-author presents a novel challenge for the research community, sparking debate amongst supporters and opponents [25, 26]. Furthermore, the issue of transparency regarding AI-generated content within research outputs necessitates clear disclosure practices [27].
A systematic review investigating the potential and pitfalls of ChatGPT in healthcare education and research found that concerns surrounding its use were prevalent in over 90% of analyzed publications. These concerns encompassed ethical considerations, copyright and plagiarism issues, lack of originality, inaccurate content, limited knowledge base, incorrect citations, and a propensity for “artificial hallucination” – the generation of misleading or factually incorrect outputs by AI models [28]. This phenomenon of AI-generated “hallucinations,” particularly pronounced in models trained on extensive unsupervised data, underscores the crucial role of human evaluation in ensuring the accuracy and validity of generated content [29].
Artificial intelligence (AI) is increasingly permeating diverse medical fields, exhibiting promising potential in areas such as basic research, disease diagnosis, patient risk identification, drug discovery, and clinical trials [30, 31]. The integration of AI into healthcare raises a multitude of social concerns, with anxieties regarding the potential for AI to replace doctors occupying a prominent space [32]. This concern regarding AI replacing human practitioners is particularly salient for diagnosticians like radiologists and pathologists, whose workflows may be significantly impacted by AI technologies. While complete automation may not be imminent, the rapid advancements in AI necessitate exploration of the timeline for transitioning to semi-autonomous and eventually fully autonomous diagnostic systems [32].
The emergence of advanced language models like ChatGPT raises analogous questions about their potential roles in research. Specifically, their capabilities can blur the lines of traditional researcher functions, leading to concerns about replacing or significantly automating tasks currently performed by human researchers and supporting personnel, such as data analysts and language editors. Interestingly, similar anxieties regarding potential job displacement by AI have surfaced in other fields, notably with programmer concerns surrounding models like Google DeepMind’s AlphaCode [33, 34]. While current evidence suggests that AI is unlikely to entirely replace researchers in the near future, its growing capabilities necessitate a shift in focus from competition to collaboration. Researchers in the coming years should prioritize adapting to and effectively integrating AI into their workflows, rather than viewing it as a threat.
A key limitation of our study lies in the sampling and recruitment methods, as reliance on self-selection introduces the possibility of non-representative data. This potential bias, where only individuals already interested in the topic participated, may limit the generalizability of our findings, and necessitate caution in interpreting them.
Conclusions and Recommendations
The increasing popularity of chatbots in academic research presents an opportunity to foster responsible and ethical AI integration. Research efforts should prioritize the development of advanced text analysis techniques for identifying fabricated or misleading content, alongside the development of training programs for researchers to effectively utilize chatbots and other AI tools while adhering to ethical principles. Such initiatives can ensure the benefits of AI augmentation without compromising academic integrity.
Data Availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Abbreviations
- AI: Artificial Intelligence
- ChatGPT: Chat Generative Pre-Trained Transformer
- DOI: Digital Object Identifier
- ELSI: Ethical, legal and social issues
- ICMJE: International Committee of Medical Journal Editors
References
Gilson A, Safranek CW, Huang T, et al. How Does ChatGPT Perform on the United States Medical Licensing Examination? The Implications of Large Language Models for Medical Education and Knowledge Assessment. JMIR Med Educ. 2023;9:e45312. Published 2023 Feb 8. https://doi.org/10.2196/45312
OpenAI. Introducing ChatGPT. Available at: https://openai.com/blog/chatgpt/. Accessed 20 January 2024
Reuters. ChatGPT sets record for fastest-growing user base - analyst note. Available at: https://shorturl.at/kuyQX. Accessed 20 January 2024
Harvard Business Review. ChatGPT Is a Tipping Point for AI. Available at: https://hbr.org/2022/12/chatgpt-is-a-tipping-point-for-ai. Accessed 20 January 2024
Entrepreneur. What Does ChatGPT Mean for the Future of Business? Available at: https://rb.gy/dxka9n. Accessed 20 January 2024.
Sezgin E, Sirrianni J, Linwood SL. Operationalizing and Implementing Pretrained, Large Artificial Intelligence Linguistic Models in the US Health Care System: Outlook of Generative Pretrained Transformer 3 (GPT-3) as a Service Model. JMIR Med Inform. 2022;10(2):e32875. Published 2022 Feb 10. https://doi.org/10.2196/32875
Ghacks. What is the difference between ChatGPT and GPT-3? Available at: https://www.ghacks.net/2022/12/30/difference-between-chatgpt-and-gpt-3/. Accessed 20 January 2024.
Else H. Abstracts written by ChatGPT fool scientists. Nature. 2023;613(7944):423. https://doi.org/10.1038/d41586-023-00056-7
Manohar N, Prasad SS. Use of ChatGPT in academic publishing: a rare case of seronegative systemic lupus erythematosus in a patient with HIV infection. Cureus 2023;15(2):e34616
Blanco-Gonzalez, A. et al. The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies. Preprint at arXiv https://doi.org/10.48550/arXiv.2212.08104 (2022)
O’Connor S, ChatGPT. Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? [published correction appears in Nurse Educ Pract. 2023;67:103572]. Nurse Educ Pract. 2023;66:103537. https://doi.org/10.1016/j.nepr.2022.103537
Macdonald C, Adeloye D, Sheikh A, Rudan I. Can ChatGPT draft a research article? An example of population-level vaccine effectiveness analysis. J Glob Health. 2023;13:01003. Published 2023 Feb 17. https://doi.org/10.7189/jogh.13.01003
Hill-Yardin EL, Hutchinson MR, Laycock R, Spencer SJ. A Chat(GPT) about the future of scientific publishing. Brain, Behavior, and Immunity 2023;110:152–4
Dean AG, Sullivan KM, Soe MM. OpenEpi: Open Source Epidemiologic Statistics for Public Health. www.OpenEpi.com, updated 2013/04/06. Accessed 20 January 2024
van Dis EAM, Bollen J, Zuidema W, van Rooij R, Bockting CL. ChatGPT: five priorities for research. Nature. 2023;614(7947):224–226. https://doi.org/10.1038/d41586-023-00288-7
Biswas S. ChatGPT and the Future of Medical Writing [published online ahead of print, 2023 Feb 2]. Radiology. 2023;223312. https://doi.org/10.1148/radiol.223312
ChatGPT Generative Pre-trained Transformer, Zhavoronkov A. Rapamycin in the context of Pascal’s Wager: generative pre-trained transformer perspective. Oncoscience. 2022;9:82–84. Published 2022 Dec 21. https://doi.org/10.18632/oncoscience.571
The Guardian. Science journals ban listing of ChatGPT as co-author on papers. Accessed 20 January 2024.
Turnitin. Sneak preview of Turnitin’s AI writing and ChatGPT detection capability. Accessed 20 January 2024.
Singhal S, Kalra BS. Publication ethics: Role and responsibility of authors. Indian J Gastroenterol. 2021;40(1):65–71. https://doi.org/10.1007/s12664-020-01129-5
The International Committee of Medical Journal Editors (ICMJE). Defining the Role of Authors and Contributors. Accessed 20 January 2024.
McKneally M. Put my name on that paper: reflections on the ethics of authorship. J Thorac Cardiovasc Surg. 2006;131:517–519.
Anderson PA, Boden SD. Ethical considerations of authorship. SAS J. 2008;2(3):155–158. Published 2008 Sep 1. https://doi.org/10.1016/SASJ-2008-Comment1
Zielinski C, Winker MA, Aggarwal R, Ferris LE, Heinemann M, Lapeña JF, et al. Chatbots, Generative AI, and Scholarly Manuscripts. WAME Recommendations on Chatbots and Generative Artificial Intelligence in Relation to Scholarly Publications. WAME 2023. Available at: https://wame.org/page3.php?id=106. Accessed: 20 January 2024.
Macdonald C, Adeloye D, Sheikh A, Rudan I. Can ChatGPT draft a research article? An example of population-level vaccine effectiveness analysis. J Glob Health 2023;13:01003.
Marušić A. JoGH policy on the use of artificial intelligence in scholarly manuscripts. J Glob Health 2023;13:01002.
Dave T, Athaluri SA, Singh S. ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front Artif Intell. 2023;6:1169595. Published 2023 May 4. https://doi.org/10.3389/frai.2023.1169595
Sallam M. ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns. Healthcare (Basel). 2023;11(6):887
Alkaissi H, McFarlane SI. Artificial Hallucinations in ChatGPT: Implications in Scientific Writing. Cureus. 2023;15(2):e35179
Brasil S, Pascoal C, Francisco R, Dos Reis Ferreira V, Videira PA, Valadão AG. Artificial Intelligence (AI) in Rare Diseases: Is the Future Brighter? Genes (Basel). 2019;10(12):978. Published 2019 Nov 27. https://doi.org/10.3390/genes10120978
Kumar Y, Koul A, Singla R, Ijaz MF. Artificial intelligence in disease diagnosis: a systematic literature review, synthesizing framework and future research agenda [published online ahead of print, 2022 Jan 13]. J Ambient Intell Humaniz Comput. 2022;1–28. https://doi.org/10.1007/s12652-021-03612-z
Forbes. Can Doctors Truly Be Replaced By Technology? Available at: https://www.forbes.com/sites/saibala/2021/09/22/can-doctors-truly-be-replaced-by-technology/?sh=2e0ee8c54a83. Accessed 20 January 2024.
Li Y, Choi D, Chung J, et al. Competition-level code generation with AlphaCode. Science. 2022;378(6624):1092–1097. https://doi.org/10.1126/science.abq1158
Castelvecchi D. Are ChatGPT and AlphaCode going to replace programmers? [published online ahead of print, 2022 Dec 8]. Nature. 2022. https://doi.org/10.1038/d41586-022-04383-z
Acknowledgements
We would like to thank Dr Gabrielle Samuel, Senior Research Fellow at the Department of Global Health and Social Medicine, King’s College London, UK, and the Wellcome Centre for Human Genetics, University of Oxford, UK, for her help and support during the preparation of this work.
Funding
Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB).
Author information
Authors and Affiliations
Contributions
ASA conceptualized the study and wrote the introduction and discussion sections. AA analyzed the data and wrote the results section. MAM, AMM, and EAS participated in data collection. MAM participated in writing the methods section. HHZ participated in writing the methods and results sections. All authors participated in the study design and have read and approved the manuscript.
Corresponding author
Ethics declarations
Ethics Approval and Consent to Participate
The Research Ethics Committee of the Faculty of Medicine, Alexandria University, approved the conduct of this study. All procedures were performed in accordance with relevant guidelines and regulations (such as the Declaration of Helsinki). Informed consent was obtained from participants, after an explanation of the purpose of the study, as a first step before proceeding to the survey questions. Participants had the right to withdraw from the study before submitting the completed questionnaire.
Consent for Publication
Not applicable.
Competing Interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Electronic Supplementary Material
Below is the link to the electronic supplementary material.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Abdelhafiz, A.S., Ali, A., Maaly, A.M. et al. Knowledge, Perceptions and Attitude of Researchers Towards Using ChatGPT in Research. J Med Syst 48, 26 (2024). https://doi.org/10.1007/s10916-024-02044-4
Received:
Accepted:
Published:
DOI: https://doi.org/10.1007/s10916-024-02044-4