On many websites, a chatbot in a corner of the window will pop up asking if the user needs assistance. Chatbots are intelligent conversational software programs designed to mirror human communication and are widely used to provide automated online support to customers and users. They utilize artificial intelligence (AI) methods and algorithms, and their versatility has led to their wide adoption across varied industries to aid customers. Early chatbots such as ELIZA and PARRY were mostly developed to mimic humans so as to make the user believe that they were interacting with another person. While a chatbot in a browser window corner may appear technologically impressive, such bots are only the most basic form of chatbot.

Chatbots have come a long way, and developers, armed with huge amounts of data, are employing deep learning, natural language processing, and machine learning algorithms to build advanced chatbots.1 One example of such a cutting-edge chatbot is ChatGPT (Chat Generative Pre-Trained Transformer). Based on natural language processing, it has taken the world by storm: released in November 2022 by OpenAI, it took only five days to reach 1 million users and crossed 100 million users within two months of its release. In contrast, it took Instagram more than two months to reach its first million signups, and TikTok took about nine months after its global launch to reach 100 million users.1,2,3 ChatGPT represents the fastest adoption of any consumer app to date.

While technological innovations are generally forward-looking, AI has long been a source of contention between its champions and its critics. Applications of AI include personalized shopping and learning, AI-powered assistants, fraud prevention, smart content creation, voice assistants, and autonomous vehicles. Notwithstanding these applications, major criticisms of the technology include its (mis)use by authoritarian governments, algorithm bias, and the existential threat posed by superintelligent AI that may be able to improve itself to the point that humans could no longer control it. It is, therefore, no surprise that ChatGPT in its short span has unleashed hopes among champions of AI as well as panic among its critics.

The proposed use of ChatGPT in academia, including writing research grants, discussing newer research directions, and writing research manuscripts, has rightly caused panic, to the point that publishers such as the American Association for the Advancement of Science (AAAS), which publishes the highly reputable journal Science, have banned listing ChatGPT as an author and having its text appear in scholarly scientific papers.4,5 Other major publishers such as Springer Nature and Elsevier have also banned listing ChatGPT as an author on their papers, but both allow its use, ostensibly to improve the readability and language of the research article.6

Despite the slightly different approaches taken by the top three publishers, a consensus on containing the bot seems to be emerging. ChatGPT has already appeared as a co-author4,5,6 on many papers, but the ban on listing it as an author by some major publishers appears to be a reasonable approach because ChatGPT cannot agree to be a co-author and, most importantly, cannot be held accountable for the published work. Or is there a way of shifting ChatGPT's co-authorship agreement and accountability liability onto the paper's corresponding author? Maybe yes, maybe no; only time will tell.

A gray area, however, appears to be its use and assistance in writing research articles. AAAS has banned this outright, while Springer Nature and Elsevier appear to be okay with its use. Other publishers, including Taylor & Francis, are reviewing their policies and have yet to decide. Some publishers, including the American Chemical Society, have already published content produced by ChatGPT,7 and there is sound logic in allowing its use: nonnative English speakers could use AI-powered programs such as ChatGPT to improve the language and coherence of their research articles. Language has long been an issue, indeed a hindrance, for scientific publishing, and ChatGPT could very well level the playing field when it comes to language and thereby strengthen the growth of science through publications.

A major issue, however, is the promotion of “junk science,” in which ChatGPT could inadvertently play a role. It is well known that there are outright fraudulent and predatory publishers8 interested only in making a profit, and that authors publishing in such journals are mostly concerned with inflating their publication and citation indices. Such a combination is deeply worrying, and ChatGPT has the potential to greatly accelerate it. While the fight against predatory publishers will continue and “junk science” will keep appearing, a possible remedy for genuine publishers may very well be to allow the use of ChatGPT in manuscript writing with a clear statement, including details of how ChatGPT was used, in the Acknowledgment section of the paper, at least until counter-technologies to detect ChatGPT- and other bot-produced work become available.

An equally important problem we see with ChatGPT-assisted manuscript writing is the referencing and crediting of the original authors of the work cited. The negative impact of secondary and tertiary citations, including oversimplification and misinterpretation of the original work, has already been identified as a problem in scientific research. It now appears that ChatGPT could further increase such incidences because the content it produces, even when it includes citations to previously published work, still requires thorough work by scientists to correct the referencing. The worst-case scenario, which we observe periodically, is an entirely fake reference list. A few examples of manuscript introductions with references produced by ChatGPT are given in the Appendix. When compared with the corresponding manuscript introductions published by us in related research areas,9,10,11,12,13,14 the fake referencing by ChatGPT is strikingly evident. More importantly, the published original article introductions are denser, more detailed, and more richly referenced than the bot’s demonstrations. Irrespective of these shortfalls, it is undeniable that ChatGPT could be a useful tool for scientific paper and grant writing, but whether it will diminish the role of the literature survey and possibly weaken the knowledge base of early-career scientists remains an open question.

Above all, the most serious concern with bots such as ChatGPT is algorithm bias, for instance, in the context of climate change. This is compounded by ChatGPT’s capacity for fake referencing. A climate change denier would be able to write an article with apparently rich (but fake) referencing to sway readers into believing that there is widely published scientific literature debunking climate change. In a field such as materials science, where one of the research focus areas is technologies to mitigate climate change, this should be recognized as a cause for concern. While arguing for a ban on AI-powered innovations such as ChatGPT would be a knee-jerk reaction, a much more circumspect approach, including working alongside players such as OpenAI and promoting open-source data science, may be advantageous. This also reinforces the argument for close collaboration between science and policymaking, which could involve lobbying for legally binding laws requiring players such as OpenAI to keep a regular check on the content produced by their technologies, with a particular eye on content relating to existential challenges such as climate change.

It is too early to make proclamations about the uproar ChatGPT has caused in recent months, but we believe the hype will die down soon. It is probably in the nature of human civilization to “overreact,” and for good reason: the fear of new things is, after all, embedded in us for evolutionary reasons. While it is difficult to say with a high degree of confidence, ChatGPT may very well turn out to be the Microsoft Excel of our times, so don’t be surprised if we all end up integrating it into our daily lives, including scientific research and writing.