Introduction

Artificial Intelligence (AI) has long been proposed as a disruptive force for higher education (Hinojo-Lucena et al., 2019). The launch of the AI chatbot ChatGPT by OpenAI in late 2022 can be considered a watershed moment. While text-enabled generative AI (genAI) has existed for several years (Bengio et al., 2000; Radford et al., 2018), the accessibility and popularity of ChatGPT have prompted instantaneous and widespread discussions beyond the scholarly literature about the impact of genAI on the higher education sector. But what impact do these discussions have, and are they even offering anything new?

There are, generally speaking, polarised positions about technology in higher education, where the “doomsters” declare that technology will ruin what is already present and the “boosters” believe that the technology will revolutionise our practices (Selwyn, 2014). The latter, optimistic view is reflected in a 2019 systematic review of research on AI in higher education, in which all the included studies describe AI as a tool to enhance teaching and learning and almost none mention ethical concerns (Zawacki-Richter et al., 2019). The critical literature, on the other hand, reports significant concerns about the impact of AI on higher education. For instance, the automated detection of plagiarism prompts worry about the undue influence of corporate ventures on education (Popenici & Kerr, 2017).

Bearman et al. (2023) conducted a discourse analysis of references to AI in major higher education journals prior to the release of ChatGPT. The analysis identified two dominant discourses. The first, a discourse of imperative response, held that it was crucial for higher education institutions to respond immediately to the arrival of AI, by either resisting or embracing the technological advances. The second, a discourse of altering authority, suggested that AI, in whatever form it took, would alter the power structures within teaching and learning. Across the studies, AI was seen as the “ghost in the machine”, an unknown and invisible technology. However, ChatGPT introduced a tangible, accessible, and clearly labelled experience of working with an AI technology to the higher education sector. How might discussions about an accessible and concrete example of AI contrast with this earlier, scholarly and often hypothetical work?

This paper seeks to outline the ways in which the higher education sector discussed ChatGPT in the first months after its release. Given the speed of the technology’s arrival, the peer-reviewed literature could not serve as the initial forum for academic discussion. Rather, higher education commentary texts in outlets such as Times Higher Education or The Conversation, which we call the “grey literature”, were a significant forum for the immediate academic debate. The positive and negative visions for how AI will change higher education expressed in these texts came with implicit assumptions. By making such assumptions explicit, we can see how such visions may “contribute to closing off alternative pathways to the future” (Vicsek, 2021, p. 853).

We regard these early discussions about genAI as significant, not just with respect to what they are saying, but also with respect to what they are mobilising. We take the notion of mobilisation from the technology discourses literature, which describes how the ways that digital technologies are discussed can prompt certain ways of thinking and doing (Fisher, 2010), thereby shaping the influence of these technologies in practice. Thus, examining these early claims about genAI chatbots highlights the groundwork that they laid for institutions, teachers, and researchers. By analysing initial suggestions for how ChatGPT might influence teaching and learning in higher education, we may also gain insight into how these claims continue to shape the current landscape.

Objectives

Drawing from a scoping review methodology, this paper presents a critical literature review of blog posts and opinion pieces about ChatGPT published by leading higher education sector media outlets during the first 3 months after ChatGPT was made public (Dec 2022–Feb 2023). By focussing exclusively on these early days, we aim to identify how the authors imagine genAI chatbots will influence teaching and learning in higher education. This is followed by a discussion, in which we seek to critically interrogate how these perspectives might mobilise practice.

Methods

This paper combines a scoping review methodology with a critical analysis of the texts. Scoping reviews are appropriate for emergent fields where there are not yet many studies or a substantial body of empirical evidence (Arksey & O’Malley, 2005). In our case, the scoping review methodology is used to synthesise relevant grey literature, i.e. literature that has not undergone the process normally associated with academic publishing. We chose this corpus because the early discussions of what genAI chatbots may mean for higher education primarily took place outside of the traditionally much slower peer-reviewed literature.

In this section, we first outline the process of identifying articles for inclusion in the dataset. This is followed by a description of the analytical steps we went through to identify claims.

Search strategy

Since ChatGPT’s launch on 30 November 2022, many articles have been written about it. To identify articles for this review, we prioritised those published in outlets close to the higher education sector, comprehensively searching international media outlets whose primary focus is higher education. We identified five such outlets: Times Higher Education (THE), The Chronicle of Higher Education (CHE), Inside Higher Education (IHE), University World News (UWN), and The Conversation (TC).

On 28 February 2023, we systematically searched each media outlet’s website for opinion pieces and blog posts that included the search term “ChatGPT”. (We cross-checked the final yield using alternate terms such as “chat-gpt”, “gen-AI”, “genAI”, and “generative AI”, but there were no additional relevant articles.) This cut-off date was chosen to include the first 3 months of ChatGPT’s public availability. The searches returned a total of 134 articles (15 from THE, 20 from CHE, 36 from IHE, 20 from UWN, and 43 from TC). After initial and then in-depth screening, we excluded 29 articles for being news items with no expressed opinion, and a further 60 articles that did not focus on how AI chatbots may influence higher education. Forty-five articles were included in the review. Figure 1 illustrates this process as a PRISMA-style flowchart.

Fig. 1 Identification of articles for inclusion

Analytical approach

The included articles were analysed in two phases. In the first phase of analysis, we started by conducting an open coding of a subset of 10 articles. Two authors (AB, LXJ) coded these individually in order to generate multiple perspectives and ensure that all relevant themes were identified. Comparing the two independent rounds of open coding, however, showed a high level of correspondence, i.e. the two coders had identified many of the same themes. The open coding also revealed a significant thematic overlap between the articles. Based on this open coding, we developed a draft coding framework. This framework was then applied to 20 articles by two authors (AB, LXJ) independently. Following this, a discussion of differences and further adjustments resulted in an updated coding framework. This stepwise process ensured that the final codes represented strong themes appearing across several articles. By the end of this process, the coding framework consisted of 10 codes divided into three groups, with each code representing a claim about ChatGPT and higher education. The framework is shown in Table 1.

Table 1 Coding framework—each code represents a claim about ChatGPT and higher education

The final coding framework was then used by one author (AB) to code all 45 articles. The coding was done at the article level, meaning that for each article, we noted which of the claims it included. A passing allusion to a theme was not sufficient to merit categorisation; we only recorded claims central to the article’s argument. In the second phase of analysis, we interpreted how the included literature describes the future of higher education teaching and learning in a time of genAI.

Findings

This section first provides a brief description of the included articles. This is followed by a presentation of each of the 10 claims, synthesising how each claim appears across the included articles and highlighting both similarities and variations.

Description of included articles

The 45 articles consisted of 9 from THE, 2 from CHE, 17 from IHE, 5 from UWN, and 12 from TC. They were published between 14 December 2022 and 28 February 2023, with the frequency of articles increasing throughout the 3-month period: 5 in December, 19 in January, and 21 in February.

In 39 out of 45 cases, the articles had a single author, with four articles co-written by two authors and two articles co-written by three or more authors. Two of the authors wrote more than one included article. Although the outlets were chosen to reflect international perspectives, there was a clear tendency for authors to be based in English-speaking countries, with 25 from the USA alone. There was only one article with an author from a low- or middle-income country (South Africa). Almost all authors are employed in the higher education sector, the only exceptions being Biaou (2023), an author and management consultant, and Gill (2023), a higher education journalist.

In this paper’s supplementary materials, we have included a table which provides an overview of the article title, author name, author location, outlet, publication date, and word count for each of the 45 included articles.

Claims about the nature of ChatGPT

The following three claims are about what ChatGPT is and what this means for teaching and learning in higher education.

ChatGPT is an inevitable disruptor

Seventeen articles claim that the release of ChatGPT will bring comprehensive change to the higher education sector. This claim is often expressed in colourful language, e.g. referring to the integration of AI in our lives as “the undeniable direction of travel” (Biaou, 2023), “a pivotal moment in the history of education” (Weissman, 2023), and arguing that “we ignore this at our peril” (Byrnes, 2023).

Sixteen of these articles consider the appearance of ChatGPT an opportunity for positive change, arguing that “disruption does not have to be a negative word. Sometimes it can be the mother of transformation” (Breen, 2023). For example, Biaou (2023) argues in their optimistic article, “Universities cannot resist AI – Rather, they must embrace it”, that higher education institutions have always been at the forefront of technological advances and that they should seek to lead the “AI revolution” (Biaou, 2023). These authors primarily focus on how ChatGPT can disrupt outdated structures and practices, provide an opportunity to rethink higher education, and thus empower students and teachers alike (Kovanovic, 2022; Saunders, 2023). Weissman (2023) presents a more critical perspective, highlighting the many challenges that institutions and teachers need to overcome to maintain control over their own situations. Under the headline “ChatGPT Is a Plague Upon Education”, he suggests that ChatGPT is going viral, comparing its spread to that of COVID-19, characterised by denial and doubt and met with a “lockdown” response of a return to onsite handwritten exams.

Despite differences in sentiment towards the inevitable disruption facing higher education, there is general agreement that the development is inevitable and that in the longer run, “banning these technologies is neither the solution nor even a possibility” (Saunders, 2023). Even the most critical perspectives argue that institutions and teachers should “cultivate the powers of the human mind in the face of this novel threat to our intelligence” (Weissman, 2023).

ChatGPT as an inevitable disruption can be considered a sort of precursor claim, providing part of the urgency and rationale for other claims described below, e.g. that a certain new assessment or teaching practice is necessary because of the inevitability of this technology.

ChatGPT is just another digital tool

Compared to the fatalism of the inevitable disruption claim, 19 articles present a less concerned perspective through the claim that ChatGPT is just another digital tool. Although these texts also assume that genAI will lead to changes, the authors do not envision these changes as revolutionary; rather, they see ChatGPT as just the newest in a series of digital tools that have shaped teaching and learning in higher education (Gill, 2023; Loble, 2023).

Six articles compare the fear of ChatGPT to that of MOOCs (Massive Open Online Courses), Wikipedia, or the Internet as such (e.g. Breen, 2023; Gill, 2023). The reasoning is that these technologies were initially prophesied to upend higher education but have since become part of everyday reality in higher education. Breen (2023) argues that “AI will not fundamentally change teaching any more than previous technologies have done. […] it is pedagogy, not technology, that has improved many aspects of learning”.

In 13 out of the 19 cases, ChatGPT is compared to digital tools that can automate time-consuming tasks and improve human abilities (Byrnes, 2023). This is most prominently seen in frequent comparisons to various writing support tools (e.g. spell checkers) and calculators. When these tools were introduced, they were also met with scepticism from teachers who feared that now students would never learn to spell or calculate. Of the articles making the claim that ChatGPT is just another digital tool, 13 conclude that the tool itself is neither good nor bad; it all depends on how it is used (Sarofian-Butin, 2023). For example, Byrnes (2023) suggests that ChatGPT can provide invaluable support to second-language students or students with communication or learning difficulties.

Three articles, however, highlight that there are some fundamental differences between genAI and other digital tools. For example, in a comparison of calculators and ChatGPT, Warner (2023b) argues that a main difference is that calculators in maths courses do not negatively impact learning: they perform exactly the same tasks as students would, but with greater speed and accuracy, and thus enable students to concentrate on thinking about maths. The next claim describes how this may differ with ChatGPT.

ChatGPT decouples thinking and writing

Twenty-five texts describe how ChatGPT could disrupt an essential learning experience that takes place during the writing process. There is a range of claims about how this decoupling takes place. For example, Warner suggests we tend to think of writing as an output or product of thinking (Warner, 2023c). This connection between thinking and its product is broken by AI chatbots because they are able to create text without thinking (Sarofian-Butin, 2023). Several authors underline, however, that writing is not the product of thinking but rather a form of thinking. Mills (2023) describes it as “a slow mode of thinking, an antidote to snap judgments” and argues that writing struggles have a purpose in promoting deeper thinking. Technologies that make the writing process easier thus risk leading to lower-quality thinking. Baron (2023) echoes this sentiment and highlights that human writing is an iterative process in which we “question what we originally wrote, we rewrite, or sometimes start over entirely”.

Although this claim generally reflects a worry that ChatGPT will lead to superficial thinking and less student learning, there is also general agreement that the texts which ChatGPT can currently generate often reveal the lack of thinking behind its responses (e.g. Strang, 2023). Since ChatGPT is based on a language model consisting of already established knowledge, several authors make the point that no new thinking or knowledge is constructed; this creativity is something that is still inherently human (e.g. Jørgensen, 2023), although others dispute this (Farnell, 2023).

Claims about changing practices of institutions and teachers

These four claims are about the ways that institutional and teaching practices will change as a consequence of AI chatbots.

ChatGPT will necessitate new assessment practices

This claim received the most attention in the articles included in this review. Following reports of ChatGPT generating essays that could pass as student work, 32 articles see the availability of genAI primarily as a challenge to the integrity of marking, grading, and certification. Following from this, several of the included articles present what Kelley (2023) refers to as “strategies for controlling how AI is used in your classroom”. Her article lists several strategies, including some that simply aim to block access, for instance, by requiring students to use pen and paper or by hosting written exams in testing centres. Other authors propose less restrictive approaches, for instance, promoting a culture of academic integrity through the use of explicit honour codes that students must pledge to follow (Gift & Norman, 2023). A third route is the use of AI detection software, i.e. programmes that can recognise whether a text is written by genAI. Current solutions, however, are not very good at this task, and some argue that it “will never be possible to make AI text identifiers perfect […] and there will always be new ways to mislead them” (Alimardani & Jane, 2023).

The shortcomings of the abovementioned approaches lead 29 of these authors to argue that restrictions, honour codes, and detection will not suffice and that current assessment practices will need more comprehensive change (e.g. Breen, 2023; Illingworth, 2023; Kovanovic, 2022). Twenty-eight out of the 29 authors, however, regard this as a good opportunity “to rethink assessment practices and engage students in deeper and more meaningful learning that can promote critical thinking skills” (DeLuca et al., 2023).

Warner (2023b, 2023c) notes that ChatGPT is only perceived as a threat to assessment because current practices are too preoccupied with student outputs (e.g. essays) rather than the actual learning, arguing that “if ChatGPT can do the things we ask students to do in order to demonstrate learning, it seems possible to me that those things should’ve been questioned a long time ago” (Warner, 2023a). Indeed, many of the arguments put forth in favour of rethinking assessment are already well-established discourses in higher education research, e.g. authentic assessment (Illingworth, 2023), project-based learning (Viljoen, 2023), formative feedback (DeLuca et al., 2023), and emphasising process over product (Kovanovic, 2022). As such, the claim that current assessment practices are inadequate is nothing new, and the availability of ChatGPT has simply exposed existing weaknesses. In an article entitled “ChatGPT reveals the uncomfortable truth about graduate skills”, Warren (2023) writes that “[t]he scandal that should be grabbing the headlines is the fact that for a generation we have been training our undergraduates to be nothing more than AI bots themselves”.

ChatGPT will necessitate a new role for teachers

Another claim, made in 8 articles, is that access to ChatGPT and other forms of genAI will lead to the automation of many tasks that teachers are currently responsible for (Biaou, 2023). In essence, some of the teaching will now—or soon—be undertaken by AI systems (Farnell, 2023). Responding to this, teachers should be “moving on from delivering a homogenised intellectual diet to students who, if they want middle-of-the-road, will be able to get it from an app” (Gill, 2023). This necessitates a new teacher role, one focussed on the things that ChatGPT and similar systems do not do well. This includes a focus on human traits, e.g. caring for students, being a role model, or being an authority (Farnell, 2023), as well as an opportunity to reimagine pedagogies towards more student-centred and problem-based approaches (Viljoen, 2023).

This claim speaks to existing agendas of reducing one-way lecturing in favour of a teacher role focussed on guidance and dialogue. Consequently, 6 out of the 8 articles present the new teacher role as good news for both teachers and students. Schroeder (2022) argues that AI will enable teachers to provide new learning opportunities for their students, and Warren (2023) writes that AI offers “an opportunity to stop focusing on teaching how to solve problems that have already been answered and put more emphasis on how to recognise and tackle those problems remaining. We should relish that opportunity, not run scared from it”.

Although the new teacher role is framed as the alternative to teaching by machines (Farnell, 2023; Gill, 2023), this claim is far from Luddite. Higher education institutions “should be at the vanguard of the implementation of technological tools in our society” (Biaou, 2023), not least because they need to prepare students for a “rapidly changing and technology-driven workforce” (Viljoen, 2023).

ChatGPT and AI should be part of the curriculum

This claim is mentioned in 26 of the articles. It is based on the idea that AI technologies will have a profound impact on the job market and that, in response, educational institutions should prepare students to understand and use the new technologies that will be needed in their future work (e.g. Groves, 2022; Viljoen, 2023). Watkins (2022) notes the importance of teaching data literacy at a time when future employers are looking for AI competencies in employees: “If we don’t educate our students on the potential dangers of AI, we may see harmful consequences in our classrooms and beyond”.

The included articles differ in whether they refer to the use of AI in the curriculum as learning about AI or as learning with AI. Learning about AI relates to developing student understanding of AI’s limitations and benefits. In contrast, learning with AI relates to improving students’ use of AI while working on tasks and assignments. The authors stress the importance of teaching students digital and critical skills regarding the productive and responsible use of AI.

Several authors argue that learning about AI in classrooms should include developing student understanding of the limitations of genAI (Grobe, 2023), including ethical issues such as limited data protection (Zou, 2022). Other articles mention that there is also a need to develop student understanding of the benefits and opportunities offered by AI, both academically and professionally. Griffith (2023) writes that “[w]e can help students consider how and when to use AI in their academic and professional lives”.

The authors mention examples of new practices that can help students acquire new critical skills regarding the use of AI. These include class discussions and student assignments evaluating ChatGPT output (Illingworth, 2023), asking students to “cheat” and document their reflections on ChatGPT (Mintz, 2023), and having students edit ChatGPT-generated texts to help teach them editing skills (Rigolino, 2023). Liu et al. (2023) suggest that teachers should share how they use ChatGPT to prepare their teaching, opening a discussion of AI bias and misinformation.

In relation to learning with AI, several authors mention new learning practices that AI can offer in support of the curriculum, e.g. Griffith (2023) describes teaching her students how to effectively engage with AI when writing prompts for designs.

ChatGPT can help teachers prepare courses

This claim, made in 9 of the articles, relates to the way teachers may use AI as part of their work, arguing that “ChatGPT can help teachers save time preparing lessons and resources” (Liu et al., 2023). The authors highlight that many time-consuming tasks can be automated or supported by genAI. These range from AI-generated drafts of lectures (Grobe, 2023) to using ChatGPT when creating student activities, such as exemplars, practice tests, and quizzes (Liu et al., 2023; Viljoen, 2023).

Viljoen (2023) argues that genAI will make it more feasible to provide differentiated assignments for students at different levels or different tasks for each study group. Exemplifying this, Illingworth (2023) suggests using AI to generate various scenario-based tasks, and Liu et al. (2023) propose the use of automatically generated discussion prompts.

Some authors highlight that such productive use of genAI requires teachers to learn about genAI and develop new skills and practices or to collaboratively develop guidelines for use (Mills, 2023). This points to a need for further professional training for teachers—something that, in turn, necessitates investments from both governments and higher education institutions (Loble, 2023; Watkins, 2022).

Claims about new independent practices of students

The last three claims relate to how students may productively use ChatGPT on their own. None of these student practices is described in any detail in the included articles, and although these practices may in some cases overlap, we have decided to approach them individually.

ChatGPT can support student thinking

The 12 articles that include the claim that ChatGPT and genAI can support student thinking generally imagine three main scenarios: generating ideas, challenging student thinking, and acting as a collaborator. Generating ideas is the most frequently mentioned use case, with many of these uses relating to initial brainstorms (Sarofian-Butin, 2023) that can help kick-start thinking (Baron, 2023). Watkins (2022) highlights the value of such use when approaching a new research topic. ChatGPT can also be prompted to challenge student thinking. This can help students explore different perspectives (Watkins, 2022), generate alternative approaches (Rockwell, 2023), or identify gaps in knowledge (Liu et al., 2023).

These practices all treat ChatGPT as a tool which students use deliberately and for a specific purpose. Mintz (2022) suggests that it may be more fruitful to consider ChatGPT as a collaborator. The ability of ChatGPT to function as a conversation partner, writes Rockwell (2023), offers an opportunity for students to learn to think through dialogue and thereby “rediscover the rich history and potential of this form of engagement”. Liu et al. (2023) propose that dialogues with ChatGPT could benefit from genAI’s ability to simplify complex explanations and address common misconceptions.

ChatGPT can support student writing

Eleven articles include this claim. In 6 of these articles, claims about ChatGPT-supported student writing overlap with the previous claim about ChatGPT-supported student thinking. We nevertheless present them as a separate claim, because the practices described here deal exclusively with how students may productively use genAI, either to generate drafts that they then edit or to improve drafts of their own.

Several authors mention this distinction between letting ChatGPT generate a first draft and using it to improve a text originally written by the student (e.g. Baron, 2023). Watkins (2022) refers to this as replacing or augmenting the writing process, arguing that students are more likely to take the route of augmentation.

To Thaker (2023), this means that student writers can use the ability of ChatGPT to “revise draft writing by improving grammar and clarity”. Liu et al. (2023) also propose that genAI can be used by students to generate feedback comments on their own writing, or even to suggest specific improvements—and the rationale behind them—based on criteria provided by the student.

The articles that propose letting genAI do the initial drafting see it as a way to overcome writer’s block and the dreaded blank page (e.g. Grobe, 2023). Liu et al. (2023) suggest that such initial support can also simply take the form of AI-generated topic sentences or ideas for structuring the text. Even when ChatGPT generates drafts for students, this does not mean that there is no human intelligence involved—the student still needs to bring their ideas, develop a useful prompting strategy, critically assess the outcome, and edit the text to make it suitable for their purposes (Grobe, 2023). These are challenging and formative tasks, similar to what a student goes through when writing a text. In fact, some articles argue that since much of the writing of the future will be undertaken by genAI, we should instead help students become critical readers and good editors (Rigolino, 2023).

Several authors argue that genAI may help level the playing field for students who are critical thinkers but not very strong writers (e.g. Bissonette, 2023). They point out that this has the potential to help otherwise disadvantaged students, such as those with disabilities and reading/writing difficulties. It can also assist students who are not studying in their native language (e.g. Liu et al., 2023) because they can engage with texts and arguments in a deeper way that may otherwise be overshadowed by concerns about grammar or spelling (Watkins, 2022).

ChatGPT can function as a personal tutor

The previous two claims describe ideas about how students can benefit from using genAI within the framework of a teacher-led course. This claim, mentioned in 8 articles, differs in this respect, as it proposes that genAI can be used as a replacement for the teacher role, for instance, by letting ChatGPT create the study plan, propose student assignments, and assess student work (Mintz, 2022; Trumbore, 2023). The difference from traditional self-paced online coursework would be that ChatGPT can mimic a human tutor, adapting content and tasks to student performance (Schroeder, 2022) and providing, in principle, unlimited access to further elaboration and productive detours in a conversational format (Mintz, 2023).

Schroeder (2022) connects this claim to the agendas of self-directed lifelong learning, proposing that with AI tutors, we are approaching “a bright and attainable future for lifelong continuing learning” with universal access to individualised tutoring on any topic.

While the articles generally consider the use of AI-based tutors to be a likely and positive use case, there are some worries that students will not experience interactions with AI chatbots in the same way as with a human tutor. Gill (2023) brings up this issue of authenticity in connection to a mental health support line using a conversational AI tool: “People in distress found the AI-generated responses to be as helpful as human responses, right up until the moment when it was revealed that they were talking to an algorithm”.

Discussion

The analysis presents a thematic representation of the claims made in the sample, which encompass the nature of ChatGPT, how it will change institutional and teaching practices, and how it can support new independent student practices. Many of these claims have a strong resonance with Bearman et al.’s earlier discourse analysis of AI in higher education (Bearman et al., 2023). For example, the claim that ChatGPT is an inevitable disruptor closely parallels the discourse of an imperative response by universities to AI. Moreover, new roles and practices for teachers and students speak to the discourse of altering authority. This shows that, although ChatGPT is a new technology, our conceptualisation of it is rooted in previous ways of thinking and talking about AI. This is the first contribution of this paper. While we talk at length about a “new” technology, what we say about it and indeed how we understand it may be grounded in our visions of the future from a time before the technology itself existed.

So, taken together, what do these articles mobilise? The first, and somewhat surprising, point is that, as suggested by the predominance of positive claims, most of the articles in this review are optimistic about the role genAI can play in higher education. This is remarkable, considering that the broader societal discussions of AI that followed the release of ChatGPT have been dominated by topics like existential risks, bias, hallucinations, fake news, big tech, copyright infringements, and user privacy (e.g. Roe & Perkins, 2023). Although many of these issues are central to the role and purpose of higher education institutions, the included articles have very little to say about how the sector should respond to them. Indeed, the authors view the emergence of genAI as a disruption that can open up new practices, seen as positive and long-awaited developments for higher education. The overwhelmingly positive perspectives may be partly attributed to the well-documented novelty effect, i.e. that teachers and students tend to judge new technologies more positively simply because they are new (e.g. Clark, 1983). A more general tendency towards a hype cycle, with an initial peak of inflated expectations for new technologies (Fenn & Raskino, 2008), is also frequently observed. While more critical perspectives from the higher education sector are appearing as empirical studies emerge, this unbridled positivity is still very marked (McGrath et al., 2024).

We suggest that a consequence of this continued optimism may be to overlook some considerable challenges. Three real concerns are mentioned: that students will not learn to think, since writing and thinking are coupled; that students need to learn ethical means of engaging with genAI through new topics in the curriculum; and that assessment designs must adapt to the new practices to help address academic integrity concerns. Although these concerns are raised, the challenges they entail are surprisingly downplayed; the suggested responses mostly focus on adapting to a new situation rather than trying to influence it. While some emerging empirical literature focusses on micro-issues of teaching and learning, such as whether ChatGPT writing can be distinguished from human writing (Farazouli et al., 2023; Li et al., 2023), there appears to be a gap between these broad-scale concerns and the fine-grained nature of the research that responds to them.

Our second observation is that many authors regard genAI as a catalyst for teaching and learning agendas that already exist in this space. Some of these agendas are supported by new affordances that genAI can bring to higher education, and we note particularly those that align with long-standing discourses in the general literature on educational technologies: that digital technologies will improve access to personal, flexible, and scalable learning experiences. These include positions such as self-directed and lifelong learning; supporting marginalised learners; adaptability and personalisation of the learning experience; and scalability and improved reach of educational opportunities. Again, we suggest caution. In his work on discourses of technologies, Fisher (2010) suggests that flexibility due to technical advances may not ultimately be in a person’s interest, directly counter to the promise contained within the associated discourses. His argument is that technological innovation reflects certain ideologies rather than being neutral (Fisher, 2007). Thus, a discourse of flexibility can mobilise people to believe in the promise of flexibility, and this belief cements activity despite evidence to the contrary. This is not to say that ChatGPT will do one thing or another, but simply that by invoking words like flexible and scalable, certain approaches to higher education may be mobilised. Selwyn (2014) particularly points to how, despite the rhetoric, technology in universities tends to perpetuate the status quo, mirroring inequality rather than overturning it. Thus, a more critical perspective might suggest that while genAI may be a hoped-for catalyst, it may actually bring old affordances rather than new ones.

Others take a more reactive perspective: the emergence of genAI exacerbates the need for reform. This perspective is primarily illustrated by the claims that ChatGPT will necessitate new assessment practices and that ChatGPT will necessitate a new role for teachers. Here, the emergence of genAI creates an urgency to address preexisting issues. As examples, one article thanks ChatGPT for “exposing the banality of undergraduate essays” (O’Shea, 2023), while another argues that teachers need to start challenging their students again if they do not want to be replaced by AI (Farnell, 2023). This differs from the discourse of imperative response, as the response here harnesses the arrival of an innovation to address problems that already needed solving in a pre-AI world. This is strongest in the assessment space, where change is difficult to come by (Boud et al., 2018).

Our final observation is the positioning of students within the articles. Although many authors take an explicitly student-centred perspective, the texts mainly concern how institutions and teachers may respond to genAI. Students are often portrayed as either plagiarists or victims of a failing educational system. Claims about how students may productively use genAI were the least prevalent of the three overall categories, mentioned in 21 articles. This suggests that during the first 3 months, there was more focus on addressing cheating than on developing productive ways for students to use genAI. However far this may be from the authors’ minds, this grey literature may collectively mobilise a view of students in a secondary position.

While we ground these interpretations in our analysis, we note that our claims, like those of the authors of the included commentaries, are speculative. However, we think they serve as useful warnings as institutions, teachers, and students embark on a higher education where genAI is commonplace. They warn against over-optimism, suggesting a deeper and more nuanced consideration of risks and ethics. They remind us that ChatGPT is integrated into a teaching and learning landscape that already has many agendas, and they prompt us to interrogate the value of those agendas. Indeed, empirical research may be highly important here. Finally, we need to ensure, not just suggest, that students are part of any conversation.

Strengths and limitations

The methods employed in this paper represent a systematic, inductive way of bringing perspectives from the grey literature into the research literature. The main limitation to consider relates to the way we identified the articles. Searching sector outlets captures only a subset of perspectives; in the first months after ChatGPT’s release, many conversations took place in other spaces, such as Twitter or various online webinars. Furthermore, owing to our search strategy, the included articles have a strong bias towards authors from anglophone high-income countries. This means that most of the perspectives described in this review reflect fairly similar traditions of teaching and learning. Searching a more comprehensive list of outlets, for instance, including local university newspapers from a wide variety of high-, middle-, and low-income countries, could result in a longer list of claims. Finally, the purpose of this paper is to examine perspectives from the early days of AI chatbots; including articles from a longer period than 3 months could have generated different and perhaps more mature perspectives.

Conclusion

This analysis of the early grey literature on ChatGPT alerts us to claims about the technology and its role in the future of higher education. While the examined literature contains some caveats about its possible negative influences, particularly concerning the decoupling of thinking and writing, the majority of articles were highly positive. As we move forward and improve our understanding of genAI and its effect on teaching and learning, those working in higher education may use this synthesis to be mindful of how our early assumptions can influence what and how we teach and research.