Introduction

As in previous cases of technological progress, the current developments surrounding the introduction of generative artificial intelligence (AI) into everyday life are destined to lead to profound changes. Since generative AIs are used to create texts, pictures, videos, and other types of content from simple text prompts, there is an enormous potential to automate a variety of complex and cognitively demanding tasks. Decades ago, the Internet made a wealth of information instantly accessible, leading to the emergence of the “Google effect,” in which people memorize where to find information rather than the information itself (Sparrow et al., 2011).

Crucially, technology use can lead to overconfidence regarding abilities and knowledge (Eliseev & Marsh, 2023; Fisher & Oppenheimer, 2021; Ward, 2021) and can distort source memory (Siler et al., 2022). The resulting change in how memory is offloaded (e.g., Fisher et al., 2022; for an overview, see Risko & Gilbert, 2016) is likely to be followed by a normalization of the externalization of entire tasks to generative AIs (Skulmowski, 2023).

In contrast to merely offloading memories, the use of generative AIs has several consequences that alter the relationship between humans and technology. While external memory stores play a more passive role and simply deliver information on demand, generative AIs (as their name implies) feature a creative element that, until now, was considered uniquely human. These novel capabilities, which strongly resemble human creative efforts, have raised the question of AI authorship (e.g., Draxler et al., 2024; Thorp, 2023) and the ethical use of AI-generated output (e.g., Eaton, 2023; Lund et al., 2023). Most importantly, an emerging field of research investigates the effects of generative AI use on cognition. This paper offers an overview of some of the most important current results and their implications. After a discussion of the technical background of generative AIs and the problem of (academic) dishonesty arising from their use, recent findings concerning the AI-related overestimation of abilities will be presented. Finally, the wide-ranging implications of these effects for learning and instruction will be discussed.

The Technical Basis of Generative Artificial Intelligence

The chatbot ChatGPT has mainstreamed the possibility of using AI to produce summaries, critical considerations, and even research ideas simply by chatting on a web page. As the output of ChatGPT is in most cases indistinguishable from human writing, this new and popular technology can be seen as a substantial challenge to academic norms at various levels of education (e.g., Bouteraa et al., 2024; Bringula, 2023). Due to the short time this technology has been widely available, few academic norms regarding its use exist. However, incorporating writing or ideas generated using an AI application without acknowledgment is likely to be considered a form of plagiarism by most (see Mogavi et al., 2024; Tlili et al., 2023). After the Internet became widely adopted by consumers, copy-and-paste plagiarism established itself as a major problem in academic settings. Although technology-based solutions such as automated checking software are used by many educational institutions and publishers, idea plagiarism and other forms of plagiarism remain difficult to detect (Roe & Perkins, 2022). Chatbots operating on large language models (LLMs) are likely to further complicate this issue. Such chatbots can be used to answer questions spanning a vast range of publicly available knowledge, generate summaries of research topics, and even identify gaps in the literature in order to come up with research ideas.

The appeal of chatbots such as ChatGPT lies in their ability to autonomously write high-quality texts based on simple commands such as “Write a summary of the history of Psychology,” “Explain the modality effect,” or “What are some of the open research questions concerning cognitive load in digital learning?”. LLM-based chatbots will usually generate a short overview of the research area, summarize two or more perspectives on the topic, and may end with suggestions regarding topics that could be investigated in the future. These relatively short “replies” could easily be combined into an essay or even a thesis in many academic environments, particularly given that the chat can be continued with additional questions until sufficient material has been generated (Giray, 2023). In contrast to copy-and-paste plagiarism, there is currently no definitive method of checking the provenance of such texts (and the underlying ideas) (see Gao et al., 2023). This issue has sparked debate among educators as to whether essays can remain a viable examination method and how academic writing in general will be affected by this technology.
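To illustrate how little effort such prompt-based generation requires, the following minimal sketch shows a single prompt being sent to an LLM through a programming interface. The specific library (the OpenAI Python client) and model name are illustrative assumptions and not details of the studies discussed here; any comparable LLM service would behave similarly.

```python
# Illustrative sketch only: a single text prompt yields an essay-like reply
# from an LLM-based chatbot. Library and model name are assumptions.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; any available chat model could be used
    messages=[{
        "role": "user",
        "content": "Explain the modality effect and list open research questions "
                   "concerning cognitive load in digital learning.",
    }],
)

# The generated overview arrives as plain text that could be pasted into an
# essay, which is precisely what makes provenance checking difficult.
print(response.choices[0].message.content)
```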

Generative Artificial Intelligence Blurring the Line Between a Tool and a Collaborator

Besides the fact that AI-generated texts currently cannot be detected using conventional plagiarism-checking software, the similarity of chatbots to other AI-based technologies that are widely used without a comparable level of public debate should be considered. Current search engines are able to find text fragments in the most obscure publications available in digitized form without requiring much effort or even specialized knowledge. In addition, search engines and scientific social networking sites have long offered the functionality to suggest research that may be of interest to users based on AI algorithms. Lastly, the interactive dialogue provided by LLM-based chatbots could be thought of as a digital version of discussions with peers or advisors. In all of the above cases, there would be no formal requirement to list the contribution of the AI or another human in the acknowledgments section of a research paper or a thesis. However, students and educators still largely find the unacknowledged use of generative AIs unacceptable (Barrett & Pack, 2023), underlining the special status of generative AIs.

The emergence of LLM-based chatbots raises novel ethical challenges, with some authors claiming that we are entering the era of “postplagiarism,” in which advances in technology will render the detection of plagiarism impossible (Eaton, 2023). Furthermore, keeping track of one’s own work and AI-based contributions is likely to become a difficult task in itself, as outlined in the following section.

Placebo Effects Resulting from Artificial Intelligence Usage

As just described, generative AIs can be used as tools that strongly enhance human capabilities. Prompt-based AI tools generate texts, images, and even entire videos from written commands; however, this raises novel issues regarding (academic) honesty. While these new technologies have the potential to revolutionize education, they have been found to introduce the risk of certain negative effects. In the following sections, two types of cognitive misattribution related to the use of AIs are presented, namely placebo effects and ghostwriter effects, and their complex relationship with the human tendency to anthropomorphize technical artefacts is discussed.

Artificial Intelligence as an Illusory Augmentation

Currently, a number of effects related to how generative AI use can alter humans’ assessment of their own abilities are under investigation. Probably the most consequential development consists in AI-related placebo effects (e.g., Kosch et al., 2023; Villa et al., 2023). Just as the consumption of “make-believe” medication, for instance, in the form of pills not containing any medically active substances, can trick patients into believing that their symptoms have improved, AI-related placebo effects involve the illusion that one has improved or acquired an ability through AI use. Consequently, AI users can misjudge their own abilities.

Examples of this type of placebo effect include a study in which participants estimated their performance in a word puzzle task as higher when told they were receiving AI support (Kosch et al., 2023) and an experiment in which riskier decisions were made in a card game by participants wearing a brain–computer interface that was presented to them as an augmentation system (Villa et al., 2023). In that study, participants using the fake augmentation system actually believed in the benefits of the technological artifact, even after having completed the card game. Villa et al. (2023) were even able to show that being told to utilize augmentation technology affects brain waves as measured using electroencephalography (see also Czeszumski et al., 2021, for related findings on action monitoring in human–robot interactions).

Artificial Intelligence as an Unacknowledged Ghostwriter

Another finding related to the failure to correctly attribute AI-generated work is the “AI ghostwriter effect” described by Draxler et al. (2024). In their studies, an intricate pattern of ownership and authorship of texts emerged. They found that while AI users did not claim ownership and authorship of short AI-generated texts produced on a website, they still did not publicly acknowledge the contributions of the AI. Personalizing the AI model using information gained from a survey did not alter the ghostwriter effect. However, participants felt a higher degree of ownership if they had a stronger influence over the AI-generated texts. This seemingly contradictory pattern suggests that AI users are aware of their limited contribution to AI-generated texts but, for reasons that remain to be investigated more closely, tend not to publicly acknowledge the support received from an AI.

The AI ghostwriter effect differs from the placebo effect and the Google effect in that, at least judging from the two studies reported by Draxler et al. (2024), people appear to be fully aware that generative AIs are the main driver behind the text output. In contrast to confusion regarding the source of “googled” information (Siler et al., 2022) and unconscious effects of technology-based augmentation on the judgment of one’s own abilities (Villa et al., 2023), letting an AI generate a text seems to be an intentional and conscious action that is cognitively monitored and remembered. Thus, people may actually perceive generative AIs as digital assistants, similar to other forms of technology that are helpful but do not require any kind of acknowledgment. While the question of whether users perceive AIs as human-like partners will be discussed in a later section, a cognitive explanation for not giving credit to AIs could be that humans perceive generative AIs merely as a tool to offload parts of their work (Skulmowski, 2023).

The question of authorship and attribution is further complicated by several aspects. First, while not writing entire texts themselves, AI users do contribute to an AI-generated text by composing prompts, checking the AI output, and curating the most appropriate fragments of the output. Thus, humans still actively contribute to AI-generated content and may not wish to fully assign the credit for a work to an AI, perhaps even fearing that acknowledging any AI assistance will lead others to believe that they did not contribute anything original themselves. As a potential solution, Draxler et al. (2024) suggested providing the option of a more detailed contribution statement describing which tasks were completed by the AI. Furthermore, the aspects of ethics, authorship, copyright, and attribution are in a complex relationship centered around human content creation. This leads to the question of whether humans perceive AIs as equal cooperation partners, which will be discussed in the following section.

Anthropomorphization or Assistance?

The ongoing changes in how we use AI systems in many facets of everyday life raise the question of how the relationship between humans and AIs will be affected by the ever-increasing functionality and more natural modes of interaction afforded by modern AIs. Most importantly, this relationship is likely to determine whether AI users will distinguish between their own work and AI-supplied contributions.

For decades, the relationship between computers and humans has been investigated in the context of anthropomorphization. The highly influential theoretical framework called Computers as Social Actors (CASA; Reeves & Nass, 1996; for a recent overview, see Nielsen et al., 2022) holds that humans expect computers to function in alignment with certain social rules, leading them to perceive technology as a human-like partner. These assumptions have been extended to AI systems (e.g., Tschopp et al., 2023) and to devices such as smartphones (e.g., Wang, 2017) and smartwatches (Makady, 2023). Research from this field indicates that elements that trigger the humanization of chatbots increase their social presence, which, in turn, amplifies trust, empathy, and satisfaction (Janson, 2023).

The current research literature on anthropomorphization demonstrates that subtle design factors can have a strong impact on attitudes toward AI agents (and interactions with them). For instance, the smiles of AI-based agents can be an important determinant of behavior, such as in the case of charity donations (Baek et al., 2022). It should also be noted that a more human-like design can have divergent effects on different people, as in a study suggesting that people with two different types of investment goals are affected differently by the human-like design of robots (Baek & Kim, 2023a). Differential aspects behind the tendency to anthropomorphize have been investigated for years (e.g., Waytz et al., 2010), potentially complicating the design of generative AIs.

A comprehensive meta-analysis on the anthropomorphization of various artificial entities (including chatbots) by Blut et al. (2021) revealed a complex interplay of design choices, user traits, and the purpose of the interaction. Blut et al. (2021) found that anthropomorphization is an important factor that increases the intention to use technology (see also Li & Sung, 2021). Furthermore, they identified, among others, intelligence, likability, and social presence as mediators and emphasized that these should be considered when studying the intricate relationship between humans and artificial agents. In addition, they found support for the claim that anthropomorphization plays a key role in interactions related to information-processing tasks, underlining the relevance of anthropomorphization for the present issue of the effects of generative AI.

Another recent study revealed aspects that may affect the prevalence of placebo and ghostwriter effects. Hong et al. (2022) investigated whether the creativity and human-like qualities of AIs used for music generation affect people’s attitudes toward the generated musical pieces and the AIs themselves. They found that the creativity of the AIs did not influence their participants’ tendency to acknowledge the AI as a “musician,” whereas human-like qualities did. This result shows how important anthropomorphization cues are for establishing (generative) AIs as creative entities. Interestingly, Hong et al. (2022) report that people who consider AIs to be musicians rate AI-generated music more favorably than those who do not. These findings will need to be assessed in the context of placebo and ghostwriter effects in academic settings. It would be plausible to assume that those who attribute human-like qualities to an AI will rate the AI output as being of higher quality, in turn prompting them to use it in their own work. However, the ghostwriter effect could result in a conflict for AI users: If they anthropomorphize the AI and like the quality of the output, how can they take the credit for it? This conflict reveals a research gap within the CASA literature, as little is known concerning these emerging complex relationships with AI collaborators, in particular in the context of education.

Importantly, most of the studies presented in the current section were published before the widespread adoption of generative AI systems in 2023. As a result, it may be worthwhile to consider how triggers for anthropomorphization could affect interactions with (generative) AIs (Jacobs et al., 2023; Laban, 2021; Lee et al., 2020; Pelau et al., 2021; Rajaobelina et al., 2021).

A well-known effect in human–technology interaction is the uncanny valley (Mori, 1970), named after the nonlinear nature of humans’ assessment of nonhuman entities. While people’s assessment becomes more positive when artifacts feature human-like qualities, this trend sharply reverses as soon as entities are almost human-like but can still be identified as non-human (for overviews, see Vaitonytė et al., 2023; Wang et al., 2015). This effect, with its primary roots in visual appearance, has been transferred to the psychological level in the form of an uncanny valley of the mind (Stein & Ohler, 2017). Stein and Ohler (2017) describe that digital entities that are hard to distinguish from real humans regarding their mental capabilities can elicit negative responses, similar to the visual uncanny valley effect.

A number of studies suggest that humans prefer to share their workload with technological aids under certain circumstances (e.g., Wahn & Kingstone, 2021; Wahn et al., 2023), while other factors determine whether an AI is found to be trustworthy and acceptable or whether users feel uneasy because of the “intelligence” of their AI cooperation partners (e.g., Baek & Kim, 2023b; Ma & Huo, 2023). These opposing tendencies result in a complex situation, in particular concerning the placebo effects discussed in earlier sections.

The fundamental issue that will need to be investigated in future studies is how the conflict between users being aware of an external entity helping them with their work and their knowledge that AIs are not human cooperation partners will affect their judgments and usage habits. Of particular importance are the factors that will help users keep track of their own work and the contributions of AIs. The uncanny valley effect could be helpful in this regard. For instance, it has been found that the uncanny valley effect can be reduced by the intentional dehumanization of robots (Yam et al., 2021). At the same time, the aspect of hierarchy in human–technology cooperation has been found to have a high impact, with negative feedback given by an anthropomorphized robot “supervisor” resulting in a more negative perception than feedback provided by a non-anthropomorphized robot (Yam et al., 2022). An additional study highlights the interplay between perceived power dynamics and anthropomorphization. In a study by Yang et al. (2022), participants assumed that anthropomorphic AI agents would offer a better performance when the participants themselves were in control of the situation. However, if the AI agents were perceived to be in control, participants rated the less human-like AI agent more favorably. This aspect will need to be kept in mind in the design of generative AIs, as not much is known about whether users perceive these AI tools as more “powerful” than themselves.

From the studies just summarized, we can gather that human-like technology is likely to produce strong and complex emotional responses that can, however, be attenuated through dehumanization. This result pattern suggests that there are ways to support AI users in their monitoring process. For example, as one way of making users aware that AIs are not merely “anonymous assistants” whose work does not need to be attributed, AI tools could be given the functionality to remind users that they are tools and that their use may need to be disclosed in some cases, with corresponding recommendations being included automatically for task types that usually require an acknowledgment. This aspect will likely become even more important as soon as word processors enable AI-based functionalities such as sentence completion by default, thereby making it more difficult for users to keep track of the division of work.
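As a purely hypothetical sketch of how such a disclosure reminder might work, the following snippet appends an acknowledgment recommendation to AI output for task types that usually require disclosure. The task categories and recommendation text are invented for illustration and are not part of any existing tool or of the studies cited above.

```python
# Hypothetical sketch of the disclosure-reminder functionality proposed above.
# Task categories and recommendation texts are illustrative assumptions.
DISCLOSURE_RECOMMENDED = {"essay", "thesis", "research_idea"}  # tasks that typically need acknowledgment
ROUTINE_ASSISTANCE = {"spell_check", "grammar_check"}          # tasks usually treated like any other tool use

def attach_disclosure_note(ai_output: str, task_type: str) -> str:
    """Append a reminder to AI-generated output for task types that usually require acknowledgment."""
    if task_type in DISCLOSURE_RECOMMENDED:
        note = ("\n\n[Reminder: this text was generated with AI assistance; "
                "consider acknowledging this in your submission.]")
        return ai_output + note
    return ai_output

# Example: an essay draft receives the reminder, a spelling correction does not.
print(attach_disclosure_note("Draft paragraph on the modality effect ...", "essay"))
print(attach_disclosure_note("Corrected spelling of 'accommodate'.", "spell_check"))
```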

At the same time, people’s tendency not to disclose AI use could be a result of knowledge concerning the legal status of AI output and copyright. According to some analyses, the copyright owner of the output created by generative AI systems is the user who gives the commands to the AI (Hugenholtz & Quintais, 2021). However, it could be helpful to inform AI users of the rich source material required for the training of generative AIs, the work that has gone into training them, and the resulting ethical issues of passing off this output as their own work. Considerable work must be done to arrive at definitive guidelines on how to design the cooperation between humans and generative AIs in a manner that prevents placebo effects, distorted judgments, and unethical use.

Effects on Learning Outcomes

The general influence of generative AI use on learning outcomes and cognitive capabilities is currently being investigated. Due to the short time that generative AIs have been widely available, no studies concerning the long-term effects of these systems have been published yet, but several empirical studies offer preliminary evidence regarding the effects of generative AIs on learning and related cognitive capabilities. As generative AIs have an unprecedented potential for the automation of highly complex tasks, the literature contains various hypotheses concerning the effects of AI use in educational contexts (e.g., Bai et al., 2023; Farrokhnia et al., 2023; Gill & Kaur, 2023; Kasneci et al., 2023; Lee, 2023; Skulmowski, 2023). Some authors highlight the risk of diminished abilities in various domains (e.g., Bai et al., 2023; Skulmowski, 2023). As a first look into the research on the effects of AI on learning and education, the following section presents a discussion of findings regarding self-regulated learning and creativity, followed by a brief summary of exemplary studies on using AIs for training writing and coding skills.

Self-Regulated Learning

One of the main reasons why placebo and ghostwriter effects should be alarming to educators is that these attribution tendencies could interfere with self-regulated learning. Self-regulated learning describes a set of methods and habits that help learners plan and effectively use the time necessary to achieve their learning objectives (Zimmerman, 2002). An important prerequisite for self-regulated learning is that learners need to be able to accurately monitor their progress in order to estimate how much more time and effort is necessary for them to achieve their goals (Seufert, 2018; Wolters & Brady, 2021; Zimmerman, 2002). As a result of placebo effects, learners could develop the impression that their abilities are much higher than they actually are, possibly leading them to conclude that they need less further practice. Having the illusion of being a better writer, artist, or composer will likely negatively affect the training process, at least for those learners eager to accept self-serving interpretations.

Furthermore, self-regulated learning has been linked to Sweller et al.’s (1998, 2019) cognitive load theory by Seufert (2018, 2020). Seufert (2018) describes how the different tasks involved in self-regulated learning, such as monitoring and goal-setting, all introduce their own cognitive load into the learning process. The emerging literature on placebo and ghostwriter effects suggests that keeping track of one’s own contributions to a task and at the same time resisting the urge to pass off work contributed by AIs as one’s own (or navigating the complexities of proper acknowledgment) could result in considerable additional cognitive load.

However, current research on generative AIs and self-regulated learning has revealed several positive effects. In a recent study, a generative AI was more effective at fostering self-regulated learning (including motivation and study habits) than a more traditional AI-based implementation (Ng et al., 2024). Given the scarce empirical literature published on this issue, it will be interesting to assess the costs and benefits involved in generative AI use. While it is possible that generative AIs introduce cognitive load into learning processes as summarized earlier, they may nonetheless result in a positive net effect on learning by raising motivation and instilling more consistent learning habits (for an overview of cognitive cost–benefit models, see Skulmowski & Xu, 2022). Different facets of self-regulated learning have been conceptualized as “layers” (Wirth et al., 2020), and it is possible that the use of generative AIs has different or even conflicting effects on these components. Further research is needed to determine exactly how the different processes involved in self-regulated learning are affected by AI use.

Creativity and Creative Writing

As an example of research on the effects of generative AIs on cognitive capabilities, let us now turn to creativity. Since the creative output of generative AIs has been found to match the level of human creativity (Haase & Hanel, 2023), there is a widespread hope that generative AIs will support humans in various creative tasks, such as creative writing. It is important to distinguish isolated creative (learning) activities (such as creative writing) from creativity (or the ability for creative thinking) as a trait. Regarding the latter, the empirical evidence appears to be complex and not entirely positive. Initial results indicate that the ability for creative writing can be weakened through generative AI use (Niloy et al., 2023). Such negative effects could be explained by the fact that generative AIs enable the externalization of tasks that depend on fundamental cognitive faculties (Skulmowski, 2023). Reduced training of basic and innate capabilities may thus have grave consequences (León-Domínguez, 2024; Skulmowski, 2023) that will need to be explored further. It is important to note that we are at a very early point regarding conclusive empirical evidence on the effects of generative AIs on creativity, as a recent study found the opposite effect, namely an increase in creative problem-solving strategies through the use of a generative AI (Urban et al., 2024). Another recent study found that chatbot use can increase the quality of new ideas in a task in which alternative uses for a paperclip were to be found (Habib et al., 2024).

These divergent effects can most likely be attributed to differences in the investigated tasks: creative writing versus creative problem-solving. Yet, educators should currently be wary of letting students externalize their creative tasks to generative AIs until more findings have been published and precise guidelines can be formulated. Crucially, it will be important to distinguish between AI-based support improving learners’ creative output and habitual AI use negatively affecting learners’ creativity as a trait. It will be interesting to see whether creativity can be fostered by letting learners observe the AI generate output and by analyzing the steps involved in these creations in order to arrive at creative strategies.

Coding and Academic Writing

The application of generative AIs to educational activities appears to be suitable for certain, quite specific tasks. For instance, performance in a programming course was shown to be higher with support from a chatbot (Yilmaz & Yilmaz, 2023). Likewise, generative AIs appear to be helpful for engineering students’ essay writing (Bernabei et al., 2023). However, training academic writing using generative AIs can involve several obstacles, as their output has been found to contain inaccurate or false information (Alkaissi & McFarlane, 2023). As a result, the use of generative AI still requires caution and critical reflection. But do students and educators appropriately consider the risks and potentials of generative AI for education? In the following section, their perspectives on AI use will be summarized.

Students’ and Educators’ Perspectives on Artificial Intelligence

Since the widespread availability of generative AI tools, several surveys have been conducted to assess students’ and educators’ hopes, fears, and perspectives concerning AI use in education. One of the first studies in this area was published by Chan and Hu (2023), based on a sample of 399 university students. In their study, the overall assessment of AI for education was rather positive, with students highlighting the functionalities for support and assistance offered by generative AIs. However, their study also revealed several aspects deemed problematic by the students, such as ethics, privacy, and the potential for an overreliance on these tools that could negatively affect their development. ElSayary (2023) investigated teachers’ attitudes concerning ChatGPT and found that they mainly see its benefit in planning their lessons and specific teaching activities, while the aspects of grading and feedback were not the main focus of their evaluation of this tool.

Barrett and Pack (2023) directly compared university students’ and educators’ assessments of AI use in six subtasks of academic writing. While they found that students’ and educators’ attitudes generally did not differ substantially across the task components they assessed, there were some points of disagreement. The educators found it far more unacceptable than the students to use generative AI in the brainstorming phase without acknowledgment. For outlining a paper, students found it more acceptable than educators to use an AI to come up with initial ideas and to use an AI tool with or without acknowledgment. Regarding the writing and revision of an essay, students’ and educators’ assessments were similar, but educators more strongly disagreed with using an AI tool to write or revise an essay with or without acknowledgment. Overall, the results of that study indicate that while educators and students seem to be in agreement that unacknowledged AI use is not acceptable for brainstorming, writing, and revision, the majority of students at least somewhat agree with AI use if a particular task has already been practiced and if the AI output is only used to come up with first ideas. This assessment is in line with the perception of AI as an assistant, but a considerable part of the student sample appears to have no problem with using AI as an unacknowledged ghostwriter. These differences in judgments regarding the ethical and educational aspects of AI use should be monitored in the future in order to arrive at common guidelines with a high level of acceptance and compliance.

Potential Solutions and Novel Avenues for Research

The reviewed findings regarding placebo effects, ghostwriter effects, and potentially negative long-term effects have several implications that should be further investigated. These implications concern learning and teaching in general as well as content production, such as (academic) writing, in particular. Proposals for solutions worthy of investigation are presented in the following sections.

Increasing Transparency and Awareness

The current debate surrounding generative AIs in education revolves around how to train students to use these new tools (e.g., Kasneci et al., 2023). If AI users knew more about the sheer volume of training data necessary for generative AIs and the effort required to train an AI, their appreciation of the work behind these systems could grow, possibly reducing the cognitive misattribution of AI-generated work as one’s own. Furthermore, by disclosing (at least a part of) the training material relevant for each output, users could gain insights into the body of work that went into it, potentially further helping them to distinguish their own work from AI-based contributions.

Based on the research on the uncanny valley (of mind), it should be investigated how different forms of embodiment (e.g., Cassell, 2000) affect users’ perception of AI-generated content. For example, users may be more inclined to attribute work to an embodied AI featuring a human face or other (visual) human characteristics in the user interface. The currently popular chatbot-style interfaces used by several AIs may make it particularly easy to overlook that the AI is a tool whose use needs to be acknowledged (as this interface strongly differs from typical software interfaces), while at the same time not leading to enough anthropomorphization for the AI to be considered an equal partner. This appearance of a “silent partner” may be one of the contributors to placebo and ghostwriter effects.

Distinguish Between Tasks That Should and Should Not Be Automated

Students and educators clearly distinguish between using AIs in order to automate well-trained tasks (and task components) on the one hand and externalizing entire tasks that would usually be expected to be completed by learners themselves on the other hand (Barrett & Pack, 2023). This very distinction is likely to constitute the core of future research on AI-based externalization. A recently presented model of activity-based learning could guide this research area. Skulmowski (2024) developed a model with three layers of learning activities based on Pacherie’s (2008) conceptualization of action planning. In this model of learning activities, an educational task commonly consists of a semantic part at a high level of abstraction, such as the intention to write an essay (Skulmowski, 2024). This layer is thought to be associated with a considerable level of cognitive load and requires a large amount of planning, decisions, and acquired skills (Skulmowski, 2024). In the case of writing an essay, this could include knowledge regarding style and composition and a firm grasp of the content to be discussed in the text. These semantic activities are thought to consist of a number of proximal activities (Skulmowski, 2024). In the case of essay writing, these proximal activities would be the formulation of individual paragraphs and sentences. Proximal activities are usually realized through motor activities (Skulmowski, 2024), such as typing individual letters or writing them down on paper.

The three-layer model of learning activities by Skulmowski (2024) could be used as a starting point to investigate which tasks can be externalized without significant risks to learners. It is plausible to assume that the lower layers of learning tasks lend themselves to AI-based externalization. As soon as students have mastered how to write on paper, the enhanced functionalities for easily correcting and revising texts using word processors offer new possibilities to train the layer of proximal activities. When the motor activity involved in writing is no longer at the center of the learning task, the potential of technology for quickly and easily revising sentences and paragraphs for style and flow becomes more valuable. It is plausible to assume that learners might benefit from AI-based functionalities at this stage, such as receiving support and feedback concerning their grammar, style, clarity, and factual correctness. However, as soon as AI functionalities take away too much effort and do not foster understanding, such as sentence completion tools in AI-enhanced word processors, there will likely be a danger of overreliance on these tools. According to this logic, the highest level of learning tasks, the semantic component, will be at the greatest risk of diminished ability and illusions of competence when offloaded to an AI. Generating an entire essay simply by writing a series of prompts concerning the content, the intended audience, and the style cannot be a substitute for engaging with the content, acquiring one’s own writing style through training and feedback, and learning to effectively communicate information.

As a consequence, educators should ensure that learners have mastered the lower layers (or components) of a task before letting learners automate them using an AI, similar to how calculators are introduced and used in schools. However, it will be necessary to analyze learning tasks and objectives with high precision in order to arrive at guidelines concerning the skill levels at which different parts of tasks can be offloaded. More empirical research is needed to ascertain the effectiveness of such an approach and its transferability to different subjects, content domains, and types of learning tasks.

Foster Self-Regulated Learning by Keeping Track of Learners’ Progress

Based on recently published initial evidence in favor of generative AI-based support for self-regulated learning (Ng et al., 2024), generative AIs used for educational purposes could serve as useful monitoring tools for the entire learning process. However, it is likely that such generative AIs would need to offer a tailor-made toolset appropriate for education. In accordance with the discussion of layer-based models of learning activities in the previous section, educational chatbots should acquire a learner model instead of starting from a tabula rasa with each new chat. Thereby, they could support younger learners in developing their ability for self-regulated learning by keeping track of their progress and highlighting areas they should work on. While doing so, the generative AI could offer suggestions on which parts of the task learners should not offload. The development and evaluation of such an educational generative AI should make use of the wealth of findings in the literature on anthropomorphization in human–technology interaction briefly discussed in earlier sections. By incorporating such functionalities, generative AIs could attain the status of mentors (see Baylor, 2003) rather than being assistants or even unacknowledged ghostwriters.
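As a hypothetical sketch of what a persistent learner model for such an educational chatbot might look like, the following snippet keeps track of skill estimates across sessions and flags subtasks that should not yet be offloaded. All field names, skill labels, and thresholds are illustrative assumptions rather than features of any existing system or of the studies cited above.

```python
# Hypothetical sketch of a persistent learner model, illustrating the idea of
# not starting from a tabula rasa with each new chat. All names and thresholds
# are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class LearnerModel:
    learner_id: str
    skill_levels: dict[str, float] = field(default_factory=dict)  # e.g., {"paragraph_writing": 0.4}

    def update(self, skill: str, assessment: float) -> None:
        """Blend a new assessment (0-1) into the stored estimate for a skill."""
        previous = self.skill_levels.get(skill, 0.0)
        self.skill_levels[skill] = 0.7 * previous + 0.3 * assessment

    def suggest_no_offload(self, threshold: float = 0.6) -> list[str]:
        """Return subtasks the learner has not yet mastered and should therefore not offload to the AI."""
        return [skill for skill, level in self.skill_levels.items() if level < threshold]

# Example: the chatbot would advise against offloading paragraph writing.
learner = LearnerModel("student_42", {"outlining": 0.8, "paragraph_writing": 0.3})
print(learner.suggest_no_offload())  # ['paragraph_writing']
```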

Develop and Evaluate AI Training for Learners and Instructors

The disruption caused by the introduction of generative AIs is a global challenge for education. Therefore, it is necessary to quickly establish global standards, for instance, by comparing the guidelines adopted by different institutions (e.g., Moorhouse et al., 2023) and evaluating their long-term effects on learners. Beyond global standards for working with generative AIs in educational settings, it would also be beneficial to develop and evaluate training programs for educators at all levels of education. The trend toward digitalization in the last few years has shown that educators require continued training in the use of emerging digital technologies. Such training programs should also be offered for the utilization of generative AIs, and best practices for such programs should be made available based on evaluations.

Conclusion

When pitted against the human urge to anthropomorphize technology, self-serving illusions of competency appear to be the more likely outcome for some learners. The emerging research on AI-based placebo effects suggests that, instead of perceiving a generative AI as a coworker or assistant, humans often prefer to attribute the (quality of) AI output to their own skills and abilities, even claiming intellectual ownership of AI-generated content. Given that previous research indicates a strong tendency to anthropomorphize technology, it is somewhat surprising that people appear to lack a propensity to give a generative AI (and the body of work it was trained on) any credit. These findings could be explained by the idea that technology is often considered a cognitive extension. If one considers an AI assistant a part of one’s cognitive system, it may appear ethically sound to assign oneself all the credit for a work. Further research is necessary to determine the exact mechanisms underlying the human attribution of credit toward generative AIs. Most importantly, illusions of ability due to AI support may turn out to be major problems for all levels of education. The effects on cognitive development and strategies to counter placebo effects urgently need to be investigated.