Introduction

At the risk of stating the obvious, artificial intelligence (AI) poses many ethical challenges for our world, now and in the future. The massive uptake of ChatGPT, an AI application that generates natural-sounding text from large datasets of human language, has brought this debate into sharp focus. For decades, a community of critics has warned that the unthinking acceptance of complex technology, such as AI, risks sacrificing human needs and that data may be used unscrupulously in digital economies (Loeb 2021; Knox 2019). AI technology now pervades personal, workplace, and educational environments, even if this is not always apparent to users (Bearman and Luckin 2020; Siemens et al. 2022). Associated ethical challenges, such as the pervasive ‘datafication’ and monetization of our daily lives and the potential harm to our well-being, attention, and decision-making, are now in the mainstream news (Hari 2022; Zuboff 2019).

AI and software applications create efficiencies, but automated decision-making can reproduce the biases inherent in data shaped by cultural and social assumptions (Perrotta and Selwyn 2020). Machine learning algorithms are also complex and difficult to comprehend, so automated processes may be misused as new forms of surveillance and control (Andrejevic 2019; Bayne et al. 2020; Buchanan et al. 2018; Wajcman 2010). Globally, many ethical frameworks with guiding principles have been formulated in response to these concerns. UNESCO’s policy guidelines for education and research, for example, emphasize human rather than technical capability development for an AI world in ‘critical and creative thinking, teamwork, communication, socio-emotional and AI ethics skills’ (UNESCO 2022: 33).

However, higher education is struggling to keep up with AI technology and its ethical implications (Bozkurt et al. 2021; Markauskaite et al. 2022). Bayne et al. (2020) invoke Latour’s black-box concept to warn that teachers and students may unwittingly accept undesirable outputs from technology when unaware of its complex inner workings. A critical stance on the surveillance and exploitation of educational data in commercial platforms is essential (Knox 2019; Selwyn et al. 2021; Williamson and Eynon 2020). Used ethically, however, AI could help many students to navigate complex digital learning environments.

For example, personal assistants might act as a human interface, reduce keyboard interaction, and benefit students and teachers (Seymour et al. 2018). AI avatars could function as assistive technology for people with disabilities and have the potential to empower people who experience discrimination based on their appearance (Boucher 2022). AI promises much in many areas of education, when risks and unintended consequences are minimized, and its capabilities are designed, developed, and deployed in critical, creative, and ethical ways (Bayne et al. 2020; Selwyn et al. 2021). Hence, advocates of AI in education call for a stronger pedagogical and ethical approach, with more practical examples and guides for educators that are less technology-centric and more interdisciplinary (Bearman et al. 2022; Zawacki-Richter et al. 2019; Zhang and Aslan 2021).

Historically, research into artificial intelligence in education (AIED) has focused on highly technical, adaptive, and intelligent tutoring applications. Automated systems and processes may be used to collect student data, provide personalized recommendations and guidance, and support educators in decision-making (Hwang et al. 2020). Much research has also studied AI systems that automate grading and feedback and predict students’ progress (Zawacki-Richter et al. 2019). Many researchers in the learning analytics field also pursue a broad AI agenda, including policies, processes, and practices to evaluate and improve education with ethical and pedagogical approaches (Buckingham Shum and Luckin 2019). Yet a recent review found that AI studies in higher education lacked grounding in educational theory, practice, and ethics, with most studies originating from computer science, engineering, and mathematics (Bozkurt et al. 2021). Applied AI research in higher education tends to come from computer scientists and is often based on the testing and reproduction of knowledge rather than on critical and reflective capabilities (Bates et al. 2020).

With the explosion of automated chatbot technologies, both educators and students need to engage with AI and understand how it might influence teaching and learning. There is much scope for multidisciplinary research into the practice of AI-mediated learning as a tool for ethical, creative, and critical reflection in diverse settings (Bearman and Luckin 2020; Markauskaite et al. 2022). As AI continues to develop and students and educators interact more with such applications, AI may move from a support role to an ‘active and equal partner’ (Siemens et al. 2022: 8).

In this qualitative study, educational and business information researchers have pooled their expertise to design, develop, and trial recorded videos presented by an AI-generated avatar. Our postdigital research traverses the discipline boundaries of business information systems and education to generate transdisciplinary insights (Fawns et al. 2023). The aim is to understand how students experience learning in a self-paced interactive online environment with AI-generated avatars and whether this format helped them to reflect on business ethics. We are interested in the affective attributes experienced by students and their tutors in relation to avatars. We then explore the implications for teaching with AI-generated avatars as presenters.

AI Avatars in Postdigital Education

Postdigital education acknowledges that web and mobile technologies are part of the fabric of the twenty-first century. The automated efficiencies of AI mediate our everyday experience, intensifying and amplifying this reliance, and higher education is no exception. Much activity in education already consists of interactions between humans mediated by technology in the classroom and at home, whether that be digital computers or analog whiteboards, for example (Snaza and Weaver 2014; Wardak et al. 2021), so much so that it is difficult to imagine life without them (Hari 2022).

Underpinning our research is a post-phenomenological, postdigital methodology that centers AI from a human perspective, as well as considering it integral to a non-human assemblage. In this way, we hope to ‘surpass instrumentalist views and gain nuanced understandings’ (Aagaard 2017: 529). This critical lens sees the world as more complex than binary oppositions of virtual or physical, human or non-human, and conceptualizes these experiences as entangled and impossible to neatly separate (Jandrić et al. 2018). Similarly, pedagogy, technology, and ethical considerations are inextricably entangled (Fawns 2022).

Unproductive and hackneyed narratives of AI robots as an existential threat are refuted in postdigital education. The terms ‘virtual human’ or ‘digital human’ may describe the code and data of AI-generated avatars designed to create the illusion of being human, but our intelligence and experience are fundamentally different from computational processes (Burden and Savin-Baden 2019; Seymour et al. 2018). Algorithms and avatars cannot replicate human intelligence that is embodied, ‘autonomous, resilient, and integrated’ (Maruyama 2020: 245). AI-generated avatars, no matter how sophisticated, only approximate the relational and social aspects of human intelligence, teaching, and learning.

Still, we are most influenced by technology that looks and behaves like a human (Borenstein and Arkin 2019). For varying reasons and in varying circumstances, teachers and students might prefer to interact with technology rather than with people (Bayne 2020; Selwyn et al. 2021). For example, those who are shy or feel uncomfortable in social situations may be more inclined to ask questions of a chatbot than of a busy person. The consumption and production of synthetic media will continue to shape our interactions and, ultimately, higher education. Consider the popularity of the algorithmically generated online celebrity Lil Miquela, with over three million followers on Instagram (Lacković 2021). Although AI algorithms in virtual human form can evoke unsettling and uncanny feelings, recent research suggests that people may prefer to interact with highly realistic AI-generated avatars rather than simple caricatures, as long as the process is transparent and trustworthy (Seymour et al. 2021).

Avatars are becoming more human-like with nuanced vocal and facial expressions and are presented to mainstream audiences (Seymour et al. 2021). The synthetic media techniques behind AI-generated avatars are so advanced that many people have increasing difficulty discerning whether a representation is a human or a ‘deepfake’ (Vaccari and Chadwick 2020). Deepfakes of celebrities are common (Blackall 2020). Avatars that appear to be human might be created for entertainment but could also constitute a form of deception and lead to manipulation and control (Pasquale and Selwyn 2023). Students could potentially project traits or characteristics onto the technology that it does not possess, thinking, for example, that the AI avatar ‘cares’. Many philosophical, ethical, and legal issues arise when avatars are mistaken for humans (Seymour et al. 2018), issues important to the decision-making students will grapple with in future careers.

Our design intention in this study is far more circumspect than simulating interaction with digital humans. We do not attempt to reproduce or imitate the complex face-to-face interactions of teachers and students via interactive or cognitive agents. Highly realistic, sophisticated human avatars and interactions in immersive environments are explored in other cutting-edge information systems studies (Seymour et al. 2021). While we acknowledge the potential of cognitive agents for conversational computing and personal tutoring, among many other educational applications, our study has different aims. We borrow from multiple disciplines, including education, postdigital science, and business ethics, to begin a transdisciplinary conversation on the use of AI-generated avatars in education. In doing so, we center the student voice, focusing on students’ experience of learning in an online module presented by AI-generated avatars and the subsequent implications for teaching.

The study responds to and explores students’ acceptance of, surprise at, or indifference to learning from AI-generated avatars from a postdigital perspective. We concur with Savin-Baden (2021) that engaging with and critically questioning AI is essential, especially as the very idea of what is human begins to blur. As there is little evidence about how AI-generated ‘virtual humans’ may alter our thinking, let alone our education (Reader and Savin-Baden 2020), we consider it fitting to integrate such technology into course design and for students to consider its impact through their own experiences. Accordingly, we designed educational videos with AI-generated avatars and associated activities so students might experience and critique their impacts first-hand. Our intention was for students to explore the influence of algorithms and generative technologies as commonplace in our postdigital society (Jandrić et al. 2018) and to reflect on ethical decision-making in business.

Ethics in Managing with Information and Data

We situate our study of AI-generated avatars in a business subject with 714 students, two subject coordinators, and seven tutors, delivered remotely during the pandemic and lockdown. Managing with Information and Data is a postgraduate subject that aims to equip students, as future leaders, with the critical thinking skills to understand new technologies and business analytics and to make evidence-based, ethical decisions. The subject was designed, developed, and facilitated so that students learn the fundamentals of business decisions, analytics, and data quality, including strategy, managing stakeholders, business ethics and change, and horizon scanning for future trends.

Students learn about these concepts online at their own pace, as preparation for weekly workshops. They are asked to engage with a combination of online resources such as recorded lectures, video explainers, readings, self-check exercises, and reflective prompts, with opportunities to apply and check their understanding, in accordance with active learning principles (Poquet et al. 2018). Workshops are reserved for critical debate, practice, and facilitated feedback, where students collaborate in groups on structured activities around business information cases and issues.

Business ethics is an important topic in the ninth week and is assessed in group work near the end of the semester. The rationale for using an AI avatar to present on ethics was to generate discussion and insights about the business applications of such technologies, as students might well be expected to make decisions about AI avatars as future leaders. Hence, the videos were designed to engage students in a realistic experience of AI as consumers and to have them consider its ethical implications from a user’s perspective.

A series of scripts was developed around the theory and application of ethics in a business information systems context, particularly the practical challenges in the workplace. The subject coordinator recorded a brief introduction explaining why the ethics topic would be presented by an AI-generated avatar. In the videos, the term ‘AI presenter’ was used instead of a personal name or terms such as virtual or digital human, which might have suggested consciousness and agency.

Students were asked to prepare for their ethics workshops by watching the AI presenter in five short videos and by completing the online self-paced activities for that week. Relevant diagrams, images, and texts were added by the media team to reinforce key messages on ethical issues in data science and the idea of surveillance capitalism. Asimov’s three laws of robotics were also introduced, as well as Pasquale’s extension of the rules for AI, emphasizing that robots should not be used to counterfeit or substitute humans, but rather complement human skills (Pasquale and Selwyn 2023). See Fig. 1.

Fig. 1 AI-generated avatar presents Pasquale’s New Laws of Robotics (2020), explaining that ‘You should be able to attribute AI to a person’

Students were encouraged to question the accuracy of the information presented by the AI avatar and the intellectual property and authenticity of the content presented to them. For example, students were given a link to the commercial AI video creation platform where the stock avatar was licensed and asked to scrutinize the company’s policy on the responsible use of synthetic media. The online activities and video content were based on the PAPA framework, which consists of ethical principles relating to information privacy, accuracy, property, and accessibility; this framework has been widely adopted in the business information systems discipline (Mason 1986; Parrish 2010). Students were also encouraged to connect ethics to their personal experiences and to reflect on the privacy of their own and others’ personal information when sharing content on social media. This was to prompt students to think critically and make moral decisions for themselves, consistent with their values and ethical principles.
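
For readers who want to see how the framework might be operationalized, a minimal sketch follows of the PAPA principles as a structured checklist that students could apply to a business information case. The guiding questions are our own illustrative paraphrases of Mason’s (1986) principles, not the exact prompts used in the subject.

```python
# A minimal sketch of the PAPA framework (Mason 1986) as a structured
# checklist. The guiding questions are illustrative paraphrases of the
# principles, not the exact prompts used in the subject's activities.
PAPA = {
    "Privacy": "What personal information is collected, and who can see or use it?",
    "Accuracy": "Who is accountable for errors in the information, and how are they corrected?",
    "Property": "Who owns the information and the channels through which it flows?",
    "Accessibility": "What information is a person or organization entitled to obtain, and under what safeguards?",
}

def papa_checklist(case_name: str) -> None:
    """Print the four PAPA prompts for an ethical analysis of a case."""
    print(f"Ethical analysis of: {case_name}")
    for principle, question in PAPA.items():
        print(f"- {principle}: {question}")

papa_checklist("AI-generated avatar as lecture presenter")
```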

In the synchronous online workshops, teachers reiterated that the content presentation was AI-generated and stimulated discussion around data provenance. Students were then challenged to deepen their knowledge with activities and critiques of business ethics, with explicit reference to the readings, avatar presentation, and how this might apply to their own experience and in business scenarios. This was designed to support students in completing a group assessment, in which groups analyzed and proposed a solution to a business information problem. One of the assessment criteria was an analysis of the business information problem from an ethical perspective, with reference to the PAPA framework.

Research Design

One month after the workshops and assessment, all students were emailed an invitation to take part in a focus group, along with participant information. Two focus groups were conducted over Zoom in June 2022 with a total of ten students, of whom eight were international students; half identified as male and half as female. Although students self-selected into the focus groups, their demographic characteristics resembled the subject’s enrollment: ninety-five percent of the cohort were international students, and sixty-one percent were female.

Students were asked a series of questions and prompts to share their experience of the AI avatar presenters and associated online activities and how useful they were in helping them understand ethical issues. They were shown specific pages of the online module and asked what they felt were the most and least engaging aspects of the content and if they had suggestions for improvements.

In a third focus group, seven tutors were also shown the online module and asked whether they thought the experience had helped students understand business ethics and whether they wanted to share any other comments about the avatar. All focus groups were conducted two weeks after the subject concluded. As the subject coordinators are co-authors of this paper, they did not facilitate the student and tutor focus groups, to avoid potential conflicts of interest.

For transcript analysis, we adapted a systematic inductive approach to thematic development (Gioia et al. 2021). To begin, two educational researchers independently analyzed the focus group transcriptions to identify categories and concepts. The two subject coordinators were not involved in the analysis of raw data, to avoid potential bias. Care was taken to adhere to participants’ own terms in this first-order analysis. The independent initial analyses were then transferred to a whiteboard for shared sense-making and for discussing similarities and differences among the initial categories. Where agreement was low, we revisited and re-examined the data until a consensus was reached. Following this, two business information systems researchers also considered the overarching themes and dimensions of the data; the themes were further distilled, and the research questions were revisited and adjusted.

After the data was aggregated, we looked for dynamic interrelationships between the emergent themes and concepts to explain the phenomenon of interest and highlight connections between data and theory. Finally, all four researchers arrived at the following interrelated dimensions as important to the use of AI-generated avatars: the audience’s level of awareness, the learning design and purpose of the presentation, and a preference for personal and social presenters.

Ambivalent Perceptions

Prior to this study, we supposed students might have more polarized reactions to AI-generated avatars replacing their usual lecture content. Perhaps we imagined students would argue against the ethics of AI avatars replacing or generating teachers, or for escaping ‘the human machine distinction’, as Costello (2023) puts it. Yet, for the most part, the students interviewed had neither strongly positive nor negative attitudes to seeing and hearing their subject content embodied in an AI avatar. Instead, the idea of personalizing avatars for future educational and social interaction piqued their interest. These student views are corroborated by research in game-based learning, which suggests that personalization and customization of avatars make for a more engaging and immersive experience (Chen et al. 2019).

Similarly, when seven tutors in a focus group were asked whether the experience helped students understand business ethics, four had no opinion, and two commented only on how international students found the AI-generated avatar very clear and easy to understand. Only one tutor thought it might be ‘a way of getting the students more engaged’, but also referred to one long video, which indicated a lack of awareness about the series of short AI-generated videos.

Interacting with simulated voices and faces is far more common on the web and social media than in traditional higher education settings. In virtual worlds and games, users commonly tailor their avatars or digital representations to reflect their personal identity and values (Ducheneaut et al. 2009). Increasingly, identity is entangled with digital cultures, where human-like media representations and algorithms are inseparable from an assemblage of the human and nonhuman (Savin-Baden 2021). Students’ conceptions of identity seem profoundly postdigital and entangled with digital technology. For example, one student explained that while it was ‘nice to have the professor face in the workshop,’ they would prefer to have ‘some input’ and to ‘try it ourselves’.

…we all play games, we all see advertisements everywhere, they all have some kind of AI, and so on, so if you insist on the vizualisation side, we might not be as impressed... (Student 6)

We surmise that students’ immersion and entanglement with digital experiences, including synthetic media, perhaps dampened the impact of the AI-generated avatars as a unique catalyst for reflection on ethics. In the discussion, we analyze how educators might design for learning that is aware of the entangled digital, human, and nonhuman realities of students’ experiences outside of university and challenge students to take a more critical stance. Three interrelated design principles that emerged from our exploration of student perceptions of AI-generated avatars are represented in Fig. 2.

Fig. 2 Design principles for AI-generated presenters in education

Level of Awareness

The level of awareness, or the extent to which students recognized that the video was presented by an AI avatar, shaped their experience. Some students were unaware that the video content was presented by an AI-generated avatar and assumed the presenter was human, despite the lecturer’s introductory video explaining the design intentions, the label ‘AI presenter’ on each video, and other text signposts throughout the module. See Fig. 3, for example, where the presenter states that ‘this content has been delivered by me, an AI-generated avatar’.

Fig. 3 AI-generated avatar asks a series of questions about whether it follows the laws of robotics

One student explained, ‘I guess I didn’t pay full attention when I watched a video, so I didn’t realize that there are differences’. When teachers explained the avatar in workshops, students reported feeling ‘really shocked’. One student said, ‘she just acts so natural, I still can’t believe’ (it was an AI presenter). This was unexpected as the content was intentionally designed to be explicit and transparent about the use of AI.

This initial lack of awareness drew attention to, and raised questions about, the nature of AI for some students, who revisited the videos and paid closer attention to the differences and nuances, particularly performance aspects such as the AI’s consistent tone and even voice modulation. In some cases, the (unintended) shock of discovering that the presenter was ‘not a real person talking’ added to their learning. One student even suggested that this shock could be used as a deliberate teaching strategy for thinking about ethics. It was perceived as an ‘impress[ive] way of presenting’ information about ethics in business. The avatar presentations prompted another student to become ‘aware of the fact that AIs, and like, the technology in general, how far it has evolved’.

While one student learned that ‘what you see is not necessarily true’ and was left with ‘a really deep impression’ about ethics and ‘how can we use our data,’ others perceived no great difference between lectures delivered by an AI or a human presenter, other than it being ‘cool’ to have the technology in the module. However, students reflecting on the novelty of the experience doubted that ongoing exposure to such AI presenters would remain engaging. The amount of exposure to an AI presenter and the duration of the videos affected their experience.

…at first it looked impressive very, very realistic but is when it applied to every single video, I think we start to lose this sense of excitement, because we can get familiar… (Student 9)

Some students unfavorably compared the lack of spontaneity in scripted videos to the authentic speech of traditional in-person lectures. The AI-generated video felt too slick, ‘like a YouTube video’. The usual fillers, stumbles, and digressions of an unrehearsed lecture were missing. In this iteration, the voice was pleasant, professional, and perhaps a little too generic, conveying the message without emotion or character. The text-to-speech algorithm generated an accent, pitch, and tone that seemed too smooth on close or repeated listening. This could be read as a technical complaint, especially as synthetic audio becomes more expressive and human-like. With more customization and effort, natural-sounding speech that voices different emotions and styles, based on context, may be achievable.
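
By way of illustration, many text-to-speech engines accept Speech Synthesis Markup Language (SSML), a W3C standard that exposes exactly the prosodic parameters students noticed. The hypothetical sketch below builds an SSML fragment that varies rate and pitch and inserts a pause; the commercial platform used in this study is not named here, and its interface may differ.

```python
# A hypothetical sketch of varying prosody (rate, pitch, pauses) in
# synthetic speech via SSML, a W3C standard accepted by many commercial
# text-to-speech engines. The platform used in this study is not named,
# and its actual interface may differ.
def prosody(text: str, rate: str = "medium", pitch: str = "medium",
            pause_ms: int = 0) -> str:
    """Wrap text in SSML prosody tags, with an optional trailing pause."""
    pause = f'<break time="{pause_ms}ms"/>' if pause_ms else ""
    return f'<prosody rate="{rate}" pitch="{pitch}">{text}</prosody>{pause}'

script = (
    "<speak>"
    + prosody("This content has been delivered by me, an AI-generated avatar.",
              rate="95%", pitch="-5%", pause_ms=400)  # slower, lower, then pause
    + prosody("What does that change about how you listen?",
              rate="105%", pitch="+10%")              # brighter, more animated
    + "</speak>"
)
print(script)
```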

The decisions and decision-making process around the avatar might also have been more easily understood with greater transparency about the production process. Watermarking the video with clear and simple information about when, who, and what was involved in creating the avatar would have made the process more transparent (Pataranutaporn et al. 2021). Furthermore, digital provenance metadata is important for replicating and improving the content over time, as well as for ethical reasons (Herschel et al. 2017).
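
A hedged sketch of what such disclosure metadata might contain is shown below; the field names are hypothetical, intended only to illustrate the ‘when, who, and what’ of production that Pataranutaporn et al. (2021) recommend making visible, rather than a published standard or the schema used in this study.

```python
# A hypothetical provenance record for an AI-generated educational video.
# Field names and values are illustrative, not a published metadata
# standard or the actual schema used in this study.
import json

provenance = {
    "synthetic_media": True,                 # what: flags AI generation
    "created": "2022-03-01",                 # when (illustrative date)
    "script_author": "Subject coordinator",  # who wrote the content
    "avatar_source": "Licensed stock avatar from a commercial AI video platform",
    "voice": "Text-to-speech synthesis",
    "reviewed_by": ["Media team", "Subject matter experts"],
}

# Serialized, such a record could be embedded in the video file's metadata
# or surfaced as an on-screen watermark or caption.
print(json.dumps(provenance, indent=2))
```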

While increasing students’ awareness of the AI nature of the presentation is a first step, active involvement in the process of generating avatars may support deeper learning about how data can be manipulated. Sharing ‘lecture’ scripts with students, or involving them in the design and development of a shared or personal avatar, may have prompted more ethical reflection on AI, increased their capability to interrogate its business implications, and extended their learning (Buckingham Shum and Luckin 2019; Markauskaite et al. 2022).

Learning Design and Purpose

Despite the highly topical and relevant subject matter, the use of an innovative technology, and the high-end design and production values, some students interviewed had engaged only partially or not at all with the week’s content. One student missed the content and workshop because of work commitments. Other students were very strategic in their use of time: ‘I’m not really quite interested on this one … I spend most of time for group assignment’. Students struggled with the subject’s ‘high workload’ and tried to complete all activities in the first weeks, but gradually engaged less as ‘the semester gets busier’. Another participant explained that time-poor students often chose to read the case studies and skipped the video lectures, regardless of the topic. In this respect, students’ divided attention seemed to be a situated decision related to the complexities of their lives, rather than an indication of the success or otherwise of the learning design of the business ethics content and activities (Menendez Alvarez-Hevia et al. 2021).

Several students in the two focus groups commented that there was little difference between an AI presenter and a human delivering a lecture recording. These students saw value in AI presenters for learning designs where the purpose was to impart knowledge and where the context and associated learning activities were less dependent on individual teachers’ personal experiences. AI presenters might also be reserved for certain topics within the broader subject. Students felt it made sense to use AI to present on current ethical issues in business, for example, but not necessarily across all topics.

Students noted the potential of AI to deliver content and support students for whom English is a second language. Most students praised the clarity of the AI-generated videos, with three using the word ‘clear’ to describe the content and delivery. Students also commented that the pauses and phrasing in the video were helpful and that the pronunciation ‘was easier to listen to’ and ‘easier to follow,’ especially for international students. Tutors supported this view, commenting that the pace and accent were clearer for international students, ‘but for local[s], not much different’. Another tutor, who had little else to add, also considered the speech and subtitles on the AI presenter recordings more accessible. However, we note that the use of an AI presenter could generate inequalities and unintended ethical issues if students do not have equal access to such resources.

Some students went further, identifying the technology’s potential for production efficiencies, as noted in other studies (Dao et al. 2021; Li et al. 2016). Using AI to present content could ‘save some time for the professors,’ so ‘they can use their energy for some other coursework’. Students thought teachers could engage more with students if they did not need to deliver traditional lectures. One student mused that AI could ‘relieve the burden’ of lectures for teachers and discussed how this burden might be more evenly distributed.

The AI generation and media process were largely opaque to students, although the coordinator acknowledged the production collaborators, as well as the many subject matter experts who had contributed over the years to the subject. In fact, for this first iteration, the scripting, set-up, and production took considerably longer than the production of a standard, pre-recorded lecture and demanded skilled technical support staff.

With a streamlined production process, our media team could see the eventual benefits of AI for creating and updating content in certain scenarios. AI video generation relies on tighter scripting and shorter videos, which shifts the focus from teaching and facilitation toward content generation. This is best suited to flipped learning designs, where students acquire information outside of class so that class time is reserved for more active learning than in traditional lectures (Akçayır and Akçayır 2018). Nevertheless, it is worth noting that many educators incorporate learning activities into their lectures, rather than simply transmitting content. The efficient production of educational content for its own sake is not a positive pedagogical outcome of AI.

Another student felt that AI avatars may not be appropriate for presenting subject matter and learning activities perceived as more challenging. For financial risk management, for example, ‘I don’t think AI couldn’t explain every part of it’. Where further explanation of difficult subjects or tasks might be needed, students doubted that an AI avatar could perform as well as a human. This student seemed unaware of the scripted, static nature of the AI-generated avatar and ambivalent about how great a role AI avatars as cognitive agents should take in the subject design. In this case, teacher facilitation and support were preferable.

But this is just from the content delivery, other than that the group discussions and the facilitator of our group discussion and answering our question has to be presented or completed by a real person. (Student 9)

Human and Nonhuman Interaction

Interacting with teachers and the AI presenter was also flagged as important to students. Most of the improvements suggested by students revolved around the potential for more social interaction with AI presenters that could be personalized to their needs. The use of AI could potentially increase the quality of a lecture and perhaps provide effective interaction in some but not all settings.

The ability to interact with the AI presenter and shape its characteristics was a strong theme. Changing the accent and language of the AI was important to students. Students also wished for avatars with different features, such as a choice of gender, age, and appearance, according to their individual needs or preferences. They expressed a desire to choose and personalize AI-generated avatars to ‘tailor our content for ourselves’ because ‘different people have different understandings’. One student even joked that ‘maybe one day my mum could taught me a lecture’.

As noted previously, synthetic media continues to improve rapidly, with ever more sophisticated avatars that can communicate effectively, control expression and body movement, and imitate a wider range of human gestures. Customizable avatars, mentioned in the literature and by students, are already available in commercial gaming platforms, with detailed choices of appearance, language, and accent. Using such customized avatars to respond to questions and suggest personalized learning content and paths may assist students more effectively because of their affective value. Emotional engagement is important in online learning (Deng 2021), and students are perhaps more likely to interact, engage with, and be influenced by avatars they respond to emotionally.

In this small-scale study, human and nonhuman interactions were often entangled. The participants interpreted the recorded educational videos not only as pieces of information but also as situations of social communication, in a parasocial sense (Beege et al. 2019). Particularly in a remote learning context, teachers are quite literally the faces of their subjects. Videos in an online learning experience are one way of building teacher presence and teacher-student relationships. In this form of parasocial interaction, students interact and bond with teachers’ video representations, in a similar way to face-to-face interactions (Konijn and Hoorn 2017). In educational videos, students may imagine and interact with teachers’ representation as they would face-to-face, but without expecting a response (Beege et al. 2019).

Students still wanted human connection in their learning; they ‘would like to have lecturers present how they feel … and their own opinions’. Yet the AI presentation also affected how students thought about interactions and communication with teachers. They felt that some parts of learning needed to be facilitated by teachers and that the avatar presentations were ‘not … like a real class.’ While they commented on the benefits of exploring ethical issues through AI cases in an immersive way, they valued student–teacher relationships and personal communication. Students described missing the idiosyncratic, personal style of teachers, even as they praised the consistency of the videos.

Many participants wanted interaction with the AI presenters, while discussing the need to balance this with ‘real’ interaction with teachers. It was suggested that AI avatars might mimic human gestures to make them more vivid and engaging, with ‘facial expression changing in response to ours or what we say’ as that ‘might create more feeling’. Several students suggested that more interaction with an AI avatar would make the Zoom workshops more impactful and engaging and generate more discussion. They wanted more than ‘a single way presentation from the AI’, with one student suggesting the avatar could be more social and ask, ‘oh hey how are you doing’, for example.

Most students reflected on the less social and personal nature of the AI presenter without considering it as the product of a nonhuman algorithm incapable of replicating a human response. There was no explicit discussion about whether this use of AI might pose ethical problems. Only one student questioned the transparency and provenance of this method of presentation, asking whether they would know who was responsible for the content. They also asked:

if the Professor prepare the content for them, will AI know which part should be the important one? … And if I have some questions after watching the video, would professor know which part I was asking for… (Student 10)

In contrast, another student saw drawing on diverse expertise for a lecture as an advantage, as ‘multiple lecturers can give their opinion and to finalize what these AI avatars are going to present’. Students, who were learning remotely under difficult circumstances, were keenly focused on technological developments to replicate human activity, and less so on the ethical aspects of using AI technology to augment or even substitute for human intelligence.

We all like in-person teaching right, but maybe in this century, there will be more and more stuff will be transferred from in person to online so maybe this process is very important to develop the AI presenter or some technology to help us learn through online courses. Maybe the direction is that making online courses more like in-person communication, maybe that would be a good direction to improve the process... (Student 7)

Future Directions

This small study generated many notable questions that could be interpreted in different ways. The small sample and the focus on AI-generated avatars as presenters may limit the interpretation of our findings for broader use cases of AI and ethics. These findings are preliminary, and more empirical research is needed to explore the student experience of AI-generated avatars in different design contexts and applications.

Nevertheless, the rich qualitative comments from students and our thematic analysis have been informed by extensive pedagogical and postdigital research and prior studies. A primary contribution of our qualitative study is the pedagogical design and evaluation of the student experience of AI-generated avatars in an authentic context that valorizes the student voice. Understanding and applying these broad dimensions of student experience can guide informed conversation around teaching and learning with AI-generated avatars and support the effective and meaningful design, development, and implementation of AI in future activities.

In sum, students seemed largely untroubled by the automation of content presentation, instead citing the technical benefits of embodying algorithms and AI in a realistic, virtual form. The AI-generated avatar was perceived as an efficient vehicle for content delivery that might be more or less engaging, depending on its educational context and purpose and on students’ own contexts. It could be argued that some traditional recorded lecture content, at least, could be automated if the educational design intention is only to present subject matter knowledge. AI-enhanced presentations of concepts in multiple formats could make content more efficient to produce and more accessible to more people, such as through open educational resources (OER) (Wolfenden and Adinolfi 2019). The process of adapting AI-generated content to local languages and cultural contexts may prove easier than with current multimedia formats.

On the other hand, teaching and learning are not simply about content production and consumption. Mastering broadcast content is very different from knowing how to engage with and apply often complex concepts. In this study, AI was used to augment rather than replace traditional teaching and was complemented by workshop interaction where students could reflect on and respond to what they learned.

Participants in this study did not consider the ethical use or misuse of AI until prompted, nor did they explicitly articulate how their experience might relate to the basic principles of information privacy, accuracy, property, and accessibility, despite our design intentions. Technology and chatbots may be altering our beliefs and values about relationships to the point where simulations are becoming acceptable in certain situations. A far greater awareness of AI and its use in education would benefit students and educators alike.

Conceptualizing avatars solely as a scalable, efficient technology limits the potential pedagogical benefits of working with generative AI. To participate and ethically lead in education or business, we need to generate and shape AI in far more active and creative ways. AI-generated videos and chatbots seem to blur the accepted binary understandings of what is real and fake, human and nonhuman. AI avatars and ChatGPT, with their uncanny human-like communication, reflect back to us our fears and hopes for education. The idea of postdigital humans is no longer a science-fiction story, and educators and students alike need to reflect on what humanity means in this context. Perhaps, as Savin-Baden (2021: 5) argues, we are heading inexorably toward a changed society where postdigital and posthuman concepts and ethics are also needed. The question of how best to engage students in a critical approach to the postdigital and posthuman remains. The algorithms remain unseen. So what pedagogies can educators draw on to make the invisible visible?

Students’ ambivalence in this study reminds us that actively learning about business ethics is more effective than passive content consumption, no matter how clear and engaging the content may be. For example, students and educators might develop more critical stances toward automation through experiential exercises, such as generating their own AI presenters. Such authentic learning about and with AI may lead to more ethical thinking, which is urgently needed, given the ease with which human traits are attributed to AI-generated avatars and the potential for counterfeiting information in the form of deepfakes.

The context in which the AI was used and its design purpose influenced perceptions about whether the AI presenter was fit for learning experiences. There are unanswered ethical questions about reproducing content in this form and whether an educator needs to be faithfully represented for effective learning. Now is the time for educators to reflect on when, why, and how content is presented for education. Which types of videos could or should this process be used to create, and when might it be inappropriate to include an AI presenter? How much of teaching via the screen is relational, and how much is transactional and can be automated? If educators did automate lectures, how should this best be framed for students? And how should the production process be explained to educators and students alike? More educators and learners need to participate in such debates, to take a critical stance and lead the design and development of educational technologies, including AI.

The future is now. AI applications are fast becoming collaborative tools and potential partners, offering far more than productivity gains. While the evidence in this small case is limited, it illustrates how human and artificial intelligence may complement each other in our postdigital world and transcend binary, either-or approaches. Further exploration of these dimensions appears to be urgent and critical, given the exponential increase in AI applications in education, as elsewhere. Instead of viewing AI as a hostile threat to the humanistic aspiration of good teaching, it would be more helpful to focus on the learning design and deployment of avatars in educational contexts that best support the development of ethical and critical thinking.