Introduction

The current summer of Artificial Intelligence (AI) has focused attention on the importance of lifelong learning (LLL) to prepare the workforce and to ensure ethical and responsible use of AI for society now and in the future (Schiff 2021). This interest in lifelong learning, in which education gains prominence as a result of technical and deterministic assumptions about the latest networked technology and its implications for social change, has been discernible in policy and practice since the end of the last century (e.g. Gorard and Selwyn 1999; Selwyn 2012). In many such policies, technology is seen both as the motivator for the need for LLL and as an important way to enable learning to occur. Many governments now provide LLL grants to individuals; workplaces pay for technical systems to help with training needs; and outside the workplace individuals learn via an array of digital platforms that vary in the degree to which they are designed for learning (e.g. YouTube, TikTok, Coursera) and in the cost to the individual in terms of their data or finances (Eynon and Malmberg 2021).

Lifelong learning can be understood as the collection of events throughout an individual’s lifetime that results in the integration of new practices into said individual’s life (Jarvis 2007). It is a term used to encapsulate the activities a person performs throughout their life to improve their knowledge, skills, and competence in a particular field, given certain personal, societal, or employment-related motives (Aspin and Chapman 2007; Field 2000). Alongside its lifelong focus (i.e. cradle to grave), LLL is often described as ‘life-wide’ (i.e. encompassing all spheres of life) and ‘life-deep’ (i.e. motivated by individuals’ own interests, beliefs and concerns) (Bélanger 2016; Walters 2010).

In its broadest sense, LLL is central to the overall ‘health’ of a society (Tuckett 2017), and it is considered an essential element of democracies (Bélanger 2016; Biesta 2006). However, over the past few decades, scholars have critiqued the narrowing of lifelong learning provision in the contexts and topics of learning, with a focus primarily on the needs of the economy, risking the possibilities for personal and democratic learning opportunities (Bélanger 2016; Biesta 2006; James 2020; Jarvis 2007). This continued narrowing of policy and practice is present in current debates around AI and lifelong learning. The trend has been further exacerbated by the significant role of the commercial sector in developing AI for LLL (Eynon and Young 2021) and by the ways that learning opportunities may also be directly oriented around short-term perspectives built into machine learning strategies to identify skills gaps (e.g. Gonzalez Ehrlinger and Stephany 2023).

Alongside this narrowing of knowledge is an instrumentalist view of lifelong learning with AI that is commonplace in policy and public discourse. This view tends to focus on individual skill acquisition as the primary mode of learning, with technology treated as a neutral tool to deliver these skills efficiently (e.g. Schiff 2021). This situation mirrors longstanding research agendas in education with an instrumentalist focus on how technology serves to ‘enhance’ learning (Bayne 2015; Jandrić and Knox 2022); and a tendency to theorize learning but not technology (Oliver 2016). This view not only ignores the complex ways that learning and technology can be theorized, but also elides the role of social context in learning, including in situations in which power differentials exist (Jarvis 2007).

In sum, a large proportion of discourse in policy and practice tends to talk about lifelong learning with AI as a panacea, where learning is inevitable, technology is rarely theorized and the relations between learning and technology are largely ignored. Overall, the discourse around AI’s role in society is increasingly mobilized by those who would profit from its spread (Suchman 2023). These are not new trends and are mirrored in a large proportion of the research about education and technology (Oliver 2016). However, to date there has been limited research that has explored the ways that AI technologies for lifelong learning are conceptualized in academic literature. Making such academic debates visible is important, both to contribute to a stronger theorization of the relationships between AI and lifelong learning within relevant research communities and also to use academic research to better inform policy and practice.

This is no easy task: critical engagement with AI may serve to further the business interests of those who create AI itself (Suchman 2023). Essential is a deeper engagement with the underlying conditions in which AI is said to operate, the problems it is said to solve, the contexts in which it is said to solve those problems, and the contextual arrangements of its deployment and uptake. We join other critical digital education scholars in suggesting that technology for learning must not be narrowly envisioned as something having discrete, predictable effects (e.g. Oliver 2011; Selwyn 2010) and, crucially, that the current focus on AI and lifelong learning requires particular attention.

In this paper, we argue that a postdigital perspective can help achieve these aims. Such a perspective serves to recognize the entangled nature of technology, learning, and society, whilst avoiding narrow, deterministic understandings of the role of technology within the context of lifelong learning. Based on a thematic review of academic literature on AI and lifelong learning, we ask, in what ways are AI technologies conceptualized in relation to lifelong learning, and what ‘work’ does AI do in this relationship?

Methodology

This study is based on a thematic review of the existing academic literature on AI and LLL. The details of our selection and review process are outlined below. Although our approach may seem misaligned with typical postdigital research approaches, it fits within the philosophy of the community, who remain ‘open to what constitutes knowledge, and the means by which to attest, authenticate, or audit the truthfulness of its creation’ (Jandrić et al. 2024: 7). As such, we recognise the complexity of digital-human relations in research of this kind, for example, the role of digital databases and the use of them in the research process, the partial nature of the knowledge that such approaches generate, and the epistemic injustices such work (re)creates (Jandrić et al. 2024; MacKenzie 2023).

Searching for Relevant Studies

We searched for literature on five principal databases: (1) EBSCOHost; (2) ProQuest; (3) Scopus; (4) Web of Science; and (5) the ACM Digital Library. We used these five databases to increase the degree to which the findings were representative of the bodies of research from which they were drawn. Nevertheless, our findings are very much shaped by the technologies available to us as researchers, our search practices, and the ways that those technologies came to be (Jandrić et al. 2024).

The searches were conducted using the sets of search terms listed in Table 1. Given our focus on understanding the multiple ways that AI can be conceptualized for lifelong learning, we split the search into the three key areas of interest. As seen in Table 1, the first category of terms covered multiple keywords that were likely to be used in studies focused on AI, including those related to specific techniques (e.g. machine learning or educational data mining) and those that reflect common technologies related to AI such as immersive technology, or Internet of Things (IoT).

Table 1 Search terms included

The second category dealt with the various kinds of lifelong learning that appear in the literature. Although, as noted above, LLL is conceptualized as ‘cradle to grave’, in reality the majority of LLL policy initiatives tend to be aimed at adults (Bélanger 2016), beyond the realm of formal education (Eynon and Malmberg 2021). Given this, and given the differences in why and how adults engage in learning as compared to young people (Illeris 2018), we selected keywords that reflected LLL for adults who had moved out of formal education. This allowed for a more expansive view of lifelong learning, not just as learning that happens within the workplace, with the attendant economic imperatives, but as something that happens throughout the life course across domains. Further, we specifically excluded studies whose contexts were formal centers of learning for children and young people, that is, primary, secondary, and tertiary schooling.

The third category was used to capture terms related to the varied ways that the relations between AI technologies and LLL could be conceptualized. Previous research has shown that the majority of studies of learning and technology tend to only theorize learning and not the technology (Oliver 2016). As Knox (2019) elaborates, here we move beyond instrumental understandings of technology in education as devices that can be used in a straightforward manner. A postdigital perspective helps to move beyond this in the arenas of both practice and research, while also foregrounding the possibilities that human choices might allow. Thus, this third category was important to identify those papers where some explicit theorization of technology had taken place.

This list of terms was chosen after multiple trials of a wider set of search terms, in which the team reviewed the search results and examined the keywords used in those papers identified as most relevant to the study. This was done to ensure that as wide a range of relevant studies as possible would be highlighted in database searches whilst keeping the corpus of studies to a manageable level. Table 1 lists the three sets of search terms used.
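To make the combination of the three categories concrete, the minimal sketch below (in Python) illustrates the logic we describe: terms within each category are joined with OR, and the three categories are then combined with AND. The term lists shown are placeholders only, partly drawn from examples mentioned above and partly invented for illustration; the full lists appear in Table 1.

```python
# Illustrative only: OR within each category, AND across categories.
# These term lists are placeholders (some from examples in the text,
# the rest hypothetical); the actual lists are given in Table 1.
ai_terms = ["artificial intelligence", "machine learning",
            "educational data mining", "internet of things"]
lll_terms = ["lifelong learning", "adult learning", "workplace learning"]   # hypothetical
relation_terms = ["sociotechnical", "sociomaterial", "postdigital"]         # hypothetical

def build_query(*term_sets):
    """Join each category with OR, then combine the categories with AND."""
    return " AND ".join(
        "(" + " OR ".join(f'"{term}"' for term in terms) + ")"
        for terms in term_sets
    )

print(build_query(ai_terms, lll_terms, relation_terms))
```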

We restricted our search to peer-reviewed articles and peer-reviewed conference papers. In carrying out the search, we placed no time restrictions on the year of publication, to allow for the inclusion of all possible papers on the relevant topics over time. In practice, all of the articles that were ultimately included in the review are from 1998 onwards. This reflects the (re)emergence of interest in lifelong learning around the time when many countries were focused on becoming competitive in the Information or Network Society (Webster 2014). These cycles of interest have continued to date, with hype around varied technologies refueling the debate (e.g. mobile learning, massive open online courses, and machine learning).

Search and Screen

All searches were conducted between September and December 2022. The articles were then shortlisted and analyzed in three phases. In the first phase, two researchers went through all search results within the five databases that resulted from using the search terms listed in Table 1. Since the number of results in ProQuest exceeded 1000, we restricted our search to the first 750 (in order of relevance), as we noticed a dip in relevance after this point. The other four database searches yielded fewer than 500 results each, so all results were included in the first stage. All articles were uploaded to Zotero and shared with the research team. Duplicate articles across the databases were removed and the final list comprised 1457 articles.
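For illustration of the deduplication step, the following minimal sketch shows one way records exported from the five databases could be merged and duplicates removed by DOI or normalized title. The file names and field names are hypothetical; in practice, as noted above, this step was handled within Zotero.

```python
import csv
import re

def normalise(title: str) -> str:
    """Normalise a title for matching: lowercase, collapse punctuation and spacing."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records):
    """Keep the first record seen for each DOI (if present) or normalised title."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalise(rec.get("title", ""))
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical CSV exports, one per database, each with 'title' and 'doi' columns.
records = []
for path in ["ebscohost.csv", "proquest.csv", "scopus.csv",
             "web_of_science.csv", "acm.csv"]:
    with open(path, newline="", encoding="utf-8") as f:
        records.extend(csv.DictReader(f))

print(len(deduplicate(records)))  # 1457 unique articles in our case
```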

In the second phase, the abstract of each paper was read, and the paper was placed in one of four categories: include, maybe, reject, or background research. A scan of the reference list was also carried out and any additional articles or conference papers of interest were added to the assessment at this stage. Categorization decisions were based on whether the article fully accounted for all three aspects of the search terms detailed in Table 1. The background research category comprised papers of peripheral interest or papers that addressed only two of the three areas of focus in Table 1.

To ensure that the researchers used the same inclusion criteria, a small sample of papers was used to discuss and agree upon the selection criteria before coding began. Each abstract was then read by one research team member and sorted into the initial four categories. All abstracts categorized as maybe were read by two members of the research team and then sorted into include, reject, or background research, resulting in 86 included papers.

The full articles of these 86 included papers were read in a final review by one researcher to verify the presence of AI in each article, as well as to ensure that they met the other two criteria.

The presence of AI is not always immediately apparent: in fact, it has been noted that the terms AI, automated decision-making, and algorithms are invoked by researchers in various ways and often defined contextually (Pink et al. 2022). The situation is even more complex: ‘The line between AI proper and other forms of technology can be blurred, rendering AI invisible: if AI systems are embedded within technology we tend not to notice them.’ (Coeckelbergh 2020: 16) Indeed, this invisibility may be an explicit goal of the technology developers, aiming for ubiquity (Weiser 1991). Not only is AI sometimes invisible, but AI is understood and instantiated in different ways by the people involved at every stage of its ideation, design, development, and deployment; this applies to how AI is discursively constructed (Eynon and Young 2021) and how AI is deployed and taken up by humans (Pink et al. 2022). This applies to how researchers study and describe AI in their work; Seaver, discussing algorithms, makes a point that could be made about AI: ‘If we understand algorithms as enacted by the practices used to engage with them, then the stakes of our own methods change. We are not remote observers, but rather active enactors, producing algorithms as particular kinds of objects through our research’ (Seaver 2017: 5).

Thus, we looked for evidence of ‘intelligence displayed or simulated by code (algorithms) or machines’ (Coeckelbergh 2020: 64), even if the phrase ‘artificial intelligence’ was not explicit in the research itself.

Using this approach, the final number of papers amounted to 49. Although a complex process, the approach enabled us to collect an array of papers, offering the researchers a range of varied perspectives and an opportunity to draw connections across and between disciplines, an important characteristic of postdigital research (Jandrić et al. 2024). The outcomes of the overall search and screen process are noted in Fig. 1.

Fig. 1 The search and screen process as reported in the ‘Methodology’ section

Thematic Analysis

The thematic review of the papers was conducted through a thorough review of each paper by a minimum of two members of the research team. A spreadsheet was created to capture the insights offered by each paper. Column headings included author name, year of publication, origin of research (region and country), industry setting, and academic field (e.g. Educational Data Mining, Computer Science, and AIED). Each article was briefly summarized and the team also noted the study type (e.g. ‘empirical,’ ‘theoretical,’ ‘opinion,’ ‘quant,’ ‘qual,’ ‘mixed’). Wherever possible, the theoretical framing of each paper was recorded, with particular attention to the conceptualization of the relationship between AI and LLL. After initial tagging and discussions, additional columns were added to tag papers under the following categories: setting of research (formal, informal, or both); methodologies of researchers; explicit or implicit theorization of technology; theorization of learning; and the role and rationale of AI (i.e. how AI ‘worked’ from the perspective of each paper).
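For illustration only, the coding frame described above could be rendered as a simple record structure such as the sketch below; the field names are our paraphrase of the spreadsheet columns, not the actual instrument used in the study.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PaperRecord:
    """One row of the coding spreadsheet (a hypothetical rendering of the columns above)."""
    author: str
    year: int
    region_country: str          # origin of research
    industry_setting: str
    academic_field: str          # e.g. "Educational Data Mining", "Computer Science", "AIED"
    summary: str
    study_type: str              # e.g. "empirical", "theoretical", "opinion", "quant", "qual", "mixed"
    theoretical_framing: str     # conceptualization of the AI-LLL relationship
    # Columns added after initial tagging and discussion:
    research_setting: str = ""   # "formal", "informal", or "both"
    methodology: str = ""
    technology_theorized: str = ""   # "explicit" or "implicit"
    learning_theorized: str = ""
    ai_role_rationale: str = ""      # how AI 'worked' from the paper's perspective
    tags: List[str] = field(default_factory=list)
```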

Findings

Based on our analysis of all 49 papers, we categorized them into three groups. The largest number of papers (25 of 49) were those in group 1, which viewed AI from the perspective of acquisition of skills while describing learning from a primarily behaviorist or cognitivist perspective. This research implicitly, and at times explicitly, reinforces narratives of technological determinism that have long played a role in fostering the accelerated introduction of and acclimation to new technology (Robins and Webster 1985).

The second category, the second largest collection of papers (18 of 49), focused on sociocultural theories of learning, with a recognition of the need to consider social context and other actors in learning. This role-based consideration of AI recognizes the changing nature of digital technology in people’s lives.

Finally, the third and smallest selection of the articles reviewed (6 of 49) treats AI as something that reconfigures work, everyday life, and learning. We view this perspective as generative: it allows for the shaping of technology by human actors and admits the possibility that lifelong learning happens alongside technology and is relevant to contexts beyond the workplace.

Notably, nearly half of these 49 papers were published in the period between 2020 and 2022; the remainder span the two preceding decades. This indicates the need to continue interrogating this research and to outline potential new directions.

Group 1: AI for Efficiency (Working AI)

The first commonality among these research papers was the lack of diversity of research settings. Formal workplaces dominated the research: these included the construction industry, education, medicine, the military, manufacturing, or a generalized workplace or industry. While there were several studies that took place within adult language or literacy centers, these were in the minority. The one study outside the purview of formal or workplace learning explored the language learning app Duolingo (Ritonga et al. 2022). The research thus highlights the role of AI predominantly in the workplace, at the expense of other areas of life in which AI is increasingly implicated for learning.

Second, many studies in this group examined the deployment of intelligent tutoring systems (ITS), most often carried out by researchers in the fields of engineering, computer science, business studies, or the learning sciences. In these instances, the ITS acts as a substitute for humans performing specific tasks, including the ‘routine’ aspects of teaching, which has the stated effect of freeing the human teacher to focus on more challenging or creative endeavors. This effect, however, goes unsupported by empirical data, the implications of which will be explored below. This framing is evident, for example, in research on an ITS for naval training (Bratt 2009) and in computer-assisted pronunciation tools in which the ITS plays the role of language teacher (Agarwal and Chakraborty 2019). This was also the case for ITS in adult literacy instruction in several instances (Cheung et al. 2003; Fang et al. 2022; Johnson et al. 2017; Zhu and Ren 2022) and for ITS more broadly (Belghith et al. 2012; Chen et al. 2021; Kim et al. 2009; Mehta et al. 2022).

Third, one frequent justification for the introduction of AI in these environments is couched in the need to respond to the technologization of society in light of the current ‘digital transformation’ (Johnson 2021), while preparing for the ‘future of work’ (Adami et al. 2021; Harborth and Kümpers 2022; Selby et al. 2021). Harborth and Kümpers (2022) detail the pressures of technology on the German labor market, stating the need to provide workers with new skills because of shifting job requirements or, indeed, the elimination of certain jobs entirely. As a potential solution, the authors point towards the use of virtual reality and augmented reality to allow for the upskilling or reskilling of low-skill workers (Harborth and Kümpers 2022). Similarly, Chirgwin notes the need to use ‘appropriate tools to meet the challenges facing learners in the digital world’ amid the fourth and fifth industrial revolutions (Chirgwin 2021: 50–51).

The fourth point, and indeed a focal point of these studies, is the role of AI in increasing efficiency, whether increasing the efficiency of teaching as compared to human instructors or increasing the efficiency of learning certain tasks (Crampes et al. 2000; Gronseth and Hutchins 2020; Johnson and Lester 2018; Kowald and Bruns 2021; Sabeima et al. 2022; Srinivasan et al. 2006). Algorithmic technology is used to ‘enhance learning efficiency at work’ and overcome ‘limitations of human operation and enhancing the efficiency, accuracy and convenience of knowledge management and learning performance’ (Hung et al. 2015: 1384).

Theoretically, these approaches to learning also rely on behaviorist and cognitivist conceptualizations of learning, which have an impact on understandings of technology. Not only, then, are AI and digital technologies designed to allow for certain learning interactions with users, but the research methods employed to understand these impacts rely on similar understandings. One paper, in studying AI software that interprets finger-based gestures on a tablet designed to administer exams, gives an example: ‘keystroke dynamics … can reveal aspects of the cognitive state of a user’ (Ding et al. 2021: 89). Friesen makes a similar point: ‘Besides providing an explanation for mental processes, computer technology is also the principle means for supporting, refining, and developing these same processes.’ (Friesen 2010: 89) Thus, while the theories of learning are not always explicit, the questions that researchers ask and the methods they employ give evidence of implicit theories, which necessarily frame and shape the research.

Finally, beyond behaviorist and cognitivist theories of learning, the other commonly used theory of learning invoked by this group relates to the work of Malcolm Knowles on andragogy, a theory (or praxis) at the heart of much scholarly debate and critique (Edwards 2002). Even though there is some attempt to appeal to constructivist paradigms, in most cases these theories of learning focus on individuals, most often in the workplace, with AI claimed to increase efficiency while helping to ensure the workforce can respond to inevitable technological progress: AI, thus, is understood as a technological intervention in the workplace (Oliver 2016).

Group 2: AI as a Colleague (Working with AI)

The second group approaches AI as more embedded in a sociocultural context. This is notably different from group 1 in the explicit departure from behavioral and cognitive theories of learning, with increased focus on the ‘social.’ In addition, theories of technology point towards agent-like actors, with AI conceptualized as a ‘colleague’ that acts on individuals. These theoretical approaches recognize an increased complexity of technology in society.

The first theme in this group is one repeated from group 1: most of the settings of the research are restricted to formal workplaces. While informal learning settings are slightly better represented, including reading groups (Parde and Nielsen 2019), foreign language learning (Riedmann et al. 2022), and more general life-wide learning (Galanis et al. 2016), the majority of studies take place in workplace settings (Friesen and Anderson 2004; de Laat et al. 2020; Wilkens 2020).

Sociocultural theories of learning are the most prominent in group 2, leveraging ideas around how individual humans learn in the Vygotskian terms of scaffolding and the zone of proximal development (Alimisis and Zoulias 2013; Emmenegger et al. 2016; Riedmann et al. 2022). Similarly, this extends to how AI is imagined in the research as a peer that can affect learning in an individual.

More specifically, several papers from this group make use of the idea of Wenger’s Communities of Practice (CoP), including Dobson et al. (2001); Greer et al. (1998); Koren and Klamma (2018); and Poquet and Laat (2021). This represents an explicit departure from narratives of technology and technological artifacts as the cause of changes in behavior to an understanding that interactions with technology are the result of social practice. Not only are these outcomes contextually contingent, an idea which has implications for research, but ‘technologies are no longer positioned as the cause of practice, but instead as its residue’ (Oliver 2016: 48). Technologies become ‘residue’ in the sense that practice undergoes reification through and across CoPs. A community incorporates technology into their practice, thus embedding it into a specific social context.

There is also an increased tendency to discursively construct AI as a colleague (or actor) within systems. Described variously as a ‘personal learning aid,’ ‘personal assistant,’ or ‘learning collaborator,’ the agent provides just-in-time feedback to users (Emmenegger et al. 2016; Greer et al. 1998; Poquet and Laat 2021). This conceptualization is repeated throughout the papers that are part of the CSCW/CSCL research paradigms, as much of the AI is understood in terms of ‘agents’ (Holden et al. 2005). These agents are often assigned very specific roles in their contexts. In one paper, three different agents were theorized: mediator, information, and facilitator agents (Ayala 2002); in another, an intelligent agent is described as taking on a subset of a teacher’s tasks (Caron et al. 2007). It should be noted, however, that within these systems, even as sociocultural theories render the context more complex, the research still focuses on how individuals engage with those agents, and the effect of those agents on individuals. In addition, there is little attention to how these agents are introduced to the workplace.

Importantly, in this grouping, papers coming from the domain of Computer-Supported Cooperative Work (CSCW) emphasize the role of the human user in modifying AI. Inspired principally by the work of Judy Kay, the role of users in regulating and modifying the learner models that act on them is given theoretical consideration. This allows some transparency into a user’s own learner model for modification, making explicit the possibilities for user agency, albeit in a technical sphere. Within a content and document recommendation system, Greer et al. propose allowing users to inspect their own user models, as well as the user models of collaborating partners, with the ability to make corrections to their own (Greer et al. 1998).

Even with this turn towards the social and added focus on user agency, there is still a dominant focus on how individuals make use of AI systems and, again, principally in the workplace. There is additional theoretical nuance in how technology is treated; generally, however, AI is imagined as a tool-like actor that can enhance a worker’s ability to operate in the changing conditions of knowledge with the increased spread of AI. Thus, there is a lingering and latent technological determinism, alongside a preference for technical solutions, whether a ‘personal data vault’ or the ability to open the ‘black box’ of AI (Kay 2016), to the marked complexity of deploying AI in workplaces.

Group 3: AI as Part of a Reconfiguration (Reconfiguring AI)

The final category of research is the smallest, comprising 6 of the 49 papers that met the relevant criteria. Notably, while some articles from groups 1 and 2 were published more than 10 years ago, all articles from group 3 were published since 2018. Because of the small number in this category, it is difficult to arrive at specific themes that span multiple articles. However, these researchers explicitly reject what they perceive to be deterministic perspectives of technology in the workplace and beyond. The theoretical approaches include actor-network theory (ANT) (Bozkurt et al. 2018), (critical) posthumanism (Bozkurt et al. 2018; Jandrić and Hayes 2020), and sociomateriality (Willems and Hafermalz 2021).

These studies notably highlight the need to consider AI from a sociotechnical perspective. That is, the implementation of AI is one actor among many, ‘including institutional regimes, management ideologies, pre-existing work practices, the particular choices made in a local setting, the degree of operational uncertainty, and more’ (Parker and Grote 2022: 1181).

While two of the articles are exclusively focused on formal workplaces in their research, the others look at the use of AI in a massive open online course (MOOC) (Bozkurt et al. 2018) and in foreign language learning (Engwall et al. 2022), while another undertakes a discussion of AI more broadly across the life course (Jandrić and Hayes 2020). This illustrates a less rigid focus on workplace settings: research seeks to understand how AI is used more widely.

There is also treatment of the various ways that AI might be imagined and indeed implemented: one study does not assume any given role that a language learning robot might play, but allows for the possibility of multiple roles, including interviewer, narrator, facilitator, and interlocutor, and allows the human users and teachers to collaboratively assist in the implementation (Engwall and Lopes 2022). This was reinforced by a review of an AI-powered teaching assistant in a MOOC, in which the AI assistant plays an important role in facilitating online discussion by connecting students to each other, occupying a ‘betweenness’ that lacked a direct correspondence with a discrete human teacher role (Bozkurt et al. 2018). In this case, AI is not replacing, augmenting, or automating; rather, it is creating something novel. This is notably different from casting AI in one specific role, often taking the place of a human, as much of group 2 does.

Crucially, the possible outcomes of AI in the workplace are not pre-determined: they are shaped by choices within workplaces and beyond. This implies an imperative: ‘[S]ystems should be designed to meet the needs of the organization and its employees, rather than simply keeping up to date within new technologies’ (Parker and Grote 2022: 1197).

Studies in group 3 also directly address lifelong learning, outlining learning futures that are not confined to the workplace. Most of these papers’ authors advocate for a more expansive view of learning. The home/work binary may no longer be productive, as technology structures life differently; this entails moving beyond instrumental and technical understandings. The effects of technology must also be considered, at the personal, group, and systemic levels (Dzubinski et al. 2012). Similarly, this approach to research resists the need to anticipate every new technology, avoiding ‘mastery of the future,’ in which lifelong learning is subject to never-ending signals from the market (Edwards 2010).

Discussion

This categorization of existing research highlights five important areas of focus for future research addressing the role of AI in the context of lifelong learning. These five areas include the need for closer academic scrutiny of claims that AI straightforwardly leads to increased efficiency; the apparent responsibilization of individuals for preparing for and responding to AI; a lack of theory on how AI, and technology more generally, works; the importance of framing research questions to avoid the trap of technological determinism; and, finally, a call for a shift in research strategies and methods that leverage, among others, ethnographic approaches within and beyond the workplace.

What Works

Research must continue to explore the efficacy of AI where it is deployed, whether in the workplace or beyond. Claims of increased efficiency on the part of technology developers should indeed be met with scholarly scrutiny. However, we note the narrow framing of ‘efficacy’ in many of these articles.

While this applies to research interventions more broadly, it is important to echo other scholars who maintain that technology does not necessarily have straightforward ‘intended’ effects; indeed, this often stems from under-theorized notions of both technology and learning, as shown in groups 1 and 2 above. This position has been taken up by other critical scholars of education research, who critique the ‘what works’ discourse in education broadly, including its role in shaping educational research agendas and deterministic understandings of both practice and research (Bayne 2015; Biesta 2007; Eynon 2024; Hargreaves 1999).

At the same time, such treatment also has the effect of assuming that technology is both neutral and apolitical. Philosophers of technology have long called for more critical understandings of the role of technology in society (Feenberg 1999; Winner 1980).

For researchers, this also involves understanding intended effects of the deployment of AI in society while also asking other questions, specifically around secondary or unintended effects (Oliver 2016). In the case of AI and LLL, this necessitates more research to document not only the immediate ‘effects’ of technology, but also whether the given example of technology is the best use of resources across varied contexts. Importantly, it is essential to gain an understanding of whether there are effects not just on the individual, but also on the extent to which human relations are affected by the deployment of AI.

In addition, we might ask: Who stands to benefit from deployment of a given system? What emergent capabilities of surveillance and control result? How do AI-based digital technologies change the nature of work and ways of working? The studies in group 1 largely avoid asking such questions. These questions, however, follow from a discussion by Biesta (2007), and the application of AI may intensify reliance on the existing ‘what works’ discourse without careful consideration of the political and normative elements of the interventions in LLL.

This leads to another related point: when the justification for deploying AI, especially in an agent-like capacity, is its time-saving effects, it is important to verify this through empirical research. This is especially true for contexts in which ITS are deployed, including claims such as the following from group 1: ‘If an ITS can replace the more routine parts of human instruction, it can allow human instructors to focus on challenging and complex aspects’ (Bratt 2009: 338). This is a common justification for the implementation of automating technologies; however, as with narrow understandings of ‘what works’ in technology, the introduction of technology may change tasks in unpredictable ways. Indeed, as recent work in the study of algorithms shows, these technologies both govern and reconfigure future possibilities (Christin 2020; Ratner and Elmholdt 2023).

Who Works and How

In asking better questions about ‘what works,’ as described above, it is equally important to deepen our collective understanding of who works and how they work. Group 1, with the preponderance of ITS deployed in work environments, and group 2, with its focus on personal assistants or collaborators, place the burden on individuals, whether in terms of inspecting their own learner models or in responding to AI and digital technology more broadly.

This focus on the individual is highlighted to an even greater degree when researchers make use of Knowles’ theory of andragogy, which is notably focused on individual motivation and self-direction (e.g. Adami et al. 2021; Fake and Dabbagh 2021; Hetzner et al. 2011; Iqbal et al. 2011; Ritonga et al. 2022). Relevant to the groupings posited in these findings, andragogy also finds conceptual overlap with the frames of groups 1 and 2, as the predominant theories of learning also place rhetorical focus on the individual. This has important implications for understandings of technology in research: AI can act on individuals, act as individuals, or augment the tasks of individuals, rendering them more efficient. The theory of andragogy has been critiqued on various levels for its focus on an idealized individual who acts with autonomy, cleft from larger societal structures (Edwards 2002).

There is also sustained emphasis on individual skill development, the focus of much lifelong learning in recent years (Parker and Grote 2022). Charting the historical trajectory of international policy discourse related to lifelong learning, Biesta outlines three primary purposes: the personal, the democratic and the economic. He explains: ‘Whereas in the past lifelong learning was seen as a personal good and as an inherent aspect of democratic life, today lifelong learning is increasingly understood in terms of the formation of human capital and as an investment in economic development.’ (Biesta 2006: 169) This focus on skill development shifts the burden of responsibility onto individuals who must continuously reskill or upskill, without considering how individuals or groups might shape the implementation of technology in society. The ‘digital’ in this context plays an ideological role, subjecting the individual to the pressures of the labor market; it is not simply a technical reality (Feenberg 2019).

This discourse of individual responsibility has led to the sources of learning being defined by outside actors; in the research reviewed above, these are often workplaces specifically, or workplaces in partnership with other companies creating AI-based solutions. Thus, workplaces are in a position of constantly responding to ongoing waves of technological development, fueled almost exclusively by economic interests. In this sense, there is never an end to what must be learned, as what must be learned is dictated by technological innovation.

We argue that what is required is a more relational, postdigital perspective on learning and on research. This is especially important because AI is a ‘constantly moving, sociotechnical collection of different meanings and practices attributed to it by the different stakeholders and other actors within the network’ (Eynon and Young 2021: 169). Not only can it be deployed rhetorically in various forms, but AI is one actor among others and is party to a reconfiguration of roles and responsibilities (Parker and Grote 2022), as researchers in group 3 elaborate. This requires attending to the everyday aspects of algorithms in these contexts, beyond what is perceived as the immediate impacts or the most impacted person (Pink et al. 2022).

In addition, while it is useful to attend to what AI does in specific contexts, the idea of the digital can be helpfully expanded through a postdigital conceptualization. This requires scholarly engagement with the absences and failures that accompany AI and digital technology, as these may have marked impacts on the lived experiences of individuals. This also directly counters overly deterministic understandings of technology (further described below). It is equally important to address the ways in which AI fails and underperforms expectations. Recent research into the affect surrounding automated systems presents a potentially fruitful path forward: ‘[V]iewing datafication and algorithmic technology through the lens of friction suggests that their powers should not be taken for granted or treated as isolated from mundane experiences and practices’ (Ruckenstein 2023: 9). Further, Edwards, in advising a more posthuman approach, states the importance of moving away from the learning subject towards entanglements; this might entail new ways of learning and seeing, as the studies in group 3 propose, moving beyond the ‘metaphorical uptakes of the cyborg’ (Edwards 2010: 12) that groups 1 and 2 tend to embrace.

When Theory Works

Overall, both technology and learning are under-theorized by groups 1 and 2. Indeed, while longstanding theories of learning may be useful, they may also have their limits. As Norm Friesen explains: ‘Like behaviorism and cognitivism before it, constructivism is used in instructional technology and related discourses to assign terms to thinking that are derived from or at least clearly consonant with the functions of a computer’ (Friesen 2010: 88).

Beyond theories of learning, we also recognize the influence of the human capital approach to learning on much of the work in this space, following the work of Frey and Osborne (2017) among others. As mentioned above, these studies reassert the importance of reskilling and upskilling in direct response to the digitalization of work. This has the effect of validating the introduction of new digital technologies, including AI, into the workplace as a way of ‘enhancing’ the labor force, with little consideration of broader impacts. While the human capital approach is more focused on learning, it also implicitly touches on theories of technology, which brings us to a third way that theory works.

Group 2 also persists in its treatment of AI as a ‘tool,’ even if how it conceptualizes the tool—as a collaborator or agent—is more complex. Thus, just-in-time feedback from AI is portrayed as transformative, fundamentally acting as a mechanism by which humans can acquire new skills that serve them in their work or lives. This shows technology filling the role of intervention or having specific social effects, even as we call for more complex treatments (Oliver 2016). These instrumental understandings of technology must be challenged.

Even as instrumentalist approaches to technology persist, underlying much of the reviewed research are theories of technological determinism. This applies to the ways in which AI is introduced into working contexts and the ways in which said technology is researched, as described above. This is remarked upon by postdigital scholars: ‘Situated within the neoliberal marketplace, policymakers, managers and technology developers have focused on solutions which can be easily implemented, measured and evaluated.’ (Jandrić and Knox 2022: 3) This equally impacts research.

Indeed, these theories of learning and technology no longer seem fit for purpose. Here, we suggest that a postdigital lens offers researchers new ways of thinking about and enacting theory and research. To that end, Jandrić and Hayes (2020) make use of a novel framing of how a gathering of humans and machines learn together: the postdigital we-learn. This theorization explicitly moves away from centering the human, thus rejecting andragogy and theories of human capital, towards novel relations and forms of learning. Technology is not assumed to replace (or even augment) humans: it reconfigures workplace relations and it reconfigures the learning that happens.

Much of the work in group 3 offers theoretical nuance, rejecting calls to upskill as a deterministic response to technological innovation (Parker and Grote 2022), problematizing the ‘soft power’ exerted by AI and similar digital technologies. Indeed, Parker and Grote call for ‘job-crafting’ as a response: tasks can certainly be automated, they argue, but without a job there are no tasks to automate. This is similarly reflected in Jandrić et al. (2019: 180) who reject a human capital approach to skill development and advocate otherwise: ‘A postdigital critical pedagogy hopes to reclaim the digital sphere as a commons, for the production of surplus consciousness and educational superabundance.’

Reframing Questions That Work

As discussed above, we call for new ways of engaging with LLL, including ways of ‘learning’ how to work with these non-human actors, and how these non-human actors and humans ought to best work together. Ultimately, this involves more explicit and nuanced theories of technology and learning, but it may also involve reframing research questions to avoid technological determinism.

A report from Microsoft Research recommends a path forward. Instead of asking ‘How will AI affect work?’ we might ask, ‘How do we want AI to affect work?’ (Butler et al. 2023). In addition, there is a strong need for more participatory, deliberative approaches to the implementation of AI technology in the workplace. We, as the authors of this paper, extend this need beyond the workplace. Such deliberative approaches could help enable learning in more contexts and increase the possibilities for lifelong learning.

Previous research has shown that upon the introduction of CT scanners in hospitals in the 1980s, the outcomes were contingent on contexts: technologists in one hospital felt empowered, while radiologists felt disempowered; in a second hospital, this situation was reversed. This highlights, again, the contextual nature of the uptake of technology (Parker and Grote 2022). In addition, past research has shown the importance of understanding the process of technology implementation. In a British university, the introduction of somewhat mundane technology, including auto-grading assessments, did more than simply digitize: systems were transformed as the implementation of digital technology generated demands for structures and policies (Cornford 2000).

As Eynon and Young (2021), Rezazade Mehrizi (2023), and indeed all of the articles from group 3 show, the future of AI in LLL is far from decided. Exploring how AI is framed could help move beyond deterministic ideas of technology; while there are many visions of AI, there is value in re-envisioning technology and work. This has important implications for potentially fruitful methodological approaches, which we explore next.

Seeing, Researching, and Working for Reconfiguration

Of the research reviewed, it should be noted that relevant research engaging in nuanced theoretical treatment of technology and learning remains scarce. Much of the available research does not treat AI and lifelong learning explicitly; the research that does could benefit from added nuance. This leaves considerable scope for researchers to make significant inroads in the field of LLL in which AI is implicated. Given the increasing discourse focused on AI and skills, such research is timely and a critical counterpoint to existing research.

Some of the research among the studies reviewed that yielded novel perspectives, in groups 2 and 3, involved ethnographic study (Willems and Hafermalz 2021) or network analysis (Bozkurt et al. 2018) to gain a better sense for the contextual and contingent nature of environments in which AI is active. Group 2 helpfully brings AI into focus as an actor, but researchers need not confine AI to a given role. More naturalistic research is necessary to gain a better understanding of possible secondary effects and to understand the role of humans in living with AI (Pink et al. 2022).

Through their study of a sports betting floor in Singapore, Willems and Hafermalz (2021), in an article in group 3, employ ethnography to help conceptualize how algorithms in workplaces go beyond ideas of augmentation and automation towards the idea of reconfiguration. Bringing in analytic sensibilities from Science and Technology Studies, their theoretical innovation centers on ‘ways of seeing,’ which encompass a range of vantage points that, when combined with algorithmic ways of seeing, are responsive to context. This is an important theoretical perspective: AI is not assumed to automate or augment, but to change irreconcilably, complicating the picture but also providing new ways of seeing. The authors similarly make the point that not only is this new kind of vision partial, it is also not static: seeing breaks down; there are different ways of seeing between the humans and the algorithms. Indeed, at times, the traders must keep the algorithms ‘in line,’ as explicitly instructed by management, to privilege human choice when necessary. This kind of research allows for the revealing of novel relationships between humans and AI. These novel relationships are increasingly being explored in research in relation to autonomy (Savolainen and Ruckenstein 2022). Building on this point, Willems and Hafermalz argue for the need ‘to address the dynamic nature of technology introductions to work practices and how roles, responsibilities and knowledge are in embodied and spatiotemporally significant ways reconfigured in the process’ (Willems and Hafermalz 2021: 14).

These ways of seeing could also well apply to how researchers ‘see’ and frame their results. Another study, a collection of semi-structured interviews with radiologists, identified three categories of radiologists’ framing of AI in their work: an automation frame, an enrichment frame, and a reconfiguration frame. The automation and enrichment frames construct AI as having straightforward effects on work. The reconfiguration frame, however, helps radiologists see a ‘new empirical realm far beyond their perceptual reach’ (Rezazade Mehrizi 2023: 8); it does not simply augment, but it makes new kinds of tasks possible. This frame allows practitioners to see beyond the boundaries of their department and profession. This kind of framing and theory-building may be useful to deploy in research environments themselves, to help people shape the deployment of AI. This way of framing corresponds with other research in organizational studies that documents the changing nature of work with the introduction of technology (Parker and Grote 2022; Zuboff 1985), in addition to research that calls for attending to sociomateriality (Orlikowski 2007).

As noted above, the majority of articles we examine in this paper are based within a workplace setting, reflecting the current policy and practice context. However, given the increasing use of AI across applications and contexts ‘life-wide,’ this must change. Researchers must find ways of engaging with individuals to better understand how AI is implicated in these contexts. Important also is the possibility that adults’ lives no longer neatly fit into the work/home binary ‘as technological gadgets structure family life, workplace demands intrude on leisure, and boundaries between work and home continue to disappear’ (Dzubinski et al. 2012: 104).

Conclusion

In this review, we highlighted the theoretical and conceptual approaches employed in academic research conceptualizing AI in the context of lifelong learning.

We have problematized the narrow treatments of ‘what works’ when it comes to lifelong learning, whether that implicates AI in intelligent tutoring systems or more generally. We have questioned the actors at play in these settings, suggesting the utility of expanding who is working and how. We have emphasized the role that strong theorizations of learning and technology should play in research. We have also made recommendations to engage in the study of life-wide learning that extends beyond the workplace, while reframing questions and researching explicitly for reconfiguration, arguing for the usefulness of a postdigital approach to lifelong learning.

We suggest that lifelong learning in the ‘age of AI’ must take into account the complexity of the digital throughout the life course and account for the complex ways technologies are changing the nature of work and other spheres of life. Few of the studies reviewed, especially those in groups 1 and 2, touched on the ways in which the digital, and here we include AI, saturates and structures people’s lives. Indeed, there may be strong profit motives for the rollout of AI systems in workplaces, as power continues to be concentrated among technology companies large and small. These economic actors must be considered in future research (Knox 2019).

Departing from narrow contexts requires some degree of methodological flexibility (Seaver 2017), in addition to looking towards various sites of data generation. We affirm the need for methodological plurality through the incorporation of ethnographic and digital methods, while also attending to researcher-methods assemblages (Veletsianos et al. 2024). To this, we also add deeper engagement with theories of learning and technology, calling for explicit researcher-theory-methods assemblages. This might encourage more inter- and transdisciplinary research endeavors, such that we might create new connections across ‘ideas, approaches, and disciplinary norms of research’ (Veletsianos et al. 2024: 6). With the not insignificant number of academic fields interested in the research of AI at work, the larger researcher community only stands to benefit from such connections.