Introduction

Education — as with most areas of contemporary life — is becoming steadily infused with small acts of technology-based automation. These automations are intrinsic to the software, apps, systems, platforms and digital devices that pervade contemporary education. Viewed on a case-by-case basis, each of these different automations might appear to be of minor importance, and can quickly fade into the background of any school, university or other educational setting. For example, the novelty of a school’s facial recognition ‘visitor management system’ soon passes once one becomes accustomed to not having to ‘sign in’ at the front-desk. The same goes for the initial relief of not having to grade a newly submitted stack of fifty student essays, or not having to walk around to check what a class of students are actually doing on their laptops. These are all tasks that can quickly become unquestioned aspects of what we expect digital technology to be doing in educational contexts.

Yet, when viewed as a whole, these individual instances of automation soon mount up. Like many aspects of digital change, it can take a while to realise that we are now teaching, studying and working in highly-automated and digitally directed educational environments. Over the past 10 years or so, responsibility for all manner of everyday educational decisions and tasks has been passed over to automated software, systems and platforms — from identifying when a student might be lacking motivation, through to planning future lesson content. Moreover, there are plenty of ‘behind-the-scenes’ institutional automations that most people in a school or university remain largely unperturbed by, if not wholly unaware of. Automated decision-making (ADM) technologies now play a key part in how job candidates are pre-selected, students are deemed to be ‘failing’ and school resources are allocated. So, for anyone who is involved in any way with a school, university or other educational setting, it is important to remain mindful of how things are regularly being done for you (and done to you) by software that you might have little awareness of … let alone understanding of how it works.

There are many questions to be asked here. For example, we need to interrogate thoroughly the presumptions and promises of these educational automations. This is technology that is often sold to educators, students and parents under the pretext of increased reliability, efficiency, or plain-old convenience. So, what are these conveniences … who is being convenienced … and who is inconvenienced? Conversely, this is technology that is often sold to institutional managers, educational officials and other authorities under the pretext of increased standardisation, control and regulation. So, for example, what new forms of control and altered power relations arise from the implementation of these technologies? How are forms of automated governance being established at institutional and even system-wide levels? We also need to interrogate the broader implications of educational automation. For example, what work is implicit in these automations? What do these automations feel like? What is being lost when the responsibility for a task that was previously being carried out by humans is transferred to a machine (and the assemblage of humans and non-humans that underpin the functioning of that machine)? What alternate automations might be possible, if not desirable … and why have they not yet been enacted?

In short, there is much about these ongoing digital automations of education that should not be simply taken-for-granted, normalised and allowed to fade away into the background. Significant questions relating to shifts in power, control and autonomy need to be levelled at the incursion of automated technologies into educational contexts, however convenient and seamless these technologies might appear. Here, then, are a few of these questions and concerns in a little more detail.

The Work of Automation

For many people, the idea of technological automation is linked intrinsically with the changing nature of work. There are, for example, ongoing discussions over the possible emergence of ‘technological unemployment’, a ‘workless world’ or ‘post-work’ futures. Yet, whereas industrial-era automation was ushered in on the promise of displacing manual labour (‘dull, dirty and dangerous’ jobs), the automation of knowledge work and mental labour is different — in short, this is not a direct replacement of labour. It is therefore important not to get too distracted by promises and/or threats of the full automation of educational work. Instead, we need to consider the ways in which educational automations are dependent on new forms of (often decidedly dull) work being carried out by humans.

For example, how are teachers now having to ‘work to the algorithm’ — fitting their actions around what the machine is capable of recognising? How might these algorithmic accommodations extend and/or alter previous ways that teachers have fitted themselves around processes and routines of school systems? How are teachers developing ‘parseable’ pedagogies and otherwise acting in machine ‘readable’ ways? Moreover, what new forms of ‘background’ monitoring and maintenance work are now demanded of school staff to keep the technology functioning? This latter instance echoes what Bainbridge (1983) described 40 years ago as one of the ‘ironies of automation’ — i.e. the need for human workers to monitor and trouble-shoot machines supposedly capable of autonomously doing the job better than human workers.

Finally, how is educational ‘automation’ resulting in low-paid ‘invisible labour’? We know that, particularly in the early periods of their use, AI systems are dependent on swathes of precariously employed, low-paid outsourced labour that steps in when the technology cannot recognise things — taking responsibility for content moderation, AI tagging, data labelling, language translation, and so on. All told, any discussion of education and automation needs to acknowledge that these technologies are dependent on a range of human labour — what Taylor (2018) describes as a state of ‘fauxtomation’. Critical accounts of education and automation are therefore well advised to avoid adding to speculation about the end of work and instead to engage with the degradation of work. As has been the case with critiques of (post)industrialised work since the 1800s, ‘our task is to highlight the exploitation hidden behind the veil of machine autonomy’ (Monroe 2021).

Automation of Educational Judgement

Alongside these additional forms of labour, we should also ask questions about the diminishment of educational expertise and professionalism — not least the offloading of educator judgement onto automated systems. On one hand, some might argue that there are many routine and non-consequential tasks that can be delegated to technology, as well as what could be deemed ‘pure procedural’ decisions that are clear-cut and widely agreed upon (Sparrow 2021). That said, it should be noted that even these minor automations of mundane tasks run the risk of inadvertently altering professional sensitivities, if not side-lining them altogether (Zuboff 1988).

On the other hand, it could be argued that all judgements with the potential to impact on the life-chances of individuals need to be kept within the purview of human expertise. In education, then, this could be seen to encompass any form of pedagogic judgement — from grading an assignment through to selecting suitable learning content. These could all be seen as instances of decision-making that need to be driven by expert educational professionals in conjunction with intelligent systems — what might be described in post-human terms as ‘intelligence augmentation (IA)’ as distinct from ‘artificial intelligence (AI)’. As Pasquale (2020), in proposing his ‘new laws of robotics’, puts it: ‘robotic systems and AI should complement professionals, not replace them’.

Moreover, questions need to be asked about the accountability that surrounds any decisions that are delegated to automated technology. For example, experience from fields such as medicine highlights the problem of ‘overtrust’ — especially when people are mandated to use automations by managers who may themselves have little insight into the workings of the automated decision-making processes (Jorritsma et al. 2015). How accountable should a classroom teacher be for the outcomes of an automated system whose functioning they have little understanding of? At the same time, we need to consider other more-distant actors that do have insight into the workings of these autonomous systems — such as programmers, learning engineers, instructional designers and software vendors. What lines of oversight, auditing and accountability exist for the decisions that these systems and software produce? How much can a particular automation be trusted, and when? What lines of accountability exist for how software outputs are translated into final grades by educational institutions? Indeed, Pasquale’s (2020) fourth ‘new law of robotics’ tackles this issue directly — stating that ‘robotic systems and AI must always indicate the identity of their creator(s), controller(s) and owner(s)’.
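
What such lines of accountability might look like in practice remains an open design question. Purely by way of illustration (and not as a description of any existing standard or vendor schema), an automated decision could be logged alongside explicit provenance fields in the spirit of Pasquale’s fourth law. The following Python sketch uses entirely hypothetical names and fields:

    # Hypothetical sketch: attaching provenance to an automated decision record,
    # in the spirit of Pasquale's (2020) fourth 'new law of robotics'.
    # All field names and values are illustrative assumptions, not a real standard.

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class AutomatedDecision:
        decision: str          # e.g. a provisional essay grade
        subject_id: str        # the student the decision concerns
        model_version: str     # which system version produced the output
        creator: str           # who built the system (Pasquale's 'creator')
        controller: str        # who operates it day-to-day ('controller')
        owner: str             # who owns it ('owner')
        human_reviewer: str    # the accountable professional, if any

    record = AutomatedDecision(
        decision="B+",
        subject_id="s-001",
        model_version="essay-scorer-2.3",
        creator="ExampleEdTech Ltd",
        controller="Example University",
        owner="ExampleEdTech Ltd",
        human_reviewer="dr.jones",
    )
    print(json.dumps(asdict(record), indent=2))  # auditable, human-readable log entry

Even a minimal record of this kind makes visible who built, operates and owns a system, and which professional (if any) stands behind a given decision.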

Automated Relations

At the same time, it is important to consider the appeal of delegating decisions and judgements to a machine — in other words, why people are prepared to go along with ‘the subsumption of subjectivity’ to automated systems (Andrejevic 2020). For example, teachers might be happy to defer responsibility, and dodge the awkward task of personally grading students that they have grown to know — particularly given increasing trends of students contesting grades, and even initiating legal action over mis-grading. At the same time, students might also welcome the option of not having to subject themselves to the vulnerability of being judged directly by their teachers and others who actually know them. While understandable, such examples raise questions about how these automations might work to recast and reduce the act of education into a transactional process.

Here, Gillard (2018) points to associations between the development of automated technology and a desire to reduce ‘frictions’ — in particular, ‘friction-free interactions, interfaces and applications in which a user does not have to talk to people, listen to them, engage with them or even see them’. In the minds of many tech developers, Gillard suggests, this equates to the automated avoidance of awkward interactions with other people who are considered different — for example, having to tell a taxi driver where you want to be driven, or feeling obliged to hand over a few banknotes to a delivery driver as a tip. We therefore need to pay attention to the likely extensions of these logics into educational settings through chatbots, automated attendance software, automated feedback, learning platforms and so on. For teachers, these fleeting interactions with students might be a vital part of the relational work that classrooms depend upon — getting a feel for students’ dispositions and generally ‘greasing the wheels’ of working together as a group. For students, these interactions with teachers and their fellow students might be similarly insightful and impactful. As Gillard reminds us, these are all small moments of contact and care where we have to interact with other people, and momentarily acknowledge them as human beings.

Automation as Desocialisation

Questions also need to be asked about the social consequences of how these automations seek ultimately to recast and reduce the act of education into an individualised and nonsocial activity. Take, for example, the ways in which automated learning systems are designed around the idea of an individual ‘learner’ interacting with ‘learning content’ in a hermetically-sealed context — free from the social settings and other people they might be surrounded by. As Horning (2021) reasons, the logic here is that students and teachers are individuals that ‘hav[e] a “self” that doesn’t derive from sociality’. In other words, students can be ‘who they really are’ independent of the socialisation implicit in being part of a classroom, family or community, and without that self being affected by the process of participating in school or society.

This desocialisation is evident, for example, in the ways in which personalised learning systems and classroom analytics software will profile a student primarily in decontextualised terms of what they have consumed (and what is recommended they consume next). As Horning (2021) continues, these forms of predictive AI are predicated on the delusion of ‘selves basically made up of lossy data, information with much of the social context subtracted’. The increased prominence of such technologies in educational settings therefore has significant connotations for what has previously been understood as a communal, conversational and relational activity — i.e. the idea that one teaches and learns in the company of others who are also engaged in the act of teaching and learning. This also bumps up against one of the key traditional functions of education — what Biesta (2015: 9) describes as a ‘socialisation’ function — i.e. ‘insert[ing] individuals into existing ways of doing and being’ such as particular social, cultural and political ‘orders’.
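
To make this notion of ‘lossy data’ concrete, the following deliberately minimal Python sketch (all names, fields and scores are invented for illustration, not drawn from any actual vendor’s schema) shows how a personalised learning system might represent a student purely in terms of consumption:

    # A minimal, hypothetical sketch of a 'lossy' student profile: the learner
    # is represented only by what they have consumed and scored, with all
    # social context (classmates, family, community) subtracted.

    from dataclasses import dataclass, field

    @dataclass
    class StudentProfile:
        student_id: str
        consumed: list = field(default_factory=list)  # content items already 'consumed'
        mastery: dict = field(default_factory=dict)   # topic -> score between 0 and 1
        # Note what is absent: no classroom, no peers, no teacher relationship.

    def recommend_next(profile: StudentProfile, catalogue: dict) -> str:
        """Recommend an unconsumed item targeting the learner's weakest topic."""
        weakest = min(profile.mastery, key=profile.mastery.get)
        for item_id, topic in catalogue.items():
            if topic == weakest and item_id not in profile.consumed:
                return item_id
        return "no-recommendation"

    alex = StudentProfile("s-001", consumed=["v17"],
                          mastery={"fractions": 0.4, "decimals": 0.8})
    catalogue = {"v17": "decimals", "v18": "fractions", "v19": "fractions"}
    print(recommend_next(alex, catalogue))  # -> 'v18'

Note what such a profile leaves out: classmates, family, community and all the other social relations through which, on Horning’s account, a self is actually formed.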

The Automated Subject

Perhaps the most significant consequence of these newly automated conditions is the likely side-lining (if not complete suppression) of the subject. For Andrejevic (2020), the ultimate logic underpinning automated media lies in working around the problem of the unpredictable subject. Actual subjects can behave in inconsistent, irrational or even resistant ways that threaten systems of control, management and governance. In contrast, the automated subject is ‘perfectly self-identical’ — therefore fitting neatly within the constraints of datafied predictability. This certainly challenges ‘post-human’ ambitions for the merging of humans and machines, and the ultimate enhanced transformation of the human subject. Instead, Andrejevic contends that the ultimate logic of ADM technology might be seen as rendering the subject obsolete. Take, for example, the idea encouraged by software vendors and marketers that automated classroom systems can ‘know’ the student better than they know themselves (or than their teacher knows them), and can then decide what an individual really needs (or desires) next. Taken to its logical conclusion, such thinking presumes the eradication — rather than enhancement — of the subject.

It is therefore important to consider exactly what might be lost if such conditions are established in education. For example, while the design and development of ADM technology might be made more straightforward by not acknowledging people as subjects that are driven by subjective experience, unconscious thought, a sense of self and so on, these are all characteristics that make education an essentially human process. Moreover, it is important to ask what implications the stripping away of the subject and subjectivity has for one of the key broader functions of public education in terms of ‘subjectification’ — what Biesta (2015: 9) describes as giving individuals a sense of who they are, and encouraging them to act autonomously and to think independently and critically.

Automated Shifts in the Timings and Spaces of Education

We also need to ask questions about how these automations seek to recast the spaces, places and timings of how education is enacted and experienced. For example, automation brings an altered pre-emptive sense of timing to education — prioritising the idea of anticipation, prediction and dealing with future risks, all underpinned by comprehensive monitoring, continuous measurement and real-time feedback (Webb et al. 2020). As Andrejevic (2020) notes, priority here is placed on speedy action that pre-empts future events. Automated systems do not seek to understand or explain why things have happened previously — they are driven by a logic of forward-facing ‘operationalism’ and automated action.
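
This pre-emptive logic can be caricatured in a few lines of code. In the hypothetical Python sketch below (the weights, threshold and signal names are all invented for illustration, not taken from any real system), the system simply scores incoming monitoring signals and triggers an intervention ahead of a predicted event, without any attempt to explain what has already happened:

    # Hypothetical caricature of pre-emptive 'operationalism': continuous signals
    # in, automated intervention out, with no retrospective explanation.

    RISK_THRESHOLD = 0.7  # invented cut-off, purely illustrative

    def risk_score(signals: dict) -> float:
        """Toy weighted sum over live monitoring signals (all names invented)."""
        weights = {"missed_logins": 0.5, "idle_minutes": 0.3, "low_quiz_scores": 0.2}
        return sum(weights[k] * signals.get(k, 0.0) for k in weights)

    def act(signals: dict) -> str:
        # The system acts on the predicted future; it makes no attempt to
        # understand or explain what has already happened, or why.
        if risk_score(signals) > RISK_THRESHOLD:
            return "trigger_intervention"  # pre-emptive alert, before any 'failure'
        return "continue_monitoring"

    print(act({"missed_logins": 1.0, "idle_minutes": 0.8, "low_quiz_scores": 0.2}))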

This is also technology that treats people’s social environments as something that can be modulated — i.e. contexts that are flexible and can be programmed to shape the conduct of individual actors. This logic is epitomised in the imagined ideal of the ‘smart school’ — an educational environment that is replete with sensor technologies that measure, monitor and regulate the building and all its occupants (De Freitas et al. 2020). This reconfigures the idea of ‘school’ or ‘university’ as an assortment of material spaces that are entwined with (and increasingly defined by) digital code — what Kitchin and Dodge (2011) term ‘code/space’. This sees computation as interwoven intrinsically into the built environments and everyday experiences of ‘schools’ and ‘universities’. It therefore raises the prospect of these environments being unable to continue in their intended functions in the event of any institutional software failure. One obvious example of this is an airport. As Bridle (2018: 37) puts it, ‘a software crash revokes the building’s status as an airport, transforming it into a huge shed filled with angry people’. So, might we be approaching the point where a school campus that is suddenly denied access to its ADM systems ceases to be able to function as a ‘school’?

Automation of Education Governance

Finally, there are questions relating to the changing nature of institutional governance — not least in light of education’s increased dependency on large-scale automated operations that are now well beyond the resources and expertise of the state. It could be argued that we have now reached a point where the data-driven management of many school and university systems is effectively being outsourced by the state to distant automated systems and platforms maintained by corporations such as Amazon and Google. As Fourcade and Gordon (2020: 78) put it, this marks a ‘transformation in political rationality, in which data affordances increasingly drive policy strategies’, while also consolidating the ‘state-like’ capacity of multinational tech corporations to influence educational institutions. One prominent instance of this ‘cyber-delegation’ is Amazon Web Services (AWS), which has become the dominant provider of automated ‘cloud’ services and infrastructure to education sectors around the world (Fiebig et al. 2021).

These shifts raise a number of macro-level questions and concerns. How does this process of cyber-delegation create structural and functional dependencies between education systems and multinational corporations? What opportunities and efficiencies do these automated outsourcings lead to, and at what cost? For example, who outside of technical expert communities can trace or control the problems that these systems might produce (Rieder 2020)? To what extent is automated system-wide governance decision-making open to oversight and scrutiny from client institutions, and/or democratic accountability to stakeholders in public education systems? What contractual obligations are involved in the provision of these services? How is institutional information and knowledge retained in the event of a change of service provider, who has sufficient knowledge and influence to decide on terminating service leases, and to what extent can these systems be ‘uninstalled’? Finally, what are the environmental costs of developing, training, installing and maintaining these vast AI-driven infrastructures?

Conclusions

There are many more aspects of education automation that might be added to this brief overview. Nevertheless, this paper has already raised plenty of contentions, concerns and lines of inquiry to be taken forward. On one hand, then, is the question of what automation expects of education (and, more pointedly, what automation expects of the people involved in education). As Tennant and Stilgoe (2021) remind us, ‘technological promises, if they succeed, end up making demands on the world’. On the other hand is the question of what education should be expecting (if not demanding) of automation. This relates back to a key contention that can often get lost in critiques of education and technology — i.e. that other ‘flavours’ and forms of technology are possible, that things can be otherwise, and that we need to consider what mechanisms are required to initiate such alternative futures.

In terms of this first question, for example, this paper has pointed to various ways in which automated technologies in education depend on reconfigurations of educational spaces into all-sensing, programmable environments. We have seen how automated technologies narrow the temporal horizons of education — relentlessly pre-empting future events and seeking to influence future outcomes. These configurations therefore expect educators to pay less attention to complex questions of previous experiences, past progress and development. Instead, education becomes reconfigured as a process of dealing with what happens next … regardless of what has gone before.

The paper has also highlighted the significant amounts of human labour that are required by automated technologies if they are to function ‘on their own’. Perhaps most seriously, we have highlighted the ways in which automated technologies imply the subsumption (if not assimilation) of people’s autonomy, will, sociality and sense of self. In contrast to industry hype over ‘AI assistants’ and academic speculations over ‘post-humanism’, it could be argued that these are not technologies that ‘free’ teachers and students up to engage in higher-order activities and pursuits. As such, we need to challenge the notion that the rise of autonomous technologies in education somehow offers emancipation from what might be termed ‘attachments’ — i.e. obligations and relationships with people, institutions, infrastructures and any other aspects of the social world that give people definition (Tennant and Stilgoe 2021). This is clearly not the case. Instead, it seems more appropriate to begin to ask what additional attachments are formed through the use of ADMs in education. This is technology that certainly adds to the complexity of educational processes and practices.

In terms of the second question, then, it is also important to consider what education should be expecting of automation. At present, it seems that most educators and educationalists are largely content to take the designers, developers and vendors of these technologies at their word — i.e. that these are genuinely autonomous technologies that simply need to be ‘plugged in’ before they work away in the background leveraging data-driven improvements and efficiencies. Clearly, this assumption is rarely — if ever — warranted. Instead, this paper has hinted at various ways in which these technologies are enmeshed in the messy and mundane social realities of education — not least the micro-politics of educational labour, and the complex pedagogical relationships between teachers and students. We have pointed to the need for greater transparency — and, crucially, greater accountability — around what these technologies actually do once implemented in education, and the outcomes that result. Despite industry hype, it could be argued that these are technologies that ultimately detract from education’s potency and transformative potential.

So what might be done? Much of what we have just lamented relates to a prevailing contemporary culture of what Tennant and Stilgoe (2021) refer to as ‘autonomism’. This is the deterministic assumption (implicit in the design, selling and implementation of most ADMs) that key structures, modes of governance, sensibilities and behaviours need to fall into line, and be reshaped and reconfigured to facilitate these innovations. The key question for critical observers of contemporary education, then, is how this sense of autonomism in education might be challenged. There is certainly room for resisting and pushing back against many of the clearly problematic forms of automation currently entering education. As Rasch (2020) points out, the idea of seamless automation leaves us with no room for radical change — ‘a wholly predictable future is just a continuous present, a tyranny of choices on offer … what we need is a politics of de-automation’.

Yet, we should not be too hasty in rejecting the idea of automation altogether. So, it is well worth also taking time to consider alternate forms of automation. For example, what other (currently unrealised) forms of ADM might be possible and/or preferable — for example, technologies that do not eradicate the subject, undermine professional judgement and/or strip education of its social and relational core? What would a feminist agenda for automated education look like? How might ADMs fit with ongoing reformations of education along decolonialist or ‘crip technoscience’ lines? Other forms of automation might well be possible. It is therefore important for any critique of current and emerging ADMs to also engage with the possibility of designing alternatives.

In this sense, alongside debunking and deconstructing particularly problematic aspects of current educational ADM, it is also important for critical commentators to work towards what Latour (2004) terms ‘adding to reality’ — i.e. reorienting the direction of critique toward (rather than away from) the objects of education and automation. For sure, there is plenty in the current iterations of ADMs now entering education that demands criticism and push-back. But there is also a need to engage in powerful descriptive work that allows us to talk about new automations that we do consider worth developing, as well as pointing out existing aspects of education and/or technology use that are worth protecting and caring for. Such work also involves identifying allies, building alliances and supporting the development of communities that can work towards reform. In other words, our continued engagement with the topic of education and automation — as Bell (2021) succinctly puts it — needs to be ‘as much about critical doing as critical thinking’. Automation in education is not something that happens of its own accord. We need to get actively engaged!