Educational Automation and Taken-for-Grantedness

The discourses surrounding automation in education often focus on ‘resistances to the technological “working-over” of teaching’ (Bayne 2015: 457), the threats it poses to teachers’ long-held professional status (Selwyn 2019), and the new ‘divisions of labour’ (Perrotta et al. 2021: 109) emerging in response to the platforms and systems through which these automations circulate. Such critique suggests the ubiquity of automation, noting how it has become embedded in the everyday practices of education. Because ‘technology can bring us closer but simultaneously withdraw from our attention’ (An and Oliver 2021: 10), automation exemplifies this withdrawal: its impact on the practices of education recedes into a taken-for-granted state. This taken-for-grantedness means that users have few opportunities to recognise and alleviate the potentially harmful effects of automation on education. A major challenge with automated educational technologies, then, is that little is known about how they operate, and institutions have limited capacity to shape them according to their own contextual and pedagogical appraisals.

Instead, these automation technologies become taken for granted as soon as institutions incorporate them into their ongoing practices, and they recede from the general institutional consciousness precisely as they reinforce generally accepted ideas of what education is, how it ought to be practised, and who it ought to be for. The transmission of norms, values, and beliefs conveyed in the classroom and the social environment has made education a space of lessons ‘which are learned but not openly intended’ (Giroux and Penna 1983). Hidden curricula have often been seen as a ‘side effect’ of education, and Margolis (2001: 22) equates them with the production or reproduction of ‘social relations like race and gender hierarchy, social class reproduction, the inculcation of ideological belief structures’. Edwards (2015: 268) refers to hidden curricula as an implicit process relating to other structures, pointing out that the term is primarily used to criticise ‘educational institutions for reproducing implicitly the unequal opportunities, inequalities and exercises of power in the social order’, suggesting to many that higher education is ‘not for them’. The hidden curricula of education, Illich (1973: 51) argued, alienate by forcing ‘us down pathways functional to the perpetuation of the existing order rather than allowing the pursuit of avenues which call out to us as particular subjects’.

Automation can perpetuate and amplify hidden curricula as it intensifies the reproduction of biases about who, and what, belongs in education through technologies that users have little control over (Edwards 2015). Hidden curricula have always been obstacles to transformation because those in education have limited capacity to recognise their own reproduction of bias and inequalities. Compounding this, such biases are then reproduced in automated educational technologies and circulated through the institution, which further contributes to their taken-for-grantedness. Both the biases and the automation that encodes them become self-sustaining and rarely questioned.

However, those in education have agency that can be exerted to proactively craft alternative futures of automation. But how can academics become engaged when the processes behind automation technologies are, or seem to be, inaccessible? This is not simply a matter of exerting agency but rather one that requires a robust methodological framing for showing how these alternatives can be crafted. Such a methodology needs to be grounded in emancipatory futures and speculative approaches to offset the milieu of late-stage capitalism and its muting of agency: ‘the colonization of the future and the active construction of hopelessness, in particular, disrupts the historic anticipatory logic shaping formal education in modern capitalist societies; namely, the linear theorisation of the relationship between learning-in-the-present and being-in-the-future’ (Adams et al. 2009: 252). Using hidden curricula as a foundation for critique and reimagining, we have developed a methodological response under the belief that ‘by defining and articulating a future we find desirable, we begin to build it’ (Bayne and Gallagher 2021: 622).

Critiques of automation and problematic educational technologies are readily available in the literature, for example in terms of the protection of users (Prinsloo and Slade 2017), the sharing of information or connecting of different datasets (Reidenberg and Schaub 2018), or the existential threat posed to universities by artificial intelligence itself (Preston 2022). However, the everyday, seemingly banal, role of automation in education—from plagiarism detection processes to ongoing student communication to automated assessment—has received less attention. Instead of focusing on big tech, and the market-driven intentions embedded in the products it sells, this paper shifts the focus to academics’ possible complicity in reproducing biases and inequality through the taken-for-grantedness of educational technologies. More specifically, it develops a methodology that provides those in education with the means to move beyond critique and to disentangle the risks of automated educational technologies through creative and, ultimately, ethical praxis.

What is presented here is informed by past projects by the authors. In 2019–2020, the authors of this paper worked on the Expanding the Teacher Function project to explore the potential of automation in higher education designed by and with staff and students. The methods included participatory workshops, interviews, and collaboration with academics and technical partners in the university to conceptualise and potentially pilot automation for use in teaching. The project generated a range of new insights that have been discussed elsewhere (Gallagher and Breines 2021; Breines and Gallagher 2020; Gallagher et al. 2021) but also provided the groundwork for this paper. Here we draw on insights from these past projects to advance a methodology that allows those in education to become authoritative agents in their relationships with often taken-for-granted automation technologies and to use that experience as a means of questioning how hidden curricula are reproduced.

Hence, there is a need for speculative and future approaches to reimagine education alongside this critique, to believe that ‘other “flavours” and forms of technology are possible, that things can be otherwise’ (Selwyn et al. 2021), and ultimately to posit a bold response to what we demand from automation or ‘how collectively defined, preferable futures might be built and described with confidence by university communities’ (Bayne and Gallagher 2021: 622). In this paper, we seek to avoid ‘a simple repetition of a common refrain about how we are trapped by the limits of our own imagination unable to find radically alternative trajectories for possible futures’ (Markham 2021: 384) while acknowledging the challenges involved in methodologically reimagining automation, how it comes to be, and what role it might serve in our educational future. Instead of assuming that automation is intrinsically a problem, we posit that these technologies provide an opportunity to interrogate how hidden curricula materialise. By exploring how hidden curricula are transmitted, this paper disrupts the taken-for-grantedness that pervades much of the use of automated educational technologies. In the following, we develop a methodology to enable any institution to assess and engage with educational automation based on an emergent form of ethical praxis. The methodology is grounded in the emancipatory anticipatory logics outlined above (Adams et al. 2009), offsetting the muting of agency that accompanies the colonisation of the future. The capacity to interrogate the hidden curricula in automation, then, is a means to realise agency and to provide the time and space for meaningful deliberation about the use of educational technology.

Situating Automation: Closures and the Discursive Patterns of Inevitability

Any future reimagining of automation in education will inevitably draw on or critique the discursive patterns of the past. Critique provides an opportunity to explore how automation is reordering or can reorder the existing structure of education: politically, socially, culturally, and economically. Critique alone, however, will not reimagine, for example, how the low-paid ‘invisible labour’ (Selwyn et al. 2021) resulting from educational ‘automation’ is fostered in these automated systems, how platform pedagogies have emerged alongside these parallel and overlapping developments in automation (Perrotta et al. 2021), or how this discourse around automation in education further accelerates the seemingly inevitable turn towards the privatisation of public education (Saltman 2020). The complementary question to this sound critique is ‘what education should be expecting (if not demanding) of automation’ (Selwyn et al. 2021: 8). Questions about how automation can be studied and what role academics can have in influencing the production and implementation of automation in higher education remain unanswered.

Perhaps understandably, the preponderance of literature on automation focuses specifically on artificial intelligence (AI) and its attendant impact on education. This is neither a recent development nor an ahistorical anomaly but an ongoing focus, as ‘histories of AI stretch back at least as far as the birth of computer science and cybernetics in the 1940s’ (Williamson and Eynon 2020: 223). Studies of AI in higher education tend to emphasise its pedagogical value (Fahimirad and Kotamjani 2018; Roll and Wylie 2016). Further research provides insights into the ways in which AI might impact the role of teachers (Felix 2020; Guilherme 2019) and how assessment and grading (Zawacki-Richter et al. 2019) and admissions (Marcinkowski et al. 2020) might be conducted, among other areas.

There has been less focus on disentangling the different kinds of automation in education that exist outside the discourses of AI. This secondary body of automation is notable for its pre-programmed nature: dialogue flows in chatbots for common questions on courses, automated assessments, and the automation that exists within ‘if this, then that’ logic statements as administrative data moves from one educational system to another. These are largely non-adaptive flows of activity that are not predicated on AI or machine learning interpretations. Examples from the research include the development of a teacherbot drawing on Twitter data to playfully explore pedagogy and posthumanism in a MOOC (Bayne 2015), the use of non-adaptive chatbots to surface conversations on complex topics (Roussou et al. 2019), and the speculative work on collectively defining the role of non-AI automation in teaching in a single institutional context (Breines and Gallagher 2020; Gallagher and Breines 2021; Gallagher et al. 2021). There is pedagogical and research utility in exploring these non-AI forms of automation largely due to their accessibility: those without technological skills can largely observe, critique, and design them. This is important insofar as equating automation solely with AI potentially reinforces its seeming inevitability and the ‘discursive patterns [that] continually strengthen the dominant frames of inevitability and powerlessness’ (Markham 2021: 384).
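To illustrate what such pre-programmed, non-adaptive automation can look like in practice, the minimal sketch below (a hypothetical illustration, not drawn from any of the cited projects) shows a keyword-matched chatbot reply and an ‘if this, then that’ administrative rule. The course details, thresholds, and messages are invented for the example; the point is simply that the outcomes, and the routes to them, are fixed in advance, with no machine learning involved.

```python
# A minimal, hypothetical sketch of non-adaptive educational automation:
# pre-scripted chatbot replies and an "if this, then that" administrative rule.
# All names, thresholds, and messages are illustrative assumptions.

FAQ_FLOW = {
    "deadline": "Coursework is due on Friday at 16:00. Extensions go through the course office.",
    "reading list": "The reading list is on the course page under 'Materials'.",
}

def chatbot_reply(message: str) -> str:
    """Return a pre-scripted answer if a known keyword appears; otherwise hand off."""
    text = message.lower()
    for keyword, reply in FAQ_FLOW.items():
        if keyword in text:
            return reply
    return "I can't help with that; your query has been forwarded to a member of staff."

def flag_for_welfare_email(days_inactive: int, threshold: int = 14) -> bool:
    """'If this, then that': if a student is inactive past a fixed threshold,
    trigger a templated check-in email. No interpretation, no adaptation."""
    return days_inactive >= threshold

if __name__ == "__main__":
    print(chatbot_reply("When is the deadline for the essay?"))
    print(flag_for_welfare_email(days_inactive=21))
```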

Hidden curricula potentially exacerbate biases in automation technologies, much in the same way as other technologies do (Noble 2018; Benjamin 2019). The role of automation in perpetuating and even reimagining these hidden curricula is predicated largely on its own hiddenness. Automation amplifies the hidden curricula of education by often removing the capacity for nuanced interpretation of a particular educational activity, both in terms of noting who, how, and where it privileges particular groups, and in terms of its robustness as a learning activity. With rare exceptions where automation is deliberately used to provoke interpretation and reflection on these and other issues (Bayne 2015; Breines and Gallagher 2020; Roussou et al. 2019), the outcomes of an automated activity are pre-decided, and the available sequences for arriving at these outcomes are bound in a relatively small set. Teacher and student agency in response are potentially muted and, as a result, the curricula embedded in the automation are submerged and eventually taken for granted.

Hidden curricula can be ‘buried in the material functioning of the interfaces themselves as well as the imagined affordances we are building into their features’ (Markham 2021: 397). Technology also imposes a hidden curriculum of its own: ‘the responsive software architectures of digital media are our new hidden curricula, reschooling adults and children alike in new modalities of knowing, perceiving, and acting’ (Adams 2017: 238). In this sense, technologies are hidden curricula in themselves in the ways they shape thinking. Within them, there is a noted shift in code and processes away from traditional understandings of human intelligence, characterised by relational, emotive, and inductive cognition, towards intelligence that is highly computational and ‘other-than-human’: a distributed, relational, emergent, and, crucially, not necessarily carbon-based mode of thinking (Marenko 2017: 2–3). When intelligence is distributed across automated educational systems in this way, the curricula hidden within that distribution become even harder to see.

Some automation technologies have become part of everyday practices and have thereby become less visible and taken for granted, whereas other technologies are imposed without academics having a meaningful say about their development and implementation. Combined with the intensifying extraction of academic labour in many universities (Angervall and Beach 2018; Loveday 2018), automation is often understood in the context of the top-down implementation of technologies for efficiency and the possible future replacement of teachers with machines, however unlikely (Selwyn 2019). It is understandable that such initiatives are met with scepticism among academics because of concerns about how automation shapes education and removes agency and power from teachers. On the other hand, automation in higher education is an opportunity for rethinking how teaching and learning are organised. There are methodological opportunities to arrange automation in ways that positively shape hidden curricula; again, ‘by defining and articulating a future we find desirable, we begin to build it’ (Bayne and Gallagher 2021: 622). Hidden curricula can be reimagined and re-embedded in the structures of education through critical yet playful exchanges and orchestrations of human and non-human forms of intelligence, which can provide opportunities for holding ‘space open for that which cannot be yet imagined and which is always yet-to-come’ (Amsler and Facer 2017: 4).

To enable institutions to take more proactive roles in embedding ethical praxis in their use of automation, the rest of the paper outlines a method that academics and institutions can use to identify and challenge hidden curricula in automation or, more broadly, in other educational technologies.

A Methodological Response Through Interrogation, Play, and Activism

Identifying and unmasking hidden curricula in digital educational technologies requires an innovative methodology that can facilitate this process within the interactions between humans and technology. Post hoc evaluative methodologies that rely on observable phenomena left behind by the introduction of automation in educational contexts are problematic in this respect for two reasons. First, these methodologies are compromised by how automation in education is still ‘working its way through’ (Bloch 1995) and not fully knowable in the present. Second, introducing new forms of automation into educational contexts without fully understanding their impact is ethically problematic.

Both these reasons suggest the need for an engagement with speculative approaches. Speculative methodologies are aimed at ‘envisioning or crafting futures or conditions which may not yet currently exist’ and ‘to bring particular ideas or issues into focus’ (Ross 2017: 215) by focusing on ‘“what if” questions about the sorts of social worlds we want, as well as those we may want to avoid’ (Ehret and Čiklovan 2020: 708). Speculative methods are open about their impact on the emergence of a social event, and do not ‘work to conceal the ways in which social science research processes already actively intervene in the events they study’ (708).

Speculative methodologies are characterised by engagement with uncertainty and treat emergence as a generative quality. Veletsianos (2010: 15) notes that many educational technologies are ‘not yet fully understood’ and ‘not yet fully researched or researched in a mature way’. The practices and pedagogies emerging in engagements with these technologies remain poorly understood or in a state of ‘not-yetness’ (Collier and Ross 2017: 8). Speculative approaches to automation in education therefore carry complexity, uncertainty, and risk, alongside technological and pedagogical practices that are largely unknown. Rather than being seen as a flaw in research design, this uncertainty can be generative, particularly in imagining new futures: not a response to any perceived ‘productivity deficits in teachers, or to replace teachers, but rather to explore how an assemblage of teacher-student-code in automation might be pedagogically generative’ (Bayne 2015: 465).

These methodologies are, or can be, participatory. The nature of this participation comes in many forms: photovoice, using photographic techniques to identify, represent, and enhance the community (Catalani and Minkler 2010); speculative fiction as a means of interrogating taken-for-granted elements of public discourse (de Freitas and Truman 2021) and specifically on the role of AI in education (Cox 2021); FutureCrafting as a means of exploring the contingencies around automation and AI and the blurring divides between human and machine (Marenko 2018); playful technological interventions to explore how new connections, couplings, and coalitions may emerge from a playful space where human actors (teachers and students) and non-human actors intertwine (Bayne 2015); speculative design workshops to elicit alternative prototypes to automation in education and surface positions of teaching in increasingly technological landscapes (Gallagher and Breines 2021; Breines and Gallagher 2020; Gallagher et al. 2021); and digital storytelling characterised by participatory media production, orality, and creative writing (Cunsolo Willox et al. 2013), to name but a few.

These methods are, by the very nature of their focus, delicate balances between future speculation, accessibility, and agency. ‘If it strays too far into the future to present implausible concepts or alien technological habitats, the audience will not relate to the proposal resulting in a lack of engagement or connection’ (Auger 2013: 12). Speculative methods require a ‘bridge to exist between the audience’s perception of their world and the fictional element of the concept’ (12). This bridge might take many shapes: it can be institutionally bound (as in the case of Bayne 2015; Gallagher and Breines 2021), providing some measure of accessibility for those participating in the speculation, or ontologically bound in a particular subject (as in the case of exploring the intersections of race and technology in Benjamin 2016). These bridges must be in place to avoid discursive closure: discursive patterns that ‘continually strengthen the dominant frames of inevitability and powerlessness’ and serve to limit the potential of alternative imaginaries (Markham 2021: 384).

Speculative methodologies are, inevitably, engagements with the future, a future that ‘cannot be considered, therefore, as a blank canvas waiting to be filled in nor is it a predetermined world waiting simply to be inhabited’ (Facer and Sandford 2010: 25). These futures are embedded throughout education as ‘assumptions about and aspirations for the future underpin all levels of educational activity: from learners deciding what to study in the light of their aspirations for their future lives, to national debates over the curriculum and teaching methods that will best equip societies for future social, economic and cultural worlds’ (Facer and Sandford 2010: 3). Automation in education further embeds or negates these assumptions, creates new ones, codifies them, and amplifies them until they become self-generating and ultimately taken for granted. Discursive closure is felt most readily here, as the ‘dominant frames of inevitability and powerlessness’ (Markham 2021) negate the agency needed to imagine alternative futures.

As such, it is important for institutions to engage with futures that are not of their own making. What we propose in this paper is an attempt to hold ‘space open’ (Amsler and Facer 2017: 4) for that engagement by drawing on critical anticipatory practice that moves between four stages:

a rehabilitative one of understanding past knowledges and possibilities which are latent in the present, a utopian one of imagining other realities that might emerge, a disappointing one of learning the limits of this knowledge and imagination as they interact with existing social forces, and a creative one of actively pursuing the realization of the alternative by transforming the fundamental conditions of its possibility… (Amsler and Facer 2017: 8)

By acknowledging the past, we rehabilitate the possibilities of the future and remove, potentially, its colonisation by futures that are contextually inappropriate, ones that do not fit into institutional cultures and practices (Sheraz et al. 2013). The hidden curricula become a site of interrogation, critique, and rehabilitation, without which a discursive closure (Markham 2021) of the future takes place.

Through several research projects focused on digital education, automation, and educational futures at the University of Edinburgh, this methodology emerges from a body of participatory design workshops, interviews, and focus groups with staff and students at the university from 2017 to 2020. Their perspectives were used to generate preferred futures of higher education at the University of Edinburgh in the Near Future Teaching project from 2017 to 2019 (Bayne and Gallagher 2021) and to articulate a range of prototypes for automation for use in teaching in the Expanding the Teacher Function project from 2019 to 2020 (Gallagher and Breines 2021; Breines and Gallagher 2020). Both of these projects emphasised co-creation, participatory design, speculative methods, and embedding project outputs (either articulated futures for teaching or ways that automation can be used in teaching, respectively) in the values stated by those participating. In the latter project, many saw automation as a potentially positive variable that could help provide a broader sense of pastoral support, dedicated instruction, and an expanded sense of what teaching might be. The insights from these projects led us to recognise that hidden curricula persist in automation even when it is institutionally co-designed (Gallagher et al. 2021) and that these same hidden curricula are often more difficult to identify and critique when bound in digital technologies, particularly automation.

Much work has been done on interrogating hidden curricula in ‘traditional’ teaching situations. Rammel and Vettori (2021) use hidden curricula to frame the ‘latent’ social and cultural aspects that underpin all teaching and learning efforts, structures, and processes and note their often desultory impact on institutional transformation. Cotton et al. (2013) note that this latency is at least partly a result of less densely codified formal curricula, which provides a context where multiple hidden curricula exist. The lack of guidance on how to identify and manage this latency and multiplicity in digital education has become increasingly clear to us. The methodology described in this paper is a response to this need for guidance. While it is drawn from past projects and has been used in different settings, it is only through the process of writing this paper that it has become a coherent method that can be implemented more widely.

Merging Speculative and Future Approaches with Activism

The three stages of the methodology presented in this paper represent ‘not that of a straight-line trajectory but a loop’ (Markham 2021: 399), which cycles back on itself at intervals. This cycling back might result from the introduction of a new educational technology or from the evaluation of an existing output of this process with automation. Ultimately this loop serves to mitigate a general decline into taken-for-grantedness, in which pathways, knowledge of the process itself, and indeed all hidden curricula are rendered invisible (Markham 2021). This is especially acute with automation technologies in education, where alternative pathways and processes are not always apparent due to unfamiliarity with how these technologies function, where they are currently employed, and what impact that has on the individuals working through them.

Cycling through these stages and their explicit imperatives to interrogate the hiddenness of the curriculum, both in education and in automation, is meant to reveal alternative paths. This potentially virtuous loop of methodological activity can disrupt the ‘negative feedback loops in social ecologies’ where ‘discourses are normalised or locked into repetitive loops’ (Markham 2021: 392). In such loops, ‘certain practices or technological designs are thus removed from any chains of causality or results of decision-making, so that they seem like processes that just exist’ and, as such, ‘they become value-free routines or routine ways of thinking, removing both agency and the origin point’ (Markham 2021: 392). We note the parallels to the taken-for-granted nature of the hidden curricula of both education and automation, and we engage with this methodologically by surfacing these hidden elements in an attempt to reclaim causality, agency, and the origins of automation in education.

Teams of individuals, institutions, and sectors can engage with this methodology as needed. It is designed to be deliberate and routine, however, so the intervals at which these stages of activity are engaged should be patterned and predictable. It can be clustered around practice-based affinities, affiliations, and networks at the small group, institutional, or disciplinary levels. Ideas emerging from this methodology about a preferred conceptualisation of automation in education can radiate out to the sector through the conduits of professional bodies and networks, to institutional executive bodies, and to governmental bodies. Indeed, embedding the outputs of this process in tangible strategy, policy, governance, and labour bodies is an explicit part of the methodology itself. There are three stages to the methodology, and these are presented in the following sections alongside summary tables for each stage.

Stage 1: Rehabilitation Through Critique and Mapping

The first stage of this methodology is the ‘rehabilitative one of understanding past knowledges and possibilities which are latent in the present’ (Amsler and Facer 2017: 4). This is why this methodology is bound to an interrogation of hidden curricula. By acknowledging the past and present, institutions can rehabilitate the possibilities of the future and remove, potentially, its colonisation by futures that are contextually inappropriate, futures in which ‘the sense of the possible and the preferable can be occupied unthinkingly, inappropriately, and in all likelihood damagingly, by concepts developed in another context (and pursuing other interests) altogether’ (Sheraz et al. 2013: 180). The hidden curricula become a starting point for resisting this colonisation of what is possible: they become a site of interrogation and critique and then rehabilitation as more utopian and creative stages of activity follow. Without this initial stage of rehabilitation, discursive closure (Markham 2021) and a subsequent closing of the future (Bayne and Gallagher 2021) take place.

The first activity is a pre-mortem in which the methodology provides space to identify and immobilise the particular facets of automation in education that we cannot control, or particular biases embedded in hidden curricula that we cannot remove. It is designed to surface the a priori threats to the reimagining of automation in education and to note the cascading impact of those threats on the alternatives being imagined and enacted in this methodology. This pre-mortem can be a scenario design which presupposes that a failure has occurred (Eckert 2015) in the enactment of some preferred future of automation in education, with activity directed at identifying the essence of the failure in order to reverse engineer a more robust response to the identified threat. Or it can be a matter of discussion to surface actors and artefacts that impact this context of automation in education and to note how permeable they are to reimagining. Those that are seemingly impenetrable must be acknowledged as such and scope drawn around the methodological work accordingly (Table 1).

Table 1 The activities and outcomes of stage 1 in summary form

This pre-mortem provides a space to both critique and mourn the disappointment of ‘learning the limits of this knowledge and imagination as they interact with existing social forces’ (Amsler and Facer 2017: 8) or what Fisher (2014) refers to as the ‘memory of lost futures’ (22). These ‘lost futures’ are problematic insofar as they retain their ‘affective’ charge (Knox 2017: 5); we remain taken by them despite their seeming obsolescence. The pre-mortem provides a space to note their passing as a future, yet ‘the remains’ of their ‘material connections, technocultures and cultural memory’ are reintroduced as a sort of ‘toolkit for creative bricolage in the present’ (Dawney 2021: 411).

Based on the pre-mortem, the team then creates a series of mappings designed to identify and conceptualise the context in which this methodology is enacted. The first is the mapping of the existing hidden curricula and their material manifestations in institutional policy, strategy, practice, and technology. This mapping involves first identifying as many of these manifestations as possible and noting what implicit or overt messages they provide to students and staff about who belongs in higher education and what tacit information and practices allow that belonging to take place (see, e.g., Neve and Collett 2018). This is followed by a process of inductive coding of these material manifestations into thematic categories and axial coding into relational networks to demonstrate how one instance of the hidden curricula is related to, and reinforces, another. This is a significant and particularly challenging undertaking, and no claim can or should be made to comprehensiveness: hidden curricula are abundant, obfuscated, and interwoven. No single individual or body will necessarily be able to identify the entirety of the hidden curricula at work at a single institution. One possible approach to managing this complexity is to rely on primary accounts from those who have been historically marginalised in higher education, to make them part of the team employing this methodology, and to allow their experiences to surface instances of hidden curricula.
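As a purely illustrative aid, the sketch below shows one lightweight way such a mapping might be recorded: each manifestation is tagged with inductive thematic codes, and axial relations are stored as links between manifestations so that reinforcing clusters can be surfaced. The manifestations, codes, and relations here are hypothetical placeholders rather than findings from any institution.

```python
from collections import defaultdict

# Hypothetical manifestations of hidden curricula, each tagged with inductive codes.
manifestations = {
    "late-submission penalty policy": ["deficit framing", "time privilege"],
    "English-only plagiarism detection": ["linguistic bias", "surveillance"],
    "timed online exams at fixed hours": ["time privilege", "access"],
}

# Axial relations: pairs of manifestations judged to reinforce one another.
relations = [
    ("late-submission penalty policy", "timed online exams at fixed hours"),
    ("English-only plagiarism detection", "timed online exams at fixed hours"),
]

def cluster_by_code(data):
    """Group manifestations under each thematic code (the inductive step)."""
    clusters = defaultdict(list)
    for manifestation, codes in data.items():
        for code in codes:
            clusters[code].append(manifestation)
    return dict(clusters)

def reinforcing_neighbours(name, links):
    """List manifestations relationally linked to a given one (the axial step)."""
    return [b for a, b in links if a == name] + [a for a, b in links if b == name]

if __name__ == "__main__":
    print(cluster_by_code(manifestations))
    print(reinforcing_neighbours("timed online exams at fixed hours", relations))
```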

The second mapping is targeted directly at identifying, categorising, and relating the different types of automation that exist at the institution. Insofar as possible, the first part of this mapping is to identify what types of automation exist at the institution, what degrees of transparency exist (open or black boxed) within those types, and subsequently what degree of critical interrogation and reimagining is possible. Once these automation types have been mapped, the task is to identify what teaching practices and pedagogical approaches these types support, modify, or circumvent and what messages this provides to students and staff about what types of education are explicitly encouraged or tacitly reinforced. As such, this second map is an attempt to understand how automation both acts as hidden curricula unto itself and reinforces existing hidden curricula.
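One assumed, illustrative way of holding this second mapping alongside the first is a simple inventory that records, for each automation, whether it is open or black-boxed and which teaching practices it touches. The entries below are invented examples, not a catalogue of any real institution's systems.

```python
from dataclasses import dataclass, field

@dataclass
class AutomationEntry:
    """One row of a hypothetical institutional automation inventory."""
    name: str
    transparency: str            # "open" (inspectable/modifiable) or "black-boxed"
    practices_affected: list = field(default_factory=list)
    messages_conveyed: str = ""  # what it tacitly tells staff and students

inventory = [
    AutomationEntry(
        name="similarity-score plagiarism check",
        transparency="black-boxed",
        practices_affected=["assessment", "feedback"],
        messages_conveyed="students are treated as suspects by default",
    ),
    AutomationEntry(
        name="FAQ chatbot on course pages",
        transparency="open",
        practices_affected=["student communication"],
        messages_conveyed="routine queries do not merit human contact",
    ),
]

def open_for_reimagining(entries):
    """Which automations can be critically interrogated and redesigned in-house?"""
    return [e.name for e in entries if e.transparency == "open"]

if __name__ == "__main__":
    print(open_for_reimagining(inventory))
```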

The third map is an affinity mapping of institutional values where values are surfaced and then clustered around intent, problem, or affinity (Martin and Hanington 2012). These articulated values are necessary expressions of academic communities taking on ‘the task of articulating confident, alternative imaginaries for the future of teaching in universities which re-introduce the values we want to teach and live by’ (Bayne and Gallagher 2021: 608). These values need not be specific to automation, although automation and its possible impact on education can provoke them: epistemic justice in the face of automation reinforcing knowledge hegemonies, the creative and critical work of teaching and teachers in the face of automated processes and data-driven personalisation, an ethos of pedagogical care and community orientation, and so forth.

The outputs of stage 1, the maps and the outputs from the pre-mortem, can be seen in Fig. 1. These maps can be engaged with separately, as three interdependent relational systems, or juxtaposed as ontological provocations. Markham (2021: 386) notes the provocative potential such maps might have: ‘It is not until the map has been turned upside or otherwise disturbed that we notice it was operating on our sensibilities in the first place … to create a predetermined narrative arc or more generally, a sense of inevitable continuation’. These three maps, alongside the work of the pre-mortem in determining the scope of the subsequent activity, provide a guide for the remaining two stages of this methodology. Teams can articulate, for example, the social justice issues involved and identify which ones to target in the remaining two stages based on an analysis of these artefacts.

Fig. 1 Outputs from the first stage of activity, including the limitations that the pre-mortem identified acting as a boundary for what is possible with this methodology

Stage 2: Play and Provocation

The second stage of this methodology is characterised by play and provocation as a means of ‘imagining other realities that might emerge’ (Amsler and Facer 2017: 4). To do this, the activities of this stage are directed at playful and practical engagement with these automation technologies to learn what is possible, what remains opaque and lacking transparency, and what can be done with that knowledge in the form of institutional and individual practice (Table 2). This stage allows for institutions ‘to play across the torn landscape of pedagogic automation’ (Bayne 2015: 457).

Table 2 The activities and outcomes of stage 2 in summary form

The first activity in this stage is a practical engagement with a range of both black-boxed and transparent automation technologies. For the purposes of this paper, we define transparent automation technologies as those that can be created by lay people within the institution (i.e. those without particular technological capacity, training, or skill). Predominantly, however, this activity relies on engagement with technologies as playful provocation, whereby automation technologies are deliberately manipulated in terms of their inputs and outputs. A limited range of examples drawn from the authors’ own teaching, from their respective programmes, and more broadly, is found in Table 3. This activity is critical to the development of ethical praxis in that it potentially surfaces the problematic and productive facets of automation in education and suggests that individuals have agency in this process, even if that agency is directed at the intentional manipulation of technologies lacking transparency.

Table 3 A sample set of activities that allow institutions to playfully and provocatively engage with automation
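As one sketch of what such playful manipulation of inputs and outputs might look like, the toy example below systematically varies submissions to a stand-in similarity scorer and records how the score shifts. A real plagiarism detection tool would be black-boxed, so the scorer here is an invented word-overlap function; the value of the exercise lies in treating outputs as objects of interrogation rather than verdicts.

```python
def toy_similarity(submission: str, source: str) -> float:
    """Stand-in for an opaque similarity scorer: proportion of shared words.
    A real institutional tool would be black-boxed; this toy makes the probing visible."""
    a, b = set(submission.lower().split()), set(source.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

SOURCE = "the hidden curriculum reproduces inequality in higher education"

# Deliberately varied inputs: verbatim copy, paraphrase, partial copy.
probes = {
    "verbatim": "the hidden curriculum reproduces inequality in higher education",
    "paraphrase": "unspoken lessons in universities perpetuate unequal outcomes",
    "partial copy": "the hidden curriculum shapes who feels they belong at university",
}

if __name__ == "__main__":
    for label, text in probes.items():
        print(f"{label:12s} -> score {toy_similarity(text, SOURCE):.2f}")
```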

What is critical in this stage of activity is that the institution engages with a range of technologies to practical and critical effect rather than with one discrete application. In this play, participants identify what is possible, what is not, what larger systems of automation exist in educational institutions, what role these technologies might have in how hidden curricula are made manifest, and what pedagogical approaches exist in response, if any.

The outputs that emerge from this initial activity around play and provocation allow the team to discuss the possibilities and problems found in this automation and to begin to engage with the incongruities and tensions that emerged from the play. In this discussion, the team draws on the maps from stage 1 to note how the practical dimensions of these automation technologies sit in relation to institutional values and the configuration of the hidden curricula. This process can be treated as potentially surfacing the emergent properties of ‘new relations that are “humanly worthy in process”, and to recognise and create opportunities for the emergence of these relations in new settings’ (Amsler and Facer 2017: 11; Bloch 1995). This playful activity and the outputs from stage 1 might represent ‘fertile ground for speculating about different trajectories that got us here, or different possible social configurations as a consequence of this alternate reality’ (Markham 2021: 398).

The third activity in this stage is to cohere all of this into an edited map of ‘preferred’ automation configurations at the institution, a set of institutional and pedagogical practices for engaging these configurations, and a statement of a preferred future for how the institution will position automation in education and what impact this position will have on hidden curricula.

Stage 3: Slow Activism

The third stage of this methodology is ‘a creative one of actively pursuing the realisation of the alternative by transforming the fundamental conditions of its possibility’ (Amsler and Facer 2017: 4). This active pursuit is largely directed towards refining the preferred future of automation in education that emerged from stage 2 towards dissemination and then subsequently embedding elements of that future in institutional policy, strategy, and practice (Table 4).

Table 4 The activities and outcomes of stage 3 in summary form

The first activity to be performed is to refine the preferred future emerging from stage 2—the statement of how the institution will ideally position automation in education and what impact this position will have on the hidden curricula—and prepare it for dissemination to a wide range of audiences at the institution. The work of refining this future for a multitude of audiences does two things. First, it directly embraces the idea of an educational institution as a space of significant complexity in which any preferred future will need translation to maintain its relevance to a diverse audience. Second, this complexity mirrors the complexity of how the hidden curricula intersect with automation itself, an intersection characterised by potential amplification and obfuscation.

Practically, this refinement involves creating several versions of the preferred future: for students, for teachers, for professional staff, for those in executive leadership, for labour unions, and for the broader community in which the institution is situated. All of these versions are accompanied by the map of ‘preferred’ automation configurations at the institution to provide a visual rendering of that preferred future of educational automation. Some versions include the set of institutional and pedagogical practices for engaging these configurations. All should include the map of the hidden curricula generated in stage 1 as a provocation to see this preferred automation in relation to its capacity to reinforce or reimagine the hidden curricula rather than to allow it to submerge into a state of taken-for-grantedness. What emerges from this activity is an understanding of how this preferred future is communicated to the broader institution, alongside a set of artefacts that speak to why this preferred future is defined as such.

The second activity in this stage is to transform that same set of institutional and pedagogical practices into policy and strategy positions for how automation in education is to be structured, managed, and evaluated. These policy and strategy positions, alongside the defined practices themselves, become the substance of the subsequent activism. They represent the material artefacts of ethical praxis in that the practices associated with automation become structurally dependent on what is prescribed within them: for example, if policy mandates that a plagiarism detection application must be used before assignment scores can be validated in marking schemes and exam boards, then teaching practice in relation to this automation is limited largely to compliance or subversion.

The third activity is to identify the structures and networks of the institution in which these policy and strategy positions, as well as the defined practices, are to be embedded and to begin activism towards embedding them in their instruments and bodies of work. This institutional activism is both necessary and slow, needing to ‘get at the varying levels of speed required […] when attempting to work at different levels of the sector to enact change’ (Page et al. 2019: 1317). As such, this third stage of the methodology is positioned in a much more elongated timeframe than the first two stages; it is meant to be an ongoing effort that acknowledges that ‘recalibrating’ the futures of the institution in relation to automation takes time and involves ‘action across many institutional and sectoral levels’ (Bayne and Gallagher 2021: 608). It is slow activism (Bayne and Gallagher 2021) and requires deliberate and sustained effort.

Ultimately, this third stage of activity is about understanding and transforming the institution itself by ‘extending accepted notions of distributed academic leadership, reconciling value pluralism through establishing common language and values, implementing permissive rather than prescriptive institutional strategy, and institutional praxis with respect to social justice and socially just collaborative working’ (Johnston et al. 2019: 203). The activism of this third stage of activity is designed to methodically transform how automation, the hidden curricula, even the preferred future of automation that this methodology stimulates, will course through complex institutional environments and produce, and reproduce, power.

Implications for ‘Knowing’ and Acting with Automation in Education: Ethical Institutional Praxis

What is presented in this paper is a methodology for ethical institutional praxis for automation, as well as for other educational technologies. This methodology underscores a deliberate admission that automation technologies, including the preferred futures emerging from within this methodology, are ‘through their design choices, building the terrain of future politics’ (Srnicek and Williams 2015: 153) both within the institution and more broadly in education. This political terrain is bound pragmatically in this methodology to the frame of the hidden curricula, noting how any curriculum ‘is not neutral but serves the interests of one social group over the others’ (Öztok 2019: 107). This methodology is designed to surface the taken-for-grantedness of particular hidden curricula and to note how the concept is ‘widely used to explain the reproduction of cultural hegemony and social inequity’ (Öztok 2019: 111). It aims to position this reproduction at the forefront of what the curriculum does, or is designed to do, making it explicit even where it is not overtly intentional.

On its own, identifying and reimagining hidden curricula represents a difficult undertaking: it ultimately draws critique to ‘educational institutions for reproducing implicitly the unequal opportunities, inequalities and exercises of power in the social order’ (Edwards 2015: 268), to how education alienates by forcing ‘us down pathways functional to the perpetuation of the existing order rather than allowing the pursuit of avenues which call out to us as particular subjects’ (Illich 1973: 51), and to how it effectively presents a discursive closure (Markham 2021) that ultimately acts as an agent of cultural reproduction (Öztok 2019: 107). Seeing hidden curricula through automation is an even more difficult undertaking due to their concealment in ‘remote, unaccountable, unethical systems’ (Williamson, Eynon, and Potter 2020: 112). Yet it is a necessary undertaking, one that attempts to recapture the university as a site of civic and social purpose (Pettinicchio 2012). We posit that a mechanism for that recapture is a commitment to the type of institutional ethical praxis that this methodology represents.

This methodology suggests future directions for the growing field of futures and speculative methodologies. These directions might include more methodological approaches that meaningfully combine critique, creativity, and concerted activism directed at institutional transformation. These approaches might begin to imagine ‘practices which speak back to power, where the direction of flow is not about “content” being delivered downstream by algorithm but about more open, agentive and productive spaces for both learners and educators’ (Williamson, Eynon, and Potter 2020: 112). They might help put the communities of educational institutions (teachers, students, professional staff, and leadership) to work in manifesting the ‘ethical stand that emphasises that people should be involved in the design of the technological futures that they want to inhabit’ (Baker 2018: 544). Critique alone will not reimagine how educational automation reinforces the power divides in the uneven political terrain of universities. Creativity alone will not speak to reconstituting these same divides. Activism, without creativity and critique, will fall prey to the discursive closure of automation and potentially serve to reinforce divides rather than transform them. Again, we return to ‘what education should be expecting (if not demanding) of automation’ (Selwyn et al. 2021: 8) and then critically, creatively, and actively realising those demands.

Limitations, Resources, Political Will, and Theoretical Insights

Realised fully, the methodology proposed in this paper requires significant leadership, resources, and institutional commitment. This represents a considerable limitation. Not all institutions share the same resources, the same conceptualisation of technology's place in praxis, or the same constitution of institutional values and ethos. Such an approach requires systemic transformation in educational institutions and significant political will; there is an unequivocal need to provide time, space, interdisciplinarity, and (design and pedagogical) practices to ensure that automation is institutionally and contextually relevant, provides opportunity for increased agency and presence of academic and student bodies, and is ethical. That need cannot be fully met at all institutions, nor for all within them. It also raises clear labour (Huybrechts et al. 2018) and design justice (Costanza-Chock 2020) issues in terms of who gets to participate and what demands participation places on them.

The limitations of any one institution in fully realising this methodology suggest that, at times, federated approaches of similar institutions, practitioner networks, or broader sectors might be the natural ‘home’ for this methodology. Without such adaptations, any transformation realised through this methodology will be uneven and poorly distributed across universities and within institutions. This is, in itself, a further indication of hidden curricula being enacted in these intersections of automation and education, and it raises the question of whether ‘education systems should continue to privilege individual and autonomous attainment at the expense of the capacity to exercise distributed agency in and through networks’ (Facer and Sandford 2010: 15). We believe that this methodology is transposable from institutions to sectors and networks, but not without significant adaptation: work around ‘reconciling value pluralism through establishing common language and values’ (Johnston et al. 2019: 203), identifying the political terrains of these federated networks, and addressing the practicalities of ‘freeing’ individuals from their institutional workloads to participate.

Despite these limitations, this methodology is unabashedly aspirational. It draws on the ‘capacity to aspire as a social and collective capacity without which words such as “empowerment,” “voice,” and “participation” cannot be meaningful’ (Appadurai 2013: 289). Aspiration carries through all three stages of this methodology in how it is positioned to allow automation to act as a vehicle through which we both imagine and realise a more ethical constitution of the educational institution. It allows us to identify the ethical praxis through which that institution would be realised in response to automation and how hidden curricula, if they are to remain hidden, might be more justly constituted. This methodology potentially provides some agency to fulfil these aspirations and to critically and confidently face an uncertain technological world.

Further, this methodology contributes to how we conceptualise and approach the future of educational technologies generally and automation specifically. It provides an understanding of the current state of research around automation in education, noting its predominant emphasis on critique in one instance and speculation in another. This methodology attempts to coherently bridge the two and to countenance and add to the position that when ‘one uses technologies he or she remains aware of their nuanced relationship to society, while when one theorizes about them they seem much more “brittle” and inflexible’ (Ratto 2011: 253). As discussed, this awareness is often muted in the taken-for-grantedness of automation in education. Yet it can be surfaced and made visible; it can be reimagined; and all of this can be predicated on nuanced understandings of use, critique, and ethical praxis.