Introduction

The future is always shaped and, to some extent, haunted by design decisions of the present and their future residues. Technology is not only ‘society made durable’ (Latour 1990); it also anticipates, concretises, and forecloses possible futures. As we build the future on our existing infrastructures, norms, practices, and values, the design of digital technologies, data subjectivities, and the logics of datafied organising become the grounds on which we build our educational futures. Predictive analytics, as increasingly used in education, are an example of a technology that is future-oriented by design. They make probabilistic assessments about the future based on datafied representations of the past and present. By design, predictive analytics accelerate the future-orientation of technology in terms of (1) how they direct users’ attention towards a future that is rendered predictable, (2) how they render this future manageable and require users to act, and (3) how they impact the futures of those who are subject to their predictions. In these three ways (and possibly more), predictions aim to reduce uncertainty in education and make educational futures manageable.

One prominent example of predictive analytics in education is risk prediction as offered in major learning management systems (LMS). Learning analytics—the algorithmic analysis of educational data—‘act as anticipatory devices that are embedded within the pedagogic routines of the classroom’ (Williamson 2016: 136). Put simply, learning analytics, like any other algorithmic system, work through categorisation and sorting. Data generated in such systems are analysed for patterns of similarity and clustered; the resulting categories then receive labels attached either automatically or (semi-)manually. A long history of research has shown how categorisation and measurement are used to introduce and uphold norms and order (and to define what or who falls outside of them), to maintain certain power relations, but also to create opacity regarding the origins of certain labels (such as rankings and scores), leading to a lack of accountability (Bowker and Star 1999; Fourcade and Healy 2013). Despite the critical scholarship about predictive learning analytics and their future-making implications in the context of education (Davies et al. 2022; Prinsloo 2019; Witzenberger and Gulson 2021), the centrality of risk in the design of educational technologies has yet to be addressed. Current research concerned with detailed studies of educational systems focuses on a variety of features, the ecologies of software, software developers’ values and ideologies, and infrastructural connections between various systems (Decuypere 2019; Jarke and Macgilchrist 2021; Manolev et al. 2019).

Understanding risk as integral both to modern society (Beck 1992) and to educational processes (Biesta 2013), our goal in this paper is to trace the politics of risk prediction and educational futures. In line with a long-standing argument in science and technology studies (STS), we argue that technologies, and more specifically data categories, always have politics (e.g. Aragona et al. 2018; Amoore 2013; Berg 1998; Suchman 1993; Winner 1980). For example, Macgilchrist et al. (2023) state in their commentary on this special issue that:

Designing technology for education is never only a problem-solving practice. It is always already about creating spaces for inherently political and affective sociotechnical futures relations. (Macgilchrist et al. 2023)

In this paper, we argue that the future orientation of educational technologies materialises in the form of design features. We contribute to this special issue by analysing design features of educational technologies related to risk prediction. To do so, we consider their politics for future-making through the ways in which they define what is problematic, what is thinkable, and what is desirable in education. We conduct a ‘feature analysis’ (Hasinoff and Bivens 2021), a methodology in which design features are understood as relational configurations of use practices, use situations, and users that co-construct and co-produce social reality. Empirically, we analyse how five widely used K-12 school education LMS—Blackboard, Brightspace, Canvas, ItsLearning, and Moodle—configure what is understood as educational risk through their design features. We analyse (1) the promises and aspirations these LMS promote to educators about the possibilities of a risk-free educational future; (2) how risk prediction features work across the five LMS, e.g., which different data categories they operate on; and (3) the resulting politics of who is perceived as a carrier of risk in education and who is called upon to act. We close with a discussion of the politics (and risk) of aspiring to risk-free learning and risk mitigation in datafied education.

Risk and Education

The relation between risk and education is an ambivalent one and has been central to educational research, governance, and practice since the 1970s–1980s. Back then, risk in education was understood either through an epidemiological model drawing on experiences in medicine, or through a constructivist model taking into consideration broader characteristics of both the student and the school in shaping educational processes (Novosel et al. 2012). An at-risk student was described as one ‘who is in danger of failing to complete school with a basic level of academic proficiency’ (Novosel et al. 2012: 348). Both risk models aimed at providing teachers and other educational actors with strategies for risk mitigation. The epidemiological model foregrounded individual students’ (learning) behaviour and aimed at providing students identified as being at risk with evidence-based interventions for ‘correcting’ their ‘risky’ issues (Novosel et al. 2012). The constructivist model recognised that risk is defined to some extent by broader societal, structural issues, including racial discrimination or economic inequality, as well as by related social and affective relations within a school. Within this model, students became carriers of risk both through their own and, by extension, their families’ individual characteristics (e.g., socio-demographics, family history, and geographic location) and through their behaviour (from school attendance to timely completion of assignments) (Finn 1993). Behavioural factors were seen as more easily ‘amenable by parents, school personnel and school programmes’ (Finn 1993: 2), but not solely related to a single class in which students exhibited problematic actions. With that, the classroom can be understood as a space in which to collectively encounter, critically interrogate, and approach various educational ‘risks’ and otherness tied to individual students and their (hi)stories. What is perceived as risk and who is defined as ‘risky’ are central questions defining the politics of the constructivist model of risk (Bialostok 2015). Despite this constructivist view, to date, definitions of risk and, correspondingly, success in education focus on behaviours and do not always account for students’ perspectives on their learning environment or their interests and well-being (van den Bogaard 2012). Rather, the educational process becomes a project of developing a certain future self (Posecznick 2015) in which risk can be used to govern behaviour (Bialostok 2015).

With the political and economic changes common to various societal domains in the 1980s–1990s, new perspectives arose on what successful and ‘good’ education, teaching, and learning are, which political and organisational frameworks allowed for achieving them, and which actors, including commercial ones, got to define the criteria for educational success and failure. This made visible diverging perspectives on goals and definitions of risk and success in education, varying from the idea of education as societal, political nurturing and caring, to the idea of market-ready efficiency and competition adopted not only by technology providers but also by education governance and policy-makers (Leemann and Imdorf 2019). Especially in this latter view, risk became the opposite of educational success in an ‘attempt to make education into a perfectly operating machine’ (Biesta 2013: x). Using the ‘soft language of autonomy, celebration and creativity for a neoliberal imaginary’ (Mooney Simmie and Moles 2020: 385), however, this view on risk complicates possibilities to engage with a diversity of possible ‘risks’ in a classroom. An ‘effective’ teaching intervention, typical of an epidemiological, evidence-based understanding of risk, becomes a pedagogical and political instrument (Biesta 2007). Biesta (2007, 2013) addresses this as a shift in education from teaching to learning, in which it is education that is ‘at risk’, since it loses its ability to engage with uncertainty and deprives learners of opportunities to develop their subjectivity and relations with each other, along with the risks that accompany social interaction. In the data-driven context, risk is identified by setting thresholds and tiers within the categories of success, both through educational technologies and through government policies (Bradbury 2019; Breiter and Jarke 2016; Grant 2022; Hardy and Lewis 2018). The data-driven categories and risk thresholds are then used by schools and individual teachers for their interventions, often leading to educational ‘triage’ or ‘intervening with the right children’ (Grant 2022: 226). As Grant (2022) notes, such educational triage is rooted historically in discriminatory practices.

The ongoing datafication of education—the translation of educational processes into digital data—and particularly learning analytics continue to reframe the definitions of risk and success. For example, data analytics are framed as a new way of knowledge production, promising school management and school authorities the ability to identify patterns in data otherwise hidden from educators. Originating in business, this framing benefits educational organisations by providing them with measurable criteria for management, accountability, and governance (Ferguson 2019; Prinsloo 2019; Selwyn 2019). Much academic work in learning analytics, however, is directed at identifying behavioural patterns of students at risk, developing assessment strategies drawing on such behavioural data, and nudging to solicit desired behaviours from students (Raković et al. 2023; Russell et al. 2020). More recent perspectives on learning analytics reflect on the moral and practical implications of this practice (Cerratto Pargman et al. 2023).

Beyond the educational domain, predictive risk practices and scoring have been extensively conceptualised in relation to consumer behaviour (Fourcade and Healy 2013; Kiviat 2019). This scholarship acknowledges some benefits of risk prediction but points out that the beneficiaries are the organisations initiating prediction and building their business models on selling or avoiding risk (Kiviat 2019). For digital, automated technology, risk-driven marketing strategies can curb investments and divert attention from other pressing but difficult-to-solve matters, as observed in the flood of warnings about the existential risks of automated models in 2023. Against this backdrop, it is crucial to understand how exactly various educational technologies frame risk through their design features and what kinds of future politics they envision in relation to risk.

Analysis of Risk Features in Educational Technologies

This paper is based on a study of leading international LMS in K-12 education that provide risk analysis and risk prediction features. Increasingly, risk and risk mitigation have become a design feature of LMS. Such systems are designed to capture processes of teaching and learning in digital data. These systems promise to support functions central to learning management, allowing educators to create, design, and manage courses, educational content (text, videos), and assignments. LMS provide features for managing student data, grading, documentation, reporting, and communication with classes. Risk prediction and mitigation can be broadly understood as part of the reporting and class management features of LMS. Some LMS data are produced automatically, for example, by logging users’ behaviour and interactions within the system, such as time spent on certain tasks, the number of tasks solved, or courses taken. Other data are the result of both automated and manual labour, for example, test scores, teachers’ grades, course attendance data, and uploaded solutions to given tasks. Increasingly, LMS complement these functions with (predictive) learning analytics. We analyse and compare five LMS—Blackboard, Brightspace, Canvas, ItsLearning, and Moodle—to identify what they define as ‘at risk’ or ‘risky’ and who is responsible for risk mitigation. Specifically, we examine their websites, user handbooks and documentation, available ‘best practice’ cases, and the interface design.

To sample the analysed LMS (see Fig. 1), we initially searched for ‘learning management system’, ‘lms list’, ‘school lms’, and ‘student at risk analytics’ with the help of several search engines in English and German, identifying several articles scoping LMS for schools and teachers to choose from, as well as an e-learning industry database of LMS (see Online Resource 1 for a full LMS list). To identify whether these LMS provide any learning analytics and particularly risk prediction, we studied their websites and user documentation available online in multiple steps, first excluding the systems which did not consider K-12 or did not provide any information on monitoring student learning, commonly described by LMS providers with terms such as ‘Reporting and Monitoring’, ‘Dashboard’, and ‘Analytics’. In the next step, 31 LMS were analysed according to their target audience, promises, units of analysis (student, classroom, or district level), and the data used for the learning analytics. Systems not providing any risk analytics, or providing such analytics only at district level, were excluded. In each step, we also excluded any instantiations of other systems to avoid overlap. The final sample for the in-depth feature analysis includes five LMS: Blackboard, Brightspace, Canvas, ItsLearning, and Moodle.

Fig. 1 The LMS sampling process: a flowchart of five steps, from the initial search, through the application of exclusion criteria (no learning analytics, no risk prediction, or no focus on schools) in the three middle steps, to the final sample of five LMS

Blackboard Learn [1] is an LMS based in the United States and is now used in around 100 countries with over 16,000 clients and 100 million users worldwide, according to its website. This learning platform, with built-in software and integrations, is used in K-12, higher education, business, and government. Brightspace [2], based in Canada, has around 1100 clients and over 15 million users worldwide. It is used in the education sector, both K-12 and higher education, as well as in the association, corporate, and government sectors. Canvas [3] is a US-based LMS maintained by Instructure that has over 7000 clients with more than 30 million global users. This web-based platform is used in K-12, higher education, and corporate education. Canvas uses dual licensing: off-the-shelf products are offered alongside an open-source licence that enables external contributors to further develop the system (reviewed by Instructure-hired engineers) and to create additional instantiations, whilst some of the Canvas features run solely on proprietary code. ItsLearning [4], used for both K-12 and higher education, is developed by a Norwegian company and has 20 million users across Europe and more worldwide. It is a modular platform that provides possibilities for integrating various further systems. Moodle [5], initially developed and maintained by headquarters based in Australia, is the world’s most widely used learning platform according to the provider company and is used in 237 countries with over 300 million users. It is an all-in-one open-source learning platform for K-12, higher education, and the workplace. As an open-source platform, Moodle works with certified partners worldwide on the system’s further development, whilst individual developers can modify and customise their products and add their own servers. All five LMS are established systems used in multiple countries worldwide, providing our analysis with comparable yet different use cases.

Following Hasinoff and Bivens (2021), we conduct an analysis of the risk prediction features of these LMS to reconstruct the (educational) problem to which these features are offered as a solution. The method consists of four phases, from identifying systems addressing a certain social problem—the management of learning in our case—to identifying and categorising their features. The final step suggested by Hasinoff and Bivens involves speculation about alternative future systems. Instead, we included a discussion of risk politics to complete the application of the method. We qualitatively code the design features the LMS provide to define and predict risks, the kinds of data used to do so (e.g., performance data, interaction log data, and sociodemographic data), and the actions LMS recommend to educators and students for risk mitigation. The features are primarily derived from the user and support documentation for each LMS. Where available through demo licences or video recordings, the LMS interfaces were also analysed. To link the framing of risk to LMS design, we elucidate what input data are used to predict the risks and what promises LMS make towards educational actors interested in risk mitigation. Amongst similar methods such as the walkthrough (Light et al. 2018), feature analysis highlights the material and discursive forms of these promises in LMS design.

The starting point for feature analysis is that design is often understood as a solution to a specific problem. Some have conceptualised this as techno-solutionism (e.g., Morozov 2015). Importantly, feature analysis does not treat problems as simply existing in the world, with technical designs providing solutions to them; rather, it holds that through the design of specific features, (social) problems are constructed in a particular way. Design, therefore, always addresses only particular aspects of the social, assembling particular practices, subjectivities, and materialities that constitute a (social) problem. Design renders this assemblage so that the technological feature can respond to it. Any part of the social problem lying outside the scope of technological features, and especially structural societal challenges, is written out. Design is therefore a political act in which today’s definitions of societal problems define what will be important in the future, who will be part of this future, and how their actions will be managed to achieve it.

Defining ‘Risk’ in Education through LMS

In this section, we present the results of the feature analysis of the five LMS, specifically attending to how the LMS frame risk, which features are implemented to do so, and how they propose to manage risk relations between educational actors. All the analysed LMS can be used in a classroom at school as well as in online settings. To enable risk prediction, however, LMS require extensive use, usually facilitating online or blended learning settings, in order to provide algorithmic models with enough data input for risk identification and prediction. Overall, such input data range from in-system interactions of learners and teachers (system and content access, clicks, and duration of interactions with the system or content) to performance data (submitted assignments, grades, and in-system test results) to social interactions within the system between students and/or the teacher (writing and commenting on posts and providing or reacting to feedback). To help teachers and administrators view and make sense of these data and predictions, LMS provide various visualisations, such as graphs of learners’ progress over time, network charts of learners’ interactions with one another, or dashboards combining various graphs based on individual or classroom data.

LMS Promises and the Risks of Education

First, we analysed the risk features of the five LMS based on their marketing materials and software documentation (e.g., websites), with a particular focus on promises regarding learning analytics. Interestingly, none of the analysed systems specifically advertises features of risk prediction or mitigation; they only describe these—to varying degrees of detail—in their user guidelines. Rather, the promises of learning analytics mention the future success of learners and, by extension, teachers (and even the whole world in the case of Moodle [6]) in different educational settings.

Despite the individual differences between the LMS in the extent of risk analytics, all providers make similar promises about the role of the analytics in their products in supporting teaching and learning processes. For example, Brightspace promises ‘individualised improvement suggestions by providing context and a holistic view of the learner’ and ‘personalised insight about individual students in order to design specific interventions or instructional plans’ [7]. Blackboard also speaks directly to teachers, promising that they will be able ‘to deliver life-changing teaching and learning experiences with unmatched pedagogical autonomy’ to keep the ‘learners engaged and on track, inside and outside the classroom’ [8]. Brightspace’s ‘Student Success System’ [9] aims to predict students’ final grades to identify those ‘at risk’. Blackboard, in addition, prioritises student retention, foregrounding the temporality of risk prediction and assessment:

The earlier you can discover and remedy problems, the more successful students become. They’re more likely to finish your course and continue the programme through to graduation—the ultimate retention goal for an institution. [10]

This focus on retention is specific to Blackboard’s framing of risk prediction. Other systems such as Canvas, however, also foreground that course analytics and data about students’ interactions with course materials are ‘useful info when it comes to adapting instruction’ [11], shifting the responsibility to act towards teachers. Similarly, ItsLearning promises a ‘holistic overview of student learning with data sent directly to your reports generator’ through its reporting features. ItsLearning’s ‘Student 360° Report’ analytics enables teachers to achieve ‘better planning and understanding of teaching and learning, enabling actions and decisions that lead to success for all students’ [12] based on a ‘deep-dive into each learner’s performance through easy-to-follow, visual infographics’.

Whilst these four LMS foreground the features’ positive impact on students’ success and increased flexibility in teaching, Moodle provides a comparison between the learning analytics of the LMS and regular reporting tools, emphasising the benefits of the former: ‘Log entries, whilst very detailed, are not in themselves descriptive of the learning process. … Much more context is needed around each micro-action to develop a pattern of engagement’ [13]. Moodle positions its analytics in contrast to ‘descriptive’ reports, stating that ‘[m]ost commercial solutions are descriptive only’, whilst ‘Moodle Analytics’ features are a ‘full learning analytics solution [that] will help […] not only predict events, but change them to be more positive’ [14].

Whilst this should clarify the difference between a ‘regular’ and a ‘holistic’ view on teaching and learning, it falls in line with the other LMS promises addressing the positive impact of analytics without specifying how they specifically contribute to students’ acquisition of knowledge or to personal and social growth and change. Hence, according to the ways in which these LMS risk features are promoted by their ed tech providers and what they are promised to deliver, success and positive learning outcomes are defined through student retention (and, ultimately, graduation), high grades, and high engagement with the LMS, course content, teachers, and other students. Educational risk is understood as failing in one or all of these aspects. Blackboard additionally highlights teachers’ responsibility and accountability towards the goals of educational institutions, at the same time defining students’ graduation as one such central goal. In the next step, we provide a more detailed look at how risk prediction works in LMS in order to shed light on the criteria of risk and success.

How Risk Prediction Works: The Rule of Timely Interaction

In each of the analysed LMS, features are clustered according to their functions and goals. These clusters receive titles recognisable to teachers and are available both in the LMS navigation menus and in the user documentation. Depending on the LMS business model, these feature clusters need to be activated separately by educational organisations (ItsLearning) or are included in the basic functionality (Brightspace, Blackboard, Canvas, and Moodle). In some of the analysed LMS, all features related to risk prediction are assigned to one separate cluster: for example, Blackboard’s ‘Retention Center’ specifically supports risk analytics for teachers and course instructors. Brightspace provides a ‘Student Success System’ and a related ‘Students at Risk widget’ which draws on data from the former. Canvas and Moodle address their predictive risk features as ‘Analytics’, whilst Canvas allows not only teachers but also school administrators to view descriptive (risk) analyses for the whole school or for particular courses. Similar features are available in ItsLearning within its ‘360° Reports’. Moodle specifically provides a predictive model titled ‘Students at risk of dropping out’.

To examine these sets of features in more detail, we analysed documentation, user guides in which the LMS providers explain the ways in which they implemented learning analytics in their products, and interface design. These explanations ranged from detailed documentation of algorithmic models and suggestions regarding the preparation of course content and of the datasets required to train a specific model (Moodle and Brightspace) to a brief note on technical requirements for the analytics feature (e.g., cookies) and a reminder that ‘Page view data should not be used to assess academic integrity’ [15] (Canvas). The five analysed LMS differ in the particular kinds of input data used for defining risk, the algorithmic means to identify and/or predict risk, and the degree to which customisation of the algorithmic model is possible.

Overall, these kinds of data can be organised into four broad categories, as sketched below. First, all systems rely on access and temporal data to monitor system access and the duration of in-system interactions. Canvas can also explicitly monitor online course attendance if such a feature is enabled in a school by its administrators. Second, each analysed LMS uses data about students’ performance, such as grades and submitted assignments. Third, at least some systems (e.g., ItsLearning and Moodle) specifically address and monitor in-system interaction with course content, such as viewing and reading available resources. Finally, all LMS rely on data about in-system collaborative learning practices for risk identification and prediction. Such social interactions include students’ participation in online discussions, viewing and giving peer feedback, and interactions with the teacher within the system. The dominant focus on in-system interactions and the temporality of LMS interactions underscores that current features for risk identification and prediction are predominantly designed for blended and online learning settings.
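To make these four categories concrete, the following minimal sketch (in Python) bundles them into a single record of the kind such analytics operate on. All field names are our own hypothetical illustrations and do not correspond to any vendor’s actual data model:

```python
from dataclasses import dataclass

# Illustrative sketch only: a record covering the four broad categories of
# input data the analysed LMS draw on. Field names are hypothetical.
@dataclass
class StudentActivityRecord:
    # (1) access and temporal data
    logins_last_week: int
    minutes_in_system: float
    # (2) performance data
    grade_average: float            # e.g., percentage of achievable points
    assignments_submitted: int
    assignments_due: int
    # (3) in-system content interaction
    resources_viewed: int
    resources_available: int
    # (4) in-system social (collaborative) interaction
    posts_written: int
    peer_feedback_given: int
```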

All systems provide some features of risk analytics. Blackboard and Canvas provide solely imperative (Heuer et al. 2021) risk prediction, whilst Brightspace and Moodle also provide customisable predictive models based on machine learning (ML) techniques of data analysis. Moodle allows for the greatest extent of customisation. Finally, ItsLearning markets risk prediction as possible based on the data generated in the system and provides some case studies with best practices for doing so via the available reporting features, but does not specifically advertise any risk prediction features. Our analysis shows, however, that ItsLearning supports multiple features comparable to the imperative analytics of the other LMS in our sample.

Imperative risk analytics can be understood as a descriptive algorithmic model monitoring whether any student is below a threshold on one or several pre-defined (imperative) datapoints. For example, for the datapoint ‘assignment submitted on time’, common to most of the analysed LMS, teachers set due dates for each assignment in the course as such thresholds. If a student submits their assignment before or on the set date, they are marked as successful (and usually also receive a green-coloured symbol). Should the student submit their assignment after the deadline set by the teacher, they are identified by the system as below the set threshold and therefore ‘at risk’, especially if they repeatedly submit their assignments late. Should the student miss the assignment altogether, they are marked as ‘at risk’.
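The logic of such an imperative check can be sketched as a simple comparison against the teacher-set threshold. The following is a minimal illustration of the ‘assignment submitted on time’ datapoint as described above; the function and label names are our own and do not reproduce any LMS code:

```python
from datetime import date
from typing import Optional

# Minimal sketch of an 'imperative' risk rule: a purely descriptive
# comparison against a teacher-set threshold (here, an assignment due date).
def assignment_status(submitted_on: Optional[date], due_on: date) -> str:
    if submitted_on is None:
        return "at risk"        # assignment missed altogether
    if submitted_on <= due_on:
        return "successful"     # typically shown with a green-coloured symbol
    return "at risk"            # submitted after the teacher-set deadline

# A student submitting two days late is flagged as 'at risk'.
print(assignment_status(date(2024, 3, 12), date(2024, 3, 10)))
```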

Predictive models as used in Brightspace and Moodle, in contrast, not only provide a snapshot of students’ current progress but also make predictions about their future performance based on the data available to the LMS analytics. To do so, Moodle requires the teacher to prepare the dataset for a predictive model and customise its settings, whilst Brightspace provides automated ‘building’ of a model by comparing predicted and actual data (also based on historical data from previous courses [16]). To customise the model, various features available in Brightspace can be included or excluded (access data, in-system social interactions, and performance data). Teachers using Moodle always need to draw on data from previous similar courses as a training dataset for the ML-based model, which can later be adjusted. In contrast to the detailed recommendations for model customisation in Moodle, Brightspace does not provide much detail as to how its predictions are designed.

Furthermore, both Brightspace and Moodle combine at least two sets of datapoints to predict risks. For example, Brightspace uses various combinations of performance data (submitted assignments, projects, and tests), in-system social interaction (e.g., participation in discussions), and temporal data [17]. Moodle combines datapoints representing students’ in-system content interaction with in-system social interaction. Moodle’s customisation includes different time intervals used to update predictions, the widest range of datapoints used for prediction amongst the analysed LMS, and features to evaluate the ML-based training of the model, which might point to mistakes in the training dataset. In addition, Moodle includes ‘teacher presence’ as part of the predictive model, which we discuss in more detail in the next section. For both systems, teachers can customise exactly which data are used for the predictive model by including or excluding certain data categories, such as in-system social interaction.
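The general workflow described here—training on data from a previous, similar course and scoring current students at set intervals—can be approximated as follows. This is our own illustrative sketch, not either vendor’s documented model; the classifier choice, feature columns, and all numbers are invented for demonstration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch only: a dropout predictor trained on (fabricated)
# data from an earlier course. Feature columns mirror the datapoint
# categories described above: [system accesses, posts written, assignments
# submitted]. Neither Moodle nor Brightspace publishes its model in this form.
X_train = np.array([[12, 5, 4], [2, 0, 1], [9, 3, 4], [1, 1, 0], [7, 2, 3]])
y_train = np.array([0, 1, 0, 1, 0])  # 1 = dropped out of the earlier course

model = LogisticRegression().fit(X_train, y_train)

# Score current students; predictions would be re-run at set time intervals.
X_current = np.array([[3, 0, 1], [10, 4, 4]])
dropout_risk = model.predict_proba(X_current)[:, 1]
print(dropout_risk.round(2))  # students above a chosen threshold are flagged
```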

The output data from imperative and predictive models are displayed to teachers in the form of various visualisations: as trends of individual students’ development over time (ItsLearning), as the student’s position in relation to the whole class at the current point in time (Canvas), or in relation to the predicted average (Brightspace). Blackboard, Brightspace, and Moodle support alert functions with which teachers can receive a notification if any of their students fall below the set thresholds for a number of available datapoints used to define risk, or can continuously monitor students ‘at risk’ through a dashboard. All five LMS furthermore support communication with students who have been identified as ‘at risk’, such as sending a personal message. In ItsLearning, which does not specifically define ‘risk’ in LMS design, these communication features are available from the overview page displaying students’ progress trends.

In sum, all systems rely on in-system interaction and temporal access data. There is no leeway for teachers to adjust the risk status of a student based on their knowledge of individual (hi)stories. Rather, risk and success visualisations and dashboards show and compare patterns of success and failure in relation to in-system interaction. We could not find comprehensive documentation of how different assignments or different interaction patterns can be weighted in risk prediction according to their didactic or pedagogical importance, although at least Brightspace and Moodle allow weighting for grading. In contrast to teachers, students do not see their individual risk status; rather, they can be alerted if they miss assignment deadlines or can see their personal progress towards completing the course. By showing students their progress, LMS assume that students can change their ‘risky’ behaviours.

Carriers of Risk and Their ‘Risky’ Behaviours

Each LMS uses a different set of datapoints to define risk. Based on these input data and customisation levels, each system configures its own definition of who and what is a ‘risk’, ‘risky’, or ‘at risk’, and of who is responsible for risk mitigation. Whilst all the LMS promise teachers early identification of various educational risks to help students succeed, their promises indicate different kinds of relations between teachers, students, and LMS, depending on how risk is defined.

Blackboard’s, Canvas’, and ItsLearning’s imperative risk analytics provide teachers with pre-defined categories of risk, each determined by a set of datapoints. Each of these systems supports the prediction of one or several ‘types’ of risk. For example, Blackboard identifies risk as students having low grades, having missed deadlines, having low in-system interaction, or not logging in. These four types of risk are presented as central for student retention and for preventing drop-out, the latter being the ultimate educational risk according to Blackboard. Canvas allows teachers to monitor students’ attendance, grades, discipline (off- and online), and content interaction patterns (e.g., page views, submitted assignments, and online participation such as writing posts). ItsLearning, whilst not specifically defining ‘risk’, provides features to monitor trends—progress over time—in students’ activity, specifically addressing system access and its duration, in-system content interaction, and completion of assignments. All of these datapoints aim to describe students’ learning journeys and configure—explicitly, as with Blackboard, or implicitly, as with ItsLearning—some of the students as ‘at risk’. Remarkably, teachers’ professional assessments such as grades constitute only a small part of the risk indicators for all five analysed LMS. Rather, educational risk is predominantly defined through technologically manageable access, temporal, and interaction data.

In Brightspace, risk is defined according to a combination of students’ performance (e.g., submitted assignments) and various types of in-system interaction and temporal data. Brightspace provides predictive models for a student ‘success index’, ‘social learning’ (in-system social interaction), and performance [18]. The ‘Success Index’ is central for defining risk in Brightspace, with the types of risk being under-engagement (high performance and low system access), withdrawal or dropout (low performance and low system access), and low academic performance (high system access and low performance) [19]. Furthermore, Brightspace allows three different levels of risk (risk, potential risk, and successful) to be defined for this index by setting a percentage threshold for the relevant datapoints; e.g., a student receives an at-risk status only if their performance is below 60% of what is expected [20]. These definitions of risk levels, however, can only be undertaken by administrators and not by individual teachers. Here, Brightspace defines risk in opposition to success, where only the combination of certain behavioural patterns of in-system interaction, temporal data, and high academic performance determines success. Similarly, Blackboard also positions risk in opposition to successful, high-performing students based on in-system interaction, as it promises teachers: ‘You’ll see patterns and recognise when you need to help at-risk students succeed and prevent high-performing students from getting bored’ [21].
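A hedged reconstruction of this ‘Success Index’ logic might look as follows. The threshold default, the mapping of risk types to levels, and all names are our own assumptions based on the description above; Brightspace does not document the exact computation:

```python
# Hedged reconstruction only: risk types arise from combinations of
# performance and system-access levels relative to an administrator-set
# percentage threshold. Labels follow the text; the mapping of the
# under-engagement type to the 'potential risk' level is our assumption.
def success_index(performance_pct: float, access_pct: float,
                  threshold: float = 60.0) -> str:
    high_performance = performance_pct >= threshold
    high_access = access_pct >= threshold
    if high_performance and high_access:
        return "successful"
    if high_performance and not high_access:
        return "potential risk: under-engagement"
    if high_access:                       # but low performance
        return "risk: low academic performance"
    return "risk: withdrawal or dropout"  # low performance, low access

print(success_index(performance_pct=85, access_pct=20))  # under-engagement
```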

Moodle also draws on certain in-system interactions to identify students at risk, prioritising the risk of dropping out. In contrast to Blackboard or Brightspace, however, Moodle ties students’ behavioural patterns to what the LMS addresses as ‘teacher presence’—the availability of certain learning activities defined by a teacher within the given course (e.g., an assignment involving students’ interaction with people outside the class, or feedback, revision, and resubmission of assignments within a course). Students’ risky behaviour is only identified in relation to the behaviour with the highest possible interaction patterns within a given course design. Amongst the analysed LMS, this is the most thorough consideration of teachers’ didactic strategies. Moreover, according to the user documentation, Moodle ties its risk indicators to a theoretical framework [22] foregrounding participants’ ability to construct meaning, identify with the group, communicate with each other in a trusting environment, and develop relations with each other as central to learning. Each of the datapoints and their clusters is positioned within the elements of this framework and compared to the course design developed by the teacher. In sum, whilst also relying on students’ behavioural patterns, Moodle allows more flexibility in risk identification and prediction as it attempts to connect in-system activities with broader objectives of educational progress, such as helping students build relationships.

In contrast, Brightspace’s ‘Social Learning’ presents a social network of students, grouped either according to the levels of their in-system interactions with other students or according to their ‘Success Index’, indicating which students of which risk levels interact with each other. Whilst the LMS using imperative risk analytics configure particular behavioural patterns as ‘risky’, the ‘social learning’ chart provided by Brightspace defines ‘risky’ individual students. So far, we could observe a shift in the framing of students from being ‘at risk’ to being ‘risky’ only in Brightspace. Whilst this framing can only be derived from one LMS feature, it illustrates the possible far-reaching consequences of the individual learning and individualised teaching promoted by all the LMS. Some students can exhibit risky behaviour which teachers may try to correct by altering their teaching practices. Others, however, can pose a ‘risk’ to the class based on their in-system activities, such as low system access or little in-system social interaction.

The LMS feature analysis shows that none of the systems actively includes students’ sociodemographic data in their analysis and prediction of risk. Other systems and plug-ins exist which allow such data interrelations (e.g., PureData and Powerschool). These latter systems, however, predominantly target school administrators or educational actors at district and higher levels to monitor the quality of education. The exclusion of sociodemographic data in the analysed LMS supposedly helps to mitigate possible discriminatory uses of such systems. At the same time, as early scholarly and practitioners’ discussions about students at risk show, what has been widely acknowledged as a risk for education are broader contextual—social, demographic, economic, communal, and psychological—factors which prevent or hinder students from participating in educational activities in the first place (Finn 1993). The focus on risk as a behavioural trait measured through various kinds of in-system interaction shifts attention from broad, structural, societal threats to equity in education towards individualised patterns of LMS use (Whitman 2020). Some systems, such as Blackboard, share ‘best practices’ with teachers on how to engage with students displaying ‘risky’ behaviour, including advice on communication, student time management, or the anticipation of students’ technical difficulties [23]. Even though some of these ‘best practices’ acknowledge that students might experience different issues possibly hindering their learning with LMS (from technical difficulties to a need for counselling to emergencies), all the strategies ascribe to teachers the role of managers of students’ learning processes. The LMS provide teachers with only a few opportunities to manage risks together with students (most being a shortcut for an email/chat) and rather call for adjusting instructional design.

Politics of Risk and the Future of Risk-Free Learning

Within STS, scholars have long argued that technology design has politics. Scholars in critical data studies have joined this chorus by emphasising the politics of practices and decision-making processes along the data life cycle, from data creation, data processing, and labelling to the interpretation of data (e.g., Denton et al. 2021; Kitchin and Lauriault 2014; Hartong and Förschler 2019). In this paper, our attention was on the politics of risk prediction features. Such features may be understood as materialisations of ‘sociotechnical imaginaries [that] act as powerful aspirational and normative visions of preferred forms of social order’ (Williamson 2017: 16). One of the key analytical points we drew attention to in our paper was the aspiration of risk prediction features to create risk-free learning futures. With that, we provide evidence from the educational domain for the literature exploring the implications of risk categorisation and prediction (Fourcade and Healy 2013; Mooney Simmie and Moles 2020) and for current perspectives on responsible learning analytics (Cerratto Pargman et al. 2023).

Using the methodology of ‘feature analysis’ (Hasinoff and Bivens 2021), we identified what LMS frame and construct as risk in and for education, and how. Whilst the LMS and the risk analytics features we analysed are arguably intended primarily for online (or at least blended) teaching and learning, we argue that, although this might be true today, these systems shape and render manageable risk-free learning futures for all educational settings. We show that the basic assumption of such features is that education and educational success are continuously ‘at risk’. Whilst the LMS differ in how they detect risk or what they perceive as risk, all analysed systems frame risk through phenomena that the system can capture: system access, in-system interactions, and their temporalities. Moreover, we notice a shift from students ‘at risk’ to ‘risky’ datafied representations of students. In the analysed LMS, it is not only students who are written out and rendered as data subjects performing certain interactions within the system (Jones and McCoy 2019; Selwyn et al. 2022; Witzenberger and Gulson 2021). Whilst teachers are expected to take responsibility by mitigating some educational risks, the few active roles assigned by LMS to teachers as a form of risk management consist primarily of communication (e.g., writing emails) with students.

In contrast to studies of risk prediction based on consumer behaviour, where people are judged and evaluated based on their life and consumption choices (Fourcade and Healy 2013; Kiviat 2019), students using LMS are restricted to a set of possible actions that they either are or are not able to perform with the system. The social aspects of learning also predominantly concern different forms of in-system interaction with content or other users. The LMS, however, mostly lack features for risk mitigation. In this regard, the promises of LMS providers to help solve educational problems, recognise risk early, and better understand risky behavioural patterns seem to favour marketing strategies based on uncertainty. As Beck (1992) argued, ‘risks signify a future that is to be prevented’ (p. 33). Drawing on our feature analysis, we find that the LMS do not necessarily add new forms of risk to education but redefine existing ones in relation to technological systems: a risk of non-interaction, of too little time spent in the system, and of too low a number of system users in a particular school. From that perspective, risk indicators work well in predicting the future financial risks of the companies providing the technology, as many scholars have pointed out (Fourcade and Healy 2013; Kiviat 2019).

An excessive focus on classification and risk prediction, however, has been shown to lead to a lack of support for the students who most need it (Grant 2022). The challenge of risk politics in education, then, lies not only in the disappearance of students and teachers as subjects in educational technologies. Rather, it is the implications of rendering risk and success categories into manageable indicators and commercial features that are concerning. In his book Do Androids Dream of Electric Sheep?, which became the basis for Blade Runner, Dick (1968) writes about a specific model of androids that were prone to going rogue: ‘We produced what the colonists wanted’, Eldon Rosen said. ‘We followed the time-honored principle underlying every commercial venture. If our firm hadn’t made these progressively more human types, other firms in the field would have. We knew the risk we were taking when we developed the Nexus-6 brain unit’ (p. 43). Hence, in the story, the scientists built a technology even though they were aware of the associated risks to society. Likewise, in educational research, we already know of the risks of risk-free education, in terms of a shifting of educational responsibilities, accountabilities, and priorities, and a foreclosing of educational futures for certain students. If the goal of a critique such as ours were to promote alternative ‘positive’ values for good educational features, such as well-being or collaborative learning, we would fall into the trap of technology criticism (Morozov 2015). Such a critique would effectively propose to exchange the current indicators of risky behaviours for some new ‘better’ categories. These new categories would then, too, be turned into business assets or evaluation criteria on the educational markets (Hansen and Komljenovic 2023; Robertson 2022; Williamson and Komljenovic 2022).

It is then worth asking: do we need to change the design of risk for better educational futures, or do we rather need to envision and work towards solving the structural societal problems that lead to various risks for education? Resisting the dreams of perfectly manageable societal processes, and the categorisations of social reality that go along with them, might be a step towards the latter (McQuillan 2022). Following Biesta (2013), we argue that rather than pursuing a risk-free education, we need to acknowledge that both educational processes and societies are full of risks and uncertainties, some of which are productive whilst others hinder good education. Instead of attending to what and who should be defined as ‘risky’ (Bialostok 2015), the politics of risk could shift towards identifying the situated conditions in which risk becomes ‘risky’.