Supporting self-regulated learning with learning analytics interventions – a systematic literature review

In recent years, scholars have shown increasing interest in supporting students' self-regulated learning (SRL). Learning analytics (LA) can be applied in various ways to identify a learner's current state of self-regulation and to support SRL processes. It is important to examine how LA has been used to identify the need for support in the different phases of the SRL cycle, which channels are used to mediate the intervention, and how efficient and impactful the intervention is. This will help learners achieve the anticipated learning outcomes. This systematic literature review followed the PRISMA 2020 statement to examine studies that applied LA interventions to enhance SRL. The search terms used for this research identified 753 papers in May 2021. Of these, 56 studies included the elements of LA, SRL, and intervention. The reviewed studies contained various LA interventions aimed at supporting SRL, but only 46% of them revealed a positive impact of an intervention on learning. Furthermore, only four studies reported positive effects for SRL and covered all three SRL phases (planning, performance, and reflection). Based on the findings of this literature review, the key recommendation is that all phases of SRL be considered when planning interventions to support learning. In addition, more comparative research on this topic is needed to identify the most effective interventions and to provide further evidence on the effectiveness of interventions supporting SRL.


Introduction
Self-regulated learning (SRL) is often described as a process that includes the setting of goals, the monitoring of progress, and the regulating of learning (Järvelä et al., 2019; Pintrich, 2000). Empirical studies have consistently demonstrated that students do not use efficient self-regulation strategies when learning in virtual environments (Akyildiz & Kaya, 2021; Azevedo et al., 2019; Broadbent & Poon, 2015; Pedrotti & Nistor, 2019). As a result, learners may not achieve the expected learning outcomes (Lee et al., 2019; Shyr & Chen, 2018; van Alten et al., 2020). This situation has motivated both teachers and researchers to design ways to support SRL. Panadero (2017) reviewed six different SRL models and concluded that a common feature of the models is the division of SRL into three phases: planning, performance, and reflection. This three-phase model was initially introduced by Puustinen and Pulkkinen (2001). Although the SRL models reviewed by Panadero share commonalities, they vary in their focus. For example, Boekaerts and Corno's (2005) model focuses on motivational aspects of regulated learning, while Efklides's (2011) model focuses on metacognitive aspects, such as metacognitive experiences of students' learning regulation. Winne and Hadwin (1998) built their model on information processing theory and focused on the effective use of cognitive strategies to enhance learning. Hadwin et al.'s (2011) model considers not only individual regulation but also the ways regulation of learning can be shared between group members. Finally, the models of Pintrich (2000) and Zimmerman (2000) can be described as general models of regulated learning, since they consider motivation and metacognition to be central to SRL.
SRL phases comprise several subprocesses that differ by model but are conceptually close. For instance, Boekaerts and Corno's (2005) model includes identification, interpretation, primary and secondary appraisal, and goal setting in the planning phase of SRL. Efklides's (2011) model includes task presentation in this phase. Hadwin et al. (2011) include planning in this phase. Pintrich (2000) assigns forethought, planning, and activation to this phase. Winne and Hadwin (1998) allocate task definition, goal setting, and planning to it. Finally, Zimmerman (2000) locates forethought (task analysis and self-motivation) in the planning phase of SRL. These subprocesses are actions learners take in the early phase of the learning process, and they involve estimating how the learning task can be accomplished.
Many studies have suggested that SRL could be enhanced by interventions (Ceron et al., 2021; Lee et al., 2019; Wong et al., 2019a). Interventions can assist at-risk learners in improving their learning performance (Espinoza & Genna, 2021) or can help reduce undesirable learning behaviour (Lodge et al., 2019). Furthermore, interventions can be used to increase students' mastery goals (Zheng et al., 2020), self-efficacy (Samuel & Warner, 2021), and perceived value of a course (Alamri et al., 2020). Interventions have also been found to decrease test anxiety among students (Putwain & von der Embse, 2021). However, the numerous SRL subprocesses call for different kinds of interventions performed at the right time in the learning process. These needs set the requirements for the methods used to create effective interventions.
The rest of this paper is organised as follows. The second section explains how LA is applied to enhance SRL. The third section presents previous reviews focusing on the themes of this review. The fourth section explains the method used to conduct this review. The fifth section reports the results of our study, and the sixth section discusses them. Finally, the last section concludes the article.

Learning analytics as a tool to support SRL
Learning analytics (LA) is "the measurement, collection, analysis, and reporting of data about learners and their contexts for the purposes of understanding and optimising learning and the environments in which it occurs" (Siemens, 2013, p. 1382). In this paper, LA methods refer to the set of methods used to support the measuring, collecting, analysing, and reporting of these data. The reporting of the data is mediated to learners through different means, which we refer to as channels. LA can be used to analyse and report learner performance and to highlight the parts of learning where SRL processes could be improved (Lodge et al., 2019; Winne, 2022). LA for SRL has two elements: calculation and recommendation (Winne, 2017). These two elements are built within the LA model's four stages: data generation, tracking, analysis, and action (Romero & Ventura, 2020). Data is generated when the learner performs different activities in an online learning environment. The trace data from these activities can be used as input for the calculation, analysis, and tracking of the learner's process. The action stage is when the instructor executes interventions to support learners' activities, such as SRL subprocesses. These LA interventions support changes a learner can make in their learning process to achieve better outcomes (Ifenthaler & Yau, 2020).
An LA intervention is a surrounding frame of activity used to mediate the data and reports created with the help of analytics (Wise, 2014). The LA dashboard is a typical channel for LA interventions (Jivet et al., 2018), and most interventions aim to identify at-risk students (Shafiq et al., 2022; Sønderlund et al., 2019). Sønderlund et al. (2019) have stated that early LA methods relied on fixed factors and could produce predictions only within limited settings, whereas contemporary dynamic models can be used in more complex learning contexts. However, Rienties et al. (2017) have argued that implementing an intervention remains the challenging part of the LA cycle, and the design of an intervention should be well established to succeed in its endeavour to support learning (Sedrakyan et al., 2020; Wise, 2014).

Previous literature reviews
A number of literature reviews have already mapped the state of the art in research on SRL, LA, and interventions. Bodily and Verbert (2017) studied the LA dashboards and recommender systems used to intervene in students' learning processes; however, their study did not include the perspective of SRL. Another study focusing on LA dashboards was conducted by Matcha et al. (2020), who studied LA dashboard interventions aimed at supporting SRL. Their study did not consider channels other than LA dashboards. Valle et al. (2021) studied learner-facing LA dashboards and found that the intended outcomes of LA dashboards are not matched by evaluations of their effectiveness.
The literature review by Pérez-Álvarez et al. (2018) explored tools supporting SRL in online environments. Their study found that most of the tools were used in higher education or general education. The most common means of supporting SRL was visualisation, and the most used indicators were either action-related or content-related. The most supported SRL strategies were goal setting, self-evaluation, help-seeking, and organisation. The studies they examined rarely explained the relationships between SRL strategies and tools. Their review found that studies measure the value of the tools in terms of usability (ease of use), usefulness (appropriateness of use), user satisfaction (acceptance by stakeholders), and learning outcomes. Wong et al. (2019a) reviewed 35 records supporting SRL in online learning environments and MOOCs. Their review studied the effectiveness of approaches to support SRL strategies and whether human factors influenced these approaches; the study did not include LA. The most used intervention types they identified were prompts (14 studies), integrated support systems (10 studies), and prompts with feedback (four studies). The remaining studies used various, rarely applied approaches. Despite this investigation, how well different approaches support SRL remained unclear. Sønderlund et al. (2019) explored the efficacy of LA interventions in higher education but did not include the perspective of SRL. Specifically, they reviewed studies examining the efficacy of LA interventions for student retention and academic success. The results they found were promising, as the studies identified that LA interventions increased grades and led to higher retention. However, they found only a few studies critically assessing the effectiveness of LA interventions for student success and retention.
A literature review by Viberg et al. (2020) focused on studies of SRL and LA in online learning environments. The researchers examined the current state of LA applications and how they measure and support students' SRL in online learning. Their results showed that most studies focused on the forethought and performance phases of SRL and less on the reflection phase. Viberg et al. (2020) concluded that LA was not used to support SRL in their sample of records. They argued that there was little evidence of LA improving learning outcomes or improving support for learning and teaching. Instead of supporting SRL, they found that LA research was focused on measuring SRL. Viberg et al. (2020) thus recommended that LA support mechanisms be exploited to foster student SRL within online learning. Araka et al. (2020) examined research trends regarding measurement and intervention tools for supporting SRL in online learning environments between 2008 and 2018. Their findings indicated that current tools simultaneously measure and implement interventions to support SRL. Araka et al. (2020) identified LA as one way to measure the efficiency of interventions. However, the LA methods in use were not extensively explored, and this perspective remains unstudied.
Previous literature reviews have examined LA, SRL, and interventions, but none of them has connected the three concepts. Researchers have focused on tools to support SRL (Pérez-Álvarez et al., 2018) and on learning analytics interventions in online education (Sønderlund et al., 2019), but the combination of these elements remains unstudied. This study connects these concepts for the first time. First, it is important to examine how LA has been used to identify the need for SRL support. Then, the channels used to mediate the intervention have to be selected to target the phases of the SRL cycle. Finally, the efficiency and the impact of the intervention have to be evaluated to find out how the selection of LA methods and channels for LA interventions fosters SRL. This will help learners achieve the anticipated learning outcomes (Azevedo & Hadwin, 2005; Lee et al., 2019; Shyr & Chen, 2018; van Alten et al., 2020). The research questions for this research are the following:
RQ1: What learning analytics methods have been applied to identify the needs for SRL support?
RQ2: What channels have been applied in LA interventions to foster SRL?
RQ3: Which phases of the SRL cycle were targeted by LA interventions?
RQ4: How do studies evaluate the SRL support efficiency and impact, and what kind of results were achieved?

Method
This research followed the PRISMA 2020 statement. PRISMA was chosen because it is widely used and suitable for reviews of educational interventions. PRISMA 2020 is also eligible for systematic reviews that include qualitative and quantitative methods, thus allowing a meta-analysis of effect estimates and their variances when these are available (Page et al., 2021). We selected a list of keywords (Table 1) to reduce the possibility of reviewer bias. During preliminary searches, we found that only a few studies used the exact term LA intervention, or intervention, to identify the supporting acts for SRL. Instead, these acts were described using various terms or keywords when presented as a study's experiment. Therefore, we decided to perform a broad search and include all records about LA and SRL. After the records were collected, they were checked manually for evidence of an intervention, since it is hard to find a keyword or combination of keywords that would ensure that all intervention-related records were captured. To reduce author bias, we discussed the records in meetings to reach unanimous decisions on which records to include in the review.
The keywords used to find relevant research identified records focusing on SRL and learning analytics. There are two ways to write self-regulation (i.e., with or without a hyphen), and both ways were used to identify the records including this topic. "Self-regulat*" was used to include all papers that have the keyword "self-regulation"; "self-regulating"; "self-regulated"; "self-regulate"; "self-regulates"; and so forth. The term "self regulat*" was used to include variants of the previous terms written without a hyphen. For learning analytics, we decided to include "educational data mining" and "learning analytics" due to the close relationship between the two often overlapping concepts (see Romero & Ventura, 2020).
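The screening logic implied by the search terms can be sketched in code. The following is a minimal illustration only: the helper `matches_query` is hypothetical, and real database engines (Scopus, Web of Science) apply their own tokenisation and truncation handling; here the `*` wildcard is approximated with the regex `\w*`.

```python
import re

# Search terms from Table 1: two SRL variants (with/without hyphen)
# combined with the two closely related analytics fields.
SRL_TERMS = [r"self-regulat\w*", r"self regulat\w*"]  # "self-regulat*" / "self regulat*"
LA_TERMS = ["educational data mining", "learning analytics"]

def matches_query(text: str) -> bool:
    """Return True if the text satisfies
    ("self-regulat*" OR "self regulat*") AND
    ("educational data mining" OR "learning analytics")."""
    lower = text.lower()
    srl_hit = any(re.search(pattern, lower) for pattern in SRL_TERMS)
    la_hit = any(term in lower for term in LA_TERMS)
    return srl_hit and la_hit

# The wildcard captures "self-regulation", "self-regulated", "self-regulating", ...
print(matches_query("Learning analytics for self-regulated learning"))  # True
print(matches_query("Self-regulation in MOOCs"))  # False: no LA term
```

Note that, as described above, records satisfying this query were still screened manually for evidence of an intervention, since no keyword combination reliably captures all intervention-related records.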
The inclusion criteria for the eligible record were the following:
1. The manuscript reported a learning analytics analysis or an intervention hypothesised to support or enhance learners' self-regulated learning.
2. The full text was available.
3. The record was written in English.
4. The record was a peer-reviewed empirical paper.
5. The record was published prior to the end of 2021.
We conducted comprehensive searches on Scopus (390 documents) and Web of Science (164 documents) in May 2022. Additionally, we used Google Scholar to search for records published by the Journal of Learning Analytics (142), the Journal of Educational Data Mining (35), and the International Conference on Educational Data Mining (22). These publication channels are not fully indexed in the Scopus or Web of Science databases. Therefore, we checked them manually, because they have been identified as having the most specific focus on LA and EDM (Romero & Ventura, 2020), and we expected to find studies focusing on the topic of the review.

Table 1 Systematic literature review search terms
Topic: self-regulated learning. Search terms: "self-regulat*" OR "self regulat*"
AND
Topic: learning analytics. Search terms: "educational data mining" OR "learning analytics"

We carried out the selection strategy and the literature review process (Fig. 1) following the inclusion criteria. The records obtained from the search were reviewed in three phases. In every phase, the authors held meetings to discuss the records and reach unanimous decisions on whether to include or exclude them. The first phase of examination focused on the title and the abstract of the studies, and the first author excluded records not meeting the inclusion criteria based on these elements. In the second phase, the first author read the full texts; if a record did not include the three elements of the research question, it was excluded. There were 47 records left (see Appendix Table 7) after phase two. In the last phase, the first author examined each record in detail and recorded the information from the studies. The first and second authors met repeatedly to agree on a coding scheme.
Then the first and second authors coded the papers independently, followed by a meeting to resolve disagreements; after consensus was reached, the first author completed the coding. Coding was used to describe the details of the studies and to enable mapping of the LA methods, the interventions applied, the phases of SRL that were focused on, the evaluation methods used to assess the outcome of LA interventions, and the impact of a study's intervention methods on the learning process or learning outcome. Table 2 shows the codification summary of the research questions. We synthesised the results of previous studies to answer the research questions and reported the findings. Diverse LA methods have been used to measure, collect, analyse, and report data about learners to understand and optimise learning (Siemens, 2013). The categorisation of each record's methods in this literature review followed the categories presented by Romero and Ventura (2020). They have suggested dividing the methods into the following categories: causal mining (CM), clustering (C), discovery with models (DM), distillation of data for human judgement (DD), knowledge tracing (KT), non-negative matrix factorisation (NMF), outlier detection (OD), prediction (P), process mining (PM), recommendation (R), relationship mining (RM), statistics (S), social network analysis (SNA), text mining (TM), and visualisation (V). Almost all the methods in the records could be identified and categorised using the definitions of Romero and Ventura. Two studies (Wise et al., 2014, 2016) applied methods that did not fit into these categories: integrated analytics used to convey to learners the expected quantity, quality, and timing of discussions (Wise et al., 2014), and a conceptualisation including LA as a part of the SRL process (Wise et al., 2016). Therefore, we allocated these records to the category of other methods (O).
We coded the SRL subprocesses into three phases (planning, performance, and reflection) following the division presented by Panadero (2017). This was done by examining the full texts using "subprocesses" as a search term to identify which subprocesses had been investigated in the studies, and then organising the subprocesses under the three phases of SRL. The impact of LA interventions can be measured in several ways, including retention, course completion, and dropout rates. LA interventions that promote a better retention rate improve the learner's continued study in a course (e.g., Freitas et al., 2015), whereas LA interventions that create greater completion rates reflect learners' successful completion of individual courses and thesis processes (e.g., Nouri et al., 2019). A reduced dropout rate from an LA intervention results in fewer learners failing to finish coursework (e.g., Cambruzzi et al., 2015). Other positive effects can include better engagement with a course (Chen et al., 2020), greater usage of learning management system (LMS) resources (e.g., Jovanović et al., 2021), and eventually improved course grades (e.g., Wong, 2017). Finally, the outcome of an LA intervention can be seen in the form of enhanced SRL processes (e.g., Roll & Winne, 2015). For the analysis, we considered these aspects as measurable impacts of LA interventions.
The reviewed studies used different effect sizes to measure the impact of their LA interventions. Due to the insufficient number of papers with similar research settings, we were not able to execute a proper meta-analysis. Instead, we decided to use both qualitative and quantitative studies to illustrate what kinds of outcomes were achieved in the reviewed studies. To prevent bias due to differences in how the reviewed records analysed different tasks, procedures, methods, and effect sizes, we did not convert the effect sizes to a common scale (e.g., Cohen's d; Cohen, 1992). Instead, we reported the effect sizes as presented in the original records. To enable comparison between different effect sizes, we used the interpretation guidelines given by Maher et al. (2013) to synthesise the results. The results of this approach are presented in Table 3. Other effect sizes (hazard ratio, β, F, and χ²) were used in the studies but were not comparable with this interpretation approach; these effect sizes are displayed in the results without interpretation.
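The kind of qualitative mapping applied here can be sketched as follows. This is an illustration only: the function `interpret_d` is hypothetical, and the thresholds shown are Cohen's (1992) conventional benchmarks, used here as an assumption; the review itself followed the Maher et al. (2013) guidelines, which are not reproduced in code.

```python
def interpret_d(d: float) -> str:
    """Map an absolute standardised mean difference (Cohen's d) to a
    qualitative label. Thresholds follow Cohen's (1992) conventions,
    assumed here for illustration."""
    d = abs(d)
    if d < 0.2:
        return "negligible"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

# Example: a reported d of 0.3 would be synthesised as a small effect.
print(interpret_d(0.3))  # small
```

Effect sizes on other scales (hazard ratio, β, F, χ²) cannot be passed through such a mapping, which is why they are reported without interpretation.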

Results
There were 56 studies included in the final analysis. The first studies about designing LA interventions were conducted by Bouchet et al. (2013) and Inventado et al. (2013). In their work, the researchers formed clusters of students and characterised the students belonging to different clusters (Bouchet et al., 2013), and identified effective long-term learning behaviours (Inventado et al., 2013). Interest in the topic has increased since 2013, and nine studies from 2021 focusing on the topic were found (Fig. 2). These more recent studies focus on implementing and evaluating the tools designed for LA interventions. The reviewed studies primarily examined higher education contexts, with a few examining workplaces (three studies; 5%) or public schools (one study; 2%). Jivet et al. (2021) included both higher education degree students and students in a professional development process. Four studies did not report the population they researched. The sizes of the samples involved in the studies varied: some studies had fewer than 10 participants (Cha & Park, 2019; Inventado et al., 2013; Wise et al., 2014, 2016), whereas the largest population consisted of 33,726 learners (Davis et al., 2017). The mean population of the reviewed studies was 1,398 participants and the median was 102 participants. Most studies used single courses as their learning context (27 studies; 48%), but some studies conducted their LA intervention across nine (Hardebolle et al., 2020) or ten (Haynes, 2020; Kia et al., 2020) courses. This trend of aiming to generalise LA interventions is relatively recent. There were different phases and perspectives in the development of the LA interventions. Some studies explained the early phases of their LA intervention design from a technological point of view (Frey et al., 2016; Manso-Vázquez & Llamas-Nistal, 2015; Siadaty et al., 2016b; Winne et al., 2019). Other studies examined the next phases from the perspective of the user's experience of the interface of the intended LA intervention (Cha & Park, 2019; Rohloff et al., 2019; Sonnenberg & Bannert, 2016), and the rest of the studies tested the effectiveness of the LA intervention in action.

RQ1: What learning analytics methods have been applied to identify the needs for SRL support?
While some of the studies (n = 14; 25%) supported SRL with a single LA method, it was more common for a study to use two (n = 16; 29%) or three (n = 18; 32%) methods. Some scholars even used four (Jivet et al., 2020; Matcha et al., 2019; Russell et al., 2020) or five methods (Afzaal et al., 2021; Sonnenberg & Bannert, 2015; Yu et al., 2018) to achieve their goals. The methods used in the reviewed studies are summarised in Table 4. The first column shows the learning analytics methods; as multiple methods can be applied, the overall number of learning analytics methods surpasses the number of reviewed studies. The number of studies using a method is displayed in the second column. A statistical approach was often used to provide content for visualisations (n = 19; 34%). Four studies used a combination of the three most used methods (Balavijendran & Burnie, 2018; Jivet et al., 2020; Siadaty et al., 2016a; Yu et al., 2018). In these studies, a learner's trace data was analysed using statistical methods and compared to that of their peers. The analysis result was then visualised for the learner and accompanied by recommendations on how to improve their learning processes.

RQ2: What channels have been applied in LA interventions to foster SRL?
Table 5 presents the different channels used for LA interventions. The first column displays the different channels used for LA interventions and the second column shows the number of studies applying these channels.
An LAD displays an LA intervention in a visual form. Embedded systems may include elements directly connected to the learning materials or assignments; for example, Bouchet et al. (2013) used a virtual pedagogical assistant to give hints, and Frey et al. (2016) implemented a system that helped learners rehearse question-asking skills for question-based dialogue. Prompting can provide directions to a learner at different parts of the learning process and help them improve their learning. The LA interventions mentioned so far were automated. However, there were also LA interventions with a high degree of individual human involvement. These include e-mails (Lim et al., 2020, 2021; Matcha et al., 2019; Menchaca et al., 2018; Nikolayeva et al., 2020), foldouts (Ott et al., 2015; Sedraz Silva et al., 2018), phone calls, and text messages (Herodotou et al., 2020). E-mail is an excellent tool for providing individual guidance and helping a student develop their learning process (Matcha et al., 2019). Foldouts (Ott et al., 2015; Sedraz Silva et al., 2018) also provide learners with information about their learning process.

RQ3: Which phases of the SRL cycle were targeted by LA interventions?
It was most common to foster the SRL subprocesses of the performance phase (e.g., giving hints on how to approach assignments): in total, 44 studies (79%) focused on performance subprocesses. Twenty-five studies (45%) targeted subprocesses of the planning phase (e.g., setting goals, time management), and 17 studies (30%) targeted subprocesses of the reflection phase (e.g., evaluating learning outcomes and identifying how to improve performance). Five studies (9%) did not specify the subprocesses they focused on and therefore could not be categorised. Figure 3 presents the division of the studies into the different phases.
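The percentages above are shares of the 56 reviewed studies, rounded to the nearest whole percent; because a study can target several phases, the shares sum to more than 100%. A small sketch of the arithmetic:

```python
# Counts of studies per targeted SRL phase, out of the 56 reviewed studies.
TOTAL = 56
counts = {"performance": 44, "planning": 25, "reflection": 17, "unspecified": 5}

# Each share is computed independently against the review total,
# so overlapping categories are expected to exceed 100% combined.
shares = {phase: round(100 * n / TOTAL) for phase, n in counts.items()}
print(shares)  # {'performance': 79, 'planning': 45, 'reflection': 30, 'unspecified': 9}
```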
Notably, only nine studies (16%; Bouchet et al., 2013; Hardebolle et al., 2020; Jivet et al., 2021; Lallé et al., 2017; Lim et al., 2021; Nikolayeva et al., 2020; Siadaty et al., 2016b; Wise et al., 2014; Wong et al., 2019b) focused on the entire SRL cycle. Manganello et al. (2021) applied all the phases of the 4C model, which is used in professional contexts. Connecting only two phases was more common: studies typically connected the performance phase with either the planning phase (12 studies; 21%) or the reflection phase (four studies; 7%). Connecting the planning and reflection phases was done only by Lu et al. (2017).

RQ4: How do studies evaluate the SRL support efficiency and impact, and what kind of results were achieved?
The studies evaluated the efficiency of an LA intervention in various ways. Studies focused on LA intervention design tended to use qualitative methods, such as interviews, the think-aloud method, and focus groups, to verify the usability, usefulness, and learners' perception of an LA intervention. Interviews (Cha & Park, 2019; Corrin & de Barba, 2014; Wise et al., 2014, 2016) were used to discover how learners perceived the functionalities and features of LA interventions and to find approaches that learners would be willing to use in their learning processes. Sonnenberg and Bannert (2016, 2019) used the think-aloud method to gain insight into these learner perceptions and to identify how useful a design was for a learner. In other cases, researchers used focus groups (Balavijendran & Burnie, 2018; Lim et al., 2020, 2021), observations of facial expressions (Bouchet et al., 2013), eye-tracking (Bouchet et al., 2013; Lallé et al., 2017; Munshi & Biswas, 2019), and physiological sensors (Lallé et al., 2017) to evaluate the effects of an LA intervention during its early developmental phase. Quantitative methods were used to evaluate the efficiency of an LA intervention during the execution phase. Trace data was used for this purpose by 34 studies (61%), making it the most used method. Pre- and post-intervention surveys were also popular evaluation methods: 23 studies (41%) used surveys, either alone or with trace data, to evaluate the efficiency of an LA intervention. The actual outcomes of participants' learning processes were evaluated in terms of grade distribution (Afzaal et al., 2021; Cody et al., 2020; Kia et al., 2020; Lim et al., 2021; Lu et al., 2017; Manganello et al., 2021; Ott et al., 2015; Sedrakyan & Snoeck, 2017; van Horne et al., 2018) and course completion (Davis et al., 2017; Herodotou et al., 2020).
Twenty-six studies (46%) reported a positive, measurable impact on learning from their LA intervention. Table 6 displays the impact of the LA interventions and the measurements of the impacts. The first column denotes the types of impact, the second column identifies the studies, and the third column shows the impacted variable. The fourth column presents the effect sizes measured in the studies, and the last column interprets each effect size following the principles introduced in Table 3.
In 13 studies (23%), the impact of an LA intervention was seen in the form of improved self-regulation. Nine studies (16%) achieved better learning outcomes. LA interventions resulted in better engagement with course materials in four studies (7%). Davis et al. (2017) and Russell et al. (2020) found improved course completion (4%), and Herodotou et al. (2020; 2%) reported an improvement in student retention in response to their respective LA interventions. Seven studies (13%; Bouchet et al., 2013; Haynes, 2020; Kia et al., 2020; Li et al., 2017a, b; Ott et al., 2015; Tabuenca et al., 2021) claimed the outcomes they observed were not promising or stated that there were significant limitations for practical implications. Four studies (7%; Frey et al., 2016; Manso-Vázquez & Llamas-Nistal, 2015; Munshi & Biswas, 2019; Siadaty et al., 2016b; Winne et al., 2019) were not able to evaluate the impact of their LA intervention (e.g., due to the early stage of development of the LA intervention).
The reporting of effect sizes varied among studies. In total, 16 studies (29%) reported effect sizes. Very large effect sizes were found by Afzaal et al. (2021)

Discussion
The goal of this paper was to systematically review how LA has been applied to enhance SRL. Many of the reviewed studies achieved promising results; many others, however, were not able to show an impact on learning. Despite the promising results, it is still unclear whether there are features that could predict the success of an LA intervention. Some studies report how learners perceive the impact of an intervention, whereas others report changes in the students' learning processes or learning outcomes. Previous studies have shown that even when students' self-evaluations suggest an improvement in SRL, trace data does not necessarily show a similar improvement, due to, for example, response styles or confidence measures (Tempelaar et al., 2020).
Almost two-thirds of the reviewed studies applied two or three LA methods. Nevertheless, increasing the number of applied LA methods does not seem to improve the effectiveness of an LA intervention. For instance, Afzaal et al. (2021), Sonnenberg and Bannert (2015), and Yu et al. (2018) each applied five LA methods in their studies, and the effectiveness of their LA interventions ranged from small to very large. If the number of LA methods is not the key to a successful LA intervention, then perhaps the combination of methods could be. Four studies used a combination of the three common LA methods: statistics, visualisation, and recommendation (Balavijendran & Burnie, 2018; Jivet et al., 2020; Siadaty et al., 2016a; Yu et al., 2018). A positive impact was reported only by Balavijendran and Burnie (2018), whose students reported that the LA intervention improved their learning. In contrast, Jivet et al. (2020), Siadaty et al. (2016a), and Yu et al. (2018) were not able to find an impact on learning, even though they included the four stages of the LA cycle (Khalil & Ebner, 2015; Siemens, 2013) and used a combination of methods.
The LAD was the most used LA intervention among the reviewed studies, though only 43% of the LAD interventions positively impacted learning. These success rates suggest that implementing a LAD alone is not sufficient to support SRL. Instead, how a LAD is used seems to be an unknown element affecting SRL. This may be due to the absence of support for dashboard design from the learning sciences with regard to core concepts of learning processes and feedback mechanisms (Sedrakyan et al., 2020). The features and characteristics of LADs differ (Bodily & Verbert, 2017), and the effective elements should be considered when formulating principles of LAD design. Closer attention should also be paid to aspects such as the demographics of the sample, the duration of the study, and the subjects studied in order to find the underlying variables affecting the unknown elements of LAD success.
Only nine studies focused on the entire SRL cycle. This finding is similar to that of Viberg et al. (2020), who found that the studies they reviewed focused less on the reflection phase. Araka et al. (2020) identified LA as one way to measure the effectiveness of LA interventions but found that the methods used for this purpose remained unexplored. Furthermore, Viberg et al. (2020) argued that there is little evidence of LA improving learning outcomes or learning support. In the current study, only four of the nine studies reported positive impacts on SRL (Lim et al., 2021; Nikolayeva et al., 2020; Siadaty et al., 2016b; Wong et al., 2019b). The channels used for the LA interventions and the LA methods differed across these studies. Nikolayeva et al. (2020) and Lim et al. (2021) delivered their LA interventions by e-mail. Siadaty et al. (2016b) used a LAD to give recommendations to learners. Nikolayeva et al., Siadaty et al., and Lim et al. applied statistical methods to support their LA interventions. Wong et al. (2019b) used discovery with models, accompanied by relationship mining and visualisation, to prompt learners. The common feature of all four studies can be seen in the learning outcomes: all interventions increased learning actions, e.g., working on task (Siadaty et al., 2016b), completed quizzes (Nikolayeva et al., 2020), interaction with activities (Wong et al., 2019b), and improved grades (Lim et al., 2021). Lim et al. (2021) was the only study that demonstrated an impact in the form of improved SRL. This is a promising result but requires further studies to verify it in different study contexts and with larger sample sizes. The results suggest that the effects of interventions focusing on planning and reflection can be seen in the outcomes of the performance phase.
However, the limited number of studies focusing on the entire SRL cycle creates a need to design more interventions covering all SRL phases and to conduct comparative research to find the most effective interventions as a basis for intervention design principles.
The reviewed studies reported the impact of their LA interventions on learning in various ways. Both qualitative and quantitative methods were used at different stages of an LA intervention's development. While 26 studies (46%) reported a positive impact on learning, only 14 studies reported enough detail to make the results interpretable and to enable proper comparison. A similar problem was identified by Sønderlund et al. (2019): in their review, only three studies critically assessed the effectiveness of LA interventions for student success and retention. Sønderlund et al. (2019) explored the efficacy of LA interventions in higher education and evaluated them from the perspectives of student retention and academic success, which is a rather narrow perspective for evaluating the impact of an LA intervention on learning. The current study broadened the framework of impact by adding the aspects of improved SRL processes and engagement with course materials. This will help future studies to assess the impact of LA interventions.
Different designs of LA interventions have different impacts on learning. For example, the LA interventions of Siadaty et al. (2016b) and Sonnenberg and Bannert (2019) were both aimed at supporting the subprocess of task analysis in SRL. The former study found that its LA intervention had a large impact on learning, whereas the latter found a medium impact. It is also interesting to note that an initial LA intervention can be developed further, leading to an improved impact. This kind of development can be seen in the consecutive studies by Sonnenberg and Bannert (2015, 2016), where the initial impact improved from small to very large across their studies. We argue that a successful LA intervention design requires the LA cycle to be supplemented with a sound pedagogical approach that provides feedback on both cognitive and behavioural processes (Sedrakyan et al., 2020). The initial LA intervention must be developed iteratively, as was done by Sonnenberg and Bannert (2015, 2016), to test different approaches and functionalities in order to find the most effective ways to enhance SRL. In SRL, regulation can focus on the cognitive, emotional, and motivational aspects of the learning process. In addition, regulation can focus on different phases, such as planning, performance, or reflection. This is to say that it would not be reasonable to expect an LA intervention to support all phases simultaneously unless the subprocesses included in the different phases are considered during the design of the intervention. Moreover, LA interventions are typically conducted over a short period, despite the fact that only long-term LA interventions can potentially contribute to the development of SRL skills. The longevity of training and support must be considered during the design of an LA intervention to enable students to master SRL skills (Zimmerman, 2000). Furthermore, research should explore personalized ways of studying and supporting self-regulation, i.e., idiographic methods.
Idiographic methods rely on several data points collected from individual students to create precise insights based on the students' own data (e.g., Saqr & López-Pernas, 2021). With the rise of multimodal learning analytics, research can tap into fine-grained within-person processes and emotions and can therefore deliver highly personalized, just-in-time insights and support (Törmänen et al., 2022).

Validity threats
The validity assessment covers internal, external, construct, and conclusion validity (Zhou et al., 2016). In this review, we reduced the risk of validity threats by following the mitigation actions recommended by Ampatzoglou et al. (2019). Inappropriate or incomplete search terms can threaten construct validity and internal validity. LA interventions can be implemented in various ways, and the concept of an LA intervention might not be used in some records. In this study, LA intervention was not used as a keyword; instead, we identified LA interventions qualitatively. The same applies to SRL: some screened records focused on interventions and learning but did not use SRL as a concept. Such records may have been excluded even though the interventions could have been beneficial for learning. Therefore, there might be records relevant to this review that were not included because they used keywords not covered by our search. Incomplete research information in the primary studies poses a threat to external validity. We mitigated this risk by identifying the missing information in Table 6. To reduce the threat to conclusion validity, we removed duplicate studies. Discussions among the authors, aiming for unanimous consensus, were used to reduce subjective interpretations, thus mitigating threats to internal validity and conclusion validity.

Conclusion
We have reviewed how studies use LA methods to create LA interventions supporting SRL. Based on the review, neither the number, the popularity, nor the combination of LA methods implemented in a learning environment guarantees improved SRL. Moreover, none of the LA intervention channels has shown proof of success in supporting SRL. Only seven studies covered all phases of SRL, and only 12 studies assessed the effect sizes of the LA interventions. Of these studies, only three focused on the entire SRL cycle and showed that LA interventions positively impacted learning. Furthermore, the LA methods and the LA intervention channels of these three studies varied. Therefore, we cannot say which combination of LA methods and LA intervention channels would ensure successful support of SRL.
Interpretable and comparable LA intervention results would help scholars and practitioners find the LA interventions that are most beneficial for SRL. First, future research needs to focus on the pedagogical approach and how it may be used to create impactful LA interventions. Second, LA interventions should focus on the different phases and processes of regulated learning, and future research should identify the most impactful LA interventions capable of supporting different SRL phases and subprocesses. Third, there are a limited number of LA interventions covering all three SRL phases; future research and LA intervention design should address all SRL phases to support learning holistically. Finally, there are limitations in how the reviewed studies reported their findings. To improve the interpretability and comparability of LA intervention designs, future research needs to focus on providing comparable data with larger sample sizes to enable the comparison of different kinds and types of LA interventions.

Table 7
Summary of the reviewed studies