1 Introduction

Designing and implementing an individual feedback system based on students’ interaction patterns in different online learning activities with machine learning techniques can support critical aspects of education, including academic performance and behavioral, cognitive, and affective engagement (Alwafi, 2022; Dornauer et al., 2024). The literature emphasizes that feedback increases academic performance (Brown et al., 2016; Hattie & Timperley, 2007; Nunez et al., 2015). Attention to academic performance and feedback can also positively affect engagement: according to Cole and Chan (1994), engagement is associated with active participation in learner tasks, feedback, and learning activities. Since there is less interaction between the learner, the content, and the instructor in online environments, feedback can be an opportunity for effective learning. In this context, this study examined the academic performance, engagement, and behavioral patterns of students who did or did not use a feedback system based on machine learning-based learning analytics integrated into an online collaborative learning environment.

1.1 Machine learning-based learning analytics feedback

In this study, learning analytics were used to increase the effectiveness of the online collaborative learning process. A feedback system built on learning analytics can increase students’ academic performance and learning engagement, eliminate learning deficiencies, and prevent students from leaving the learning environment (Çebi & Guyer, 2020). The effect of the feedback system based on machine learning-based learning analytics integrated into the online collaborative learning environment is discussed from the perspective of individuals. Students were clustered with k-means cluster analysis according to the system logs generated by their activities in the online collaborative learning environment, and based on the results of this analysis, students were given feedback on how to improve or maintain their current situation.

1.2 Academic performance and online engagement in collaborative learning

Group work in online collaborative learning environments is related to academic performance (Yildiz Durak, 2022, 2023). Engagement in online learning is another concept central to the effectiveness of the learning environment (Vezne et al., 2023). Engagement is a multifaceted concept used in the context of beliefs, attitudes, and behaviors to describe a variety of student experiences (Redmond et al., 2018). Engagement has been dimensioned in different ways in different studies; the behavioral, affective, and cognitive dimensions appear to be the most frequently used structure (e.g., Appleton et al., 2006; Finn & Zimmer, 2012; Pekrun & Linnenbrink-Garcia, 2012).

Behavioral engagement includes behaviors that involve a variety of learning and academic tasks, such as effort, research, contribution to class discussions, and motivation. Affective engagement includes students’ positive or negative emotional reactions to academic processes, their peers, and instructors, depending on their classroom experiences (Stipek, 2002). Cognitive engagement involves the willingness to make an effort to understand complex problems and difficult skills. In this study, the framework proposed by Ergün and Usluel (2015) was used.

1.3 Temporal learning analytics

Learning analytics offers personalized learning opportunities to enhance an individual’s learning experience (Greller & Drachsler, 2012). In contemporary education, analysis techniques like learning analytics are increasingly favored for tailoring learning processes to individual needs. However, log data detailing an individual’s learning process may not be immediately meaningful to other stakeholders in the learning and teaching process (Macfadyen & Dawson, 2012). There is a growing interest in investigating variables related to navigational behaviors and other behaviors in online learning environments (Kizilcec et al., 2017; Li & Tsai, 2017). Recent research has explored the relationship between student profiles outlined in the literature and various aspects, including academic performance (Cerezo et al., 2016; Li & Tsai, 2017), motivation (de Barba et al., 2016), learning strategies (Kizilcec et al., 2017), and online behaviors.

1.4 Behaviors in online collaborative learning environments

Understanding the sequential structure of interaction in online collaborative learning environments can contribute to a better understanding of how students use technology and adapt to different situations. In this context, sequential analysis provides information to determine whether and how events that unfold over time are related to each other. However, how such insights are generated is still not well understood. Although there is broad consensus on the effects of feedback on learning, limited empirical findings exist regarding the effects of machine learning-based feedback on navigational behavior patterns in online courses. Therefore, to fill this gap, this study analyzes students’ log files in the learning management system Moodle using lag sequential analysis (LSA) to determine navigation behavior patterns in the experimental and control groups. In education, LSA focuses on transition relationships to identify temporal differences in learning behaviors. For example, Tlili et al. (2023) used LSA to determine how students’ learning styles affected their navigation behavior in an online course. Markov chains, on the other hand, are statistical models of the transition probabilities between a series of events; they are used to analyze how the states of a system change over time and to predict future states. For example, Saqr et al. (2023) and Kokoç et al. (2021) analyzed student learning analytics with Markov chains to capture behavioral patterns in engagement and assignment submission.

1.5 Importance of the study

This research sheds light on how collaborative learning machine learning-based learning analytics (ML-LA) can improve the effectiveness of feedback systems. The study first addresses a fundamental concern in online educational environments, namely the impact of personalized feedback (Polat et al., 2022; Yildiz Durak, 2024a, b) on student engagement and performance. The ML-LA feedback application provides a method to personalize feedback efficiently and at scale.

Second, the research contributes to the understanding of students’ behavioral dynamics in online environments. Using analytical techniques such as lag sequential analysis and Markov chains, the study not only identifies common behaviors but also models transitions between different engagement states over time. Personalized instruction using these methods offers many opportunities to use data collected about students’ activities and performance for decision-making and adaptation of teaching methods. Indeed, the Markov chain is a probabilistic model for representing probability distributions over sequences of observations (Bowman et al., 2013), and it can be used in artificial intelligence and pattern recognition applications (Vatsalan et al., 2022). Polito et al. (2022) proposed the use of data-driven methods to study student engagement and performance in online courses in order to personalize student feedback more effectively. Using educational data mining, k-means, and Markov models, they aimed to identify similar groups of students to determine engagement levels and overall course performance.

Furthermore, the findings from this study provide empirical evidence of the effectiveness of ML-LA feedback in improving academic performance. This is critical because it offers applied, data-based evidence and a solid foundation for further research and development in this area. Educational institutions can leverage these insights to improve feedback mechanisms and potentially enhance the overall quality of online education. Finally, the study enriches the literature on the impact of feedback in online learning environments by identifying how feedback (behavioral, cognitive, and affective) affects various dimensions of student engagement.

In conclusion, the significance of this study lies in its comprehensive approach to integrating machine learning with feedback systems in education. Therefore, this research makes a significant contribution to the field by laying the basis for future innovations in educational technology.

1.6 Purpose of the study

The purpose of this study is to: (1) examine differences between student groups in terms of learning performance, online engagement, and various system usage variables;

(2) examine and compare students’ behavioral patterns regarding online system engagement in the experimental and control groups;

(3) examine the change in students’ behavioral patterns regarding online system engagement over time in the experimental and control groups. Accordingly, the research questions are as follows:

RQ1. Do experimental and control group students differ significantly in terms of learning performance, online engagement, and various system usage variables?

RQ2. What are the behavioral patterns of students’ online system engagement in the experimental and control groups?

RQ3. How do students’ behavioral patterns regarding online system engagement change over time in the experimental and control groups?

2 Method

2.1 Study context

In this study, Moodle data from the 14-week Instructional Technologies course were used. The course aims to help students learn to integrate technology into education and to use new technologies for educational purposes. All students participating in this study were enrolled in the course. The online course system was the learning management system (LMS) Moodle, a free and open-source system. Both the experimental and control groups participated in the same online collaborative learning course, used the same LMS (Moodle), and were exposed to the same course content and instructional materials. In terms of environment and infrastructure, both groups accessed the course through the same technological platform and user interfaces, providing a consistent learning environment. The level of instructor involvement and interaction was standardized in both groups. Student groups were randomly assigned to the experimental and control conditions. In addition, the students in the experimental and control groups had been admitted to their departments with similar percentile ranks in the nationwide university entrance exam. The instructor regularly monitored the records of student activities.

Weekly learning content and materials in various formats, such as e-books, URL contents, videos, presentations, summaries, highlight files, and forums, were uploaded to the system by the instructor. For each course week, students were required to review the learning materials uploaded by the instructor and complete the learning tasks. These tasks involved tests answered in Moodle or the application of various technologies to instruction, with the resulting products uploaded to Moodle. Students were also able to update the assignments they uploaded before the stated deadline, if needed in the context of group interactions. Students freely used the course forum to post collaborative learning tasks, questions, and discussions and to communicate with their peers. The instructor initiated 14 forum discussions and encouraged students by answering forum questions and participating in online forum discussions. Each student’s course activity status was displayed on their profile. Students could also see a list of their classmates in the course and view their profiles, and they were able to see the completion status of and feedback on their learning tasks. The process is summarized in Fig. 1.

Fig. 1
figure 1

Research procedure

2.2 Participants and instrument

In the 2022 academic year, 49 undergraduate students from the faculty of education of a state university who had not previously taken the Instructional Technologies course participated in the study. Purposive sampling was used to select participants, ensuring the selection of individuals who fit the objectives of the experimental intervention. The main selection criteria were enrollment in an instructional technology course at the faculty of education, availability and willingness to fully engage with the course requirements during the study period, Internet connectivity outside of class given the online nature of the learning environment, and possession of a personal computer.

The study’s participants were divided into collaborative groups of 4–5 people and given group tasks. In the context of this course, they were asked to complete the materials and tasks in the online Instructional Technologies course.

The course consists of 14 learning units. Data from three students were excluded from the study due to data limitations, such as missing traces of online engagement behaviors and insufficient learning time. Of the participants, 87.8% are women and 12.2% are men, and their ages range from 18 to 23.

Student’s Engagement Scale in Online Learning Environments was used to describe individuals’ behavioral, cognitive, and affective engagements in online learning environments. This scale was developed by Sun and Rueda (2012) and adapted to Turkish by Ergün and Koçak Usluel (2015). The scale is a five-point Likert-type and has 19 items.

An academic achievement test was used to determine the participants’ learning performance in the course. The test was developed by the researcher within the scope of the Instructional Technologies course.

In the first week of the application, the course content and the framework of the research were explained to the participants. The study includes an experimental group, which received the feedback intervention based on machine learning-based learning analytics, and a control group, in which no intervention was made and only computer-supported collaborative learning was performed. Students in both groups were enrolled in the application in the first week. Then, at the beginning of the learning process, collaborative groups of 5–6 people were formed according to the students’ wishes.

Over the 14 weeks, e-contents prepared for the Instructional Technologies course were integrated into the curriculum and presented week by week, and group learning tasks were given. Learning analytics were used to evaluate the profiles of the learners according to their interaction and behavior patterns in the lessons. Students’ system usage data were collected throughout the semester (14 weeks).

The data obtained were analyzed with machine learning algorithms, interaction profiles were created, and feedback was provided to the learners. Learner profiles were created according to the clusters formed by the k-means algorithm. In the LMS, students who were most active in terms of system usage time, course visit frequency, and completion of learning tasks were labeled the intense cluster; those who were moderately active, the middle cluster; and those who were generally inactive, the light cluster. The characteristics of each cluster were defined, and feedback cards appropriate to these clusters were prepared and sent to the students (see Fig. 2). Feedback was given at weeks 5, 8, 11, and 13.
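The clustering step described above can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: the three usage features (system usage time, course visit count, completed tasks) match those named in the text, but all values, the deterministic seeding, and the cluster-naming rule are invented for the demo.

```python
import numpy as np

# Illustrative usage features per student: [system usage time (hours),
# course visit count, completed learning tasks]. All values are invented.
X = np.array([
    [40.0, 120, 14], [35.0, 110, 13], [38.0, 115, 14],   # highly active
    [20.0,  60,  8], [18.0,  55,  7], [22.0,  65,  9],   # moderately active
    [ 5.0,  15,  2], [ 4.0,  12,  1], [ 6.0,  18,  3],   # largely inactive
])

def kmeans(Z, init, n_iter=100):
    """Plain k-means: assign each point to its nearest centroid, recompute."""
    centroids = Z[init].copy()
    k = len(init)
    for _ in range(n_iter):
        d = np.linalg.norm(Z[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([Z[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels

# Standardize features so no single scale dominates the Euclidean distance.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
labels = kmeans(Z, init=[0, 4, 8])  # spread-out seeds for a deterministic demo

# Name clusters by overall activity level, mirroring the study's labels.
order = np.argsort([X[labels == j].mean() for j in range(3)])
names = {order[2]: "intense", order[1]: "middle", order[0]: "light"}
print([names[l] for l in labels])
```

In practice one would also validate the choice of k (e.g., with silhouette scores) rather than fixing k = 3 in advance.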

Fig. 2
figure 2

Sample feedback cards

2.3 Data analysis

Regarding learning performance, online engagement, and system usage analytics, pretest and posttest scores were used to decide whether there was a statistically significant difference between the pretest and posttest and between the two groups. Paired sample t-tests and analysis of covariance (ANCOVA) were used to compare the students’ achievement scores and their responses to the self-report scales. The normality of the data was checked; kurtosis and skewness values were between −1.5 and +1.5, and the ANCOVA assumptions were tested. These analyses were carried out with the Statistical Package for the Social Sciences (SPSS), and a significance level of p < 0.05 was adopted.
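The logic of the ANCOVA above (comparing groups on posttest scores while adjusting for the pretest as a covariate) can be sketched as a nested-model F-test. The study used SPSS; this Python version is only a conceptual illustration, and the scores, group sizes, and effect sizes below are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 23  # per-group size here is illustrative only

# Synthetic pretest/posttest scores (invented for demonstration):
# the experimental group is given a larger average gain.
pre_e = rng.normal(50, 10, n);  post_e = pre_e + rng.normal(15, 5, n)
pre_c = rng.normal(50, 10, n);  post_c = pre_c + rng.normal(8, 5, n)

# Paired t-test within a group (pretest vs posttest).
t_e, p_e = stats.ttest_rel(post_e, pre_e)

# ANCOVA as a model comparison: posttest ~ pretest (+ group dummy).
pre  = np.concatenate([pre_e, pre_c])
post = np.concatenate([post_e, post_c])
grp  = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = experimental

def sse(X, y):
    """Residual sum of squares from an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

ones = np.ones_like(post)
sse_red  = sse(np.column_stack([ones, pre]), post)        # covariate only
sse_full = sse(np.column_stack([ones, pre, grp]), post)   # + group effect

df_full = len(post) - 3                 # n - (intercept, pretest, group)
F = (sse_red - sse_full) / (sse_full / df_full)
p = stats.f.sf(F, 1, df_full)
print(f"group effect after adjusting for pretest: F={F:.2f}, p={p:.4f}")
```

The F-test asks whether adding the group indicator reduces residual variance beyond what the pretest already explains, which is the covariate-adjusted group comparison ANCOVA reports.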

Students’ navigation and engagement behaviors were automatically recorded by Moodle. These data identified the online learning behaviors thought to be important in representing students’ engagement in learning management systems. This definition was based on Tlili et al. (2023), and the suitability of that framework for the research was evaluated. Based on this evaluation, a coding scheme was created, and system data and weekly student reflections were evaluated to determine behavioral sequences according to this scheme. Sequence analysis was performed with syntax in IBM SPSS 24 to visualize and compare behavioral sequences. Additionally, t-tests were conducted to measure differences between groups in learning performance and online student engagement.

This study conducted a lag sequential analysis (LSA) to identify, visualize, and compare key transition patterns between the engagement behavior codes displayed by the groups. First, engagement behaviors were coded as a series of behaviors in chronological order. The coding process was reviewed with a field expert, and discussions were held where differences of opinion occurred; consensus was reached for all sequences. A rubric was created based on the framework proposed by Tlili et al. (2023), which includes 12 behaviors under five headings. A pattern was created by marking individual students’ behaviors, system navigation behaviors, and teacher observation notes, and this pattern was analyzed with a script written in SPSS. The analysis provided results on how often each behavior was performed and which transitions between behaviors were significant (see Table 1).

Table 1 The coding scheme for online learners’ engagement behaviors on learning management systems (Tlili et al., 2023)

Secondly, LSA was used to calculate the transitions between these behavioral codes, and the number of transitions from each state to every other state was counted. The transition frequency matrix and transition probability matrix were calculated. An adjusted z score was calculated for each transition to determine whether transition probabilities deviated significantly from their expected values; a z score above 1.96 indicates that the transition from one code to another is statistically significant (p < 0.05). Finally, a behavioral transition diagram was created for the experimental and control groups from the significant transition sequences.
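The LSA computation described above can be sketched numerically: build the lag-1 transition frequency matrix, derive expected counts under independence, and compute the standard adjusted residual (z score) for each transition. The behavior sequence below is invented; the real codes follow the Tlili et al. (2023) scheme, and the adjusted-residual formula is the conventional one for lag sequential analysis (an assumption about the study's exact SPSS script).

```python
import numpy as np

# Illustrative chronological behavior sequence for one student (codes invented).
seq = ["CA1", "AS1", "AS2", "D1", "D2", "AS1", "AS2", "AR1",
       "CA1", "AS1", "D1", "D2", "D1", "D2", "AS2", "AR1"]

codes = sorted(set(seq))
idx = {c: i for i, c in enumerate(codes)}
k = len(codes)

# Lag-1 transition frequency matrix: row = antecedent, column = following code.
O = np.zeros((k, k))
for a, b in zip(seq, seq[1:]):
    O[idx[a], idx[b]] += 1

N = O.sum()
R = O.sum(axis=1, keepdims=True)   # row (antecedent) totals
C = O.sum(axis=0, keepdims=True)   # column (consequent) totals
E = R @ C / N                      # expected counts under independence

# Adjusted residual (z score); |z| > 1.96 marks a significant transition.
with np.errstate(divide="ignore", invalid="ignore"):
    Z = (O - E) / np.sqrt(E * (1 - R / N) * (1 - C / N))

for i in range(k):
    for j in range(k):
        if Z[i, j] > 1.96:
            print(f"{codes[i]} -> {codes[j]}: z = {Z[i, j]:.2f}")
```

With this toy sequence, the repeated D1→D2 (view discussion → make discussion) pair exceeds the 1.96 threshold, mirroring the kind of significant transitions the study reports in Tables 5 and 6.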

Markov chains were applied to model online engagement behaviors by incorporating time-varying variables. This is a person-centered method that captures qualitative differences in engagement response patterns over time. The Markov chain analysis was performed in the Python programming language: the pandas, numpy, and matplotlib libraries were used to calculate transition counts, transition probabilities, and the transition probability matrix, and plotly.graph_objects and networkx were used for the Sankey diagram and network graph visualizations. The analyses applied in the process are summarized in Fig. 3.
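The core of this step, estimating a transition probability matrix from per-student state sequences, can be sketched as follows. The state sequences below are invented, and pooling transitions across time points is a simplification: the study reports separate T1→T2 and T2→T3 matrices (Figs. 9 and 10), which amounts to restricting the counting loop to one pair of time points at a time.

```python
import numpy as np

# Per-student engagement states at T1, T2, T3 (invented for illustration).
sequences = [
    ["light", "intense", "intense"],
    ["light", "intense", "middle"],
    ["middle", "middle", "intense"],
    ["intense", "middle", "middle"],
    ["middle", "light", "middle"],
    ["intense", "intense", "intense"],
]

states = ["light", "middle", "intense"]
ix = {s: i for i, s in enumerate(states)}

# Count transitions between consecutive time points, pooled across pairs.
counts = np.zeros((3, 3))
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        counts[ix[a], ix[b]] += 1

# Row-normalize: P[i, j] = probability of moving from state i to state j.
P = counts / counts.sum(axis=1, keepdims=True)

for i, s in enumerate(states):
    for j, t in enumerate(states):
        if counts[i, j]:
            print(f"{s} -> {t}: {P[i, j]:.3f}")
```

The resulting matrix P is exactly what the arrows in a Markov chain diagram or the ribbons of a Sankey diagram encode; each row sums to 1.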

Fig. 3
figure 3

Data analysis procedure

3 Findings

RQ1. Do the experimental and control groups differ significantly in terms of learning performance, online engagement, and various system usage variables?

The average values of students’ learning performance and student engagement, both immediately after the pilot group studies and after the experimental intervention, are depicted in Fig. 4. To assess whether a significant difference existed between groups before and after the experimental intervention, paired and independent sample t-tests, as well as ANCOVA tests, were employed.

Fig. 4
figure 4

The change in the learning performance and student online engagement scores of the experimental and control groups

According to the post-test scores presented in Fig. 4, the arithmetic means show that the experimental group had higher learning performance and online engagement than the control group.

According to Table 2, the pretest-posttest score comparisons within the groups are significant (p < 0.05). ANCOVA was applied to the post-test scores to compare the learning performance and online engagement of the experimental and control groups, with the pretest score accepted as a covariate. According to Table 3, no significant difference was found between the groups in academic success or in the affective dimension of online engagement (p > 0.05). A significant difference in favor of the experimental group was found in the behavioral and cognitive dimensions of engagement (p < 0.05).

Table 2 Comparing the groups’ learning performance, student online engagement with paired sample t-test
Table 3 ANCOVA result of the post-test scores of the groups

According to the paired sample and independent sample t-test results (See Table 4), a significant difference was found in favor of the experimental group only in terms of completed tasks (p < 0.05).

Table 4 Comparing the groups’ learning analytics

RQ2. What are the behavioral patterns of students’ online system engagement in the experimental and control groups?

LSA results of the experimental and control groups are presented in Tables 5 and 6.

Table 5 Lag sequential analysis results for the experimental group

Table 5 presents the statistically significant transitions, their frequencies, and their adjusted residual values; the adjusted residual indicates how strong the connection between behaviors is. The most important behavioral transitions, with the highest adjusted residuals, are the sequential relationships between viewing task status and updating task submission (6_as4→5_as3), viewing discussion and making discussion (7_d1→8_d2), and updating task submission and viewing group performance (5_as3→10_ar2). These relationships are therefore worth considering.

Table 6 Lag sequential analysis results for the control group

In Table 6, the strongest behavioral transition is the sequential relationship between viewing discussion and making discussion (7_d1→8_d2). This is followed by the sequential relationship between viewing personal performance and viewing group performance (9_ar1→10_ar2). These relationships are therefore worth considering.

Fig. 5
figure 5

Online behavior patterns of learners

In Fig. 5, significant behavioral codes are indicated by rectangles and significant transitions by arrows, based on significant z scores. Both groups exhibited some common transition sequences. According to the findings, in the course activities section, students’ assignment-form-viewed behavior significantly followed their course-module-viewed and course-module-completion behaviors (CA1→AS1 and CA2→AS1).

In both the experimental and control groups, participants generally started the process by viewing the course, viewing the assigned learning task, and completing the course content. After viewing the learning environment, students in the experimental group tended to examine the assigned task and view what was shared in the discussion environment, whereas students in the control group preferred to view the assigned task and their group’s profile after viewing the course. The sequential analysis shows that after the assignment, students in the control group preferred to view individual and group achievements, while in the experimental group individual and group achievements were viewed after the discussions.

RQ3. How do students’ behavioral patterns regarding online system engagement change over time in the experimental and control groups?

Whether the online engagement behaviors of students in the experimental and control groups changed over time was investigated, and descriptive findings are presented in Figs. 4, 5, and 6.

For this purpose, the transitions between the categories of the learning analytics data sets obtained at different time points are first visualized in Figs. 6, 7, and 8. As seen in the graphs, there are transitions between low-medium, low-high, medium-high, medium-low, high-low, and high-medium states.

Fig. 6
figure 6

Evolution of state membership across the 14 weeks of courses in the experimental group

According to Fig. 6, light states generally transition to medium and high in the experimental group. In Fig. 7, the control group mainly switches between light and medium, and all participants are presented collectively in Fig. 8.

Fig. 7
figure 7

Evolution of state membership across the 14 weeks of courses in the control group

Fig. 8
figure 8

Evolution of state membership across the 14 weeks courses for participants

Markov chain analysis was used to examine the transitions between different engagement states in more detail. Students were grouped as intense, middle, and light according to the learning analytics data in the system at three time points. With this approach, the probability of a student moving from one state (light, middle, or intense) at one time point to any state at the next time point during the learning period is calculated. The calculated transition values are presented in Figs. 9 and 10, which show the actual transition probabilities for the experimental and control groups over the period.

Fig. 9
figure 9

Experimental group Markov chain. *T1I: Time 1 Intense;T1M: Time 1 Middle;T1L: Time 1 Light;T2I: Time 2 Intense;T2M: Time 2 Middle;T2L: Time 2 Light;T3I: Time 3 Intense;T3M: Time 3 Middle;T3L: Time 3 Light

The arrows between groups represent the direction of transition, and the numerical values represent the probability of each transition; the maximum probability of any transition is 1 (i.e., 100%). The Markov chain analysis presented in Fig. 9 reveals some important engagement behaviors. For students in the light cluster at time point T1, the probability of being in the light cluster at T2 is 0.111 (i.e., about 11 out of 100 students), the probability of being in the middle cluster is 0.111, and the probability of being in the intense cluster is 0.769. For students in the middle cluster at T1, the probabilities of being in the light, middle, and intense clusters at T2 are each 0.111. For students in the intense cluster at T1, the probability of being in the light cluster at T2 is 0.111, the middle cluster 0.677, and the intense cluster 0.231. For students in the light group at T2, the probabilities of being in the light, middle, and intense clusters at T3 are 0.323, 0.111, and 0.524, respectively; for students in the middle group at T2, the corresponding probabilities are 0.111, 0.476, and 0.5; and for students in the intense group at T2, the probability of each state at T3 is 0.111.

Fig. 10
figure 10

Control group Markov chain. *T1I: Time 1 Intense;T1M: Time 1 Middle;T1L: Time 1 Light;T2I: Time 2 Intense;T2M: Time 2 Middle;T2L: Time 2 Light;T3I: Time 3 Intense;T3M: Time 3 Middle;T3L: Time 3 Light

According to the Markov chain analysis presented in Fig. 10, for students in the light cluster at time point T1, the probability of being in the light cluster at T2 is 0.25, the middle cluster 0.50, and the intense cluster 0.25. For students in the middle cluster at T1, the probabilities of being in the light, middle, and intense clusters at T2 are 0.20, 0.60, and 0.20, respectively. For students in the intense cluster at T1, the corresponding probabilities are 0.15, 0.50, and 0.35. For students in the light group at T2, the probabilities of being in the light, middle, and intense clusters at T3 are 0.3, 0.4, and 0.3, respectively; for the middle group, 0.25, 0.45, and 0.3; and for the intense group, 0.2, 0.3, and 0.5.

4 Discussion

This study aimed to examine the behavioral patterns of online system engagement of students who did and did not receive ML-LA feedback, the differences between student groups in terms of learning performance, online engagement, and various system usage variables, and the change in students’ behavioral patterns regarding online system engagement over time.

The study drew on longitudinal online interaction data as well as pretest-posttest data. It primarily aimed to determine the effect of ML-LA feedback on the behavioral, cognitive, and affective dimensions of online engagement and on academic success. In the t-test and ANCOVA analyses, the group receiving ML-LA feedback showed higher behavioral and cognitive engagement than the control group on the significantly different variables, and the rate of completing learning tasks was also higher in the experimental group. Winstone et al. (2021) found that the use of feedback in LMSs provides numerous opportunities to overcome difficulties in online learning environments, including planning the student’s development, monitoring progress, and delivering personalized feedback to each student to enhance commitment to learning. This can support feedback literacy and self-regulation, laying the foundation for meaningful interaction and dialogue in online learning environments. In line with those findings, feedback based on student interactions positively affects behavioral and cognitive engagement and supports the completion rate of learning tasks, because it is unique to students, personal in nature, and supports the establishment of meaningful interaction in the learning environment. It is noteworthy that the feedback given did not produce any difference in academic success or affective engagement. Agricola et al. (2020), Polat et al. (2022), and Yildiz Durak (2024b) emphasized that the type of feedback given, whether written or verbal, can affect academic success because it creates different opportunities for interaction. Beaumont et al. (2011) reported findings on how students personally perceive feedback that were similar to those of Lin et al. (2013), who drew attention to the importance of structuring feedback content in a way that reduces cognitive load.
The experimental conditions of this study may not have affected affective engagement and academic success for the reasons mentioned in these studies. It is also possible that students’ perceptions of feedback depend on their previous knowledge of and experience with feedback, so prior knowledge may have shaped how the feedback was understood and interpreted. In any case, there was no significant difference between the experimental and control groups in terms of academic achievement and affective engagement. This result may indicate that the feedback mechanism should be improved in the affective dimension and that other factors affecting these results should be investigated. Deepening the personalization of the feedback mechanism may be one solution: while the ML-LA system customizes feedback according to learning navigation behaviors, incorporating individual learning styles, prior knowledge, or specific emotional and cognitive needs into the system could make the feedback more effective. The effectiveness of feedback may also depend significantly on its type, the way it is presented (textual, auditory, visual), and its timing or frequency. To increase both academic and affective engagement, feedback types, timings, and presentation patterns can be examined to identify the most appropriate feedback strategy, including motivational elements. In addition, affective engagement in online learning environments is often closely related to the level of social interaction and sense of community among students, which may also have influenced the affective engagement results.

This study examined students’ behavioral processes toward online engagement using LSA. Both groups showed similar patterns in viewing and conducting discussions. However, the students in the experimental group focused more on the learning tasks, revising tasks, and monitoring group performance. This finding is consistent with Tlili et al. (2023), who used behavioral engagement criteria coded from monitoring data to detect students’ online engagement behaviors in LMSs. Nunez et al. (2015) and Dunworth and Sanchez (2016) state that feedback supports the homework completion rate and learner motivation, and therefore increases student engagement. Based on these findings, the ML-LA-based feedback intervention that students received during collaborative group work better guided them toward group success in learning tasks.

This study also examined how engagement develops or changes over time, using a Markov chain. The results revealed three states of interaction (Intense, Moderate, and Light), corresponding to high, moderate, and low levels of activity. Students’ engagement behaviors were likely to transition from the light to the moderate and intense levels throughout the semester, especially in the experimental group. Saqr et al. (2023) emphasized that online participation develops dynamically over time and that this change occurs at different rates among student subgroups; these changes differ in each subgroup according to previous engagement states and variables related to teaching and teachers. Additionally, that study found that the ability to transition into an engaged state explained a moderate to substantial portion of final performance. It may be recommended to provide cognitively and behaviorally engaging instructions to students with low course engagement so that they can move to the intense level. Students who remain at the light level during complex and challenging tasks, or who are likely to fall back to it, will probably need more intensive feedback.
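The Markov-chain view of engagement described above can be illustrated with a transition matrix over the three states. The probabilities below are hypothetical, chosen only to show upward drift from Light toward Intense; they are not the values estimated in the study.

```python
import numpy as np

states = ["Light", "Moderate", "Intense"]
# Hypothetical weekly transition probabilities (rows sum to 1);
# not the study's estimates, purely illustrative.
P = np.array([
    [0.50, 0.40, 0.10],   # from Light
    [0.10, 0.60, 0.30],   # from Moderate
    [0.05, 0.25, 0.70],   # from Intense
])

# Start with all students at the Light level and iterate over a 14-week semester
dist = np.array([1.0, 0.0, 0.0])
for _ in range(14):
    dist = dist @ P       # dist[k] = P(student is in state k this week)
```

Iterating the chain shows how the probability mass drifts upward over the semester, mirroring the observed light-to-moderate-to-intense transitions in the experimental group.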

This study has some limitations that should be addressed in future research. First, the fact that all participants were undergraduate students at a faculty of education may limit the generalizability of the findings. Considering the relatively small sample size and the specific context of this study, which was limited to an instructional technology course, the generalizability of the findings may be limited. Future studies can replicate these findings with a larger population of students from different faculties. Second, this study used LSA to determine students’ online engagement behavior patterns. Since the LSA covered a long period of 14 weeks, all the behaviors listed in the rubric were exhibited; future studies can apply LSA over shorter periods. Differences can also be compared by performing LSA not only with experimental and control groups but also with student groups showing low and high levels of online engagement. Relying on self-report scales to measure engagement may lead to biased results. Additionally, future studies can integrate this analysis into LMSs to provide immediate learning analytics-based feedback that supports students’ engagement and performance. This study focused only on online engagement behaviors based on learner logs and reflections; future studies could investigate how students’ learning outcomes and academic performance relate to online engagement using methods such as association analysis. It is also unclear whether our findings about different engagement behavior patterns can be generalized to MOOCs and large-scale LMSs. Since engagement is a multidimensional construct that includes cognitive, affective, and behavioral components, using a single data source to capture all engagement processes is a limitation.
Therefore, future researchers can use data with different content from different sources to create a more comprehensive framework of engagement processes. Finally, the higher proportion of female students relative to male students may constitute a limitation in terms of gender distribution. This limitation, which arises from the structure of education faculties, should be kept in mind when generalizing the findings of this research. Future studies can examine how individual differences (e.g., gender, prior knowledge) affect students’ behavioral patterns. In this way, personalized LMSs can be developed that adapt to students’ individual differences.

5 Conclusion

This study investigated how students who receive machine learning-based learning analytics feedback differ in their online behavior from students who do not. To this end, learning performance, temporal learning analytics, and behavioral patterns were taken into consideration. Our results regarding learning performance show that students’ knowledge of the instructional technology course content improved. Although there was no significant difference between the experimental and control groups, the pretest-posttest gain was higher in the experimental group; accordingly, these findings indicate that ML-LA feedback helps students learn. Regarding behavioral patterns, the LSA approach was applied to analyze students’ engagement behavior patterns. The findings revealed that the students in the experimental and control groups exhibited some similar behaviors, but there were also differences between them. The most important behavioral transitions in the experimental group involved viewing/making and sending tasks, discussions, updating task submissions, and viewing group performance, whereas in the control group the most important transition was the sequential relationship between viewing a discussion and making a discussion. Although both groups were similar in terms of viewing and conducting discussions, the students in the experimental group focused more on the learning tasks, revising tasks, and monitoring group performance. These findings suggest that ML-LA feedback plays an important role in shaping behavioral patterns and should be taken into account in future LMS development. In addition, the results showed that students’ engagement behaviors were more likely to transition from the light to the moderate and intense levels throughout the semester, especially in the experimental group.