The effect of the tracking technology on students’ perceptions of their continuing intention to use a learning management system

Abstract

This research examines the effect of embedding in a learning management system (LMS) a tracking technology that reports other students’ interactions, on a learner’s intention to keep using the LMS in the future. The main underlying theory is herd behaviour theory, which argues that crowd behaviour affects the perceptions of observers. In this paper, we proposed and found that tracking technology affects a learner’s perceptions of cognitive absorption and of self-regulation from using an LMS. These perceptions are found to positively influence the learner’s intention to keep using the LMS in the future. This research developed a new tracking technology in response to weaknesses noted in the literature and validated by interviewing teachers. Its effects were tested on 151 university students taking a computer science module. This research contributes to knowledge by integrating herd behaviour theory into the design of LMSs and offers a new perspective on learners’ interactions with educational technologies.

Introduction

A tracking technology acts as a cybernetic system which collects information on specific indicators to regulate the decision-making process (Green & Welsh, 1988). This technology collects learners’ interactions and feeds this information into their dashboards, to be displayed and compared with similar information collected from other learners (Jivet et al., 2017). Despite the benefits of these tracking dashboards, evidence shows that learners do not use them. The reason seems to be that learners are advised to avoid comparing themselves with others (Neugebauer et al., 2016) because it may harm some learners’ self-esteem. Harvey and Keyes (2019) also reported lower self-esteem among those students who (from information provided by the dashboard) believed their academic performance to be lower than their class average. Moreover, Jivet et al. (2017) showed that peer comparison could shift students’ focus from mastering knowledge and skills to comparing themselves against others. Some researchers, e.g. Toohey et al. (2019), have suggested that the peer comparison component in learning dashboards needs to be carefully designed to minimise its negative impact on students. Thus, instead of focusing on competition among students, this research adopts a new perspective to explain and apply the learners’ dashboard, namely by considering herding theory in technology adoption (Peng et al., 2009; Sun & Rueda, 2012; Wang et al., 2019).

This negative consequence of comparing the self with others is a noted disadvantage of using dashboards and may explain why many students do not want to access the learners’ dashboards, as reported by Bodily et al. (2018) and Kim et al. (2016). Similarly, in the study by Kia et al. (2020), only 52% of the students used learning dashboards, of whom 19% had only a single interaction with the dashboard. Reimers and Neovesky (2015) also reported that more than half of the surveyed students did not want to be compared to their peers on the learning dashboards.

With the above challenges to tracking tools in mind, this research will develop a new tracking technology that reports other students’ interactions to learners but makes no comparisons. It aims to re-conceptualise the tracking technology from a comparative tool to a herding tool by removing the “self” performance from the dashboards and reporting only about others. Thus, the main theory underlying this study is not competition theory but herd behaviour theory: if a learner perceives the crowd to be doing something, this could affect her/his perception of being self-regulated and her/his cognitive absorption, which could in turn affect her/his intention to continue using the LMS in the future.

The aim of this article is to develop a framework for examining the impact of embedding a herding tool (operationalised as a tracking tool that lets learners see other students’ interaction behaviours) in the LMS on improving the intention to use it. Herding in technology adoption is defined as “the phenomenon that a person follows others when adopting a technology, even when his/her private information suggests doing something else” (Sun, 2013, p. 1016). This theory has not been applied before to the study of what influences learners’ perceptions and intentions in educational technology. Thus, the main research question is “What is the effect of seeing other learners’ interactions on a learner’s perceptions and intentions regarding an LMS?” This research seeks to make two contributions to knowledge: first, to theorise the use of tracking technology in learning management systems so as to unlock its potential benefits and guide the design process; and second, to examine for the first time the effect of tracking technology on a learner’s cognitive absorption and perceived self-regulation from using the LMS.

To fulfil these research objectives, the next section defines and evaluates herd behaviour, cognitive absorption, and perceived self-regulation in the LMS field. Then the methodology for designing, implementing, examining and evaluating the effects of the new tracking technology is outlined. In the last two sections, before the discussion, the new design and the results of testing this technology on learners’ perceptions and intentions are presented.

Literature review

Herd behaviour theory

Herd behaviour theory (Banerjee, 1992) is based on the premise that individuals tend to follow behaviour which, when exhibited, seems to be legitimised and followed by others: people’s decisions and behaviours are biased towards imitating the herd (Banerjee, 1992; Duffy & Azevedo, 2015). The theory rests on a natural human reaction to observing peers, which leads to imitating their behaviour in order to achieve a similar level of success. In other words, an individual’s decision is more likely than not to be influenced by what the people around him/her choose, prefer, and do. The concept of herd behaviour thus explains people’s tendency to follow explicit behaviour when they see that others in the same group are applying it (Banerjee, 1992).

Discerning the relationship between herd behaviour and technology adoption involves several features. Herd behaviour theory is used to explain technology adoption through users’ discounting of their internal beliefs, their readiness to embrace others’ beliefs about the usefulness of an application, and the further adjustment of their behaviour to reflect changes in the level of its use by others (Peng et al., 2009). Similarly, Sun (2013) used herd behaviour theory to explain the continued use of technology in online shopping through the discounting of a person’s own beliefs and the imitation of others. According to him, this behaviour occurs when adopting a new technology and is provoked primarily by the observation of prior adoptions. Zhao et al. (2020) concluded that, because of the herding effect, the tracked software applications that form the largest group of downloads are the most likely ones to be discontinued, which indicates that some users download an application without sufficient reason. Viewing and copying completed activities could deepen learners’ engagement because the learning content is meaningful and not a mere set of random tasks (Jivet et al., 2017). Wang et al. (2019) also reported that a higher level of difficulty in an online course is correlated with more rational herd behaviour.

Similar evidence in educational technology was provided by Wang et al. (2019), who investigated students’ participation in a MOOC in China and observed significant evidence of both irrational and rational herding behaviours among learners when they saw other learners’ behaviour. Thus, a technological intervention which could increase herding behaviour may contribute to improving the use of an LMS and the intention to keep using it in the future.

Cognitive absorption

Cognitive absorption (CA), or perceived cognitive absorption (PCA), a term coined by Agarwal and Karahanna (2000), is explained by the theory of the state of flow. Barnes et al. (2019) underlined the role of PCA as the absolute immersion that generates a sense of thorough engagement. A state of flow means the inability to sense time or place while using an application (i.e. the LMS, in this case). According to Bozoglan et al. (2014), the state of flow is reflected in the realisation of each performed action, focused attention, neglect of self-mindfulness, a complete sense of autonomy, consistency requirements, and knowing the purpose of each activity required and performed. In such situations, PCA is useful and leads to positive outcomes; however, this must be conditional on its being directed to purposeful activities. Jumaan et al. (2020) found that PCA affects satisfaction, perceived usefulness and continuance intention for the mobile internet. Similar evidence comes from the educational technology literature: Roca (2008) studied 172 university learners and found that cognitive absorption significantly improves the continued intention to use an LMS, and Léger et al. (2014) concluded that cognitive absorption among 36 students had a significant effect on learning outcomes.

Nevertheless, the outcomes of bringing a learner to the state of flow are subject to several conditions, notably the usability of the system: without a usable system, the learner cannot reach this state. A study by Moreno et al. (2016) of 251 students enrolled in distance learning business programmes in Brazil concluded that usability mediated the relationship between PCA and the effective intention to use. PCA is perceived as both a negative and a positive factor affecting academic performance. In a study of 239 students at a university of technology, Rouis et al. (2011) showed that PCA on social media makes students less connected to their studies; it even made them unable to plan for their studies and negatively affected their ability to self-regulate. However, PCA can be perceived as a positive factor if it is purposeful and leads to transformed educational behaviour, such as deriving benefits from the technology.

Self-regulated learning (SRL)

Self-regulation is critical for students’ learning performance (Kurtovic et al., 2021). In e-learning, SRL is a significant predictor of learners’ success (Sun & Rueda, 2012). Previous research reported that unsuccessful e-learners are those who were not equipped with SRL skills and could not practise its strategies (Liaw & Huang, 2013). Thus, SRL is essential in the learning processes and is particularly applicable to e-learning, since it is a self-driven learning platform (Liaw & Huang, 2013). Self-regulated learning is learners’ ability to set their learning objectives and plans and to carry out these plans as expected (García-Pérez et al., 2020). It is further defined as “learners’ ability to independently and proactively engage in self-motivating and behavioural processes that increase goal achievement” (Zimmerman, 2002).

Writers have debated whether self-regulation nurtures learning or is part of its nature. One school of thought believes that it is part of the learners’ competence and ability (Schunk & Ertmer, 2000). In contrast, the second believes it can be temporary (Hall & Fong, 2010) or can be learned through specific training or technologies.

The first school also perceives self-regulation as an aspect of personal competence that is gained over time. Learners direct their learning processes and attainments by setting challenging goals for themselves and devising appropriate strategies to achieve them, enlisting self-regulative influences that motivate and guide their efforts (Liaw & Huang, 2013). Self-regulated learners are characterised by their ability to initiate meta-cognitive, cognitive, affective, motivational, and behavioural processes which function to achieve their learning goals; in this they persevere until they achieve success (Kitsantas et al., 2019). SRL, as suggested by Kizilcec et al. (2017), is a personality trait whereby learners have the resilience to initiate, undertake, complete and adapt to the process or the task in hand.

The second school of thought holds that SRL is not a static skill. Instead, it evolves with practice and acts as a scaffold that supports the learner through contingencies or uncertainties (Kizilcec et al., 2017); it is therefore called perceived self-regulated learning (PSRL), being contingent on a certain condition or technology. Indeed, self-regulated learning is highly dependent on a person’s self-efficacy (Liaw & Huang, 2013), while the perception that one can be structured is contextual and based on the tools available. In other words, self-regulated learning is more of a personality trait, whereas PSRL can be gained through technological interventions (Wu et al., 2010).

In the context of this research, PSRL is defined as an active, constructive process that is made possible by technology, in which learners can set their learning goals and are enabled to monitor, regulate, and control their cognition and behaviour (which are driven and constrained by their goals and by the contextual features of the environment) (Liaw & Huang, 2013). Therefore, unlike SRL, which is confined to personal attributes, PSRL is operationalised as the perception of a technology as a self-regulated individual learning tool which enables people to regulate the content of e-learning (Liaw & Huang, 2013). In other words, PSRL is an outcome or a benefit of using an LMS effectively. In a study of the perceived satisfaction of 196 university students, perceived usefulness and an interactive learning environment were shown to positively influence PSRL in an e-learning environment (Liaw & Huang, 2013).

To sum up, in educational technology literature, PSRL can be improved by an intervention in the practical interface design.

Analytical framework

Perceived cognitive absorption, perceived self-regulated learning and continuation intention of LMS

The ultimate purpose of the LMS is to improve learners’ ability to self-regulate. If the purpose of the LMS is perceived to have been met (i.e. if it becomes a perceived benefit, operationalised in this research as PSRL), the CIU will be improved. Thus, the first criterion for a meaningful intervention is to improve the PSRL; i.e. the new intervention must improve the PSRL so that the CIU can be strengthened. Two main theoretical perspectives explain the proposed effect of the PSRL on the CIU: perception theory and expectation theory.

First, according to perception theory, perception is subjective and based on contextual factors; i.e. people’s perceptions and interpretations are based on their experience, knowledge, background, and self-rating (Yammarino & Atwater, 1993). People’s perception of themselves may also have a significant effect on their intentions (Maselli & Altrocchi, 1969). In other words, if a system improves one’s perception of one’s capabilities and performance, it will strengthen one’s intention to keep using the system. There is academic evidence that PSRL and CIU are positively associated. In their phenomenological study, Concannon et al. (2018) found that courses designed to improve learners’ perceived self-regulation strengthen their persistence in following the course and their intention to take similar courses in the future. Self-regulation is formed not only from competence but also from perception and experience, built up by the design of the course and the interfaces and characteristics of the LMS.

Second, according to expectation theory, if a system’s users receive the expected benefits from the system, their intention to continue using the system is accordingly strengthened (Lee, 2010). Therefore, if the system can deliver the targeted objectives, the users are expected to use it in the future. PSRL is about the belief that one can set goals, plan accordingly, and then follow the plan. An LMS is a system designed and implemented to improve learners’ ability to set goals, plan accordingly, and follow the plan. Therefore, if LMS improves the PSRL, the CIU is expected to become stronger.

  • H1: PSRL positively affects CIU

The second criterion to ensure the meaningfulness of the intervention is the ability to improve the PCA. It is proposed in this section that the PCA from using the LMS can predict the variation in the learners’ CIU. Based on the expectation-confirmation model extended by Jumaan et al. (2020), the confirmation of the expectation affects the PCA, which in turn affects the perceived usefulness, satisfaction and the CIU. That is, enhancing the learners’ PCA is proposed to improve the CIU because it can change their attitude to the system, as shown by Moreno et al. (2016) in their examination of 251 students in Brazil. Using the same analogy in the LMS, if students are fully engaged by the use of a learning technology, their engagement improves their attitude to the LMS, hence strengthening their intention to use it effectively. Along the same lines, it is proposed that a learner’s PCA towards the LMS will intensify the CIU.

  • H2: PCA positively affects the CIU

Second, it is proposed that improving the learners’ PCA improves the perception of the benefits from the LMS (i.e. PSRL). An LMS is an application that aims to improve learners’ perception of their personal ability to organise, structure and achieve learning objectives. Basol and Balgalmis (2016) and Karlinsky-Shichor and Zviran (2015) showed that the effective use of a software application improves its perceived benefits, i.e. the PSRL. Since the PCA can improve the effective use of applications (Moreno et al., 2016), the present study argues that the PCA positively affects the PSRL.

  • H3: PCA positively affects PSRL

Third, it is proposed that a technological intervention that can improve the learners’ PCA will influence the CIU positively, mediated by an improved perception of the benefits from the LMS (i.e. PSRL). The PCA does not, however, always affect the CIU, because it can sometimes be perceived negatively. For instance, learners who had higher PCA in using Facebook were found to have a lower level of academic achievement (Rouis et al., 2011). Therefore, the PCA needs to be directed to the purposeful use of applications; otherwise, it could lead to poor results. Since the ultimate purpose of the application is to regulate one’s own learning activities, the present study proposes PSRL as a mediator between PCA and CIU.

  • H4: PCA affects the CIU positively mediated by the positive effects of PSRL

The earlier discussion provides the theoretical foundation of the analytical framework proposed in Fig. 1. The framework is based on three stages: the engagement level (PCA), which could improve the learners’ perceived ability (PSRL), which in turn enhances the intention to keep using the educational technology (CIU). In other words, the significance of the intervention can be measured by the amount of improvement in the Perceived Cognitive Absorption (PCA) and Perceived Self-regulated Learning (PSRL) of the LMS, in turn strengthening the CIU, as visualised in Fig. 1, where the causal relationships are proposed. The use of the tracking technology, operationalised as perceived usefulness and perceived ease of use, is proposed to influence the PCA and PSRL. Additionally, it is proposed that PCA affects PSRL and CIU, while PSRL affects CIU. The aim of this article is to test the analytical framework proposed in Fig. 1.

Fig. 1 Research Conceptual Framework

The following sections deduce these propositions from the literature.

Tracking technology (TT) and continued intention to use (CIU) the LMS

Herd behaviour theory was adopted in the present study to theorise the potential impact of the TT on improving learners’ PCA and PSRL; the improved PCA and PSRL might, in turn, improve learners’ CIU of the LMS. According to herd theory, users who are exposed to certain behaviours performed by others follow a similar behaviour pattern. The theory therefore suggests that exposing learners to a technology that displays the behaviours of others (e.g., as shown by tracking tools (TTs)) will enhance their intention to follow and imitate the perceived learning practices of those others. The rationale for herd behaviour lies in people’s belief that others may have access to some critical information that is unavailable to themselves; according to Banerjee (1992), decision rules are characterised by a need to follow rather than lead. The concept of the TT in this research is borrowed not from comparison theory, as found in the classic literature, but from herd behaviour theory, the leading fundamental theory here: it operates by eliminating the self from the comparison and letting the learner see the pattern of the herd’s behaviour. This research adopted herd behaviour theory to develop the TT intervention.

The present study considers usability, in terms of ease of use and usefulness, as confirmation that the tracking tool is used. In other words, students’ perceptions of the tool’s usefulness and ease of use are proposed to improve their cognitive absorption and PSRL, so that the CIU of the LMS can be enhanced. Thus, it is proposed that the more the TT is used, as operationalised in the perceived ease of use (PEOU) and perceived usefulness (PU) of the TT, the higher will be the learners’ tendency to keep using the LMS in the future.

  • H5: Perceived usability of TT affects CIU positively

    • H5a: PEOU of TT affects CIU positively

    • H5b: PU of TT affects CIU positively

Tracking technology (TT) and perceived cognitive absorption (PCA) of LMS

The use of the tracking technology is proposed to influence learners’ PCA in the LMS. According to herd behaviour theory, learners will be motivated and engaged by following the steps of others. This means that, where learning is concerned, students who follow the herd tend to become intensely focused on the content. Therefore, a tracking technology will affect the learners’ perceived cognitive absorption. Thus, it is proposed in this research that the greater the use of the TT, operationalised as the level of PEOU and PU, the greater the effect on the learners’ perception of cognitive absorption from the use of the LMS.

  • H6: Perceived usability of TT affects PCA positively

    • H6a: PEOU of TT affects PCA positively

    • H6b: PU of TT affects PCA positively

If using TT improves the level of perceived cognitive absorption, the usability of the tracking technology can affect the CIU. Accordingly, the following hypotheses were developed.

  • H7: Perceived usability of TT affects CIU positively mediated by the positive effects of PCA

    • H7a: PEOU of TT affects CIU positively mediated by the positive effects of PCA

    • H7b: PU of TT affects CIU positively mediated by the positive effects of PCA

Tracking technology (TT) and perceived self-regulated learning of LMS

Learning analytics was found to enhance learners’ awareness and regulation of their learning processes (Schumacher & Ifenthaler, 2018). The tracking tool is proposed to improve the learners’ perceived self-regulation skills by exposing them to others’ learning practices. According to social comparison theory (Festinger, 1954), when students can observe other students’ performance, they can set goals and plan more effectively and efficiently (Venkatesh et al., 2000). This is unlike the calibrated self-comparison of a self-regulated learning tool, which could negatively affect learners’ self-esteem. Instead, the present study uses the power of learners’ perceived self-regulation in learning to retain their focus on their perception of their learning while using the tracking tool. Learners will be able to know more about the relative weight of each topic based on the number of clicks by others. This is proposed to improve their perception of self-regulation. The evidence here can be strengthened by research (Wang et al., 2019), which concluded that rational herd behaviour is a persistent pattern demonstrated by learners to alleviate the complexity of a module by planning based on the efforts of others and the time they spend on the learning activities.

  • H8: The usability of tracking technology affects PSRL positively.

    • H8a: PEOU of TT affects PSRL positively

    • H8b: PU of TT affects PSRL positively

The TT as an organisational tool could also improve the sense of a task being structured and of being in control of it; this may improve the learners’ perception of the value of an application as a self-regulating tool (i.e. PSRL). This positive perception could then foster a positive attitude to continuing to use an LMS in the future. The TT could affect the PSRL, which has been proposed to affect the CIU positively. Therefore, the present study proposes that the TT affects the CIU mediated by the PSRL.

  • H9: Perceived usability affects CIU positively mediated by the positive effects of PSRL

    • H9a: PEOU of TT affects CIU positively mediated by the positive effects of PSRL

    • H9b: PU of TT affects CIU positively mediated by the positive effects of PSRL

The students are presumed to become immersed and focused on completing the learning activities in order to follow the herd. This could establish their perceived cognitive absorption; only then can the students’ perceived self-regulation be improved. This may be explained as follows: the students will be focused and deeply engaged in viewing and completing the learning activities on time (since the weekly topic of the module is represented graphically). They will be aware of themselves as they interact, reason, search for content, set goals, and plan to complete the activities. These skills are the critical dimensions of the PSRL.

  • H10: TT usability affects PSRL positively mediated by the positive effects of PCA

    • H10a: PEOU of TT affects PSRL mediated by the positive effects of PCA

    • H10b: PU of TT affects PSRL mediated by the positive effects of PCA

The level of cognitive absorption in the use of the dashboard can be a determining factor in the CIU (Arnold & Pistilli, 2012). In other words, purposeful behavioural engagement (i.e. PSRL) resulting from the motivational context (i.e. PCA) could improve cognitive behavioural engagement, which could then lead to an improvement in the intention to continue using an LMS.

  • H11: TT usability affects CIU mediated by the positive effects of PCA & PSRL

    • H11a: PEOU of TT affects CIU positively mediated by the positive effects of PCA & PSRL

    • H11b: PU of TT affects CIU positively mediated by the positive effects of PCA & PSRL

Research methodology

This research developed a new tracking technology to be integrated into the learning dashboard. The effects of this tracking technology were examined using a survey; a questionnaire is preferred when the population is too large to be interviewed singly. Students without a computer science background could struggle to use the technology because of computer literacy issues. To ensure the homogeneity of the data collected and to avoid outliers due to non-research-related factors (e.g. technology anxiety or technology literacy), the sample focused on computer science students at a modern university. The data were collected from Westminster International University in Tashkent (WIUT), which was established in 2002 and was the first international university in Central Asia to offer a Western-style education with UK qualifications. In this research, 400 students, all from Uzbekistan, who had finished their first year in the Computer Science department, were using the LMS. Both genders were reasonably well represented: 37% women and 63% men. Academic courses at WIUT lasted eleven weeks; accordingly, to avoid end-of-term stress, the surveys were distributed four weeks before the exam period. A pilot study to try out the research method was essential: it mimics the data collection process to detect and adjust potential pitfalls in preparation for the survey and the actual data collection phase of the research (Van Teijlingen et al., 2001). The last stage of developing the questionnaire involved pre-testing for validity, reliability, and mistakes (Presser & Blair, 1994; Presser et al., 2004; Reynolds et al., 1993). The pre-testing included recruiting a postgraduate student in psychology from the University of Westminster in the UK, and 16 responses were collected. Following feedback on the questionnaire design and layout, more straightforward language was used and the questionnaire was revised according to the recommendations received.

Sampling technique

Statisticians agree that 20–50 responses per measured independent variable is a sufficient range in any research applying SEM techniques such as the structural equation modelling and confirmatory factor analysis used in the present study. However, a ratio of 30 responses per construct is preferable for making the measurements reliable (Kline, 2015). The model developed for the present research comprised the following variables: PEOU, PU, PCA, PSRL, and CIU. The 30:1 ratio therefore suggests a minimum sample size of 150 responses for the present research. The total number of students participating in the present survey was 151.
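As a quick illustration of this ratio calculation (the helper function below is hypothetical, written only to make the arithmetic explicit):

```python
def minimum_sample_size(constructs, responses_per_construct=30):
    """Minimum sample size under an N-responses-per-construct rule of thumb."""
    return len(constructs) * responses_per_construct

# The five constructs measured in this study
constructs = ["PEOU", "PU", "PCA", "PSRL", "CIU"]
print(minimum_sample_size(constructs))  # 150; the study collected 151 responses
```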

Questionnaire design

Perceived cognitive absorption, perceived usefulness and perceived ease of use were adopted from Roca et al. (2006). The CIU is a five-item construct borrowed from Abdullatif and Velázquez-Iturbide (2020) and Alraimi et al. (2015). Perceived self-regulated learning was adapted from Basol and Balgalmis (2016) and Liaw and Huang (2013). Based on Zimmerman’s definitions of self-regulation abilities and the results from the pilot study on the wording of these abilities, new items were developed. They concerned the LMS ability to “manage my study time”, “plan my learning tasks independently”, “where to search for missing information”, “tracking my learning progress independently” and “evaluation of the achievement”.

Validity and reliability tests

SmartPLS software was used in this research for the statistical analysis of the survey data. Partial least squares (PLS) structural equation modelling was employed to examine the proposed conceptual framework. PLS was originally developed by Herman Wold in the 1970s. This statistical technique provides a framework for analysing multiple causal relationships between a set of variables, in which each block of variables is assumed to be represented by a latent construct (i.e. a theoretical concept).

This research adopted three measures to ensure the reliability of the constructs. Reliability measures examine the internal consistency of the constructs (Henseler et al., 2009): they describe the extent to which the items used constitute the same construct (Rönkkö & Cho, 2022; McNeish, 2018) and thus measure the inter-relatedness of the items within the test (Tavakol & Dennick, 2011, p. 53). The three measures are composite reliability, Cronbach’s alpha, and rho_A. Composite reliability “measures the sum of latent variable factor loadings relative to the sum of the factor loadings plus error variance” (Urbach & Ahlemann, 2010, p. 19), while Cronbach’s alpha “measures the extent to which all the items in a test measure the same concept or construct and hence it is connected to the inter-relatedness of the items within the test” (Tavakol & Dennick, 2011, p. 53). Rho_A is parallel to composite reliability but uses unstandardised factor loadings (Dijkstra & Henseler, 2015). All measures exceeded 0.6 for all research constructs, indicating their reliability (Henseler et al., 2009).
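For readers unfamiliar with these indices, the two closed-form measures can be sketched in a few lines (a minimal illustration with made-up Likert responses; the helper names are ours, and SmartPLS computes these statistics internally):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings):
    """Composite reliability from standardised factor loadings."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

# Illustrative (made-up) responses of 5 respondents to a 3-item construct
scores = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
print(round(cronbach_alpha(scores), 3))                   # 0.918
print(round(composite_reliability([0.7, 0.8, 0.75]), 3))  # 0.795
```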

This research adopted four methods to ensure the validity of the constructs: the AVE test, the Fornell-Larcker test, the HTMT, and confirmatory factor analysis (i.e. the significance of the factor loadings and of the model). All the reported AVEs were acceptable because they were above 0.5. The Fornell-Larcker test, as in Table 1, presents the correlation matrix of the constructs. According to this test, the square root of the AVE should exceed the highest correlation in the matrix; here the highest correlation was r = 0.697, p < 0.05, and the lowest square root of AVE was .787, which established discriminant validity.

Table 1 Fornell-Larcker criterion test
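The Fornell-Larcker check itself is simple arithmetic. A sketch in Python, using illustrative AVEs and construct correlations that echo the figures quoted above (lowest √AVE ≈ .787 versus a highest correlation of .697):

```python
import numpy as np

def fornell_larcker_ok(ave: np.ndarray, corr: np.ndarray) -> bool:
    """Discriminant validity holds when sqrt(AVE) of each construct
    exceeds that construct's correlations with all other constructs."""
    sqrt_ave = np.sqrt(ave)
    off = np.abs(corr - np.diag(np.diag(corr)))  # zero out the diagonal
    return bool(all(sqrt_ave[i] > off[i].max() for i in range(len(ave))))

# Illustrative values (sqrt(0.62) ~ .787, highest correlation .697):
ave = np.array([0.62, 0.65, 0.70])
corr = np.array([[1.0, 0.697, 0.50],
                 [0.697, 1.0, 0.40],
                 [0.50, 0.40, 1.0]])
```

A single correlation exceeding the relevant √AVE is enough to fail the criterion, which is why the comparison uses the row-wise maximum.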

Another approach to assessing discriminant validity, suggested by Roemer et al. (2021), is the heterotrait-monotrait ratio of correlations (HTMT). The HTMT is a measure of similarity between latent variables; a threshold of 0.85 reliably differentiates the pairs of latent variables that are discriminant valid from those that are not. As shown in Table 2, all measures are well below 0.85, so this study satisfied the criteria for discriminant validity. A confirmatory factor analysis was also conducted (see Table 3 for the standardised factor loadings). All the factor loadings were significant and above 0.6, and the model fit indices (χ²/df = 1.498 and RMSEA = .058) were acceptable.

Table 2 Heterotrait-Monotrait ratio of correlations (HTMT)
Table 3 Research constructs
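The HTMT reported in Table 2 is computable directly from the item correlation matrix. A sketch, assuming two blocks of indicator scores rather than the study’s data:

```python
import numpy as np

def htmt(X: np.ndarray, Y: np.ndarray) -> float:
    """Heterotrait-monotrait ratio for two item blocks, each shaped
    (n_respondents x n_items): mean cross-block item correlation over
    the geometric mean of the within-block item correlations."""
    R = np.corrcoef(np.hstack([X, Y]), rowvar=False)
    kx, ky = X.shape[1], Y.shape[1]
    hetero = R[:kx, kx:].mean()                              # cross-block
    mono_x = R[:kx, :kx][np.triu_indices(kx, 1)].mean()      # within block X
    mono_y = R[kx:, kx:][np.triu_indices(ky, 1)].mean()      # within block Y
    return hetero / np.sqrt(mono_x * mono_y)
```

When the two blocks measure distinct constructs, the cross-block correlations are small relative to the within-block ones and the ratio falls well below 0.85; items tapping the same construct push the ratio toward 1.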

Common method bias

Two approaches were taken to check for common method bias, i.e. the possibility of systematic error: the variance inflation factor (VIF) test and Harman’s single-factor test. In the first method, as advised by Kock (2015), all the independent and dependent variables were tested and their VIFs found to be less than 3.3, as shown in Table 4. For the second measure, Harman’s single-factor test, dimension reduction was used and the extraction sum of squared loadings had to be less than 50%; in this research it was 44.35%, which also indicated no issue with common method bias. In addition, this research adopted the unmeasured latent method construct (ULMC) approach to examine common method bias (Richardson et al., 2009). This analysis was conducted in the AMOS software, and the common variance was 25%, which is acceptable. The report of this analysis is in the appendix.

Table 4 VIF Table
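Both checks can be sketched in a few lines of Python on synthetic data (Kock’s full-collinearity procedure applies the VIF to latent variable scores; the sketch shows the basic regression form, and a principal-component version of Harman’s test):

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """VIF of each column: 1 / (1 - R^2) from regressing that column
    on all the remaining columns (plus an intercept)."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r2 = 1 - ((y - Z @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        out[j] = 1 / (1 - r2)
    return out

def harman_first_factor_share(X: np.ndarray) -> float:
    """Harman's single-factor check: share of total variance captured
    by the largest eigenvalue of the item correlation matrix."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
    return eigvals.max() / eigvals.sum()
```

VIFs below 3.3 and a first-factor share below 50% are the thresholds used in the text above.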

Findings

In the present study, the TT was implemented by displaying a summary of completed activities on the Moodle course page. The completed activities were the course learning-content tasks (i.e. reading, attempting or completing a quiz, completing a lesson, and downloading slides). Each pictured item was linked to the actual learning content (Fig. 13, for example). For instance, the second item was completed by 17 students; if a student inspected it and decided to complete the activity, they could access it with a single click, which increased the click counter by one.

The click counter statistics were collected weekly in a separate database for data analysis purposes. A similar counter was added to the TT table to capture another aspect of students’ satisfaction with the visualised data. All counters were reset every week, after their totals had been collected, ready for the following week. The instructor and the researcher collaborated weekly to update the displayed activities. Figure 2 shows the click counters on the TT.

Fig. 2
figure 2

The implemented tracking technology – students’ interface

To collect information on learners’ interactions, a click counter was used and customised to fit the purpose of this research. Tracking learners’ progress is a well-established practice that involves following their navigational behaviour by logging their interaction with hyperlinks. The click counter indicates the behavioural intention to use the system (Toohey et al., 2019), and is considered a primary mechanism for collecting interaction data to feed the instructor’s dashboard. The data on the click counters are reported to the students through the TT system.

Thus, the present study’s click counter was developed to provide further insight, from a different angle, into students’ perspectives on the proposed features. All clicks were recorded, even those repeated by the same student: a re-clicked item could reflect the importance of the topic or feature being revisited. In other words, non-unique clicks were also recorded. The “Like” counter, in contrast, collected only unique clicks, i.e. a student could click the “Like” button once only; it captured the student’s instant perception of the TT and the competency guidance visualisation. The deeper-level click counter for the competency sub-guidance is shown in Figs. 3 and 4. These data tended to show the students’ interest in the competency being illustrated. The impact of the tracking tool panel on students’ behaviour was investigated in depth by adding a counter to each activity link to measure the number of times students accessed it.
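The counter mechanics described above, non-unique activity clicks, unique “Like” clicks, and a weekly archive-and-reset cycle, can be sketched as follows; the class and method names are hypothetical, not taken from the study’s Moodle implementation:

```python
from collections import defaultdict

class ActivityTracker:
    """Minimal sketch of the TT counters described in the text."""

    def __init__(self):
        self.clicks = defaultdict(int)   # activity_id -> total clicks (repeats count)
        self.likes = defaultdict(set)    # activity_id -> set of student ids
        self.weekly_archive = []         # weekly totals kept for analysis

    def click(self, activity_id: str, student_id: str) -> None:
        # Non-unique: a re-click by the same student still increments.
        self.clicks[activity_id] += 1

    def like(self, activity_id: str, student_id: str) -> None:
        # Unique: a student can "Like" an item once only (sets ignore repeats).
        self.likes[activity_id].add(student_id)

    def end_of_week(self) -> dict:
        # Archive this week's totals, then reset for the following week.
        snapshot = {
            "clicks": dict(self.clicks),
            "likes": {a: len(s) for a, s in self.likes.items()},
        }
        self.weekly_archive.append(snapshot)
        self.clicks.clear()
        self.likes.clear()
        return snapshot
```

Two clicks by the same student thus raise the activity counter to 2 but leave the “Like” count at 1, mirroring the non-unique/unique distinction above.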

Fig. 3
figure 3

Tracking technology and visualised competency – condensed students’ interface

Fig. 4
figure 4

Tracking technology and visualised competency – expanded students’ interface

Testing the effects

This research used partial least squares structural equation modelling (PLS-SEM), via the SmartPLS software, to test the research hypotheses; Fig. 5 visualises the test results. TT use is measured by its PEOU and PU. There were thus six main relationships, as shown in Fig. 5. The first two concern the effect of TT on PCA and on PSRL; the second two, the effect of PSRL and PCA on CIU; and the last two, the effect of TT on CIU, both directly and as mediated by PSRL and PCA. The model was well fitted and the adjusted R² values were significant (55%, 51.2%, and 27.5% for CIU, PSRL, and PCA, respectively).

Fig. 5
figure 5

Research model

Table 5 reports the significant effects of the learners’ PCA and PSRL from the LMS on their intention to continue using the system. There were four relationships to examine: the effect of PSRL on CIU (H1), of PCA on CIU (H2), of PCA on PSRL (H3), and of PCA on CIU through PSRL (H4). The PLS model results suggested that PSRL significantly affects CIU (.404, P < 0.01), confirming H1. PCA has three significant effects: on PSRL (.58, P < 0.01), directly on CIU (.307, P < 0.01), and indirectly on CIU through PSRL (.234, P < 0.01), validating H2, H3, and H4. Thus, PCA has a significant total effect on CIU (.541, P < 0.01).
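The indirect and total effects in Table 5 follow the standard product-of-paths arithmetic used in PLS-SEM, and the reported figures can be reproduced directly from the path coefficients:

```python
# Path coefficients as reported in Table 5.
a = 0.58        # PCA -> PSRL
b = 0.404       # PSRL -> CIU
direct = 0.307  # PCA -> CIU (direct path)

indirect = a * b             # PCA -> CIU via PSRL
total = direct + indirect    # total effect of PCA on CIU
print(round(indirect, 3), round(total, 3))  # 0.234 0.541
```

The mediated effect (.234) is the product of the two constituent paths, and adding the direct path gives the total effect (.541) quoted above.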

Table 5 Research results

This research found, as proposed, that the usability of the TT significantly affects perceptions of the LMS, i.e. the PSRL and PCA of the LMS. TT usability (PU and PEOU) had significant positive effects on LMS PCA (.281, P < 0.01 and .314, P < 0.01), confirming H6a and H6b. However, no significant evidence was found to support the direct effects of the PEOU (.131, P > 0.05) or PU (.113, P > 0.05) of the TT on LMS PSRL, causing H8a and H8b to be rejected. Further analysis found a relationship between TT and PSRL, but one mediated by PCA: PCA had a strong mediating effect on the relationships of PEOU (.179, P < 0.05) and PU (.163, P < 0.05) with PSRL, confirming H10a and H10b. Thus, the TT usability aspects, i.e. PEOU (.315, P < 0.05) and PU (.274, P < 0.01), had significant total effects on the PSRL.

The TT usability aspects had a significant total effect and a significant total indirect effect, but no direct effect, on CIU. The direct effects of the PEOU (.011, P > 0.05) and PU (.137, P > 0.05) were weak and statistically non-significant, so H5a and H5b were rejected. The total effect of the PU (.337, P < 0.01) was greater than that of the PEOU (.233, P < 0.05), and both had significant effects on LMS CIU. The total indirect effects of the PEOU (.222, P < 0.01) and PU (.2, P < 0.01) were, as demonstrated, positive and significant. Interestingly, of the three proposed mediators (PCA, PSRL, and PCA → PSRL), PSRL did not play a significant role in mediating the PEOU (.053, P > 0.05) or PU (.046, P > 0.05), causing H9a and H9b to be rejected. Nor did PCA alone show any significant mediating impact (.097 and .084 for PEOU and PU, respectively, P > 0.05), which invalidated H7a and H7b. However, the PCA → PSRL path was a significant mediator of the effects of both PEOU (.074, P < 0.05) and PU (.064, P < 0.05) on CIU, stressing the role of PCA in the model and confirming H11a and H11b.

Discussion

The aim of this research was twofold: to set out a design framework for interventions to improve the intention to continue using an LMS, and to use this framework as a theoretical umbrella for reconceptualising the tracking technology from a self-comparison tool into a herding tool, so that the intention to continue using the LMS can be improved.

Regarding the first objective, this research proposed that improving the continuance intention to use an LMS requires improving perceived cognitive absorption and perceived self-regulated learning. The perception of cognitive absorption (PCA) was found to influence learners’ perception of being self-regulated as a result of using the system. This research also found significant evidence that PSRL and PCA influence CIU positively, and that PCA affects CIU positively, partially mediated by PSRL. This applies to educational technology the finding of Jumaan et al. (2020) that PCA affects CIU in the mobile industry. Similarly to their finding that PCA affects the perceived usefulness of the technology, the present research found that the PCA from the LMS affects the perception of self-regulation from using the LMS, which is the main benefit derived from using it. Thus, any technological intervention that can influence learners’ perception of the LMS, either its PSRL or its PCA, could affect its CIU.

This research successfully designed such a tracking technology and found that the tool improves perceptions of the LMS and, in turn, the intention to continue using it. The research theorised, developed, examined, and validated the integration of a TT that works by improving perceptions of the LMS. According to herd behaviour theory, one’s decisions and emotions are strongly influenced by the herd’s decisions and behaviours. Thus, the proposed model linked the use of the TT to the LMS PSRL and PCA so as to improve the CIU of the LMS; the use of the TT was measured, following TAM, through its PEOU and PU. Based on herd theory, when learners saw that the class was completing learning activities, they felt motivated and engaged to do the same (Banerjee, 1992). The leading feature here was the students’ knowledge of the tasks and activities that other learners had completed, which engaged them in similar tasks and also helped them plan more effectively once they were conscious of the gap between their achievement and that of the herd. Using this theory, the intervention proposed to increase students’ cognitive absorption by providing a feature that told them what others were doing. The empirical data in this research yielded significant evidence supporting this.

The first proposition concerned the relationship between TT and PCA. The use of the tracking technology in this research was inferred by proxy: a perception of usability (PU and PEOU) indicates that a user must have accessed and used the tool in order to form that perception. In operational terms, H6a and H6b suggested that the PEOU and PU of the TT positively affect PCA. Since the perceived usability of the TT positively affected learners’ PCA, the tool provided a meaningful learning experience. This finding supplements those of Zhao et al. (2020), who found that the number of downloads influenced prospective downloaders of a software application, who followed the herd without a clear rationale. The present research is novel because it is the first to test similar propositions in an educational technology context.

Additionally, this research supplements that of Jivet et al. (2017) and Bodily and Verbert (2017), who believed that the use of dashboards in education improved students’ engagement. However, it differs from Jivet et al. (2017) in how learners’ interactions are presented: in the present research, only other people’s behaviours are shown, to avoid the frustration or disappointment of being compared with others. In other words, a tracking tool that does not compare individuals’ behaviour can still improve the PCA without sacrificing a learner’s well-being through comparisons with others.

The second proposition concerned the effect of the PEOU and PU of the tracking technology on the PSRL of the LMS. The underlying theory was that students would be able to benchmark their completed activities and tasks against those of the herd, allowing them to perceive that they could plan their activities better. This research found no significant evidence to support this direct effect. Nevertheless, the tenth hypothesis (H10a and H10b) was confirmed: the perceived usability of the tracking tool had a positive impact on students’ perceived self-regulated learning, mediated by their PCA. This corresponds with the logic on which the hypothesis was based: learners could become deeply focused on completing their activities in order to follow the herd (their peers), which improved their cognitive absorption; only then did students experience an improvement in self-regulated learning. This result adds to the finding by Jivet et al. (2017) that viewing and copying completed activities can develop deeper engagement, since well-thought-out tasks make the learning content meaningful. In a nutshell, the use of the TT would add little to the benefits of the LMS unless it improved the LMS PCA.

The third proposition involved the relationship between TT and CIU. Interestingly, both parts of the fifth hypothesis, H5a and H5b, were rejected; they proposed that PEOU and PU could directly improve students’ CIU. Additionally, the role of PCA as a sole mediator was not verified: in H7, perceived usability showed no impact on students’ continuance intention through PCA alone, but only when PCA and PSRL acted as mediators in sequence. Hypothesis 11 (H11a and H11b) was confirmed: the perceived usability of the TT had a positive impact on students’ continued intention to use the LMS, mediated by the PCA → PSRL path.

To sum up, the TT improved students’ perceived cognitive absorption: it deeply engaged them, renewed their focus on their learning, and satisfied them by immersing them in their work. This increased their perception that self-regulated learning was attainable. Once students felt that they received all these benefits of engagement and were at the same time in control of their resources, they could effectively regulate their learning tasks and became enthusiastic about using the LMS again.

Contributions

The framework in Fig. 5 and its operationalised model in Fig. 6 contribute to the existing literature on educational technology because they are the first to frame and test the relationships between tracking technology in education, perceived cognitive absorption, perceived self-regulation, and the intention to continue using the LMS. Adding to other research explaining the intention to continue using an LMS (Al-Shaikhli et al., 2021), this research shows that embedding tracking technology can play a significant role in improving the intention to continue using the LMS. Although it had been conceptualised before that tracking technology resembles and simulates herd behaviour in education (Jivet et al., 2017), this research is the first to examine, and to find, its positive effects on the intention to continue using the LMS.

Fig. 6
figure 6

Operationalised model of the research model

This research contributes to the educational technology literature by borrowing herd behaviour theory from sociology (Hirshleifer & Hong Teoh, 2003; Lux, 1995) and educational technology (Sun, 2013; Peng et al., 2009) to propose a role for tracking technology in a learner’s LMS interface. The tracking technology introduced in this research to implement this proposition differs from any other in the literature. The present research makes the TT continuously visible at the interface, so that learners absorb it whether consciously or unconsciously. In addition, unlike others who compare each learner’s interactions with their own peers’ (Jivet et al., 2017), this research avoided benchmarking the learners’ interactions against others’ because the literature notes that such comparison can lead to unintended consequences.

Thus, the present research used herd theory to discuss potential mimicking by students and showed how discovering others’ interaction data could affect their experience of being engaged in using the LMS and of perceiving themselves as self-regulated. Letting the learners see the herd’s behaviour was a novel idea, applied theoretically in the new context of learning technology; this research is novel in proposing this type of design.

Research limitations

The first area for improvement in the survey lay in including demographic questions so as to assess the role of gender, age, background, and education, together with other country-specific factors. Despite the importance of these factors, the potential cost of collecting them could outweigh their benefits: students might see such questions as a way of identifying them, or as unduly personal, and the questionnaire was deliberately kept short and straightforward so as to achieve a high response rate. Moreover, the class comprised students of similar age, background, and education, from broadly the same culture. Owing to the novelty of the tracking tool idea, the research set out to investigate the tool within one topic, one class, one teacher, and one intake, in the hope of reducing the impact of unmeasurable personal factors. The variation in demographic factors was thus limited; replicated studies should recruit a wider and more diverse group of participants to test inclusivity and, therefore, equality.

The measurement scales in the positivist questionnaire left some areas for improvement. First, PSRL was measured using a self-rated questionnaire, as is common in the literature; however, it could also be measured from the learners’ interactions on the computer (Çebi & Güyer, 2020). Both approaches have been used in the literature, and triangulating the data could have improved the value of the work. Nevertheless, the cost and effort of integrating interaction-data algorithms, and of seeking the students’ approval to connect these data with their questionnaires, was a key challenge limiting the researcher’s ability to consider this option. Second, PSRL is a perception and not necessarily accurate; i.e. this research focused on self-perception more than on objective practices. Nevertheless, rigorous validity and reliability tests were adopted to ensure that the scales used were helpful and not misleading.

Regarding the PCA, some authors in the literature used four-dimensional scales, each dimension having four items. This could be a valid approach if the constructs were few in number; however, had the present research included 16 items for measuring PCA, the questionnaire would have been lengthy, which might have lowered the response rate. The last possible area for improvement was the ability to measure respondents’ use behaviour regarding the interventions. Because there was no possible linkage between the interaction data and the questionnaire, the only option available was to measure this behaviour through self-rated items. Statements about behaviour can be doubted or inflated, but using second-order questions to measure it would give more valid responses. In other words, the questionnaire could have asked direct questions about the level of use, but this might have elicited misleading answers, since the level of use is relative and not easily measurable; measuring the perceptions of ease of use and usefulness on a Likert scale gives stronger and more comparable figures.

Future research

Replications and generalisation

In terms of replicability and generalisability, this research could be improved by conducting similar studies in different contexts and on different modules. A replication with a larger sample could also improve the reliability of this work. The students enrolled in this project were studying for a computer science degree, with the implicit assumption that they had no difficulty using computers; the results might differ for undergraduates from a different subject background (such as Business, Biology, or English). Likewise, the participants were clustered at the undergraduate level, meaning that all of the learners were below 20 years of age. Greater age might influence the findings: improving the PCA for older people is not a straightforward process, since it raises questions of work-life balance, job-related factors, and family demands, all bearing on the time that can be dedicated to studying.

CIU and academic performance

The ultimate focus of this research was improving the CIU of the LMS because, in the present research as in previous work, CIU is associated with academic performance (Tawafak et al., 2018). Although this premise holds in most of the previous research, it needs to be critically evaluated and analysed before the research findings are taken for granted. There are two examples touching on this relationship that need to be considered with caution.

First, herd behaviour is robust and leads learners to follow the herd. This does not mean that best practices are followed or that the best performance is produced; rather, it implies the average person’s behaviour, which may be only a little above the middle of the performance scale (more than 50%). Thus, herd behaviour pushes practices to cluster around average performance, which benefits the negative outliers (failing students) but is a weakness for the positive outliers (the best students). It should therefore be noted that herd behaviour was found to play a significant role in improving the CIU; however, improving learning performance needs further investigation to examine the implicit assumptions underlying the relationship between the use of technology, herd behaviour, and academic performance.

References

  • Abdullatif, H., & Velázquez-Iturbide, J. Á. (2020). Relationship between motivations, personality traits and intention to continue using MOOCs. Education and Information Technologies, 1–19. https://doi.org/10.1007/s10639-020-10161-z

  • Agarwal, R., & Karahanna, E. (2000). Time flies when You’re having fun: Cognitive absorption and beliefs about information technology usage. MIS Quarterly, 24(4), 665. https://doi.org/10.2307/3250951


  • Alraimi, K. M., Zo, H., & Ciganek, A. P. (2015). Understanding the MOOCs continuance: The role of openness and reputation. Computers and Education, 80, 28–38. https://doi.org/10.1016/j.compedu.2014.08.006


  • Al-Shaikhli, D., Jin, L., Porter, A., et al. (2021). Visualising weekly learning outcomes (VWLO) and the intention to continue using a learning management system (CIU): The role of cognitive absorption and perceived self-regulated learning. Education and Information Technologies. https://doi.org/10.1007/s10639-021-10703-z

  • Arnold, K. E., & Pistilli, M. D. (2012, April). Course signals at Purdue: Using learning analytics to increase student success. In Proceedings of the 2nd international conference on learning analytics and knowledge (pp. 267–270).

  • Banerjee, A. V. (1992). A simple model of herd behavior. The Quarterly Journal of Economics, 107(3), 797–817. https://doi.org/10.2307/2118364


  • Barnes, S. J., Pressey, A. D., & Scornavacca, E. (2019). Mobile ubiquity: Understanding the relationship between cognitive absorption, smartphone addiction and social network services. Computers in Human Behavior, 90, 246–258. https://doi.org/10.1016/j.chb.2018.09.013


  • Basol, G., & Balgalmis, E. (2016). A multivariate investigation of gender differences in the number of online tests received-checking for perceived self-regulation. Computers in Human Behavior, 58, 388–397. https://doi.org/10.1016/j.chb.2016.01.010


  • Bodily, R., & Verbert, K. (2017). Review of research on student-facing learning analytics dashboards and educational recommender systems. IEEE Transactions on Learning Technologies, 10(4), 405–418. https://doi.org/10.1109/tlt.2017.2740172


  • Bodily, R., Ikahihifo, T. K., Mackley, B., & Graham, C. R. (2018). The design, development, and implementation of student-facing learning analytics dashboards. Journal of Computing in Higher Education, 30(3), 572–598. https://doi.org/10.1007/s12528-018-9186-0


  • Bozoglan, B., Demirer, V., & Sahin, I. (2014). Problematic internet use: Functions of use, cognitive absorption, and depression. Computers in Human Behavior, 37, 117–123. https://doi.org/10.1016/j.chb.2014.04.042


  • Çebi, A., & Güyer, T. (2020). Students’ interaction patterns in different online learning activities and their relationship with motivation, self-regulated learning strategy and learning performance. Education and Information Technologies. https://doi.org/10.1007/s10639-020-10151-1

  • Concannon, J. P., Serota, S. B., Fitzpatrick, M. R., & Brown, P. L. (2018). How interests, self-efficacy, and self-regulation impacted six undergraduate pre-engineering students’ persistence. European Journal of Engineering Education, 44(4), 484–503. https://doi.org/10.1080/03043797.2017.1422695


  • Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly: Management Information Systems, 13(3), 319–339. https://doi.org/10.2307/249008


  • Dijkstra, T. K., & Henseler, J. (2015). Consistent partial least squares path modeling. MIS Quarterly, 39(2), 297–316.

  • Duffy, M. C., & Azevedo, R. (2015). Motivation matters: Interactions between achievement goals and agent scaffolding for self-regulated learning within an intelligent tutoring system. Computers in Human Behavior, 52, 338–348. https://doi.org/10.1016/j.chb.2015.05.041


  • Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7(2), 117–140. https://doi.org/10.1177/001872675400700202

  • García-Pérez, D., Fraile, J., & Panadero, E. (2020). Learning strategies and self-regulation in context: How higher education students approach different courses, assessments, and challenges. European Journal of Psychology of Education, 1–18. https://doi.org/10.1007/s10212-020-00488-z

  • Green, S. G., & Welsh, M. A. (1988). Cybernetics and dependence: Reframing the control concept. Academy of Management Review, 13(2), 287–301. https://doi.org/10.5465/amr.1988.4306891


  • Hall, P. A., & Fong, G. T. (2010). Temporal self-regulation theory: Looking forward. Health Psychology Review, 4(2), 83–92. https://doi.org/10.1080/17437199.2010.487180


  • Harvey, A. J., & Keyes, H. (2019). How do I compare thee? An evidence-based approach to the presentation of class comparison information to students using dashboard. Innovations in Education and Teaching International, 57(2), 163–174. https://doi.org/10.1080/14703297.2019.1593213


  • Henseler, J., Ringle, C. M., & Sinkovics, R. R. (2009). The use of partial least squares path modeling in international marketing. In New challenges to international marketing. Emerald Group Publishing Limited.

  • Hirshleifer, D., & Hong Teoh, S. (2003). Herd behaviour and cascading in capital markets: A review and synthesis. European Financial Management, 9(1), 25–66. https://doi.org/10.1111/1468-036X.00207


  • Jivet, I., Scheffel, M., Drachsler, H., & Specht, M. (2017). Awareness is not enough: Pitfalls of learning analytics dashboards in educational practice. In European conference on technology enhanced learning (pp. 82–96). Springer.

  • Jumaan, I. A., Hashim, N. H., & Al-Ghazali, B. M. (2020). The role of cognitive absorption in predicting mobile internet users’ continuance intention: An extension of the expectation-confirmation model. Technology in Society, 63, 101355. https://doi.org/10.1016/j.techsoc.2020.101355


  • Karlinsky-Shichor, Y., & Zviran, M. (2015). Factors influencing perceived benefits and user satisfaction in knowledge management systems. Information Systems Management, 33(1), 55–73. https://doi.org/10.1080/10580530.2016.1117873


  • Kia, F. S., Teasley, S. D., Hatala, M., Karabenick, S. A., & Kay, M. (2020). How patterns of students dashboard use are related to their achievement and self-regulatory engagement. In ACM international conference proceeding series (pp. 340–349). Association for Computing Machinery. https://doi.org/10.1145/3375462.3375472


  • Kim, J., Jo, I. H., & Park, Y. (2016). Effects of learning analytics dashboard: Analyzing the relations among dashboard utilization, satisfaction, and learning achievement. Asia Pacific Education Review, 17(1), 13–24. https://doi.org/10.1007/s12564-015-9403-8


  • Kitsantas, A., Baylor, A. L., & Hiller, S. E. (2019). Intelligent technologies to optimize performance: Augmenting cognitive capacity and supporting self-regulation of critical thinking skills in decision-making. Cognitive Systems Research, 58, 387–397.


  • Kizilcec, R. F., Pérez-Sanagustín, M., & Maldonado, J. J. (2017). Self-regulated learning strategies predict learner behavior and goal attainment in massive open online courses. Computers and Education, 104, 18–33. https://doi.org/10.1016/j.compedu.2016.10.001


  • Kline, R. B. (2015). Principles and practice of structural equation modeling. Guilford publications.

  • Kock, N. (2015). Common method bias in PLS-SEM: A full collinearity assessment approach. International Journal of e-Collaboration (ijec), 11(4), 1–10.

  • Kurtovic, A., Vrdoljak, G., & Hirnstein, M. (2021). Contribution to family, friends, school, and community is associated with fewer depression symptoms in adolescents - mediated by self-regulation and academic performance. Frontiers in Psychology, 11, 615249. https://doi.org/10.3389/fpsyg.2020.615249

  • Lee, M. C. (2010). Explaining and predicting users’ continuance intention toward e-learning: An extension of the expectation-confirmation model. Computers and Education, 54(2), 506–516. https://doi.org/10.1016/j.compedu.2009.09.002

  • Léger, P. M., Davis, F. D., Cronan, T. P., & Perret, J. (2014). Neurophysiological correlates of cognitive absorption in an enactive training context. Computers in Human Behavior, 34, 273–283. https://doi.org/10.1016/j.chb.2014.02.011

  • Liaw, S. S., & Huang, H. M. (2013). Perceived satisfaction, perceived usefulness and interactive learning environments as predictors to self-regulation in e-learning environments. Computers and Education, 60(1), 14–24. https://doi.org/10.1016/j.compedu.2012.07.015

  • Lux, T. (1995). Herd behaviour, bubbles and crashes. The Economic Journal, 105(431), 881. https://doi.org/10.2307/2235156

  • Maselli, M. D., & Altrocchi, J. (1969). Attribution of intent. Psychological Bulletin, 71(6), 445–454. https://doi.org/10.1037/h0027348

  • McNeish, D. (2018). Thanks coefficient alpha, we’ll take it from here. Psychological Methods, 23(3), 412.

  • Mohammadi, H. (2015). Investigating users’ perspectives on e-learning: An integration of TAM and IS success model. Computers in Human Behavior, 45, 359–374. https://doi.org/10.1016/j.chb.2014.07.044

  • Moreno, V., Cavazotte, F., & Alves, I. (2016). Explaining university students’ effective use of e-learning platforms. British Journal of Educational Technology, 48(4), 995–1009. https://doi.org/10.1111/bjet.12469

  • Neugebauer, J., Ray, D. G., & Sassenberg, K. (2016). When being worse helps: The influence of upward social comparisons and knowledge awareness on learner engagement and learning in peer-to-peer knowledge exchange. Learning and Instruction, 44, 41–52. https://doi.org/10.1016/j.learninstruc.2016.02.007

  • Peng, M. W., Sun, S. L., Pinkham, B., & Chen, H. (2009). The institution-based view as a third leg for a strategy tripod. Academy of Management Perspectives, 23(3), 63–81. https://doi.org/10.5465/AMP.2009.43479264

  • Presser, S., & Blair, J. (1994). Survey pretesting: Do different methods produce different results? Sociological Methodology, 73–104.

  • Presser, S., Couper, M. P., Lessler, J. T., Martin, E., Martin, J., Rothgeb, J. M., & Singer, E. (2004). Methods for testing and evaluating survey questions. Public Opinion Quarterly, 68(1), 109–130.

  • Reimers, G., & Neovesky, A. (2015). Student focused dashboards: An analysis of current student dashboards and what students really want. In CSEDU 2015 - 7th international conference on computer supported education, proceedings, 1 (pp. 399–404). https://doi.org/10.5220/0005475103990404

  • Reynolds, N., Diamantopoulos, A., & Schlegelmilch, B. (1993). Pre-testing in questionnaire design: A review of the literature and suggestions for further research. International Journal of Market Research, 35(2), 1–11. https://doi.org/10.1177/147078539303500202

  • Richardson, H. A., Simmering, M. J., & Sturman, M. C. (2009). A tale of three perspectives: Examining post hoc statistical techniques for detection and correction of common method variance. Organizational Research Methods, 12(4), 762–800.

  • Roca, J. C. (2008). Understanding e-learning continuance intention in the workplace: A self-determination theory perspective. Computers in Human Behavior, 24(4).

  • Roca, J. C., Chiu, C. M., & Martínez, F. J. (2006). Understanding e-learning continuance intention: An extension of the technology acceptance model. International Journal of Human Computer Studies, 64(8), 683–696. https://doi.org/10.1016/j.ijhcs.2006.01.003

  • Roemer, E., Schuberth, F., & Henseler, J. (2021). HTMT2–an improved criterion for assessing discriminant validity in structural equation modeling. Industrial Management and Data Systems.

  • Rönkkö, M., & Cho, E. (2022). An updated guideline for assessing discriminant validity. Organizational Research Methods, 25(1), 6–14.

  • Rouis, S., Limayem, M., & Salehi-Sangari, E. (2011). Impact of Facebook usage on students’ academic achievement: Role of self-regulation and trust. Electronic Journal of Research in Educational Psychology, 9(3), 961–994. https://doi.org/10.25115/ejrep.v9i25.1465

  • Schumacher, C., & Ifenthaler, D. (2018). Features students really expect from learning analytics. Computers in Human Behavior, 78, 397–407.

  • Schunk, D. H., & Ertmer, P. A. (2000). Self-regulation and academic learning. In Handbook of self-regulation (pp. 631–649). Elsevier. https://doi.org/10.1016/b978-012109890-2/50048-2

  • Sun, H. (2013). A longitudinal study of herd behavior in the adoption and continued use of technology. MIS Quarterly: Management Information Systems, 37(4), 1013–1041. https://doi.org/10.25300/MISQ/2013/37.4.02

  • Sun, J., & Rueda, R. (2012). Situational interest, computer self-efficacy and self-regulation: Their impact on student engagement in distance education. British Journal of Educational Technology, 43(2), 191–204. https://doi.org/10.1111/j.1467-8535.2010.01157.x

  • Tavakol, M., & Dennick, R. (2011). Making sense of Cronbach’s alpha. International Journal of Medical Education, 2, 53.

  • Tawafak, R. M., Romli, A. B., & Arshah, R. B. A. (2018). Continued intention to use UCOM: Four factors for integrating with a technology acceptance model to moderate the satisfaction of learning. IEEE Access, 6, 66481–66498. https://doi.org/10.1109/ACCESS.2018.2877760

  • Toohey, D., Mcgill, T., Berkelaar, C., Kadekodi, A., Kaminska, D., Lianto, M., & Power, N. (2019). Do students really want to know? Investigating the relationship between learning analytics dashboards and student motivation. In Proceedings of the 2019 InSITE conference (pp. 321–332). Informing Science Institute. https://doi.org/10.28945/4352

  • Urbach, N., & Ahlemann, F. (2010). Structural equation modeling in information systems research using partial least squares. Journal of Information Technology Theory and Application (JITTA), 11(2), 2.

  • Van Teijlingen, E. R., Rennie, A. M., Hundley, V., & Graham, W. (2001). The importance of conducting and reporting pilot studies: the example of the Scottish births survey. Journal of Advanced Nursing, 34(3), 289–295.

  • Venkatesh, V. (2000). Determinants of perceived ease of use: Integrating control, intrinsic motivation, and emotion into the technology acceptance model. Information Systems Research, 11(4), 342–365.

  • Wang, W., Guo, L., & Sun, R. (2019). Rational herd behavior in online learning: Insights from MOOC. Computers in Human Behavior, 92, 660–669. https://doi.org/10.1016/j.chb.2017.10.009

  • Wu, J. H., Tennyson, R. D., & Hsia, T. L. (2010). A study of student satisfaction in a blended e-learning system environment. Computers and Education, 55(1), 155–164. https://doi.org/10.1016/j.compedu.2009.12.012

  • Yammarino, F. J., & Atwater, L. E. (1993). Understanding self-perception accuracy: Implications for human resource management. Human Resource Management, 32(2–3), 231–247. https://doi.org/10.1002/hrm.3930320204

  • Zhao, X., Tian, J., & Xue, L. (2020). Herding and software adoption: A re-examination based on post-adoption software discontinuance. Journal of Management Information Systems, 37(2), 484–509. https://doi.org/10.1080/07421222.2020.1759941

  • Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory Into Practice, 41(2), 64–70. https://doi.org/10.1207/s15430421tip4102_2

Author information

Corresponding author

Correspondence to Dhuha Al-Shaikhli.

Ethics declarations

Conflict of interest

None.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Al-Shaikhli, D. The effect of the tracking technology on students’ perceptions of their continuing intention to use a learning management system. Educ Inf Technol (2022). https://doi.org/10.1007/s10639-022-11156-8

Keywords

  • Learning management system
  • Self-regulation
  • Cognitive absorption
  • Tracking technology
  • Herd behaviour