There is a wide body of psychological literature linking people’s time perspectives with how they make everyday judgements, decisions, and actions (e.g. Carelli et al. 2011; Zimbardo and Boyd 1999). Similarly, in educational research there is substantial interest in how students strategically make study decisions, such as when and where to study (Gelan et al. 2018; Heileman et al. 2015; Kovanovic et al. 2015; Nguyen et al. 2017, 2018; Panadero et al. 2016; Winne 2017). The use of CBA data may provide deep insights into how students learn and solve complex assignments and tasks (Greiff et al. 2015; Tempelaar et al. 2015), and might eventually help researchers to develop a CBA-specific cognitive theory (Greiff et al. 2017; Kirschner et al. 2017). In a critical commentary on a recent special issue on CBA, Greiff et al. (2017, p. 718) argued that “there is a rather urgent need for an integrated and comprehensive theoretical foundation that drives the design and the setup of CBAs and provides guidance for the entire process of developing, employing, interpreting, and making use of computer-delivered assessment instruments”.
In this study, we argue that the impact of CBA designs on students’ learning processes may be better understood, and eventually theoretically grounded, by using learning analytics to link digital traces of learners’ actual interactions with CBA activities. As argued by Nguyen et al. (2017), learning analytics research has found that the way in which teachers design tasks and assessments can influence how students engage with CBA tasks and their academic performance at a micro-level (within one assessment or task: see, for example, Greiff et al. 2015) and a macro-level (across various assessments within or across modules: see, for example, Koedinger et al. 2013; Nguyen et al. 2017; Rienties and Toetenel 2016; Toetenel and Rienties 2016).
For example, Agudo-Peregrina et al. (2014) found that interactions with CBA tools, interactions with peers and teachers, and active participation were significant predictors of academic performance in six online and two blended modules. Similarly, in an introductory computer programming course, Brito and de Sá-Soares (2014) found that a high frequency of weekly CBA was one of the most effective ways of setting students on the route to success. In a flipped classroom of Business French, Gelan et al. (2018) applied process-mining techniques to the traces of 285 students and found that most students aligned their self-study sessions with the design of the course. In particular, most students tended to access the various reading materials in preparation for the exam, whereas failing students either started to work very late in the course or dropped out in the first two weeks (Gelan et al. 2018).
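To make this kind of analysis concrete, the following is a minimal sketch, not the actual method of Gelan et al. (2018), showing how raw VLE click data could be segmented into per-student self-study sessions and activity traces, the typical input to process-mining techniques. The file name, column names, and the 30-minute session gap are all illustrative assumptions.

```python
import pandas as pd

# Hypothetical VLE event log: one row per click, with columns
# student_id, activity, timestamp (names are assumptions).
log = pd.read_csv("vle_events.csv", parse_dates=["timestamp"])
log = log.sort_values(["student_id", "timestamp"])

# Start a new self-study session when more than 30 minutes elapse
# between a student's consecutive events (an illustrative threshold).
gap = log.groupby("student_id")["timestamp"].diff()
log["session"] = (gap > pd.Timedelta(minutes=30)).groupby(log["student_id"]).cumsum()

# One ordered activity trace per (student, session): the event
# sequences that process-mining algorithms analyse.
traces = (log.groupby(["student_id", "session"])["activity"]
             .apply(list)
             .reset_index(name="trace"))
print(traces.head())
```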
Weekly CBA may help to speed up the cycle of productive failure (Kapur 2008)—“fail fast to learn sooner” (Brito and de Sá-Soares 2014)—because it can provide automated feedback. Indeed, a recent fine-grained study by Tempelaar et al. (2017) found that students with effective metacognitive strategies used worked examples, which provided help on a particular CBA task, at the beginning of their learning process, while students with sub-optimal learning strategies tended to use these examples only at the end of their learning process. Similar findings were reported by Gelan et al. (2018) and Nguyen et al. (2018), whereby, beyond the intensity of engagement, the timing and types of engagement were the primary distinctions between “successful” and “less successful” students.
Online learning and self-regulated learning
In particular, when students are learning in online environments offering a range of learning activities and many choices about when and how to study, including CBA (Nguyen et al. 2017; Trevors et al. 2016), “appropriate” Self-Regulated Learning (SRL) strategies are needed to achieve individual learning goals. Zimmerman (2000) defined self-regulation as “self-generated thoughts, feelings and actions that are planned and cyclically adapted to the attainment of personal learning goals”. Indeed, a vast body of research has consistently found that self-regulation, directly and indirectly, impacts goal setting, motivation, engagement, and academic performance (Trevors et al. 2016; Winne 2017).
For example, in a study of 788 MOOC learners, Littlejohn et al. (2016) found that learners’ motivations and goals substantially influenced how they conceptualised the learning environment and how they engaged with the learning processes. Indeed, in a recent meta-review of 12 studies of SRL in online higher education, Broadbent and Poon (2015) found that metacognition, time management, effort regulation and critical thinking were significantly associated with academic achievement. However, the effect sizes were relatively small, with correlations ranging between 0.05 and 0.14, in particular in comparison to face-to-face settings (Broadbent and Poon 2015). In part, these small effects might be explained by the complex nature of online learning, and in part by the fact that most of the selected studies did not measure fine-grained log data of what students were actually doing.
Procrastination is an indicator of a poor self-regulating process. Academic procrastination can be viewed as leaving academic duties, such as preparing for exams and doing homework, to the last minute (Solomon and Rothblum 1984). A recent meta-analysis of the relationship between procrastination and academic performance reaffirmed most previous findings, namely a negative association between procrastination and academic performance (Kim and Seo 2015). While self-report questionnaires have been the predominant instrument in the literature (Kim and Seo 2015), recent research on procrastination in blended and online learning environments has started to capture behavioural engagement through log analysis to understand how and when students procrastinate (Cerezo et al. 2017; Goda et al. 2015). For example, a study of the learning behavioural types of 441 students in five e-learning courses suggested that 69.16% of the students fit a procrastination profile. Students exhibiting procrastinating behavioural patterns performed significantly worse than students with a ‘learning habit’ pattern or a ‘chevron’ pattern (Goda et al. 2015). Another study of 140 undergraduate students in a blended learning setting, which tracked and analysed students’ behaviour in an LMS, confirmed the negative effect of procrastination on academic performance (Cerezo et al. 2017).
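As an illustration of such log-based measures, the sketch below computes a simple behavioural procrastination indicator: how many days before each deadline a student first engages with the corresponding assignment materials. It is a minimal sketch in the spirit of these studies, not their actual method; the file names, column names, and the 2-day threshold are assumptions.

```python
import pandas as pd

# Hypothetical inputs: assignment-related VLE events and one deadline
# per assignment (all file and column names are assumptions).
events = pd.read_csv("assignment_events.csv", parse_dates=["timestamp"])
deadlines = pd.read_csv("deadlines.csv", parse_dates=["deadline"])

# First moment each student touched each assignment.
first_touch = (events.groupby(["student_id", "assignment_id"])["timestamp"]
                     .min()
                     .reset_index(name="first_touch"))

df = first_touch.merge(deadlines, on="assignment_id")

# Lead time in days; small values indicate last-minute work.
df["lead_days"] = (df["deadline"] - df["first_touch"]).dt.total_seconds() / 86400

# Flag students whose median lead time is under an illustrative 2 days.
profile = df.groupby("student_id")["lead_days"].median()
procrastinators = profile[profile < 2]
print(f"{len(procrastinators)} of {len(profile)} students flagged as procrastinating")
```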
Providing flexibility in online learning
Given that studying online and at a distance is perhaps especially hard in comparison to face-to-face education (Broadbent and Poon 2015; Nguyen et al. 2017; Rienties and Toetenel 2016; Simpson 2013; Toetenel and Rienties 2016), many distance learning providers purposefully design some flexibility into workloads and study breaks to accommodate adult learners, most of whom also have work, family and caring responsibilities. As highlighted in a recent report on designing effective online modules by van Ameijde et al. (2016), providing a consistent and balanced workload throughout an online module, with opportunities to take a “breather” or to catch up, may be essential for online learners. Indeed, fine-grained analyses of weekly workloads in six online CBA modules by Nguyen et al. (2017) indicated that many teachers consciously or subconsciously designed non-study weeks into the module schedule. At the same time, these analyses showed that when teachers designed high workloads for a particular week, most students tried to balance this workload by working more intensively before or after that week.
In follow-up work linking predictive analytics data from excellent (grade > 75%), pass (grade > 40%), and failing students with actual engagement data and the respective learning design, Nguyen et al. (2018) found that most excellent students studied well ahead of the respective study week. In addition, excellent students often revisited learning activities they had previously engaged with, while failing students in particular lagged behind the module schedule and were primarily in catch-up mode (Nguyen et al. 2018). A similar finding was noted by Gelan et al. (2018), who found that students who successfully passed Business French studied in line with the course schedule.
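The kind of schedule alignment behind these findings can be sketched as follows, assuming hypothetical tables of final grades and of the module week in which each student first accessed each scheduled week’s materials; only the grade bands (> 75% excellent, > 40% pass) come from the cited study.

```python
import pandas as pd

grades = pd.read_csv("grades.csv")        # columns: student_id, grade (assumed)
access = pd.read_csv("first_access.csv")  # columns: student_id, planned_week, accessed_week (assumed)

# Grade bands as used by Nguyen et al. (2018).
def band(grade):
    if grade > 75:
        return "excellent"
    if grade > 40:
        return "pass"
    return "fail"

grades["band"] = grades["grade"].apply(band)

# Positive lag = studying behind schedule ("catch-up mode");
# negative lag = studying ahead of the module schedule.
access["lag"] = access["accessed_week"] - access["planned_week"]

summary = (access.merge(grades[["student_id", "band"]], on="student_id")
                 .groupby("band")["lag"]
                 .mean())
print(summary)
```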
When modules do not specifically design in study breaks, one obvious assumption would be that the lack of opportunities to catch up might eventually “force” some groups of students to stop, as they have fallen too far behind. Therefore, in many modules at the OU teachers design specific study breaks, in which no study activities are planned (Cross et al. 2016; Nguyen et al. 2017). In part, these study breaks are a result of cultural festivities, such as Christmas and Easter, but in part they are also specifically introduced to help students catch up, or to allow students to take a breather before a new part of a module starts. Alternatively, one might hypothesise that for students who are progressing well and are “in the flow” of the module schedule, or for students who prefer strict deadlines with no opportunities to “relax a bit”, a study break in the module schedule might actually disrupt their flow. It would therefore be important to test whether such study breaks have a positive or negative effect on students’ engagement and academic performance over time.
In this study, we specifically distinguish study breaks from exam preparation: study breaks can occur at any point in time during the online module, while exam preparation is specifically linked to the final assignment at the end of the module and is obviously tied to a concrete learning activity (i.e. the exam). Indeed, Cross et al. (2016) argued that students’ behaviour during exam preparation and revision is distinct from their engagement at other times during the module. In a study investigating 281 students’ perceptions of assessment and, in particular, revision practices at the OU, Cross et al. (2016) found that most students self-reported spending 20–30 h revising for the final exam. Students indicated that they mostly benefited from using sample exams and answers, tutor support, and feedback from assignments. However, no correlations were found between (self-reported) time spent revising, design, and satisfaction and completion of online modules (Cross et al. 2016).
Research questions
Most of the studies above have conceptualised and tested different variations of CBA in a single module context. By aligning the designs of a range of modules using different types and combinations of CBA with fine-grained behavioural data, researchers may obtain valuable insights into how students “react” to the design of CBA in online distance learning settings. Given the complex and flexible nature of online learning, and the perhaps more demanding task facing adult learners of intertwining study with work and private life (Broadbent and Poon 2015; Simpson 2013), there is an urgent need to understand how online learners choose when and how to work with CBA.
Building on a large dataset of 205 modules at the OU whose CBA approaches have been extensively mapped, the first main aim of this study is to provide a macro perspective on the CBA approaches used at the OU. The second main aim is to conduct a fine-grained log-data study of one large online module that made extensive use of CBA, here labelled “Introduction to Business”, in which we critically examine how students make use of their preparation week before the final report. We will therefore address the following two research questions.
RQ1 How do study break weeks and exam preparation weeks influence the odds of passing a module?
RQ2 How do students engage in the VLE during exam revision weeks?
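RQ1 frames passing as a binary outcome, so it lends itself to, for example, a logistic regression of the odds of passing on indicators of engagement during study-break and exam-preparation weeks. A minimal sketch with statsmodels follows; the data file and variable names (passed, break_week_minutes, prep_week_minutes) are assumptions for illustration, not the model actually fitted in this study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-student data: passed (0/1) and minutes of VLE
# engagement during study-break and exam-preparation weeks.
df = pd.read_csv("students.csv")

# Logistic regression of the odds of passing on engagement measures.
model = smf.logit("passed ~ break_week_minutes + prep_week_minutes", data=df).fit()
print(model.summary())

# Exponentiated coefficients are odds ratios: the multiplicative change
# in the odds of passing per extra minute of engagement.
print(np.exp(model.params))
```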