Verifying the Stability and Sensitivity of Learning Analytics Based Prediction Models: An Extended Case Study

Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 583)

Abstract

In this empirical contribution, a follow-up study of previous research [1], we focus on the issues of stability and sensitivity of Learning Analytics based prediction models. Do prediction models stay intact when the instructional context is repeated in a new cohort of students, and do prediction models indeed change when relevant aspects of the instructional context are adapted? Applying Buckingham Shum and Deakin Crick’s theoretical framework of dispositional learning analytics, combined with formative assessments and learning management systems, we compare two cohorts of a large module introducing mathematics and statistics. Both module runs were based on principles of blended learning, combining face-to-face Problem-Based Learning sessions with e-tutorials, and had similar instructional designs, except for an intervention in the design of the quizzes administered in the module. We analyse bivariate and multivariate relationships between module performance and track and disposition data to provide evidence of both the stability and the sensitivity of prediction models.

Keywords

Blended learning · Dispositional learning analytics · E-tutorials · Formative assessment · Learning dispositions

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Dirk T. Tempelaar (1)
  • Bart Rienties (2)
  • Bas Giesbers (3)

  1. School of Business and Economics, Maastricht University, Maastricht, The Netherlands
  2. Institute of Educational Technology, Open University UK, Milton Keynes, UK
  3. Rotterdam School of Management, Rotterdam, The Netherlands
