1 Introduction

Technology is almost everywhere. Even in education, an area known for its slow pace of change (UNESCO 2017), advanced technologies, such as artificial intelligence, machine learning, and mobile assessments, are affecting the ways we teach and learn in classrooms. Alongside the challenges that arise with the integration of technology in teaching and learning contexts (e.g., Scherer et al. 2019; Tang and Hew 2017), several benefits are attracting the attention of educators and policy-makers. These benefits include, but are not limited to, the provision of personalized learning models, the availability of data that describe the processes of knowledge and skill acquisition, and the accessibility of complex skills through highly interactive tasks and real-world simulations (Baker and Koedinger 2018; Quellmalz and Pellegrino 2009). This special issue draws attention to the potential and the challenges associated with advanced technologies in educational assessment.

2 Objectives and content

The aim of this special issue is to bring together current developments in advanced technology-based assessments, the underlying measures and psychometric models, and their applications to the assessment of educationally relevant constructs. Key research topics, gaps, and future developments in the context of technology-based assessment are highlighted as well.

The special issue comprises eight papers in which the authors focus on two key issues related to technology-based assessments: (1) the approaches to representing and analyzing process data and (2) the potential of technology for the assessment of specific constructs. Table 1 presents an overview of these papers, including some contextual information. Addressing (1), Deonovic et al. (2018) explore ways to connect Bayesian Knowledge Tracing, an algorithmic approach to modeling learners’ mastery of the knowledge presented or tutored in a technology-based environment, with existing item response theory (IRT) models to improve the assessment of learning and possible progressions of learners. The authors present a unified framework that integrates these two approaches and demonstrate its performance in a simulation study. Slater and Baker (2018) extend the existing literature on Bayesian Knowledge Tracing by examining the effect of different sample sizes on its degree of error in predicting learners’ mastery. From the perspective of item banking, Rights et al. (2018) draw attention to the uncertainty in scores obtained from IRT models. The authors propose an approach to handling this uncertainty through Bayesian model averaging. Taking an IRT approach, Frey et al. (2018) propose identifying possible mechanisms behind missing item responses with the help of response times. The authors illustrate their approach to missingness with a dataset of almost 800 students who took a computer-based test of digital skills and compare it to common approaches. Kroehne and Goldhammer (2018) present a framework that allows researchers to conceptualize, represent, and analyze log file data obtained from technology-based assessments. They further suggest using finite state machines for the analysis of log file data and illustrate this approach with response times from PISA (Programme for International Student Assessment) questionnaire data.
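Because two of these papers build on Bayesian Knowledge Tracing, a minimal sketch may help make the updating logic concrete. The Python snippet below, with function names and parameter values that are our own illustrative choices rather than anything taken from the papers, implements the standard two-step update: a Bayesian evidence update of the mastery estimate, followed by the learning transition.

```python
def bkt_update(p_mastery, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """One Bayesian Knowledge Tracing step: update the mastery estimate
    after observing one response, then apply the learning transition."""
    if correct:
        # Posterior probability of mastery given a correct response
        numerator = p_mastery * (1 - p_slip)
        denominator = p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess
    else:
        # Posterior probability of mastery given an incorrect response
        numerator = p_mastery * p_slip
        denominator = p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess)
    posterior = numerator / denominator
    # Transition: the learner may acquire the skill between opportunities
    return posterior + (1 - posterior) * p_learn

# Illustrative response sequence (1 = correct, 0 = incorrect),
# starting from an assumed prior probability of mastery p(L0) = 0.3
p = 0.3
for response in [0, 1, 1, 1]:
    p = bkt_update(p, response)
    print(f"P(mastery) = {p:.3f}")
```

The four parameters (prior mastery, guess, slip, and learning rate) are exactly the quantities whose estimation error under varying sample sizes Slater and Baker (2018) examine.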

Table 1 Overview of the papers presenting advances in technology-based educational assessment

The second part of the special issue comprises three papers addressing (2), the technological advances in the assessment of specific skills and engagement. Specifically, de Klerk et al. (2018) developed a media-based assessment of performance in a vocational domain. The simulations included in this assessment mimic real-life situations and allow researchers to trace test-takers’ behavior while interacting with the tasks. The authors gathered evidence for crafting a validity argument and present the results from multiple perspectives in the paper. Nguyen et al. (2018) show how learning analytics can aid the understanding of study break effects on students’ engagement and pass rates in virtual learning environments. In their study of more than 120,000 undergraduate students, the authors identified periods of procrastination with the help of log file data. Shi et al.’s (2018) paper shows the potential of a conversational intelligent tutoring system for the assessment of adults’ reading comprehension. The authors conclude by discussing the advantages of such systems over traditional assessments of adult literacy.
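As a rough illustration of how log file data can flag disengagement, the sketch below marks gaps between consecutive activity timestamps that exceed a threshold. The data format and the one-week threshold are assumptions made for illustration only; they do not reproduce the analysis of Nguyen et al. (2018).

```python
from datetime import datetime, timedelta

def find_inactivity_gaps(timestamps, threshold=timedelta(days=7)):
    """Return (start, end) pairs of consecutive log events whose
    spacing exceeds the threshold, as a crude disengagement signal."""
    events = sorted(timestamps)
    gaps = []
    for earlier, later in zip(events, events[1:]):
        if later - earlier > threshold:
            gaps.append((earlier, later))
    return gaps

# Illustrative log of one student's activity timestamps
log = [datetime(2018, 1, d) for d in (2, 3, 4, 25, 26)]
for start, end in find_inactivity_gaps(log):
    print(f"Inactive from {start:%Y-%m-%d} to {end:%Y-%m-%d}")
```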

3 Conclusion

We believe that the papers selected for this special issue illustrate the enormous potential of advanced technologies for the assessment of educationally relevant variables and constructs. Specifically, the opportunity to capture not only indicators of accuracy while learners work on certain tasks but also indicators of task behavior and knowledge mastery allows researchers to draw a more detailed picture of task performance with the help of technology. At the same time, the availability of new types of data (i.e., process data) is associated with several challenges that the authors bring to attention in this special issue: First, process data require transparent and replicable frameworks for conceptualizing, representing, and analyzing them. Second, psychometric models that describe task performance beyond the accuracy of item responses must be developed further so that process data (e.g., response times, sequences of actions, frequency of actions) can inform and improve the description of learners’ proficiency (see also von Davier 2017); one such joint model is sketched below. Third, because advanced technologies allow for interactive task designs (e.g., simulations, intelligent tutoring, adaptive testing), they may improve the assessment of traditional constructs (e.g., literacy and numeracy) and enable the assessment of novel constructs that require complex task designs (e.g., collaborative skills, adaptive problem solving). Together with the availability of process data, however, we believe that realizing this potential requires the careful crafting of a validity argument to facilitate the interpretation of the scores and indicators resulting from advanced technology-based assessments (see also Katz et al. 2017; Mislevy 2016). Overall, we encourage researchers in the field not to shy away from novel approaches to modeling and assessing complex constructs in education with the help of modern psychometrics and advanced technologies.
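To make the second challenge concrete, a common joint-modeling setup from the psychometric literature, sketched here as an illustration under standard distributional assumptions rather than as the approach of any paper in this issue, pairs an IRT model for response accuracy with a lognormal model for response times and links the two at the population level:

```latex
% Accuracy: two-parameter logistic model with discrimination a_i
% and difficulty b_i for person ability \theta_p
P(X_{pi} = 1 \mid \theta_p)
  = \frac{\exp\{a_i(\theta_p - b_i)\}}{1 + \exp\{a_i(\theta_p - b_i)\}}

% Speed: lognormal response-time model with time intensity \beta_i,
% time discrimination \alpha_i, and person speed \tau_p
\ln T_{pi} \sim \mathcal{N}\!\left(\beta_i - \tau_p,\; \alpha_i^{-2}\right)

% Link: ability and speed correlated at the population level
(\theta_p, \tau_p)^{\top} \sim \mathcal{N}_2(\boldsymbol{\mu}, \boldsymbol{\Sigma})
```

Estimating the covariance matrix $\boldsymbol{\Sigma}$ lets observed response times sharpen the estimate of $\theta_p$, which is one way process data can inform and improve the description of learners’ proficiency.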