1 Introduction and Background

The acquisition of cross-curricular complex skills (such as collaboration, presentation and information skills) is important for students in secondary education, as these skills are often required in their future professional life. However, a closer look at daily practice in secondary schools shows that they struggle with how to organize the acquisition, guidance, supervision and (formative and summative) assessment of these skills. Both schools and teachers recognize the importance of teaching cross-curricular complex skills; nevertheless, these skills are only practiced occasionally (Rusman et al., 2014; Thijs et al., 2014). The extent to which this happens also largely depends on the efforts of individual teachers. Moreover, the SLO (Dutch national institute for curriculum planning and development) points out that the training of skills is often not organized in a methodical, structured, goal-oriented and substantiated manner (Thijs et al., 2014, p. 103). When schools support the acquisition of cross-curricular complex skills, it is often done through project-based education, using textual rubrics occasionally and incidentally and in a time- and paper-consuming manner. Furthermore, to streamline the acquisition of these skills as well as supervision and guidance during practice, students and teachers need to develop a concrete and consistent mental model of skills. If students know what skill performance level they are working towards (feed-up) and where they stand with regard to this level (feedback), they can better regulate their practice (feed-forward) to achieve these objectives (Hattie & Timperley, 2007). An analytic assessment rubric describes a skill’s mastery levels, usually textually, through a set of quality criteria and descriptions for the constituent subskills of a specific skill (Andrade & Du, 2005). Thus, a rubric can become a ‘mirror’ for determining one’s skill performance level. However, we expected that textual rubrics could still be improved, as many aspects of desired behavior can hardly be captured in words. Therefore, we designed and developed a technology-enhanced and structured formative assessment method, called Viewbrics. Within the Viewbrics method, we alternatively proposed video-enhanced rubrics as a way to counterbalance the disadvantages of textual rubrics.

We were interested in whether such a technology-enhanced structured formative assessment method, with either video-enhanced or textual analytic rubrics, could offer a more efficient and effective solution to teach, practice, achieve and formatively evaluate cross-curricular complex skills. Thus, the design-based research project Viewbrics was conceived (Rusman et al., 2019). This chapter describes, from both a theoretical and a practical perspective, the design and development as well as the characteristics of the Viewbrics technology-enhanced formative assessment method. Furthermore, it reports the overall results of two alternative pilot implementations of the Viewbrics method (with video-enhanced or textual rubrics) in two secondary schools, looking at students’ mental models, feedback quality and skill performance levels regarding complex skills.

1.1 The Acquisition of Complex Skills, Formative Assessment and (Video-Enhanced) Rubrics

Complex skills consist of constituent subskills whose coordinated execution requires high cognitive effort and concentration (Kirschner & Merriënboer, 2008; Van Merriënboer & Kirschner, 2017; Voogt & Pareja-Roblin, 2012). Complex generic (also ‘transversal’ or ‘twenty-first century’) skills are not specific to a domain, occupation or type of task, but important for all kinds of work, education and life in general. These skills are applicable in a broad range of situations and many subject domains (Bowman, 2010). Mastering a complex skill requires frequent, prolonged and repetitive practice, but also (timely) feedback on performance. Modelling examples and variability in application contexts also influence skill acquisition (Kirschner & Merriënboer, 2008; Van Merriënboer & Kirschner, 2017). One of the instruments that can be used to support skill acquisition through structured feedback and reflection during practice is the rubric. Rubrics define the features of work that are considered quality and can be either holistic or analytic; they provide a mechanism for judging the quality of a student’s performance on a task (Arter & Chappuis, 2006; Sluijsmans et al., 2013). Analytic assessment rubrics describe, in text, a skill, its constituent subskills and a set of quality criteria (performance indicators) for the various mastery levels of a sub-skill (Andrade & Du, 2005). Performance indicators specify aspects of variation in the complexity of a skill, its constituent sub-skills and related performance levels (Rusman & Dirkx, 2017); for example, the mastery of a skill (as displayed, visible behaviour) ranging from that of a novice to that of an expert. When students acquire insight into their performance compared to the targeted mastery level of a complex skill, they can better monitor their own learning activities and communicate with teachers (Panadero & Jonsson, 2013; Schildkamp et al., 2014). Thus, rubrics provide opportunities to jointly adjust the teaching–learning process through reflection. Furthermore, an analytic rubric provides the opportunity to structure teachers’ and peers’ timely and informative feedback, but also to clarify expectations about the strived-for mastery level(s) of a skill in advance (feed-up) to the learner. This helps learners at the start to envisage the targeted mastery level (Berry et al., 2007) and enables them to focus, while practicing, on the aspects of a skill that they have not yet mastered well.

However, many aspects of complex skill mastery refer to motoric activities, time-consecutive operations and processes that are hardly captured in text (e.g. body posture or use of voice during a presentation) (De Grez et al., 2013; O'Donovan et al., 2004). Furthermore, the context in which a skill is practiced and behavior enacted is important. Contextual conditions and characteristics imply and generate implicit knowledge (tacit knowledge, ‘knowing how’/‘knowing why’), which is interwoven with practical activities, operations and behavior in the physical world (Westera, 2011). Text supposedly also leaves more room for personal interpretation of performance indicators of a complex skill than video. Educational practice also showed that textual analytic rubrics did not clarify the desired mastery level of a skill sufficiently and concretely enough for pupils, as students often asked questions like “What should I exactly do?” and “What kind of things should I pay attention to?” (Rusman, 2015). Therefore, text-based analytic rubrics have only a restricted capacity to clarify the targeted mastery level of a skill and to assess shown behaviour, as they do not provide information on visible behavioral aspects of mastering a skill (Berry et al., 2007). This presumably leads to incomplete and inconsistent student mental models of the expected skill performance level.

However, these restrictions might be overcome with video-enhanced rubrics. A video-enhanced rubric (VER) is the synthesis of video modelling examples and a text-based analytic rubric in a digital formative assessment format (Ackermans et al., 2017, 2019b). Video-enhanced rubrics could foster observational learning from desired behavior of a role model in (good/bad) video modelling examples (De Grez et al., 2013; Rohbanfard & Proteau, 2013; Van Gog et al., 2014). They can also capture implicit contextual knowledge, as they show motoric, temporal and contextual information of a skill that cannot be expressed in words (Ackermans et al., 2017; Westera, 2011). Van Gog et al. (2014) found increased performance of task execution when a video-modelling example of an expert was shown, and De Grez et al. (2013, 2014) found comparable results for learning presentation skills. Moreover, when teacher trainees compare their own performance with video-modelling examples, they ‘overrate’ their own performance less during self-reflection than without these examples. Additionally, teacher trainees gained improved insight into their performance compared to the targeted mastery level of a complex skill (Baecher et al., 2013). Therefore, we alternatively proposed video-enhanced rubrics within the Viewbrics method as a way to counterbalance the disadvantages of textual rubrics.

1.2 Technology-Enhanced Formative Assessment: Process Support for Goal Setting, Practice, Feedback, Reflection and Self-regulation

Formative assessment or ‘assessment for learning’ aims to support teaching and learning processes by providing developmental feedback to learners (and their teachers) on their understanding or skills during a period of practice and instruction (Black & Wiliam, 1998). Formative assessment differs from summative assessment in that it is a continuous feedback process, in which information on learners’ performances is gathered continuously and mirrored against a set of predefined criteria or good practices. This information is used to shape improvements and promote an individual's learning, rather than serving as a final formal summary of learners’ achievements (Sluijsmans et al., 2013). Providing feedback during formative assessment is also one of the most effective ways to support learning processes (Hattie & Timperley, 2007). Feedback can be specified at different levels (e.g. looking at self-, task-, process- or self-regulation aspects) (Hattie & Timperley, 2007) and by means of different sources, such as self-, peer-, expert- or teacher feedback, or via ‘built-in’ feedback in (technology-enhanced) educational materials (Sluijsmans et al., 2013). The aim is to gather information about (the gap between) the current and desired personal performance goal or mastery level and about how this gap can be closed, for example by carrying out specific learning activities, altering behaviour or (adapted) instruction. To learn new skills, learners first need support to form a clear mental model of the strived-for performance objectives (feed-up). Second, they need concrete, supportive and timely information (Shute, 2008) on their performance in relation to these objectives, and instructions or guidelines on how further growth could be achieved by altering their thinking or behaviour (feedback). Finally, learners need to reflect on the feedback gained, so that they can specify new or adapted objectives and determine where their focus should be when practicing further (feed-forward) (Hattie & Timperley, 2007). The responsibility for learning is shared between learners and teachers, and eventually with their peers (McManus, 2008; Black & Wiliam, 2009). They determine (jointly) where a learner is going (goals), where (s)he is now (how am I going?) and how a learner can get where (s)he wants (where to go next?) (Hattie & Timperley, 2007), thus forming a natural self-regulative cycle with a forethought, performance and self-reflection phase (Zimmerman, 2008, p. 178). Peer assessment and feedback can play an important role in formative assessment, next to self- and expert assessment (Filius, 2019). Both receiving and providing peer feedback yield improved learning gains compared to teacher feedback alone, such as improved presentation skills, critical thinking, self-regulation and reflection skills (Boud, 2001; Vincent-Wayne & Bakewell, 1995). Students also self-report that they learn more from providing peer feedback than from receiving it (Filius, 2019). Additionally, providing peer feedback in written form both forces and facilitates students to analyze a performance and think critically about it, and to phrase and express their judgment in an understandable manner. With written peer feedback, students have extra time to think, reflect and formulate their feedback, compared to oral and (often) immediate feedback.
Peer feedback also offers a practical merit, in that it can facilitate students’ learning and development while reducing teachers’ time and effort (Candy et al., 1994; Filius, 2019). However, to increase the effectiveness of peer feedback, it is important to instruct students in advance on providing (high-quality) peer feedback (Nicol, 2010; Shute, 2008).

Furthermore, technology offers different affordances that potentially facilitate and enhance formative assessment and feedback processes (Norman, 2013; Rusman et al., 2013). It improves access to practice and assessment by different actors (e.g. peers, experts and teachers) anytime and anywhere, enabling learners to measure their understanding when and as often as they want and allowing them more control over their learning. Feedback times can be shortened, which can help to change misconceptions rapidly, and feedback may be given from different perspectives, within a group or adapted to a learner. Thus, technology can affect feedback quality. Technology can also track, trace, store, process and visualize learners’ results as well as their actions (Looney & Siemens, 2011), which makes them visible and available for various learning purposes, such as individual or group reflection or evaluating and visualizing learners’ progress and growth. Technology can also affect teacher efficiency, as teachers can be supported with various tools that help to reduce assessment time and material (e.g. save ‘piles of paper’ and related work), thus saving time and costs that can be spent otherwise. Additionally, as technology enables rapid updating and combination of (recent) material and the display of various formats (e.g. video, audio, annotation), it can also contribute to more varied and authentic assessment designs (Rusman et al., 2013).

1.3 The Objectives and Outline of the Viewbrics Project

In the Viewbrics project we designed a technology-enhanced formative assessment method with (video-enhanced or text-based) analytic rubrics, to provide both teachers and learners with structured, feasible and convenient process support to formatively assess skills, provide high-quality feedback during practice and monitor students’ skill performance growth. We aimed to fulfill the need for practical, implementable educational models, methods, assessment indicators and instruments, ICT tools and guidelines to support the acquisition of complex (twenty-first century) skills. We also aimed to make the process of implementing learning activities and assessment practices for skill acquisition more straightforward (Rusman et al., 2014; Thijs et al., 2014). A valid, standardized, cyclic and repeatable technology-enhanced assessment process, in which (video-enhanced) rubrics are ‘set’ instruments for providing structured, timely, specific and relevant feedback, was desirable from that (practical and straightforward) stance. This could also help to move beyond the use of analytic rubrics for summative assessment purposes only and to embed formative assessment more regularly in daily educational practice. Additionally, we wanted to introduce a way to make behavior resembling the various mastery levels of a skill more visible, as well as to structurally support teachers and pupils in the process of providing and using feedback while practicing skills, for which we designed and developed (video-enhanced) rubrics.

Furthermore, we aimed to study the effects of structured technology-enhanced process support for formative assessment, peer feedback and the use of (video-enhanced) rubrics on skill acquisition. More specifically, we studied whether technology-enhanced formative assessment process support, peer feedback and video-enhanced rubrics resulted in a more complex (‘richer’) mental model of a complex skill, improved feedback quality and/or quantity, and significant growth in learners’ skill performance.

In this practice- and design-based research project (Rusman et al., 2019), an interdisciplinary project team collaborated intensively with various stakeholders (teachers, students, school boards, researchers and experts in education, ICT and interface design) to develop and investigate the Viewbrics method and the accompanying digital tool. This was done for three complex (twenty-first century) skills, namely presentation, collaboration and information literacy skills. The project had two phases:

  1. a cyclical design-oriented phase for the development of the (technology-enhanced) formative assessment method, the textual rubrics and the video-enhanced rubrics with stakeholders;

  2. a (practice-based) research phase into the effects of implementing the method.

In the first phase, stakeholders met in a core development team, in order to develop the Viewbrics method and the (video-enhanced, VER) rubrics from both a theoretical and a practical perspective. The core team met once every two weeks. Theory-informed proposals and prototypes for the method and the video-enhanced rubrics (Ackermans et al., 2017, 2019; Mertler, 2001; Van Strien & Joosten-ten Brinke, 2016) were developed, discussed and adapted in line with the feedback of stakeholders: students, teachers and experts. Questions such as “How many performance level descriptions will we use in the rubric?”, “What are the (dis)advantages of starting with the highest or lowest performance level descriptions on the left side of the rubric?”, “How can we foster a growth perspective of students on their skills development?”, “What steps should the formative assessment method consist of, and what/where could be the added value of technology?”, “Which constituent subskills should be described within the rubrics?” and “What behavior can we show in the video modeling example, and how should it connect and relate to the performance level description of a subskill?” were discussed, from both a theoretical stance (based on scientific literature) and a practical one, and jointly decided upon. This resulted in a prototype of the technology-enhanced formative assessment process; three analytic text-based rubrics for presentation, collaboration (see Fig. 10.1) and information literacy skills; and the design and development of video-enhanced rubrics, in which video modelling examples were combined with textual rubrics in a digital formative assessment format (Ackermans et al., 2017, 2019b).

Fig. 10.1
Example of skill decomposition and developed rubric for the ‘collaborating’ skill (skill classification chart on the left; rubric table with four performance levels per subskill, each marked with an emoji, on the right; original in Dutch)

Once a first working technology-enhanced version of the Viewbrics method was ready, it was evaluated on its usability and usefulness with students and teachers in two secondary schools (Rusman et al., 2018) and further adapted, developed and evaluated, until stakeholders were satisfied with the Viewbrics method. In the second phase, the effect of using the Viewbrics technology-enhanced formative assessment method with video-enhanced rubrics and textual rubrics was investigated at two secondary, pre-university education schools for 24 weeks and compared with existing educational practice for skills acquisition (as a control group). This research took place within project-based education, with secondary school students and teachers in six classes (two classes with video-enhanced rubrics, two classes with textual rubrics and two classes as a control group).

We expected that video-enhanced rubrics and textual rubrics within the technology-enhanced formative assessment method, compared to the current educational practice, could lead to richer mental models and improved feedback quality for both students and teachers. As a result, we ultimately expected an increased mastery of skills by students. Additionally, we expected that video-enhanced rubrics compared to textual rubrics, used within the same technology-enhanced formative assessment method, would lead to richer mental models, improved feedback quality, and improved skill performance of students. This led to the following twofold research question, which was investigated for three cross-curricular complex skills (presentation, collaboration and information literacy skills):

  1. Do rubrics, applied within a (technology-enhanced) formative assessment method, improve (i) the mental model of, (ii) the feedback on, and (iii) the performance of a (cross-curricular) complex skill among secondary school pupils, when compared to existing educational practice?

  2. Do video-enhanced rubrics, applied within a (technology-enhanced) formative assessment method, improve (i) the mental model of, (ii) the feedback on, and (iii) the performance of a (cross-curricular) complex skill among pupils in secondary education, when compared to textual rubrics?

1.4 The Designed Intervention: The Viewbrics Technology-Enhanced Formative Assessment Method

In this section the Viewbrics technology-enhanced formative assessment method is described from the student-learner perspective. The overall formative assessment process supported by the Viewbrics method is visualized in Fig. 10.2 and consists of five main steps, which are described below and illustrated with the main interfaces.

Fig. 10.2
Formative assessment process in the Viewbrics method from a student perspective (flow diagram of five steps: watching (video-enhanced) rubrics, practicing the skill, assessing one’s own performance, reviewing and analyzing feedback of teachers and peers, and determining the next learning objectives)

Step 1—Watch (video-enhanced) rubrics: Students look either at video-enhanced rubrics (VER) with video-modelling examples and information processing support (by means of a questioning mechanism (Ackermans et al., 2017, 2019b)) or at text-based analytic rubrics in the digital tool, in order to form a mental model of a complex generic skill and the strived-for mastery level. This is done to facilitate learners’ mental model creation and goal-setting. In the VER implementation of the Viewbrics method, learners first watch the complete video-modelling example (holistically); they then process the example by means of information processing questions, a modelling example of the highest mastery level of each constituent subskill, and color codes that allow them to link scenes in the video to the related constituent sub-skill in the rubric (Ackermans, 2019; Ackermans et al., 2017, 2019b; Rusman et al., 2019, p. 20); next, they watch fragments of the video-modelling examples associated with, and starting from, a subskill (Fig. 10.3; a minimal data sketch of such fragment indexing follows the figure); finally, they review the complete video. In the text-based rubric setting, students click through the skill hierarchy and constituent subskills, and can read through the performance level descriptions related to each subskill.

Fig. 10.3
Reviewing video fragments of modelling examples by sub-skill in the rubric (screenshot of the dashboard with a paused video of a woman speaking; interface in Dutch; callouts read ‘reviewing video fragments organized by sub-skill’ and ‘watch the complete video without questions’)
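To make the coupling between video scenes and rubric rows concrete, the sketch below shows one possible way to index video-modelling fragments by constituent sub-skill. It is a minimal illustration under assumed names (VideoFragment, fragments_for, and the example sub-skills and color codes are all hypothetical), not the actual Viewbrics data model.

```python
from dataclasses import dataclass

@dataclass
class VideoFragment:
    subskill: str     # constituent sub-skill shown in this scene (hypothetical)
    start_s: float    # fragment start, in seconds into the video
    end_s: float      # fragment end
    color_code: str   # color linking the scene to the rubric row

# Hypothetical fragment index for one modelling example.
FRAGMENTS = [
    VideoFragment("use of voice", 12.0, 47.5, "#e63946"),
    VideoFragment("body posture", 47.5, 93.0, "#457b9d"),
]

def fragments_for(subskill: str) -> list:
    """Return the fragments associated with, and starting from, a sub-skill."""
    return [f for f in FRAGMENTS if f.subskill == subskill]

print(fragments_for("use of voice"))
```

Such an index would let a learner jump from a sub-skill row in the rubric straight to the matching scene, which is the linking behavior the color codes support.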

Step 2—‘Practicing a skill’: Students go ‘into the real world’ to practice a skill in the educational scenario their teacher provided, equipped with the impression of skilled behaviour they formed by studying the (video-enhanced) rubric. In the Viewbrics project this was done in the context of project-based education. Peers and the teacher provide feedback on the ‘live’ performance of a student in class through digital devices (e.g. tablet, laptop); however, students only receive an overview of this feedback after completing a self-assessment of their own performance. Additionally, students themselves provide peer feedback on the performances of their classmates, in addition to the feedback given by the teacher (Rusman et al., 2019, p. 21).

Step 3—‘Self-assessment’: Based on their own experience with practicing the skill, their perception of their own performance and the built-in support in the Viewbrics method [(video-enhanced) rubrics, analysis/comparison of performance through peer assessment and technology-enhanced process support], students self-assess their performance by means of the rubrics in the digital tool (Rusman et al., 2019, pp. 22–23). The self-assessment is designed comparably to the peer-assessment process; only the person and performance setting vary. Rubrics are organized in skill clusters and sub-skills (Fig. 10.4), and each sub-skill is described in a rubric with four performance level descriptors (Fig. 10.5); a minimal sketch of this structure follows the figures. Only after completing the self-assessment can students look at the 360-degree feedback of peers and the teacher (who assess students’ performances while practicing, by scoring the rubrics on a digital device and providing additional tips and tops per skill cluster). This 360-degree feedback consists of a visualization and a summary of all tips and tops given by peers and teachers.

Fig. 10.4
Self-assessment by means of reflection on subskills within a skill cluster (screenshot in Dutch; callouts read ‘skill-cluster’ and ‘sub-skill’)

Fig. 10.5
Scoring a rubric with four mastery level descriptions per sub-skill (four screenshots of Dutch descriptor texts, each accompanied by a different emoji)
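The rubric organization just described (skill clusters containing sub-skills, each with exactly four performance level descriptors) can be summarized in a small data model. The sketch below is an assumed structure for illustration only; the class names and the example descriptors for the ‘collaborating’ skill are invented, not taken from the validated Viewbrics rubrics.

```python
from dataclasses import dataclass, field

@dataclass
class SubSkill:
    name: str
    level_descriptors: list  # exactly four descriptions, lowest to highest

    def __post_init__(self):
        # The Viewbrics rubrics use four mastery levels per sub-skill.
        assert len(self.level_descriptors) == 4, "four mastery levels expected"

@dataclass
class SkillCluster:
    name: str
    subskills: list = field(default_factory=list)

# Hypothetical excerpt for the 'collaborating' skill (descriptors invented).
collaborating = SkillCluster("collaborating", [
    SubSkill("making agreements", [
        "needs prompting to make agreements",
        "makes agreements when asked to",
        "makes agreements on own initiative",
        "makes and monitors agreements for the whole group",
    ]),
])
```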

Step 4—‘Review and analysis of feedback’: The feedback provided by peers and the teacher is visualized in a ‘skill performance wheel’ representing the student’s performance scores on the subskills of a complex skill (Fig. 10.6) (Rusman et al., 2019, pp. 23–24). Each ‘spoke’ of the wheel represents a constituent subskill of the complex skill, and each ‘level’ on a spoke aligns with a rubric performance level description of that subskill. This visualization allows students (and teachers) to see at a glance which subskills they may still improve and which they performed well on, in order to direct their further practice. Performance growth or shrinkage between assessment moments over time is visualized through performance level color highlights (red for a reduction in performance, green for growth, blue for stable performance) (Fig. 10.7; an illustrative sketch of such a wheel follows the figure), and the top three skills that went either well or less well during practice are presented below the wheel. Additionally, all provided textual tips and tops are summarized in a feedback report. Students analyze this information and determine what went well and which subskills may still need improvement.

Fig. 10.6
Skill performance feedback wheel representing a student’s performance scores (screenshot in Dutch)

Fig. 10.7
Complex skill growth visualization on the dashboard (photograph of a student using a digital tablet)
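As an illustration of the wheel described in step 4, the sketch below draws one spoke per sub-skill with four levels and applies the red/green/blue highlight for shrinkage, growth and stability between two assessment moments. The sub-skill names and scores are invented; this is not the Viewbrics implementation, merely a minimal rendering of the same idea.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical sub-skills and rubric scores (levels 1-4) at two moments.
subskills = ["listening", "making agreements", "taking initiative", "helping"]
m1 = np.array([2, 3, 1, 2])  # scores at assessment moment M1
m2 = np.array([3, 3, 1, 1])  # scores at assessment moment M2

def growth_color(before: int, after: int) -> str:
    """Green for growth, red for shrinkage, blue for stable performance."""
    return "green" if after > before else "red" if after < before else "blue"

angles = np.linspace(0, 2 * np.pi, len(subskills), endpoint=False)
ax = plt.subplot(polar=True)
# Close the score polygon by repeating the first point.
ax.plot(np.append(angles, angles[0]), np.append(m2, m2[0]), lw=2)
# Highlight each spoke's current level with its growth color.
for ang, before, after in zip(angles, m1, m2):
    ax.plot(ang, after, "o", color=growth_color(before, after), markersize=10)
ax.set_xticks(angles)
ax.set_xticklabels(subskills)
ax.set_ylim(0, 4)
plt.show()
```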

Step 5—‘Determine (next) learning objectives’: Students describe their learning objectives in the digital tool, based on their analysis of self-, peer- and teacher feedback in both the skill performance wheel and the tip/top summary report, to determine what to focus on during their next practice session (Fig. 10.8). This information becomes part of their formative assessment report of one specific assessment moment (M1) in time, to be used and referred to during future practice, and can be compared to a later practice session and performance.

Fig. 10.8
Description of skills’ learning objectives for the next skills practice session (screenshot of a webpage with a text box and tabs at the bottom; interface in Dutch)

2 Method

To determine the effect of using the Viewbrics technology-enhanced formative assessment method with video-enhanced or textual rubrics on the mental models, (perceived) feedback and skill performance of students, two secondary pre-university education schools used the method for 24 weeks (Ackermans, 2019; Rusman et al., 2019). The study took place within the context of project-based education, with students and teachers in six lower-secondary school classes (two classes using video-enhanced rubrics, two classes using textual rubrics and two classes as a control group), in order to compare the method with existing educational practice for skill acquisition. A mixed-method approach was chosen, combining quantitative and qualitative data from various research instruments, such as concept maps (as a representation of a mental model), rubric scores, written tips and tops, questionnaires and (focus group) interviews. A time-series approach (Field, 2009) to data collection was adopted to detect differences in the measurement of students’ mental models and skill performance. Data were analyzed by means of a test for the practical equivalence of the development models of the experimental and control groups (Ackermans, 2019; Ackermans et al., 2019a, 2019b; Kruschke, 2018; Rusman et al., 2019); a rough sketch of this kind of test follows.
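The analysis code itself is not part of this chapter; as a rough, hedged illustration of the practical-equivalence reasoning cited above (Kruschke, 2018), the sketch below checks whether the 95% highest-density interval (HDI) of a posterior difference lies inside, outside, or across a region of practical equivalence (ROPE). The ROPE bounds, the simulated posterior and the function name are assumptions for illustration, not the project’s actual analysis.

```python
import numpy as np

def rope_decision(posterior_diff, rope=(-0.1, 0.1), hdi_mass=0.95):
    """Kruschke-style decision: compare the HDI of a posterior difference
    between two groups against a region of practical equivalence (ROPE)."""
    samples = np.sort(np.asarray(posterior_diff))
    n = len(samples)
    k = int(np.ceil(hdi_mass * n))           # number of points inside the HDI
    # The HDI is the narrowest interval containing hdi_mass of the samples.
    widths = samples[k - 1:] - samples[:n - k + 1]
    lo = samples[int(np.argmin(widths))]
    hi = lo + widths.min()
    if hi < rope[0] or lo > rope[1]:
        return "difference is practically relevant (HDI outside ROPE)"
    if rope[0] <= lo and hi <= rope[1]:
        return "groups are practically equivalent (HDI inside ROPE)"
    return "undecided (HDI overlaps the ROPE boundary)"

# Hypothetical posterior samples of a growth-slope difference between an
# experimental and a control group.
rng = np.random.default_rng(42)
print(rope_decision(rng.normal(loc=0.02, scale=0.05, size=10_000)))
```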

2.1 Sample

This study was carried out in an ecologically valid manner and therefore used a convenience sample. Each participating school had one class using video-enhanced rubrics, one class using textual rubrics within the technology-enhanced formative assessment method, and one control group (skill acquisition education as usual). Participating students were between 12 and 14 years old. In total, 153 students and four teachers participated.

2.2 Instruments

The change in students’ mental models of the three cross-curricular complex skills was measured via a quantification of the ‘richness’ of concept maps. A concept or mind map is an external graphic representation of a mental model, derived from the learner’s self-generated concepts (Ackermans et al., 2019a; Dhindsa et al., 2011). A rich mental model is rich in concepts (a multitude of concepts), has a linear structure, and contains hierarchies and a multitude of complex relationships (Besterfield-Sacre et al., 2004; Buzan, 2003; Novak & Gowin, 1985). We used the number of concepts in the concept map as an indicator of the width of the mental model, determined the depth of the mental model by looking at the structure of the concepts and the number of hierarchies, and determined the strength of a mental model by counting the number of explained and unexplained relationships between concepts and different segments of the concept map (Ackermans et al., 2019a; Besterfield-Sacre et al., 2004). These indicators were part of the scoring instrument that we used for mental model richness (Evrekli et al., 2010; Van Beek-Sweep, 2018); a sketch of these indicators follows below. The quality of the feedback was determined with a self-developed instrument, which quantitatively analyzes the (overlap in) word use between the feedback given (in tips and tops) and the text of the rubrics (Ackermans et al., 2021b; Hirschberg & Manning, 2015). Additionally, interviews were carried out with students. The mastery of a skill was determined via an average rubric score (self-, peer- and expert assessment) of a student’s performance on that skill (Ackermans et al., 2021a).
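The sketch below is a minimal illustration of the two quantifications just described, assuming a concept map stored as a labeled directed graph: width as the number of concepts, depth as the longest hierarchy, strength as the count of explained (labeled) versus unexplained relationships, and feedback quality as word overlap with the rubric text. The function names and example data are hypothetical; this is not the validated scoring instrument.

```python
import networkx as nx

def concept_map_richness(cmap: nx.DiGraph) -> dict:
    """Width, depth and strength indicators for a concept map (a DAG)."""
    explained = sum(1 for _, _, d in cmap.edges(data=True) if d.get("label"))
    return {
        "width": cmap.number_of_nodes(),            # number of concepts
        "depth": nx.dag_longest_path_length(cmap),  # deepest hierarchy
        "strength_explained": explained,            # labeled (explained) links
        "strength_unexplained": cmap.number_of_edges() - explained,
    }

def feedback_rubric_overlap(feedback: str, rubric: str) -> float:
    """Share of word types in the feedback that also occur in the rubric."""
    fb, rb = set(feedback.lower().split()), set(rubric.lower().split())
    return len(fb & rb) / len(fb) if fb else 0.0

# Hypothetical concept map and feedback/rubric texts.
cmap = nx.DiGraph()
cmap.add_edge("collaboration", "making agreements", label="consists of")
cmap.add_edge("making agreements", "monitoring agreements")  # unexplained link
print(concept_map_richness(cmap))
print(feedback_rubric_overlap("speak clearly and calmly", "speaks clearly"))
```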

3 Results

The specific data and results were presented in the Dutch end report of the Viewbrics research project (Rusman et al., 2019) and in a PhD thesis (Ackermans, 2019). Here we summarize the overall research results. When using the technology-enhanced formative assessment method for the acquisition of cross-curricular complex skills by students in lower secondary education, we obtained the following results (Ackermans, 2019; Rusman et al., 2019):

  • Students in both experimental groups performed significantly better in the three cross-curricular complex skills compared to the control group. This effect of the structured Viewbrics technology-enhanced formative assessment method with peer feedback is therefore independent of the modality of the rubrics (textual or video-enhanced) (Ackermans et al., 2021a).

  • Students in the video-enhanced rubric settings developed a significantly richer mental model of collaboration and information skills compared to the control group (Ackermans et al., 2021b). This effect of the technology-enhanced formative assessment method is therefore dependent on the modality of the rubrics (video-enhanced). There was no significant difference in the mental model for presentation skills between the experimental and control groups. Possibly this is because students’ initial mastery level for this skill was already higher, leaving less room for “growth” in mental models.

  • Compared to textual rubrics, applied within the technology-enhanced formative assessment method, the video-enhanced rubrics did not lead to a significant improvement of mental models and performance of collaboration, information literacy and presentation skills (Ackermans et al., 2019a).

  • The video-enhanced rubrics, applied within the technology-enhanced formative assessment method, resulted in significantly higher feedback quantity of tips and tops, compared to textual rubrics (Ackermans et al., 2021b). However, feedback quality and consistency of the remarks within the tips and tops were not significantly improved.

4 Discussion

Based on various design principles, derived from educational theory on formative assessment, skill acquisition and (peer) feedback, we expected that the Viewbrics (technology-enhanced) formative assessment method would improve (i) the mental model of, (ii) the feedback on, and (iii) the performance of a (cross-curricular) complex skill among secondary school students when compared to existing educational practice. We also examined whether the format of the rubrics used within the method (video-enhanced or text-based) would affect learning outcomes and feedback. Looking at the effectiveness of the Viewbrics technology-enhanced formative assessment method, which combines self-, peer- and expert assessment with analytic rubrics for the acquisition of complex generic skills, this study yielded affirmative results. Based on previous studies on supporting formative assessment with written (self-, peer- and expert) feedback, we expected that the Viewbrics method would support students’ skill performance and growth, which it indeed did. This effect was independent of the rubric format. Furthermore, students in the video-enhanced rubric group developed richer mental models compared to existing educational practice; however, this effect was not significant compared to the use of the Viewbrics method with text-based rubrics. It seems that mainly the use of the Viewbrics technology-enhanced formative assessment method with (self-, peer- and expert) feedback by means of rubrics, independent of the format, supported students’ skill acquisition. Feedback quality and consistency were also independent of rubric format (video-enhanced or text-based), although feedback quantity increased in the video-enhanced setting.

This study has a number of limitations. First, we implemented the technology-enhanced formative assessment method at a limited number of secondary schools, with a limited number of students and teachers. This may have consequences for the applicability and generalization of the measured effects to other educational settings. Additionally, we had a limited time frame for implementation of the method (24 weeks, 16 effective lesson weeks). Perhaps if the method had been used for a longer period, with more (regular) practice moments in multiple classes, this study would have yielded different results. A final limitation is that the video modelling examples of the video-enhanced rubrics were developed only for the highest skill performance level. Perhaps several video modelling examples for different skill levels, or multiple examples for one performance level description, would have had a different effect. Furthermore, the development of video-enhanced rubrics is time- and cost-intensive, which has to be considered. However, looking at previous studies, one might expect that a video-enhanced rubric, combining video modelling examples with a text-based analytic rubric, can have added value for learning skills compared to a text-based rubric only (Rohbanfard & Proteau, 2013; Van Gog et al., 2014). Therefore, it is still worthwhile to explore the effects of alternative implementations on students’ complex skill acquisition in future research.

Although research is available on (technology-enhanced) formative assessment, the use of rubrics, modelling examples and the use of multimedia for learning respectively, research on the combination of these concepts to learn complex skills and on the design of specific process support is rare. This study contributed both through the design of video-enhanced rubrics and by exploring their effects. Moreover, Dutch secondary education is in the process of a transformation in which generic complex skills receive more emphasis and are integrated with learning and applying domain-specific knowledge. The Viewbrics technology-enhanced formative assessment method could be(come) one of the instruments providing teachers with structure to deal with this change in their daily educational practice.

4.1 Implications for Practice

In addition to scientific and practical knowledge, developed jointly with stakeholders, about the use of video-enhanced rubrics with video (modelling) examples within a technology-enhanced formative assessment method for the development of skills, this project yielded a technology-enhanced formative assessment method that has proven to be effective in educational practice in secondary schools, supported by the digital Viewbrics tool. This digital formative assessment tool, with standardized and structured 360-degree feedback and reflection process support, was evaluated by stakeholders as effective, usable and user-friendly. It can save time, as well as paper, when using rubrics in formative assessments. Moreover, ecologically validated (textual and video-enhanced) rubrics and video-modelling examples were developed for three skills (collaboration, presentation and information literacy), which are reusable by other secondary schools. Instruction and workshop material, manuals and various information videos were also developed.

5 Conclusion

Based on this study, we can conclude that the structured Viewbrics technology-enhanced formative assessment method, with (self-, peer- and expert) feedback supported via analytic rubrics, led to richer mental models and increased skill performance, independent of a video-enhanced or textual rubric format. Moreover, video-enhanced rubrics led to a greater feedback quantity (tips/tops); however, feedback quality (concreteness/consistency) was not improved. In this study, it seems that the technology-enhanced, structured ‘step-by-step’ process support for formative assessment and feedback with rubrics had the major impact on students’ skill acquisition, not the format of the rubrics. However, compared to the control group, video-enhanced rubrics did make a difference in mental model formation (richness of the model) for two skills, probably depending on the initial performance level before practice started.

Therefore, further research is needed to determine whether alternative formats would alter the effectiveness of video-enhanced rubrics within the technology-enhanced formative assessment method (e.g. with video-modelling examples available for more than one performance level description within a rubric, or alternative examples for each subskill), compared to textual rubrics. Moreover, further research is needed to determine whether and how this technology-enhanced formative assessment method impacts students’ skill acquisition at different educational levels, in different contexts and for various types of skills. Design-based research is needed to see whether theory- and practice-informed adaptations to the method are necessary, to make learning skills even more effective, efficient (e.g. regarding the impact on teachers’ guidance and support time) and attractive in various educational practices.