
Educational Technology Research and Development, Volume 67, Issue 4, pp 983–1002

Solving instructional design dilemmas to develop a Video Enhanced Rubric with modeling examples to support mental model development of complex skills: the Viewbrics-project use case

  • Kevin Ackermans
  • Ellen Rusman
  • Saskia Brand-Gruwel
  • Marcus Specht
Open Access
Development Article

Abstract

For learners, it can be difficult to imagine how to perform a complex skill based solely on the textual information of a text-based analytic rubric. Rubrics lack (1) the contextual information needed to convey real-world attributes, (2) the dynamic information (such as gesturing in the complex skill of presenting), and (3) the procedural information required to support the automation of constituent skills. We propose to address the text-based rubric’s deficiencies by adding video-modeling examples, self-explanation prompts, an intertwined educational and instructional narrative, natural segmentation, and a non-verbal script. With the resulting Video Enhanced Rubric, we aim to improve the formative assessment of complex skills by fostering learners’ mental model development, feedback quality, and complex skill mastery. Designing multimedia to support the formative assessment of complex skills can cause dilemmas for instructional designers. For example, is learner control needed to foster intrinsic motivation, or does it create extraneous cognitive load? Is it wise to use a video-modeling example of peer-aged learners when the model does not display perfect performance? We found seven dilemmas around proven complex skill development, motivational design, and multimedia design guidelines. This paper presents a theoretical contribution to instructional design by introducing a framework to address such design dilemmas. As a practical contribution, we support educational researchers and practitioners by presenting six practical guidelines for designing a Video Enhanced Rubric. A use case of the Viewbrics-project provides insight into the practical application of the framework within the context of Dutch pre-university education.

Keywords

Video · Rubrics · Assessment (formative) · Complex skills · Mental models

Introduction

The Viewbrics-project is concerned with fostering the learner’s mental model of complex skills through the process of formative assessment (FA), expecting that a clear and detailed mental model of complex skills will improve their mastery. In the Viewbrics-project use case, learners in Dutch pre-university education aim to foster the complex skills of collaboration, presentation, and information literacy. FA is concerned with how assessment of the quality of student performance can be used to improve the student’s competence. FA can give the learner insight into the process of skill mastery (Sadler 1989). Feedback is a crucial part of FA and contains information about the gap between the perceived mastery level and a higher mastery level in a recurring feedback loop (Ramaprasad 1983). Thus, FA supports an iterative process of learner-regulated development (Black and Wiliam 2009; Panadero and Jonsson 2013; Van Aalst et al. 2011). Rubrics are being implemented as a tool to support learner-regulated development, as they provide a detailed account of the parts or ‘sub-skills’ that make up a complex skill, such as presentation, collaboration, and information literacy (Panadero et al. 2012). A rubric is an assessment tool for the qualitative rating of complex student performance (Jonsson and Svingby 2007). As opposed to the quantitative nature of summative scoring (i.e., providing the learner with a grade), a rubric aims to give the learner insight into the process of mastering a skill. The detailed, process-oriented quality of a rubric makes it a useful tool for formative assessment purposes: it provides the learner with structured and transparent feedback on the assessment criteria, and it offers a uniform assessment method that fosters insight into complex skill acquisition and fosters learning (Panadero and Romero 2014).

Rising in popularity since the 1980s, rubrics are widely used in primary, secondary (pre-university), and post-secondary education (Brookhart and Chen 2014). As the Dutch educational system moves towards competency-based education, the qualitative strengths of the rubric as a formative assessment tool have led to its growing implementation in the curricula of Dutch schools (Kerkhoffs et al. 2006). Although a rubric represents a feedback-rich, qualitative way to assess skills, we find three problems with relying on textual rubrics to support the development of a rich mental model of a complex skill. The development of rich mental models is essential for an in-depth understanding of a complex skill and relates to higher performance of the learner’s complex skills (Gary and Wood 2011, 2016).

First, rubrics provide a fragmentary textual framework, because a rubric describes a complex skill using a subdivided set of constituent (sub)skills that are identified by experts. This may result in insufficient attention to the necessary integration of constituent skills during task execution.

Second, a text-based rubric lacks the contextual information needed to convey the real-world attributes and natural context of a skill’s execution, as well as a representation of dynamic information (such as gesturing in the complex skill of presenting) (Matthews et al. 2010; Westera 2011).

Third, rubrics do not provide the procedural information needed to support the automation of constituent skills, risking the formation of an incomplete mental model.

Proposing a Video-Enhanced Rubric (VER) to support complex skill development

To overcome the three problems above, we recommend combining the positive qualities of rubrics with video-modeling examples to support complex skill development. Video inherently provides the contextual and dynamic information lacking in a textual rubric (Ackermans et al. 2017). For example, a rubric on information literacy may start with the first two subskills of confirming the given information literacy task and globally orienting on the subject, before moving on to the third subskill of further specifying the subject of the information literacy task. A video-modeling example shows the performance of these tasks at a mastery level, in the context of a classroom, providing dynamic information about the interaction between learners and their environment. The novel approach of this paper supports connecting the contextually rich subskills seen in the video-modeling example with the analytic description of subskills in a textual rubric. The affordances of video-modeling examples are shown to have a learning effect on the development of the complex skills of collaboration, presentation, and information literacy (De Grez et al. 2014; Frerejean et al. 2016; Kim and McDonough 2011). In conclusion, we expect the addition of video-modeling examples to a rubric to foster inter-task and sub-skill coordination and, most importantly, to help the learner better imagine how to perform a complex skill by forming a richer mental model.

We designed an early prototype Video Enhanced Rubric (VER) in the form of a digital multimedia application that delivers video-modeling examples combined with rubrics to the learner. A digital multimedia application can guide the learner to actively link the behavior of video-modeling examples in the VER (including all kinds of ‘visible’ information, such as time-related, non-verbal, and procedural information) to the descriptions of skill mastery levels in a rubric. An expert appraisal workshop on an early prototype VER with twenty international multimedia and instructional design experts stressed the importance of annotation in enabling the learner to personally consolidate the connection between the abstract information of a textual analytic rubric and the concrete information of a video-modeling example. This connection can be facilitated by implementing features such as notes, events, annotation, or a quiz (Ackermans et al. 2018). The importance of connecting rubric and video sparked the in-depth analysis of the design dilemmas, framework, and guidelines presented in the current paper.

Several existing theories, models, and methods provide guidelines that inform the design of a Video Enhanced Rubric to support mental model development of complex skills. Guidelines for designing multimedia learning content are found in the Cognitive Affective Theory of Learning with Multimedia (CATLM), which balances the cognitive load of presenting multiple representations (rubric and video) at once (Moreno 2005; Van Merriënboer and Kester 2014). CATLM states that emotion, motivation, and affect are essential for the learner to regulate the cognitive load of multiple representations (text and video). To further foster the learner’s motivation to regulate multimedia information, we use Keller’s (1987) Attention Relevance Confidence Satisfaction (ARCS) model. The ARCS model focuses on the function of media: grabbing the learner’s Attention, establishing the Relevance of the media for the learner, inspiring Confidence in the learner, and leaving the learner with Satisfaction after watching. The ARCS model is applicable independent of the type of media and implementable throughout the complete design process of a VER to support the regulation of multiple representations. Van Merriënboer and Kester (2014) explored, in theory, the implementation of Four-Component Instructional Design (4C/ID) to foster complex skills while adhering to Mayer’s (2014b) Cognitive Theory of Multimedia Learning (CTML). In this paper, we explore how to design a multimedia application to foster complex skills in educational practice.

Having described the background of the project and defined the ways in which we use CATLM, CTML, FA, 4C/ID, and ARCS to guide the design of a VER, we move on to the requirements set by the Viewbrics-project for the development of video-enhanced rubrics to foster the development of complex skills through a technology-enhanced formative assessment process.

Requirements for a Video-Enhanced Rubric to support mental model development of complex skills

In the Viewbrics-project, students practice and learn complex skills with the help of a VER. Students should be enabled to develop a mental model, as described above, to assess both their own performance and that of others. The VER should also support them towards skill mastery, based on the identified differences between their performance and the modeling example offered. The requirements for a successful design of a VER are (1) the combination of textual descriptions and video-modeling examples into an integrated format and (2) enabling the linking and combination of different subskills into a coherent complex skill.

For the first requirement, every constituent (sub-)skill of the complete skill must be visible in both the rubric and the video-modeling example. We find evidence for the importance of this first requirement in both multimedia and complex skill development theory. The pre-training principle of multimedia theory states that simple information can be used to prime the learner for complex learning (Mayer 2014b). Sequencing can also be of importance for the first requirement, as proposed by complex skill development models such as 4C/ID (Van Merriënboer and Kester 2014). The sequencing principle advises gradually increasing the complexity of a complex skill from simple to complex, while always familiarizing learners with the complete skill at a given complexity level. A complete image of a complex skill aids learners in recognizing and connecting the concrete information found in the video with the abstract information found in the textual analytic rubric, preparing them for the second requirement (Krauskopf et al. 2012).

For the second requirement, we aim to support learners in consolidating the connection between concrete and abstract information while limiting the risk of extraneous load (Gary and Wood 2011; Panadero and Romero 2014). Personally consolidated information fosters deeper learning through the robust ‘generation effect,’ which holds that generated material is better learned than content that is merely received (Bertsch et al. 2007). To foster this connection between abstract and concrete knowledge, we support learners in exploring the VER. Exploration of the VER is encouraged by the guided discovery principle, which states that the role of instruction is to provide a suitable environment for the learner to develop their understanding of the information (De Jong and Lazonder 2016; Moreno 2004).

Dilemmas in translating requirements to practical design guidelines in the Viewbrics-project

The requirements we elaborated upon in the last paragraph can be met by design guidelines that are provided by several theories and associated models. For instance, design guidelines for the development of complex skills can be distilled from the grounding theory behind Four-Component Instructional Design methodology, whereas design guidelines to balance the cognitive load of presenting multiple representations and implementations can be found in CATLM (Moreno 2005; Van Merriënboer and Kester 2014).

However, a problem arises when we examine more closely the design guidelines these theories and models propose, in order to formulate a set of design guidelines for the Viewbrics-project use case and the development of a VER. In general, motivational and complex skill developmental guidelines derived from 4C/ID, ARCS, and Cognitive Load Theory (CLT) contradict multimedia learning guidelines derived from the CTML. Exploring the theories used as a foundation for our design presented us with seven specific dilemmas regarding ‘contradicting’ design guidelines and decisions:
  1. In the area of situational interest, Dousay (2016) finds that multimedia training designed using the modality principle (it benefits learners to present words as speech rather than as on-screen text (Clark and Mayer 2012)) and the redundancy principle (learners have difficulty processing the same verbal message when simultaneously hearing and seeing it) has no effect on (triggered and maintained) situational interest; moreover, in that study written words were of greater cognitive benefit to learners than spoken words.
  2. The effect of information that is not an essential part of the curriculum (defined as non-essential information) on cognitive load is deemed extraneous by Mayer (2014b), and such information should be avoided. However, from the ARCS standpoint, it is beneficial to invest time in grabbing the attention of the learner and ensuring personal relevance to the subject (Keller 1987). From a 4C/ID standpoint, supportive information should be presented simultaneously with the task (Van Merriënboer and Sluijsmans 2009). From a CATLM standpoint, information that serves an emotional, motivational, or affective purpose is not deemed extraneous and supports the learner’s regulation of the multimedia learning process (Mayer 2014a).
  3. The effect of learner control on cognitive load is deemed extraneous by Mayer and Moreno (2003), because of the input needed from the learner to control multimedia. However, learner control is required to foster intrinsic motivation in a multimedia setting and is an essential part of the second-order scaffolding needed to foster complex skills in self-directed learning (Kuhl 2000; Van Merriënboer and Kester 2014).
  4. The selection of a peer as the learner’s modeling example is preferred under Mayer’s (2014b) personalization principle. However, findings have been mixed, as reported by Hoogerheide et al. (2016), who stress that the peer model’s mastery (a model who displays perfect performance) may be more critical than the peer being perceived as similar in age. Bandura (1986) and Schunk et al. (1987) state that learners mainly imitate peer models when they are high in expertise. The perception of mastery is thought to have a more significant learning effect both for skill development and for the intrinsic motivation needed to regulate cognitive load and foster positive multimedia learning (Leutner 2014; Martens 2007). When we can expect peers to perform the role of modeling example with a perception of mastery, this is not a dilemma. However, while casting the leading characters we discovered that 13-year-old peer-aged models (the age of learners in the first year of their secondary education) seemed unable to perform a complex skill such as collaboration, which requires experience to build a schema (and can be difficult to automate), with an acceptable level of (perceived) mastery (Kirschner and Merriënboer 2008). In collaboration with a focus group of teachers and students, we selected four 15-year-old professional actors who are both identifiable as peers and capable of performing complex skills with a perception of mastery.
  5. While Mayer’s (2014b) segmentation principle argues that pre-segmentation lowers cognitive load, Van Merriënboer and Kester (2014) argue that learner-segmentation offers more control over the pace of the instruction.
  6. Because a complex skill consists of many constituent sub-skills and requires an estimated five hundred hours to acquire, the 4C/ID model focuses on acquiring complex skills as a long-term, durable addition to the learner’s skillset (Janssen-Noordman and Van Merriënboer 2002). However, Schweppe et al. (2015) conducted a study into the stability of the multimedia principle, testing retention of information acquired through multimedia with a delay of only one and two weeks. Schweppe finds only a limited learning effect, leaving unanswered the question of whether the multimedia principle is durable, and therefore applicable to complex skill development. While not necessarily conflicting, this does call the applicability of the multimedia principle to complex skill development into question.
  7. While Mayer’s spatial split-attention principle argues that increasing a learner’s gaze shifts between mutually supportive textual and pictorial information reduces working memory load and improves memorization, CLT argues that increasing gaze shifts increases working memory load and reduces memorization (Mayer 2014b; Ouwehand et al. 2015).

The seven above-mentioned dilemmas regarding ‘contradicting’ design guidelines and decisions are addressed in the following section to arrive at a well-balanced design for a VER that fosters multimedia learning on the one hand and complex skill development on the other.

Addressing dilemmas regarding ‘contradicting’ design guidelines: a step-by-step framework to prioritize and select design guidelines

To illustrate how we address the dilemmas regarding ‘contradicting’ design guidelines, we use the Viewbrics-project as a use case. The Viewbrics-project use case implements a VER in Dutch schools for pre-university education to foster the complex skills of presentation, collaboration, and information literacy.

In general, we propose four steps to prioritize the methodological guidelines over the multimedia guidelines when fostering complex skill development using multimedia (Kozma 1994). In each of the four steps, we prioritize by adding items from top to bottom in order of importance.

In the Viewbrics-project use case, the VER supports a formative assessment method, providing the learner with in-process evaluations of their comprehension, learning needs, and skill acquisition.

First, the role(s) of the VER is identified. The VER is used to provide video and rubric information on which learners and peers can form a mental model and formatively assess complex skill mastery, and to gain insight into the constituent subskills of a complex skill (Fig. 1).
Fig. 1

Step 1, roles

In the Viewbrics-project use case, the VER has two roles due to the formative nature of the multimedia application in which it is implemented. The (1) orientation role carries the function to support the learner with forming a rudimentary mental model of the complex skill. As this role represents the start of the FA cycle and is the first screen the learner sees, we prioritize this role. In the (2) Preparation, Evaluation and Selection role, the function of the VER is to provide contextualization of received feedback and aid the learner in setting a goal for the next iterative FA cycle (self-directed goal selection).

Second, the function(s) of the VER in each of the roles is identified and placed in a top-to-bottom priority. In this case, motivation is a top priority in the role of orientation, and feedback is a top priority in the role of Preparation, Evaluation, and Selection. This prioritization will prove valuable in the following steps (Fig. 2).
Fig. 2

Step 2, functions

In the Viewbrics-project use case, the VER in the orientation role carries the function of motivating the learner to watch the VER in its entirety and providing him/her with the supportive information needed to form a rudimentary mental model of the complex skill. Motivating the learner to view the entire video is an essential function and is prioritized under role 1. In the Preparation, Evaluation and Selection role, the function of the VER is to contextualize the received feedback by offering the video-modeling example and rubric against which learners can compare their performance to the presented mastery levels. Feedback is an essential function and is prioritized under role 2. Other functions are motivating the learner to formulate a learning goal and allowing the learner to search the clusters of the rubric for his/her learning goal, for an example of how to perform better.

Third, the method(s) of the VER needed to implement the functions of the roles in practice are identified, keeping in mind the target audience’s attributes. This starts to narrow down our search for design guidelines to the theories and methodologies wherein they can be found. For instance (Fig. 3):
Fig. 3

Step 3, methods

In the Viewbrics-project use case, we chose to support motivation using ARCS for two reasons. First, ARCS has explicitly been shown to be effective in learning with multimedia (Dousay 2016). Second, the ARCS steps can be applied in the script-writing process of the video-modeling example. FA is used to support feedback, as the Viewbrics app follows the FA cycle. 4C/ID is used to derive complex skill development guidelines, as 4C/ID is expected to be a methodology known to the implementing schools.

Fourth, the guideline(s) of the VER that will foster the function of the VER can now be found in the chosen methods. The guidelines can be filled in under step 4, in order of importance according to the chosen roles, functions, and methods. It is also clear that the two roles have different functions, leading to different guidelines. For the Viewbrics-project, this resulted in two designs: the design of role 1 focused more on motivational aspects such as peer identification, an educational narrative, and group dynamics, while the design of role 2 focused more on providing feedback, fostering goal selection, and promoting growth in the mastery of a complex skill (Fig. 4).
Fig. 4

Step 4, guidelines

We explored the tensions among development guidelines and presented a framework to decrease these tensions. The decrease results from the identification of different roles and the prioritization of functions by ordering them from top to bottom. We expect this approach to decrease design tensions for researchers and practitioners who aim to foster complex skill development using multimedia, by providing a step-by-step framework to prioritize and select design guidelines.
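The four prioritization steps above (roles, then functions, then methods, then guidelines, each ordered top to bottom) can be sketched as a small, ordered data structure. The sketch below is a hypothetical illustration, not part of the Viewbrics software; the class names (`Role`, `Function`, `Method`) and the example entries are our own assumptions chosen to mirror the orientation role.

```python
# Illustrative sketch (assumption, not the Viewbrics implementation):
# the step-by-step framework as nested, ordered lists, where list order
# encodes priority from top to bottom.
from dataclasses import dataclass, field

@dataclass
class Method:
    name: str                                            # step 3, e.g., "ARCS"
    guidelines: list[str] = field(default_factory=list)  # step 4, ordered

@dataclass
class Function:
    name: str                                            # step 2, e.g., "motivation"
    methods: list[Method] = field(default_factory=list)  # step 3, ordered

@dataclass
class Role:
    name: str                                            # step 1, e.g., "orientation"
    functions: list[Function] = field(default_factory=list)

def prioritized_guidelines(role: Role) -> list[str]:
    """Flatten a role's guidelines in priority order (steps 2 through 4)."""
    return [g for f in role.functions for m in f.methods for g in m.guidelines]

# Hypothetical example: role 1 ("orientation") prioritizes motivation via ARCS.
orientation = Role(
    name="orientation",
    functions=[
        Function("motivation",
                 [Method("ARCS", ["peer identification", "educational narrative"])]),
        Function("supportive information",
                 [Method("4C/ID", ["show complete skill"])]),
    ],
)

print(prioritized_guidelines(orientation))
# -> ['peer identification', 'educational narrative', 'show complete skill']
```

Because priority is simply list order, re-prioritizing a function or method is a matter of reordering the corresponding list, which mirrors the "adding from top to bottom" procedure of the framework.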

Having categorized the roles, functions, methods, and guidelines needed to develop a VER for the Viewbrics-project, we move on to the practical development of the VER.

Designing a Video-Enhanced Rubric (VER): six guidelines

In addition to addressing the conceptual problem of tension using a framework, we introduce several generalizable guidelines that may prove valuable to others designing and implementing a VER. We present these practically applicable guidelines in the three-level structure of Cattaneo et al.’s (2019) framework for designing hypervideo, shown in Fig. 5. We chose this model as it allows us to clarify the design decisions we took at a more detailed level and to generalize them into design guidelines for practical application by others. Cattaneo et al.’s (2019) model for designing hypervideo-based instructional scenarios starts with a first level of traditional, linear functions of pre-recorded video as a foundation (such as watching a movie on television). Video features that allow the learner to watch the video in a non-linear fashion, such as pressing play, pause, rewind, or fast-forward, are then added in the second level. The model then defines the addition of an index/table of contents and hyperlinks as features of the third level, hypervideo. For instance, a learner can visit a hyperlink to gain more in-depth information on a subject before resuming the linear video.
Fig. 5

Cattaneo et al.’s (2019) model for designing hypervideo-based instructional scenarios

Of the six presented design guidelines, guidelines 1, 2, 3 and 4 address the linear design of the VER. Guideline 5 addresses non-linear elements of design and guideline 6 addresses exchange options that may add value in connecting the abstract information of a rubric to concrete knowledge of the video.

Design guideline 1: educational narrative and the personalization principle

A purely instructional narrative (for instance, a step-by-step YouTube video containing the necessary steps to replace a broken phone screen) on how to perform a complex skill may lack insight into the variables influencing the performance of that skill (Dousay 2016; Hidi and Renninger 2006). Variables such as perceived difficulty, motivation, affection, and emotion (such as the anxiety of presenting) are known regulators of learning (Moreno 2005). To negate these shortcomings, we introduce an educational narrative as a design guideline. An educational narrative places the instructional information in context by embedding it into an educational storyline. Using Keller’s (1987) Attention Relevance Confidence Satisfaction (ARCS) motivational model, we designed such an educational narrative, integrated with educational content within the VER, specifically to stimulate the attention and motivation of learners.

To foster motivation with an educational narrative, it is crucial to limit factors that harm the learner’s identification with modeling examples. First, actors slightly older than the learner can perform the role of modeling example with the ‘perception of mastery’ while maintaining the acceptable model-observer similarity needed for motivation (De Grez et al. 2014; Hoogerheide et al. 2014; van Gog and Rummel 2010). Second, Mayer’s personalization principle suggests selecting actors with fluent, enthusiastic, and accent-free speech. These guidelines may minimize the attention the learner pays to the characteristics of the actor at the expense of analyzing the modeling example (van Gog and Rummel 2010).

For the Viewbrics-project use case, we use guideline 1 to design both the motivational and the supportive information functions of role 1. We added an introduction before the video-modeling examples, introducing an educational narrative. With the educational narrative, we aim to foster a richer and more contextualized insight into a complex skill than an instructional narrative alone would provide. We stimulate identification with the characters of our script by varying our cast in terms of gender, ethnicity, personal traits, and educational level to facilitate deeper learning (Mayer et al. 2003). We also aim to foster the relevance of the video for the target group of learners by resolving practical educational tasks/problems in a familiar and realistic group-dynamic setting (Keller 1987), such as a project group or classroom setting.

Design guideline 2: the non-verbal script principle

An educational narrative alone may not give learners sufficient insight into the richness and complexity of the mastery of a complex skill. For this purpose, we propose the addition of non-verbal scripts to the educational narrative. A non-verbal layer is added to the script in two forms.

First, a narrative voice-over intends to foster insight into the mental process of performing a complex skill to promote a richer mental model. In addition to forming a richer mental model, the voice-over may also verbalize factors such as anxiety for the learner and his peers.

In the Viewbrics-project use case, we use guideline 2 in role 1 of our design to strengthen the supportive information function. A voice-over is used to vocalize the anxiety of presenting, the selection of reliable internet sources in information literacy, and the social anxiety and complexity of collaboration.

Second, a gesture script may foster a rich mental model for the learner by instructing the video-modeling example to physically act out non-verbal cues such as gestures, eye movement and complex social interactions (Cutica and Bucciarelli 2011; Ouwehand et al. 2015).

In the Viewbrics-project use case, the gesture-script highlights changes in character development by gradually changing body language towards peers during the collaboration process. The gesture script describes the elements of presentation that rely on non-verbal activity, such as the presenter checking if his message has landed with the audience before beginning with a new sentence.

Design guideline 3: the distance principle

Even though the combination of educational content, educational narrative, and a non-verbal script may support the formation of a rich mental model, we must consider the limited working-memory capacity of the learner in our design guidelines. By helping the learner to access prior knowledge and to become motivated using a combination of educational content and an educational narrative, we introduce two processes that compete for working memory (Fisch 2000). Fisch’s (2000) concept of narrative dominance states that the educational narrative automatically takes priority over the educational content. To address this strain, we propose using Fisch’s (2000) capacity model of children’s comprehension of educational content on television. This model states that the distance between the educational narrative and the educational content correlates positively with the strain on working memory. Therefore, we propose keeping the distance between educational narrative and educational content in the script to a minimum. This is ensured by writing the script in such a manner that the learner perceives the educational narrative and the educational content to be intertwined, which reduces competition for precious working memory (Fisch 2000). Intertwining educational content and educational narrative at the storyline level can be ensured by using the educational content as a base for the storyline. Intertwining them at the character level can be secured by selecting actors based on the characteristics needed to portray the constituent skills, thus ensuring that the learner perceives a mastery level (Hoogerheide et al. 2016) (Fig. 6).
Fig. 6

The timeline of the videos: showing the complex skills of presentation, collaboration and information literacy with the numbered constituent skills. Yellow indicates an introductory storyline which serves the purpose of educational narrative and will foster personalization as suggested in guideline 1. Green shows the natural segmentation as recommended in guideline 4

In the Viewbrics-project use case, we use guideline 3 in our design to combine role 1’s motivational and supportive information functions.

The characters in the script are written to support the selected complex skills. Each character has his or her own defining quality, essential to portraying a complex skill. For instance, for the complex skill of collaboration, we have a pessimistic team member, a positive team member, a shy newcomer and a leader. Writing these defining qualities into the characters minimizes the distance between educational narrative and educational content, as no character is asked to step outside his or her role to portray the educational content. In Fig. 6, the first video in the series (collaboration) introduces our four characters and invests up to 3 min of educational narrative in the form of an introduction of the characters. The introduction aims to stimulate motivation and foster the learner's identification with the characters while intertwining educational narrative with educational content.

Design guideline 4: the natural segmentation principle

While intertwining the educational content and the educational narrative is essential to reduce competition for working memory, the length of the script warrants additional design guidelines. If we take a working memory limitation of approximately four to seven items into account, no more than seven constituent skills may be shown per segment to prevent errors in the learner's memory retrieval (Luck and Vogel 1997; Ma et al. 2014). However, segmenting the video after a fixed number of items might increase the distance between educational narrative and educational content: pausing the educational narrative after a maximum of seven items breaks the storyline and reveals the educational content to be the priority. We therefore propose using changes in setting in the script (such as moving from a scene filmed in school to an outdoor scene) as a guide for a more natural segmentation and fading. The natural segmentation offers the opportunity to include video fade-outs, which are intended to clear the learner's working memory, preparing the learner for the following four to seven items (Spanjers et al. 2012a, b; Van Merriënboer and Sluijsmans 2009).
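The segmentation rule above combines two constraints: segments should follow scene changes in the script, and no segment may exceed the seven-item working-memory bound. A minimal sketch of this logic, with illustrative data and function names that are not taken from the Viewbrics prototype, could look as follows:

```python
MAX_ITEMS = 7  # upper bound of the four-to-seven working-memory range cited above

def natural_segments(skills):
    """Group (constituent_skill, scene) pairs into segments, splitting at
    scene changes and enforcing the seven-item working-memory cap.
    A fade-out would be inserted between the returned segments."""
    segments, current, current_scene = [], [], None
    for skill, scene in skills:
        scene_change = scene != current_scene
        # Split at a natural scene boundary, or (as a fallback) at the cap.
        if current and (scene_change or len(current) == MAX_ITEMS):
            segments.append(current)
            current = []
        current.append(skill)
        current_scene = scene
    if current:
        segments.append(current)
    return segments
```

Under this sketch, a script moving from a classroom scene to an outdoor scene yields a fade-out at the scene change, and only an unusually long single scene would be split by the hard cap.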

In the Viewbrics-project use case, we use guideline 4 in our design to combine role 1’s motivational and supportive information functions.

The constituent skills between fade-outs range from three to six and are represented by the green cells in Fig. 6, which shows the timelines of the videos for collaboration, information literacy and presenting.

In summary, we propose adding an educational narrative and a non-verbal layer to the script to stimulate motivation and verbalize emotion (Keller 1987; Moreno 2005). We advise reducing the distance between the educational content, the educational narrative and the non-verbal layer by intertwining these elements in the script, based on Fisch's (2000) capacity model of children's comprehension of educational content on television, to minimize strain on working memory. Finally, we propose using natural segmentation and fading to clear the learner's working memory while minimizing the distance between educational content and educational narrative (Spanjers et al. 2012b; Van Merriënboer and Sluijsmans 2009).

Design guideline 5: the rewind principle

We propose keeping rewind options accessible as a non-linear development guideline for two reasons. First, replaying video correlates positively with learning outcomes and increases motivation (De Jong and Lazonder 2016; Schüler et al. 2013). Second, the exploratory and control-feedback principles state that exploring the VER through increased learner control decreases cognitive load and motivates novice learners (Eitam et al. 2013; Moreno 2004).
For the Viewbrics-project use case, we use guideline 5 in our design throughout roles 1 and 2 to support learner control and foster deeper learning and motivation. We foster replaying the video in multiple ways. First, we keep a 10-second rewind button accessible at all times in the orientation design and use the clusters of the rubric as a navigational menu in the preparation screen (Fig. 7). Second, we use rewind buttons in the self-explanation prompts (Fig. 8) and the assessment screens, allowing the learner to rewind to a timestamp in the video that may be useful in answering a specific self-explanation prompt or in assessing a peer or him/herself.
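The two rewind affordances described above can be sketched as follows. This is a hypothetical illustration, not the Viewbrics implementation: the class, method names, and cluster start times are assumptions made for the example.

```python
REWIND_STEP = 10  # seconds, matching the always-visible 10-second rewind button

class RubricVideoPlayer:
    """Sketch of a player offering guideline 5's rewind options."""

    def __init__(self, cluster_index):
        # cluster_index maps each rubric cluster name to its start time
        # in seconds, so the rubric doubles as a navigational index (Fig. 7).
        self.cluster_index = cluster_index
        self.position = 0  # current playback position in seconds

    def rewind(self):
        """Jump back 10 seconds, never before the start of the video."""
        self.position = max(0, self.position - REWIND_STEP)

    def goto_cluster(self, cluster):
        """Navigate to the scene covering a rubric cluster."""
        self.position = self.cluster_index[cluster]
```

The same `rewind`-style jump, parameterized with a prompt-specific timestamp, would serve the rewind buttons attached to the self-explanation prompts (Fig. 8).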
Fig. 7

The preparation role, showing the video broken down into rubric clusters and constituent skills. This allows the learner to rewind to a specific constituent skill, using the rubric as a navigational index

Fig. 8

The self-explanation questions

Design guideline 6: the self-explanation principle within natural segmentation

Having discussed the linear and non-linear design of the VER, we proceed to the hypervideo design. With the hypervideo design, we aim to help learners identify and connect the concrete contextualized knowledge presented in the video with the abstract knowledge presented in the rubric. To facilitate this, we ask the learner to self-explain, in his or her own words, how the concrete actions of the modeling example in the video meet the abstract mastery levels described per constituent skill in the rubric (O’Neil et al. 2014).

We propose facilitating the learner's self-explanation by inserting self-explanation prompts after each scene defined by the natural segmentation and fading guideline. Presenting a self-explanation prompt at the end of a scene also allows the learner to clear working memory regarding the constituent skills in the completed scene (Ma et al. 2014; O’Neil et al. 2014; Spanjers et al. 2012a). A self-explanation prompt presents the learner with a what, how or why question, such as: ‘How did Quinn interact with his audience?’ In the design of these self-explanation prompts, we support the learner in recognizing the scene from the video by offering similar text in both the rubric and the explanation prompt, called dynamic linking (Ainsworth 2006). Dynamic linking is thought to reduce cognitive load, especially in complex representations involving action-consequence sequences (Kaput 1992) (Fig. 9).
Fig. 9

The self-explanation prompt

For the Viewbrics-project use case, we use guideline 6 in our design to lower cognitive load in role 1 and to support the learner in connecting the video and the rubrics. Figure 8 shows the self-explanation prompts we inserted at the natural segmentation and fading points of the video, shown as the green blocks in Fig. 6. The self-explanation questions appear in chronological order, with the constituent skill shown first in the video as the top question. The color of each question relates to the skill cluster on the right of the screen. After clicking the first self-explanation question, a pop-up appears that repeats the question, as shown in Fig. 9. The purple check (✔) in Fig. 8 indicates completed questions. The purple rewind button to the left of each self-explanation question rewinds the video to a moment the learner might find useful for answering that question. After all self-explanation questions are answered, the video moves on to the next scene.
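The prompt behavior just described, where each prompt carries a rewind target and the video advances only once every question in the scene has been answered, can be sketched as follows. The class and its names are hypothetical, introduced only to illustrate the design; they do not reflect the Viewbrics code.

```python
class PromptGate:
    """Sketch of guideline 6's gate: a scene's self-explanation prompts
    must all be answered before the video moves on to the next scene."""

    def __init__(self, prompts):
        # prompts maps each self-explanation question to the timestamp
        # (in seconds) its purple rewind button jumps back to.
        self.prompts = prompts
        self.answers = {}

    def rewind_target(self, question):
        """Timestamp the learner can jump back to for this question."""
        return self.prompts[question]

    def answer(self, question, text):
        """Record the learner's self-explanation for one question."""
        self.answers[question] = text

    def scene_complete(self):
        """True once every question has an answer; only then does the
        video proceed to the next scene."""
        return set(self.answers) == set(self.prompts)
```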

Conclusions and discussion

Developing a multimedia application to foster complex skills can raise several design dilemmas. For instance, guidelines for multimedia design may have limited stability over time and, thus, limited use in complex skill development (Schweppe et al. 2015). We developed a framework to filter the essential guidelines from theory and methodology for practical application in the Viewbrics prototype. Our framework can be used as a step-by-step guide by educational researchers and practitioners in prioritizing the sometimes conflicting design principles expressed by major (motivational, skill acquisition, multimedia) theories.

We found motivation to be a constant design factor throughout our guidelines because of its positive effect on both complex skill development and learning with multimedia (De Grez et al. 2009; Heidig et al. 2015; Moreno and Mayer 2007). The importance of designing for motivation emerged from both the literature and the experience of the teachers involved in the Viewbrics project, leading to the ARCS model being used from the writing of the script to the final field testing. We therefore strongly recommend that readers consider motivation as a constant factor when analyzing the functions of a multimedia application with our RFMG framework. We presented six practical guidelines as tools to inspire educational researchers and practitioners alike in creating their own Video Enhanced Rubric to foster complex skills. The practical use case gives context and inspiration for design concerning the changing and multiple functions a Video Enhanced Rubric may have, such as providing feedback, fostering complex skill development, motivating exploration or presenting straightforward information (Ainsworth 2006). Our prototype encourages the learner to connect, in their own words, descriptive textual rubric content to realistic, motivational and cognitive-load-conscious video modeling examples through embedded self-explanation prompts. We have presented a novel approach to the development of complex skills using multimedia in the field of formative assessment. While our prototype is specific to the complex skills of presenting, collaboration and information literacy, we expect this design to apply to a broader range of complex skills (such as critical and/or creative thinking, problem-solving and communicating) and not to be limited to lower pre-university education. Adding a feature for schools to add their own video material with matching rubrics and self-explanation prompts could be a next step in widening future applications.

Our design will now be implemented in four classrooms of two Dutch schools for pre-university education. We hypothesize that groups using the VER will develop richer mental models, produce higher-quality feedback and show higher complex skill performance for collaboration, information literacy, and presentation skills than the other experimental groups. The implementation will provide data for our future research and refinement towards a practically applicable VER.

Notes

Acknowledgement

The authors would like to thank the reviewers for their constructive comments on our paper and all Viewbrics project members for the fruitful and enjoyable collaboration.

Funding

We would like to gratefully acknowledge the contribution of the Viewbrics-Project (Project No. 405-15-550), which is funded by the practice-oriented research program of the Netherlands Initiative for Education Research (NRO), part of The Netherlands Organisation for Scientific Research (NWO).

Compliance with ethical standards

Conflict of interest

All authors declared that they have no conflict of interest.

Ethical approval

This research has been approved by the ethics committee of the author’s institution.

Human and Animal Rights

No human participants were involved in this study.

Open Access

This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

  1. Ackermans, K., Rusman, E., Brand-Gruwel, S., & Specht, M. (2017). A first step towards synthesizing rubrics and video for the formative assessment of complex skills. In Communications in Computer and Information Science (Vol. 653, pp. 1–10). https://doi.org/10.1007/978-3-319-57744-9_1
  2. Ackermans, K., Rusman, E., Brand-Gruwel, S., & Specht, M. (2018). The dilemmas of formulating theory-informed design guidelines for a video enhanced rubric. In Communications in Computer and Information Science (Vol. 829, pp. 123–136). https://doi.org/10.1007/978-3-319-97807-9_10
  3. Ainsworth, S. (2006). DeFT: A conceptual framework for considering learning with multiple representations. Learning and Instruction, 16(3), 183–198. https://doi.org/10.1016/j.learninstruc.2006.03.001
  4. Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.
  5. Bertsch, S., Pesta, B. J., Wiscott, R., & McDaniel, M. A. (2007). The generation effect: A meta-analytic review. Memory & Cognition, 35(2), 201–210. https://doi.org/10.3758/BF03193441
  6. Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5–31. https://doi.org/10.1007/s11092-008-9068-5
  7. Brookhart, S. M., & Chen, F. (2014). The quality and effectiveness of descriptive rubrics. Educational Review. https://doi.org/10.1080/00131911.2014.929565
  8. Cattaneo, A. A. P., van der Meij, H., Aprea, C., Sauli, F., & Zahn, C. (2019). A model for designing hypervideo-based instructional scenarios. Interactive Learning Environments. https://doi.org/10.1080/10494820.2018.1486860
  9. Clark, R. C., & Mayer, R. E. (2012). e-Learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning (3rd ed.). https://doi.org/10.1002/9781118255971
  10. Cutica, I., & Bucciarelli, M. (2011). “The More You Gesture, the Less I Gesture”: Co-speech gestures as a measure of mental model quality. Journal of Nonverbal Behavior, 35(3), 173–187. https://doi.org/10.1007/s10919-011-0112-7
  11. De Grez, L., Valcke, M., & Roozen, I. (2009). The impact of goal orientation, self-reflection and personal characteristics on the acquisition of oral presentation skills. European Journal of Psychology of Education, 24(1), 293–306. https://doi.org/10.1007/BF03174762
  12. De Grez, L., Valcke, M., & Roozen, I. (2014). The differential impact of observational learning and practice-based learning on the development of oral presentation skills in higher education. Higher Education Research & Development, 33(2), 256–271. https://doi.org/10.1080/07294360.2013.832155
  13. De Jong, T., & Lazonder, A. W. (2016). The guided discovery learning principle in multimedia learning. In R. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 371–390). Cambridge: Cambridge University Press.
  14. Dousay, T. A. (2016). Effects of redundancy and modality on the situational interest of adult learners in multimedia learning. Educational Technology Research and Development, 64(6), 1–21. https://doi.org/10.1007/s11423-016-9456-3
  15. Eitam, B., Kennedy, P. M., & Higgins, E. T. (2013). Motivation from control. Experimental Brain Research, 229(3), 475–484. https://doi.org/10.1007/s00221-012-3370-7
  16. Fisch, S. M. (2000). A capacity model of children’s comprehension of educational content on television. Media Psychology, 2(October), 63–91. https://doi.org/10.1207/S1532785XMEP0201
  17. Frerejean, J., van Strien, J. L. H., Kirschner, P. A., & Brand-Gruwel, S. (2016). Completion strategy or emphasis manipulation? Task support for teaching information problem solving. Computers in Human Behavior, 62, 90–104. https://doi.org/10.1016/j.chb.2016.03.048
  18. Gary, M. S., & Wood, R. E. (2011). Mental models, decision rules, and performance heterogeneity. Strategic Management Journal, 32(6), 569–594. https://doi.org/10.1002/smj.899
  19. Gary, M. S., & Wood, R. E. (2016). Unpacking mental models through laboratory experiments. System Dynamics Review, 32(2), 101–129. https://doi.org/10.1002/sdr.1560
  20. Heidig, S., Müller, J., & Reichelt, M. (2015). Emotional design in multimedia learning: Differentiation on relevant design features and their effects on emotions and learning. Computers in Human Behavior, 44, 81–95. https://doi.org/10.1016/j.chb.2014.11.009
  21. Hidi, S., & Renninger, K. A. (2006). The four-phase model of interest development. Educational Psychologist, 41(2), 111–127. https://doi.org/10.1207/s15326985ep4102_4
  22. Hoogerheide, V., Loyens, S. M. M., & van Gog, T. (2014). Effects of creating video-based modeling examples on learning and transfer. Learning and Instruction, 33, 108–119. https://doi.org/10.1016/j.learninstruc.2014.04.005
  23. Hoogerheide, V., van Wermeskerken, M., Loyens, S. M. M., & van Gog, T. (2016). Learning from video modeling examples: Content kept equal, adults are more effective models than peers. Learning and Instruction, 44, 22–30. https://doi.org/10.1016/j.learninstruc.2016.02.004
  24. Janssen-Noordman, A. M., & Van Merriënboer, J. J. G. (2002). Innovatief Onderwijs Ontwerpen. Groningen: Wolters-Noordhoff.
  25. Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130–144. https://doi.org/10.1016/j.edurev.2007.05.002
  26. Kaput, J. J. (1992). Technology and mathematics education. In D. A. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 515–556). New York: Macmillan.
  27. Keller, J. M. (1987). Development and use of the ARCS model of motivational design. Journal of Instructional Development, 10(1932), 2–10. https://doi.org/10.1002/pfi.4160260802
  28. Kerkhoffs, J., Stark, E., & Zeelenberg, T. (2006). Rubrics als beoordelingsinstrument voor vaardigheden. Enschede: SLO.
  29. Kim, Y., & McDonough, K. (2011). Using pretask modelling to encourage collaborative learning opportunities. Language Teaching Research, 15(2), 183–199. https://doi.org/10.1177/1362168810388711
  30. Kirschner, P., & Van Merriënboer, J. (2008). Ten steps to complex learning: A new approach to instruction and instructional design. In T. L. Good (Ed.), 21st century education: A reference handbook (pp. 244–253). Thousand Oaks, CA: Sage. Retrieved from http://hdl.handle.net/1820/2327
  31. Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42(2), 7–19. https://doi.org/10.1007/BF02299087
  32. Krauskopf, K., Zahn, C., & Hesse, F. W. (2012). Leveraging the affordances of Youtube: The role of pedagogical knowledge and mental models of technology functions for lesson planning with technology. Computers & Education, 58(4), 1194–1206. https://doi.org/10.1016/j.compedu.2011.12.010
  33. Kuhl, J. (2000). A functional-design approach to motivation and self-regulation: The dynamics of personality systems and interactions. In J. Kuhl (Ed.), Handbook of self-regulation (pp. 111–169). San Diego, CA: Academic Press.
  34. Leutner, D. (2014). Motivation and emotion as mediators in multimedia learning. Learning and Instruction, 29, 174–175. https://doi.org/10.1016/j.learninstruc.2013.05.004
  35. Luck, S. J., & Vogel, E. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390(6657), 279–281. https://doi.org/10.1038/36846
  36. Ma, W. J., Husain, M., & Bays, P. M. (2014). Changing concepts of working memory. Nature Neuroscience, 17(3), 347–356. https://doi.org/10.1038/nn.3655
  37. Martens, R. (2007). Positive learning met Multimedia. Onderzoeken, toepassen & generaliseren. Heerlen, The Netherlands: Open University of the Netherlands. Retrieved from http://dspace.ou.nl/handle/1820/1611
  38. Matthews, W. J., Buratto, L. G., & Lamberts, K. (2010). Exploring the memory advantage for moving scenes. Visual Cognition, 18(10), 1393–1420. https://doi.org/10.1080/13506285.2010.492706
  39. Mayer, R. E. (2014a). Incorporating motivation into multimedia learning. Learning and Instruction, 29, 171–173. https://doi.org/10.1016/j.learninstruc.2013.04.003
  40. Mayer, R. E. (Ed.). (2014b). The Cambridge handbook of multimedia learning (2nd ed.). Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781139547369
  41. Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43–52. https://doi.org/10.1207/S15326985EP3801_6
  42. Mayer, R. E., Sobko, K., & Mautone, P. D. (2003). Social cues in multimedia learning: Role of speaker’s voice. Journal of Educational Psychology, 95(2), 419–425. https://doi.org/10.1037/0022-0663.95.2.419
  43. Moreno, R. (2004). Decreasing cognitive load for novice students: Effects of explanatory versus corrective feedback in discovery-based multimedia. Instructional Science, 32, 99–113. https://doi.org/10.1023/B:TRUC.0000021811.66966.1d
  44. Moreno, R. (2005). Instructional Technology: Promise and Pitfalls. Technology-Based Education: Bringing Researchers and Practitioners Together, 1–19. Retrieved from http://test.scripts.psu.edu/users/c/r/crg177/ProfessionalDevelopment/Moreno (2005).pdf
  45. Moreno, R., & Mayer, R. (2007). Interactive multimodal learning environments. Educational Psychology Review, 19(3), 309–326. https://doi.org/10.1007/s10648-007-9047-2
  46. O’Neil, H. F., Chung, G. K. W. K., Kerr, D., Vendlinski, T. P., Buschang, R. E., & Mayer, R. E. (2014). Adding self-explanation prompts to an educational computer game. Computers in Human Behavior, 30, 23–28. https://doi.org/10.1016/j.chb.2013.07.025
  47. Ouwehand, K., van Gog, T., & Paas, F. (2015). Designing effective video-based modeling examples using gaze and gesture cues. Educational Technology and Society, 18(4), 78–88.
  48. Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: A review. Educational Research Review, 9, 129–144. https://doi.org/10.1016/j.edurev.2013.01.002
  49. Panadero, E., & Romero, M. (2014). To rubric or not to rubric? The effects of self-assessment on self-regulation, performance and self-efficacy. Assessment in Education: Principles, Policy & Practice, 21(2), 133–148. https://doi.org/10.1080/0969594X.2013.877872
  50. Panadero, E., Tapia, J. A., & Huertas, J. A. (2012). Rubrics and self-assessment scripts effects on self-regulation, learning and self-efficacy in secondary education. Learning and Individual Differences, 22(6), 806–813. https://doi.org/10.1016/j.lindif.2012.04.007
  51. Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science. https://doi.org/10.1002/bs.3830280103
  52. Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–144. https://doi.org/10.1007/BF00117714
  53. Schüler, A., Scheiter, K., & Gerjets, P. (2013). Is spoken text always better? Investigating the modality and redundancy effect with longer text presentation. Computers in Human Behavior, 29(4), 1590–1601. https://doi.org/10.1016/j.chb.2013.01.047
  54. Schunk, D. H., Hanson, A. R., & Cox, P. D. (1987). Peer-model attributes and children’s achievement behaviors. Journal of Educational Psychology. https://doi.org/10.1037/0022-0663.79.1.54
  55. Schweppe, J., Eitel, A., & Rummer, R. (2015). The multimedia effect and its stability over time. Learning and Instruction, 38, 24–33. https://doi.org/10.1016/j.learninstruc.2015.03.001
  56. Spanjers, I. A. E., Van Gog, T., & Van Merriënboer, J. J. G. (2012a). Segmentation of worked examples: Effects on cognitive load and learning. Applied Cognitive Psychology, 26(3), 352–358. https://doi.org/10.1002/acp.1832
  57. Spanjers, I. A. E., Van Gog, T., Wouters, P., & Van Merriënboer, J. J. G. (2012b). Explaining the segmentation effect in learning from animations: The role of pausing and temporal cueing. Computers & Education, 59(2), 274–280. https://doi.org/10.1016/j.compedu.2011.12.024
  58. Van Aalst, J., Chan, C. K. K., Chan, Y.-Y., Wan, W.-S., & Tian, S. (2011). Design and development of a formative assessment tool for knowledge building and collaborative learning. In Connecting Computer-Supported Collaborative Learning to Policy and Practice: CSCL 2011 Conf. Proc. Short Papers and Posters, 9th International Computer-Supported Collaborative Learning Conf., 2, 914–915.
  59. van Gog, T., & Rummel, N. (2010). Example-based learning: Integrating cognitive and social-cognitive research perspectives. Educational Psychology Review, 22(2), 155–174. https://doi.org/10.1007/s10648-010-9134-7
  60. Van Merriënboer, J. J. G., & Kester, L. (2014). The four-component instructional design model: Multimedia principles in environments for complex learning. In R. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 104–148). Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781139547369.007
  61. Van Merriënboer, J. J. G., & Sluijsmans, D. M. A. (2009). Toward a synthesis of cognitive load theory, four-component instructional design, and self-directed learning. Educational Psychology Review, 21(1), 55–66. https://doi.org/10.1007/s10648-008-9092-5
  62. Westera, W. (2011). Reframing Contextual Learning: Anticipating the Virtual Extensions of Context, 14, 201–212. Retrieved from http://hdl.handle.net/1820/2112

Copyright information

© The Author(s) 2019


Authors and Affiliations

  1. Welten Institute, Open Universiteit, Heerlen, The Netherlands
