
Educational Psychology Review, Volume 30, Issue 1, pp 153–176

Promoting Argumentation Competence: Extending from First- to Second-Order Scaffolding Through Adaptive Fading

  • Omid Noroozi
  • Paul A. Kirschner
  • Harm J.A. Biemans
  • Martin Mulder
Open Access
Review Article

Abstract

Argumentation is fundamental for many learning assignments, ranging from primary school to university and beyond. Computer-supported argument scaffolds can facilitate argumentative discourse along with concomitant interactive discussions among learners in a group (i.e., first-order argument scaffolding). However, there is no evidence, and hence no knowledge, of whether such argument scaffolds can help students acquire argumentation competence that can be transferred by the students themselves to various similar learning tasks (i.e., second-order argument scaffolding). Therefore, this conceptual article argues that the focus of argument scaffold design and research should be expanded: from the study of first-order scaffolding alone to including the study of second-order scaffolding as well. On the basis of the Script Theory of Guidance (SToG), this paper presents a guideline for second-order argument scaffolding using diagnosis of the student’s internal argumentative script and offering adaptive external support and various fading mechanisms. It also explains how to complement adaptive fading support with peer assessment, automatic response tools, and adaptable self-assessment to ensure that learners actually understand, learn, and apply targeted argumentation activities in similar situations.

Keywords

Adaptive education · Argumentation · Fading · Scaffolding · Learning

Introduction

Research on scaffolding Collaborative Argumentation-Based Learning (CABLe) has been influenced by developments in educational technologies focusing on the role of computer support systems (see Scheuer et al. 2010; Noroozi et al. 2012 for an overview). With CABLe, students exchange views and arguments, negotiate meaning, and (co-)construct knowledge on the issue at hand. A meta-analysis by Wecker and Fischer (2014) showed that computer-supported argument scaffolds are successful with respect to their most proximal goal of enhancing argumentation in the particular learning task. However, contrary to the broadly shared theoretical assumption, this meta-analysis also showed that argumentation does not mediate the effects of interventions on domain-specific knowledge acquisition (Wecker and Fischer 2014). This is striking because—analogous to what Salomon (1992) argued with respect to the effects of and with computers—if learners can acquire argumentation competence (i.e., learn how to argue themselves), it is likely that they will also acquire domain-specific knowledge through the epistemic exchange of ideas and argumentation with their learning partners.

One criticism is that most available computer-supported argument scaffolds are aimed at stimulating argumentative discourse for learning within a particular task (i.e., to achieve first-order argument scaffolding), and there is no systematic evidence, and hence no knowledge, of whether such scaffolds also help learners acquire argumentation competence for transfer within the same area (i.e., second-order argument scaffolding) (see Noroozi et al. 2012). The focus of this article is on the various disciplines of the natural sciences (e.g., chemistry, biology, geology, biotechnology, and physics) that contain many societal and controversial issues. With second-order argument scaffolding, students should be able to transfer their argumentation competence within the same discipline for dealing with various complex, ill-defined, and controversial issues in that specific discipline. The reason for expecting such transfer within the same discipline, and not across disciplines and areas, is that each discipline has its own shared features of argumentation structure, disciplinary values, epistemology, argumentation goals, and terminology (see Andrews 2010; Noroozi et al. 2016; Samraj 2004).

Although the transfer of some aspects of acquired argumentation competence (e.g., learning to use data to back up hypotheses) can also, to some extent, be applied across different disciplines of an area (such as from biotechnology to molecular life sciences and vice versa) and even areas (such as from natural sciences to social sciences and vice versa), it would be an oversimplification to assume that the transfer of argumentation competence can be accomplished in all aspects, because various disciplines have different argumentation rules and goals (see Andrews 2010; Noroozi et al. 2016; Samraj 2004). For example, argumentation for CABLe in the area of natural sciences is rather different from the type of competitive argumentation in law containing ethical or legal argumentation (see Pinkwart et al. 2006, 2007) or argumentation in math in which there actually is a solution that can be proven and not many things are ill-defined. Therefore, when we talk about the transfer of argumentation competence, we mean transfer within the same discipline in the area of natural sciences, and not transfer across disciplines and/or areas.

The process of acquiring argumentation competence differs depending on the learner’s own individual, already developed, and often idiosyncratic internal script that indicates how a person will act in, and understand, a particular situation (see Kollar et al. 2007). Fischer et al. (2013) refer to internal collaboration scripts as configurations of knowledge components about a collaboration process. They refer to external scripts as types of support that can activate existing internal scripts or help create new internal scripts through organizing dispersed, internally represented elements of knowledge. An external script can be used to induce a functional configuration of an internal script to be enacted which enables learners to engage in CABLe practice beyond their ability without an external script (Fischer et al. 2013). The internal script of each learner develops through repeated learning experiences and guides the learner in new CABLe situations (see Fischer et al. 2013).

Fischer et al. (2013) developed the Script Theory of Guidance (SToG), which can explain the interplay of the external and internal scripts during CABLe. Based on the SToG, the learners’ activities are guided by a (re)configuration of existing internal script components consisting of play, scenes, roles, and scriptlets. While Fischer et al. (2013) specifically focus on the SToG for application in Computer-Supported Collaboration Learning (CSCL) settings, this study applies the SToG for second-order argument scaffolding in CABLe settings. Therefore, regarding SToG, the emphasis is on argumentation rather than collaboration as such.
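The SToG’s hierarchy of script components can be pictured as a nested data structure in which a play comprises scenes, scenes involve roles, and roles are enacted through scriptlets. The sketch below is merely a mnemonic for this hierarchy; the class and example names are invented for illustration and carry no claims beyond the component structure itself.

```python
from dataclasses import dataclass, field

# Sketch of the SToG component hierarchy: a play is made up of scenes,
# each scene involves roles, and each role is enacted through scriptlets
# (sequences of activities). All example contents are illustrative only.
@dataclass
class Role:
    name: str
    scriptlets: list[str] = field(default_factory=list)

@dataclass
class Scene:
    name: str
    roles: list[Role] = field(default_factory=list)

@dataclass
class Play:
    name: str
    scenes: list[Scene] = field(default_factory=list)

critic = Role("critic", ["read the claim", "formulate a counterargument"])
debate = Scene("argumentative discussion", [critic])
cable = Play("CABLe session", [debate])
```

In this picture, an external script intervenes at one of these levels, for instance by prescribing which scriptlets a role should enact.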

According to the SToG, acquiring argumentation competence depends on how external and internal scripts interplay during CABLe (see Kollar et al. 2007). External scripts are optimally presented when they trigger a specific constellation of internal script components that already exists in the learners, and when they neither conflict with nor are redundant to those components (Fischer et al. 2013).

Fading external scripts or, in other words, gradually transferring the responsibility for learning from the learning environment to the learner has been argued to be the most effective approach to realize an optimal interplay between external and internal scripts (see Kollar et al. 2007; Van Merriënboer and Kirschner 2013). However, fading something that is not understood is not adequate for fostering acquisition and practice of the targeted argumentation activities. Additional support during the fading is needed if learners are to dynamically reconfigure their internal script components as a response to changing situations, and to their individual goals to continue acting in accordance with the strategy suggested by the external script (Fischer et al. 2013; Wecker and Fischer 2011).

This conceptual paper proposes instructional approaches to complement fading for internalizing and securing continuous application of the strategy as suggested by external support. It uses a narrative analysis approach to synthesize and integrate literature on this topic, with the goal of developing a guideline for the design of second-order argument scaffolding, and for addressing practical implications and indicating avenues for future research. After synthesizing the literature, this paper presents a three-step guideline for the design of second-order argument scaffolding in such a way as to secure acquisition and continuous application of the argumentation strategies for various similar learning tasks (i.e., to promote argumentation competence), namely (1) diagnosis of the internal argumentative script, (2) adaptive external support, and (3) adaptive fading of this external support. Specifically, this paper describes mechanisms in which automated analysis techniques can be used to recognize the internal scripts of both individuals and groups of learners and their learning processes for providing dynamic support and adaptive fading. Next, it explains how artificial intelligence and computer-linguistic tools can be combined to provide learners with dynamic support and adaptive fading depending on argumentative discourse. Then, it explains how to complement adaptive fading support with peer assessment, automatic response tools, and (adaptable) self-assessment to ensure that learners actually understand and learn the targeted argumentative activities for transfer in similar situations. Finally, it formulates an agenda for future research.

Collaborative Argumentation-Based Learning

In social constructivist paradigms for learning, students are supposed to engage in argumentation with learning partners, negotiate meaning, and understand multiple perspectives of issues at stake for co-constructing knowledge and solving tasks. Students need to express their informed opinions on what they think, what they mean, what they believe, and what they need from their peers so as to resolve differences of opinion on the issue at stake and solve task-related problems that confront them (see Kirschner et al. 2003).

Argumentation is a vehicle for collaborative learning processes such as knowledge co-construction and meaning making (see Baker 2003). This is known as CABLe in which argumentation is used by learning partners as a means to engage in collective exploration of a dialogical space of solutions (Andriessen 2006). CABLe—otherwise known as argumentative knowledge construction (see Weinberger and Fischer 2006)—is regarded not only as a discourse for changing learners’ conceptions or convincing others through logical and evidence-based theses but also as a way to present and discuss disagreements and reasoning for demonstrating truth or falsehood, and to gain understanding of the multiple perspectives of the issue at stake (see Kirschner et al. 2003). That is why Baker (2009) argues that the point of CABLe is not necessarily changing learners’ conceptions or beliefs, but rather broadening and deepening their views and making them more reasoned and reasonable, which will enable them to understand each other’s point of view. This is an important distinction, because when learners perceive argumentation as competitive, it is likely that they will merely engage in what Asterhan and Schwarz (2009) call a “debate-type win-lose situation” as in law (see Pinkwart et al. 2006, 2007), in which they try to refute their opponents’ views and prove the superiority of their own arguments. This is more or less similar to a situation where argumentation merely serves as a means for persuasion or eristic argumentation (“fighting”). Thus, argumentation can effectively contribute to learning when it is not used as an adversarial means for competition during CABLe practice (Andriessen 2006; Asterhan and Schwarz 2009).

Difficulties for Collaborative Argumentation-Based Learning

Despite the ubiquity of argumentation in everyday life, students often struggle to generate, analyze, interpret, and evaluate valid arguments during CABLe (see Kuhn 1991). Several factors could contribute to the observed difficulties (see Noroozi et al. 2012 for an overview of these difficulties). Some difficulties have to do with cognitive, emotional, and social barriers during discourse. For example, while some students might experience epistemic emotions (being curious or anxious when receiving counterarguments), others might experience achievement emotions (being proud of their success or ashamed of their failure) during CABLe. Other contributing factors are the complex, nonlinear, ill-defined nature of argumentation and the pressure of real-time interaction, which make it difficult for learners to follow a fixed set of rules on how to construct arguments and respond to counterarguments. Finally, some students perceive CABLe as similar to argumentation in everyday life situations. In everyday life, argumentation is often a clash of ideas in which the goal is neither learning nor resolution, but rather “winning” the argument. A variety of approaches have been introduced to cope with these complexities and difficulties for CABLe. The most prominent approach is the use of computer support systems for CABLe.

Computer Support Systems for Collaborative Argumentation-Based Learning

There are computer support systems for collaborative argumentation which assist sharing, constructing, and representing arguments to support learning, and which also lighten some of the responsibilities of teachers in terms of time pressure and availability. Such systems provide tools for collaborative development of arguments and argumentation schemas and possibly also for checking the consistency/inconsistency between arguers and their arguments (see Kirschner et al. 2003; Scheuer et al. 2010; Noroozi et al. 2012 for an overview).

Computer support systems can scaffold critical discourse and argumentation processes in a number of ways, for example through representational guidance tools, digital dialogue games, and macro- and micro-scripting approaches. The underlying rationale for representational guidance tools is to provide learners with a graphical representation of the structure of argumentation to support argumentation processes in the form of schematic representations (Schwarz and De Groot 2007), tables (Suthers and Hundhausen 2003), or visualizations (Noroozi et al. 2011; Scheuer et al. 2013). The underlying rationale for digital dialogue games is based on the dialogic dimension of argumentation that guides students towards desirable argumentative moves and sequences (see Ravenscroft 2007, 2011 for an overview), as in CoLLeGE (e.g., Ravenscroft and Pilkington 2000), AcademicTalk (e.g., McAlister et al. 2004), and InterLoc (e.g., Ravenscroft and McAlister 2006). Macro-scripting provides learners with predefined roles and activities that can stimulate argumentative discourse, such as assigning and rotating roles (see De Wever et al. 2007; Schellens et al. 2007). Micro-scripting, in turn, helps learners follow a desired mode of interaction and argumentation through explicit guidelines that clarify what activities need to be executed during CABLe, when, and by whom; examples of such guidelines are prompts (Baker and Lund 1997), sentence openers (McAlister et al. 2004), and question stems (Ge and Land 2004). To conclude, macro-scripts might be regarded as scripts that provide support at the scene and role components of the SToG, without any further guidance at the scriptlet (activity) level, whereas micro-scripts provide support for components of the SToG specifically at the scriptlet level.
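As an illustration of micro-scripting, a set of sentence openers can be represented as a simple mapping from desired argumentative moves to the prompts an interface offers. The move names and openers below are hypothetical examples for illustration, not taken from the systems cited above.

```python
# Minimal sketch of a sentence-opener micro-script: each argumentative
# move the designer wants to elicit is paired with openers the interface
# can offer to learners. Move names and prompts are hypothetical.
MICRO_SCRIPT = {
    "claim": ["I think that ...", "My position is ..."],
    "ground": ["My evidence for this is ...", "This is supported by ..."],
    "counterargument": ["I disagree, because ...", "On the other hand ..."],
    "integration": ["Taking both views into account ...", "We could agree that ..."],
}

def openers_for(move: str) -> list[str]:
    """Return the sentence openers scripted for a given argumentative move."""
    return MICRO_SCRIPT.get(move, [])
```

A system could, for instance, show `openers_for("counterargument")` whenever a learner replies to a peer’s claim, nudging the discourse towards the desired sequence.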

Although most of these scaffolds have been successful on their proximal intended outcomes (first-order argument scaffolding), this does not necessarily mean that they achieved second-order argument scaffolding (acquiring argumentation for transfer to similar situations). Even if students learn about the criteria for assessing and evaluating the quality of argumentation in such systems, they may still have difficulties applying these argumentation criteria in similar tasks. That is why, in a study by Noroozi et al. (2013), learners were found to score high on construction of a single argument, yet they were not able to put that into practice in a new comparable case. Therefore, additional support is needed if students are to acquire argumentation competence (second-order scaffolding) for carrying out corresponding domain-specific tasks (first-order scaffolding) in similar situations. Below, the nature and conditions of the first- and second-order argument scaffolding are described.

First- and Second-Order Argument Scaffolding

Van Merriënboer and Kirschner (2013), in their book Ten Steps to Complex Learning, argue that there are two types of scaffolds for learning in educational settings, namely first-order and second-order scaffolds. They explain that while first-order (i.e., regular) scaffolding refers to providing support and guidance for the acquisition and performance of domain-specific complex skills, second-order scaffolding provides support and guidance for self-directed learning.

The first-order scaffolding approach can be used for situations in which learners need to perform recurrent and routine aspects of learning tasks in order to master a complex cognitive skill within the domain being taught. For example, a medical teacher can use a variety of available methods, such as workshops, skills-lab exercises, drill-and-practice programs or intelligent electronic agents, to teach lifesaving skills (e.g., mouth-to-mouth resuscitation, intubation, external cardiac massage) to medical students (Van Merriënboer and Kirschner 2013). With regard to argumentation learning, this is analogous to a class situation in which scaffolding is used as a means to stimulate argumentative discourse activities among students with the aim of learning domain-specific knowledge or complex skill aspects of a learning task such as the pros and cons of energy conservation. For first-order scaffolding, the control over task practice and responsibilities is shared between the learner and the teacher. A tutor or a pedagogical agent in a computer program can also take on the role of the teacher. The teacher is responsible for designing and selecting the tasks and also providing advice on how to effectively practice tasks. Learners are responsible for identifying routines that may be helpful for improving their whole task performance and finding opportunities for practicing tasks (Van Merriënboer and Kirschner 2013).

There are also situations in which scaffolds target not only acquisition of a complex cognitive skill but also development of self-directed learning that will help learners become competent professionals who can handle comparable tasks and continue learning in their future professions (Van Merriënboer and Kirschner 2013). Such scaffolds, which aim at both teaching a particular complex cognitive skill and stimulating the development of self-directed learning, are called second-order scaffolding. For example, a teacher may first regularly meet with students to discuss the problem and selection of the learning task, then gradually reduce the number of coaching meetings, and finally let learners schedule such meetings only if necessary (Van Merriënboer and Kirschner 2013). This way the teacher not only supports the learners with specific routines and recurrent aspects of the learning task (especially at the early stage) but also promotes self-directed learning by allowing learners to select their own learning task. With regard to argumentation learning, this is analogous to a class situation in which scaffolding is used as a means to teach learners to acquire the argumentation competence not only for learning the current complex cognitive issue at hand (i.e., within the domain being taught) but also for applying such competence in comparable tasks in the future.

This paper argues that second-order argument scaffolding should be given priority above first-order scaffolding in class situations in which students have to deal with complex and authentic problems on a frequent basis, such as in the area of the natural sciences. The rationale for this is that, if the learners acquire argumentation in such a way as to self-direct it for application in similar situations, it is likely that they also acquire the complex cognitive skills by engaging in epistemic and argumentative activities with learning partners (see Andriessen 2006). In other words, second-order argument scaffolding is assumed to include first-order argument scaffolding. In this view, argumentation can be used as an epistemic activity in which learning partners learn how to express and exchange their ideas in order to gain and construct knowledge, correct false viewpoints, refine and modify claims, and eliminate their misunderstandings and misconceptions about issues at stake (Andriessen 2006).

To sum up, first-order argument scaffolding refers to providing support for students to engage in desirable modes of argumentation during CABLe for the acquisition and performance of the particular domain-specific knowledge or skill. This, however, does not imply that when the first-order argument scaffolding is achieved, students would be able to engage in desirable modes of argumentation during CABLe once the support is no longer present. Second-order argument scaffolding refers to providing support for students during CABLe to acquire argumentation competence that can be transferred to similar situations (self-directed learning). This implies that when the second-order argument scaffolding is achieved, students would be able to engage in desirable modes of argumentation during CABLe even once the support is no longer present.

It is worth mentioning that argumentation competence is not only the knowledge and the skills of argumentation but also students’ attitude and willingness to apply them to the case under discussion when the situation calls for it (see Rapanta et al. 2013). For second-order argument scaffolding, various elements of argumentation competence, such as knowledge, skill, and attitude, should be taken into consideration. For first-order argument scaffolding, students do not need to be fully competent in argumentation because they constantly receive support and guidance for engaging in desirable modes of interaction and argumentation during CABLe. This support is always available during the discourse because argumentation serves as a means for engaging students in desirable modes of argumentation during CABLe and for promoting domain-specific knowledge or skill, not for acquiring argumentation competence itself.

The challenge for educational designers and researchers is thus how to design and implement tools for second-order argument scaffolding that can also facilitate the acquisition of first-order scaffolding. Second-order argument scaffolding, however, is not at all easy for educational designers and teachers, since internal argumentative scripts may vary among individuals and are not fixed in different situations. Even though there are some basic script components in each culture that are shared and that allow for successful social interaction, for any specific CABLe situation, learners may have available different sets of plays, scenes, scriptlets, and roles that might be applied (see Fischer et al. 2013). Below, we explain that computer support systems for CABLe should take learners’ internal scripts into account if they aim to achieve second-order argument scaffolding.

The Interplay Between Internal and External Scripts

An internal script (i.e., prior procedural knowledge) is a set of knowledge and strategies that determines how a person will understand and act in particular situations, such as in argumentative situations (Kollar et al. 2006, 2007). For each individual, this procedural knowledge is structured in the form of cognitive scripts based on repeated prior experiences within argumentation situations. The internal script of each individual learner could be developed through learning by doing. In contrast, external scripts are embedded not in learners’ cognitive system but in their external surroundings, with the aim of providing learners with guidelines for desirable or undesirable actions.

An external argumentation script (as a type of an external collaboration script; Fischer et al. 2013, p. 56) “is a configuration of representations (e.g., textual or graphical) of a[n argumentation] practice and its parts . . . . The external [argumentation] script is presented to a group of learners by an external source (e.g., a teacher or a website interface) as a means to guide their [argumentative] activities.” External scripts are likely either to guide learners in accomplishing the task or to be gradually internalized (i.e., faded over time; Kollar et al. 2007). The first approach aims to help learners accomplish domain-specific argumentation tasks by being continuously accessible in the learning environment. This has been termed “distributed intelligence approaches to scripting” or “tools for living” (see Stegmann et al. 2007; Pea 2004). Such scripts “provide learners with a scaffold to enable them to participate in high-quality argumentation beyond their current level of competence and construct knowledge on argumentation that is distributed by the script” (Stegmann et al. 2007, p. 422). An example of tools for living is the sentence openers for promoting students’ sentence construction, interaction, reasoning, and argumentative dialogue processes/practices (see Noroozi et al. 2013). The second approach uses external aids for better understanding complex domain-general concepts or processes, which persuades learners to utilize learned competence with external support withdrawn through fading mechanisms. This has been termed “scaffolding approaches to scripting” or “tools for learning” (see Stegmann et al. 2007; Pea 2004). An example of this is a script for the construction of a sound single argument, or a script for engaging in argumentation sequences (see Kollar et al. 2007). 
Tools for learning can be regarded as tools for living if learners lack the capability to internalize external scripts: in that case, external support cannot be gradually withdrawn through fading mechanisms (Carmien et al. 2007).

Scientific evidence suggests that the optimal learning scenario—in this case acquiring argumentation competence—depends on the interplay between external and internal scripts (see Kollar et al. 2007). The external script may interfere with the internal script when it targets previously developed internal script components that do not need further scaffolding, or targets them in a way that conflicts with how the person already works effectively. As a result, processing these unneeded or conflicting scaffolds not only may cause unnecessary cognitive load but may also prevent learners from developing higher-level internal script components by taking away their self-regulation (see Fischer et al. 2013; Wecker and Fischer 2011).

This may happen when the internal script is of a high level and the external script is redundant (i.e., in terms of cognitive load), which is comparable to the expertise reversal effect where excessive instructional support not only does not lead to learning, but is also detrimental for experienced learners (see Kalyuga et al. 2003; Van Gog et al. 2005b). A way around this problem is to diminish support and guidance before it conflicts with already available cognitive schemes of the learners (Van Merriënboer and Kirschner 2013). Therefore, acquiring argumentation is best served when external instructional scaffolds are adjusted to learners’ individual internal scripts. This can be seen as an application of the theory of “contingent tutoring” (Wood and Wood 1999) for acquiring argumentation competence. If no need is diagnosed, no support is provided, and the more a learner progresses, the less support is provided. The following section describes how to diagnose a learner’s internal argumentative script and how to trigger it with external support to promote second-order argument scaffolding.
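The contingent-tutoring principle (no diagnosed need, no support; the more the learner progresses, the less support) can be summarized in a minimal sketch. The numeric scales and the fading rate below are assumptions for illustration, not values proposed in the literature.

```python
def support_level(diagnosed_need: float) -> float:
    """Contingent-tutoring sketch: support is proportional to the diagnosed
    need and vanishes when no need is diagnosed.
    `diagnosed_need` is assumed to lie in [0, 1]: 0 means the learner's
    internal script already covers the activity, 1 means maximal need."""
    if diagnosed_need <= 0.0:
        return 0.0  # no need diagnosed -> no support provided
    return min(diagnosed_need, 1.0)

def fade(current_support: float, progress: float, rate: float = 0.5) -> float:
    """Adaptive fading sketch: reduce support as the learner progresses.
    `progress` in [0, 1] is the observed gain in competence; `rate` is an
    assumed fading coefficient. Support never drops below zero."""
    return max(0.0, current_support - rate * progress)
```

The point of the sketch is only the monotonic relationship: support tracks diagnosed need downward, never upward, mirroring the expertise reversal concern above.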

Determining the Learner’s Internal Argumentative Script

The first step for designing instruction for acquiring argumentation competence is determining the learner’s internal argumentative script, though there is no consensus among scholars on how to determine it. Stegmann et al. (2007), for example, used a performance test to diagnose the learner’s current level of argumentation competence. They determined argumentation competence by giving learners the task of writing an individual analysis of a real problem prior to collaborative argumentation. The individual analyses were then segmented into propositional units and coded with respect to two aspects of argumentation, namely constructing single arguments and constructing argumentation sequences. Assessment of construction of single arguments for each learner was based upon the sum of the supported and/or qualified claims in the text. Assessment of construction of argumentation sequences for each learner was based upon the number of transitions between the message types (argument, counterargument, or integration). Kollar et al. (2007), on the other hand, analyzed the responses of learners who were given a fictitious discourse excerpt about a science topic and were then required to identify good arguments (e.g., those accompanied by reasons or argumentative sequences that were adequate) and poor arguments (e.g., those lacking reasons or argumentative sequences that were too short). A median score was calculated to classify learners, depending on their responses, as having either a low- or high-quality internal script. Noroozi et al. (2013) used a combination of both methods to determine the internal argumentative scripts of individual learners. As a performance test, learners were given argumentative texts and asked to identify the “complete” and “incomplete” explicit arguments. They were asked to back up their choices with explanations and arguments. 
Complete arguments contained all components of the simplified Toulmin model (i.e., claim, ground, and qualifier), while incomplete arguments lacked at least one of the components. Construction of single arguments for each learner was determined based upon the number of correct identifications of complete and incomplete argumentative texts as well as their reasonable explanations of the choices made. Learners were also asked to identify “good” and “poor” argumentation in a fictitious discourse. Good argumentative texts contained all components of the Leitão model (i.e., argument, counterargument, integration), whereas poor argumentative texts lacked at least one of those components (e.g., too short, nonsequential, and/or unsupported arguments). The result of this test was used as the criterion for formal quality of single arguments for each individual learner. For both tests, students were asked to back up their choices with explanations and arguments. The quality of construction of argumentation sequences for each learner was determined based upon the number of correct identifications of poor and good argumentations as well as their reasonable explanations of the choices made. Noroozi et al. (2013) also used the same technique as Stegmann et al. (2007) to measure internal argumentative scripts of individual learners. The learners’ analyses were segmented into propositional units and coded with respect to constructing single arguments and constructing argumentation sequences.
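As a sketch of how such a diagnosis could be operationalized, the fragment below codes arguments as complete or incomplete against the simplified Toulmin model and applies a median split of the kind Kollar et al. (2007) describe. It assumes that each argument’s components have already been labeled by human coders or an automated tool; all function and variable names are illustrative.

```python
import statistics

# Components of the simplified Toulmin model used in the diagnosis above.
TOULMIN = {"claim", "ground", "qualifier"}

def is_complete(argument_components: set[str]) -> bool:
    """An argument is complete if it contains all Toulmin components;
    it is incomplete if at least one component is missing."""
    return TOULMIN <= argument_components

def classify_internal_scripts(scores: dict[str, int]) -> dict[str, str]:
    """Median split: learners scoring at or above the median are classified
    as having a high-quality internal script, the rest as low-quality."""
    median = statistics.median(scores.values())
    return {learner: ("high" if score >= median else "low")
            for learner, score in scores.items()}
```

A score for each learner could then be the count of correctly identified complete and incomplete arguments, fed into the median split to assign low- or high-quality internal scripts.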

It is questionable, however, whether either performance tests or analyses prior to actual collaborative argumentation are reliable indicators of an individual’s internal argumentative script. First, such individually administered measures do not necessarily capture the script needed for understanding and acting in argumentative situations. Argument construction, moves, and sequences can differ when learners actually engage in argumentation from when they perform tests individually. Because of this, Kollar et al. (2007) also used participants’ actual behavior to assess internal scripts. Depending on the nature of collaborative discourse, group members may employ strategies that enhance the group product but are not necessarily the same as what they do individually (Prichard et al. 2006; Weinberger and Fischer 2006). Second, learners may have solo argumentation competence that they are not able to apply when arguing with others. Stegmann et al. (2007, 2012), Kollar et al. (2007), and Noroozi et al. (2013) all found that, although individual learners could construct good single arguments, they were not always able to apply this competence in a comparable collaborative problem-solving task. Therefore, it is also necessary to use actual discourse activities to reliably measure a person’s internal script for collaborative argumentation (see also Andrew and McMullen 2000).

Scheuer et al. (2012) classified the available automated analysis techniques to this effect into four categories, namely syntactical analysis, problem-specific analysis, reasoning analysis, and collaborative filtering analysis. The description, functionalities, and applications of each of these techniques are presented in detail in Scheuer et al. (2012). With the advancement of artificial intelligence systems and analysis techniques, it is possible to design natural language and computational processing systems that detect the internal argumentative scripts of learners while they are engaging in CABLe. For example, educational technologies such as ARGUNAUT (McLaren et al. 2010), TagHelper (Rosé et al. 2008), Rashi (Dragon et al. 2006), Belvédère (Suthers 2003), and LARGO (Pinkwart et al. 2009) have used artificial intelligence and language content analysis techniques to automatically analyze student argumentation moves and structures (see also Noroozi and McAlister 2017). In addition to the analysis of individual argumentation, Mu et al. (2012) successfully implemented natural language processing (NLP) technologies to code and analyze the micro-argumentation dimension of the discourse prior to a learning phase.

This paper specifically presents guidelines on how to determine a student’s internal argumentative script according to the SToG. Knowledge about a CABLe practice, based on SToG, consists of a set of somewhat hierarchical components including plays, scenes, scriptlets, and roles (Fischer et al. 2013). The play component has to do with knowledge about the type of so-called story that participants are involved in. This could be knowledge and expectations about the sequences of activities, scenes, and roles. For example, for CABLe, a learner’s internal script may include a play for engaging in collaborative argumentation or collaborative problem-solving in a specific situation. The scene component has to do with knowledge about situations that may follow each other within the play. For example, during CABLe, a possible scene could be the initial individual idea generation by different learners or the joint development of a solution for the issue at stake (Fischer et al. 2013; Kollar et al. 2014; Vogel et al. 2016).

The role component has to do with knowledge about single activities across the different scenes of the play as distributed among the members of the group. For example, during CABLe, a possible role for a play is the role of “learner.” During CABLe in a dyad, for example, both learners have the role of “learner,” which interacts with the scene components. Even though the role remains the same for the two learners throughout the whole play, the scenes in the play can differ. For example, in scene 1, learner A provides an analysis of the case (analyzer) and learner B responds to this analysis by providing critique (criticizer), but in the next scene, learner B provides an analysis and learner A provides critique. As the learners keep the same role throughout the entire play, they follow a sequence of different scenes within the play that activate different scriptlets. Finally, the most subordinate component, the scriptlet, has to do with knowledge and expectations about the type of activities that can occur within particular scenes. For example, in a particular scene for responding to analysis, a learner’s internal script may include scriptlets suggesting to first read the learning partner’s analysis, then evaluate it, and then provide counterarguments to it (see Fischer et al. 2013; Kollar et al. 2014). Detailed information on the SToG with examples from a range of different studies can be found in Fischer et al. (2013), Kirschner and Erkens (2013), Kollar et al. (2014), and Vogel et al. (2016).
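As a rough illustration (not part of the SToG formalism itself), the hierarchical components described above might be represented as a nested data structure; all labels are hypothetical examples drawn from the dyad scenario above:

```python
# Illustrative representation of SToG internal script components
# (hypothetical labels; not part of the theory's own formalism).
internal_script = {
    "play": "collaborative argumentation",
    "scenes": ["individual analysis", "respond to analysis", "joint integration"],
    # Both members keep the role "learner" throughout the play.
    "roles": {"learner A": "learner", "learner B": "learner"},
    "scriptlets": {
        # Activities expected within the "respond to analysis" scene.
        "respond to analysis": [
            "read the learning partner's analysis",
            "evaluate the analysis",
            "provide counterarguments",
        ],
    },
}
```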

There are two other possible ways to measure internal scripts. The first approach is primarily suited for research purposes and involves cued retrospective recall, a technique often used in eye-tracking research (Eger et al. 2007; Van Gog et al. 2005a). This technique combines traditional retrospective reporting with cueing based upon the person’s own actions (here, the arguments given). According to Van Gog et al. (2005b), “[I]n cued retrospective reporting, participants are instructed to report retrospectively on the basis of a record of observations or intermediate products of their problem-solving process, which they are shown to cue their memories of this process. This is known to lead to better results because of less forgetting and/or fabricating of thoughts than plain retrospective reporting (Van Someren et al. 1994). More important…cued retrospective reporting based on a cue that shows participants’ actions might lead to more actions being reported, without losing the retrospective nature and its associated information types.” (p. 238).

In this way, the researchers can gain insight into the underlying internal scripts used by the respondents. The second approach involves a two-step expert modeling procedure similar to Jarodzka et al.’s eye movement modeling examples, known as EMMEs (see Jarodzka et al. 2010; Van Gog et al. 2009). In this approach, expert argumentation schemas are first collected, here with respect to the argumentation used in a specific case. These expert schemas are then studied and discussed retrospectively with the experts, much in the same way as is done in expert visualization studies. In EMME research, videos of experts carrying out a problem-solving task serve as models of how the task should be carried out. Rather than simply making use of an expert’s natural performance on these tasks, the expert is asked to recreate her/his behavior didactically, that is, to perform the task deliberately while imagining explaining to someone who knows little of the process what the argumentation used was based upon. These expert argumentation schemata would then be used as a baseline for real-time comparison with the schema used by the novice. In this way, it would be possible to measure the development of the novice’s internal script.

Below, we explain how to use adaptive external support by taking into account learners’ internal scripts in order to scaffold second-order argumentation competence.

Adaptive External Support for Collaborative Argumentation-Based Learning

The second step for aiding in acquiring argumentation competence is providing adaptive external support at the individual and the group level based on the components of the internal scripts determined during CABLe, as external scripts will only be effective when they trigger the accompanying specific collection of internal script components (Fischer et al. 2013). Two different types of scripts can be distinguished.

One script type deals with recurrent aspects of a situation that can be generalized to other situations and thus can be presented as an instruction prior to a class or task of argumentation (see Van Merriënboer and Kirschner 2013). This type of script typically includes the rules of argumentation designed as an expert model based on the desirable structure of the argument patterns (see Scheuer et al. 2012). The main purpose of such scripts is to target aspects of argumentation that are almost always present, such as that a claim must always be backed up with evidence. As discussed, for various reasons (e.g., pressure in a real-time situation and the ill-defined nature of argumentation; social, emotional, and individual perception of argumentation; the dynamic nature of collaborative argumentation), even when the rules are given prior to a class or task, learners might still follow different patterns of argumentation during actual discourse. In this case, automated analysis would be necessary during actual discourse to detect rule violations. Feedback authoring tools could then alert students when a violation of the valid argumentation model was found (e.g., Here you don’t back up your claim. How can you solve this?).

The other script type deals with nonrecurrent situation-specific aspects (Kester et al. 2001; Van Merriënboer and Kirschner 2013) that are not typically repeated regularly during CABLe (e.g., Here your opponent just used the logical inverse of the original statement. Is this allowable?). Such aspects may just happen once during the discourse and therefore are called “nonrecurrent situation-specific aspects.”

As discussed, according to the SToG, each learner’s particular activity during CABLe is guided through a (re)configuration of existing internal script components including play, scenes, roles, and scriptlets. In line with the SToG, we provide some examples below of how feedback authoring tools can offer adaptive support based on the components of the determined internal scripts during CABLe.

Application of the Script Theory of Guidance for Realization of Adaptive External Support for Collaborative Argumentation-Based Learning

In this section, based on the SToG, we focus specifically on a typology of different types of external support (instruction) that can be adapted to different configurations on the internal script side. The SToG assumes its components to be flexible and interactive. When a learner has selected a certain play in a given situation (such as collaborative argumentation or collaborative problem-solving), this “selected” play yields expectations about the phases of the CABLe situation (knowledge of which is stored in “scenes”), and the selection of a particular scene yields expectations about activities that are likely to be shown by different actors in the play (i.e., the scriptlet and the role components). When one single activity for which knowledge can be represented in a scriptlet is missing during CABLe (e.g., the learner does not provide warrants to claims), this could be a sign that a certain internal script component is lacking in the learner’s repertoire (i.e., the learner does not know that a claim needs to be warranted or does not know how to provide warrants to claims), or that a functional play in the learner’s repertoire is not yet enacted.

When a single activity for which knowledge can be represented in a scriptlet is repeatedly missing in a certain play, then possible assumptions are that the scriptlet is just not associated with the play or the scriptlet is not at the learner’s disposal. So, even though the learner might be capable of describing or enacting that activity, that activity might not be demonstrated during learning. In such a case, a reminder could help the learner to enact that activity. Another assumption is that a certain scriptlet is repeatedly missing in a certain play because a certain internal play might not be in the repertoire of the learner. In such a case, a direct instruction by the feedback authoring tool is likely to work better than a reminder instruction. When a certain internal script component is not part of the learner’s repertoire (i.e., a lack of knowledge about the particular scriptlet is observable or discernible), adaptive support would mean that “direct instruction” will need to be given explaining how this activity is to be enacted (e.g., Here you don’t provide warrants for your claim. Warrants are the underlying assumptions that connect your data to your claim. You need to provide one or more for your claim). When a single activity for which knowledge can be represented in a scriptlet is sometimes available and sometimes missing in a certain play, then we can assume that a certain internal script is in the repertoire of the learner but that it is sometimes not enacted (for whatever reasons: group pressure, focusing on task completion, situational, relational, cognitive, emotional, and social barriers during CABLe, etc.). Therefore, in such a case when the play is part of the repertoire of the learner but not enacted, a reminder instruction (e.g., Here you forgot to provide warrant(s) for your claim. How can you solve this?) by the authoring tool would work better than a direct instruction. 
For this situation, adaptive support would mean that the learner will need only to be pointed to this “suboptimal” selection of the scriptlet that is associated with the play.

Such direct and reminder instructions can be realized for several other nonenacted internal script components during CABLe, such as when a certain scriptlet is missing with regard to providing grounded and qualified claims, responding to counterarguments, generating multiple arguments, analyzing, integrating, and extending arguments, and/or engaging in transactive argumentation (see Noroozi et al. 2012 for a list of CABLe activities). Since CABLe also includes group activities, reminder and direct instructions can sometimes be given to all members of a learning group at the same time rather than only to individual members. This is the case when most or all group members have a dysfunctional play or lack knowledge, for example, of how to build reasoning on the reasoning of other group members during CABLe (see Teasley 1997). In such a case, a direct instruction sent to all group members could be a message that they need to engage in high-level transactive argumentation during CABLe. An example of a reminder instruction here could be a message that simply alerts members of a group to build on the reasoning of other group members (e.g., Did you forget to integrate one another’s arguments?).

When a functional play is in a learner’s repertoire and is adequately enacted during CABLe (i.e., a certain expected scriptlet is observed), it might be beneficial to receive a reminder instruction with positive feedback indicating that the learner is doing it right. For example, when a learner provides grounded claims a couple of times during CABLe, the feedback authoring tool can send a positive message indicating that the learner has adequately grounded claims according to the rules of argumentation (e.g., Good job with providing grounded claims). This would encourage the individual learner always to provide grounded claims. These types of positive feedback could not only be given to individual learners but also to the members of a learning group at the same time when they engage in fruitful CABLe, such as engaging in transactive collaborative argumentation according to the rules of argumentation (e.g., Good job with building on the reasoning of one another’s contributions). Of course, such reminders for positive feedback should not be overused: such prompts, if used too frequently, might distract students from engaging in actual CABLe.

One challenge for the realization of adaptive scripting on the basis of the SToG is how to clarify whether the lack of a certain scriptlet component is due to (a) the lack of a certain internal script component which is not part of the learner’s repertoire or (b) a functional play that is in the repertoire of the learner yet not enacted. In such a case, our suggestion is to rely on the continuous assessments of the performance of the learner during CABLe (e.g., with the aid of the authoring tool) that are diagnosed by the underlying system algorithms. When a certain scriptlet is repeatedly missing in a certain play, then we can assume that a certain internal play is not in the repertoire of the learner. In such a case, a direct instruction by the feedback authoring tool would work better than a reminder instruction. When a certain scriptlet is sometimes available and sometimes missing, then we can assume that a certain internal script is partially present in the repertoire of the learner but that at times, due to situational, motivational, relational, or other factors, it would not be enacted. In such a case, a reminder instruction by the authoring tool would work better than a direct instruction.
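The diagnostic logic described in this section can be sketched as a simple decision rule. The thresholds and names below are illustrative assumptions, not part of the SToG; a real system would rest on the continuous, algorithmically diagnosed assessments mentioned above:

```python
def choose_instruction(observations):
    """Choose a feedback type from a history of observations of one
    scriptlet across its opportunities in a play
    (True = activity shown, False = activity missing).
    The all-or-nothing thresholds are illustrative assumptions."""
    if not observations:
        return None  # nothing diagnosed yet
    miss_rate = observations.count(False) / len(observations)
    if miss_rate == 0:
        return "positive"   # adequately enacted: occasional encouraging feedback
    if miss_rate == 1:
        return "direct"     # repeatedly missing: likely not in the repertoire
    return "reminder"       # sometimes missing: in the repertoire, not always enacted

print(choose_instruction([False, False, False]))  # direct
print(choose_instruction([True, False, True]))    # reminder
```

The same rule could be applied at the group level by pooling observations across group members before choosing between a group-wide direct instruction and a group-wide reminder.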

In the following section, we explain how to implement adaptive external support by taking into account the current level of internal argumentative scripts of the individual learners in such a way as to promote the transfer of argumentation to similar situations.

Adaptive Fading of the External Support

The third step for acquiring and applying argumentation competence in similar situations is to provide learners with adaptive fading of the external support so that they can develop their own internal argumentative scripts. Providing learners with automatic adaptive feedback alone does not guarantee successful application of argumentation competence in similar situations when external support is no longer available. Based on the SToG, the hypothesis is that learners first need to be supported by adaptive external support to develop their corresponding internal script components, with repeated application to internalize external support (acquisition), and then, they need the opportunity to practice and apply their newly developed internal script components for regulating their activities, to use their internal argumentative script in similar situations (consolidation). Such adaptive support is particularly effective for learners when the idiosyncratic approach becomes similar to the external script. Based on this hypothesis, the process of internalization of scripts can be divided into two steps: acquisition and consolidation. When learners are supported by adaptive external support to acquire corresponding script components, the results of such internalization of script components can be seen as acquisition. When learners have already developed an internal script component, the result of the application of this internal script component can be seen as consolidation that mostly depends on self-regulated application of script components. The internalization of the external script and the further development of one’s own internal script can best be achieved if and when the learner is aware of the corresponding activities and the underlying reasoning behind the activities (Fischer et al. 2013); otherwise, it becomes a procedure aiding the student at that moment and will not be transferred to other relevant situations. 
When students are aware of the importance of the external script elements presented through scaffolding, it is more likely that they will internalize the corresponding script and, in this case, the externalization of scripts (consolidation) would become easier. Fading external script components is a way to provide learners with the opportunity to practice their newly developed skills with the aim of regulating their activities and consolidating their internal scripts.

Fading is an integral part of scaffolding that allows learners to take over control of their cognitive activities and to initiate and adapt the corresponding learning activities themselves for the acquisition of skills such as argumentation (Wecker and Fischer 2011). Fading instructional support relies on the position that, when the learner is able to carry out the required action, the support should be gradually reduced until it is no longer needed (see Van Merriënboer and Kirschner 2013). Fading is not restricted to one specific pedagogy and has been studied, for example, with regard to collaborative learning (Bouyias and Demetriadis 2012; Tsovaltzi et al. 2012; Wecker and Fischer 2011), inquiry learning (McNeill et al. 2006), learning for conceptual change (Biemans and Simons 1996), and worked examples (Renkl and Atkinson 2003; Van Gog and Rummel 2010). Although scientific evidence demonstrates the effectiveness of fading for learning in some studies (e.g., Renkl et al. 2004; Tsovaltzi et al. 2010), mixed (e.g., Leutner 2000; McNeill et al. 2006) and even disappointing (e.g., Bouyias and Demetriadis 2012; Wecker et al. 2010) results are reported as well. In the following section, we explain these inconclusive results for various fading instructional scaffolds in CABLe situations.

Fading Scaffolds for Collaborative Argumentation-Based Learning

To date, various approaches have been used for fading instructional support for CABLe. These approaches include fading through time control (McNeill et al. 2006), based on learning phase progression (Lee and Songer 2004), based on number of posted messages (Bouyias and Demetriadis 2012), and based on number of information search strategy clues (e.g., Wecker et al. 2010). Mixed results have been achieved for these fading scenarios. For example, McNeill et al. (2006) showed that when support is faded over time, students provide stronger explanations themselves in terms of their reasoning compared to the continuous support group. On the other hand, Wecker et al. (2010) found that fading controlled by the number of information searches during discourse was not successful in terms of internalization of the external script. The authors attributed this to the fact that fading was not based on actual quality of the discourse processes. Bouyias and Demetriadis (2012) found that fading controlled by the number of posted messages during discourse did not yield better acquisition of argumentation competence than continuous script support or peer-monitoring support. Students in the peer-monitoring group acquired higher levels of argumentation competence than those in the continuous script support group. The same was true with regard to domain-specific knowledge acquisition. In other words, students in the peer-monitoring group outperformed students in the fading and continuous support groups. Furthermore, in a study by Lee and Songer (2004), fading that was controlled by the learning progression phase during discourse yielded a lower quality of reasoning explanations than continuous support. A study by Vogel et al. (2015) also found that the adaptable argumentation script did not lead to any better outcome than the high-structured and low-structured scripts during collaborative work on mathematical tasks. 
However, in the adaptable argumentation script condition, self-regulation skills were a significant positive predictor of argumentation skills.

Fischer et al. (2013) as well as Wecker and Fischer (2011) do not consider these inconclusive and disappointing results surprising. They argue that fading can be an effective approach only when the design of the external support components is based on the learners’ internal script components in such a way as to secure continuous application of the suggested strategy even after the external script components are faded out. In CABLe, external script components could be faded to enhance self-regulated learning and to avoid cognitive overload in overly scripted collaborative tasks, provided that the fading procedure is tuned to the level of the learners’ internal script components (Dillenbourg 2002; Jermann and Dillenbourg 2003). Wecker and Fischer (2011) argue that fading the instructional script alone does not automatically guarantee successful transfer of learning responsibility and control from the environment to the learner. This is because learners need to dynamically reconfigure their internal script components in response to changing situations and their individual goals in order to continue acting in accordance with the strategy suggested by the external script (see also Fischer et al. 2013). Therefore, fading itself should be supported by other complementary approaches that can secure the continuous application of the targeted activities even after the support is faded out.

Complementing Fading for Collaborative Argumentation-Based Learning

To date, only a limited number of approaches have been proposed to complement fading for internalizing and securing continuous application of CABLe strategies (as suggested by external scripts) by learners themselves. The most prominent recent approach is the use of adaptive fading (e.g., Kumar et al. 2007; Tsovaltzi et al. 2010; Vogel et al. 2015). To guarantee the continuous application of a strategy (as suggested by external instructional support), adaptation and adjustment of the strategy through fading is necessary for internalization of the external script and enhancement of internal script development. That is why, in an empirical study by Tsovaltzi et al. (2010), the conclusion was drawn that adaptive fading, in which external script components are continuously adjusted to the quality of discourse activities, can be much more effective than fixed fading regimes for the development of learners’ internal script components. As discussed, adequate and proper diagnosis of the discourse and situation is essential for the realization of adaptive fading.
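A minimal sketch of such adaptive fading, in which the amount of support is continuously adjusted to the assessed quality of discourse activities, could look as follows (the quality metric, thresholds, and level scale are assumptions for illustration, not a description of the cited systems):

```python
def fade_step(level, quality, threshold=0.8, max_level=3):
    """Adjust the external support level to observed discourse quality.

    level:   current support level, 0 (fully faded) .. max_level (full script).
    quality: diagnosed quality of recent discourse activities in [0, 1].
    The threshold and the one-step adjustment are illustrative assumptions.
    """
    if quality >= threshold:
        return max(0, level - 1)          # high quality: fade one step
    return min(max_level, level + 1)      # low quality: reintroduce support
```

A controller like this would be invoked after each diagnostic cycle; a level of 0 corresponds to fully faded support, and low-quality discourse brings faded support back rather than leaving the learner unassisted.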

Although adaptive fading might be a good strategy for fostering the acquisition of argumentation competence in the rather short term (e.g., Kumar et al. 2007; Tsovaltzi et al. 2010), there is no evidence for longer-term effects of adaptive fading on developing an internal script and on applying the acquired argumentation competence in similar learning situations. We argue that, although adaptive fading may foster the transfer of learning responsibility and control from the environment to the learner in a particular situation (see Wecker and Fischer 2011), it alone does not guarantee successful application of the acquired competence in comparable situations. Adaptive fading may lead to successful internalization of the external script (see Kumar et al. 2007; Tsovaltzi et al. 2010); however, fading support for something that has not been fully understood or learned, even if it has been internalized in a particular situation, cannot be sufficient for fostering independent learner application of the targeted activities in the long run. We thus need to make sure that adaptive fading targets both internalization of the external script (i.e., acquisition) and development of the internal script (i.e., consolidation). To this end, we propose using peer assessment, automatic response support tools, and (adaptable) self-assessment when using adaptive fading to make sure that learners actually understand and learn (i.e., achieve consolidation as well as acquisition) the targeted activities as suggested by external support.

Peer Assessment Fading Approach (Indirect Feedback)

Peer assessment has been considered as a powerful instructional practice to enhance both students’ motivation and argumentation quality (Gabelica et al. 2012; Nelson and Schunn 2008). Receiving feedback from learning peers with the same motivational needs and also giving them feedback in a reciprocal manner are important aspects of learning processes during CABLe. Peer assessment provides students with the opportunity to broaden and deepen their thinking and understanding when they compare their own line of reasoning and arguments with those of others (Nelson and Schunn 2008; Yang 2010).

Peer assessment can be used as a sort of indirect feedback for adaptive fading to make sure that learners have actually learned the targeted activities that have been externally supported. Using feedback from learning partners in a group can be a suitable approach to adjusting adaptive support and informing learners about their current state and progress. In such an approach, contributions from learning partners can also be used as input for self-assessment and for reflecting on what contributes to high-quality performance during CABLe. This is likely the reason why, in a study by Wecker and Fischer (2011), fading the external script fostered the acquisition of argumentation: the argumentation strategy suggested by the external script was monitored by the learning partner. Peer assessment for adaptive fading has other advantages as well: for example, it helps students engage in collaborative discourse by bringing disagreement over conflicting arguments into the open (Wecker and Fischer 2011).

Although scientific literature highlights the importance and the features of high-quality peer assessment for argumentation quality (see Nelson and Schunn 2008; Tsai and Chuang 2013), peer assessment can be challenging, especially with regard to constructing high-quality feedback in CABLe settings (see Noroozi et al. 2012). There could be several reasons for this. First, peer assessment requires high-level cognitive processing (King 2002), and this may not happen intrinsically (Kollar and Fischer 2010). Second, there are psychological, emotional, and social barriers to peer assessment during CABLe that may cause assessment to remain at the surface level and lack well-founded arguments for promoting critical thinking and deep and elaborative learning. For example, some students would be reluctant to oppose and disagree with their learning peers, while others may not appreciate being challenged themselves. Furthermore, less assertive students may avoid giving critical assessment merely due to the (negative) competitive and disagreement aspects of the critique (Nussbaum et al. 2008). Last but not least, not all students fully trust in the competence of their learning peers to evaluate their work (Kaufman and Schunn 2011). Distrust in the quality of the peer assessment of learning peers may not only impede learning but also create a negative perception that can even evoke negative emotional responses and further complications during CABLe (Cheng et al. 2014). Lack of trust among learning partners can be minimized by using multiple raters instead of just one, as well as assigning and rotating the roles of students in the group (see Cho and Schunn 2007). This might reduce the provocation of the negative perceptional and emotional responses to the feedback, which is a factor that impedes learning (Cheng et al. 2014; Hanrahan and Isaacs 2001; Shute 2008). These challenges point to the need for additional instructional strategies to complement fading in CABLe settings. 
In particular, strategies that complement the fading of external support are needed in CABLe environments to fully safeguard effective peer assessment.

Fading Approach Through Automatic Response Support Tools (Direct Feedback)

One approach to coping with the challenges inherent to peer assessment when complementing adaptive fading is to provide direct feedback on adherence to the rules of argumentation through automatic response support tools. Such an approach could diminish the risk of distrust and of low-quality assessment by learning peers. The use of automatic diagnostic tools is a way to ensure, when using adaptive fading, that learners have actually learned the targeted activities as suggested by external support during CABLe. Such tools can determine to what extent, when, and how support can be faded during discourse activities. Wecker and Fischer (2011) propose employing diagnostic tools based on standard methods such as recall measures and reaction times, or on advanced methods such as computer-linguistic tools (Mu et al. 2012) or script formalization (Hernandez-Leo et al. 2010). The type of support should, however, be attuned to students’ competence in constructing valid arguments and engaging in discourse activities according to the rules of argumentation.

(Adaptable) Self-Assessment Fading Approach

One approach to coping with the challenges inherent to peer assessment and automatic response support tools when complementing adaptive support is to give students the freedom to control the type and amount of support based on their self-perceived needs (Vogel et al. 2015); this might also benefit their self-regulated learning (Järvelä and Hadwin 2013). In this approach, which can be used for adaptive fading and for ensuring that learners have achieved the targeted argumentation activities, learners themselves influence the fading process and switch parts of the support on and off (see Vogel et al. 2015) according to their own estimation of their competence level (Järvelä and Hadwin 2013).

One way to realize such an approach is to give learners control over the amount and the timing of the support they would like to receive (i.e., flexible support) during discourse activities. This means that the support is adaptable and is given to learners when they feel the need for it. For example, a learner who is unsure whether a given argument is valid can click a feedback support button, which then automatically checks the argument’s validity and, if it is invalid, explains why. This makes the feedback adaptable to the needs of each individual learner. Furthermore, it prevents frustration and over-scripting, since learners ask for feedback only when they feel they need support.

The scientific literature also contains critical views holding that giving students full control over the amount and timing of the support they receive during CABLe can be troublesome. Kirschner and Van Merriënboer (2013), for example, argued that learners typically lack the capacity to appraise both the demands of the task and their own learning needs in relation to that task when choosing appropriate instruction and support; as a result, they misregulate their learning, exert control in a misguided or counterproductive fashion, and fail to achieve the desired result. The authors attributed this to learners lacking the necessary knowledge and standards by which to monitor and judge their learning state. Another problem with the learner-in-control approach is that learners often choose the learning activities they prefer, yet what they prefer is not always what is best for them. Furthermore, there is a paradox of choice: the more options learners have to choose from, the harder and more frustrating the choice becomes. Kirschner and Van Merriënboer (2013) supported these arguments with a wealth of theoretical and empirical evidence. Apart from these problems of the learner-in-control approach, the success of adaptable support depends on learners’ metacognitive skills. In a study by Vogel et al. (2015), only learners with high levels of self-regulation skills were able to benefit from the opportunity for adaptable support, which implies that not all learners can benefit equally from it (see Vogel et al. 2015).

These concerns about adaptable support have consequences for complementing adaptive support through an adaptable fading approach to second-order argument scaffolding. It is therefore important to strike a balance in the control learners are given over the type and amount of support they receive during CABLe based on their self-perceived needs. Adaptable support (self-assessment) as a fading approach during CABLe should be combined with other approaches, such as automatic response support tools (direct feedback) and peer assessment (indirect feedback), to ensure the most effective adaptive fading support mechanism. Furthermore, scientific evidence shows that both self-assessment (see Fastré et al. 2010, 2014) and peer assessment (see Panadero et al. 2013; Schunn et al. 2016) work better when assessors receive clear criteria, such as a rubric, on which to base their assessment. Therefore, for valid assessment it would be wise to provide students with a list of criteria to be taken into consideration for (adaptable) self-assessment and peer assessment during CABLe.

Conclusions and Future Research Agenda

We have argued that most of the available computer support systems for CABLe have been designed for stimulating argumentation and interactive discussions among learners in a group for acquiring a complex cognitive skill (i.e., first-order argument scaffolding). There is no systematic evidence, and hence no knowledge, of whether such scaffolds can help students acquire argumentation competence that can be transferred for dealing with new comparable tasks (i.e., second-order argument scaffolding). We have also argued that the focus of computer support systems for CABLe should be expanded: from the study of first-order scaffolding alone, to including the study of second-order scaffolding as well.

Both arguments above are based on a synthesis of the extensive literature in this field rather than on profound empirical findings. For example, given the general nature of argumentation, which spans linguistics, philosophy, psychology, and education, there may be sparse empirical findings on second-order argument scaffolding that the authors of this manuscript have not seen and reviewed. We therefore acknowledge that some of our arguments are not empirically grounded, and we propose the following empirical research agenda for future work in this field.

First, although we argued that second-order argument scaffolding is assumed to include first-order argument scaffolding, this is not certain, and there is no empirical evidence for this assumption. Empirical research is therefore needed to test this hypothesis. This could be realized through a quasi-experimental study in the natural sciences with four conditions: (1) no argument scaffolding (control group), (2) first-order argument scaffolding, (3) second-order argument scaffolding, and (4) both first- and second-order scaffolding. If our arguments are confirmed empirically, educational designers and researchers can then follow our practical approach to design and implement tools for second-order argument scaffolding that also include first-order argument scaffolding.

Second, while some empirical findings (e.g., Kollar et al. 2014; Noroozi et al. 2013; Stegmann et al. 2012) report positive effects of computer-supported argument scaffolding on argumentation knowledge and/or skills (see Wecker and Fischer 2014 for a meta-analysis), as yet there is no evidence, and hence no knowledge, of whether such argument scaffolds can help students acquire argumentation competence that can be transferred by the students themselves to various learning tasks. It would be insightful to design comparable computer-supported argument scaffolds and test them empirically across various learning tasks in the same discipline, in order to see to what extent students can transfer their acquired argumentation competence to similar situations. This could be realized through a quasi-experimental study in which students are confronted with different complex tasks or various controversial issues over a period of time in a specific discipline. Such a study would allow testing of the short-term and long-term transfer effects of various argument scaffolds across argumentation tasks in the same discipline.

Third, this paper followed the design of the SToG (Fischer et al. 2013) to provide a practical guideline that could facilitate the design and implementation of computer-supported tools for second-order argument scaffolding. We specifically explained that the process of acquiring self-directed argumentation competence depends on proper diagnosis of learners’ internal argumentative scripts and on the provision of adaptive fading support, and we proposed various mechanisms for both. It is as yet unclear which of these mechanisms should be given priority, and to what extent their outcomes are consistent in an empirical setting. It is also unclear to what extent these mechanisms can be implemented, from a practical point of view, in an ongoing CABLe situation. We therefore advise that future empirical research in laboratory settings focus on the implementation of, and consistency among the outcomes of, the various mechanisms for diagnosing learners’ internal argumentative scripts and providing adaptive fading support accordingly. When the outcomes of these mechanisms are positive, they could be extended to real educational settings with direct practical relevance for educational practice.

Fourth, we proposed to complement adaptive fading support using self-assessment, automatic response tools, and peer assessment to make sure that the learners actually understand, learn, and apply targeted argumentative activities as suggested by the external support. These complementing approaches for adaptive fading support are not yet fully supported empirically. For example, it is not clear how each of these complementing approaches influences the internalization of external support (acquisition) and the application of the internal script components (consolidation) in similar situations. Future research therefore needs to investigate the extent to which various complementing approaches for adaptive fading influence the acquisition and consolidation of the internal script components of students. This would not only help us provide tailor-made feedback for individuals and groups of learners; it would also clarify how, when, and under what conditions external support should be faded out to foster internalization of external scripts (acquisition) and to secure self-directed acquisition and application of the argumentation competence for consolidation of the internal scripts even after the support is faded out.

Fifth and last, we proposed combining various complementing approaches to safeguard the most effective mechanism for adaptive fading support during CABLe. This requires future empirical research to compare and contrast the direct and interaction effects of various fading conditions on the development of internal scripts, the acquisition of argumentation competence, and their transfer effects. Future research could also investigate the mediating effects of internal scripts on the impacts of various fading conditions on the acquisition and transfer of argumentation competence and complex cognitive skill acquisition, in order to determine the most feasible and functional fading condition for facilitating students’ argumentation competence development during CABLe, as well as its transfer to similar situations as second-order argument scaffolding. Quasi-experimental studies can be designed with various adaptive fading supports, such as peer assessment, automatic response tools, and adaptable self-assessment, to test which of these mechanisms helps learners actually understand, learn, and apply targeted argumentation activities in similar situations.

References

  1. Andrews, R. (2010). Argumentation in higher education: Improving practice through theory and research. New York: Routledge.
  2. Andrew, G., & McMullen, L. M. (2000). Interpersonal scripts in the anger narratives told by clients in psychotherapy. Motivation and Emotion, 24(4), 271–284.
  3. Andriessen, J. (2006). Arguing to learn. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 443–460). New York: Cambridge University Press.
  4. Asterhan, C. S. C., & Schwarz, B. B. (2009). Transformation of robust misconceptions through peer argumentation. In B. B. Schwarz, T. Dreyfus, & R. Hershkowitz (Eds.), Transformation of knowledge through classroom interaction (pp. 159–172). London: Routledge.
  5. Baker, M. (2003). Computer-mediated argumentative interactions for the co-elaboration of scientific notions. In J. Andriessen, M. Baker, & D. Suthers (Eds.), Arguing to learn: Confronting cognitions in computer-supported collaborative learning environments (pp. 47–78). Boston: Kluwer.
  6. Baker, M. (2009). Intersubjective and intrasubjective rationalities in pedagogical debates: Realizing what one thinks. In B. B. Schwarz, T. Dreyfus, & R. Hershkowitz (Eds.), Guided transformation of knowledge in classrooms (pp. 145–158). New York: Routledge.
  7. Baker, M., & Lund, K. (1997). Promoting reflective interactions in a CSCL environment. Journal of Computer Assisted Learning, 13(3), 175–193.
  8. Biemans, H. J. A., & Simons, P. R. J. (1996). Computer-assisted instruction and conceptual change. Educational Research and Evaluation, 2(1), 81–108.
  9. Bouyias, Y., & Demetriadis, S. (2012). Peer-monitoring vs. micro-script fading for enhancing knowledge acquisition when learning in computer-supported argumentation environments. Computers & Education, 59(2), 236–249.
  10. Carmien, S., Kollar, I., Fischer, G., & Fischer, F. (2007). The interplay of internal and external scripts: A distributed cognition perspective. In F. Fischer, H. Mandl, J. Haake, & I. Kollar (Eds.), Scripting computer-supported collaborative learning: Cognitive, computational, and educational perspectives (pp. 303–326). New York: Springer.
  11. Cheng, K. H., Hou, H. T., & Wu, S. Y. (2014). Exploring students’ emotional responses and participation in an online peer assessment activity: A case study. Interactive Learning Environments, 22(3), 271–287.
  12. Cho, K., & Schunn, C. D. (2007). Scaffolded writing and rewriting in the discipline. Computers & Education, 48(3), 409–426.
  13. De Wever, B., Van Keer, H., Schellens, T., & Valcke, M. (2007). Applying multilevel modelling on content analysis data: Methodological issues in the study of the impact of role assignment in asynchronous discussion groups. Learning and Instruction, 17(4), 436–447.
  14. Dillenbourg, P. (2002). Over-scripting CSCL: The risks of blending collaborative learning with instructional design. In P. A. Kirschner (Ed.), Three worlds of CSCL: Can we support CSCL (pp. 61–91). Heerlen: Open Universiteit Nederland.
  15. Dragon, T., Woolf, B. P., Marshall, D., & Murray, T. (2006). Coaching within a domain independent inquiry environment. In M. Ikeda, K. D. Ashley, & T. W. Chan (Eds.), Proceedings of the 8th International Conference on Intelligent Tutoring Systems (ITS 2006) (pp. 144–153). Berlin: Springer.
  16. Eger, N., Ball, L. J., Stevens, R., & Dodd, J. (2007). Cueing retrospective verbal reports in usability testing through eye-movement replay. In L. J. Ball, M. A. Sasse, C. Sas, T. C. Ormerod, A. Dix, P. Bagnall, & T. McEwan (Eds.), People and Computers XXI - HCI...but not as we know it: Proceedings of HCI 2007 - Volume 1 (pp. 129–137). Swindon: The British Computer Society.
  17. Fastré, G. M. J., van der Klink, M. R., Amsing-Smit, P., & Van Merriënboer, J. (2014). Assessment criteria for competency-based education: A study in nursing education. Instructional Science, 42(6), 971–994.
  18. Fastré, G. M., van der Klink, M., & Van Merriënboer, J. J. G. (2010). The effects of performance-based assessment criteria on student performance and self-assessment skills. Advances in Health Sciences Education, 15(4), 517–532.
  19. Fischer, F., Kollar, I., Stegmann, K., & Wecker, C. (2013). Toward a script theory of guidance in computer-supported collaborative learning. Educational Psychologist, 48(1), 56–66.
  20. Gabelica, C., Van den Bossche, P., Segers, M., & Gijselaers, W. (2012). Feedback, a powerful lever in teams: A review. Educational Research Review, 7(2), 123–144.
  21. Ge, X., & Land, S. M. (2004). A conceptual framework for scaffolding ill-structured problem-solving processes using question prompts and peer interactions. Educational Technology Research and Development, 52(2), 5–22.
  22. Hanrahan, S. J., & Isaacs, G. (2001). Assessing self- and peer-assessment: The students’ views. Higher Education Research and Development, 20(1), 53–69.
  23. Hernandez-Leo, D., Jorrin-Abellan, I. M., Villasclaras-Fernandez, E. D., Asensio-Perez, J. I., & Dimitriadis, Y. (2010). A multicase study for the evaluation of a pattern-based visual design process for collaborative learning. Journal of Visual Languages and Computing, 21(6), 313–331.
  24. Jarodzka, H., Scheiter, K., Gerjets, P., & Van Gog, T. (2010). In the eyes of the beholder: How experts and novices interpret dynamic stimuli. Learning and Instruction, 20(2), 146–154.
  25. Järvelä, S., & Hadwin, A. (2013). New frontiers: Regulating learning in CSCL. Educational Psychologist, 48(1), 25–39.
  26. Jermann, P., & Dillenbourg, P. (2003). Elaborating new arguments through a CSCL script. In P. Dillenbourg (Ed.), Learning to argue (pp. 205–226). Dordrecht: Kluwer.
  27. Kaufman, J. H., & Schunn, C. D. (2011). Students’ perceptions about peer assessment for writing: Their origin and impact on revision work. Instructional Science, 39(3), 387–406.
  28. Kalyuga, S., Ayres, P., Chandler, P., & Sweller, J. (2003). The expertise reversal effect. Educational Psychologist, 38(1), 23–31.
  29. Kester, L., Kirschner, P. A., Van Merriënboer, J. J. G., & Baumer, A. (2001). Just-in-time information presentation and the acquisition of complex cognitive skills. Computers in Human Behavior, 17(4), 373–392.
  30. King, A. (2002). Structuring peer interaction to promote high-level cognitive processing. Theory Into Practice, 41(1), 33–39.
  31. Kirschner, P. A., Buckingham-Shum, S. J., & Carr, C. S. (Eds.). (2003). Visualizing argumentation: Software tools for collaborative and educational sense-making. London: Springer.
  32. Kirschner, P. A., & Erkens, G. (2013). Toward a framework for CSCL research. Educational Psychologist, 48(1), 1–8.
  33. Kirschner, P. A., & Van Merriënboer, J. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48(3), 169–183.
  34. Kollar, I., & Fischer, F. (2010). Peer assessment as collaborative learning: A cognitive perspective. Learning and Instruction, 20(4), 344–348.
  35. Kollar, I., Fischer, F., & Hesse, F. W. (2006). Collaboration scripts: A conceptual analysis. Educational Psychology Review, 18(2), 159–185.
  36. Kollar, I., Fischer, F., & Slotta, D. J. (2007). Internal and external scripts in computer-supported collaborative inquiry learning. Learning and Instruction, 17(6), 708–721.
  37. Kollar, I., Pilz, F., & Fischer, F. (2014). Why it is hard to make use of new learning spaces: A script perspective. Technology, Pedagogy and Education, 23(1), 7–18.
  38. Kuhn, D. (1991). The skills of argument. Cambridge: Cambridge University Press.
  39. Kumar, R., Rosé, C., Wang, Y. C., Joshi, M., & Robinson, A. (2007). Tutorial dialogue as adaptive collaborative learning support. In R. Luckin, K. R. Koedinger, & J. Greer (Eds.), Proceedings of the 13th International Conference on Artificial Intelligence in Education (AIED 2007) (pp. 383–390). Amsterdam: IOS.
  40. Lee, H. S., & Songer, N. B. (2004). Longitudinal knowledge development: Scaffolds for inquiry. Paper presented at the annual meeting of the American Educational Research Association, San Diego, CA.
  41. Leutner, D. (2000). Double-fading support: A training approach to complex software systems. Journal of Computer Assisted Learning, 16(4), 347–357.
  42. McAlister, S., Ravenscroft, A., & Scanlon, E. (2004). Combining interaction and context design to support collaborative argumentation using a tool for synchronous CMC. Journal of Computer Assisted Learning, 20(3), 194–204.
  43. McLaren, B. M., Scheuer, O., & Mikšátko, J. (2010). Supporting collaborative learning and e-discussions using artificial intelligence techniques. International Journal of Artificial Intelligence in Education, 20(1), 1–46.
  44. McNeill, K. L., Lizotte, D. J., Krajcik, J., & Marx, R. W. (2006). Supporting students’ construction of scientific explanations by fading scaffolds in instructional materials. The Journal of the Learning Sciences, 15(2), 153–191.
  45. Mu, J., Stegmann, K., Mayfield, E., Rosé, C., & Fischer, F. (2012). The ACODEA framework: Developing segmentation and classification schemes for fully automatic analysis of online discussions. International Journal of Computer-Supported Collaborative Learning, 7(2), 285–305.
  46. Nelson, M. M., & Schunn, C. D. (2008). The nature of feedback: How different types of peer feedback affect writing performance. Instructional Science, 37(4), 375–401.
  47. Noroozi, O., Biemans, H. J. A., Busstra, M. C., Mulder, M., & Chizari, M. (2011). Differences in learning processes between successful and less successful students in computer-supported collaborative learning in the field of human nutrition and health. Computers in Human Behavior, 27(1), 309–318.
  48. Noroozi, O., Biemans, H. J. A., & Mulder, M. (2016). Relations between scripted online peer feedback processes and quality of written argumentative essay. Internet and Higher Education, 31(1), 20–31.
  49. Noroozi, O., & McAlister, S. (2017). Software tools for scaffolding argumentation competence development. In M. Mulder (Ed.), Competence-based vocational and professional education: Bridging the worlds of work and education (pp. 819–839). Cham: Springer. http://dx.doi.org/10.1007/978-3-319-41713-4_38.
  50. Noroozi, O., Weinberger, A., Biemans, H. J. A., Mulder, M., & Chizari, M. (2012). Argumentation-based computer supported collaborative learning (ABCSCL): A systematic review and synthesis of fifteen years of research. Educational Research Review, 7(2), 79–106.
  51. Noroozi, O., Weinberger, A., Biemans, H. J. A., Mulder, M., & Chizari, M. (2013). Facilitating argumentative knowledge construction through a transactive discussion script in CSCL. Computers & Education, 61(2), 59–76.
  52. Nussbaum, E. M., Sinatra, M. G., & Poliquin, A. (2008). Role of epistemic beliefs and scientific argumentation in science learning. International Journal of Science Education, 30(15), 1977–1999.
  53. Panadero, E., Romero, M., & Strijbos, J.-W. (2013). The impact of a rubric and friendship on peer assessment: Effects on construct validity, performance, and perceptions of fairness and comfort. Studies in Educational Evaluation, 39(4), 195–203.
  54. Pea, R. D. (2004). The social and technological dimensions of “scaffolding” and related theoretical concepts for learning, education and human activity. The Journal of the Learning Sciences, 13(3), 423–451.
  55. Pinkwart, N., Aleven, V., Ashley, K., & Lynch, C. (2006). Toward legal argument instruction with graph grammars and collaborative filtering techniques. In M. Ikeda, K. Ashley, & T. W. Chan (Eds.), Proceedings of the 8th International Conference on Intelligent Tutoring Systems (ITS 2006) (pp. 227–236). Berlin: Springer.
  56. Pinkwart, N., Aleven, V., Ashley, K., & Lynch, C. (2007). Evaluating legal argument instruction with graphical representations using LARGO. In R. Luckin, K. R. Koedinger, & J. Greer (Eds.), Proceedings of the 13th International Conference on Artificial Intelligence in Education (AIED 2007) (pp. 101–108). Amsterdam: IOS.
  57. Pinkwart, N., Ashley, K. D., Lynch, C., & Aleven, V. (2009). Evaluating an intelligent tutoring system for making legal arguments with hypotheticals. International Journal of Artificial Intelligence in Education, 19(4), 401–424.
  58. Prichard, J. S., Stratford, R. J., & Bizo, L. A. (2006). Team-skills training enhances collaborative learning. Learning and Instruction, 16(3), 256–265.
  59. Rapanta, C., Garcia-Mila, M., & Gilabert, S. (2013). What is meant by argumentative competence? An integrative review of methods of analysis and assessment in education. Review of Educational Research, 83(4), 483–520.
  60. Ravenscroft, A. (2007). Promoting thinking and conceptual change with digital dialogue games. Journal of Computer Assisted Learning, 23(6), 453–465.
  61. Ravenscroft, A. (2011). Dialogue and connectivism: A new approach to understanding and promoting dialogue-rich networked learning. International Review of Research in Open and Distance Learning, 12(3), 139–160.
  62. Ravenscroft, A., & McAlister, S. (2006). Digital games and learning in cyberspace: A dialogical approach. E-Learning and Digital Media, 3(1), 37–50.
  63. Ravenscroft, A., & Pilkington, R. M. (2000). Investigation by design: Developing dialogue models to support reasoning and conceptual change. International Journal of Artificial Intelligence in Education, 11(1), 273–298.
  64. Renkl, A., & Atkinson, R. K. (2003). Structuring the transition from example study to problem solving in cognitive skill acquisition: A cognitive load perspective. Educational Psychologist, 38(1), 15–22.
  65. Renkl, A., Atkinson, R. K., & Große, C. S. (2004). How fading worked solution steps works: A cognitive load perspective. Instructional Science, 32(1–2), 59–82.
  66. Rosé, C., Wang, Y.-C., Cui, Y., Arguello, J., Stegmann, K., Weinberger, A., et al. (2008). Analyzing collaborative learning processes automatically: Exploiting the advances of computational linguistics in CSCL. International Journal of Computer-Supported Collaborative Learning, 3(3), 237–272.
  67. Salomon, G. (1992). Effects with and of computers and the study of computer-based learning environments. In E. De Corte, M. Linn, H. Mandl, & L. Verschaffel (Eds.), Computer-based learning environments and problem solving (NATO ASI Series F: Computer and Systems Sciences, Vol. 84, pp. 249–263). Berlin: Springer-Verlag.
  68. Samraj, B. (2004). Discourse features of the student-produced academic research paper: Variations across disciplinary courses. Journal of English for Academic Purposes, 3(1), 5–22.
  69. Schellens, T., Van Keer, H., De Wever, B., & Valcke, M. (2007). Scripting by assigning roles: Does it improve knowledge construction in asynchronous discussion groups? International Journal of Computer-Supported Collaborative Learning, 2(2–3), 225–246.
  70. Scheuer, O., Loll, F., Pinkwart, N., & McLaren, B. M. (2010). Computer-supported argumentation: A review of the state of the art. International Journal of Computer-Supported Collaborative Learning, 5(1), 43–102.
  71. Scheuer, O., McLaren, B. M., Loll, F., & Pinkwart, N. (2012). Automated analysis and feedback techniques to support and teach argumentation: A survey. In N. Pinkwart & B. M. McLaren (Eds.), Educational technologies for teaching argumentation skills (pp. 71–124). Sharjah: Bentham Science.
  72. Scheuer, O., McLaren, B. M., Weinberger, A., & Niebuhr, S. (2013). Promoting critical, elaborative discussions through a collaboration script and argument diagrams. Instructional Science, 42(4), 127–157.
  73. Schunn, C., Godley, A., & DeMartino, S. (2016). The reliability and validity of peer review of writing in high school AP English classes. Journal of Adolescent & Adult Literacy, 60(1), 13–23.
  74. Schwarz, B. B., & De Groot, R. (2007). Argumentation in a changing world. International Journal of Computer-Supported Collaborative Learning, 2(2–3), 297–313.
  75. Shute, V. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189.
  76. Stegmann, K., Wecker, C., Weinberger, A., & Fischer, F. (2012). Collaborative argumentation and cognitive processing in a computer-supported collaborative learning environment. Instructional Science, 40(2), 297–323.
  77. Stegmann, K., Weinberger, A., & Fischer, F. (2007). Facilitating argumentative knowledge construction with computer-supported collaboration scripts. International Journal of Computer-Supported Collaborative Learning, 2(4), 421–447.
  78. Suthers, D. (2003). Representational guidance for collaborative inquiry. In J. Andriessen, M. Baker, & D. Suthers (Eds.), Arguing to learn: Confronting cognitions in computer-supported collaborative learning environments (pp. 27–46). Dordrecht: Kluwer.
  79. Suthers, D., & Hundhausen, C. (2003). An experimental study of the effects of representational guidance on collaborative learning. The Journal of the Learning Sciences, 12(2), 183–219.
  80. Teasley, S. D. (1997). Talking about reasoning: How important is the peer in peer collaboration? In L. B. Resnick, R. Saljo, C. Pontecorvo, & B. Burge (Eds.), Discourse, tools and reasoning: Essays on situated cognition (pp. 361–384). Berlin: Springer.
  81. Tsai, Y. C., & Chuang, M. T. (2013). Fostering revision of argumentative writing through structured peer assessment. Perceptual & Motor Skills, 116(1), 210–221.
  82. Tsovaltzi, D., Melis, E., & McLaren, B. M. (2012). Erroneous examples: Effects on learning fractions in a web-based setting. International Journal of Technology Enhanced Learning, 4(3), 191–230.
  83. Tsovaltzi, D., Rummel, N., McLaren, B. M., Pinkwart, N., Scheuer, O., Harrer, A., & Braun, I. (2010). Extending a virtual chemistry laboratory with a collaboration script to promote conceptual learning. International Journal of Technology Enhanced Learning, 2(1–2), 91–110.
  84. Van Gog, T., & Rummel, N. (2010). Example-based learning: Integrating cognitive and social-cognitive research perspectives. Educational Psychology Review, 22(2), 155–174.
  85. Van Gog, T., Paas, F., & Van Merriënboer, J. J. G. (2005a). Uncovering expertise-related differences in troubleshooting performance: Combining eye movement and concurrent verbal protocol data. Applied Cognitive Psychology, 19(2), 205–221.
  86. Van Gog, T., Paas, F., Van Merriënboer, J. J. G., & Witte, P. (2005b). Uncovering the problem-solving process: Cued retrospective reporting versus concurrent and retrospective reporting. Journal of Experimental Psychology: Applied, 11(4), 237–244.
  87. Van Gog, T., Jarodzka, H., Scheiter, K., Gerjets, P., & Paas, F. (2009). Attention guidance during example study via the model’s eye movements. Computers in Human Behavior, 25(3), 785–791.
  88. Van Merriënboer, J. J. G., & Kirschner, P. A. (2013). Ten steps to complex learning (2nd rev. ed.). New York: Routledge.
  89. Van Someren, M. W., Barnard, Y. F., & Sandberg, J. A. C. (1994). The think aloud method: A practical guide to modeling cognitive processes. London: Academic Press.
  90. Vogel, F., Kollar, I., Ufer, S., Reichersdorfer, E., Reiss, K., & Fischer, F. (2015). Fostering argumentation skills in mathematics with adaptable collaboration scripts: Only viable for good self-regulators? In O. Lindwall, P. Häkkinen, T. Koschmann, P. Tchounikine, & S. Ludvigsen (Eds.), Exploring the material conditions of learning: The Computer Supported Collaborative Learning Conference (CSCL) 2015, Volume II (pp. 576–580). Gothenburg: International Society of the Learning Sciences.
  91. Vogel, F., Wecker, C., Kollar, I., & Fischer, F. (2016). Socio-cognitive scaffolding with computer-supported collaboration scripts: A meta-analysis. Educational Psychology Review. doi: 10.1007/s10648-016-9361-7.
  92. Wecker, C., & Fischer, F. (2011). From guided to self-regulated performance of domain-general skills: The role of peer monitoring during the fading of instructional scripts. Learning and Instruction, 21(6), 746–756.
  93. Wecker, C., & Fischer, F. (2014). Where is the evidence? A meta-analysis on the role of argumentation for the acquisition of domain-specific knowledge in computer-supported collaborative learning. Computers & Education, 75(2), 218–228.
  94. Wecker, C., Kollar, I., Fischer, F., & Prechtl, H. (2010). Fostering online search competence and domain-specific knowledge in inquiry classrooms: Effects of continuous and fading collaboration scripts. In K. Gomez, L. Lyons, & J. Radinsky (Eds.), Proceedings of the 9th International Conference of the Learning Sciences: Learning in the disciplines (pp. 810–817). Chicago: ISLS.
  95. Weinberger, A., & Fischer, F. (2006). A framework to analyze argumentative knowledge construction in computer-supported collaborative learning. Computers & Education, 46(1), 71–95.
  96. Wood, H. A., & Wood, D. J. (1999). Help seeking, learning and contingent tutoring. Computers & Education, 33(2), 153–170.
  97. Yang, Y. F. (2010). Students’ reflection on online self-correction and peer review to improve writing. Computers & Education, 55(3), 1202–1210.

Copyright information

© The Author(s) 2017

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Omid Noroozi (1, 2) — Email author
  • Paul A. Kirschner (3, 4)
  • Harm J.A. Biemans (2)
  • Martin Mulder (2)

  1. Tarbiat Modares University, Tehran, Iran
  2. Wageningen University, Wageningen, the Netherlands
  3. Open University of the Netherlands, Heerlen, the Netherlands
  4. University of Oulu, Oulu, Finland
