Online discussion compensates for suboptimal timing of supportive information presentation in a digitally supported learning environment
This study used a sequential set-up to investigate the consecutive effects of timing of supportive information presentation (information before vs. information during the learning task clusters) in interactive digital learning materials (IDLMs) and type of collaboration (personal discussion vs. online discussion) in computer-supported collaborative learning (CSCL) on student knowledge construction. Students (N = 87) were first randomly assigned to the two information presentation conditions to work individually on a case-based assignment in IDLM. Students who received information during learning task clusters tended to show better results on knowledge construction than those who received information only before each cluster. The students within the two separate information presentation conditions were then randomly assigned to pairs to discuss the outcomes of their assignments under either the personal discussion or online discussion condition in CSCL. When supportive information had been presented before each learning task cluster, online discussion led to better results than personal discussion. When supportive information had been presented during the learning task clusters, however, the online and personal discussion conditions had no differential effect on knowledge construction. Online discussion in CSCL appeared to compensate for suboptimal timing of presentation of supportive information before the learning task clusters in IDLM.
Keywords: Collaborative learning · Computer-supported collaborative learning · Digitally supported learning environment · Interactive learning environments · Timing of supportive information presentation
The separate effects of interactive digital learning materials (IDLMs) and computer-supported collaborative learning (CSCL) on student learning are well researched, yet no empirical study has addressed the consecutive effects of these two learning arrangements on knowledge construction. Platforms for digitally supported learning environments such as IDLM and CSCL assist learners in the acquisition and construction of knowledge (e.g. Jonassen 2004; Verhoeven and Graesser 2008). Well-designed IDLM environments provide learners with various modes of information presentation, such as interactive texts, exercises, graphs, diagrams, animations, pictures, etc., that can support learners’ knowledge construction (e.g. Busstra et al. 2008; Jonassen 2004; Verhoeven and Graesser 2008; Verhoeven et al. 2009). The effect of timing of information presentation in IDLM on student learning performance has been a subject of interest to many researchers across a range of disciplines (e.g. Jonassen 1999; Kester 2003; Van Merriënboer et al. 2003). This is important since optimal timing of information presentation should take into account the load a task imposes on the learner’s cognitive system (e.g. Kester et al. 2001; Van Merriënboer and Sweller 2005). The literature points out that various types of information, such as supportive, procedural, declarative, and prerequisite information, require different timing of presentation in IDLM: either before the learning task (“IB”) or during the learning task (“ID”). In spite of a general consensus among researchers on the preferable timing of presenting most types of information (e.g. procedural, declarative, prerequisite), mixed findings have been reported regarding the effects of timing of “supportive” information presentation on learning performance in IDLM (e.g. Kester et al. 2004a, 2006a).
This is a striking gap, since optimal timing of presentation of supportive information could promote meaningful learning by giving the learners maximal opportunity to reason about and elaborate on the learning materials and new information and help them connect these to their existing, relevant cognitive structures (Kester et al. 2001, 2006a). Without such supportive information, presented at the preferable time, it would be very difficult, if not impossible, to direct learners’ attention to and help them identify relations between relevant aspects of the tasks to foster meaningful learning. This study was therefore intended to contribute to the existing literature on learning in IDLM by investigating the effect of timing of supportive information presentation on student performance.
In educational practice, a fruitful approach can be to compensate for the possible limitations of a particular intervention by introducing a complementary intervention. In this study, immediately after the first intervention (presentation of supportive information in IDLM), a second intervention (collaboration in CSCL with graphical knowledge maps) was introduced to examine the consecutive effects of these two interventions on students’ knowledge construction. Collaborative and networked learning arrangements, e.g. CSCL with graphical knowledge maps, provide students with a shared learning environment in which to discuss their ideas, concepts, views and questions with their peers. This allows them to co-construct new and re-construct existing knowledge based on what they have learned while working in IDLM. Within CSCL, graphical knowledge maps have evolved to improve knowledge construction and deep learning (e.g. Janssen et al. 2010; Van Amelsvoort et al. 2007).
Despite extensive research on CSCL, no empirical study has compared the effects on knowledge construction of two types of collaboration (personal discussion “PD” in front of a shared computer and online discussion “OD” using a textual chat tool) in CSCL with graphical knowledge maps. Furthermore, no empirical study has addressed the consecutive effects of IDLM and CSCL using graphical knowledge maps on students’ knowledge construction. This study therefore used a sequential set-up (see Campbell and Stanley 1963) to investigate the effect of type of collaboration (PD vs. OD) in CSCL with graphical knowledge maps on knowledge construction while controlling for the effect of timing of supportive information presentation (IB vs. ID) in IDLM. More specifically, this study aimed to explore whether and how the type of collaboration (PD vs. OD) in CSCL with graphical knowledge maps might compensate for suboptimal timing of supportive information presentation (IB vs. ID) in IDLM.
Interactive digital learning materials (IDLMs)
Many types of IDLMs are increasingly introduced in higher education, including in the life sciences (Diederen et al. 2003), to serve various purposes (Busstra et al. 2008). IDLMs are characterized by the use of interactive features, such as drag-and-drop exercises, interactive graphs, diagrams, animations, pictures, and detailed student-tailored feedback. Interactive exercises are accompanied by the information needed to solve them. This makes IDLMs different from e-learning sites, which are less interactive and use various forms of texts with hyperlinks, multimedia clips, etc. Different types of exercises in IDLM, as used in this study, can increase learners’ motivation and their understanding and retention of knowledge (Sweller et al. 1998), as well as facilitate the acquisition and use of domain-specific knowledge (Diederen et al. 2003). Embedding representations like interactive graphs, diagrams, animations, and pictures in IDLM can authenticate and visualize learning contexts (Busstra et al. 2007; Mayer 2003). Multimedia learning modules consisting of texts and pictures can help learners acquire complex cognitive skills and promote deep learning (Schnotz 2002; Mayer 2003). Other forms of information presentation in IDLM, such as domain-specific supportive information, can help students apply concepts and principles from related scientific fields and also facilitate factual, conceptual, and procedural knowledge construction (Busstra 2008).
Constructivist learning theories state that high-level and complex cognitive processes and activities such as knowledge construction and elaboration may be influenced by the load that the learning task imposes on the learner’s cognitive system (e.g. Jonassen 1999; Kalyuga 2009b; Verhoeven et al. 2009). Scientific evidence indicates that when cognitive overload is reduced, the learner’s performance and knowledge construction are improved (e.g. Busstra et al. 2008; Jonassen 1999; Kester et al. 2006b). Therefore, it has been suggested that digitally supported learning environments like IDLMs should consider cognitive load issues for maximizing learning effects and increasing flexibility and transferability of knowledge (Kalyuga 2009a; Kirschner et al. 2009).
Cognitive load theory (CLT) in IDLM
CLT concerns the limitation of working memory capacity in terms of information that can be processed at a certain time (Sweller 2010; Sweller et al. 1998). Total cognitive load comprises intrinsic, extraneous, and germane cognitive load. Intrinsic cognitive load refers to the expertise of the learner and the nature of the learning materials being dealt with; it is therefore fixed and cannot be altered (Sweller 1988; Sweller et al. 1998). Extraneous cognitive load refers to activities and processes a learner engages in while interacting with instructional materials that are not directly beneficial and useful for learning (Kester et al. 2001). Examples include looking for information sources, integrating them to understand the learning material, and weak-method problem solving (Kester et al. 2006b). Extraneous cognitive load is caused by inappropriate instructional designs and can be reduced using appropriate instructional techniques (Kirschner 2002; Van Merriënboer and Sweller 2005). Germane cognitive load refers to the working memory resources that are used to deal with element interactivity (elaboration of theories, models, exercises, etc.) that enhances learning (Paas et al. 2010; Sweller 2010). As described by Antonenko et al. (2010) “germane load occurs when information presentation is designed to encourage assimilation or accommodation of new concepts and appropriately challenge the learner” (p. 426). Germane cognitive load is the result of the activities and processes such as labeling, sorting, categorizing, and mindful abstraction of generalized knowledge that transfer the knowledge to the learner’s long-term memory; it thus represents the actual learning (Kester et al. 2001; Van Merriënboer et al. 2006).
Instructional designs should seek to minimize extraneous cognitive load, for example, by simplifying the learning tasks especially in the initial stage, avoiding temporal and spatial split attention (e.g. Kirschner 2002; Sweller et al. 1998), and optimally timing the presentation of information (e.g. Kester et al. 2001; Van Merriënboer and Sweller 2005). They should also seek to optimize germane cognitive load, for example, by increasing the variability of learning tasks (e.g. Paas et al. 2003, 2004; Paas and Van Merriënboer 1994; Van Merriënboer et al. 2006). Although scholars recommend reducing extraneous cognitive load (e.g. Kester et al. 2001; Sweller et al. 1998; Van Gog et al. 2005; Van Merriënboer and Sweller 2005), making learning too easy and straightforward may lead to less engagement of the learner in elaborative and deep processing (e.g. Bjork and Bjork 1992, 2011). This could result in a reduction of learning activities and processes that transfer the knowledge to the learner’s long-term memory (e.g. Bjork 1994; Richland et al. 2005).
Scientific evidence suggests that learning materials should be designed to be challenging and difficult enough to improve learners’ long-term learning and retention (e.g. Bjork and Linn 2006; Hirshman and Bjork 1988; Kornell and Bjork 2009; Metcalfe 2011). This has been named the ‘‘desirable difficulty’’ perspective, which recommends that learning materials be made more difficult and challenging, but in a deliberate way, in order to promote transfer of the knowledge to the learners’ long-term memory (e.g. Bjork and Bjork 2011; Kornell et al. 2009; Metcalfe 2011). For example, it has been shown that presenting study material in a font that is harder to read (to achieve what has been named “disfluency”) improves learners’ memory performance (Oppenheimer et al. 2010). Disfluency gives learners a subjective experience of difficulty in their cognitive operations, which leads to deeper processing and cognitive engagement (e.g. Benjamin et al. 1998; Craik and Tulving 1975). This may or may not increase extraneous cognitive load; such a desirable difficulty, however, evokes germane cognitive load (e.g. Bjork 1994; Benjamin et al. 1998; Bjork and Bjork 2011). Therefore, under the right circumstances, the positive effects of desirable difficulty on germane cognitive load can overcome the drawbacks of the possibly, but not necessarily, increased extraneous cognitive load, eventually yielding the desired educational outcomes (e.g. Bjork and Linn 2006; Oppenheimer et al. 2010).
Various types of information
In recent years, the effects of providing various types of information on learning performance have been tested across a variety of learning domains. In the following paragraphs, various types of information and their effects on learning performance are described.
Supportive information refers to information, i.e. conceptual, mental, and causal models, theories, or clues, that students do not need to memorize, but that they do need to understand in order to engage in the elaborative and deep processing that will improve their long-term learning and retention (Kester et al. 2001). It facilitates problem solving and reasoning, and gives learners the opportunity to elaborate on the learning materials and new information and connect these to their existing, relevant cognitive structures (Kester et al. 2001, 2006a). Presentation of supportive information is typically used for topics with high element interactivity (elaboration of theories, models, principles, exercises, etc.), which helps learners master non-recurrent aspects of the learning task (Kester et al. 2006b; Van Merriënboer et al. 2003). In IDLM environments, supportive information can be presented in various forms, e.g. figural organizations of text information, animations, graphical representations, etc., to direct learners’ attention to the relevant aspects of the tasks and foster meaningful learning. Supportive information can be called schematic information when it is presented in graphical representations or organizers such as matrices or diagrams (Van Merriënboer et al. 2006). Presentation of information in the form of graphical organizers offers hierarchical and coordinate relations for relevant aspects of the learning tasks (Robinson et al. 1998; Van Merriënboer et al. 2006).
Procedural information refers to task-specific rules and step-by-step instructions on how to handle routine and recurrent aspects of the learning tasks. It typically pertains to the consistent components of the learning tasks which provide learners with procedural steps that precisely specify under which conditions particular actions must be taken (Van Merriënboer et al. 2006). Procedural information mainly concerns information with a low degree of element interactivity (limited number of related elements, e.g. some conditions and one action), which can be presented in small information units (Van Merriënboer et al. 2003). Procedural information may be interpreted as prerequisite information when learners must know how to correctly perform a task-related activity or follow rules. In this case, prerequisite information could be embedded in learning environments in the form of so-called instances and prompts (Van Merriënboer et al. 2006).
While procedural information provides learners with step-by-step guidelines on how to perform and operate certain task-related activities, declarative information provides learners with relevant instruction on how to connect the new information to their existing knowledge and memory structure (Anderson 1981). Procedural information may thus pertain to a lower degree of element interactivity (fewer interrelated elements) than declarative information (Kester et al. 2006a; Van Merriënboer et al. 2006).
Timing of information presentation in IDLM
The reduction of unnecessary cognitive load is one of the crucial aspects of well-designed IDLMs (e.g. Sweller et al. 1998; Van Merriënboer et al. 2003; Kester et al. 2006b). Optimal timing of information presentation is one of the most important approaches to reduce unnecessary cognitive load in IDLM environments (e.g. Kester et al. 2001; Van Merriënboer and Sweller 2005). From the perspective of CLT, various types of information require different timing of presentation in IDLM. For example, an exploratory empirical study by Kester et al. (2001) tested a model for presentation of supportive and prerequisite information in a controlled setting with eight engineering students. The study investigated which type of information learners requested and when they requested it. Supportive information was best presented before the students began their learning task, as it then facilitated schema construction through meaningful learning or elaboration. Prerequisite information was best presented while the students were actually performing the learning task, resulting in facilitation of schema automation through proceduralization of the recurrent aspects of a task. This proceduralization reduced extraneous cognitive load (temporal split attention avoidance), which in turn enhanced learning performance (Kester et al. 2001). However, on the basis of this exploratory study with only eight participants, it is not possible to conclude that higher transfer test performance was the result of the timing of information presentation. Firstly, due to the weak design of the study, learners’ motivation could potentially have influenced the results, since much of the information (both prerequisite and supportive) was presented to learners before they started the learning task. Secondly, since prerequisite information was available for the duration of the task, students may have forgotten to use it by the time it was needed and intended to be used. Therefore, prerequisite information may have been treated as supportive information by learners.
In another study by Kester et al. (2006a), students (N = 87) worked on the same complex cognitive task (troubleshooting) in a 2 × 2 design with the factors declarative information (before or during practice) and procedural information (before or during practice). The results showed that presenting procedural and declarative information separately, i.e. piece-by-piece during practice, frees up working memory and facilitates learning performance. Presenting declarative information as a conceptual model helped learners construct cognitive schemata through knowledge elaboration, which in turn yielded productions containing domain-general knowledge that are beneficial for learning when dealing with unfamiliar problem situations. Furthermore, presenting procedural information, e.g. task-specific rules and step-by-step instructions, helped learners achieve schema automation through knowledge completion, which in turn yielded productions containing domain-specific knowledge that are beneficial for learning when dealing with familiar problem situations. The results did not support the hypothesis that presentation of declarative information before practice and procedural information during practice would lead to the best test performance and mental effort. This was attributed to the system-controlled approach and to the learners’ misunderstanding and perceptions of the declarative and procedural information. They had little or no control over information presentation, and this might have interfered with the learning processes involved in cognitive skill acquisition (Kester et al. 2006a).
By contrast, the findings of another study by Kester et al. (2006b) with 48 psychology students in a 2 × 2 design with the factors supportive information, i.e. conceptual models or theories (presented before or during practice), and schematic representation (presented before or during practice) showed that the “supportive during, schema before” format yielded the best learning efficiency, i.e. the lowest mental effort during practice, of all formats. Furthermore, extraneous cognitive load was minimized by presenting the supportive information during practice to avoid temporal split attention, while germane cognitive load was optimized by using schematic representations of this information to direct learners’ attention to concepts relevant for learning. However, no differences were found in terms of learning effectiveness, i.e. test performance. This result did not support the hypothesis and was attributed to the learners’ lack of control over selection of the task and information. Likewise, a study by Kester et al. (2004b) compared the effects of four information presentation formats in a 2 × 2 design, i.e. supportive information (before or during practice) and procedural information (before or during practice), on learning among 72 psychology and education students. Presenting supportive information during practice led to more efficient learning, i.e. high test performance combined with low mental effort, than presenting supportive information before practice, due to temporal split attention avoidance. Furthermore, as an interaction effect, simultaneous presentation of supportive information during practice and procedural information before practice led to the most efficient learning. A plausible explanation for these unexpected results was that the students processed the supportive and procedural information differently than expected. For example, they may have judged the supportive information to be not very relevant for the task, while in fact it was meant as input for a deeper understanding of the learning material (Kester et al. 2004b). The authors acknowledged that designing the learning task in terms of independent pieces of knowledge in the field of statistics could also have contributed to the unexpected results. In an identical 2 × 2 design study by Kester et al. (2004a), high school learners (N = 88) were asked to work on troubleshooting in electrical circuits. Due to a bottom effect, the results did not support the hypothesis that “supportive before, procedural during” would lead to the best learning performance. The bottom effect was attributed to the high level of difficulty of the given information and the participants’ lack of practice in acquiring the complex skill of troubleshooting (Kester et al. 2004a).
To conclude, presentation of information before practice may have various aims: 1) to activate prior knowledge; 2) to provide students with the necessary information for the learning task; and 3) to rehearse and apply knowledge (i.e. during the learning task students elaborate on the information they just learned; Diederen et al. 2003). Presentation of information during practice reduces cognitive overload through temporal split attention avoidance, which facilitates knowledge acquisition and construction (e.g. Busstra et al. 2008; Jonassen 1999; Kester et al. 2006b). The findings of previous studies with respect to the preferable timing of supportive information presentation in IDLMs are not consistent. On the one hand, some studies found evidence in favor of presenting supportive information during the learning task (e.g. Kester et al. 2004b, 2006b), when learners need certain facts or clues which they are not required to memorize, but which they need to understand for meaningful learning (Jonassen 1999; Kester et al. 2006a). On the other hand, other studies found evidence in favor of presenting supportive information before the learning task (e.g. Kester et al. 2001), so learners can study the information beforehand and thereby avoid an increase in cognitive load while they are carrying out the learning task (Kester 2003; Sweller 1988, 1994; Van Merriënboer et al. 2003). The present study was conducted in a real educational setting in the context of an academic course (rather than in a highly controlled experimental setting as was the case in other studies, e.g. Kester et al. 2001, 2004b) in order to reveal the optimal timing of presenting supportive information for facilitating student knowledge construction in authentic IDLM environments.
Computer-supported collaborative learning (CSCL)
In addition to IDLM environments, online learning platforms can help students discuss their ideas, concepts and problems from different perspectives. This facilitates knowledge construction processes and outcomes while the students are solving authentic and complex problems (Andriessen et al. 2003; Joiner and Jones 2003; Kirschner et al. 2003; Veldhuis-Diermanse et al. 2006), promotes reflective interaction (Baker and Lund 1997) and authentic problem solving (Jonassen and Kwon 2001), and increases the learner’s involvement (Kang 1998), interest, and motivation (Duffy et al. 1998). However, simply putting learners in a group to work together on an authentic and complex problem in an online learning environment is not always beneficial for learning, knowledge construction and problem solving (e.g. Kirschner et al. 2008; Kreijns et al. 2003; Slof et al. 2010). Empirical findings show that online collaborative learners generally encounter communication and coordination problems (e.g. Doerry 1996; Janssen et al. 2007; Olson and Olson 1997) due to the reduced bandwidth or available modes of interaction associated with online learning, resulting in degradation of problem-solving performance and knowledge construction (e.g. Baltes et al. 2002; Doerry 1996; Olson and Olson 1997). In response to this problem, a variety of instructional approaches, e.g. shared workspaces, game-based learning, awareness features, knowledge representations, scripts, etc., have been developed to promote learning performance in online collaborative learning environments. These learning environments have been collectively named CSCL and are seen as promising approaches to facilitate and foster knowledge construction (e.g. Andriessen et al. 2003; Stegmann et al. 2007; Veerman 2000; Weinberger et al. 2005).
CSCL with graphical knowledge maps
One of the most prominent instructional approaches in CSCL is the use of external knowledge representation. External representation encourages learners to focus on important instructional elements and may include knowledge representations that can be used in a more graphical implementation in the form of schemes (Ertl et al. 2006; Ertl et al. 2008), tables (Suthers and Hundhausen 2003), or visualizations (Fischer et al. 2002; Suthers and Hundhausen 2003; Suthers et al. 2003), or in a more textual implementation in the form of cues, prompts (Ge and Land 2004; Morris et al. 2009), or scripts (Weinberger et al. 2005, 2007). Extensive prior research has shown various benefits of external representations in the form of graphical knowledge maps (e.g. Ertl et al. 2008; Janssen et al. 2010; Toth et al. 2002; Van Amelsvoort et al. 2007). Various forms of graphical knowledge representation, such as argumentative texts, graphs, and diagrams, are useful for maintaining learners’ focus on the relevant aspects of the task, which could broaden and deepen discussion and therefore improve learners’ knowledge (Baker et al. 2007; Suthers 2001; Suthers and Hundhausen 2003; Noroozi et al. 2011; Nussbaum 2008; Nussbaum et al. 2007; Van Amelsvoort et al. 2008; Veerman et al. 2002). It was not our intention to replicate or test these results, nor to compare the role of different knowledge representational tools. Rather, we intended to study the effects of two types of collaboration, namely personal discussion in front of a shared computer and online discussion using a textual chat tool in CSCL with graphical knowledge maps on knowledge construction.
Type of collaboration in CSCL with graphical knowledge maps
To what extent does timing of supportive information presentation (IB vs. ID) in IDLM affect the quality of knowledge construction?
To what extent does type of collaboration (PD vs. OD) in CSCL with graphical knowledge maps affect the quality of knowledge construction, given the earlier choice of timing of supportive information presentation (IB vs. ID) in IDLM?
Context and participants
The study took place at Wageningen University in the central Netherlands, whose student body represents over 100 nationalities. A broad range of research activities and a unique combination of academic and professional education have led to a coherent system of bachelor’s, master’s and PhD programs at this university. In line with the university’s central focus on healthy food and a healthy living environment, students are encouraged to combine natural and social sciences: from plant sciences to economics and from food technology to sociology. Participants in this study were eighty-seven students enrolled in the 6-ECTS (168-h) course “Exposure assessment in nutrition and health research” organized by the division of Human Nutrition. In this course, students acquire insight into the methods of assessing food and nutrient intake. The main focus is on knowledge and skills related to the design, analysis and interpretation of studies aimed at validating nutritional assessment methods. About half of the 87 students were third-year bachelor’s students and the other half were first-year master’s students, both from the Nutrition and Health educational program. The numbers of Dutch and foreign students were about equal. The mean age of the participants was 23.20 years (SD = 4.00). The majority of participants (90%) were female, which mirrors the proportion of females and males among the students in the Nutrition and Health educational program.
Experimental design and procedure
Phase 1 involved individual learning with the platform for Interactive Digital Learning Material (IDLM). The students were randomly divided into two conditions regarding the presentation of supportive information (IB vs. ID) to work individually on a case-based assignment as the first learning task in IDLM. Random allocation of students took place a few weeks before the start of the course. After receiving guidelines and instructions, students were given a 10-min introduction on working with the IDLM platform. The learning task then started, and students were asked to individually design and analyze the essential aspects of an evaluation study aimed at evaluating a certain dietary assessment method (a 24-h recall) that was used to assess protein intake in an elderly population. Three 4-h afternoon sessions (12 h in total) were devoted to this task. The learning task in IDLM was designed for self-study; however, two teaching assistants were available to answer questions depending on the needs of the individual student. The teaching assistants were expected to perform three roles: to assist students with technical difficulties regarding the learning platforms; to assist students with difficult words and terms, as English was not the first language of all students; and to monitor the way in which students progressed through the digital learning material in order to indicate to what extent students deviated from the provided sequence, for example by skipping the theory or exercises. The IDLM learning task was followed by a 45-min examination (test 1) in which students were asked to design a comparable evaluation study (again a 24-h recall) for the assessment of protein intake, but this time in a population of immigrants in the Netherlands. Test 1 served two purposes: to assess the effects of the two types of timing of supportive information presentation (IB vs. ID) on the quality of knowledge construction after learning task 1; and to assess the students’ knowledge level before introducing the collaboration conditions in learning task 2.
Phase 2 involved group work in pairs with the platform for CSCL. The students within the two information presentation conditions (IB vs. ID) were randomly assigned to pairs to discuss the results of test 1 under either the PD or the OD condition, using the CSCL platform with graphical knowledge maps. Guidelines and instructions were again distributed, and an introduction of about 20 min was given on working with the CSCL platform. The two students in each pair discussed the essential aspects of the evaluation studies they had developed individually during test 1. The discussions took 90 min, during which the CSCL platform was used. Students within the OD condition discussed the results online using the CSCL platform. Students within the PD condition viewed the graphical knowledge maps of the evaluation studies they had designed in the CSCL platform on a shared desktop computer in front of them. The OD students had no personal or face-to-face contact, whereas the PD students sat together behind the same computer. Finally, in test 2 students were asked to re-design the same evaluation study individually within 45 min, based on what they had learned during the collaborative learning task. Test 2 aimed to test the effects of the two types of collaboration on the quality of knowledge construction, given the choice of timing of information presentation. The results of tests 1 and 2 contributed to a minor extent to the students' final mark for the course.
Two learning platforms were used in this study: a platform for Interactive Digital Learning Material (IDLM) was used for the learning task in phase 1, and the CSCL platform Drewlite was used for the learning task in phase 2. These two platforms are described below, followed by information about the measurements and data analysis.
Platform for interactive digital learning material (IDLM)
The information provided in these modules was “supportive” in nature, as defined in the “Theoretical framework” section above, and served to activate learners’ working memory in this particular domain. In order to accomplish the learning task clusters, students needed to understand subjects with high intrinsic complexity and high element interactivity, e.g. conceptual models, facts, theories and exercises in terms of essential aspects of the evaluation study. For instance, students needed to understand, but were not required to memorize, how the purposes of the evaluation study related to the required type of information, the potential systematic and random errors in exposure assessment, and the design and analysis of an evaluation study. Without such supportive information it would have been very difficult, if not impossible, to promote deep and elaborative processing of the learning materials for meaningful learning.
The CSCL platform Drewlite
The second learning task involved collaborative learning. Within the existing two groups (information before “IB” and information during “ID” the learning task clusters), students were randomly assigned in pairs to the PD (personal discussion) and OD (online discussion) conditions. The pairs in the OD condition were asked to discuss online the results of test 1, which they had completed after the first learning task. The pairs in the PD condition discussed these results face-to-face in front of a shared desktop computer. The students were then given a second individual test to evaluate the quality of their knowledge construction.
The pairs who carried out the collaborative learning task online (OD) used the chat module to elaborate on their individually made outputs, i.e. graphical knowledge maps. The chat module in the Drewlite platform can be used to discuss a topic with other participants or to construct a collaboratively written text. In this study, the chat module allowed students to discuss, collaborate and share ideas about the essential aspects of the evaluation study. The students who carried out the collaborative learning task in person (PD) were asked to sit down in front of a shared computer and open the interfaces of their individually made outputs in order to likewise discuss, collaborate and share ideas about the essential aspects of the evaluation study. The time spent, the participants’ names, and their contributions to the whole process were automatically recorded in a log-file of the Drewlite platform (see Noroozi et al. 2011 for more information on the Drewlite platform and the modules used within this platform).
There are several possible methods for analyzing the quality of learning outcomes in digitally supported learning environments. Essential criteria for the selection and use of these methods are completeness, clarity, applicability (Veldhuis-Diermanse 2002), accuracy, precision (Neuendorf 2002), objectivity (Rourke et al. 2001), validity, reliability, and replicability (Neuendorf 2002; Rourke et al. 2001). For the current study a content analysis instrument which had already been tested based on the aforementioned criteria was used.
In this study, the dependent variable was learning outcomes in terms of quality of knowledge construction. We operationalized knowledge construction as elaborating and evaluating ideas and external information, as well as linking different facts and ideas that could contribute to solutions for the problem case (see also Mahdizadeh 2007; Noroozi et al. 2011; Veldhuis-Diermanse 2002; Veldhuis-Diermanse et al. 2006). A coding scheme was used which was based on the one developed by Veldhuis-Diermanse (2002). That scheme was in turn based on the SOLO taxonomy (Biggs and Collis 1982). SOLO stands for the Structure of the Observed Learning Outcome and is a way of classifying learning outcomes in terms of their complexity. The SOLO taxonomy aims to analyze the quality of students’ contributions to reflect their quality of knowledge construction regardless of the content area (Biggs and Collis 1982). It provides a systematic way of unfolding how a student’s quality of knowledge construction develops in complexity when handling complex tasks, particularly the sort of tasks undertaken in school. Veldhuis-Diermanse et al. (2006, p. 48) declared that: “as students proceed in their learning process, the outcomes of their learning display comparable stages of increasing structural complexity.” Since the SOLO levels are not context dependent, the taxonomy can be applied across a range of disciplines (Veldhuis-Diermanse et al. 2006).
The coding scheme of Veldhuis-Diermanse provided a series of categories for ranking the complexity of students’ contributions as a proxy of their level of knowledge construction when performing learning tasks in online environments. The original coding scheme consists of five hierarchical levels (after Biggs and Collis 1982; Biggs 1999) from basic to advanced: E = prestructural (which reflects the lowest level of understanding, or no understanding at all); D = unistructural; C = multistructural; B = relational; and A = extended abstract (which reflects the highest level of understanding). Veldhuis-Diermanse (2002) further operationalized this coding scheme by identifying and describing corresponding verbs for each of the levels except for the lowest level E. Veldhuis-Diermanse dropped level E, whereas in the current study the original five levels were used, as designed by Biggs (1999), and the meaning of the levels as defined by Veldhuis-Diermanse (2002) was added.
Level E: Prestructural (no understanding at all)
- Student writes irrelevant contributions which reflect outside (off-task) activities

Level D: Unistructural (nominal understanding)
- Student recognizes or distinguishes something as being different. One point or item is given that is not related to other points in the discourse, and this new point is not elaborated
- Student describes something clearly. The description is taken over from a text or from someone else; it is not a self-made definition

Level C: Multistructural (understanding as knowing about)
- Items are listed in a particular or random order. Something is marked with a number, usually starting at one
- A self-made definition of something is given (e.g. a theory, idea, problem or solution) which explains the distinguishing features of that thing
- Ideas or theory are organized, but descriptive in nature. No deeper explanatory relations are given, just a rough structure of information
- Items are divided into groups or types so that those with similar characteristics are in the same group

Level B: Relational (understanding as appreciating relationships)
- Reasons are given for a choice made
- An idea, theory or line of thought is elaborated
- Two or more related things or facts are linked
- Things are compared and differences or similarities between them are discovered
- Acquired knowledge is used in the same or a different situation

Level A: Extended abstract (higher level of abstraction; understanding as transfer and as involving metacognitive knowledge)
- Arguments on relevance and truth are criticized
- After considering relevant facts, the student decides that something is true or false
- A judgment is given after considering an argument or theory (the conclusion has to be a point; it must rise above earlier statements, not just be a summary)
- Concrete ideas are surpassed and the student formulates his or her own view or theory
- The student predicts that something will be true because of various facts; this prediction has to be checked or examined
The coding scheme was used to quantify the quality of knowledge construction. Student contributions or notes in the comment screens of the Drewlite platform in tests 1 and 2 were segmented into meaningful units and, subsequently, each unit was labeled following the coding scheme (see Noroozi et al. 2011). The meaningful units were segmented based on the solution categories for various aspects of the evaluation study (purposes, the required type of information, the potential systematic and random errors, and the design of the evaluation study). For example, as can be seen in Fig. 5, four solutions were proposed by the student “Nick” for the aim of the evaluation study: 1) quantifying the systematic error for adjusting the results; 2) quantifying the random error between persons; 3) quantifying the random error per person; and 4) quantifying both random error and systematic error for vitamin D intake. Every proposed solution is separately elaborated in the comment section on the right-hand side of the interface of the graph module in the Drewlite platform (in this example the elaboration of aim 1 is shown). The corresponding notes in each comment were coded as a meaningful unit. For each student, the number of coded meaningful units equals the number of proposed solutions with comments for various aspects of the evaluation study. Therefore, a proposed solution was not counted as a meaningful unit if the student did not elaborate on the solution in the comment section. Subsequently, the corresponding verbs or signifiers were identified in the meaningful units (each meaningful unit could thus contain more than one signifier) and were then categorized according to the five quality levels following Veldhuis-Diermanse’s coding scheme. Student contributions were given points according to their quality level in the coding scheme: 1 point for E-level contributions, 2 points for D, 3 for C, 4 for B, and 5 for A-level contributions.
Subsequently, the points for the contributions of each student were added up and then divided by the number of meaningful units, resulting in a mean score for the quality of knowledge construction. Coding was done both for tests 1 and 2. Scores of three inactive students were excluded from the analysis due to the limited number of their contributions, which means that for data analysis 84 students were included in the study.
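The scoring rule described above (1 point for E-level contributions up to 5 points for A-level, summed over all coded signifiers and divided by the number of meaningful units) can be sketched as follows. This is an illustrative reading of the procedure; the function name and the example labels are invented, not taken from the study's data:

```python
# Point values for the five SOLO-based quality levels (E lowest ... A highest).
LEVEL_POINTS = {"E": 1, "D": 2, "C": 3, "B": 4, "A": 5}

def mean_quality_score(signifier_labels, n_units):
    """Mean quality of knowledge construction for one student on one test.

    `signifier_labels` holds one coded level per signifier; a meaningful
    unit may contain more than one signifier. The points are summed and
    divided by the number of meaningful units, as described in the text.
    """
    if n_units <= 0:
        raise ValueError("student must have at least one coded meaningful unit")
    return sum(LEVEL_POINTS[label] for label in signifier_labels) / n_units

# Hypothetical example: four signifiers spread over four meaningful units.
print(mean_quality_score(["C", "B", "B", "A"], n_units=4))  # (3+4+4+5)/4 = 4.0
```

Students whose number of coded units fell below a usable minimum (such as the three inactive students mentioned above) would simply be excluded before this computation.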
Two coders analyzed the contributions using the coding scheme described above. Both coders were PhD students with sufficient theoretical knowledge of, and practical experience in, segmenting, analyzing, and coding procedures with similar sorts of data. The coders were aware neither of the learning conditions nor of the characteristics of the students. The teachers of the course helped the coders gain in-depth insight into the content-related topics of the learning tasks (on exposure assessment in nutrition and health research). Since the number of meaningful (solution) units could be determined unambiguously, no inter-rater reliability calculation was needed for the number of meaningful units. Both inter-coder and intra-coder analyses were carried out for the signifiers and levels of knowledge construction. Cohen’s kappa was employed as an index of inter-rater agreement; it was 0.78 for test 1 and 0.81 for test 2. Moreover, intra-coder test–retest reliability was calculated for 20% of the contributions, which resulted in identical scores for 85% of these contributions. For both inter- and intra-coder analyses, the reliability was thus regarded as sufficient.
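Cohen's kappa corrects raw percentage agreement for the agreement two coders would reach by chance given their marginal label frequencies. A minimal pure-Python sketch (the toy label sequences below are invented for illustration, not the study's data):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two coders
    who labeled the same sequence of units."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed proportion of units on which the coders agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under chance, from each coder's label marginals.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[lab] * counts_b[lab] for lab in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented toy data: two coders agree on 4 of 6 units.
print(round(cohens_kappa(list("BBCABC"), list("BCCABB")), 2))  # 0.45
```

Values around 0.78 and 0.81, as reported above, are conventionally interpreted as substantial agreement.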
An ANOVA was used to assess the effects of the two types of timing of supportive information presentation (IB vs. ID) on the quality of knowledge construction as measured by test 1. An ANCOVA was used to assess the effects of the two collaborative learning conditions (OD and PD) on the improvement in quality of knowledge construction as measured by test 2, given the timing of information presentation (IB vs. ID). The covariate was students’ mean quality of knowledge construction score on test 1, taken after the first learning task was completed and before the collaboration began. The dependent variable was students’ mean quality of knowledge construction score on test 2, taken after the second learning task, when collaboration was completed. Tukey’s HSD test was used as a post-hoc analysis to examine statistical differences between the four conditions (IB-OD; IB-PD; ID-OD; ID-PD).
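An ANCOVA of this kind can be viewed as a comparison of nested regression models: the test-2 score is regressed on the test-1 covariate alone (reduced model) and then on the covariate plus condition dummies (full model), and the drop in residual sum of squares yields the F statistic for the condition factor. A minimal numpy sketch under that framing (the data below are invented, not the study's scores):

```python
import numpy as np

def ancova_f(post, pre, groups):
    """F test for a group factor on `post`, adjusting for the continuous
    covariate `pre`, by comparing nested least-squares models."""
    post, pre = np.asarray(post, float), np.asarray(pre, float)
    levels = sorted(set(groups))
    n, k = len(post), len(levels)
    # Reduced model: intercept + covariate only.
    X_r = np.column_stack([np.ones(n), pre])
    # Full model: add dummy columns for all but the first group level.
    dummies = np.column_stack(
        [[float(g == lev) for g in groups] for lev in levels[1:]])
    X_f = np.column_stack([X_r, dummies])

    def rss(X):
        beta = np.linalg.lstsq(X, post, rcond=None)[0]
        return float(np.sum((post - X @ beta) ** 2))

    rss_r, rss_f = rss(X_r), rss(X_f)
    df1, df2 = k - 1, n - X_f.shape[1]
    f_stat = ((rss_r - rss_f) / df1) / (rss_f / df2)
    return f_stat, df1, df2

# Invented data: two conditions with the same pretest scores but a clear
# posttest shift in the second condition.
pre = [1, 2, 3, 4, 1, 2, 3, 4]
post = [1.1, 2.0, 2.9, 4.0, 3.0, 4.1, 5.0, 5.9]
groups = ["IB"] * 4 + ["ID"] * 4
f, df1, df2 = ancova_f(post, pre, groups)
print(df1, df2, f > 100)  # 1 5 True
```

In the study itself the factor has four cells (IB-OD, IB-PD, ID-OD, ID-PD), so the same comparison would use three dummy columns instead of one.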
The results are given below in relation to the research questions presented in the “Theoretical framework” section.
Timing of supportive information presentation tended to influence quality of knowledge construction: F (1, 79) = 3.34; p = 0.07. The average quality of knowledge construction tended to be higher for students who received supportive information during (ID) the learning task clusters than for students who received this information before (IB) the learning task clusters (MIB = 2.92; SDIB = 0.34; MID = 3.07; SDID = 0.32).
The covariate, the quality of knowledge construction as measured by test 1, had a significant effect on the quality of knowledge construction as measured by test 2: F (1, 79) = 27.20; p < 0.01. There was a significant effect of type of collaboration (PD and OD) on the quality of students’ knowledge construction after controlling for the effect of timing of supportive information presentation (IB and ID): F (3, 79) = 5.20; p < 0.01. In other words, a significant overall difference was found between the four conditions (IB–OD; IB–PD; ID–OD; ID–PD), suggesting a possible carry-over effect of the timing of information presentation on the effect of type of collaboration on the quality of knowledge construction.
This overall difference was mainly due to the effect of type of collaboration for students who had received the supportive information before the learning task. At the end of the study period, the quality of knowledge construction under the IB-OD condition (M = 3.21) was significantly higher than that under the IB-PD condition (M = 2.90): F (3, 79) = 12.94; p < 0.01. For students who had received the information before the learning task, the gain of knowledge after online discussion was on average 0.27 (MT2 = 3.21 minus MT1 = 2.94), compared with 0.03 (MT2 = 2.90 minus MT1 = 2.87) after personal discussion.
Conclusions and discussion
Based on our study, the conclusion can be drawn that the timing of supportive information presentation in a digitally supported learning environment and, under certain conditions, the type of collaboration tend to influence the quality of knowledge construction in a real educational setting in the context of an academic course. Timing of supportive information presentation in IDLM has implications for the type of collaboration that should be used in a CSCL platform with graphical knowledge maps. When IDLM is embedded in an authentic educational setting as in our study, it seems to be preferable to present supportive information during the learning task. In this case, students can achieve the expected level of knowledge construction without further implementation of the CSCL platform. When designers of comparable courses have no other choice but to present the supportive information before the learning task starts, however, students can compensate for this through collaboration with peers on a CSCL discussion platform with graphical knowledge maps or comparable systems. In this case, online (written) discussions, in the form of chatting for example, are more effective than personal (spoken) discussions in front of a shared desktop computer. Below, we discuss plausible explanations for these results.
Timing of supportive information presentation tended to influence students’ performance. In this study, performance referred to how well students constructed knowledge while designing and analyzing evaluation studies for the assessment of food and nutrient intake in the field of human and health research. As mentioned earlier, the results of previous research are mixed in terms of preferable timing of supportive information in IDLM. The finding of this study tends to corroborate other research results which showed that providing supportive information during the learning task is productive for learning (e.g. Jonassen 1999; Kester et al. 2004b, 2006b). These studies state that information that is necessary to complete the task but is not supposed to be memorized by students (as used in this study) can best be presented during the learning task (Jonassen 1999; Kester et al. 2004b, 2006b). In the present study, to accomplish each learning task cluster students needed to understand, but were not required to memorize, concepts, principles and aims of reproducibility and validation studies within the field of nutrition research. When supportive information was available during each learning task cluster, unnecessary cognitive overload was minimized by avoiding temporal split attention (Kester et al. 2004b, 2006b), which in turn resulted in the students obtaining a thorough understanding of the task as a whole and facilitation of knowledge construction (Busstra et al. 2008; Diederen et al. 2003; Jonassen 1999; Kalyuga 2009a).
Some theoretical (Kester 2003; Sweller 1988, 1994; Van Merriënboer et al. 2003, 2006) and empirical (e.g. Kester et al. 2001) evidence is inconsistent with this finding of the present study. Van Merriënboer et al. (2006), for example, stated that supportive information with high intrinsic complexity, as used in this study, could best be presented before the learning task, while supportive information with low intrinsic complexity could best be presented during the learning task. The question thus is: which information should be presented at what time in IDLM? If the relevant supportive information is studied long before it is needed for the specific learning task, split attention might arise during that specific learning task, which could strain working memory and increase cognitive load, since the supportive information (studied long before the specific learning task cluster) has to be mentally integrated to understand the complete picture of the learning task as a whole (Kester et al. 2001). If students study the supportive information only shortly before engaging in the specific learning task, it would not cause any split attention and cognitive overload would therefore be avoided. Here the time between presentation of the supportive information and working on the specific learning task cluster is crucial. This is in line with the findings of Kester et al. (2006a), who concluded that there should not be a long lapse between the presentation of the supportive information and the practical task.
In the present study, the link to the supportive information was presented on the same screen as the individual learning task cluster and its sub-tasks for students who were offered supportive information during the task clusters. They thus had the opportunity to open, study, and practice the supportive information immediately before starting each learning task cluster or right when the information was needed (just-in-time, JIT). Treating supportive information as JIT information could free up students’ working memory and facilitate learning (e.g. Kester et al. 2006a). Students who were offered supportive information well before it was needed did not benefit as much as those with access to supportive information during the whole learning task cluster, since the first group studied the information ahead of time and therefore could have forgotten something needed to accomplish a particular sub-task later on.
The type of learning content and the way the learning tasks were articulated in this study could also have contributed to the preferable timing of supportive information presentation (e.g. Kester et al. 2004b). The type of learning task as a whole, the way it was divided into clusters with sub-tasks, as well as the domain of this study, were different in nature from the previously mentioned studies. For example, it is possible that designing learning tasks in terms of independent pieces of knowledge may be more difficult in hard sciences, such as physics and statistics, than in life sciences, such as cognitive psychology and human nutrition (as in this study). Similarly, the importance of this design in providing the desirable level of difficulty of the given supportive information for accomplishing complex learning tasks may differ across disciplines. That could explain why in some studies in the hard sciences (e.g. Kester et al. 2004a) hypotheses were not confirmed and unexpected results were attributed to the type of learning content and the high level of difficulty of the given information.
Collaboration had a significant effect on the quality of knowledge construction after controlling for the effect of timing of supportive information presentation. This result is in line with conclusive findings in research on CSCL showing various added values and benefits of collaboration with external representations (e.g. Ertl et al. 2008; Fischer et al. 2002; Janssen et al. 2007, 2010; Nussbaum et al. 2007; Suthers and Hundhausen 2003; Toth et al. 2002; Van Amelsvoort et al. 2007, 2008). In this study, students benefitted from their partners’ knowledge (knowledge awareness) by looking at one another’s individually made graphical knowledge maps in CSCL environments. Knowledge awareness facilitates communication and task coordination (Engelmann et al. 2009) and fosters students’ knowledge construction and convergence in CSCL environments (Schreiber and Engelmann 2010).
In our study, the effect of timing of supportive information presentation on knowledge construction in IDLM was significantly related to the quality of knowledge construction after collaboration on a CSCL platform with graphical knowledge maps. The quality of knowledge construction for students under the IB–OD condition was higher than that for students under the IB–PD condition. When supportive information was presented before the first learning task, students did not benefit much in the IDLM environment, since there was a potentially long lapse between studying the supportive information and performing the practical sub-tasks. By means of online discussion in a consecutive learning task, however, students could compensate for the lack of supportive information during the first learning task. Personal discussion in front of a shared computer provides students with various forms of social interaction, nonverbal communication, and physical, mental, and psychological signs which can facilitate turn-taking, giving feedback, mutual understanding, etc. (e.g. Coffin and O’Halloran 2009; Kiesler 1986; Kreijns et al. 2003; O’Conaill and Whittaker 1997; Van Amelsvoort 2006). Nevertheless, evidence indicates that learners can benefit from restricted interactive environments (e.g. Burgoon et al. 2002; Fischer and Mandl 2005; Suthers et al. 2003) using support techniques (Engelmann et al. 2009) and factors that are extrinsic to the technology itself (Walther 1994). Through writing notes in CSCL, students can re-construct their thoughts while formulating and organizing ideas and opinions, and they can also re-read posted notes by looking at the conversation history (e.g. De Jong et al. 2002; Veerman 2000). Writing notes, and re-reading and re-thinking those notes, are regarded as important tools for learning and knowledge construction in CSCL (De Jong et al. 2002; Veerman 2000).
In the present study, these online activities thus helped students in the IB condition “catch up” with the students in the ID condition. We therefore conclude that when information is presented before the first learning task in IDLM, online discussions lead to better knowledge construction in the second consecutive learning task compared with personal discussion in front of a computer within a CSCL platform with graphical knowledge maps.
There was no significant difference between the quality of knowledge construction for students under the ID–OD and ID–PD conditions. When supportive information can be given during the first learning task in IDLM, the type of collaboration applied in a subsequent learning task in the CSCL platform with graphical knowledge maps makes little difference. When these students started working with CSCL to accomplish the second learning task, they had already attained to some extent the expected level of knowledge construction. The students had already benefited from the optimal timing of supportive information presentation, i.e. during the learning task (e.g. Diederen et al. 2003; Jonassen 1999; Kester et al. 2004b, 2006b). There was thus not much room for improvement in the consecutive learning task, and therefore the type of collaboration in CSCL did not make any difference to the improvement in quality of students’ knowledge construction.
Limitations and recommendations for future research
This study was embedded in an existing course in a real educational setting with its own dynamics. This means that there is a high level of ecological validity. However, the authentic setting of this study put some constraints on the possibilities to experiment. Now that we know that the tested variables have an effect in real courses, we advise that experiments be conducted in which student learning processes are more intensively monitored and learning results more elaborately tested. Further research under more stringent conditions (regarding pretesting, familiarization of students with the CSCL platform, use of various discussion functionalities, and simultaneous division of the research conditions) and in other sections of the same course, as well as in similar types of courses with more students, is needed to test the extent to which the results can be generalized. The set-up and results of this study also point to the following issues and limitations that warrant discussion and recommendations for future research.
Long-term and short-term measurements
One of the limitations of the present study is that we administered only short-term measurements. Learners’ performance in this study was measured immediately after the two instructional interventions. The results of these tests were interpreted in terms of the cognitive overload construct: when extraneous cognitive load was reduced, germane cognitive load was optimized and thus learners’ performance was improved. As discussed, however, based on the concept of desirable difficulty, reducing extraneous load may lead to a misleading boost in short-term learning performance measures without fostering the deeper processing that encourages long-term retention. Therefore, future research should examine whether the short-term results in terms of student learning performance as obtained in this study are consistent with long-term results, to determine to what extent the possible conflict between cognitive load and desirable difficulty really occurs. This could have consequences for the design principles of both desirable difficulty and cognitive load in striving to optimize digitally supported learning environments.
Relationship between course exams and knowledge construction
Knowledge construction in this study was measured by analyzing student contributions using a slightly revised version of an existing coding scheme developed by Veldhuis-Diermanse (2002), which had already been used in several other empirical studies. Its inter-rater reliability values had been reported as satisfactory (e.g. De Laat and Lally 2003; Noroozi et al. 2011; Veldhuis-Diermanse 2002; Veldhuis-Diermanse et al. 2006), and these values were even higher in the present study. Furthermore, using existing coding schemes is advocated in the literature (Stacey and Gerbic 2003). This is a form of content analysis which is very time consuming, but for which there is hardly any alternative in this research context. It is therefore not surprising that this type of analysis is most frequently used for analyzing written notes and transcripts of discourse corpora in CSCL environments. In our case, meaningful parts within the contributions were coded with a slight variation of an existing five-tier scheme. The codes were seen as proxies for the achievement of learning outcomes. Measurement of student achievement in courses like the one we studied, however, can also be done with the regular course exams. Further analysis should be conducted to determine the extent to which the results of mid-term and final exams are consistent with the scores obtained in this study through the coding scheme for knowledge construction. If they are not consistent, and the psychometric properties of the exams pass the minimum quality thresholds, further calibration of the coding scheme for knowledge construction is necessary. We therefore suggest that follow-up research be aimed at this question.
The role of prior knowledge and student characteristics
In this study we did not administer tests to control for prior knowledge before students started learning task 1, or for student characteristics. However, since there were prerequisite requirements for enrolling in this course, e.g. successful completion of specific courses, we presumed that students would have more or less the same level of prior knowledge. Furthermore, as the student group was relatively large and the students were randomly divided over the different conditions, we assumed that possible differences in prior knowledge would be equally distributed. This is not certain, however, and it could potentially have consequences for the ways in which students interact in IDLM and CSCL platforms. We advise proceeding with controlled experiments that include a pretest on student characteristics. Factors that we suggest should be taken into account are prior knowledge (O’Donnell and Dansereau 2000; Schellens and Valcke 2005), personal character (Rummel and Spada 2005), proficiency in English as a second language and learning style (Biemans and Van Mil 2008), communication skills and self-confidence (Weinberger 2003), and interest in and willingness to work with computers and participate in CSCL (Beers et al. 2007).
Monitoring log files to control variation in the use of information
Students in this study were free to navigate through the IDLMs since it was an individual self-study module. Therefore they could have followed different routes. Theoretically it is possible that students under the IB condition skipped the supportive information step and immediately started with the learning task clusters. Furthermore, it is possible that students under the ID condition discovered early on that there was a list with all theory modules at the end of the digital learning material. Although unlikely, it is possible that some students studied this information before proceeding with the learning task clusters. This may have decreased the contrast between the two information presentation conditions. If that was the case, the research results presented would be of a conservative nature. However, after the experiment, through an evaluation form and personal communication students were asked to indicate the sequence in which they studied theory and exercises. Their answers supported our assumption that they followed the order corresponding with the particular information presentation condition. Furthermore, observations made by two teaching assistants during the scheduled hours did not indicate deviations from this sequence. In order to monitor the contrast between two modes of presenting supportive information, in follow-up research we advise using logging facilities to register the way in which students go through the digital learning material, even if this is for self-study.
The research reported in this article was financially supported by the Ministry of Science, Research, and Technology (MSRT) of the Islamic Republic of Iran through a grant awarded to Omid Noroozi, and the authors express their gratitude for this support. We would also like to thank Steven Collins for his tremendous technical support regarding the Drewlite platform. Finally, the authors want to thank the Division of Human Nutrition at Wageningen University and its students for participating in this study.
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
- Anderson, J. R. (1981). Cognitive skills and their acquisition. Hillsdale, NJ: Lawrence Erlbaum.
- Andriessen, J., Baker, M., & Suthers, D. (2003). Arguing to learn. Confronting cognitions in computer-supported collaborative learning environments. Dordrecht: Kluwer.
- Biggs, J. B. (1999). Teaching for quality learning at university: What the student does. St. Edmundsbury: Society for Research into Higher Education & Open University Press.
- Biggs, J. B., & Collis, K. F. (1982). Evaluating the quality of learning: The SOLO taxonomy. New York: Academic Press.
- Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 185–205). Cambridge, MA: MIT Press.
- Bjork, R. A., & Bjork, E. L. (1992). A new theory of disuse and an old theory of stimulus fluctuation. In A. Healy, S. Kosslyn, & R. Shiffrin (Eds.), From learning processes to cognitive processes: Essays in honor of William K. Estes (Vol. 2, pp. 35–67). Hillsdale, NJ: Erlbaum.
- Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In M. A. Gernsbacher, R. W. Pew, L. M. Hough, & J. R. Pomerantz (Eds.), Psychology and the real world: Essays illustrating fundamental contributions to society (pp. 56–64). New York: Worth Publishers.
- Bjork, R. A., & Linn, M. C. (2006). The science of learning and the learning of science: Introducing desirable difficulties. American Psychological Society Observer, 19(3), 29–39.
- Busstra, C. (2008). Design and evaluation of digital learning material for academic education in human nutrition. Ph.D. dissertation, Wageningen University, The Netherlands.
- Busstra, C., Graaf, C. D., & Hartog, R. (2007). Designing of digital learning material on social-psychological theories for nutrition behavior research. Journal of Educational Multimedia and Hypermedia, 16(2), 163–182.
- Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research on teaching. In N. L. Gage (Ed.), Handbook of research on teaching (pp. 171–246). Chicago, IL: Rand McNally.
- Corbel, A., Jaillon, P., Serpaggi, X., Baker, M., Quignard, M., Lund, K., et al. (2002). DREW: Un outil internet pour créer des situations d’apprentissage coopérant [DREW: An internet tool for creating cooperative learning situations]. In C. Desmoulins, P. Marquet, & D. Bouhineau (Eds.), EIAH2003 Environnements Informatiques pour l’Apprentissage Humain (pp. 109–113). Paris: INRP.
- Craik, F., & Tulving, E. (1975). Depth of processing and the retention of words in episodic memory. Journal of Experimental Psychology, 104(3), 268–294.
- De Jong, F. P. C. M., Veldhuis-Diermanse, A. E., & Lutgens, G. (2002). Computer supported learning in university and vocational education. In T. Koschman, R. Hall, & N. Miyake (Eds.), CSCL 2: Carrying forward the conversation (pp. 111–128). Hillsdale, NJ: Erlbaum.
- Doerry, E. (1996). An empirical comparison of co-present and technologically-mediated interaction based on communicative breakdown. Ph.D. dissertation, Department of Information and Computer Science, University of Oregon. CIS-TR-96-01.
- Duffy, T. M., Dueber, B., & Hawley, C. L. (1998). Critical thinking in a distributed environment: A pedagogical base for the design of conferencing systems. In C. J. Bonk & K. S. King (Eds.), Electronic collaborators: Learner-centered technologies for literacy, apprenticeship, discourse (pp. 51–78). Mahwah, NJ: Lawrence Erlbaum Associates.
- Jonassen, D. H. (1999). Designing constructivist learning environments. In C. M. Reigeluth (Ed.), Instructional-design theories and models (pp. 215–239). Mahwah, NJ: Lawrence Erlbaum.
- Jonassen, D. H. (2004). Handbook of research on educational communications and technology (2nd ed.). Mahwah, NJ: Erlbaum.
- Kang, I. (1998). The use of computer-mediated communication: Electronic collaboration and interactivity. In C. J. Bonk & K. S. King (Eds.), Electronic collaborators: Learner-centered technologies for literacy, apprenticeship, discourse (pp. 315–337). Mahwah, NJ: Erlbaum.
- Kester, L. (2003). Timing of information presentation and the acquisition of complex skills. Heerlen: Open University of the Netherlands.
- Kiesler, S. (1986). The hidden messages in computer networks. Harvard Business Review, 64(1), 46–60.
- Kirschner, P. A., Buckingham-Shum, S. J., & Carr, C. S. (Eds.). (2003). Visualizing argumentation. Software tools for collaborative and educational sense making. Dordrecht: Kluwer.
- Mahdizadeh, H. (2007). Student collaboration and learning: Knowledge construction and participation in an asynchronous computer-supported collaborative learning environment in higher education. Ph.D. dissertation, Wageningen University, The Netherlands.
- Metcalfe, J. (2011). Desirable difficulties and studying in the Region of Proximal Learning. In A. S. Benjamin (Ed.), Successful remembering and successful forgetting: A Festschrift in honor of Robert A. Bjork. London: Psychology Press.
- Neuendorf, K. A. (2002). The content analysis guidebook. Thousand Oaks, CA: Sage Publications.
- Noroozi, O., Biemans, H. J. A., Busstra, M. C., Mulder, M., & Chizari, M. (2011). Differences in learning processes between successful and less successful students in computer-supported collaborative learning in the field of human nutrition and health. Computers in Human Behavior, 27(1), 309–317.
- O’Conaill, B., & Whittaker, S. (1997). Characterizing, predicting, and measuring video-mediated communication: A conversational approach. In K. E. Finn, A. J. Sellen, & S. B. Wilbur (Eds.), Video-mediated communication (pp. 107–132). Mahwah, NJ: Lawrence Erlbaum Associates.
- Olson, G. M., & Olson, J. S. (1997). Research on computer-supported cooperative work. In M. G. Helander, T. K. Landauer, & P. V. Prabhu (Eds.), Handbook of human-computer interaction (2nd ed.). Amsterdam: Elsevier.
- Oppenheimer, D. M., Yauman, C. D., & Vaughan, E. B. (2010). Fortune favors the bold (and the italicized): Effects of disfluency on educational outcomes. Cognition, 118(1), 111–115.
- Richland, L. E., Bjork, R. A., Finley, J. R., & Linn, M. C. (2005). Linking cognitive science to education: Generation and interleaving effects. In B. G. Bara, L. Barsalou, & M. Bucciarelli (Eds.), Proceedings of the twenty-seventh annual conference of the cognitive science society. Mahwah, NJ: Erlbaum.
- Rourke, L., Anderson, T., Garrison, D. R., & Archer, W. (2001). Methodological issues in the content analysis of computer conference transcripts. International Journal of Artificial Intelligence in Education, 12(1), 8–22.
- Stacey, E., & Gerbic, P. (2003). Investigating the impact of computer conferencing: Content analysis as a manageable research tool. In G. Crisp, D. Thiele, I. Scholten, S. Barker, & J. Baron (Eds.), Interact, integrate, impact: Proceedings of the 20th annual conference of the Australasian Society for Computers in Learning in Tertiary Education.
- Suthers, D. D. (2001). Towards a systematic study of representational guidance for collaborative learning discourse. Journal of Universal Computer Science, 7(3), 254–277.
- Van Amelsvoort, M. (2006). A space for debate. How diagrams support collaborative argumentation-based learning. Ph.D. dissertation, Utrecht University, The Netherlands.
- Van Gog, T., Ericsson, K. A., Rikers, R. M. J. P., & Paas, F. (2005). Instructional design for advanced learners: Establishing connections between the theoretical frameworks of cognitive load and deliberate practice. Educational Technology Research and Development, 53(3), 73–81.
- Veerman, A. L. (2000). Computer supported collaborative learning through argumentation. Ph.D. dissertation, Utrecht University, The Netherlands.
- Veldhuis-Diermanse, A. E. (2002). CSCLearning? Participation, learning activities and knowledge construction in computer-supported collaborative learning in higher education. Ph.D. dissertation, Wageningen University, The Netherlands.
- Weinberger, A. (2003). Scripts for computer-supported collaborative learning: Effects of social and epistemic cooperation scripts on collaborative knowledge construction. Ph.D. dissertation, University of Munich, Germany.
- Weinberger, A., Stegmann, K., Fischer, F., & Mandl, H. (2007). Scripting argumentative knowledge construction in computer-supported learning environments. In F. Fischer, H. Mandl, J. Haake, & I. Kollar (Eds.), Scripting computer-supported communication of knowledge—Cognitive, computational and educational perspectives (pp. 191–211). New York: Springer.