Introduction

In this study we focus on the grounding process of students working together at a distance in international teams on complex problems, using a CSCL environment. In today's work environment, the use of international, geographically dispersed teams is becoming more common as organizations take advantage of interorganizational and international opportunities and maximize the use of scarce resources; these teams also typically work on complex problems (Cramton 2001). Working together at a distance in educational settings is thus important, as it prepares students for their life in the workforce.

One of the problems that students face in collaborative work is grounding. By grounding we mean the interactive process by which students establish common ground, i.e., mutual knowledge, understanding, beliefs, assumptions, and presuppositions (Cramton 2001; Mäkitalo et al. 2002; called 'shared focus' by Veerman 2000). Mutual knowledge is knowledge that the students share and which they know they share (Krauss and Fussell 1990). The term grounding is often also used for maintaining common ground, but we restrict ourselves to the first phase of establishing common ground.

To support grounding in international student groups, collaboration scripts can be used. A collaboration script can be defined as a set of instructions regarding how the group members should interact, how they should collaborate, and how they should solve the problem (O'Donnell and Dansereau 1992); more narrowly, scripts have been defined as rules for structuring dialogues (Hron and Friedrich 2003). Collaboration scripts were originally intended for face-to-face collaboration, but to an increasing extent scripts are developed and tested for CSCL (e.g., Baker and Lund 1997; Dillenbourg 2002; Hron et al. 2000; Kollar et al. 2003; Mäkitalo et al. 2005; Strijbos 2004; Veerman 2000; Weinberger 2003). Characteristics of online scripts include measures for turn-taking control, prompts that provide just-in-time help (Mäkitalo et al. 2005), and sentence openers that stimulate the use of specific communication acts; openers might be chosen from a list in a drop-down box or presented on buttons (Baker and Lund 1997).

There are many unresolved problems regarding online collaboration scripts that are relevant to the grounding of students in international groups. The first is that online scripts have hardly been tested in international student groups working on a complex problem for a longer period of time. Other problems concern the scripts themselves. There is evidence that scripts should not be too complex (Christoph 2006; Dillenbourg 2002; Saab 2005), but little evidence on how such complexity can be avoided. It has also been shown that a script should not exert too much control over the dialogue, as this can diminish motivation and endanger the autonomy of learners (Dillenbourg 2002; Hron and Friedrich 2003). Performance decreases if too many restrictions are placed on turn-taking or if the structure of the interface does not match the structure of the task (Veerman 2000). In general, the script should have a certain degree of flexibility (Baker and Lund 1997; Mäkitalo et al. 2005).

Another important problem arises when the script itself is well designed, but the students do not use it or use it in an unintended way. This occurs, for example, if students do not adopt the script or if they have to use particular software with which they are not familiar (Dillenbourg 2002). It can happen because scripts push, but do not force, students to interact in a particular way. For example, students might ignore instructions they received, complete sentences in a way not consistent with a provided sentence opener (Baker et al. 1999), or fail to react to prompts they receive.

To summarize, the problem with online scripts is that they cannot exert too much control, because this might harm the discussion; on the other hand, anything short of full control means that students can, and do, use the script in unintended ways. The question is therefore often phrased as one of the extent to which the script should be structured to support learning (Mäkitalo et al. 2005).

It is clear that the rules of one script can be presented to students through many different interfaces, which will have different effects. Most studies, however, only compare a scripted condition with an unscripted one; no comparison is made between several presentations of the same script. This is a problem, as it does not allow us to find out where exactly the problem of complexity and/or not following the script resides: in the script, in the presentation, or in both. There do exist studies that compare different interfaces (e.g., the work by Daniel Suthers; Suthers and Hundhausen 2003). However, these studies aim at structuring the workspace of students, not at structuring the discussion on their work-in-progress, and they do not work with scripts in the sense of sequences of activities that students have to go through.

Therefore the aim of this study is to examine the extent to which grounding in international student groups is supported by a collaboration script implemented in two different interfaces, one structured and one unstructured.

The script and the structured interface

As a preparation for our empirical study, this section describes the script that we used and the theories behind it. First, we provide a description of the script and interaction rules. Second, we describe the structured interface.

The script

The script provides a format for carrying out a grounding discussion among students who have to work together at a distance and who do not know each other. More specifically, the script was designed for discussing the individual knowledge that each participant has on the subject of a joint assignment and for discussing the individual objectives that participants have in performing the assignment. The theoretical idea behind the script is that in a grounding discussion, students should negotiate about common goals (Häkkinen et al. 2001); detailed and accurate models of each other’s knowledge, skills, and motivation help collaborators assign tasks appropriately and solicit and offer appropriate help (Kraut et al. 2002).

The script divides grounding discussions into three phases: the input phase, the discussion phase, and the consensus phase. The italic parts of Fig. 1 describe the phases as they were used in our experiment.

Fig. 1 Script presented to the students in both scripted conditions; italics added, indicating the discussion phases used in the experiment

The theory behind the phases of the script borrows from research on small group work, brainstorming, and CSCW. The first phase in the script is the input phase, in which people make their own knowledge and objectives explicit to the others. It has been shown that, even without a response, this is a meaning-making process in itself. Thinking aloud in this way might lead to evaluation of existing knowledge, which can alter the individual's knowledge structures; if this happens, it affects subsequent learning and performance (King 1999). From research on brainstorming it is known that inputting ideas is best done individually. When the performance of interactive brainstorming groups is compared to the pooled performance of the same number of individuals brainstorming alone (nominal groups), nominal groups outperform interactive groups in both the quantity and quality of ideas generated (Dugosh and Paulus 2005). While it has not been shown that this also applies to making expertise and expectations explicit, there is no reason to suppose that it would be different. Finally, by asking each individual to provide their own knowledge and objectives, we expect to achieve a relatively equal contribution of all partners in this phase. We hope this sets the stage for more equal contributions in the next phases, which affects learning positively (Baker and Lund 1997).

In the discussion phase, the emphasis is on the differences between participants as visible in the input they have provided. In international cooperation, it is important that participants become aware of differences between them (Cramton 2001). By examining these differences people often discover that their own perceptions, facts, assumptions, values, and general understanding differ from those with whom they are interacting. If people become aware of these conceptual discrepancies, they often feel the need to reconcile them. Discovering discrepancies leads to negotiation of meaning (King 1999). This type of negotiation may lead to the restructuring of knowledge (Baker and Lund 1997). In negotiation with others, people continually reorganize and restructure their own knowledge. Working alone would not result in the same extent of cognitive change (King 1999).

In the consensus phase, students have to deliver a summary of the group's joint knowledge of its expertise and of the differences between members. Asking students to make a joint product is a task that provokes argumentation (Veerman 2000). Arriving at agreed-on meanings and plans in general is one way to establish co-construction of knowledge (King 1999).

Taken together, the theoretical literature leads us to expect that more input of better quality will lead to more discussion of better quality, and together they will lead to more summaries of better quality. As providing the input, detecting and negotiating differences, and arriving at mutual knowledge do not occur spontaneously, the script is expected to be beneficial, as it prompts the students to take these actions. We expect the script to lead to more and better input, discussions, and summaries.

The structured interface

In the structured interface, all questions and tasks that the students have to perform during the grounding discussions are formulated in separate messages that are prepared in advance, one message for each question or task (see Fig. 2). Students can answer questions and perform tasks by replying to the messages. Messages are ordered hierarchically, so that in the discussion overview the separate questions and phases are neatly ordered.
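As an illustration of this hierarchical ordering, the sketch below models a grounding forum as a shallow tree of pre-fabricated instruction messages with student replies nested underneath. The sketch is ours: the class and field names are hypothetical and do not reflect the actual Blackboard implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    author: str                                   # 'script' for pre-fabricated messages, else a student
    text: str
    replies: list = field(default_factory=list)   # replies nest under the message they answer

# One pre-fabricated message per question or task, prepared before the course starts.
forum = [
    Message('script', 'Input question 1: Tell what you know about ...'),
    Message('script', 'Input question 2: Tell about your expectations ...'),
    # ... one message for each remaining question and task of the three phases
]

# A student answers a question by replying to the corresponding message, so the
# discussion overview stays grouped by question and phase without reading content.
forum[0].replies.append(Message('anna', "I must admit I don't know a lot about ..."))
```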

Fig. 2 Screenshot of the structured interface in the script+interface condition, showing the instructions in one of the pre-fabricated messages (left) and a list of replies to the instructional messages (right)

There are two ideas behind the structured interface. The first is that an interface that is easy to use decreases cognitive load. This is an important point, as it is known that operating a new, complex technology while at the same time learning new subject matter increases cognitive load (Brna et al. 2001; Hron and Friedrich 2003). A specialized communication interface can lighten students' typing load and facilitate coordination, thus potentially allowing a more task-focused and reflective interaction (Baker and Lund 1997). Although the discussion and instruction messages themselves consist of text, their position and the positioning of the input that students provide are ordered spatially in a plane; the instruction and input of one phase are grouped together and can be recognized as such without consulting their content. In this sense the presentation is diagrammatic, which enables an efficient search for relevant items (Larkin and Simon 1987). Also, providing students with a graphical overview of the discussion may help them keep track of the discussion (Veerman 2000). In each phase, input is set up as a reply to the instruction; the two are integrated. This is expected to foster task-focused social interaction (Dillenbourg 2002). Also, having the task visually available while providing the input can be very useful in task-directed communication (Brennan and Lockridge 2006).

The second idea is that the structured interface is designed in such a way that its chance of being used is increased. The first factor contributing to its use is ease of use. It is well known of software in general that if it is easy to use, the chance that people will use it is increased (Collis and Pals 2000; Collis et al. 2001).

Also, the structured interface makes use of a standard discussion forum and will be familiar to many students, which also increases its chance of being used (Dillenbourg 2002). Finally, the instruction is repeated explicitly and integrated into the structured interface, and thus is directly available to the students; this will increase the chance that students will use it (Baker and Lund 1997).

Research questions and hypotheses

In this research we focus on adherence to the three phases of the script and on the quality of performance, and on how these are influenced by both the script and the structured interface. Our research questions are:

  1. What is the impact of the use of:

     • a three-phased script for performing a grounding discussion and

     • an implementation of the script as pre-fabricated messages in a discussion forum

     on:

     • the contribution to, discussion of, and results of the grounding discussion itself?

     • the focus and position that the grounding discussion occupies in the whole range of activities?

  2. In what ways are students guided by the assignment, the script, and the structured interface?

To address these research questions, an experiment was set up in which students performed a grounding discussion under one of three conditions: the script+interface condition, in which the script is presented using the structured interface described above; the script-only condition, in which the script is presented to the students in an instruction but not implemented in the interface; and the unscripted condition, in which no script is presented to the students. Our main hypothesis is that presenting the script to students and using the structured interface support students in performing these phases. We expect both the script and the structured interface to lead to more and better input, discussions, and summaries.

We hypothesize that the script and the structured interface have separate, cumulative effects. We see the script as a tool to perform the assignment and the structured interface as a tool to perform the script. Four hypotheses were derived:

Hypotheses

  1. The input of own knowledge and objectives,

  2. the discussion on differences and the detection of specific differences between group members,

  3. the resulting summary, and

  4. adherence to the phases of the script and to the prescribed position of the grounding discussions within the assignment as a whole

will qualitatively and quantitatively be best in the script+interface condition, intermediate in the script-only condition, and poorest in the unscripted condition.

We did not investigate the effects of adherence to the three phases of the scripts, such as learning effects or effects on group cohesion.

Method

Setting

The script was written for and used in an online course on sustainable development related to the enlargement of the European Union. The course is called the European Virtual Seminar and has run every year since the late 1990s. In the course, students work in small groups of four to six students on one of five themes: agriculture, communication, geoconservation, space, and water. Group division is based on student interest and on an equal distribution over topics. The students' task is to write a report that might be useful to the European Commission. Data were taken from the course that ran from October 2005 until January 2006. The activities of the course in that year were, in chronological order: provide individual information, generate a common group definition of 'sustainable development,' perform two grounding discussions, write a common research plan, carry out research, and write a research report. Students had a period of six weeks for the grounding activities and the writing of a research plan together.

Participants

In this course, students from several institutions in several countries worked together in groups of three to five students. Within each group, all students came from different countries and had never met or worked together before. Each group was assigned one of the conditions. Students who were on the student list but did not participate in any discussion, either in the forum or in chats, were excluded from the research. Ten groups, with a total of 42 students and group sizes varying from three to five students, were investigated.

Procedure

Each student group had to carry out two grounding discussions. Figure 3 shows the assignment as it was presented to all students.

Fig. 3 Assignment presented to students in all conditions

Materials

Students used a Blackboard environment in which all of their discussion was supposed to take place. In the learning environment, they also found the instructions for the course. There were three conditions. The unscripted group received only the assignment (see Fig. 3); two forums, with a user interface as in Fig. 2, were allocated for the grounding discussions, and at the beginning of the course these forums were empty. The script-only group received the assignment plus instructions in a Word document on how to conform to the three phases of the script (see Fig. 1); they too had two empty forums for the grounding discussions. The script+interface group received the assignment, and the script was embedded into the two forums for the grounding discussions: as explained in the section on the structured interface (see Fig. 2), all questions and tasks that the students had to perform were formulated in separate, pre-prepared messages, one message for each question or task, to which students could reply.

Data collection

All messages that were produced in the Blackboard environment during the grounding discussions and the formulation of the research plan were collected. The original plan was to collect only data from the grounding discussions, but in practice we found that the grounding discussion and working on the research plan were not kept apart by all student groups.

Data collection included not only utterances in the grounding discussion forums, but also data in the other forums that were related to the grounding discussions and the research plan and data from chats that were recorded in the Blackboard environment.

Data outside the Blackboard environment, such as unrecorded chats, chats outside Blackboard, and email discussions, were out of the researchers' reach; in some groups, reference was made to such data.

Validity issues

We agree with many authors on qualitative research that 'researcher bias' is to be expected in a study like this, which relies on the interpretation of qualitative data. We did not expect that a person not trained in our coding system would arrive at exactly the same results as we did. From a practical point of view, it was not feasible within this study to train a second person in performing the analysis. To enhance the validity of our research, we therefore resorted to an independent audit (Smith 2003; Yin 2003). We set up a detailed code book and asked two of our fellow researchers to check whether the procedure described in the code book would lead to the results that we found. For each hypothesis tested, a random selection of materials from one group in each condition was made. Cases in which the outcomes of the auditor and the researcher differed had two causes. In the majority of cases, the description in the code book was not complete or accurate enough, and the code book was adjusted accordingly. In a minority of cases, a difference in interpretation occurred, which was resolved by discussion.

Analysis

As we focused on the differences between the three conditions, we used an ANOVA with planned contrasts to test the four hypotheses. The effect of the script was measured by testing the contrast between the unscripted condition on the one hand and both scripted conditions on the other. The effect of the interface was measured by testing the contrast between the script-only and the script+interface condition. These quantitative analyses were complemented by qualitative analyses in which we searched all messages for signs that students were led by the script and/or the structured interface. To investigate these signs, we collected all verbal references to the script and the structured interface in the students' messages.
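As an illustration, the sketch below shows how such planned contrasts can be computed by hand; the data are invented, and the contrast weights encode the two comparisons described above (unscripted versus both scripted conditions, and script-only versus script+interface). This is a minimal sketch of the standard contrast t-test, not a reconstruction of our actual analysis scripts.

```python
import numpy as np
from scipy import stats

# Invented per-group scores; order: unscripted, script-only, script+interface.
groups = [
    np.array([2.0, 3.0, 2.0]),        # unscripted
    np.array([5.0, 6.0, 5.0, 7.0]),   # script-only
    np.array([6.0, 7.0, 8.0]),        # script+interface
]

def contrast_test(groups, weights):
    """t-test for a planned contrast, using the pooled within-group variance (MSE)."""
    means = np.array([g.mean() for g in groups])
    ns = np.array([len(g) for g in groups])
    df = ns.sum() - len(groups)
    mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / df
    estimate = (weights * means).sum()
    se = np.sqrt(mse * (weights ** 2 / ns).sum())
    t = estimate / se
    p = 2 * stats.t.sf(abs(t), df)
    d = estimate / np.sqrt(mse)       # effect size: contrast estimate in pooled-SD units
    return t, p, d

# Effect of the script: unscripted vs. the average of the two scripted conditions.
print(contrast_test(groups, np.array([-1.0, 0.5, 0.5])))
# Effect of the interface: script-only vs. script+interface.
print(contrast_test(groups, np.array([0.0, -1.0, 1.0])))
```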

Measures and results

Hypothesis 1: The input of own knowledge and objectives

The amount of input was measured by assigning each student a score indicating how many of the seven input questions (see Fig. 1) they answered (unit of analysis: student; N = 42) and by the length of the inputs in words (unit of analysis: group discussion; N = 19). The results show that students in both scripted conditions provided more inputs than students in the unscripted condition (p < 0.001, Cohen's d = 1.8); differences between the two scripted groups were not significant at the 0.05 level. Students in the scripted conditions also provided longer inputs (p < 0.001, Cohen's d = 1.0), and students in the script+interface condition provided longer inputs than students in the script-only condition (p = 0.005, Cohen's d = 1.3).

Furthermore, we found several signs indicating that students were led by the script (second research question). A first apparent sign was that in the scripted conditions, students tended either to answer all the questions belonging to one discussion or none of them. To test this conjecture, we assigned each student a score of one if they answered either all or none of the questions of a discussion, and a score of zero if they answered only some of the questions. An ANOVA showed that more students in the scripted conditions filled in either all or none of the inputs belonging to one of the discussions (p < 0.001, Cohen's d = 1.5); a similar difference was found between the two scripted groups (p < 0.001, Cohen's d = 0.9).
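A minimal sketch of the two per-student scores used here (the data and student names are invented for illustration):

```python
# For each student, True/False per input question: whether the question was answered.
answers = {
    'student_A': [True] * 7,                                      # answered all seven
    'student_B': [True, True, False, True, False, False, False],  # answered some
    'student_C': [False] * 7,                                     # answered none
}

for student, answered in answers.items():
    amount = sum(answered)                   # score for the amount of input (0..7)
    all_or_none = int(amount in (0, 7))      # 1 if all or none were answered, else 0
    print(student, amount, all_or_none)
```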

In the script-only condition, most of the students in the four groups referred to the script, most often by incorporating the script phrase into their answer:

Tell what you know about reports for the European Union. Also tell about your expectations: what do you expect a report for the European Union to consist of

I must admit I don’t know a lot about reports for the European Union.

I suppose there should be detailed description of given case, analysis and some suggestion for solving the problem.

In the script+interface condition, all students provided input by replying to the built-in message (see Fig. 2). As in this condition the structured interface provided slots for students to place their input, we found examples of students who corrected their wrongly placed inputs. There was also one group that did almost all of its discussion via chat, but nevertheless filled in its input slots long after it had started its discussions, as if it were something the members felt they still had to do.

Hypothesis 2: Discussion on differences

First, we measured whether or not a discussion phase on differences between group members took place (unit of analysis: group; N = 10). If one or more messages referred to the differences, this was counted as a discussion. In total, only three groups performed a total of four discussions on differences among the group members. None of the three groups in the unscripted condition performed a discussion on differences; in the script-only condition, one out of four groups did, and in the script+interface condition, two out of three groups did. The difference between the unscripted group and the two scripted groups was significant (p = 0.040, Cohen's d = 0.7).

Several measures of the quality of the four group discussions did not yield significant results. These include: number of utterances, number of words, specificity, and proportion of group members mentioned.

The three groups that performed a discussion on differences between group members did so in entirely different ways. It is not clear, however, how these differences relate to differences in condition.

Hypothesis 3: Summary

There were no significant differences between group discussions in the quantity or quality of the summary. Differences measured include whether or not a summary was made, summary length, proportion of group members mentioned, whether or not there was discussion on the summary, number of versions of the summary, specificity, and whether there was a final version of the summary.

Hypothesis 4: Adherence to the phases of the script and the position of grounding discussions among other research activities

To measure differences between groups as to how well they adhered to the prescribed order between the grounding discussions and other research activities, we assigned to all messages one of the following three codes: (1) Grounding discussions: individual knowledge and objectives; (2) Discussion on group subject and objectives; (3) Research activities and research plan. According to the script for the whole course, the grounding discussions should occur before discussions on group subject and objectives (unit of analysis: message (N = 263)). Messages were divided into six categories:

  1. Messages on individual knowledge and objectives that occurred before any messages on group subject and group objectives

  2. Messages on group subject and group objectives that occurred after any message on individual knowledge

  3. Messages on individual knowledge and objectives that co-occurred with messages on group subject and group objectives

  4. Messages on group subject and group objectives that co-occurred with messages on individual knowledge and objectives

  5. Messages on individual knowledge and objectives that occurred after any messages on group subject and group objectives

  6. Messages on group subject and group objectives that occurred before any message on individual knowledge

Messages of types 1 and 2 conform to the correct order and can be considered 'good' messages; types 3 and 4 are 'intermediate' messages that deviate from the correct order in that two activities overlap; types 5 and 6 indicate the reverse of the correct order and can be considered 'bad' messages. We assigned the good messages a score of 0, the intermediate messages a score of 1, and the bad messages a score of 2.
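The scoring of the six message types can be summarized as follows; the aggregation into a per-group score is our sketch of the procedure, since the category assignment itself was done by hand from each group's message timeline:

```python
# Map each message type to its order-adherence score:
# types 1-2 follow the correct order ('good'), types 3-4 overlap ('intermediate'),
# types 5-6 reverse the prescribed order ('bad').
SCORE = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2}

def order_adherence(message_types):
    """Mean order-adherence score over one group's messages (0 = perfect order)."""
    return sum(SCORE[t] for t in message_types) / len(message_types)

# Example: a group that starts with individual issues, briefly overlaps, then
# moves on to group issues in the prescribed order.
print(order_adherence([1, 1, 3, 4, 2, 2]))  # -> 0.33
```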

The score on 'bad' messages was much higher in the unscripted condition than in the scripted conditions (p < 0.001, Cohen's d = 0.8) and much higher in the script-only condition than in the script+interface condition (p < 0.001, Cohen's d = 0.9). This indicates that students in the scripted conditions, and especially in the script+interface condition, adhered much better to the prescribed order of first having the individual grounding discussions and only then discussing group issues.

Based upon this division of messages, the activities of each group can be divided into different phases. Each group may or may not have one phase in which messages on individual knowledge and objectives (type 3) and messages on group subject and objectives (type 4) co-occur. Before or after this phase, there can be either a phase in which only messages on individual knowledge and objectives occur (type 1 or 5, depending on its position), or there can be a phase in which only messages on group subject and objectives occur (type 2 or 6, depending on its position). The results for each group are displayed in Table 1.

Table 1 Discussion pattern in each group, showing the order of message types within the discussion

With respect to the start of the discussion, only groups in the scripted conditions started with individual issues: five out of seven scripted groups did, and none of the unscripted groups. Conversely, only unscripted groups started with group issues (two out of three). With respect to the end of the discussion, two out of four script-only groups discussed individual issues after discussing group issues, with a total of 17 messages. Messages on group issues occurred at the end only in the script+interface and the unscripted conditions.

We investigated whether the condition influenced both the absolute and the relative effort spent on the three different activities of the course: (a) discussing group subject and objectives; (b) inputting and discussing individual knowledge and objectives; and (c) setting up the research plan and performing research activities (unit of analysis: group; N = 10). Absolute effort was measured as the average number of messages per student for each group. As this measure is very sensitive to missing data and to differences in discussion styles, some groups being more 'wordy' than others, we also measured the relative effort in each group, namely the proportion of messages that fell within each of the three categories.
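A sketch of the two effort measures, using invented message codes (i = individual knowledge and objectives, g = group subject and objectives, r = research activities):

```python
from collections import Counter

def effort(categories, n_students):
    """Absolute effort (messages per student) and relative effort (proportions) per category."""
    counts = Counter(categories)
    absolute = {c: n / n_students for c, n in counts.items()}
    relative = {c: n / len(categories) for c, n in counts.items()}
    return absolute, relative

# Example: one group of four students and the codes of its eight messages.
print(effort(['i', 'i', 'i', 'g', 'g', 'r', 'r', 'r'], n_students=4))
```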

The results show that both the total and the proportional number of messages spent on group issues were larger in the unscripted group than in the two scripted groups (p = 0.048, Cohen's d = 2.7 and p = 0.008, Cohen's d = 4.0, respectively). The proportion of messages spent on the grounding discussions was larger in the two scripted groups than in the unscripted group (p = 0.004, Cohen's d = 1.6).

Discussion

Is performing a grounding discussion, in which students have to collect and summarize specific information on the members of the group, supported by a script which structures this discussion into an input, discussion, and summary phase, and/or by a structured interface into which this script is implemented?

In general, this question can be answered positively. Our experiment showed several aspects on which the script and/or the structured interface had a positive effect in the hypothesized direction, while there were no aspects on which they had a negative effect. The effects that we found were mostly related to quantities and order, such as the number of messages and inputs; no differences related to the quality of discussion or summary were detected.

The script has an impact on making knowledge explicit, which is the aim of the input phase. Scripting increases the amount of knowledge that is made explicit by increasing the number of input questions that are answered. The structured interface has an additional effect by increasing the effort, namely the number of words, which is devoted to each input question. Furthermore, the structured interface more or less forced students to answer all the input questions in one grounding discussion once they had answered one of them.

The script also provides support in becoming aware of differences between group members, which is the aim of the discussion phase. In the scripted conditions, more students performed a discussion on differences.

No effect of the script was detectable on summarizing the knowledge and objectives of the group members, which is the aim of the consensus phase. Because of a lack of data, we were unable to test whether, as the script intends, more input leads to more discussion, which in turn leads to more summaries of better quality, the ultimate aim of the script.

This guidance by the script and the structured interface was visible not only in the effects on input and discussion, but also in participants' behavior. Students in the script-only condition referred to questions of the script, and students in the script+interface condition replied to the prefabricated messages.

The script and the structured interface also affected the focus and position that the grounding discussion occupied in the whole range of activities. In the scripted conditions, students spent more messages on the grounding discussions and fewer messages on group issues. The order of the activities was affected by the script, with an additional effect produced by the structured interface: in the script+interface condition, students adhered much better to the required order of first performing grounding discussions and then continuing with group activities.

Our research contributes to the growing, but still not very large, number of CSCL studies 'in the wild,' in which the researcher is not the one who presents and administers the task to the students, and which span a considerable period of time, in this case over two months. In contrast to more controlled experiments in schools, we see that people use a variety of tools in performing their tasks, including the ones under investigation. Our research shows that this is not simply a hindrance to investigation (as presented by Dillenbourg 2002), but a very useful source of information. For example, it showed how constraining the structured interface is, as misplaced messages were corrected and inputs provided in a chat were later filled in in the right interface slots. And, in spite of these sources of variation, and in spite of the small number of participants, we detected some robust effects at small p values and with mostly large effect sizes.

Furthermore, our study shows the value of including not only the task investigated, but also its surrounding tasks. Our research showed considerable variation in how students integrated the investigated task with other tasks, variation that would not exist in a more controlled setting. It has shown that the script and structured interface may lead to more attention to the task at hand, but at the expense of effort devoted to other tasks. This may be part of the explanation as to why, in general, studies find no positive effects of scripting on knowledge acquisition (Hron et al. 1997; Strijbos 2004), or even negative effects (Mäkitalo et al. 2005).

Our research has shown that a structured interface can have an additional effect, cumulative with that of the script. However, we do not yet know why the structured interface had some specific effects while the script had others; we only have some indications. In the introduction, we characterized the structured interface as being very familiar to the user, which increases the chance that it is used (Dillenbourg 2002), and very easy to use, which also increases that chance (Collis and Pals 2000; Collis et al. 2001) and decreases cognitive load. Indeed, the results showed that the students used the structured interface, apparently without many problems. Yet we do not know, for example, whether the structured interface led to more input because of a decrease in cognitive load.

Finally, our research refines the statement that scripts should not be too complex (Christoph 2006; Dillenbourg 2002; Saab 2005). It has shown that what matters is not only the extent to which the script controls students' behavior (as formulated by Mäkitalo et al. 2005), but also the way in which this is done. Our results argue for a distinction between the physical freedom that students have in following the script, which was very high, and their psychological freedom. In this latter respect, the structured interface proved to be very compelling, as was shown by the correction of misplaced messages and the filling in of the input slots after the input had been provided in a chat.

Our research has implications for practitioners who are involved in scripting grounding discussions, and probably also for practitioners scripting discussions in other types of complex problem solving. Considering that the script and structured interface have some robust beneficial effects and no detected negative effects, and that the structured interface is not only easy to use but also easy to implement in any existing virtual learning environment, practitioners may consider implementing the script in their own environment. There is one caveat, however: while the script and structured interface have a beneficial effect on the grounding discussions, they also lead to less effort spent on group issues. There is thus a question of how to balance the attention paid to grounding discussions against other activities.