1 Introduction

Collaborating with peers can be an effective arrangement for learning; however, research has shown that we cannot expect all learners to collaborate effectively without further support (Rummel & Spada, 2005). Among the core challenges for collaborative learning are the coordination and regulation of the group’s interaction. Research on computer-supported collaborative learning (CSCL) has investigated different means of scaffolding to support groups. In this chapter, we focus on social interaction in groups and present social group awareness tools as a means to support groups in regulating their interaction processes. We conceptualize group awareness tools as sources of feed-back because they provide groups with information regarding the interaction between their members. Groups can then use this information (i.e., feed-back) to adapt the interaction between their group members, that is, to improve the future interaction in the group. While previous studies have accumulated evidence for the effectiveness of group awareness tools (Janssen & Bodemer, 2013), the mechanisms behind their effectiveness are not yet well understood, and a framework of the respective mechanisms is lacking.

Unlike other means of collaboration support such as collaboration scripts (Kollar et al., 2018), group awareness tools provide groups with feed-back on their past performance or interaction without explicitly suggesting potential regulatory actions (i.e., feed-forward). As with instructional feed-back or peer feed-back, students need to actively engage with the feed-back from group awareness tools to benefit from it (Lipnevich & Panadero, 2021). In line with this assumption, Janssen et al. (2011) found that the amount of time that groups attended to the group awareness tool moderated the effect of the tool on the distribution of participation in the group. Other studies found that groups require additional help in interpreting the feed-back from the group awareness tool (e.g., Jermann & Dillenbourg, 2008) or that some groups may require additional support in deriving effective regulatory actions (Dehler et al., 2009; Strauß & Rummel, 2021b). With this in mind, we seek to identify potential boundary conditions that may help or hinder groups in leveraging the feed-back from group awareness tools. Afterwards, we review previous studies on group awareness tools and present two small-scale field experiments from our own research in which we explored the processes of feed-up, feed-back, and feed-forward. We conclude this chapter by discussing potential factors that may affect whether a group is motivated and able to leverage the feed-back provided by a group awareness tool effectively.

2 Supporting Collaboration with Group Awareness Tools

2.1 Feed-Back on Interaction: How Group Awareness Tools Guide Collaboration

Collaborative learning refers to “a situation in which two or more people learn or attempt to learn something together” (Dillenbourg, 1999, p. 1, emphasis in original), or more specifically “[…] a coordinated, synchronous activity that is the result of a continued attempt to construct and maintain a shared conception of a problem” (Roschelle & Teasley, 1995, p. 70). Years of research have shown the benefits of collaboration for domain-specific knowledge as well as for collaboration skills (Chen et al., 2018; Hattie, 2009; Jeong et al., 2019; Pai et al., 2015; Tenenbaum et al., 2020).

The effectiveness of collaboration for learning stems from productive interaction between the group members. This includes interactions that serve processing of information to solve the joint task (e.g., giving explanations, cognitive modeling, see King, 2007), processes that allow a group to monitor and regulate collaboration processes, as well as the group members’ motivation and affective states (Järvelä et al., 2016; Kirschner et al., 2015). Collaborating with others increases the demands for regulation because the members of a group not only need to regulate their own learning (self-regulated learning, SRL), but, in addition, the group members need to support each other during their regulation (co-regulation), and all members of the group explicitly need to align their perception and regulate as a group (socially-shared regulation) (Järvelä et al., 2016).

Soller et al. (2005) proposed a model of how groups regulate their collaboration and how technology can scaffold this regulation. Their model draws on the cybernetic idea of homeostasis (Umpleby & Dent, 1999; Wiener, 1949), which conceptualizes a group as a system that seeks to achieve an equilibrium. The group reacts to imbalance (disequilibrium), that is, a difference between the current state and a desired goal-state, with regulation. This regulation aims at returning the system to an equilibrium. According to the model by Soller et al. (2005), regulation of collaboration occurs in five phases. During the first phase, the group collects data on the current state of a relevant aspect of the system, such as information on the participation of the individual group members. In the second phase, the group develops a model of the interaction by aggregating the data into indicators that characterize the current state of the collaboration in terms of the desired aspect (e.g., the distribution of participation). In the third phase, the group uses these indicators to compare the current state with a desired goal-state (e.g., equal participation). The desired goal-state can be set by the group itself (descriptive collaboration norm) or by an external agent such as a teacher (prescriptive collaboration norm). In the fourth phase, the group is expected to regulate if an imbalance has been detected, that is, if the current state and the desired state differ. For example, a group could redistribute tasks so that all group members participate. In the fifth and final phase, the group evaluates the success of the regulation, that is, whether the regulatory action restored the equilibrium and the desired goal-state is achieved. If this is not the case, the group will repeat the cycle until it achieves an equilibrium.
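The five phases can be thought of as a feed-back loop. The following Python sketch is our own illustration of that loop, not part of the model by Soller et al. (2005); the function names, the goal share of 25%, and the 10% tolerance are arbitrary illustrative choices.

```python
def collect(log):
    """Phase 1: gather raw data on the current state, e.g., word counts per member."""
    return dict(log)  # in a real system: query the environment's event log

def model(counts):
    """Phase 2: aggregate the data into an indicator, here each member's share of words."""
    total = sum(counts.values())
    return {member: n / total for member, n in counts.items()} if total else {}

def compare(shares, goal_share, tolerance=0.10):
    """Phase 3: compare the current state to the desired goal-state (equal shares)."""
    return {member: share - goal_share
            for member, share in shares.items()
            if abs(share - goal_share) > tolerance}

def regulate(deviations):
    """Phase 4: derive a (here: trivial) regulatory action per deviating member."""
    return {member: ("encourage to contribute more" if deviation < 0
                     else "ask to make room for others")
            for member, deviation in deviations.items()}

# Phase 5 (evaluate) would re-run the cycle on fresh data and check whether
# the deviations have shrunk; if not, the cycle repeats.

counts = {"A": 400, "B": 350, "C": 200, "D": 50}
actions = regulate(compare(model(collect(counts)), goal_share=0.25))
```

In this example, members A and D deviate from the goal-state by more than the tolerance, so only they receive a regulatory suggestion; B and C are treated as being in equilibrium.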

Given the central role of interaction for the effectiveness of collaboration and collaborative learning, it is important to note that fostering the regulation of interaction can be a target of instructional support (see Meier et al., 2007; Rummel, 2018). Regulation on this social plane of collaboration requires information about past and current states of the interaction in the group (i.e., feed-back), for example, information about the knowledge and skills of the other group members, or about who is currently working on which part of the joint task. Once gathered, this information (i.e., feed-back) can serve the group as a basis for coordinating their interaction and improving its quality. The notion of gathering information about the actions of the other team members can be found in the concept of group awareness (Schnaubert & Bodemer, 2022), which Dourish and Bellotti (1992) defined as “an understanding of the activities of others, which provides a context for [one’s] own activity” (Dourish & Bellotti, 1992, p. 107). If the intention of the information is to increase a team’s performance, small group researchers refer to it as “team feed-back”, whereas feed-back that focuses on processes or psychological states in the team is termed “team mediator feed-back” (Handke et al., 2022).

The concept of group awareness has been introduced to the field of CSCL (Schnaubert & Bodemer, 2022), where it led to the development of so-called group awareness tools (GATs) (Bodemer et al., 2018). These tools collect data from the collaboration environment (e.g., keystrokes, logged actions, self-reports) and visualize the data for the group (Buder, 2011). Research on CSCL has investigated different types of GATs. While cognitive group awareness tools visualize different aspects of the knowledge that is available in the group (e.g., Dehler et al., 2011; Engelmann & Hesse, 2011; Ollesch et al., 2021), social group awareness tools provide information about processes and states of group members, such as participation (Bachour et al., 2010; Janssen et al., 2011; Ollesch et al., 2021; Strauß & Rummel, 2021b), how much information has been shared (Kimmerle & Cress, 2009), or how the members of a group perceive each other (Phielix et al., 2011). In this chapter, we focus on social group awareness tools, specifically those that provide groups with information about the interaction within the group, for example by visualizing the distribution of participation (i.e., the result of individual participation during collaboration).

GATs provide groups with a visualization which can be characterized as team (mediator) feed-back (Handke et al., 2022), that is, information regarding past performance or the current state of the collaboration. The group can utilize this feed-back to improve its performance, that is, the quality of the interaction in the group (Carless & Boud, 2018; Handke et al., 2022; Hattie & Timperley, 2007; Lipnevich & Panadero, 2021). The information provided by a GAT does not contain information about potential desired goal-states (feed-up) or guidance regarding potentially helpful strategies (feed-forward). In this regard, unlike other means of directive collaboration support such as collaboration scripts (Kollar et al., 2018), GATs provide “tacit guidance” (Bodemer, 2011). Based on research on students’ use of feed-back for learning (Lipnevich & Panadero, 2021; Winstone et al., 2017), we assume that the learners of a group also need to take an active role and process the information from a GAT in order to determine whether regulation is necessary and what actions may help them achieve a more desirable state.

Thus far, research on GATs has not provided a comprehensive framework that specifies how GATs support collaboration. Therefore, we will briefly summarize potential mechanisms that are mentioned throughout the GAT literature.

First, a GAT visualizes information for the group, such as the distribution of participation. This visualization makes a particular aspect of the collaboration more salient and thus draws the learners’ attention to it (Bachour et al., 2010; Carless & Boud, 2018; Pea, 2004). The GAT is thus expected to increase the likelihood that a group focuses its efforts on regulating this aspect. Second, group awareness information serves as negative feed-back for a group (Jermann & Dillenbourg, 2008), which allows the group to assess whether and to what degree the current state of the collaboration deviates from a desired goal-state. A discrepancy between the current state of the collaboration and the desired goal-state may then lead to a reflection process within the group, which can eventually trigger regulation. Especially continuous feed-back can be expected to facilitate monitoring the progress towards the desired goal-state (e.g., Carless & Boud, 2018; Soller et al., 2005; Webb & de Bruin, 2020), which helps groups regulate their collaboration (Harkin et al., 2016; Webb & de Bruin, 2020).

Finally, a graphical representation of the group members’ behavior makes the individual group members more visible and increases individual accountability, which has been shown to be an important predictor of effective collaboration (Handke et al., 2022; Johnson & Johnson, 2009). Feed-back regarding participation also reduces learners’ uncertainty about their peers’ activity and thus supports trust (Robert, 2020; Walther & Bunz, 2005), may reduce social loafing (e.g., Johnson & Johnson, 2009; Price et al., 2006), and promotes social comparison, which motivates group members to contribute (Michinov & Primois, 2005). Despite the lack of a comprehensive framework covering the mechanisms behind GATs, several studies have investigated their effects on collaboration, which we summarize in the following section.

2.2 Prior Research on Social Group Awareness Tools

Social GATs provide groups with information regarding the functioning of the group, that is, the behavior of the group members, their presence, and their perception of the group (Bodemer & Dehler, 2011; Janssen & Bodemer, 2013). While previous studies did not find positive effects on group performance when a GAT visualized the distribution of participation (Janssen et al., 2007a, 2007b, 2011) (cf. Jongsawat & Premchaiswadi, 2009 for a contrary result), research found positive effects of GATs on collaboration processes. For example, when receiving a GAT that visualized participation, group members authored longer dialogue acts (Janssen et al., 2007a, 2007b; Kimmerle & Cress, 2009; Kimmerle et al., 2007; Lin & Tsai, 2016) (cf. Jermann & Dillenbourg, 2008; Jongsawat & Premchaiswadi, 2009 for different results), showed more coordination of social activities (Janssen et al., 2007a, 2007b), or reported higher group cohesion (Leshed et al., 2009) than groups without the GAT. Interestingly, research did not find direct effects of GATs that provide information about participation on the distribution of participation (Janssen et al., 2007a, 2007b, 2011; Strauß & Rummel, 2021b). Instead, this effect is mediated by the time that the students in the group have the GAT open (Janssen et al., 2011). Similarly, Bachour et al. (2010) found that groups achieved a more equal participation when the group members perceived equal participation as important. These results suggest that groups can leverage feed-back on their interaction and adapt their collaboration. However, as mentioned above, research on feed-back has shown that simply providing students with access to feed-back does not guarantee positive effects (Lipnevich & Panadero, 2021; Winstone et al., 2017). In line with this, studies on social GATs suggest that not all groups benefit from a GAT (Dehler et al., 2009) and that some may require additional guidance (Clarebout & Elen, 2006; Janssen et al., 2011).
In our own research, we therefore investigated whether additional explicit support helps groups activate adequate regulation strategies. We offered groups a combination of a GAT and adaptive collaboration prompts that both targeted regulating the distribution of participation in the group (Strauß & Rummel, 2021b). Our analyses revealed that the distribution of participation became more even over time, but groups that received the combination of a GAT and prompts did not achieve a significantly more even distribution of participation. Exploring students’ perceptions of the support further suggested that students rather used the GAT to regulate their own participation instead of discussing the distribution of participation with the group. In addition, students reported that the feed-back from the GAT was useful but that it was difficult to regulate on the group level. These results led us to the general question of boundary conditions for social GATs. The results of our study specifically highlighted two questions. The first question concerned whether students require a dedicated opportunity to process the information displayed by the GAT with the goal of assessing whether regulation is required and how it can be achieved. Second, the results call into question whether using the number of words to operationalize participation and using this metric for the GAT may fall short of capturing the phenomenon of participation. If the GAT does not provide groups with a useful indicator, they may struggle with taking up the feed-back and translating it into productive interaction. To shed some light on these questions, we conducted two small-scale field experiments, which we summarize below.

3 Our Research: Scaffolding Collaborative Reflection and Using Self-reports to Assess Participation

In this section, we present two small-scale field experiments and summarize the central findings. These two field experiments were based on findings from an earlier study (Strauß & Rummel, 2021b) and explored two hypotheses concerning potential boundary conditions for the effectiveness of a social GAT. The first field experiment addressed the question of whether groups benefit from additional guidance for feed-back take-up and reflection; the second experiment explored the effects of two different data sources for the GAT, that is, a system-generated indicator of participation (number of words) and a peer-generated indicator (self-report of one’s own participation).

A premise underlying our studies was that equal participation is crucial for collaborative learning, as the effectiveness of collaboration for learning and problem solving is based on interaction between the members of a group. Productive interactions are less likely to occur if only a few group members actively participate in the collaboration. As a result, less active group members will benefit less from the collaboration. As outlined earlier, effective collaboration requires interaction to serve goals such as achieving a shared understanding (Baker et al., 1999; Clark & Brennan, 1991), pooling unshared information (Deiglmayr & Spada, 2011; Stasser & Titus, 1985), or regulating the interaction (Järvelä et al., 2016; Panadero & Järvelä, 2015). If not all learners participate evenly in these processes, a group may not achieve its goal, for example because not all group members shared the information that was required for finding a good solution to the joint problem. In addition, studies report that learners experience frustration when not all members of their group actively contribute to the joint task (Strauß & Rummel, 2021a), as well as dissatisfaction with the collaboration that grows the more unevenly participation is distributed in the group (Strauß & Rummel, 2021b).

Thus, we sought to support groups in regulating the distribution of participation. GATs have been used in the past to facilitate the monitoring and regulation of collaboration, including the regulation of participation (Janssen et al., 2011; Jermann & Dillenbourg, 2008; Strauß & Rummel, 2021b). The notion underlying a GAT that visualizes the current distribution of participation is that the group can take up this feed-back and compare the current distribution of participation to a desired distribution. Previous research showed that an uneven distribution of participation is a source of frustration for students (for an overview see Strauß & Rummel, 2021a). Hence, it can be assumed that students aim to achieve an equal distribution of participation and thus try to regulate their interaction in a way that all group members contribute equally. However, most studies did not find direct effects of a GAT that displays group members’ participation on the distribution of participation in the group. In a recent study, we investigated whether groups may benefit from explicit guidance (i.e., adaptive prompts) in addition to the tacit guidance of a GAT (Strauß & Rummel, 2021b). The results of our study left open whether a combination of collaboration prompts and a GAT helps groups to regulate the distribution of participation. Exploratory analyses of students’ use and perception of the support indicated that groups may need additional support for leveraging the feed-back provided by the GAT, rather than explicit guidance regarding which actions may be useful given an uneven distribution of participation. Further, while using the number of words as an indicator of participation in an online environment is widespread, we acknowledge Hrastinski’s (2008) argument that operationalizing participation solely as the number of words that each group member contributed may offer an incomplete view of what participation includes.
Against this background, we designed two small field experiments: the first explored the effect of additional guidance that targets the process of taking up the information from the GAT and reflecting on it; the second explored the effect of a more holistic operationalization of participation.

Both experiments were conducted in an online course for university students that ran for fourteen weeks. On the university’s Moodle, students could access the learning materials for each of the six course topics (two weeks per topic), such as a lecture video, literature, and a quiz. During each topic, students worked in small groups to solve a collaborative task and create a joint answer text. Groups used a private group forum on Moodle for coordination and a private wiki to formulate their answer to the collaborative task.

Both studies took place during one course topic (i.e., two weeks), during which the students collaborated in groups of four to solve a collaborative task. In total, 104 students enrolled in the course and 84 (80.8%) agreed to participate in the study for a monetary reward. During collaboration, groups received a group awareness tool that was constantly available on every page of the learning environment (main page, group’s discussion forum, group’s wiki) in the right-hand margin of the Moodle environment.

To engage students with the GAT we followed the guidelines offered by Wise (2014) and Wise and Vytasek (2017) on how to implement learning analytics interventions in learning settings. First, the analytics should be integrated into the course and students should understand the goal of the analytics. Specifically, students need to be aware of the pedagogical goal of the current learning activity, understand what is considered effective engagement in this activity, and learn how the analytics help them monitor productive activity. We offered this information to the students in a familiarization message that explained the tool’s pedagogical intent, that is, that active and equal participation during collaborative assignments is important for successful problem solving. Further, we explained that the GAT provided an up-to-date visual representation of the current distribution of participation.

Second, learners should be free to interpret the analytics and choose regulatory behavior. Specifically, learners should be able to set goals individually and assess whether they were able to attain them. In our studies, we implemented a collaborative reflection activity (see below) which required students to set goals, monitor their progress towards these goals and decide whether regulation of the collaboration was necessary.

A third aspect that is expected to enhance learning analytics interventions is that students need a frame of reference that helps them interpret the analytics. In our studies, this frame of reference was created during the collaborative reflection activity, that is, the general instruction (i.e., being active and achieving equal participation) and by the goals set by the individual groups.

Finally, learners should have the freedom and opportunities to negotiate the analytics, either with the teacher or with peers. This was a core aspect in our studies as the students worked in groups and received feed-back on their collaboration, which they could freely view and discuss.

4 Field Study 1: Collaborative Reflection to Scaffold Feed-Up, Feed-Back, and Feed-Forward

In the first small-scale field experiment, we implemented a GAT together with a collaborative reflection activity as part of the regular group task, which was expected to help groups actively engage with the feed-back from the GAT. To learn more about the effects of the additional co-reflection, we compared groups that only received a GAT with groups that received the GAT and also collaboratively processed the information provided by the tool.

From a theoretical point of view, one important step during regulation is reflection (Butler & Winne, 1995). In collaborative settings, peers can serve as resources for critically questioning experiences and developing alternative perspectives (Kori et al., 2014). Yukawa (2006) defined collaborative reflection (co-reflection) as “cognitive and affective interactions between two or more individuals who explore their experiences in order to reach new intersubjective understandings and appreciation” (Yukawa, 2006, p. 206). Gabelica et al. (2014) refer to this concept as “team reflexivity”. A reflective team exhibits three behaviors. First, the team uses feed-back to evaluate the group’s past performance, for example by collectively discussing the performance on a joint task. Second, the team searches for alternative ways to perform such a task in the future, and eventually, the team arrives at a shared decision on which strategies should be enacted in the future (Gabelica et al., 2014).

Given that reflection has been conceptualized as a key process during regulation, and based on findings that emphasize that a designated phase for reflection benefits collaboration, we expected that providing groups with a collaborative reflection activity helps them make use of the feed-back from a GAT. For our studies, we adapted the co-reflection activity from Phielix et al. (2011), who designed their co-reflection activity by integrating the suggestions of Hattie and Timperley (2007). This activity tasked the groups with clarifying the goals of the current activity (feed-up), deciding whether progress is being made towards this goal (feed-back), and eventually deciding which activities are needed to progress towards the goal (feed-forward).

4.1 Sample, Materials, Procedure, Measures

We conducted the field experiment in a university online course. In total, 104 students enrolled in the course, of which 84 (80.8%) agreed to participate in the study. The study took place during the fourth topic of the course (i.e., week six of the course). By the time of the data collection, 51 participants (58.6% of the initial sample; 64.7% female; age: M = 24.00; SD = 3.45) were still active in the course. During the collaborative task, students collaborated for two weeks in groups of four. All groups received a GAT that visualized the participation of each group member as a bar graph (see Fig. 9.1).

Fig. 9.1 Group awareness tool displaying the number of words of a fictitious group

Each bar in the GAT represented the number of words that each group member had contributed (to the group’s forum and wiki) and was updated automatically whenever a student submitted a new contribution. A legend below the GAT identified the individual group members. On mouse-over, the GAT displayed the absolute number of words for each group member. Through a collapsible text box below the GAT, students could access a brief explanation of the GAT like the one they had received in the familiarization message. In addition, students could view the deadline of the current task and set individual to-dos by clicking on the buttons above the bar graph.

For the experiment, the students were randomly assigned to one of two conditions. Twenty-six students (six groups) received only the GAT during collaboration, while the remaining 25 students (six groups) received the GAT during collaboration and additionally performed the co-reflection activity.

At the beginning of the study, students in both conditions received a familiarization message which informed them about the role of active participation and how the GAT could assist them in achieving equal participation. Afterwards, students worked on the collaborative task for two weeks. At the end of the first week, half of the groups performed the collaborative reflection activity.

The reflection activity was designed similarly to the one presented by Phielix et al. (2011). We implemented the process of feed-up, feed-back, and feed-forward in the form of four questions that students answered in Moodle: (1) “In your opinion: How should participation be distributed during collaboration in a team like yours? Explain.” (feed-up, goal-setting) (2) “Take a look at the visualization: How well is the participation in your team currently distributed? Give a rating (scale 1 (bad) – 5 (good)) and explain your rating.” (feed-back, reflection) (3) “Examine the visualization again and post your rating of the current participation into the forum. Discuss together the ratings of the team members and agree on a rating.” (feed-back, reflection) (4) “Is it necessary to change the way you participate? Develop a plan and set specific goals for your team regarding the distribution of participation (Who? What? When?). Write down your plan in the Etherpad” (feed-forward, goal-setting). Students answered the first two questions individually to prepare for the following co-reflection and subsequently answered the last two questions collaboratively in the group’s discussion forum.

Over the weekend of the first week, the students in each group (1) individually set a goal for the distribution of participation in their group, and (2) individually reflected on the current distribution of participation as displayed by the GAT. At the beginning of the second week, the members of each group negotiated (3) whether regulation was necessary, and (4) how they can regulate their collaboration in terms of the distribution of participation.

To assess the distribution of participation, we used two measures. First, we calculated the gini-coefficient based on the number of words that each group member had contributed to the group’s discussion forum and the group’s wiki, where they created a text that included the solution to the collaborative problem. The gini-coefficient aggregates these word counts into a single value that represents the distribution of participation for each group, ranging from 0 (perfect balance) to 1 (perfect imbalance) (Dorfman, 1979).
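For illustration, the gini-coefficient can be computed from the per-member word counts as in the following minimal Python sketch. Whether the study applied the small-sample normalization n/(n − 1), which is needed for a single contributor in a group of four to yield exactly 1 rather than 0.75, is our assumption.

```python
def gini(word_counts, corrected=True):
    """Gini coefficient of a group's word counts: 0 = perfectly even
    participation, 1 = a single member contributed everything.

    corrected=True applies the small-sample normalization n/(n - 1);
    this is our assumption, since the uncorrected coefficient only
    reaches (n - 1)/n for a group of n members."""
    x = list(word_counts)
    n = len(x)
    total = sum(x)
    if n < 2 or total == 0:
        return 0.0
    # Mean absolute difference over all ordered pairs of members
    mad = sum(abs(a - b) for a in x for b in x) / (n * n)
    g = mad / (2 * total / n)  # divide by twice the mean contribution
    return g * n / (n - 1) if corrected else g

gini([250, 250, 250, 250])  # 0.0: perfectly even participation
gini([1000, 0, 0, 0])       # 1.0 with the correction (0.75 without)
```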

Second, to acknowledge students’ perception of the participation, we assessed perceived social loafing by asking students to rate how participation was distributed during the collaborative task (−5, only one group member contributed; +5, every group member contributed equally) (Aggarwal & O'Brien, 2008). As a proxy for engagement with the GAT, we asked students to indicate how frequently they had looked at the GAT on average during the two weeks of the collaborative task.

4.2 Results

Our manipulation check indicated that students complied with the co-reflection activity. Specifically, students who co-reflected contributed significantly more words to their group’s forum (M = 256.57; SD = 176.91) than students who only received a GAT (M = 79.76; SD = 68.85; U = 127.00, Z = −3.73, p < 0.05), and also reported that they used the GAT to regulate the collaboration more intensely (M = 3.00; SD = 1.18) than their counterparts (M = 1.82; SD = 0.81; U = 77.50, Z = −3.01, p < 0.05). However, against our assumptions, students who performed the co-reflection activity (M = 6.33; SD = 4.51) did not report having looked at the GAT more frequently than students in the GAT condition (M = 5.76; SD = 2.71; U = 176.00; Z = −0.07, p > 0.05).

We hypothesized that the additional co-reflection activity would help groups achieve a more even distribution of participation. Our analyses revealed tentative evidence for this hypothesis: the 17 students in the GAT condition rated the distribution as neither uneven nor even (M = 0.94; SD = 3.09), while the 21 students who performed the additional co-reflection reported that participation was more evenly distributed, as indicated by a larger positive value (M = 1.81; SD = 2.94). While this difference in means pointed in the hypothesized direction, it was not statistically significant (U = 140.00, Z = −1.14, p > 0.05). Further, we analyzed the distribution of the number of words that the students had contributed. Since the gini-coefficient is calculated for each group (i.e., the level of analysis is the group, not the individual student), the number of cases that enter the analysis decreases. Since the remaining sample of 12 groups does not allow for inferential statistics, we report descriptive statistics (see Table 9.1). Groups in the two conditions differed only slightly in terms of the total number of words (i.e., contributions in the group’s wiki and forum combined). On average, groups in both conditions reached a rather even distribution of overall participation, as indicated by gini-coefficients below 0.5.

Table 9.1 Distribution of participation in both conditions

Carefully inspecting the distribution of participation, we found groups in both conditions that achieved an almost perfect balance of participation (i.e., minimum values close to 0), as well as groups that did not achieve an even distribution of participation (i.e., maximum values approaching 1). One group reached a gini-coefficient of 1 because only one group member had contributed. It is important to note that this group may have been an outlier, as the group with the next-lowest value reached a gini-coefficient of 0.64. In comparison, the least successful groups that performed the co-reflection achieved a more even distribution of participation. Overall, the groups in this condition reached lower minima and maxima, which indicates a more even distribution of participation. In sum, our data indicate a trend that is congruent with our expectation that groups would benefit from collaborative reflection; however, the results were not statistically significant.
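For concreteness, the evenness measure used above can be sketched in code. This is one common, bias-corrected formulation of the gini-coefficient (which yields exactly 1 when a single member contributes everything, matching the outlier group described above); the study’s exact computation may differ.

```python
def gini(word_counts):
    """Gini coefficient of a group's word counts: 0 = perfectly even
    participation, 1 = a single member contributed everything.
    Applies the small-sample correction n / (n - 1)."""
    n = len(word_counts)
    total = sum(word_counts)
    if n < 2 or total == 0:
        return 0.0
    mean = total / n
    # mean absolute difference over all ordered pairs of members
    mad = sum(abs(a - b) for a in word_counts for b in word_counts) / (n * n)
    return (mad / (2 * mean)) * n / (n - 1)
```

For example, `gini([500, 480, 510, 495])` is close to 0 (balanced participation), while `gini([1200, 0, 0, 0])` equals 1.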

In a subsequent step, we explored the answers that students provided during the individual goal-setting activity (step 1) of the co-reflection task to learn more about students’ collaboration norms. To this end, we coded the answers regarding the optimal distribution of participation that the 25 students provided during the individual part of the co-reflection activity. During coding, we assigned a label to each response, grouped similar responses, and eventually aggregated them into overarching themes.

The individual answers revealed that students generally valued equal participation. However, we identified four nuances of this collaboration norm. We used representative quotes from the students to name these nuances. We termed the first nuance “The participation in a team should be evenly distributed”. Six students (24%) stated that all group members should contribute evenly to the joint task, although “minimal differences [in participation]” are still acceptable. One student reasoned that participation should be evenly distributed since the requirements for all students in the group are the same. Students mentioned no further boundary conditions or possible compromises.

We summarized the second nuance of this collaboration norm as “It’s normal that not everyone contributes the exact same amount, but the proportions should be right”. Most students (n = 14; 56%) noted that the distribution of participation may differ among the members of the group. Unlike students from the first category, students in this category included qualifiers such as “roughly” or “if possible”. For example, one student proposed dividing the work equally by the number of group members: “Everyone should contribute a part to the task. We are four people so we should divide the workload roughly (!) by four and then look through the results together.”

The third nuance can be summarized as “Essentially, the distribution should be even, but…”. Students who fell into this category (n = 3; 12%) advocated equal participation but also specified boundary conditions. While “participation should be equal by default” and also “fair and just”, multiple factors affect how evenly participation should be distributed. Students mentioned that the task, group members’ capacities, inactive group members, as well as the remaining time until the deadline should be considered. In addition, group members should get the chance to work on tasks that they can excel at. Students argued that uneven participation would be acceptable if a group member signaled early enough that they would not be able to contribute their fair share. In this case, the workload could be redistributed. Finally, students acknowledged that asynchronous tasks allow team members to work at their own pace, which may lead to uneven participation during the process but should even out towards the deadline.

Finally, we termed the fourth nuance “[…] it should become visible that every team member at least tried to contribute to the final result”. So far, most of the responses focused on the amount of participation. However, one student argued that “while the number of words does not indicate quality, a basic level of participation is required”. Specifically, the student noted that “every participant should say something” and “while not everyone needs to perform exactly equally, or write, that is, it should become visible that every member of the team tries to participate and contribute to the final result,” and had “[looked] into the topic”. In other words, any visible participation by the group members is appreciated.

Discovering these nuances led us to assume that not all groups may strive for an exactly even distribution of participation. Further investigating students’ collaboration norms may help us understand under which conditions groups initiate regulation and which goal-state they aim for. For example, these collaboration norms may serve as mediating or moderating variables for regulation and explain differences in the degree to which groups are motivated to achieve an even distribution of participation.

To summarize, we conducted this first small-scale field experiment based on the assumption that groups may require additional guidance on how to engage with the information provided by a GAT, rather than actionable suggestions for effective regulation. In this first small-scale field experiment, we investigated whether a collaborative reflection activity supports groups in leveraging the information from a GAT (i.e., information regarding the interaction in the group). Contrary to our expectations, the results of our field experiment indicate that triggering co-reflection (i.e., a sequence of feed-up, feed-back and feed-forward) does not significantly affect the distribution of participation during online collaboration. While descriptive trends point in the hypothesized direction, the data reported above need to be interpreted with great care due to the limited sample size. In addition to comparing means between the two experimental conditions, we identified different collaboration norms that students may hold about the distribution of participation. We hypothesize that these collaboration norms affect under which circumstances, and towards which goal-state, the members of a group will regulate the distribution of participation within the group.

5 Field Study 2: Contrasting System-Generated Feed-Back and Peer-Generated Feed-Back

The second question that arose from our field experiment (Strauß & Rummel, 2021b) concerned the operationalization of participation. As discussed above, using the number of words contributed by each group member captures only one dimension of participation (Hrastinski, 2008). We explored this question in a second field experiment that we conducted in the same online course. Based on the promising results of the first study, we tentatively assumed that groups benefit from a co-reflection activity and thus required students to answer the four reflection questions outlined above. To address the question of the operationalization of participation, we developed a second version of the GAT that asked students to provide their peers with information about their own participation (i.e., peer-generated feed-back).

5.1 Using Peer-Generated Feed-Back to Include a More Holistic Operationalization of Participation

One potential limitation of the design of our earlier study (Strauß & Rummel, 2021b) was that we used the number of words as an indicator for participation during web-based collaboration. While this operationalization is common in research on e-learning and computer-mediated collaboration, Hrastinski (2008) argues that participation can be viewed more holistically. In his review, he identified six concepts of online learner participation: (1) Participation as accessing the e-learning environment, (2) participation as writing, (3) participation as quality writing, (4) participation as writing and reading, (5) participation as actual and perceived writing (i.e., a student makes contributions that are perceived as useful), (6) participation as taking part and joining in a dialogue. Further, he acknowledges that participation may also occur off-system (i.e., offline), for example when students research and read material, or make notes outside the e-learning environment. Importantly, some of these dimensions can be captured by computer systems (e.g., access to the collaboration environment, contributing words) while the remaining dimensions either require more complex computations (e.g., assessing quality writing, having read a contribution) or occur off-system and thus cannot be assessed automatically. If the indicator that is being used in the GAT does not suit the needs of a group, the group may not be able to assess the need for regulation. For example, the number of words provides information regarding the quantity of participation but does not capture the quality of the contributions which may stem from a group member investing a lot of their time into working through the learning materials.

Against the background of Hrastinski’s review, we explored the effect of incorporating a more holistic view of participation in the GAT. Since not all dimensions of participation can be captured through logged events from the learning management system Moodle, we decided to ask the members of the group to display their participation by filling in a short questionnaire on their participation during the collaborative task. Using self-reports as a data source for a GAT is more closely connected to the original idea of group members displaying important information to their peers in order to promote group awareness (Buder, 2011) and can be found to varying degrees in prior studies, for example as peer-assessment of social performance (Phielix et al., 2011), individual task perception (Hadwin et al., 2018), or meta-cognitive judgements (Schnaubert & Bodemer, 2019). Therefore, in this second field experiment, students provided the system with self-reports regarding their own participation, which were then visualized in the GAT. Thus, the GAT included feed-back regarding the distribution of participation that consisted of the students’ perceptions of their own behavior. In our field experiment, we contrasted this source of feed-back with providing groups with the number of words that each group member had contributed (i.e., system-generated feed-back). Again, we assumed that groups would use the feed-back to compare the current distribution of participation with a desired distribution (i.e., equal participation).

5.2 Sample, Procedure, and Materials

The study was conducted in the same course in which we had conducted the first field experiment. This second study began in week eight of the course. By then, 50 participants (59.5% of the initial sample; age: M = 23.96; SD = 3.48) who had agreed to participate were still active in the course. Again, the participants were randomly assigned to one of two conditions. Twenty-three students (six groups) received a GAT that displayed the number of words that each group member contributed (system-generated feed-back). The remaining 27 students (seven groups) were asked to provide information on their own participation through a short questionnaire. This information was then visualized in the GAT (peer-generated feed-back).

Depending on the condition, the bars in the GAT represented the number of words that each group member had contributed (group’s forum and wiki), or the results of the group members’ self-reports, respectively. The GAT that visualized system-generated feed-back was identical to the one used in the previous field study. The GATs that visualized peer-generated feed-back as well as the pop-up for the participation questionnaire are shown in Fig. 9.2.

Fig. 9.2

GAT that visualizes peer-generated feed-back on participation (right) and pop-up for participation questionnaire (center) for a fictitious group

The bars updated automatically whenever a student posted a new contribution or filled in the participation questionnaire. The participation questionnaire was presented as a pop-up window in Moodle and contained three statements that the students rated on a 5-point Likert scale: (1) “I have been reading the posts of my team mates”, (2) “I have been working on the team task by preparing contributions, reading or by thinking about the topics”, (3) “I have contributed (both online and offline) in a way that brought my team forward”. The questionnaire was displayed when a student logged into Moodle for the first time each day and reappeared each time the student returned to the main course page. Once a student had answered the questionnaire, it did not appear again for the rest of the day. Students could update their participation at any time via a button on the GAT. As in the previous field experiment, groups performed the co-reflection activity after the first week of the collaborative task.
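The once-per-day display logic of the pop-up described above can be sketched as follows; the class and method names are hypothetical illustrations, not taken from the actual Moodle implementation.

```python
from datetime import date

class ParticipationPrompt:
    """Sketch of the pop-up logic: the questionnaire appears on every
    visit to the main course page until it is answered, and is then
    suppressed for the rest of that day."""

    def __init__(self):
        self.answered_on = None  # date of the most recent answer, if any

    def should_show(self, today: date) -> bool:
        # show unless the student has already answered today
        return self.answered_on != today

    def record_answer(self, today: date) -> None:
        self.answered_on = today
```

A student who answers the questionnaire on Monday is thus not prompted again until Tuesday, but can still update their self-report via the button on the GAT.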

5.3 Results

The average distribution of the total number of words (gini-coefficient) within the six groups that received a GAT with system-generated feed-back was more even (M = 0.34; SD = 0.25) than the distribution in groups that received peer-generated feed-back (M = 0.46; SD = 0.16). Again, due to the small sample size of six and seven groups per condition and non-response to the questionnaires, we did not conduct inferential statistics.

Of the 50 students who participated in this field experiment, sixteen students from each condition (32 in total; 64%) responded to the questionnaire. The 16 students in the system-generated feed-back condition rated the distribution of participation as rather evenly distributed (M = 3.13; SD = 1.54), while the 16 students who received the GAT based on peer-generated feed-back perceived the participation as significantly less evenly distributed, as indicated by a value closer to zero (M = 1.94; SD = 1.88; U = 76.50, Z = −1.98, p < 0.05).

We further compared students’ perception of the different GATs (Table 9.2). Students who worked in a group that received a GAT visualizing the number of words rated the information in the GAT as significantly more helpful than students whose groups received a visualization of self-reported participation (U = 55.50, Z = −2.90, p < 0.05). Similarly, students in the system-generated condition rated the visualization of participation as more realistic than students in the peer-generated condition (U = 68.00, Z = −2.475, p < 0.05).

Table 9.2 Mean ratings of perceived helpfulness and perceived realism

Altogether, exploring the data of our study revealed a trend that system-generated feed-back led to a more even distribution of participation than peer-generated feed-back. Interestingly, group members perceived the peer-generated feed-back as less helpful and as a less realistic representation of the distribution of participation. Again, caution is warranted when interpreting the results of the field trial due to the small sample size. Nonetheless, we identified trends that paint a coherent picture: while participation may encompass more than contributing a certain number of words, students perceive this metric as more helpful and more realistic than their peers’ self-reports.

6 Discussion: What Are Boundary Conditions for the Effective Use of Feed-Back Regarding Collaboration?

For collaborative learning to unfold its potential, groups need to monitor their collaboration and assess the interaction in their group. To this end, they collect feed-back. In this chapter we argued that group awareness tools support groups in collecting feed-back on their collaboration, monitoring their collaboration, and adapting their interaction. However, groups do not benefit from the mere presence of these tools, nor can we take for granted that groups possess effective strategies to make use of the support.

We conceptualized social GATs as a means for feedback, specifically, feedback regarding the interaction. Groups can take up this feedback to improve their collaborative interaction. It should be noted, however, that not all tools that have been characterized as GATs may be conceptualized as a source of feedback, for example cognitive GATs that display the knowledge held by the group members (e.g., Engelmann & Hesse, 2011). Prior research suggests that boundary conditions exist which affect the effectiveness of these tools (e.g., Dehler et al., 2009; Janssen et al., 2011; Strauß & Rummel, 2021b). To shed light on potential boundary conditions, we presented two small-scale field experiments that explored different ways of promoting regulation of participation. These field experiments were designed to explore questions that arose from our field experiment (Strauß & Rummel, 2021b) and other studies (Dehler et al., 2009; Janssen et al., 2007a, 2007b, 2011). Specifically, we explored whether groups benefit from instruction for collaborative reflection, and whether an indicator for participation that goes beyond the number of words provides groups with more useful feedback for their regulation. The results of our studies indicate a trend that a collaborative reflection activity may help groups achieve a more even distribution of participation; however, the analyses lack statistical power. Analyzing students’ perceptions of an “optimal” distribution of participation showed that students prefer an even distribution of participation, although different notions of this norm may exist. Finally, the results of our second field experiment suggest that students perceive self-reported participation as less valid than a system-generated visualization of the number of words.

A major limitation of the two small-scale field experiments reported above is the small sample size. Further, the items used to assess the individual group members’ participation in the self-reports should be validated in more detail in future studies. Thus, the results can only serve to develop hypotheses that can be tested in studies with larger samples. In the remainder of this chapter, we tie together the results of the two field experiments reported in this chapter as well as the results from our first field experiment (Strauß & Rummel, 2021b) and point out factors that may influence the process of taking up and processing feed-back regarding the current state of the interaction in the group. While we use our studies as examples, we assume that these boundary conditions also apply to other types of GATs and other sources of visual feedback on collaborative interaction. We organize these factors along the phases of the collaboration management cycle (Soller et al., 2005) and ground them in prior research. We hope that this overview can serve as a starting point for future studies that investigate the role of these factors during collaboration.

Figure 9.3 shows the collaboration management cycle (Soller et al., 2005). Like other cyclical models of self-regulation (e.g., Butler & Winne, 1995; Zimmerman, 2000), the collaboration management cycle is based on the cybernetic notion of a system that seeks to achieve an equilibrium between its current state and a desired goal-state. To reach this goal-state, the system (i.e., a group) uses its sensors (i.e., senses, collaboration support) to collect feed-back on the current state of the system, and then processes this feed-back to compare the current state with the desired state. In case of a discrepancy, the system tries to transform the current state into the goal-state. The original model only contains the phases and examples of supporting technologies for each phase. In Fig. 9.3 we added factors that may affect whether groups will or can take up the feedback from a GAT, process it effectively, and perform adequate regulatory actions. Specifically, we propose processes that appear to be potential blockades for continued monitoring and regulation of collaboration. Additionally, we propose properties of the learning environment that affect whether and how groups will engage in active monitoring and regulation. Finally, the knowledge, perceptions, and motivation of the individual group members affect monitoring and regulation.

Fig. 9.3
A cyclic diagram between phases 1 and 2, phase 3a, phase 3b, and phase 4. The individual factors include motivation, socio-metacognitive expertise, interpretation of the current state, perceptions and beliefs, and goals during learning.

Collaboration management cycle (Soller et al., 2005) and potential boundary conditions for the effective use of feedback on the collaboration
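Rendered as code, one pass through the cycle for the participation example might look like the sketch below. The goal-state threshold and the returned actions are hypothetical illustrations, not part of the original model.

```python
def collaboration_cycle(word_counts, goal_gini=0.5):
    """Illustrative single pass through the collaboration management
    cycle (Soller et al., 2005): collect and aggregate data on the
    current state, compare it with a desired goal-state, and trigger
    regulation only when a discrepancy is detected."""
    # Phases 1-2: collect and aggregate data (here: word counts per member)
    n = len(word_counts)
    mean = sum(word_counts) / n
    mad = sum(abs(a - b) for a in word_counts for b in word_counts) / (n * n)
    current_gini = 0.0 if mean == 0 else mad / (2 * mean)

    # Phase 3: compare the current state with the goal-state
    discrepancy = current_gini > goal_gini

    # Phase 4: feed-forward, i.e., derive a regulatory action
    action = "redistribute workload" if discrepancy else "continue as is"
    return action, current_gini
```

The cybernetic point of the model is visible in the control flow: regulation (phase 4) is conditional on the comparison (phase 3), which in turn depends on the data collected in phases 1 and 2.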

6.1 Phases 1 and 2: Collecting and Aggregating Data

The general competence to monitor and regulate the collaboration can be termed socio-metacognitive expertise (Borge & White, 2016). In the first and second phase of the collaboration management cycle, a group (i.e., its members) or a support system (e.g., a GAT) collects and aggregates information about the current state of the group. Doing so requires the learners of a group to look for cues, that is, feed-back. In the case of a GAT, this includes noticing the feed-back and paying attention to it. In our previous field experiment (Strauß & Rummel, 2021b) as well as in the two small-scale field experiments reported in this chapter, we found that students reported having paid attention to the GAT; however, the number of times that students reported having looked at the visualization did not affect the groups’ regulation (i.e., achieving a more even distribution of participation). In this regard, the results reported by Janssen et al. (2011) suggest that the duration of interaction with the GAT is a better predictor of regulation based on the GAT than the mere frequency of interaction with it. Obviously, the mere time spent on the GAT is only a correlate of (socio)cognitive processes that occur within the (members of the) group: the way that students take up and process the feedback determines the time spent on the feedback. The processes that may play a role here will be discussed in the sections on the subsequent phases of the collaboration management cycle.

A further aspect that may affect whether a group engages with the feedback is the data and the indicators that are used to assess the current state of the collaboration. We suggest that the indicators need to be a valid representation of the collaboration as well as compatible with students’ perceptions. During the analyses in the first field study, we found evidence that group members hold different conceptualizations of what an optimal distribution of participation may look like in a group. If support systems like GATs use indicators that do not align with the learners’ perceptions, needs, or goals, the learners may ignore the information and not engage with the support any further. For example, a group may pay less attention to the number of words if the group conceptualizes participation based on a different indicator, or if the students perceive the indicator as an unrealistic representation of their behavior. This can be linked to research on cue utilization (e.g., de Bruin et al., 2017), which has shown that learners’ regulation depends on whether they are able to use adequate cues to assess the need for regulation. In this regard, future research should explore groups’ needs in terms of group awareness (Schnaubert & Bodemer, 2022) and valid cues that are suited to foster the regulation of collaboration. One question worth investigating in this regard is the compatibility between a valid operationalization of an aspect of the interaction in the group (e.g., the distribution of participation) and group members’ perception of which indicator best represents that aspect of the collaboration.

Furthermore, the relationship between the operationalization of the aspect of the collaboration that is displayed in the GAT (i.e., feedback on the collaboration) and the intended pedagogical goal of the GAT should be taken into account as well. According to Rummel (2018), one can distinguish between the goal of the collaboration support and the aspect of the learning or collaboration that is targeted by the support in order to achieve this goal. In our field studies, we collected the number of words that each participant had contributed to the group’s forum and wiki. The total number of words from each group member was then visualized to present the group with the distribution of participation in their group (i.e., the “target” of the GAT). The intended effect of this visualization was to trigger reflection processes in the group which, we assumed, would lead groups to regulate the distribution of participation and achieve a more even distribution (the “goal” of the GAT). Our assumption that groups would be motivated to engage in regulation of the distribution of participation was based on the finding that an uneven distribution of participation is a source of frustration (see Strauß & Rummel, 2021a for an overview). As the second field experiment reported above showed, students perceive an even distribution of participation as desirable. However, a question that remains open after our field studies and similar prior studies (e.g., Janssen et al., 2007a, 2007b, 2011) is which indicators may help groups regulate their collaboration. One potential pitfall when using only behavioral indicators in a GAT is “becom[ing] what you measure” (Duval & Verbert, 2012, p. 3). In the case of our field studies, this would mean that the group members would simply focus on producing words.
While the results of our original field study (Strauß & Rummel, 2021b) and the content of students’ collaborative reflection reported in this chapter do not suggest that students simply contributed more words in order to appear more active in the group, we found evidence of social comparison between students, especially upward comparison. The particular case of our field experiments underscores the question of how participation can best be operationalized. While the number of words is used in many studies, it may fall short of capturing all aspects of participation. Therefore, we explored the use of self-reported participation in our second field experiment reported in this chapter. The relationship between the degree of participation of the individual group members, their satisfaction with the collaboration, effective interaction, and eventually group performance is complex (see Strauß & Rummel, 2021b for a discussion). Nevertheless, building on the argumentation for our second field experiment, implementing a more holistic indicator for participation that combines behavioral data from the learning environment, sensor data, the content of students’ contributions, and self-reports (i.e., multimodal learning analytics; Ochoa, 2017; Praharaj et al., 2021) may be worth exploring, since the distribution of participation in a group is not only a source of dissatisfaction (Strauß & Rummel, 2021a) but also central to learning through interaction and to the group’s success.

Another potential boundary condition concerns students’ competence to process the feed-back. For instance, the degree to which learners can process visual information (e.g., a graph in a GAT) depends on the way that the information is presented. Given the limited capacity of working memory, visual feed-back should be presented in a way that allows for easy processing. Here, research on instructional psychology (e.g., learning with multimedia; Mayer & Moreno, 2003), human–computer interaction, and human-centered design (e.g., Brandenburger et al., 2020; Jacko, 2012) can inform the design process and facilitate information processing.

Finally, it should be considered whether learners perceive the source of the feed-back as trustworthy. Research on feed-back has not yet systematically investigated the role of the feed-back source (e.g., teachers/experts, peers, task, computer system, self) (Panadero & Lipnevich, 2022). However, Winstone et al. (2017) posit that signals of credibility such as expertise or experience may affect whether, and to which extent, learners engage with feed-back. Our results indicate that students prefer the number of words as an indicator for participation over peer-generated feed-back, although the number of words may fall short of covering all facets of participation. This finding points to a tension between trust in computer systems and trust in peers, or between trust in data and the validity of the feed-back.

6.2 Phase 3: Taking up Feed-Back and Comparing It to a Desired State

During the third phase of the collaboration management cycle a group compares the current state of the collaboration to a desired goal-state. This goal may be set by the group itself or externally, for example by the task or the teacher. To analyze the relevant processes in more detail, we propose to distinguish between the process of taking up the feed-back and comparing the current state of the collaboration with the goal-state. Thus, we split phase 3 into two parts (3a and 3b, Fig. 9.3).

In the first half of phase 3 (i.e., 3a), a group deliberately takes up the feed-back (Hattie & Timperley, 2007) with the goal of comparing it to the desired goal-state. We assume that monitoring and reflecting upon feed-back requires more deliberate processing than merely noticing and viewing the information (phases 1 and 2). The model of regulated learning (Butler & Winne, 1995) as well as research on monitoring (Harkin et al., 2016) describe monitoring as a process that precedes regulatory action. Receiving feed-back in the form of a visualization then requires the competence to process the information. This may include data literacy (Calzada Prado & Marzal, 2013) as well as feed-back literacy, that is, “[…] an understanding of what feed-back is and how it can be managed effectively; capacities and dispositions to make productive use of feed-back; and appreciation of the roles of teachers and themselves in these processes” (Carless & Boud, 2018, p. 1316). These competencies enable learners to become active agents who make sense of the feed-back information and adapt their behavior (Carless & Boud, 2018).

In the second part of the third phase (3b), a group compares the current state with a desired state and assesses whether regulation is required. In case of a discrepancy, the collaboration management cycle predicts that the group initiates a reflection process to identify potential reasons for the discrepancy (Boud et al., 1985; Gabelica et al., 2014; Kori et al., 2014; Soller et al., 2005). Hattie and Timperley (2007) refer to this as feed-back. One potential barrier here is learners’ motivation. Following Butler and Winne (1995), students’ motivation affects how much they invest in regulation. Also, if learners do not expect that their efforts will be beneficial for the group’s performance, they are less likely to put in additional effort (e.g., the collective effort model; Karau & Williams, 1993). When the group members compare the current state with a desired goal-state, their interpretation of the current state and their knowledge about effective goals (e.g., which degree of discrepancy requires regulation) are further potential boundary conditions for regulation (i.e., feed-up). In this regard, Butler and Winne (1995) stress that the configuration of the goal-state should be appropriate because otherwise regulation fails to lead to the desired outcomes. For the context of our studies, the questions remain whether achieving an even distribution of words is an appropriate goal for a productive group, which degree of inequality represents an ineffective state of unequal participation, and which indicators may be most helpful for a group to monitor and regulate their collaboration (see Strauß & Rummel, 2021b for an initial discussion of this point).

With respect to the desired goal-state, we acknowledge that the individual group members may hold different (and diverging) perspectives on effective interaction patterns and goal-states. For example, the students in the first field experiment described above held different ideas of the “optimal” distribution of participation during collaboration, ranging from an exactly equal distribution of words to simply requiring that all contributions be meaningful. Consequently, within a group, there may not exist a shared understanding regarding the desired goal-state (Clark & Brennan, 1991; Hadwin et al., 2018). Given that goals play an important role for regulation, as they describe the desired state that should be achieved through regulation, we propose that a shared understanding of goals and (un)desired states is necessary to negotiate and coordinate potential regulatory actions. Given findings that a shared perception of the current task is an important factor for effective collaboration (Hadwin et al., 2018), we hypothesize that diverging goals or collaboration norms may affect the motivation to regulate the collaboration. Besides having the competence to process (i.e., make sense of) the feed-back information, the members of a group also need the competence to collectively negotiate the current state of the collaboration and whether action is needed.

6.3 Phase 4: Regulating the Collaboration

In the fourth phase, a group enacts regulation strategies to transform the current state of the collaboration into the desired goal-state (i.e., feed-forward). Whether individuals enact strategies or adapt their behavior depends on their self-efficacy, that is, their expectation that they are capable of achieving a goal, and on whether they believe their actions will lead to the desired goal (outcome expectation) (Luszczynska & Schwarzer, 2020). Since striving to meet a goal is a volitional process which requires effort, Webb and de Bruin (2020) propose that individuals only invest this effort if the goal is important to them.

Further, Butler and Winne (1995) acknowledge that students’ perceptions and beliefs affect whether and how students process feed-back and, consequently, how they regulate their learning. For instance, if learners hold the belief that learning progress occurs quickly, they are more likely to employ superficial learning strategies (Butler & Winne, 1995). Which perceptions and beliefs are relevant in the context of group awareness tools has yet to be explored.

Once students engage in regulation, their success depends on their knowledge about appropriate strategies (Butler & Winne, 1995; Carless & Boud, 2018; Webb & de Bruin, 2020) as well as their competence to enact these strategies (see Flavell et al., 1966; Hübner et al., 2010 for stages of strategy acquisition, and Kollar et al., 2007; Kollar et al., 2018 for internal collaboration scripts). If the learners in a group do not possess adequate strategies or lack the expertise to use them, the group may struggle to achieve the desired goal-state. At this point during the regulation, adaptive technology may scaffold the regulation process by suggesting effective strategies to the group. When designing an adaptive system that offers groups explicit guidance (i.e., a guiding system, Soller et al., 2005), designers need to consider students’ internal collaboration scripts (Kollar et al., 2018) and which threshold values indicate a problematic state (i.e., what constitutes a “large” discrepancy between the current state and the desired goal-state). This value does not necessarily have to be in line with students’ perceptions, but it should still motivate students to follow the prompted regulation strategy. If students do not agree with the system’s assessment of the current state or with the proposed strategy, they may be less compliant with the support. The challenge of compliance with instructional support has rarely been addressed by prior studies (some exceptions are Bannert et al., 2015; Daumiller & Dresel, 2019; Kwon et al., 2013). Again, students’ trust in the feed-back may influence whether they engage with it or follow suggestions made by the collaboration support. The question of compliance may further depend on the pedagogical implementation of the support. Wise (2014) as well as Wise and Vytasek (2017) suggest that learning analytics interventions need to be implemented carefully.
Alternatively, instead of providing learners with the agency to engage with feed-back, computer support may also include coercion (Rummel, 2018) to achieve compliance. Previous studies (e.g., Kirschner et al., 2008) provide promising evidence that coercion can benefit collaboration. However, the question remains whether students at all competence levels benefit equally from coerced support (over-scripting, Dillenbourg, 2002).
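To make the threshold question discussed above more concrete, the following minimal sketch illustrates how a hypothetical guiding system might decide whether the distribution of participation warrants a prompt. The indicator (word counts per member), the inequality measure (the Gini coefficient), and the threshold value are all illustrative assumptions of ours, not the design of any tool discussed in this chapter.

```python
# Illustrative sketch only: indicator, measure, and threshold are assumptions,
# not the implementation of any GAT or guiding system described in this chapter.

def gini(counts):
    """Gini coefficient of non-negative counts (0 = perfectly equal participation)."""
    counts = sorted(counts)
    n = len(counts)
    total = sum(counts)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula based on the rank-weighted sum of the ordered values.
    cum = sum((i + 1) * c for i, c in enumerate(counts))
    return (2 * cum) / (n * total) - (n + 1) / n

def needs_regulation(word_counts, threshold=0.3):
    """Compare the current state (inequality of word counts) against a
    goal-state threshold; True signals a discrepancy that might trigger
    a prompt to the group."""
    return gini(word_counts) > threshold
```

Even this toy example surfaces the design questions raised above: whether word counts are the right indicator, and where to set the threshold so that students accept the system’s assessment.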

Another factor that plays a role is learners’ goals during collaboration. While working in a group, developing group awareness is only a secondary task for the group (Gutwin & Greenberg, 2001), while the primary goal usually encompasses solving a problem or creating a joint artifact such as a presentation. According to Borge et al. (2018), groups rarely invest effort in regulating their interaction; instead, they focus on solving the joint task. Thus, during collaboration, a group may not invest much effort in achieving an even distribution of participation. Students’ goals during collaboration further affect how students perceive and use the collaboration environment. As a result, students may appropriate the support so that they can achieve their goals (Tchounikine, 2016, 2019). For example, students in our field experiment (Strauß & Rummel, 2021b) reported using the GAT to learn which group members could be trusted to be good collaborators. This observation indicates that the original purpose of the GAT may not have covered students’ needs in terms of feed-back.

7 Conclusion

In this chapter, we conceptualized group awareness tools (GATs) from a feed-back perspective and argued that groups may use this feed-back to regulate their interaction. Improving the quality of the interaction in the group serves not only the performance of the group (e.g., successfully solving a problem) but also affects learning through interaction. As prior research on instructional feed-back and peer feed-back has shown, there are several factors that affect whether and to which degree students can benefit from feed-back, and thus from GATs. While cybernetic models like those proposed by Soller et al. (2005), Butler and Winne (1995), or Zimmerman (2000) are often used to describe regulation processes, these models may fall short of capturing the intricate details of regulation, such as students’ goals, motivation, perceptions, or competencies, and may thus fall short of predicting regulation processes.

Thus far, research on GATs has not presented a comprehensive framework regarding the mechanisms underlying their effectiveness. We became sensitive to this issue because implementing GATs in authentic learning settings did not yield the expected results, and our explorative analysis led to more questions than answers (Strauß & Rummel, 2021b).

Based on the results of prior research and our own studies, we propose that leveraging feed-back from GATs regarding the interaction in groups is demanding for students and that research still needs to identify the mechanisms and boundary conditions for this type of collaboration support. Bringing together evidence from different fields such as team feed-back, instructional feed-back, peer feed-back, and group awareness, we identified different boundary conditions along the process of computer-supported monitoring and regulation of collaboration. Since our work is only a first step towards a systematic investigation of how groups monitor and regulate their interaction, and of how they may leverage feed-back regarding this interaction, we warmly welcome future research on how groups can benefit from feed-back on their collaboration.