Our analysis of discussion threads in different cases provides initial insights into how reflective discussions unfold in online communities in workplace settings, and into which elements of discussions foster or hinder collaborative reflection. We are aware that, given the sizes of our data sets and the scarcity of causal relationships, our results cannot be generalized and need further investigation. However, the results show a diversity in online collaborative reflection that is not predicted by common models: reflection is not ‘messy’ but follows certain paths that have not yet been fully identified and described, but that characterize collaborative reflection. Our work thus brings forward new insights into how reflection unfolds in online communities. Rather than deriving rules for the flow and support of collaborative reflection from our results, we derive hypotheses and suggestions from our findings, which need to be evaluated in subsequent studies.
Comparing results from the data sets: the multiple paths of collaborative reflection
When comparing the correlations for both all and immediately preceding comments in the two data sets M and E, they differ considerably from each other. Both contain reflective content, but the flow of the threads differs. We attribute this to the user groups and tools having slightly different purposes, different group sizes, slightly different domains, different time spans of usage and possibly also different cultures in the workplaces (see Table 2). Reflective discussions may unfold differently in different contexts, and support may need to be adapted accordingly. For example, in the smaller groups reflecting together in data set M, people might have put more emphasis on documenting their learnings, as they might have felt closer relationships to each other. As another example, the culture of the organization in which data set E was created was not very fault-tolerant, which may have led to fewer people sharing experiences.
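As a methodological aside, the kind of "immediately preceding comment" analysis referred to here can be sketched as counting how often one code directly follows another across coded threads. The following is a minimal illustration; the code labels and thread data are invented for this sketch and are not the paper's actual data or analysis pipeline:

```python
# Illustrative sketch (invented data): counting how often one code
# immediately precedes another across coded discussion threads.

from collections import Counter

def transition_counts(threads):
    """Count (preceding_code, following_code) pairs over consecutive comments."""
    pairs = Counter()
    for codes in threads:  # each thread: one code per comment, in order
        for prev, nxt in zip(codes, codes[1:]):
            pairs[(prev, nxt)] += 1
    return pairs

threads = [
    ["EXP", "AGR", "SUG_EXP", "S_LOOP"],
    ["Q_INT", "EXP", "SUG_EXP", "SUG_EXP"],
    ["EXP", "AGR", "EXP"],
]
counts = transition_counts(threads)
# How often agreement directly followed an experience report in this toy data:
print(counts[("EXP", "AGR")])
```

Counts of this kind, computed per data set, are what would then be tested for correlations between code types.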
Data set M contains more codes that point to the documentation of learning. In addition, we found relations between the codes related to learning (S_LOOP, D_LOOP, and CHANGE) and follow-up experience and knowledge exchange. This may be attributed to the fact that in the data set's four groups, in which people knew each other, the participants worked together closely and therefore discussed issues in depth, even continuing after initial learning success was documented.
In comparison to data set M, data set E contains a lot of suggestions – half of the clusters created are suggestion-heavy, and most correlations found include suggestions based on experiences or knowledge. There was no indication of what may have caused suggestions to appear so often in the discussions. This focus on suggestions may be attributed to the size of the group, which was considerably larger than in data set M. More participants mean more potential ideas for dealing with certain challenges. At the same time, in contrast to smaller groups, there is an increased likelihood that an issue shared with 200 (and more) people is received by some who have already had similar experiences and can provide their solutions. Thus, rather than engaging in sensemaking of the experience, these people may have provided solutions that had worked for them right away.
These differences show the diversity that reflection processes may demonstrate in practice, and how group size and setting may influence the overall way of reflecting together. This may be regarded as one of the central findings our work points to: Rather than being messy or strictly adhering to paths shown in models, collaborative reflection may follow different paths, which are influenced by factors such as group size, familiarity among participants, purpose of the reflection, cultural context, and others. While this may not seem surprising at first sight, with the exception of de Groot et al. (2013), it has rarely been discussed in research on collaborative reflection. Our work resonates with de Groot et al. (2013) and adds to it: To the knowledge of the authors, this is the first investigation of online collaborative reflection content that shows that there are multiple paths and presents certain sequences of reflection that describe (parts of) these paths. While our work cannot (and does not aim to) present an exhaustive list of paths that collaborative reflection may take, it shows the need to investigate these paths in future work for the implementation of proper support for collaborative reflection.
Elements of reflection and their relations
In this section we discuss our findings in more detail to analyze and interpret the different sequences and paths we found.
Exchange of experience
In both data sets, we observed that the discussion threads often contain experience reports (and more experience reports than links to knowledge). This is in line with reflection literature (Boud et al. 1985; Schön 1983) and highlights that the threads show experience exchange as part of reflective interaction. We also observed that statements reporting on experiences (EXP) correlate with others agreeing with the perspective articulated in them (AGR) for both data sets, which shows the reciprocity of collaborative reflection as discussed in section 2.2. This suggests that people often found experiences similar to their own in this exchange, and that they related to these. It also indicates that people discussed in a ‘healthy’ environment in which colleagues support each other while talking about issues and ideas. In general, having other people present is helpful to discuss issues and to receive feedback on ideas (Fleck and Fitzpatrick 2010; Raelin 2002).
We found at least two specific roles that the exchange of experiences took in the data sets, one of which is in line with what could be expected from the literature, while the other represents an interesting and rather surprising new insight. First, we found that the occurrence of the code for experience reports (EXP) explains 32% of the variance of learning outcomes documented in a thread for data set M, with the additional codes for personal emotion (EMO_OWN) and disagreement (DISAGR) raising the explanatory power to 41%. This is a good example of the need for the articulation of experiences in collaborative reflection, which, in line with reflection literature as described above, suggests that the exchange of experience is a strong factor for the success of collaborative reflection and needs to be supported explicitly. Second, and more surprisingly, we found codes for experience (and knowledge) occurring after learning had been documented in a thread, which is not mentioned in the literature on (collaborative) reflection. Looking closer at the respective correlation, this makes sense for collaborative reflection, as relating the learning and change documented by others to one's own experiences (or knowledge) helps to make sense of the solution, to decide on its applicability for oneself, or to apply it. This could mean that facilitation mechanisms can continue to support users' reflection even after learning has been documented and can encourage users to relate their experiences to the learning documented.
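For readers less familiar with such variance-explained figures: a value like the 32% above corresponds to the R² of a regression of documented learning outcomes on code counts. A minimal sketch with a single predictor and invented per-thread counts (not the paper's data, which used multiple predictors) illustrates the computation:

```python
# Hypothetical illustration (invented data): how much of the variance in
# documented learning outcomes per thread is explained by the count of
# experience reports (EXP), via the R^2 of a simple linear regression.

def r_squared(x, y):
    """R^2 of the least-squares line y = a + b*x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    b = sxy / sxx                      # slope
    a = mean_y - b * mean_x            # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Invented per-thread counts: EXP codes (predictor), learning codes (outcome).
exp_counts = [0, 1, 1, 2, 3, 4, 5, 6]
learning_counts = [0, 0, 1, 1, 1, 2, 2, 3]

print(round(r_squared(exp_counts, learning_counts), 2))
```

Adding further predictors such as EMO_OWN and DISAGR, as in the paper's regression models, raises R² in the same way when those codes carry additional explanatory power.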
Provision of suggestions based on experiences
Our analysis of the occurrence of suggestions based on experiences and what may have fostered or caused them did not provide the results we expected. Instead, the interesting finding here lies in what we did not find: Looking at reflection models and reflection literature, we assumed that the provision of suggestions based on experiences (SUG_EXP) should be positively influenced by the provision of experiences (EXP), and that the provision of knowledge (KNO) should have a similar effect on the provision of suggestions based on knowledge (SUG_KNO). However, we did not observe any of these relationships in our coding. Instead, we observed that suggestions are made from the beginning, that is, without any clear preceding type of statement. Additionally, we found that for some threads learning preceded the provision of suggestions based on experience (data set M) and for others, suggestions based on experiences followed each other. This is counterintuitive and surprising, and it suggests that reflection may go different ways than suggested in the literature.
What we may derive from this is that the provision of suggestions may be caused by different elements of a conversation and can happen anywhere in a thread, and that it may be worthwhile to encourage users to provide them from the beginning. It should be noted, however, that such suggestions based on experiences by definition include statements on one's own experiences, as shown in the examples provided in this paper (see e.g. the second example in section 4.3.3 and Appendix Table 7). Nevertheless, for the support of collaborative reflection this means that rather than fostering a process of exchanging experiences and then deriving suggestions from them (as described in almost all models), mechanisms may foster the provision of suggestions based on experiences at any time and without necessary preceding elements of the conversation.
The role of questions: fostering questions for interpretation
One of the common factors in reflection models is the questions to be asked in order to structure, moderate and lead reflection to success. In our coding scheme, we built on Zhu (1996), who found that, with respect to their influence on reflection, there is a need to differentiate between questions for further information and questions for interpretation of experiences, as the latter stimulate reflection while the former do not. While we cannot prove this finding, our data suggests that this differentiation is important: In data set M we found correlations between the codes for questions for interpretation (Q_INT) and experience reports (EXP), and in data set E we found negative correlations between the codes for questions for information (Q_INF) and experience reports (EXP). In addition, we found a negative correlation between codes for suggestions based on experiences (SUG_EXP) and questions for more information (Q_INF). These all suggest that questions for interpretation are helpful for the articulation of experiences, which then supports collaborative reflection. For the support of online collaborative reflection, this may mean that mechanisms such as prompts (see above) may (point users to) ask these questions in order to stimulate collaborative reflection.
We did not observe any codes correlating with individual codes that document learning (S_LOOP, D_LOOP, and CHANGE), which can be attributed to the fact that in our data sets these single learning outcomes were not represented to a large extent (low number of explicit statements on learning as described above). As mentioned above, this has also been observed in other studies (de Groot et al. 2013; Prilla et al. 2015) and does not mean that learning did not take place (see the discussion of this in the explanation of the codes in section 3.3), but that it was not explicitly documented very often. In fact, this was also the case for our groups: As mentioned in section 4.1, our participants told us that they learned from discussions on the platform, which indicates that learning was often not documented.
In the analysis of all explicit statements on learning, we found that there was a considerable influence of the articulation of experiences (EXP) on learning documented in the tools, which again shows the need for articulation work in collaborative reflection. At first sight, this seems to fit the reflection models described in this paper nicely, as the articulation of experience is an integral part of reflection. On closer consideration, however, this leaves out steps such as creating suggestions based on the experience exchange, as described by most reflection models.
Another interesting finding is that many of the correlations and corresponding regression models we found showed the highest effect sizes in the Learning cluster created during the analysis, for both data sets. This includes the relations between questions for interpretation and the provision of experiences, between learning documented and relating experiences to it, and between suggestions based on experience and further suggestions of this kind; it means that these relations were strongest when learning occurred in a thread. This suggests that learning may have been mediated by the occurrence of these code combinations or vice versa. We did not find a mediating effect in the analysis, and, as such, this is another aspect to be taken on board in further work.
We also observed correlations from learning (S_LOOP, D_LOOP, and CHANGE) to codes for experience reports (EXP) or suggestions (SUG), meaning that both experience reports (EXP) and suggestions (SUG_KNO and SUG_EXP) followed the explication of learning. As mentioned earlier, this indicates that facilitation mechanisms should not stop facilitating reflective discussion once someone states that they have learned, as discussions still continue.
The insights discussed above provide a good basis to answer the research questions guiding the work presented here. Below, we relate them to the respective question.
How can the nature of online collaborative reflection be described?
We did not observe clear or singular reflection flows, the relations we found deviated from what could have been assumed based on existing literature, and different relations were found in the different data sets. As discussed above, this shows that there most likely are multiple paths of collaborative reflection, and that these are potentially influenced by different factors. This opens up a new way to look at the nature of collaborative reflection as neither model-based nor messy, but constituted by multiple paths. For these paths, we found a number of relationships between types of contributions to collaborative reflection, which may make up elements of the “nature” we looked for in RQ1. This includes both sequences well known from existing models and relationships not commonly featured in existing models (e.g., the effect of relating own experiences to learning documented by others).
Our findings also show the importance of articulation work and reciprocity in collaborative reflection. We found that asking questions and sharing experiences were correlated with documented learning outcomes in a considerable way, which underpins that the articulation of issues is crucial for reflecting together and needs to be supported. We also found reinforcement (agreeing when others agreed in data set E, agreeing on issues when experiences were shared in data set M) as an example of reciprocal interaction to be common and of potential importance for collaborative reflection. The important role of reciprocity was also found in multiple suggestions co-occurring (data set E) and in statements on change following statements indicating that learning had happened (data set M). All of these examples create a situation in which the participants of a collaborative reflection process relate to each other and reflect together rather than sharing thoughts and taking these as input for parallel individual reflections.
Additionally, we have to take into consideration that in online collaborative reflection certain steps may be done by the users of a tool supporting collaborative reflection outside the system, that is, cognitively without being articulated in the tool or in face-to-face interaction. This may explain why the articulation of experiences had a direct influence on learning, why suggestions based on experiences came up without experiences articulated in a thread before, and why some learning outcomes were not documented. If this is the case, then reflection support may even benefit from this, as there would be no need to emphasize or force certain steps but mechanisms could rely on users often fulfilling these steps on their own. In any case, we must regard collaborative reflection in online media as an online and offline process.
(How) Do reflective conversations unfold along the elements mentioned in common models of reflection?
While the answer to RQ1 is largely based on our insights about the multiple paths that collaborative reflection may take in practice, the answer to RQ2 needs to describe these paths. The multiplicity of sequences we found means that from our work we cannot (and should not) derive a single description of how reflection threads unfold in online communities. Comparing our findings to existing reflection models, the threads we analyzed seem to “jump” certain steps in the models, and other steps occur without the triggers that would be expected.
Our findings include insights on the positive influence of experience exchange, questions and the provision of suggestions based on experience on collaborative reflection. There are also new insights that add additional facets to collaborative reflection. One of these is that collaborative reflection often continued in our groups after learning had been documented. This may show that, as assumed in previous work (Krogstie et al. 2013), collaborative reflection does not terminate but is an iterative process. As an example of activities most likely performed offline, we found that the provision of experiences correlated with the documentation of learning, implying that the step of deriving learning insights was done cognitively or face-to-face.
An important aspect to take away from the answers to RQ1 and RQ2 is that reflective conversations may go different ways than suggested by most reflection models, and that reflection may create valuable outcomes along all of these different paths.
Which are the elements of reflection that lead to successful outcomes of reflection?
Our work suggests that providing experiences influences reflection positively and leads to documented outcomes and suggestions based on experiences. We also found that suggestions based on experiences were followed by more suggestions based on experiences, and that questions for interpretation were helpful for the creation of outcomes. These relations are in line with the literature, but (as discussed above) represent only some findings among others.
Besides relations between codes and corresponding utterances that we assumed to be present, we also found surprising relations in the data. Among these, we found that relating own experiences and knowledge to learning documented by others may lead to understanding and adoption of this learning for oneself. This is not prominently featured in reflection models and adds to these models. In addition, this finding also may point to a way in which the provision of knowledge, which is usually referred to as not being helpful for collaborative reflection, may foster learning from collaborative reflection.
We also observed that agreement has various influences on outcomes of collaborative reflection, as among others it is related to suggestions based on knowledge and single loop learning. While this may sound trivial initially, it points towards a culture to be established in online collaborative reflection tools, in which users engage with what others share and reassure them that others have had these experiences as well. This culture cannot be taken for granted in many organizations, which is also supported by results we gathered when we applied the tool resulting in data set E in practice (Blunk and Prilla 2017b). In terms of support, facilitation mechanisms should pick this up and encourage users to agree and disagree with each other to continue the discussion.
Our observation that many of the correlations and models we found showed their highest effect sizes in the Learning cluster of threads in the data sets supports the notion that these combinations of codes following each other in threads help lead to outcomes from collaborative reflection.
Which are the elements that diminish reflection in the conversations?
We found minor relations between codes that provide insights on elements or behavior to avoid in collaborative reflection. Among these, we found a weak relationship in which providing advice led to suggestions based on knowledge, which are not favorable in collaborative reflection (as opposed to suggestions based on experience). We also found occasions in which advice or suggestions based on knowledge provided by users led to (single loop) learning, which is also not what is desired in reflection support. It should be noted that both effects may still include valuable insights for users, but that the focus of our analysis was on fostering learning from reflection.
Moreover, as we observed various correlations occurring in the No-Learning clusters (e.g. multiple consecutive questions for more information (Q_INF)), we may hypothesize an effect of these kinds of questions on threads taking a direction that does not lead to learning. This is supported by the negative correlation we found between this type of question and the provision of experiences. Facilitation support may therefore try to encourage other types of questions, as described below.
Impacts on modeling reflection
Although reflection literature often implies a sequence of things that need to happen before reflective learning can take place (see Schön 1983; Boud et al. 1985; Krogstie et al. 2013), our data does not support these models fully but shows much more variety and even unexpected relations. For example, from the frequency analysis we can see that both solution proposals and learning occurred right from the beginning. Our results therefore suggest that there is not one single way to describe online collaborative reflection (RQ 1 and 2), and that there were both commonly assumed and surprising elements that fostered (and hindered) reflection in the discussions we looked at (RQ 3 and 4).
This affects the way models of collaborative reflection should be built and used. Rather than showing or implying sequences, such models should instead focus on the collaborative process. This may include how people relate to each other (interactivity, reciprocity) and how online collaborative reflection is not only a process of individual and collaborative activity as described by Prilla (2015), but also a process that happens online and offline and therefore misses some traces and trajectories in the online medium. Dealing with these gaps in the visibility and availability of reflection aspects for the group reflecting together means allowing more flexible ways for reflection to unfold (rather than prescribing paths implied by models) and emphasizing the benefit created by sharing these aspects with the other participants in order to allow them to relate to these aspects.
Our findings also suggest that we need to use existing models carefully when analyzing and designing for collaborative reflection. Using these models to explain online collaborative reflection may automatically result in an incomplete view of what happens in reflective conversations, and the question arises whether models can capture collaborative reflection at all, given the diversity we found. This may be the true meaning of reflection being ‘tamed and domesticated at the risk of destroying what it can offer’ as stated by Cressey et al. (2006, p. 23). On the other hand, we must not stop at the notion that reflection is messy, as we have shown that there are multiple paths that it unfolds along. In any case, our work points towards the need to at least partially reconsider these models, especially by linking our new findings to them. As stated above, further work in this direction needs to investigate our findings and examine additional data.
Designing for (collaborative) reflection: implications for facilitation support
In our analysis we gained several insights into how collaborative reflection in online discussions unfolds and which factors might influence others. One implication from our work is that facilitation mechanisms, which have been presented as key to the support of collaborative reflection above, may not have to strictly adhere to specific steps or prerequisites in order to support reflection, but may instead provide freedom for collaborative reflection to unfold along different paths and directly ask users to provide certain contributions. Based on our analysis we derived several suggestions to deal with the variety of ways we found for collaborative reflection:
Facilitation mechanisms should support users in articulating and sharing their experiences, in relating their statements to each other (reciprocity), and in providing solution proposals based on experience from the beginning of a thread. All of these describe paths that led to outcomes in the threads we analyzed. One way of providing this support could be to prompt users to articulate corresponding contributions. Our initial work on prompts for reflection supports this (Blunk and Prilla 2017a; Renner et al. 2016), but further work is needed to build and evaluate this support.
Facilitation mechanisms should encourage users to continue reflective discussion even after learning has been documented by relating their experiences and knowledge to the documented learning outcomes. For the implementation of support this means two things: First, mechanisms need to help users make documented learning visible to others, as this may help others benefit from it. Second, prompting mechanisms should encourage users to articulate related experiences in order to help them learn as well. However, support for collaborative reflection also needs to take into account its nature as an online and offline process, and therefore we should not force users to explicate everything but allow them to carry out phases in the ways they can reflect best.
Facilitation mechanisms should foster a culture in which users engage with other users’ contributions and explicitly agree or disagree with them in their contributions. Both ways of referring to other contributions were found to be useful in our analysis. This could be done by encouraging and showcasing respective openness and engagement. In addition, prompts may make users aware of the positive aspects of engaging with other content.
Facilitation mechanisms should encourage and help users to ask questions for the interpretation of content shared with them, as our analysis showed this to support reflection. One way to gently direct online discussions in directions that foster collaborative reflection is to provide users with blueprints for such questions, which they can adopt and integrate into their contributions. We have implemented a prototype for this, and at the time of writing this paper, we are evaluating it (see Blunk and Prilla 2017b for examples and very early results).
Facilitation mechanisms should adapt to the specific context of the collaborative reflection and to the user(s). Enabling users to take multiple paths in collaborative reflection requires context-dependent mechanisms that provide support for the respective reflection path a thread follows or should follow according to its context or users. It may also afford the personalization of facilitation, that is, providing the right kind of support for a certain user. However, whether and how the success of certain facilitation support can be related to the situation or the person receiving the support is subject to further work. Our work suggests that this is a path worth following.
We also noticed that the data sets differed in the influences we found on collaborative reflection. We attributed this to the difference in the contexts the data stems from. For the design of reflection support, this means that we need to understand the context (e.g., small vs. large groups, short or long-term exposure etc.), what it means for the flow of reflection, and tailor support to it. Following up on our discussion above, in smaller groups we may have to prompt for activity that includes sensemaking such as the provision of additional experiences, while in larger groups we may directly ask for solution proposals. However, while our work points at differences in the context, it can only provide initial insights and further work is needed to explore this influence.
For the implementation of context-dependent support, there is a need to gain an understanding of what is happening in a collaborative reflection thread. The manual content coding approach that we employed to analyze the data, while being well-suited for this analysis, does not work for that. Instead, there is a need for on-the-spot detection of the situation in a thread. Numerous tools for automated content analysis such as LIWC (Tausczik and Pennebaker 2010) and EMPATH (Fast et al. 2016) exist. However, as stated above, these tools do not include the categories needed to analyze collaborative reflection, such as those in the coding scheme we used. Adding these categories based on the work presented here and other work, as well as using the tools’ existing categories to analyze, for example, the topics reflected upon, could enable us to automatically assess and categorize the content. In this case, facilitation features could be selected based on what may be most helpful in a thread. This, however, requires additional work.
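To make the idea of on-the-spot detection concrete, a very simple lexicon-based detector could be sketched as follows. The cue phrases below are invented placeholders, not part of the coding scheme or of LIWC/EMPATH; a real system would need a lexicon or trained classifier grounded in the coding scheme:

```python
# Hypothetical sketch of detecting reflection elements in thread comments.
# The cue lists are invented placeholders; general-purpose tools such as
# LIWC or EMPATH do not cover these reflection-specific categories.

CUES = {
    "EXP":     ["in my experience", "i experienced", "happened to me"],
    "SUG_EXP": ["what worked for me", "i would try", "my suggestion"],
    "Q_INT":   ["why do you think", "what do you make of", "how do you interpret"],
}

def detect_codes(comment):
    """Return the set of codes whose cue phrases occur in the comment."""
    text = comment.lower()
    return {code for code, cues in CUES.items() if any(c in text for c in cues)}

thread = [
    "In my experience this happens when deadlines shift.",
    "Why do you think the team reacted that way?",
]
print([sorted(detect_codes(c)) for c in thread])  # [['EXP'], ['Q_INT']]
```

Detected codes per comment could then feed the selection of facilitation features, e.g. prompting a question for interpretation when a thread contains experience reports but no such question yet.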
Our work brings forward interesting insights on how collaborative reflection unfolds in online communities, and how to potentially support it. However, we are aware that our findings cannot be generalized. We discuss the limitations of our work below.
Our findings are based on sequence analyses and correlations backed up by corresponding models from linear regression analysis. While this is sufficient to draw the tentative conclusions we draw in the paper (e.g., multiple paths of reflection that are potentially influenced by contextual factors), we did not find particular patterns, and the explanatory power of some models is low. We mentioned this explicitly in our results, and we marked those relations between contributions to collaborative reflection that need further investigation. However, despite this limitation, our work clearly shows the plurality of paths that collaborative reflection may take, and it indicates that we may need to support some of these different paths rather than following certain models.
In addition, given the amount of data, we were almost guaranteed to observe several correlations. Thus, we focused on correlations appearing multiple times (that is, in the overall data sets and in the clusters we created) as well as on correlations for which we found acceptable regression models. Despite this, we emphasize that further work is needed to confirm and strengthen our findings, while we also emphasize that these findings provide new insights on their own.
We also applied the content analysis to content written in a language the authors do not speak. The coding scheme by Prilla et al. (2015) that we used was developed and evaluated for the analysis of German and English texts, and therefore it is not guaranteed to pick up all language- and culture-related subtleties of this language. In addition, coding needed to be done by different coders for the two data sets due to this language barrier. We accounted for these issues by training the coders for data set E with the coding the researchers had applied to data set M, ensuring high interrater reliability between the coders and the researchers so that both groups of coders had a comparable understanding of the coding scheme.