Preventing undesirable effects of mutual trust and the development of skepticism in virtual groups by applying the knowledge and information awareness approach

Published in: International Journal of Computer-Supported Collaborative Learning

Abstract

Empirical studies have proven the effectiveness of the knowledge and information awareness approach of Engelmann and colleagues for improving collaboration and collaborative problem-solving performance of spatially distributed group members. This approach informs group members about both their collaborators’ knowledge structures and their collaborators’ information. In the current study, we investigated whether this implicit approach also reduces undesirable effects of mutual trust and mutual skepticism. Trust is an important factor influencing the behavior and performance of groups. High mutual trust can have a negative impact on group effectiveness because it reduces mutual control and, as a result, the detection of the others’ mistakes. In an empirical study, 20 triads collaborating with the knowledge and information awareness approach were compared with 20 triads collaborating without this approach. The members of each triad were spatially distributed and participated in a computer-supported collaboration. The results demonstrated that the availability of the knowledge and information awareness approach overrides the negative impact of too much mutual trust and counteracts the development of mutual skepticism. This study contributes to further clarifying the impact of trust on the effectiveness and efficiency of virtual groups in different situational contexts.

Notes

  1. A factor analysis with Varimax rotation was applied to the 13 trust items included in the questionnaire on control measurements and resulted in two interpretable factors (cf. Bortz and Schuster 2010): initial skepticism, Cronbach’s α = 0.59; initial trust, Cronbach’s α = 0.78. Since internal consistency is only acceptable if Cronbach’s α is higher than 0.70 (e.g., Nunnally and Bernstein 1994), the factor “initial skepticism” was not included in further analyses.

  2. The questionnaire administered after the collaboration phase contained 50 items (identical between the conditions); therefore, three separate factor analyses were necessary to comply with the rules for conducting factor analyses (cf. Bortz and Schuster 2010). Factor Analysis 1, including the 17 items on trust, resulted in two interpretable factors: developed trust, Cronbach’s α = 0.73; developed skepticism, Cronbach’s α = 0.78. Factor Analysis 2, including the 19 items on mutual control, coordination, communication, and subjective evaluation of the group outcomes, resulted in one interpretable factor: developed suspiciousness, Cronbach’s α = 0.46. Factor Analysis 3, including the 14 items on study evaluation, group map creation, and collaboration, resulted in one interpretable factor: cognitive effort, Cronbach’s α = 0.50. Because of their low Cronbach’s α values, the factors “developed suspiciousness” and “cognitive effort” were not included in further analyses.

  3. A factor analysis with Varimax rotation of the 15 control measure items resulted in six factors with eigenvalues higher than 1. According to Bortz and Schuster (2010), in a Varimax-rotated factor structure, only those factors can be interpreted that have at least four items with a loading > 0.60 or at least ten items with a loading > 0.40. This criterion was met only by the factor “computer experience”. However, a univariate ANOVA did not result in a significant difference between the two conditions (F < 1). (A sketch of this factor-analytic procedure is given after these notes.)
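
The following sketch illustrates, in general terms, the factor-analytic procedure described in these notes: a Varimax-rotated factor analysis followed by a Cronbach’s α check of each interpretable factor. It is not the authors’ original analysis code (which was presumably run in a commercial statistics package); it assumes the third-party Python package factor_analyzer is installed, and the item matrix is a random placeholder for the 13 trust items.

```python
# Illustrative sketch only: Varimax-rotated factor analysis plus Cronbach's alpha,
# mirroring the selection rules in the notes (interpretable loadings, alpha > .70).
# `items` is synthetic placeholder data, not the study's questionnaire responses.
import numpy as np
from factor_analyzer import FactorAnalyzer  # third-party package (assumed installed)

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_participants x n_items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
items = rng.normal(size=(60, 13))             # placeholder for the 13 trust items

fa = FactorAnalyzer(n_factors=2, rotation="varimax")
fa.fit(items)
loadings = fa.loadings_                       # (items x factors) loading matrix

for f in range(loadings.shape[1]):
    marker_items = np.abs(loadings[:, f]) > 0.60      # items marking factor f
    if marker_items.sum() >= 2:
        alpha = cronbach_alpha(items[:, marker_items])
        print(f"Factor {f + 1}: {marker_items.sum()} items, alpha = {alpha:.2f}")
        # Per the notes, a factor would be retained only if alpha exceeds .70.
```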

References

  • Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and interpreting interactions. Thousand Oaks, California: SAGE Publications, Inc.

  • Alpert, S. R. (2005). Comprehensive mapping of knowledge and information resources: The case of Webster. In S.-O. Tergan & T. Keller (Eds.), Knowledge and information visualization. Searching for synergies (pp. 220–237). Heidelberg: Springer-Verlag. LNCS 3426.

  • Amelang, M., Gold, A., & Külbel, E. (1984). Über einige Erfahrungen mit einer deutschsprachigen Skala zur Erfassung zwischenmenschlichen Vertrauens (Interpersonal Trust). Diagnostica, 30(3), 198–215.

  • Aubert, B. A., & Kelsey, B. L. (2003). Further understanding of trust and performance in virtual teams. Small Group Research, 34(5), 575–618.

  • Baker, M., Bernard, F.-X., & Dumez-Féroc, I. (2012). Integrating computer-supported collaborative learning into the classroom: The anatomy of a failure. Journal of Computer Assisted Learning, 28(2), 161–176.

  • Beers, P. J., Boshuizen, H., Kirschner, P. A., & Gijselaers, W. H. (2005). Computer support for knowledge construction in collaborative learning environments. Computers in Human Behavior, 21(4), 623–643.

  • Bodemer, D. (2011). Tacit guidance for collaborative multimedia learning. Computers in Human Behavior, 27, 1079–1086.

  • Bortz, J., & Schuster, C. (2010). Statistik für Human- und Sozialwissenschaftler (7th ed.). Berlin/Heidelberg: Springer.

  • Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.

  • Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2002). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.

  • Colquitt, J. A., Scott, B. A., & LePine, J. A. (2007). Trust, trustworthiness, and trust propensity: A meta-analytic test of their unique relationships with risk taking and job performance. Journal of Applied Psychology, 92(4), 909–927.

  • Cress, U. (2008). The need for considering multilevel analysis in CSCL research—an appeal for the use of more advanced statistical methods. International Journal of Computer-Supported Collaborative Learning, 3(1), 69–84.

  • Dehler-Zufferey, J., Bodemer, D., Buder, J., & Hesse, F. W. (2011). Partner knowledge awareness in knowledge communication: Learning by adapting to the partner. The Journal of Experimental Education, 79(1), 102–125.

  • Dirks, K. T., & Ferrin, D. L. (2001). The role of trust in organizational settings. Organization Science, 12(4), 450–467.

  • Engelmann, T., & Hesse, F. W. (2010). How digital concept maps about the collaborators’ knowledge and information influence computer-supported collaborative problem solving. International Journal of Computer-Supported Collaborative Learning, 5(3), 299–320.

  • Engelmann, T., & Hesse, F. W. (2011). Fostering sharing of unshared knowledge by having access to the collaborators’ meta-knowledge structures. Computers in Human Behavior, 27, 2078–2087.

  • Engelmann, T., & Kolodziej, R. (2012). Do virtual groups recognize situations in which it is advantageous to create digital concept maps? In A. Cañas, J. D. Novak, & J. Vanhear (Eds.), Concept maps: Theory, methodology, technology. Proceedings of the 5th International Conference on Concept Mapping (Vol. 1, pp. 172–179). Malta: University of Malta.

  • Engelmann, T., Dehler, J., Bodemer, D., & Buder, J. (2009). Knowledge awareness in CSCL: A psychological perspective. Computers in Human Behavior, 25(4), 949–960.

  • Engelmann, T., Tergan, S.-O., & Hesse, F. W. (2010). Evoking knowledge and information awareness for enhancing computer-supported collaborative problem solving. The Journal of Experimental Education, 78, 1–20.

  • Fransen, J., Weinberger, A., & Kirschner, P. A. (2013). Team effectiveness and team development in CSCL. Educational Psychologist, 48(1), 9–24.

  • Frazier, P. A., Tix, A. P., & Barron, K. E. (2004). Testing moderator and mediator effects in counseling psychology research. Journal of Counseling Psychology, 51(1), 115–134.

  • Hausmann, R. G. M., Chi, M. T. H., & Roy, M. (2004). Learning from collaborative problem-solving: An analysis of three hypothesized mechanisms. In K. D. Forbus, D. Gentner, & T. Regier (Eds.), Proceedings of the 26th Annual Cognitive Science Society (pp. 547–552). Mahwah, NJ: Lawrence Erlbaum.

  • Hsu, J.-L., Hwang, W.-Y., Huang, Y.-M., & Liu, J.-J. (2011). Online behavior in virtual space: An empirical study on helping. Educational Technology & Society, 14(1), 146–157.

  • Janssen, J., & Bodemer, D. (2013). Coordinated computer-supported collaborative learning: Awareness and awareness tools. Educational Psychologist, 48(1), 40–55.

  • Janssen, J., Erkens, G., Kanselaar, G., & Jaspers, J. (2007). Visualization of participation: Does it contribute to successful computer-supported collaborative learning? Computers & Education, 49, 1037–1065.

  • Jarvenpaa, S. L., Knoll, K., & Leidner, D. E. (1998). Is there anybody out there? Antecedents of trust in global virtual teams. Journal of Management Information Systems, 14(4), 29–64.

  • Jarvenpaa, S. L., Shaw, T. R., & Staples, D. S. (2004). Toward contextualized theories of trust: The role of trust in global virtual teams. Information Systems Research, 15(3), 250–267.

  • Kanawattanachai, P., & Yoo, Y. (2002). Dynamic nature of trust in virtual teams. The Journal of Strategic Information Systems, 11, 187–213.

  • Keller, T., Tergan, S.-O., & Coffey, J. (2006). Concept maps used as a “knowledge and information awareness” tool for supporting collaborative problem solving in distributed groups. In A. J. Cañas & J. D. Novak (Eds.), Concept maps: Theory, methodology, technology. Proceedings of the Second International Conference on Concept Mapping (pp. 128–135). San José: Sección de Impresión del SIEDIN.

  • Kirschner, P. A., & Erkens, G. (2013). Toward a framework for CSCL research. Educational Psychologist, 48(1), 1–8.

  • Kirschner, P. A., Beers, P. J., Boshuizen, H., & Gijselaers, W. H. (2008). Coercing shared knowledge in collaborative learning environments. Computers in Human Behavior, 24(2), 403–420.

  • Kramer, R. M. (1999). Trust and distrust in organizations: Emerging perspectives, enduring questions. Annual Review of Psychology, 50, 569–598.

  • Lambropoulos, N., Faulkner, X., & Culwin, F. (2012). Supporting social awareness in collaborative e-learning. British Journal of Educational Technology, 43(2), 295–306.

  • Liang, D. W., Moreland, R., & Argote, L. (1995). Group versus individual training and group performance: The mediating role of transactive memory. Personality and Social Psychology Bulletin, 21(4), 384–393.

  • Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734.

  • Nickerson, R. S. (1999). How we know – and sometimes misjudge – what others know: Imputing one’s own knowledge to others. Psychological Bulletin, 125(6), 737–759.

  • Novak, J. D., & Gowin, D. B. (1984). Learning how to learn. New York: Cambridge University Press.

  • Nückles, M., & Stürz, A. (2006). The assessment tool: A method to support asynchronous communication between computer experts and laypersons. Computers in Human Behavior, 22, 917–940.

  • Nunnally, J., & Bernstein, I. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill, Inc.

  • Paul, D. L., & McDaniel, R. R., Jr. (2004). A field study of the effect of interpersonal trust on virtual collaborative relationship performance. MIS Quarterly, 28(2), 183–227.

  • Peña, E. A., & Slate, E. H. (2006). Global validation of linear model assumptions. Journal of the American Statistical Association, 101(473), 341–354.

  • Peterson, R. S., & Behfar, K. J. (2003). The dynamic relationship between performance feedback, trust, and conflict in groups: A longitudinal study. Organizational Behavior and Human Decision Processes, 92(1–2), 102–112.

  • Salas, E., Sims, D. E., & Burke, C. S. (2005). Is there a “big five” in teamwork? Small Group Research, 36(5), 555–599.

  • Schreiber, M., & Engelmann, T. (2010). Knowledge and information awareness for initiating transactive memory system processes of computer-supported collaborating ad hoc groups. Computers in Human Behavior, 26, 1701–1709.

  • Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlation: Uses in assessing rater reliability. Psychological Bulletin, 86, 420–428.

  • Smith, G. G., Sorensen, C., Gump, A., Heindel, A. J., Caris, M., & Martinez, C. D. (2011). Overcoming student resistance to group work: Online versus face-to-face. The Internet and Higher Education, 14(2), 121–128.

  • Wegner, D. M. (1986). Transactive memory: A contemporary analysis of the group mind. In B. Mullen & G. R. Goethals (Eds.), Theories of group behaviour (pp. 185–208). New York: Springer.

Acknowledgments

This research project was supported by the German Research Foundation (DFG), by the European Social Fund, and by the Ministry of Science, Research, and the Arts Baden-Württemberg (Germany).

Author information

Corresponding author

Correspondence to Tanja Engelmann.

Appendix: Analytical procedures

Because we were interested in interaction effects between condition and the trust variables, as well as between condition and the collaboration-quality variables, regression analyses were conducted. More specifically, moderator analyses were conducted following Aiken and West (1991). The requirements for conducting regression analyses were tested each time, that is, the global test statistic was calculated for each analysis: the global test statistic as a function of the model residuals “is formed from four asymptotically independent statistics, each with the potential to detect a particular violation” (Peña and Slate 2006, p. 353). These statistics test for violations of linearity, homoscedasticity, uncorrelatedness, and normality. In this paper, only those analyses are reported that passed the global test.
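
The global test statistic of Peña and Slate (2006) is implemented in R (the gvlma package) but has no standard Python counterpart. As a rough, non-authoritative illustration of what is being checked, the sketch below tests the same four assumptions with separate, well-established diagnostics from statsmodels and SciPy; the data frame is hypothetical and the individual tests are substitutes for, not equivalents of, the authors’ global test.

```python
# Illustrative assumption checks on an OLS model (synthetic data), standing in
# for the Pena-Slate global test used in the paper: linearity (Ramsey RESET),
# homoscedasticity (Breusch-Pagan), uncorrelatedness (Durbin-Watson), and
# normality of residuals (Shapiro-Wilk). Requires a recent statsmodels release.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, linear_reset
from statsmodels.stats.stattools import durbin_watson
from scipy import stats

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "condition": np.repeat([-1, 1], 20),     # effects-coded condition (hypothetical)
    "predictor": rng.normal(size=40),        # e.g. a z-standardized trust score
})
df["criterion"] = 0.4 * df["predictor"] + rng.normal(size=40)   # e.g. performance

X = sm.add_constant(df[["condition", "predictor"]])
res = sm.OLS(df["criterion"], X).fit()

print("Linearity (RESET)      p =", linear_reset(res, use_f=True).pvalue)
print("Homoscedasticity (BP)  p =", het_breuschpagan(res.resid, X)[1])
print("Uncorrelatedness (DW)  d =", durbin_watson(res.resid))
print("Normality (Shapiro)    p =", stats.shapiro(res.resid).pvalue)
```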

For condition as a categorical moderator variable, unweighted effects coding was used (control condition = −1, experimental condition = +1) because the regression coefficients then represent the difference between each condition’s mean and the unweighted mean of both conditions (Cohen et al. 2002). Z-standardization was applied to all other predictors because they were continuous variables. Like centering, z-standardization reduces multicollinearity between the predictors and the interaction term formed from the categorical moderator and the continuous predictor. In addition, it simplifies the comparison of significant moderator effects on different criterion variables and facilitates their plotting (Aiken and West 1991; Cohen et al. 2002; Frazier et al. 2004).
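
A minimal sketch of this coding scheme, with hypothetical variable names (condition, trust): the moderator is unweighted effects-coded and the continuous predictor is z-standardized before the interaction term is formed.

```python
# Sketch of the predictor coding described above (synthetic data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "condition": ["control"] * 20 + ["experimental"] * 20,   # hypothetical labels
    "trust": rng.normal(4.0, 1.0, size=40),                  # e.g. initial trust
})

# Unweighted effects coding: control = -1, experimental = +1.
df["condition_ec"] = df["condition"].map({"control": -1, "experimental": 1})

# z-standardization of the continuous predictor (mean 0, SD 1), analogous to centering.
df["trust_z"] = (df["trust"] - df["trust"].mean()) / df["trust"].std(ddof=1)

# Interaction term = coded moderator x z-standardized predictor.
df["interaction"] = df["condition_ec"] * df["trust_z"]
```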

To conduct the moderator analyses according to Aiken and West (1991), a first series of regression analyses was calculated with only the moderator and one further predictor as predictor variables and an outcome measure as the criterion variable. A second series of regression analyses included the same variables plus the interaction term, obtained by multiplying the moderator with the z-standardized predictor; the change in adjusted R² between the two series indicates the additional variance explained by the interaction. To test the significance of the simple slopes for each level of the categorical moderator variable, two additional regression analyses were conducted (Aiken and West 1991; Frazier et al. 2004): to test the simple slope for the control condition, a dummy coding of control condition = 0 and experimental condition = 1 was applied; for the simple slope of the experimental condition, a dummy coding of control condition = 1 and experimental condition = 0 was applied. These regression analyses were calculated with the recoded moderator, the other predictor, and their interaction term as predictor variables, and an outcome measure as the criterion variable.
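
The following sketch reproduces the logic of this two-step procedure with statsmodels OLS on synthetic data: step 1 enters the coded moderator and the z-standardized predictor, step 2 adds their interaction term (the change in adjusted R² indexing the interaction’s additional explained variance), and the simple slopes are then obtained by re-estimating the model with the two dummy codings described above. Variable names and data are hypothetical, not the study’s.

```python
# Sketch of the Aiken-and-West-style moderator analysis and simple-slope tests
# described above (synthetic data; not the authors' original analysis script).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "condition_ec": np.repeat([-1, 1], 20),   # unweighted effects-coded condition
    "trust_z": rng.normal(size=40),           # z-standardized continuous predictor
})
df["interaction"] = df["condition_ec"] * df["trust_z"]
df["performance"] = (0.3 * df["trust_z"] + 0.2 * df["condition_ec"]
                     + 0.25 * df["interaction"] + rng.normal(size=40))

# Step 1: moderator and predictor only.
step1 = sm.OLS(df["performance"],
               sm.add_constant(df[["condition_ec", "trust_z"]])).fit()

# Step 2: add the interaction term; the change in adjusted R^2 indexes the
# additional variance explained by the interaction.
step2 = sm.OLS(df["performance"],
               sm.add_constant(df[["condition_ec", "trust_z", "interaction"]])).fit()
print("delta adjusted R^2 =", step2.rsquared_adj - step1.rsquared_adj)
print("interaction: b =", step2.params["interaction"], "p =", step2.pvalues["interaction"])

# Simple slopes: dummy-code the moderator so that 0 marks the condition of interest;
# the predictor's coefficient then equals that condition's simple slope.
for label, coding in [("control", {-1: 0, 1: 1}), ("experimental", {-1: 1, 1: 0})]:
    d = df.assign(cond_dummy=df["condition_ec"].map(coding))
    d["inter_dummy"] = d["cond_dummy"] * d["trust_z"]
    fit = sm.OLS(d["performance"],
                 sm.add_constant(d[["cond_dummy", "trust_z", "inter_dummy"]])).fit()
    print(f"{label} simple slope: b = {fit.params['trust_z']:.3f}, "
          f"p = {fit.pvalues['trust_z']:.3f}")
```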

Cite this article

Engelmann, T., Kolodziej, R. & Hesse, F.W. Preventing undesirable effects of mutual trust and the development of skepticism in virtual groups by applying the knowledge and information awareness approach. Intern. J. Comput.-Support. Collab. Learn. 9, 211–235 (2014). https://doi.org/10.1007/s11412-013-9187-y
