Information Systems Frontiers, Volume 19, Issue 1, pp 31–56

What you think and what I think: Studying intersubjectivity in knowledge artifacts evaluation

  • Dmytro Babik
  • Rahul Singh
  • Xia Zhao
  • Eric W. Ford


Miscalibration, the failure to evaluate one's own work accurately relative to others' evaluations, is a common concern in social systems of knowledge creation where participants act as both creators and evaluators. Theories of social norming hold that an individual's self-evaluation miscalibration diminishes over multiple iterations of creator-evaluator interaction as shared understanding emerges. This paper explores intersubjectivity and the longitudinal dynamics of miscalibration between creators' and evaluators' assessments in IT-enabled social knowledge creation and refinement systems. Using latent growth modeling, we investigated the dynamics of creators' assessments of their own knowledge artifacts relative to peer evaluators' assessments to determine whether miscalibration attenuates over multiple interactions. Contrary to theory, we found that creators' self-assessment miscalibration does not attenuate over repeated interactions. Moreover, depending on the degree of difference, self-assessment miscalibration can amplify over time, with knowledge artifact creators diverging farther from their peers' collective opinion. Deeper analysis found no significant evidence that bias or controversy influences miscalibration. Therefore, relying on social norming to correct miscalibration in knowledge creation environments (e.g., social media interactions) may not function as expected.


Keywords: Intersubjectivity · Miscalibration · Longitudinal analysis · Knowledge artifacts · Peer-evaluation · Latent classes
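To make the construct concrete: the miscalibration studied here is the gap between a creator's self-rating of an artifact and the peers' collective rating of the same artifact, tracked across rounds of interaction. The sketch below is an illustrative simplification, not the paper's latent growth modeling procedure: it computes the per-round gap and fits an ordinary least-squares trend to its absolute value, where social norming would predict a negative slope (the gap shrinking). All scores shown are hypothetical.

```python
import numpy as np

def miscalibration_trajectory(self_scores, peer_scores):
    """Per-round miscalibration: creator's self-rating minus mean peer rating.
    A positive value means the creator rates the artifact higher than peers do."""
    self_scores = np.asarray(self_scores, dtype=float)
    peer_means = np.array([np.mean(r) for r in peer_scores])
    return self_scores - peer_means

def trend_slope(trajectory):
    """Least-squares slope of the absolute gap over rounds.
    Social norming predicts convergence, i.e., a negative slope;
    the paper reports gaps that fail to shrink (and can widen)."""
    rounds = np.arange(len(trajectory))
    slope, _intercept = np.polyfit(rounds, np.abs(trajectory), 1)
    return slope

# Hypothetical creator: self-ratings vs. three peers' ratings over four rounds.
self_scores = [9, 9, 8, 9]
peer_scores = [[7, 8, 7], [7, 7, 6], [6, 7, 6], [6, 6, 5]]
traj = miscalibration_trajectory(self_scores, peer_scores)
print(traj)               # gap per round
print(trend_slope(traj))  # positive slope: the gap widens rather than closes
```

The actual study estimates these trajectories jointly across many creators with latent growth models (and latent classes of trajectory shapes); the per-creator line fit above only conveys the quantity being modeled.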



Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  1. Department of Information Systems and Supply Chain Management, The University of North Carolina at Greensboro, Greensboro, USA
  2. Health Policy and Management, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, USA
  3. Social Learning Solutions LLC, Greensboro, USA
