Computer-based Assessment of Collaborative Problem Solving: Exploring the Feasibility of Human-to-Agent Approach

Abstract

How can activities that measure an individual's collaborative skills be standardized? To understand how students perform on computer-based assessments of collaborative problem solving (CPS), it is necessary to examine empirically the multi-faceted performance that may be distributed across collaboration methods. The aim of this study was to explore possible differences in student performance between human-to-agent (H-A) and human-to-human (H-H) CPS assessment tasks. One hundred seventy-nine 14-year-old students from the United States, Singapore, and Israel participated in the study. Students in both the H-H and H-A modes collaborated and communicated using identical methods and resources; however, in the H-A mode students collaborated with a simulated computer-driven partner, while in the H-H mode they collaborated with another student to solve the problem. Overall, the findings showed that CPS with a computer agent involved significantly higher levels of shared understanding, progress monitoring, and feedback. However, no significant difference was found in students' ability to solve the problem or in their motivation, whether working with a computer agent or a human partner. One major implication of the difference in collaboration scores between the two modes is that the H-A mode allows a wider range of interaction possibilities to be programmed than would be available with a human partner; thus, the H-A approach offers more opportunities for students to demonstrate their CPS skills. This study is among the first of its kind to investigate systematically the effect of collaborative problem solving in standardized assessment settings.

Keywords

Collaborative problem solving · Computer agent · Performance assessment

Copyright information

© International Artificial Intelligence in Education Society 2015

Authors and Affiliations

  1. Pearson, Brookline, USA