Validation study of a method for assessing complex ill-structured problem solving by using causal representations

Development Article

Abstract

This study was motivated by an important but poorly understood problem: the lack of research on valid assessment methods for determining progress in higher-order learning in situations involving complex, ill-structured problems. Without valid assessment methods, instructional design research can make little progress in designing effective learning environments that facilitate the acquisition of expertise in complex, ill-structured knowledge domains. In this paper, we first present a method based on causal representations for assessing progress in complex, ill-structured problem solving and discuss its theoretical framework. We then present an experimental study investigating its validity against adapted protocol analysis. The study explored the impact of a massively multiplayer online educational game, designed to support interdisciplinary STEM education, on ninth-grade students' acquisition of complex, ill-structured problem-solving skills. We identify conceptual similarities and differences between the two methods, present our comparative study and its results, and discuss implications for diagnostics and applications. We conclude by considering how the two approaches could be used in conjunction in further research on complex, ill-structured problem solving.
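
To make the core idea concrete: methods of this kind externalize a problem solver's understanding as a causal representation (a directed graph of cause-effect links) and quantify its correspondence to a reference model, such as an expert's map. The sketch below is a minimal, assumed illustration of one such comparison, a Tversky (1977) feature-overlap score computed over edge sets, in the spirit of the similarity indices that HIMATT-style tools draw on; the example models, node labels, and weighting are hypothetical and are not the study's actual instrument.

```python
# Minimal sketch (not the study's instrument): comparing two externalized
# causal representations, encoded as sets of directed cause -> effect links,
# using Tversky's (1977) feature similarity.

def tversky_similarity(a: set, b: set, alpha: float = 0.5, beta: float = 0.5) -> float:
    """|A & B| / (|A & B| + alpha*|A - B| + beta*|B - A|)."""
    if not a and not b:
        return 1.0  # two empty models are trivially identical
    common = len(a & b)
    denom = common + alpha * len(a - b) + beta * len(b - a)
    return common / denom if denom else 0.0

# Hypothetical learner and reference (expert) causal models.
learner = {("overfishing", "fish stock decline"),
           ("fish stock decline", "loss of income")}
expert = {("overfishing", "fish stock decline"),
          ("fish stock decline", "loss of income"),
          ("regulation", "overfishing")}

print(f"structural overlap: {tversky_similarity(learner, expert):.2f}")
# -> 0.80 with alpha = beta = 0.5 (2 shared links, 1 link unique to the expert)
```

Tracking such a score across measurement points is one way a causal-representation method can operationalize "progress of learning" toward an expert model, which is the kind of index the validation study examines.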

Keywords

Complex problem solving · Causal representation · HIMATT


Copyright information

© Association for Educational Communications and Technology 2013

Authors and Affiliations

  1. Department of Educational Psychology, University of Oklahoma, Norman, USA
