Evaluating mental models in mathematics: a comparison of methods

Development Article


Cognitive scientists investigate mental models (the ways humans organize and structure knowledge in their minds) in order to understand how people understand and interact with the world. Mental model research is concerned with internal conceptual systems that are not easily or directly observable. The goal of this research was to investigate the use of Evaluation of Mental Models (EMM) to assess the mental models of individuals and groups solving complex problems, and to compare novices' and experts' models as a basis for providing feedback to learners. This study tested a qualified web-based assessment toolkit, Highly Interactive Model-based Assessment Tools and Technologies (HIMATT), in a previously untested domain: mathematics. University students and their mathematics instructors used two tools within HIMATT, Dynamic Evaluation of Enhanced Problem Solving (DEEP) and Text-Model Inspection Trace of Concepts and Relations (T-MITOCAR). The research questions were: Do novice participants exhibit common patterns of thought when they conceptualize complex mathematical problems? Do novices conceptualize complex mathematical problems differently from experts? What differences exist between DEEP and T-MITOCAR patterns and responses according to the HIMATT measures? Findings suggest that EMM and HIMATT can effectively support formative assessment in a complex mathematical domain. Finally, this study confirms a common assumption of cognitive scientists: the tool being used can affect the tool user's understanding of the problem being solved. In this case, although DEEP and T-MITOCAR led to somewhat different expert models, both tools proved useful in supporting formative assessment.


Keywords: Mental models · Formative assessment · Complex learning · HIMATT · DEEP · T-MITOCAR



This research study is the main part of a grant project called "Evaluation of Mental Models," supported by the Scientific & Technological Research Council of Turkey (TÜBİTAK). Some results of the study were previously presented and published, as indicated in the author's cited references: Gogus (2009, 2012a, 2012b) and Gogus and Gogus (2009). This article is an expanded version of Gogus (2012a).


  1. Alexander, C. (1963). Notes on the synthesis of form. Cambridge, MA: Harvard University Press.
  2. Andrews, G., & Halford, G. S. (2002). A cognitive complexity metric applied to cognitive development. Cognitive Psychology, 45(2), 153–219.
  3. Ariely, D. (2009). Predictably irrational: The hidden forces that shape our decisions. New York: Harper Collins Publishers Ltd.
  4. Boyce, W. E., & DiPrima, R. C. (2005). Elementary differential equations and boundary value problems (8th ed.). Hoboken, NJ: Wiley.
  5. Dabbagh, N. H., Jonassen, D. H., Yueh, H. P., & Samouilova, M. (2000). Assessing a problem-based learning approach to an introductory instructional design course: A case study. Performance Improvement Quarterly, 13(3), 60–83.
  6. Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data (revised ed.). Cambridge, MA: Bradford Books/MIT Press.
  7. Ericsson, K. A., & Smith, J. (1991). Prospects and limits in the empirical study of expertise: An introduction. In K. A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 1–38). Cambridge: Cambridge University Press.
  8. Fisher, K. M. (2000). SemNet® semantic networking. In K. M. Fisher, J. H. Wandersee, & D. Moody (Eds.), Mapping biology knowledge (pp. 143–166). Dordrecht, The Netherlands: Kluwer.
  9. Gentner, D., & Stevens, A. L. (1983). Mental models. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
  10. Glaser, R. (1996). Changing the agency for learning: Acquiring expert performance. In K. A. Ericsson (Ed.), The road to excellence: The acquisition of expert performance in the arts and sciences, sports, and games (pp. 303–311). Mahwah, NJ: Lawrence Erlbaum.
  11. Gogus, A. (2009). Assessment of learning in complex domains. Paper presented at the First International Congress of Educational Technology, "Trends and Issues in Educational Research," 1–3 May 2009, Canakkale, Turkey.
  12. Gogus, A. (2012a). Evaluation of mental models: Using Highly Interactive Model-Based Assessment Tools and Technologies (HIMATT) in mathematics domain. Technology, Instruction, Cognition and Learning (TICL), 9(1), 31–50.
  13. Gogus, A. (2012b). Model-based assessment of conceptual representations. Journal of Instructional Psychology (accepted).
  14. Gogus, A., & Gogus, N. G. (2009, November). Evaluation of Mental Models (EMM) in mathematics domain. Proceedings of the IADIS International Conference on Cognition and Exploratory Learning in Digital Age (CELDA 2009), 19–22, 2009, Italy.
  15. Gogus, A., Koszalka, T. A., & Spector, J. M. (2009). Assessing conceptual representations of ill-structured problems. Technology, Instruction, Cognition and Learning, 7(1), 1–20.
  16. Herl, H. E., O'Neil, H. F., Jr., Chung, G. L., & Schacter, J. (1999). Reliability and validity of a computer-based knowledge mapping system to measure content understanding. Computers in Human Behavior, 15(3–4), 315–333.
  17. Ifenthaler, D. (2006). Diagnosis of the learning-dependent progression of mental models: Development of the SMD Technology as a methodology for assessing individual models on relational, structural and semantic levels (Dissertation, Universität Freiburg, 2006).
  18. Ifenthaler, D. (2009, November). Bridging the gap between expert-novice differences: The model-based feedback approach. IADIS International Conference on Cognition and Exploratory Learning in Digital Age (CELDA 2009), Rome, Italy.
  19. Ifenthaler, D. (2010). Relational, structural, and semantic analysis of graphical representations and concept maps. Educational Technology Research and Development, 58(1), 81–97. doi:10.1007/s11423-008-9087-4.
  20. Ifenthaler, D., Masduki, I., & Seel, N. M. (2009). The mystery of cognitive structure and how we can detect it: Tracking the development of cognitive structures over time. Instructional Science. doi:10.1007/s11251-009-9097-6.
  21. Ifenthaler, D., Pirnay-Dummer, P., & Seel, N. M. (2007). The role of cognitive learning strategies and intellectual abilities in mental model building processes. Technology, Instruction, Cognition and Learning, 5, 353–366.
  22. Ifenthaler, D., & Seel, N. M. (2005). The measurement of change: Learning-dependent progression of mental models. Technology, Instruction, Cognition, and Learning, 2(4), 317–336.
  23. Jacobson, M. J. (2000). Problem solving about complex systems: Difference between experts and novices. In B. Fishman & S. O'Connor-Divelbiss (Eds.), Fourth international conference of the learning sciences (pp. 14–21). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
  24. Johnson, T. E., O'Connor, D. L., Pirnay-Dummer, P. N., Ifenthaler, D., Spector, J. M., & Seel, N. (2006). Comparative study of mental model research methods: Relationships among ACSMM, SMD, MITOCAR & DEEP methodologies. Paper presented at the Second International Conference on Concept Mapping, San Jose, Costa Rica.
  25. Johnson-Laird, P. N. (1983). Mental models. Cambridge: Cambridge University Press.
  26. Jonassen, D. H. (1997). Instructional design models for well-structured and ill-structured problem-solving learning outcomes. Educational Technology Research and Development, 45(1), 65–95.
  27. Jonassen, D. H. (2000). Toward a design theory of problem solving. Educational Technology Research and Development, 48(4), 63–85.
  28. Jonassen, D. H., Beissner, K., & Yacci, M. (1993). Structural knowledge: Techniques for assessing, conveying, and acquiring structural knowledge. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
  29. Kim, H. (2008). An investigation of the effects of model-centered instruction in individual and collaborative contexts: The case of acquiring instructional design expertise. Unpublished dissertation, Educational Psychology & Learning Systems, Florida State University, Tallahassee, FL.
  30. Kim, A. (2010). The effects of initial mental model construction on mental model progression, performance, intrinsic motivation, and metacognition in learning financial literacy. Unpublished dissertation, Educational Psychology & Learning Systems, Florida State University, Tallahassee, FL.
  31. Kim, M. K. (2012). Assessment technologies for adaptive instruction: Diagnosis of stage-sequential learning progress in problem solving contexts. Unpublished dissertation, Educational Psychology & Instructional Technology, University of Georgia, Athens, GA.
  32. Lakoff, G., & Johnson, M. (1981). Metaphors we live by. Chicago, IL: Chicago University Press.
  33. Lee, J. (2008). Effects of model-centered instruction and levels of learner expertise on ill-structured problem solving. Unpublished dissertation, Educational Psychology & Learning Systems, Florida State University, Tallahassee, FL.
  34. Liu, X., & Hichey, M. (1996). The internal consistency of a concept mapping scoring scheme and its effect on prediction validity. International Journal of Science Education, 18(8), 921–937.
  35. Markham, K. M., Mintzes, J. J., & Jones, M. G. (1994). The concept map as a research and evaluation tool: Further evidence of validity. Journal of Research in Science Teaching, 31(1), 91–101.
  36. McClure, J., Sonak, B., & Suen, H. (1999). Concept map assessment of classroom learning: Reliability, validity, and logistical practicality. Journal of Research in Science Teaching, 36, 475–492.
  37. McKeown, J. O. (2008). Using annotated concept map assessments as predictors of performance and understanding of complex problems for teacher technology integration. Unpublished dissertation, Educational Psychology and Learning Systems, Florida State University, Tallahassee, FL.
  38. Milrad, M., Spector, J. M., & Davidsen, P. I. (2003). Model facilitated learning. In S. Naidu (Ed.), Learning and teaching with technology: Principles and practices (pp. 13–27). London: Kogan Page.
  39. Payne, S. J. (1991). A descriptive study of mental models. Behaviour and Information Technology, 10(1), 3–21.
  40. Perez, R. S., Johnson, J. F., & Emery, C. D. (1995). Instructional design expertise: A cognitive model of design. Instructional Science, 23(5–6), 321–349.
  41. Pirnay-Dummer, P. (2007). Model inspection trace of concepts and relations: A heuristic approach to language-oriented model assessment. Paper presented at AERA 2007, Chicago, IL.
  42. Pirnay-Dummer, P. (2010). Complete structure comparison. In D. Ifenthaler, P. Pirnay-Dummer, & N. M. Seel (Eds.), Computer-based diagnostics and systematic analysis of knowledge (pp. 235–258). New York: Springer.
  43. Pirnay-Dummer, P., & Ifenthaler, D. (2010). Automated knowledge visualization and assessment. In D. Ifenthaler, P. Pirnay-Dummer, & N. M. Seel (Eds.), Computer-based diagnostics and systematic analysis of knowledge (pp. 77–115). New York: Springer.
  44. Pirnay-Dummer, P., Ifenthaler, D., & Spector, J. M. (2008, October). Highly integrated model assessment technology and tools. Paper presented at the CELDA International Conference, Freiburg, Germany.
  45. Pirnay-Dummer, P., Ifenthaler, D., & Spector, J. M. (2010). Highly integrated model assessment technology and tools. Educational Technology Research and Development, 58(1), 3–18.
  46. Polk, T. A., & Seifert, C. M. (2002). Cognitive modelling. Cambridge, MA: MIT Press.
  47. Ruiz-Primo, M. A., & Shavelson, R. J. (1997). Concept-map based assessment: On possible sources of sampling variability (ERIC Document Reproduction Service No. ED422403).
  48. Seel, N. M. (1991). Weltwissen und mentale Modelle [World knowledge and mental models]. Göttingen: Hogrefe.
  49. Seel, N. M. (1999). Educational diagnosis of mental models: Assessment problems and technology-based solutions. Journal of Structural Learning and Intelligent Systems, 14(2), 153–185.
  50. Seel, N. M. (2003). Model centered learning and instruction. Technology, Instruction, Cognition and Learning, 1(1), 59–85.
  51. Seel, N. M. (2004). Model-centered learning environments: Theory, instructional design, and effects. In N. M. Seel & S. Dijkstra (Eds.), Curriculum, plans, and processes in instructional design: International perspectives. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
  52. Shin, N., Jonassen, D. H., & McGee, S. (2003). Predictors of well-structured and ill-structured problem solving in an astronomy simulation. Journal of Research in Science Teaching, 40(1), 6–33.
  53. Shute, V., Torreano, L., & Willis, R. (2000). DNA: Toward an automated knowledge elicitation and organization tool. In S. P. Lajoie (Ed.), Computers as cognitive tools, Vol. two: No more walls (pp. 309–338). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
  54. Spector, J. M. (2010, October). Assessing progress of learning in complex domains. Invited paper presented at the 11th International Conference on Education Research, Seoul, South Korea.
  55. Spector, J. M., Dennen, V. P., & Koszalka, T. A. (2005). Causal maps, mental models and assessing acquisition of expertise. Technology, Instruction, Cognition and Learning, 3, 167–183.
  56. Spector, J. M., & Koszalka, T. A. (2004). The DEEP methodology for assessing learning in complex domains (Technical Report No. NSF-03-542). Syracuse, NY: Syracuse University, Instructional Design, Development, and Evaluation (IDD&E).
  57. Taleb, N. N. (2008). The black swan: The impact of the highly improbable. New York: Penguin.
  58. Taricani, E. M., & Clariana, R. B. (2006). A technique for automatically scoring open ended concept maps. Educational Technology Research and Development, 54(1), 61–78.
  59. van Merriënboer, J. J. G. (1997). Training complex cognitive skills: A four-component instructional design model for technical training. Englewood Cliffs, NJ: Educational Technology Publications.
  60. Young, I. (2008). Mental models: Aligning design strategy with human behavior. New York: Rosenfeld Media.

Copyright information

© Association for Educational Communications and Technology 2013

Authors and Affiliations

  1. Sabancı University, CIAD, Istanbul, Turkey
