Model-Based Tools for Knowledge Assessment

Abstract

This chapter introduces the functions of knowledge representation and presents a critical review of five model-based tools for assessment: Pathfinder, ALA-Reader, jMAP, HIMATT, and AKOVIA. For each tool, the theoretical foundations and typical applications are discussed. The tools are then compared in order to highlight their respective strengths and limitations. The final part argues that there is no simple or complete way to integrate any of the model-based tools; rather, the strength of sound research lies in combining them as well as possible, because multiple perspectives on the same construct are usually needed. Further development of existing tools, as well as of new ones, is therefore necessary to explore human knowledge, its change, decision making, performance, and problem solving as our understanding of these complex human capabilities evolves.

Keywords

Mental representation · AKOVIA · HIMATT · Pathfinder
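
The abstract notes that model-based tools such as HIMATT and AKOVIA compare re-represented knowledge structures against one another, for example a learner's map against an expert referent. As a purely illustrative sketch, not the tools' actual implementation or API, the following Python fragment computes a Tversky-style set similarity over propositions (concept-link-concept triples), the kind of structural and semantic matching score such tools report; all names, data, and weights are assumptions.

    # Illustrative sketch only: Tversky-style similarity between two sets of
    # propositions (concept-link-concept triples). Names, data, and weights
    # are assumptions, not the HIMATT/AKOVIA implementation.

    def tversky_similarity(a, b, alpha=0.5, beta=0.5):
        """Return 1.0 for identical proposition sets, 0.0 for disjoint ones."""
        a, b = set(a), set(b)
        common = len(a & b)
        if common == 0:
            return 0.0
        return common / (common + alpha * len(a - b) + beta * len(b - a))

    # Hypothetical re-represented maps: each proposition is a
    # (concept, relation, concept) triple extracted from a map or text.
    learner_map = {
        ("photosynthesis", "requires", "light"),
        ("photosynthesis", "produces", "oxygen"),
        ("plant", "performs", "photosynthesis"),
    }
    expert_referent = {
        ("photosynthesis", "requires", "light"),
        ("photosynthesis", "produces", "glucose"),
        ("plant", "performs", "photosynthesis"),
        ("chloroplast", "hosts", "photosynthesis"),
    }

    print(round(tversky_similarity(learner_map, expert_referent), 2))  # 0.57

A single index of this kind captures only one perspective on the construct, which is precisely the limitation the chapter raises and the reason several complementary measures are usually combined.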

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. Department of Educational Psychology, University of Oklahoma, Norman, USA
  2. Department of Educational Science, University of Freiburg, Freiburg, Germany
