
Managing items and knowledge components: domain modeling in practice

  • Radek Pelánek
Development Article

Abstract

Adaptive learning systems need large pools of examples for practice: thousands of items that need to be organized into hundreds of knowledge components within a domain model. Domain modeling and the closely related student modeling are studied intensively in research. However, there is a gap between research studies and the practical issues faced by developers of scalable educational technologies. The aim of this paper is to bridge this gap by connecting techniques and notions used in research papers to practical problems in development. We place specific emphasis on scalability, that is, on techniques that enable relatively cheap and fast development of adaptive learning systems. We summarize conceptual questions in domain modeling, provide an overview of approaches in the research literature, and discuss insights based on the development and analysis of a widely used system. We conclude with recommendations for both developers and researchers in the area of adaptive learning systems.

Keywords

Domain modeling · Student modeling · Adaptivity · Scalability · Knowledge component · System development

Notes

Acknowledgements

The author thanks Petr Jarušek, the chief developer of the Umíme systems, for the long-term development work and the thorough discussions on which this paper is based. The author also thanks members of the Adaptive Learning group at Masaryk University for their insights and feedback.

Compliance with ethical standards

Conflict of interest

The author declares that he has no conflict of interest.


Copyright information

© Association for Educational Communications and Technology 2019

Authors and Affiliations

  1. Faculty of Informatics, Masaryk University, Brno, Czech Republic
