A Measurement Model of Microgenetic Transfer for Improving Instructional Outcomes

  • Philip I. Pavlik Jr.
  • Michael Yudelson
  • Kenneth R. Koedinger


Efforts to improve instructional task design often refer to mental structures, such as “schemas” (e.g., Gick & Holyoak, 1983) or “identical elements” (Thorndike & Woodworth, 1901), that are common to both the instructional and target tasks. This component-based approach (e.g., Singley & Anderson, 1989) has been employed in psychometrics (Tatsuoka, 1983), cognitive science (Koedinger & MacLaren, 2002), and, most recently, educational data mining (Cen, Koedinger, & Junker, 2006). A typical assumption of these theory-based models is that an itemization of the “knowledge components” shared between tasks is sufficient to predict transfer between them. In this paper we step back from such cognitive-theory-based models of transfer and propose a psychometric measurement model that removes most cognitive assumptions, allowing us to understand the data without the bias of a theory of transfer or of domain knowledge. The goal of this work is to provide a methodology that lets researchers analyse complex data without the theoretical assumptions built into other methods. Our experimentally controlled examples illustrate the non-intuitive nature of some transfer situations, motivating the need for the unbiased analysis our model provides. We explain how to use this Contextual Performance Factors Analysis (CPFA) model to measure the learning progress of related skills at a fine granularity. The CPFA analysis then allows us to answer questions about the best order of practice for related skills and the appropriate amount of repetition, depending on whether students succeed or fail on each individual practice problem. We conclude by showing how the model can be used to test theories, discussing how well two different cognitive theories agree with its qualitative results.
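The CPFA model builds on the Performance Factors Analysis (PFA) family of logistic models (see Pavlik, Cen, & Koedinger, 2009, in the references). As a rough illustration of how such a measurement model predicts performance, the sketch below implements the base PFA prediction: the log-odds of a correct response sum, over the knowledge components an item taps, an intercept plus weighted counts of prior successes and failures. The function name, parameter values, and the `lcm` knowledge component are illustrative assumptions, not the paper's fitted model; CPFA additionally splits the success and failure counts by the context in which the prior practice occurred.

```python
import math

def pfa_probability(kcs, successes, failures, beta, gamma, rho):
    """PFA-style prediction of P(correct) for an item tagged with the
    knowledge components in `kcs`.

    beta[k]  -- easiness intercept for component k
    gamma[k] -- weight on the count of prior successes with k
    rho[k]   -- weight on the count of prior failures with k
    (Illustrative parameterization; real values are fit by logistic regression.)
    """
    m = sum(beta[k] + gamma[k] * successes[k] + rho[k] * failures[k]
            for k in kcs)
    return 1.0 / (1.0 + math.exp(-m))  # logistic link from log-odds to probability

# Hypothetical example: one "lcm" (least common multiple) skill, first with no
# prior practice, then after three successful attempts.
params = dict(beta={"lcm": 0.0}, gamma={"lcm": 0.2}, rho={"lcm": 0.1})
p_start = pfa_probability(["lcm"], {"lcm": 0}, {"lcm": 0}, **params)  # 0.5
p_later = pfa_probability(["lcm"], {"lcm": 3}, {"lcm": 0}, **params)  # higher
```

A separate failure weight ρ lets the fitted model express that even unsuccessful attempts can produce learning, which is one way a measurement model of this family can address the paper's question of how much repetition is warranted when students are failing rather than succeeding.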


Keywords: Human learning · Pedagogical strategy · Transfer of learning · Student modelling


  1. Ainsworth, S. (1999). The functions of multiple representations. Computers & Education, 33(2–3), 131–152.
  2. Anderson, J. R., & Fincham, J. M. (1994). Acquisition of procedural skills from examples. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(6), 1322–1340.
  3. Anderson, J. R., & Lebiere, C. (1998). The atomic components of thought. Mahwah, NJ: Lawrence Erlbaum Associates.
  4. Atkinson, R. C. (1972). Ingredients for a theory of instruction. American Psychologist, 27(10), 921–931.
  5. Atkinson, R. K., Derry, S. J., Renkl, A., & Wortham, D. (2000). Learning from examples: Instructional principles from the worked examples research. Review of Educational Research, 70(2), 181–214.
  6. Atkinson, R. K., Lin, L., & Harrison, C. (2009). Comparing the efficacy of different signaling techniques. In G. Siemens & C. Fulford (Eds.), World Conference on Educational Multimedia, Hypermedia and Telecommunications 2009 (pp. 954–962). Honolulu: AACE.
  7. Baker, R. S. J. D., & Yacef, K. (2009). The state of educational data mining in 2009: A review and future visions. Journal of Educational Data Mining, 1(1), 3–17.
  8. Barnes, T. (2005). The Q-matrix method: Mining student response data for knowledge. Paper presented at the American Association for Artificial Intelligence 2005 Educational Data Mining Workshop.
  9. Bassok, M., & Holyoak, K. J. (1989). Interdomain transfer between isomorphic topics in algebra and physics. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15(1), 153–166.
  10. Beck, J., & Mostow, J. (2008). How who should practice: Using learning decomposition to evaluate the efficacy of different types of practice for different types of students (pp. 353–362).
  11. Booth, J. L., & Koedinger, K. R. (2008). Key misconceptions in algebraic problem solving. In B. C. Love, K. McRae, & V. M. Sloutsky (Eds.), Proceedings of the 30th Annual Meeting of the Cognitive Science Society (pp. 571–576). Austin, TX: Cognitive Science Society.
  12. Bransford, J. D., & Schwartz, D. L. (1999). Rethinking transfer: A simple proposal with multiple implications. Review of Research in Education, 24(1), 61–100.
  13. Carrier, M., & Pashler, H. (1992). The influence of retrieval on retention. Memory & Cognition, 20(6), 633–642.
  14. Cen, H., Koedinger, K. R., & Junker, B. (2006). Learning Factors Analysis: A general method for cognitive model evaluation and improvement. In Proceedings of the 8th International Conference on Intelligent Tutoring Systems (pp. 164–175). Berlin: Springer.
  15. Cen, H., Koedinger, K., & Junker, B. (2008). Comparing two IRT models for conjunctive skills. In B. Woolf, E. Aïmeur, R. Nkambou, & S. Lajoie (Eds.), Intelligent Tutoring Systems (Vol. 5091, pp. 796–798). Berlin: Springer.
  16. Chen, Z., & Klahr, D. (2008). Remote transfer of scientific-reasoning and problem-solving strategies in children. Advances in Child Development and Behavior, 36, 419–470.
  17. Chi, M., Koedinger, K. R., Gordon, G., Jordan, P., & VanLehn, K. (2011). Instructional factors analysis: A cognitive model for multiple instructional interventions. In Proceedings of the 4th International Conference on Educational Data Mining (pp. 61–70). Eindhoven, The Netherlands.
  18. Corbett, A. T., & Anderson, J. R. (1992). Student modeling and mastery learning in a computer-based programming tutor. In C. Frasson, G. Gauthier, & G. McCalla (Eds.), Intelligent tutoring systems: Second International Conference on Intelligent Tutoring Systems (pp. 413–420). New York: Springer.
  19. Desmarais, M. C., Maluf, A., & Liu, J. (1996). User-expertise modeling with empirically derived probabilistic implication networks. User Modeling and User-Adapted Interaction, 5(3–4), 283–315.
  20. Draney, K. L., Pirolli, P., & Wilson, M. (1995). A measurement model for a complex cognitive skill. In P. D. Nichols, S. F. Chipman, & R. L. Brennan (Eds.), Cognitively diagnostic assessment (pp. 103–125).
  21. Falkenhainer, B., Forbus, K. D., & Gentner, D. (1989). The structure-mapping engine: Algorithm and examples. Artificial Intelligence, 41(1), 1–63.
  22. Falmagne, J.-C., Doignon, J.-P., Cosyn, E., & Thiery, N. (2003). The assessment of knowledge in theory and in practice. Institute for Mathematical Behavioral Sciences, Paper 26.
  23. Fischer, G. H. (1973). The linear logistic test model as an instrument in educational research. Acta Psychologica, 37(6), 359–374.
  24. Gentner, D., Loewenstein, J., & Thompson, L. (2003). Learning and transfer: A general role for analogical encoding. Journal of Educational Psychology, 95(2), 393–405.
  25. Gentner, D., Loewenstein, J., Thompson, L., & Forbus, K. D. (2009). Reviving inert knowledge: Analogical abstraction supports relational retrieval of past events. Cognitive Science, 33(8), 1343–1382.
  26. Gibson, E. J. (1940). A systematic application of the concepts of generalization and differentiation to verbal learning. Psychological Review, 47(3), 196–229.
  27. Gick, M. L., & Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12(3), 306–355.
  28. Gick, M. L., & Holyoak, K. J. (1983). Schema induction and analogical transfer. Cognitive Psychology, 15(1), 1–38.
  29. Gobet, F. (1998). Expert memory: A comparison of four theories. Cognition, 66(2), 115–152.
  30. Gong, Y., Beck, J., & Heffernan, N. T. (2010). Comparing knowledge tracing and performance factor analysis by using multiple model fitting procedures. In V. Aleven, J. Kay, & J. Mostow (Eds.), Intelligent Tutoring Systems (Vol. 6094, pp. 35–44). Berlin: Springer.
  31. Hummel, J. E., & Holyoak, K. J. (2003). A symbolic-connectionist theory of relational inference and generalization. Psychological Review, 110(2), 220–264.
  32. Karpicke, J. D., & Roediger, H. L., III. (2008). The critical importance of retrieval for learning. Science, 319(5865), 966–968.
  33. Koedinger, K. R. (2002). Toward evidence for instructional design principles: Examples from Cognitive Tutor Math 6. In Proceedings of PME-NA XXXIII (the North American Chapter of the International Group for the Psychology of Mathematics Education) (pp. 21–49).
  34. Koedinger, K. R., & Corbett, A. T. (2010). The knowledge-learning-instruction (KLI) framework: Toward bridging the science-practice chasm to enhance robust student learning. Pittsburgh: Carnegie Mellon University.
  35. Koedinger, K. R., & MacLaren, B. A. (2002). Developing a pedagogical domain theory of early algebra problem solving. CMU-HCII Tech Report 02-100.
  36. Koedinger, K. R., & Nathan, M. J. (2004). The real story behind story problems: Effects of representation on quantitative reasoning. The Journal of the Learning Sciences, 13(2), 129–164.
  37. Koedinger, K. R., Aleven, V., Roll, I., & Baker, R. S. J. D. (2009). In vivo experiments on whether supporting metacognition in intelligent tutoring systems yields robust learning. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Handbook of metacognition in education. New York: Routledge.
  38. Koedinger, K. R., Corbett, A. T., & Perfetti, C. (2012). The knowledge-learning-instruction framework: Bridging the science-practice chasm to enhance robust student learning. Cognitive Science, 36(5), 757–798.
  39. Morris, C. D., Bransford, J. D., & Franks, J. J. (1977). Levels of processing versus transfer appropriate processing. Journal of Verbal Learning and Verbal Behavior, 16(5), 519–533.
  40. Ohlsson, S. (2011). Deep learning: How the mind overrides experience. Cambridge University Press.
  41. Pavlik, P. I., Jr. (2007). Understanding and applying the dynamics of test practice and study practice. Instructional Science, 35, 407–441.
  42. Pavlik, P. I., Jr., Yudelson, M., & Koedinger, K. R. (2011). Using contextual factors analysis to explain transfer of least common multiple skills. In G. Biswas, S. Bull, J. Kay, & A. Mitrovic (Eds.), Artificial Intelligence in Education (Vol. 6738, pp. 256–263). Berlin: Springer.
  43. Pavlik, P. I., Jr., Cen, H., & Koedinger, K. R. (2009). Performance Factors Analysis: A new alternative to knowledge tracing. In V. Dimitrova, R. Mizoguchi, B. D. Boulay, & A. Graesser (Eds.), Proceedings of the 14th International Conference on Artificial Intelligence in Education (pp. 531–538). Brighton, UK.
  44. Peterson, L. R. (1965). Paired-associate latencies after the last error. Psychonomic Science, 2(6), 167–168.
  45. Postman, L., Keppel, G., & Zacks, R. (1968). Studies of learning to learn: VII. The effects of practice on response integration. Journal of Verbal Learning and Verbal Behavior, 7(4), 776–784.
  46. Rasch, G. (1966). An item analysis which takes individual differences into account. British Journal of Mathematical and Statistical Psychology, 19(1), 49–57.
  47. Rickard, T. C. (1997). Bending the power law: A CMPL theory of strategy shifts and the automatization of cognitive skills. Journal of Experimental Psychology: General, 126(3), 288–311.
  48. Rickard, T. C. (1999). A CMPL alternative account of practice effects in numerosity judgment tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25(2), 532–542.
  49. Rickard, T. C., & Bourne, L. E., Jr. (1996). Some tests of an identical elements model of basic arithmetic skills. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22(5), 1281–1295.
  50. Rittle-Johnson, B., Saylor, M., & Swygert, K. E. (2008). Learning from explaining: Does it matter if mom is listening? Journal of Experimental Child Psychology, 100(3), 215–224.
  51. Romero, C., & Ventura, S. (2007). Educational data mining: A survey from 1995 to 2005. Expert Systems with Applications, 33(1), 135–146.
  52. Scheiblechner, H. (1972). Das Lernen und Lösen komplexer Denkaufgaben [The learning and solving of complex reasoning tasks]. Zeitschrift für Experimentelle und Angewandte Psychologie, 19, 476–506.
  53. Segalowitz, N. S., & Segalowitz, S. J. (1993). Skilled performance, practice, and the differentiation of speed-up from automatization effects: Evidence from second-language word recognition. Applied Psycholinguistics, 14(3), 369–385.
  54. Shadish, W. R., & Cook, T. D. (2009). The renaissance of field experimentation in evaluating interventions. Annual Review of Psychology, 60(1), 607–629.
  55. Siegler, R. S., & Crowley, K. (1991). The microgenetic method: A direct means for studying cognitive development. American Psychologist, 46(6), 606–620.
  56. Singley, M. K., & Anderson, J. R. (1985). The transfer of text-editing skill. International Journal of Man-Machine Studies, 22(4), 403–423.
  57. Singley, M. K., & Anderson, J. R. (1989). The transfer of cognitive skill. Cambridge, MA: Harvard University Press.
  58. Skinner, B. F. (1958). Teaching machines: From the experimental study of learning come devices which arrange optimal conditions for self-instruction. Science, 128(3330), 969–977.
  59. Sloutsky, V. M., Kaminski, J. A., & Heckler, A. F. (2005). The advantage of simple symbols for learning and transfer. Psychonomic Bulletin & Review, 12(3), 508–513.
  60. Son, J. Y., & Goldstone, R. L. (2009). Fostering general transfer with specific simulations. Pragmatics and Cognition, 17, 1–42.
  61. Son, J. Y., Smith, L. B., & Goldstone, R. L. (2008). Simplicity and generalization: Short-cutting abstraction in children’s object categorizations. Cognition, 108(3), 626–638.
  62. Spada, H. (1977). Logistic models of learning and thought. In H. Spada & W. F. Kempf (Eds.), Structural models of learning and thought (pp. 227–262). Bern: Huber.
  63. Spada, H., & McGaw, B. (1985). The assessment of learning effects with linear logistic test models. In S. Embretson (Ed.), Test design: Developments in psychology and psychometrics. Orlando: Academic Press.
  64. Stamper, J., & Koedinger, K. (2011). Human-machine student model discovery and improvement using DataShop. In G. Biswas, S. Bull, J. Kay, & A. Mitrovic (Eds.), Artificial Intelligence in Education (Vol. 6738, pp. 353–360). Berlin: Springer.
  65. Sternberg, R. J. (2008). Increasing fluid intelligence is possible after all. Proceedings of the National Academy of Sciences of the United States of America, 105(19), 6791–6792.
  66. Sweller, J., Chandler, P., Tierney, P., & Cooper, M. (1990). Cognitive load as a factor in the structuring of technical material. Journal of Experimental Psychology: General, 119(2), 176–192.
  67. Taatgen, N. A., & Lee, F. J. (2003). Production compilation: A simple mechanism to model complex skill acquisition. Human Factors, 45(1), 61–76.
  68. Tatsuoka, K. K. (1983). Rule space: An approach for dealing with misconceptions based on item response theory. Journal of Educational Measurement, 20(4), 345–354.
  69. Thiessen, E. D., & Pavlik, P. I., Jr. (2013). iMinerva: A mathematical model of distributional statistical learning. Cognitive Science, 37(2), 310–343.
  70. Thompson, C. P., Wenger, S. K., & Bartling, C. A. (1978). How recall facilitates subsequent recall: A reappraisal. Journal of Experimental Psychology: Human Learning and Memory, 4(3), 210–221.
  71. Thompson, C. K., Shapiro, L. P., & Roberts, M. M. (1993). Treatment of sentence production deficits in aphasia: A linguistic-specific approach to wh-interrogative training and generalization. Aphasiology, 7(1), 111–133.
  72. Thorndike, E. L., & Woodworth, R. S. (1901). The influence of improvement in one mental function upon the efficiency of other functions (I). Psychological Review, 8(3), 247–261.
  73. Yudelson, M., Pavlik, P. I., Jr., & Koedinger, K. R. (2011). User modeling – a notoriously black art. In J. Konstan, R. Conejo, J. Marzo, & N. Oliver (Eds.), User Modeling, Adaption, and Personalization (Vol. 6787, pp. 317–328). Berlin: Springer.

Copyright information

© International Artificial Intelligence in Education Society 2015

Authors and Affiliations

  • Philip I. Pavlik Jr. (1)
  • Michael Yudelson (3)
  • Kenneth R. Koedinger (2)

  1. Institute for Intelligent Systems and Psychology, University of Memphis, Memphis, USA
  2. Human Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, USA
  3. Carnegie Learning, Inc., Pittsburgh, USA
