
Technology, Knowledge and Learning, Volume 19, Issue 1–2, pp 165–181

Unifying Computer-Based Assessment Across Conceptual Instruction, Problem-Solving, and Digital Games

  • William L. Miller
  • Ryan S. Baker
  • Lisa M. Rossi
Original research

Abstract

As students work through online learning systems such as the Reasoning Mind blended learning system, they are often not confined to a single educational activity; instead, they move among several types of activities, such as conceptual instruction, problem-solving items, and fluency-building games. However, most work on assessing student knowledge with methods such as Bayesian Knowledge Tracing has modeled learning within a single context or activity, even when the same skill is encountered in multiple activities. We investigate ways in which student learning can be modeled across activities, toward understanding the relationships between activities and which approaches are more successful at integrating information across them. However, we find that integrating data across activities does not improve predictive power relative to using data from just one activity. This suggests that seemingly identical skills in different activities may actually be cognitively different for students.
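The modeling approach the abstract describes builds on standard Bayesian Knowledge Tracing, which maintains a probability that a student knows a skill and updates it after each observed response. As a concrete illustration only, the Python sketch below implements the classic BKT update and applies it to practice opportunities drawn from different activity types, as a cross-activity model would; the parameter values, names, and activity sequence are illustrative assumptions, not the authors' fitted model.

    # A minimal sketch of the standard Bayesian Knowledge Tracing (BKT) update,
    # extended so that opportunities from several activity types update one
    # shared knowledge estimate. All parameter values, names, and the activity
    # sequence are illustrative placeholders, not the paper's fitted model.

    from dataclasses import dataclass

    @dataclass
    class BKTParams:
        p_init: float = 0.3   # p(L0): skill known before any practice
        p_learn: float = 0.1  # p(T): chance of learning at each opportunity
        p_guess: float = 0.2  # p(G): correct response despite not knowing
        p_slip: float = 0.1   # p(S): incorrect response despite knowing

    def predict_correct(p_known: float, p: BKTParams) -> float:
        # Probability the next response is correct given the current estimate.
        return p_known * (1 - p.p_slip) + (1 - p_known) * p.p_guess

    def update(p_known: float, correct: bool, p: BKTParams) -> float:
        # Bayesian update of p(L) from one observed response, then a learning step.
        if correct:
            evidence_known = p_known * (1 - p.p_slip)
            evidence_unknown = (1 - p_known) * p.p_guess
        else:
            evidence_known = p_known * p.p_slip
            evidence_unknown = (1 - p_known) * (1 - p.p_guess)
        posterior = evidence_known / (evidence_known + evidence_unknown)
        return posterior + (1 - posterior) * p.p_learn

    # Cross-activity variant: one shared p(L) is updated by every activity type.
    # A per-activity alternative would instead fit separate parameters (and keep
    # a separate estimate) for each activity.
    params = BKTParams()
    p_known = params.p_init
    for activity, correct in [("concept", True), ("problem", False), ("game", True)]:
        print(f"{activity}: predicted p(correct) = {predict_correct(p_known, params):.3f}")
        p_known = update(p_known, correct, params)
    print(f"final p(L) = {p_known:.3f}")

Whether a single shared estimate of this kind outperforms separate per-activity models is exactly the empirical question the paper examines; its finding that pooling data across activities did not improve prediction favors the per-activity view.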

Keywords

Bayesian Knowledge Tracing · Educational activities · Learning environments · Transfer

Notes

Acknowledgments

We thank the Bill and Melinda Gates Foundation for its support, George Khachatryan for valuable suggestions and comments, and Belinda Yew for assistance with the literature review.


Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  • William L. Miller (1)
  • Ryan S. Baker (2)
  • Lisa M. Rossi (3)
  1. Reasoning Mind, Houston, USA
  2. Teachers College, Columbia University, New York, USA
  3. Georgia Institute of Technology, Atlanta, USA
