Evolution of an Intelligent Deductive Logic Tutor Using Data-Driven Elements


Abstract

Deductive logic is essential to a complete understanding of computer science concepts and is thus fundamental to computer science education. Intelligent tutoring systems that provide individualized instruction have been shown to increase learning gains. We seek to improve the way deductive logic is taught in computer science by developing an intelligent, data-driven logic tutor. We have augmented Deep Thought, an existing computer-based logic tutor, with data-driven methods: intelligent problem selection based on the student's current proficiency, automatically generated on-demand hints, and identification of student problem-solving strategies by clustering previous students. As a result, tutor completion (the proportion of the tutor that students completed) improved steadily as each data-driven method was added to Deep Thought, exposing students to more logic concepts. We also gained additional insight into how different coursework and teaching methods affect tutor effectiveness.
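The data-driven hints described above are typically derived from historical student data: prior students' solution attempts form a graph of proof states, each state is assigned a value (e.g., by value iteration over that graph), and the hint for a stuck student is the highest-valued next state that earlier students reached from the same position. The following is a minimal sketch of that idea, in the spirit of Hint Factory-style hint generation; the state names, reward constants, and function names are illustrative assumptions, not Deep Thought's actual implementation.

```python
# Sketch of data-driven hint generation from historical student data.
# Prior students' attempts form a graph of proof states; value iteration
# scores each state, and the hint for a state is its best observed successor.

GOAL_REWARD = 100.0  # value of a completed proof
STEP_COST = -1.0     # small penalty per step, so shorter paths score higher
DISCOUNT = 0.9

def value_iterate(successors, goals, sweeps=50):
    """successors: state -> list of next states observed in prior attempts."""
    values = {s: 0.0 for s in successors}
    for g in goals:
        values[g] = GOAL_REWARD
    for _ in range(sweeps):
        for state, nexts in successors.items():
            if state in goals or not nexts:
                continue  # goal values are fixed; dead ends keep their value
            values[state] = STEP_COST + DISCOUNT * max(values[n] for n in nexts)
    return values

def next_step_hint(state, successors, values):
    """Suggest the highest-valued next state seen in the historical data."""
    nexts = successors.get(state, [])
    if not nexts:
        return None  # no prior student progressed from here
    return max(nexts, key=lambda n: values[n])

# Toy interaction graph built from hypothetical prior students.
graph = {
    "start": ["applied_MP", "dead_end"],
    "applied_MP": ["goal"],
    "dead_end": [],
    "goal": [],
}
vals = value_iterate(graph, goals={"goal"})
print(next_step_hint("start", graph, vals))  # -> applied_MP
```

Because the values come entirely from observed student behavior, such a hint policy improves automatically as more interaction data is logged, which is what makes this family of methods "data-driven" rather than hand-authored.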

Keywords

Deductive logic instruction · Intelligent tutoring systems · Data-driven methods


Copyright information

© International Artificial Intelligence in Education Society 2016

Authors and Affiliations

  1. North Carolina State University, Raleigh, USA
