Data-Driven Hint Generation in Vast Solution Spaces: a Self-Improving Python Programming Tutor

Article

Abstract

To provide personalized help to students working on code-writing problems, we introduce ITAP (Intelligent Teaching Assistant for Programming), a data-driven tutoring system. ITAP uses state abstraction, path construction, and state reification to automatically generate personalized hints, even for student states that have not previously occurred in the data. We describe the system's implementation in detail and perform a technical evaluation on a small dataset to determine the effectiveness of the component algorithms and ITAP's potential for self-improvement. The results show that ITAP can produce hints for almost any given state after being given only a single reference solution, and that it improves its performance as it collects more data over time.
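The three-stage pipeline named in the abstract can be illustrated with a toy sketch. The function names (`abstract_state`, `generate_hint`) and the token-overlap similarity measure below are illustrative assumptions, not ITAP's actual API or algorithms: ITAP canonicalizes programs with many semantics-preserving AST transformations and constructs edit paths through a solution space, which is far richer than this sketch.

```python
import ast

def abstract_state(source: str) -> str:
    # State abstraction (toy version): parse the program and dump its AST,
    # so programs that differ only in formatting or comments map to the
    # same canonical state.
    return ast.dump(ast.parse(source))

def generate_hint(student_src: str, reference_srcs: list) -> str:
    # Path construction (toy version): choose the reference solution whose
    # abstract state shares the most tokens with the student's state.
    # ITAP instead computes AST edit paths toward the nearest goal state.
    student_tokens = set(abstract_state(student_src).split())
    return max(
        reference_srcs,
        key=lambda ref: len(student_tokens & set(abstract_state(ref).split())),
    )
```

In the full system, the chosen next state is then reified back into concrete code before being shown to the student; here, returning the nearest reference solution stands in for that last step.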

Keywords

Data-driven tutoring · Automatic hint generation · Programming tutors · Solution space


Copyright information

© International Artificial Intelligence in Education Society 2015

Authors and Affiliations

  1. Carnegie Mellon University, Pittsburgh, USA
