
A Data-Driven Method for Helping Teachers Improve Feedback in Computer Programming Automated Tutors

  • Jessica McBroom
  • Kalina Yacef
  • Irena Koprinska
  • James R. Curran
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10947)

Abstract

The increasing prevalence and sophistication of automated tutoring systems necessitates the development of new methods for their evaluation and improvement. In particular, data-driven methods offer the opportunity to provide teachers with insight into student interactions with online systems, facilitating improvements that maximise the systems' educational value. In this paper, we present a new technique for analysing feedback in an automated programming tutor. Our method involves first clustering submitted programs with the same functionality together, then applying sequential pattern mining and graphically visualising student progress through an exercise. Using data from a beginner Python course, we demonstrate how this method can be applied to programming exercises to analyse student approaches, responses to feedback, areas of greatest difficulty and repetition of mistakes. This process could be used by teachers to more effectively understand student behaviour, allowing them to adapt both traditional and online teaching materials and feedback to optimise student experiences and outcomes.
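As a rough illustration of the pipeline described above, the sketch below clusters submissions by behaviour (programs producing the same outputs on shared test inputs are treated as functionally equivalent) and then counts transitions between clusters across a student's successive submissions, yielding the edges of a student-progress graph. All names, the callable stand-ins for submitted programs, and the toy data are illustrative assumptions; the paper's sequential pattern mining is richer than these simple transition counts.

```python
from collections import Counter

def functional_signature(program, test_inputs):
    """Cluster key for a submission: the tuple of outputs it produces
    on shared test inputs. `program` is a callable standing in for a
    submitted script (an assumption made for this sketch)."""
    outputs = []
    for x in test_inputs:
        try:
            outputs.append(program(x))
        except Exception:
            outputs.append("<error>")
    return tuple(outputs)

def cluster_submissions(submissions, test_inputs):
    """Group (student, program) submissions by functional signature."""
    clusters = {}
    for student, program in submissions:
        key = functional_signature(program, test_inputs)
        clusters.setdefault(key, []).append(student)
    return clusters

def mine_transitions(sequences):
    """Count how often students move from one behaviour cluster to the
    next -- the weighted edges of a progress graph for an exercise."""
    counts = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
    return counts

# Toy exercise: "double the input", attempted by two students.
buggy   = lambda x: x + 2   # common mistake: adds 2 instead of doubling
correct = lambda x: x * 2
tests = [1, 3]

sig_buggy = functional_signature(buggy, tests)    # (3, 5)
sig_ok    = functional_signature(correct, tests)  # (2, 6)

# Each student's sequence of cluster keys over successive submissions.
paths = [[sig_buggy, sig_ok], [sig_buggy, sig_buggy, sig_ok]]
edges = mine_transitions(paths)
print(edges[(sig_buggy, sig_ok)])  # prints 2: both students fix the bug
```

A teacher could then visualise `edges` as a graph (the paper uses Cytoscape), with heavy edges out of a buggy cluster highlighting where feedback fails to move students forward.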

Keywords

Data-driven teacher support · Automated tutoring systems · Feedback improvement · Tutoring system evaluation

References

  1. Gerdes, A., Heeren, B., Jeuring, J., van Binsbergen, L.T.: Ask-Elle: an adaptable programming tutor for Haskell giving automated feedback. Int. J. Artif. Intell. Educ. 27(1), 65–100 (2017). https://doi.org/10.1007/s40593-015-0080-x
  2. Venables, A., Haywood, L.: Programming students NEED instant feedback! In: Proceedings of the Fifth Australasian Computing Education Conference (ACE), pp. 267–272. ACM, Adelaide (2003)
  3. Edwards, S.H., Perez-Quinones, M.A.: Web-CAT: automatically grading programming assignments. In: Proceedings of the 13th Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE), p. 328. ACM, Madrid (2008). https://doi.org/10.1145/1597849.1384371
  4. Enström, E., Kreitz, G., Niemelä, F., Söderman, P., Kann, V.: Five years with Kattis: using an automated assessment system in teaching. In: Proceedings of the Frontiers in Education Conference (FIE), pp. T3J-1–T3J-6. IEEE, Rapid City (2011). https://doi.org/10.1109/fie.2011.6142931
  5. Rivers, K., Koedinger, K.: Data-driven hint generation in vast solution spaces: a self-improving Python programming tutor. Int. J. Artif. Intell. Educ. 27(1), 37–64 (2017). https://doi.org/10.1007/s40593-015-0070-z
  6. Chow, S., Yacef, K., Koprinska, I., Curran, J.: Automated data-driven hints for computer programming students. In: Adjunct Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization (UMAP), pp. 5–10. ACM, Bratislava (2017). https://doi.org/10.1145/3099023.3099065
  7. Gramoli, V., Charleston, M., Jeffries, B., Koprinska, I., McGrane, M., Radu, A., Viglas, A., Yacef, K.: Mining autograding data in computer science education. In: Proceedings of the Australasian Computer Science Week Multiconference (ACSW). ACM, Canberra (2016). https://doi.org/10.1145/2843043.2843070
  8. Stamper, J., Barnes, T., Lehmann, L., Croy, M.: The Hint Factory: automatic generation of contextualized help for existing computer aided instruction. In: Proceedings of the 9th International Conference on Intelligent Tutoring Systems (ITS), pp. 71–78. Springer, Montreal (2008)
  9. Johnson, S., Zaiane, O.: Deciding on feedback polarity and timing. In: Proceedings of the International Conference on Educational Data Mining (EDM), pp. 220–222. IEDMS, Chania (2012)
  10. Barnes, T., Stamper, J.: Automatic hint generation for logic proof tutoring using historical data. J. Educ. Technol. Soc. 13(1), 3–12 (2010). https://doi.org/10.1007/978-3-540-69132-7_41
  11. Perikos, I., Grivokostopoulou, F., Hatzilygeroudis, I.: Assistance and feedback mechanism in an intelligent tutoring system for teaching conversion of natural language into logic. Int. J. Artif. Intell. Educ. 27(3), 475–514 (2017). https://doi.org/10.1007/s40593-017-0139-y
  12. Dominguez, A., Yacef, K., Curran, J.: Data mining for individualized hints in eLearning. In: Proceedings of the International Conference on Educational Data Mining (EDM), pp. 91–100. IEDMS, Pittsburgh (2010)
  13. Yin, H., Moghadam, J., Fox, A.: Clustering student programming assignments to multiply instructor leverage. In: Proceedings of the Learning at Scale Conference (L@S), pp. 367–372. ACM, Vancouver (2015). https://doi.org/10.1145/2724660.2728695
  14. Gross, S., Mokbel, B., Paassen, B., Hammer, B., Pinkwart, N.: Example-based feedback provision using structured solution spaces. Int. J. Learn. Technol. 9(3), 248–280 (2014). https://doi.org/10.1504/ijlt.2014.065752
  15. Paassen, B., Mokbel, B., Hammer, B.: Adaptive structure metrics for automated feedback provision in intelligent tutoring systems. Neurocomputing 192, 3–13 (2016). https://doi.org/10.1016/j.neucom.2015.12.108
  16. Glassman, E., Scott, J., Singh, R., Guo, P.J., Miller, R.C.: OverCode: visualizing variation in student solutions to programming problems at scale. ACM Trans. Comput. Hum. Interact. 22(2), 1–35 (2015). https://doi.org/10.1145/2699751
  17. Koprinska, I., Stretton, J., Yacef, K.: Students at risk: detection and remediation. In: Proceedings of the International Conference on Educational Data Mining (EDM), pp. 512–515. IEDMS, Madrid (2015)
  18. Koprinska, I., Stretton, J., Yacef, K.: Predicting student performance from multiple data sources. In: Conati, C., Heffernan, N., Mitrovic, A., Verdejo, M.F. (eds.) AIED 2015. LNCS (LNAI), vol. 9112, pp. 678–681. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-19773-9_90
  19. McBroom, J., Jeffries, B., Koprinska, I., Yacef, K.: Mining behaviours of students in autograding submission. In: Proceedings of the International Conference on Educational Data Mining (EDM), pp. 159–166. IEDMS, Raleigh (2016)
  20. McBroom, J., Jeffries, B., Koprinska, I., Yacef, K.: Exploring and following students' strategies when completing their weekly tasks. In: Proceedings of the International Conference on Educational Data Mining (EDM), pp. 609–610. IEDMS, Raleigh (2016)
  21. Grok Learning. https://groklearning.com. Accessed 8 Feb 2018
  22. Figures 1, 3 and 4 were produced using the Cytoscape graphing software. www.cytoscape.org. Accessed 8 Feb 2018

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Jessica McBroom (1)
  • Kalina Yacef (1)
  • Irena Koprinska (1)
  • James R. Curran (2)
  1. School of Information Technologies, The University of Sydney, Sydney, Australia
  2. Grok Learning, Sydney, Australia
