
Recommending Tasks in Online Judges

  • Giorgio Audrito
  • Tania Di Mascio
  • Paolo Fantozzi
  • Luigi Laura (corresponding author)
  • Gemma Martini
  • Umberto Nanni
  • Marco Temperini
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1007)

Abstract

Online Judges are e-learning tools used to improve programming skills, typically in preparation for programming contests such as the International Olympiad in Informatics and the ACM International Collegiate Programming Contest.

In this context, given the broad range of programming tasks now available in Online Judges, it is crucial to help learners by recommending tasks that are challenging but not unsolvable. So far, few authors in the literature have focused on Recommender Systems (RSs) for Online Judges; in this paper we discuss some peculiarities of this problem that prevent the use of standard RSs, and we address a first building block: the assessment of the (relative) hardness of tasks.

We also present the results of a preliminary experimental evaluation of our approach, which proved effective on the available dataset, consisting of all the submissions made to the Italian National Online Judge, the platform used to train students for the Italian Olympiads in Informatics.
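The building block addressed here is the assessment of relative task hardness from submission data. As a purely illustrative, hypothetical sketch (not the method described in the paper), one could compare two tasks by the solve rates of the users who attempted both, so that differences in user skill are partly controlled for:

```python
# Hypothetical sketch (not the authors' method): estimate the relative hardness
# of two tasks from a submission log by comparing solve rates among the users
# who attempted both tasks.

from collections import defaultdict
from typing import Iterable, Tuple

Submission = Tuple[str, str, bool]  # (user_id, task_id, solved)

def relative_hardness(submissions: Iterable[Submission],
                      task_a: str, task_b: str) -> float:
    """Return a score in [-1, 1]: positive means task_a looks harder than task_b."""
    solved = defaultdict(bool)      # (user, task) -> True if ever solved
    attempted = defaultdict(set)    # user -> set of attempted tasks
    for user, task, ok in submissions:
        attempted[user].add(task)
        solved[(user, task)] = solved[(user, task)] or ok

    # Restrict the comparison to users who attempted both tasks.
    common = [u for u in attempted if {task_a, task_b} <= attempted[u]]
    if not common:
        return 0.0  # no evidence either way

    a_rate = sum(solved[(u, task_a)] for u in common) / len(common)
    b_rate = sum(solved[(u, task_b)] for u in common) / len(common)
    return b_rate - a_rate  # lower solve rate for task_a => task_a harder

if __name__ == "__main__":
    log = [("alice", "t1", True), ("alice", "t2", False),
           ("bob", "t1", True), ("bob", "t2", True)]
    print(relative_hardness(log, "t2", "t1"))  # 0.5: t2 appears harder than t1
```

A real system would also need to handle partial scores, repeated attempts, and the sparsity of users who attempted both tasks; the paper's evaluation is carried out on the full submission log of the Italian National Online Judge.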

Keywords

Recommender systems · Programming contests · e-learning


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Giorgio Audrito (1, 3)
  • Tania Di Mascio (2)
  • Paolo Fantozzi (4)
  • Luigi Laura (3, 4), corresponding author
  • Gemma Martini (3)
  • Umberto Nanni (4)
  • Marco Temperini (4)

  1. University of Torino, Turin, Italy
  2. University of L’Aquila, L’Aquila, Italy
  3. Italian Association for Informatics and Automatic Calculus (AICA), Milan, Italy
  4. Sapienza University of Rome, Rome, Italy
