Using image recognition to automatically assess programming tasks with graphical output

Published in Education and Information Technologies.

Abstract

Programming MOOCs (Massive Open Online Courses) attract thousands of participants, which means thousands of solutions have to be assessed. As assessing that volume of solutions manually is very time-consuming, automated assessment is essential. However, task requirements must be strict for solutions to be automatically gradable, which often limits the variety of assignments and the participants’ creativity. To promote creativity, we wanted to enable programming tasks with graphical output. To analyze and assess these creative tasks, we developed, implemented, and evaluated a system capable of assessing the graphical output of a solution program using image recognition. The graphical output (image) produced by the solution program, together with a keyword, is sent to an image recognition service provider, which responds with a probability score. The solution is accepted or rejected based on the probability of the given object appearing in the image. The system was tested and evaluated in two runs of the MOOC “Introduction to Programming”: in the first run we used it to automatically assess solutions to programming tasks on a predefined topic, and in the second run on a topic chosen by the participant. As results, we present an evaluation of the usefulness of the system and an overview of participants’ feedback, along with suggestions for future improvements of the system and possible further research.
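The workflow described in the abstract — pairing the solution’s graphical output with a task keyword, querying an image recognition service, and accepting or rejecting based on the returned probability — can be sketched as follows. This is a minimal illustration, not the paper’s actual implementation: the payload field names, the simulated score dictionary standing in for the provider’s response, and the 0.85 threshold are all assumptions.

```python
import base64
import json

THRESHOLD = 0.85  # assumed acceptance cutoff; the course's real value may differ


def build_request(image_bytes: bytes, keyword: str) -> str:
    """Pair the solution's output image with the task keyword before
    querying the image recognition provider. Field names are
    illustrative, not any provider's actual API."""
    return json.dumps({
        "keyword": keyword,
        "image_base64": base64.b64encode(image_bytes).decode("ascii"),
    })


def assess(scores: dict, keyword: str, threshold: float = THRESHOLD) -> bool:
    """Accept the solution if the service reports the required concept
    with probability >= threshold; reject otherwise."""
    return scores.get(keyword, 0.0) >= threshold


# Simulated service response for a "draw a house" task; in practice these
# scores would come from a provider such as Clarifai or Google Cloud Vision.
scores = {"house": 0.93, "tree": 0.41}
print(assess(scores, "house"))  # accepted: 0.93 >= 0.85
print(assess(scores, "boat"))   # rejected: concept absent from response
```

The threshold trades false rejections of unconventional but valid drawings against false acceptances of off-topic images; the paper’s evaluation of participant feedback speaks to exactly this trade-off.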

(Figures 1–10 appear in the full text of the article.)



Acknowledgments

The authors would like to thank the participants and organizers of the MOOCs for their contributions to this paper. This work has been supported by the Information Technology Foundation for Education (HITSA) within the IT Academy Programme.

Author information

Corresponding author

Correspondence to Eerik Muuli.

Ethics declarations

Conflict of interest

None.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Muuli, E., Tõnisson, E., Lepp, M. et al. Using image recognition to automatically assess programming tasks with graphical output. Educ Inf Technol 25, 5185–5203 (2020). https://doi.org/10.1007/s10639-020-10218-z


  • DOI: https://doi.org/10.1007/s10639-020-10218-z
