A Review of Static Analysis Approaches for Programming Exercises

  • Michael Striewe
  • Michael Goedicke
Part of the Communications in Computer and Information Science book series (CCIS, volume 439)

Abstract

Static source code analysis is a common feature in automated grading and tutoring systems for programming exercises. Different approaches and tools are used in this area, each with its own benefits and drawbacks that directly influence the quality of assessment feedback. In this paper, the principal approaches and tools for static analysis are presented, evaluated and compared regarding their usefulness in learning scenarios. The goal is to draw a connection between the technical outcomes of source code analysis and the didactic benefits that can be gained from them for programming education and feedback generation.
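For illustration only (this sketch is not taken from the paper): the kind of rule-based check that static analysis applies to student submissions in a grading system can be outlined in a few lines of plain Java. The class name, rule set, and feedback messages below are hypothetical, and the check works on raw source text for brevity; production tools such as PMD operate on the parsed syntax tree instead.

```java
// Minimal sketch of a static, rule-based feedback check on student source code.
// It never compiles or runs the submission; it only scans the text for
// discouraged patterns and reports human-readable feedback.
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class SimpleStaticChecker {

    // Hypothetical rules; real tools derive such rules from the AST or bytecode.
    private static final Pattern EMPTY_CATCH =
            Pattern.compile("catch\\s*\\([^)]*\\)\\s*\\{\\s*\\}");
    private static final Pattern SYSTEM_EXIT =
            Pattern.compile("System\\.exit\\s*\\(");

    public static List<String> check(String studentSource) {
        List<String> feedback = new ArrayList<>();
        if (EMPTY_CATCH.matcher(studentSource).find()) {
            feedback.add("Empty catch block: exceptions are silently swallowed.");
        }
        if (SYSTEM_EXIT.matcher(studentSource).find()) {
            feedback.add("Avoid calling System.exit() in an exercise solution.");
        }
        return feedback;
    }

    public static void main(String[] args) {
        String submission = "try { run(); } catch (Exception e) {}";
        check(submission).forEach(System.out::println);
    }
}
```

Running the check on a submission containing an empty catch block yields the corresponding feedback message; an automated grader would typically aggregate such findings into scores or tutoring hints.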

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Michael Striewe, University of Duisburg-Essen, Germany
  • Michael Goedicke, University of Duisburg-Essen, Germany
