Learning by Reviewing Paper-Based Programming Assessments

Part of the Lecture Notes in Computer Science book series (LNISA, volume 11082)

Abstract

This paper presents a retrospective analysis of students’ use of self-regulated learning strategies with an educational technology that connects physical and digital learning spaces. A classroom study was carried out in a Data Structures & Algorithms course offered by the School of Computer Science. Students’ reviewing behaviors were logged, and the associated learning impacts were analyzed by monitoring their progress throughout the course. The study confirmed that students who improved their performance spent more time and effort reviewing formal assessments, particularly their mistakes, and were consistent in their reviewing behavior throughout the semester. In contrast, students who fell behind in class reviewed their graded assessments ineffectively, focusing mostly on what they already knew rather than on their misconceptions.
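The abstract describes logging students’ reviewing behaviors and relating the time and effort spent on mistakes to course performance. As a rough illustration only (the paper’s actual logging schema and analysis pipeline are not described in this preview, so the event fields and names below are hypothetical), such review logs might be aggregated per student like this:

```python
from collections import defaultdict

# Hypothetical review-log records: (student_id, assessment_id, seconds_spent,
# reviewed_a_mistake). These field names are assumptions for illustration.
events = [
    ("s1", "quiz1", 120, True),
    ("s1", "quiz1", 45, False),
    ("s2", "quiz1", 30, False),
]

def summarize_reviewing(events):
    """Aggregate total review time and the share of time spent on mistakes,
    per student, from a flat list of review events."""
    totals = defaultdict(lambda: {"total_s": 0, "mistake_s": 0})
    for student, _assessment, seconds, on_mistake in events:
        totals[student]["total_s"] += seconds
        if on_mistake:
            totals[student]["mistake_s"] += seconds
    # Derive the fraction of review time spent on mistakes (0.0 if no reviewing).
    return {
        s: {**t, "mistake_ratio": (t["mistake_s"] / t["total_s"]) if t["total_s"] else 0.0}
        for s, t in totals.items()
    }

summary = summarize_reviewing(events)
print(summary["s1"])  # s1: 165 s total, 120 s of it on mistakes
```

A metric like `mistake_ratio` captures the distinction the abstract draws: students who improved focused their review time on mistakes, while students who fell behind revisited mostly what they already knew.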

Keywords

  • Programming learning
  • Reviewing behavior
  • Educational technology
  • Educational data mining
  • Behavioral analytics


Notes

  1. https://cidsewpga.fulton.asu.edu/.


Author information

Corresponding author

Correspondence to Yancy Vance Paredes.

Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper

Paredes, Y.V., Azcona, D., Hsiao, IH., Smeaton, A. (2018). Learning by Reviewing Paper-Based Programming Assessments. In: Pammer-Schindler, V., Pérez-Sanagustín, M., Drachsler, H., Elferink, R., Scheffel, M. (eds) Lifelong Technology-Enhanced Learning. EC-TEL 2018. Lecture Notes in Computer Science(), vol 11082. Springer, Cham. https://doi.org/10.1007/978-3-319-98572-5_39

  • DOI: https://doi.org/10.1007/978-3-319-98572-5_39

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-98571-8

  • Online ISBN: 978-3-319-98572-5

  • eBook Packages: Computer Science (R0)