Understanding the Differences Between Novice and Expert Programmers in Memorizing Source Code
Abstract
This study investigates the differences between novice and expert programmers in memorizing source code. Participants were categorized using a questionnaire that measured self-estimated programming experience. An instrument for assessing the ability to memorize source code was developed. In addition, well-established cognitive tests for measuring working memory capacity and attention were used, based on the work of Kellogg and Hayes. Forty-two participants transcribed items that were initially hidden but could be revealed at will. We recorded all keystrokes, counted the lookups, and measured the lookup time. The results suggest that experts can memorize more source code at once, as indicated by fewer lookups and less lookup time. Examining the items in more detail, we found evidence that experts may memorize short pieces of source code as semantic entities, whereas novice programmers memorize them line by line. Because our experts also performed significantly better on the working memory capacity tests, our findings must be interpreted with caution. There is therefore a clear need to investigate the correlation between working memory and self-estimated programming experience.
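The measures mentioned above (number of lookups, total lookup time, and keystrokes) can be illustrated with a small example. The following Python sketch is not the authors' instrument; the Event structure, its field names, and the event kinds ("reveal", "hide", "key") are assumptions made purely for illustration of how such measures could be derived from a timestamped event log per participant and item.

```python
# Minimal sketch (illustrative only): deriving lookup count, total lookup
# time, and keystroke count from a hypothetical timestamped event log.
from dataclasses import dataclass

@dataclass
class Event:
    t: float    # timestamp in seconds
    kind: str   # "reveal" (item shown), "hide" (item hidden), or "key" (keystroke)

def lookup_stats(events):
    """Return (number of lookups, total lookup time in seconds, keystroke count)."""
    lookups = 0
    lookup_time = 0.0
    keystrokes = 0
    revealed_at = None
    for e in sorted(events, key=lambda e: e.t):
        if e.kind == "reveal" and revealed_at is None:
            lookups += 1          # each reveal starts one lookup
            revealed_at = e.t
        elif e.kind == "hide" and revealed_at is not None:
            lookup_time += e.t - revealed_at   # time the item stayed visible
            revealed_at = None
        elif e.kind == "key":
            keystrokes += 1
    return lookups, lookup_time, keystrokes

# Example: two lookups of 3 s and 2 s, with two keystrokes in between.
log = [Event(0.0, "reveal"), Event(3.0, "hide"), Event(3.5, "key"),
       Event(4.0, "key"), Event(6.0, "reveal"), Event(8.0, "hide")]
print(lookup_stats(log))  # -> (2, 5.0, 2)
```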
Keywords
Assessment · Object-oriented programming · Working memory · Programming experience
References
- 1. Klieme, E., Hartig, J., Rauch, D.: The concept of competence in educational contexts. In: Assess. Competencies Educ. Contexts, pp. 3–22 (2008)
- 2. Koeppen, K., Hartig, J., Klieme, E., Leutner, D.: Current issues in competence modeling and assessment. Zeitschrift für Psychologie/J. Psychol. 216(2), 61–73 (2008)
- 3. Martens, K., Niemann, D.: When do numbers count? The differential impact of the PISA rating and ranking on education policy in Germany and the US. Ger. Polit. 22(3), 314–332 (2013)
- 4. Weinert, F.E.: Concept of competence: a conceptual clarification. In: Rychen, D.S., Salganik, L.H. (eds.) Defining and Selecting Key Competencies. Hogrefe & Huber Publishers, Ashland (2001)
- 5. Leutner, D., Fleischer, J., Grünkorn, J., Klieme, E. (eds.): Competence Assessment in Education. MEMA. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-50030-0
- 6. Kramer, M., Hubwieser, P., Brinda, T.: A competency structure model of object-oriented programming. In: 2016 International Conference on Learning and Teaching in Computing and Engineering (LaTICE), pp. 1–8. IEEE (2016)
- 7. Kramer, M., Tobinski, D., Brinda, T.: On the way to a test instrument for object-oriented programming competencies. In: Proceedings of the 16th Koli Calling International Conference on Computing Education Research, pp. 145–149. ACM (2016)
- 8. Adelson, B.: Problem solving and the development of abstract categories in programming languages. Mem. Cogn. 9(4), 422–433 (1981)
- 9. Kellogg, R.T.: A model of working memory in writing. In: The Cognitive Demands of Writing: Processing Capacity and Working Memory in Text Production, pp. 57–71. Amsterdam University Press (1996)
- 10. Hayes, J.R.: A new framework for understanding cognition and affect in writing. In: Perspectives on Writing: Research, Theory, and Practice, p. 6 (1996)
- 11. Baddeley, A.D., Hitch, G.: Working memory. Psychol. Learn. Motiv. 8, 47–89 (1974)
- 12. de Groot, A.: Thought and Choice in Chess. De Gruyter, Berlin (2014)
- 13. Chase, W.G., Simon, H.A.: Perception in chess. Cogn. Psychol. 4(1), 55–81 (1973)
- 14. Corsi, P.: Human Memory and the Medial Temporal Region of the Brain. McGill University, Montréal (1972)
- 15. Brunetti, R., Gatto, C.D., Delogu, F.: eCorsi: implementation and testing of the Corsi block-tapping task for digital tablets. Front. Psychol. 5 (2014)
- 16. Beauducel, A.: Intelligence Structure Test: IST. Hogrefe, Oxford (2009)
- 17. Bates, M.E., Lemay, E.P.: The d2 test of attention: construct validity and extensions in scoring techniques. J. Int. Neuropsychol. Soc. 10(3), 392–400 (2004)
- 18. Levenshtein, V.: Binary codes capable of correcting deletions, insertions, and reversals. Sov. Phys. Dokl. 10, 707 (1966)
- 19. Siegmund, J., Kästner, C., Liebig, J., Apel, S., Hanenberg, S.: Measuring and modeling programming experience. Empirical Softw. Eng. 19(5), 1299–1334 (2014)
- 20. Misra, S., Adewumi, A.: Object-oriented cognitive complexity measures. In: Handbook of Research on Innovations in Systems and Software Engineering. IGI Global, Hershey (2015)
- 21. Amstad, T.: Wie verständlich sind unsere Zeitungen? [How understandable are our newspapers?]. University of Zurich (1978)
- 22. Flesch, R.: A new readability yardstick. J. Appl. Psychol. 32(3), 221–233 (1948)
- 23. Bamberger, R.: Lesen - verstehen - lernen - schreiben: die Schwierigkeitsstufen von Texten in deutscher Sprache [Reading - understanding - learning - writing: the difficulty levels of texts in German]. Jugend und Volk, Wien (1984)