International Conference on Informatics in Schools: Situation, Evolution, and Perspectives

Informatics in Schools. Curricula, Competences, and Competitions, pp. 45–56

Defining Proficiency Levels of High School Students in Computer Science by an Empirical Task Analysis: Results of the MoKoM Project

  • Jonas Neugebauer
  • Johannes Magenheim
  • Laura Ohrndorf
  • Niclas Schaper
  • Sigrid Schubert
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9378)

Abstract

In recent years, an interdisciplinary team of researchers in organizational psychology and didactics of informatics has worked together to develop an empirically sound competence structure model, a measurement instrument, and a competence level model. This is a relevant step toward the reliable assessment of competences and the development of competence-based curricula, supporting the recent outcome orientation of the German educational system.

In this paper we present the last component of our efforts: a model of proficiency levels, derived from the results of a competence assessment with over 500 German students. We describe different approaches to defining proficiency levels and the process we used to derive them from our data. Finally, we give a detailed overview of the four proficiency levels, supplemented with exemplary tasks students should be able to solve at each level.

Keywords

Competence Modeling · Proficiency Levels · Competence Level Model · Secondary Education



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Jonas Neugebauer (1)
  • Johannes Magenheim (1)
  • Laura Ohrndorf (2)
  • Niclas Schaper (1)
  • Sigrid Schubert (2)
  1. University of Paderborn, Paderborn, Germany
  2. University of Siegen, Siegen, Germany
