
Psychometric Challenges in Modeling Scientific Problem-Solving Competency: An Item Response Theory Approach

  • Ronny Scherer
Part of the Studies in Classification, Data Analysis, and Knowledge Organization book series (STUDIES CLASS)

Abstract

The ability to solve complex problems is one of the key competencies in science. Previous research on modeling scientific problem solving has mainly focused on the dimensionality of the construct and has rarely addressed psychometric test characteristics such as local item dependencies, which can occur especially in computer-based assessments. The present study therefore aims to model scientific problem solving by taking into account four components of the construct and dependencies among items within these components. Based on a data set of 1,487 German high-school students of different grade levels who worked on computer-based assessments of problem solving, local item dependencies were quantified by means of testlet models and Q3 statistics. The results revealed that a model differentiating testlets of cognitive processes and virtual systems fitted the data best and remained invariant across grades.
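For orientation, the following is a minimal sketch of the two tools named in the abstract, with notation introduced here for illustration rather than taken from the chapter: Yen's Q3 statistic correlates the residuals of an item pair after conditioning on the estimated ability, and a Rasch testlet model augments the item response function with a testlet-specific person effect.

$$
d_{ij} = u_{ij} - P_j(\hat{\theta}_i), \qquad
Q_{3,jk} = \operatorname{corr}\bigl(d_{\cdot j},\, d_{\cdot k}\bigr),
$$
$$
P\bigl(u_{ij} = 1 \mid \theta_i, \gamma_{i,d(j)}\bigr)
 = \frac{\exp\bigl(\theta_i + \gamma_{i,d(j)} - b_j\bigr)}
        {1 + \exp\bigl(\theta_i + \gamma_{i,d(j)} - b_j\bigr)},
$$

where $u_{ij}$ is person $i$'s scored response to item $j$, $\hat{\theta}_i$ the ability estimate, $b_j$ the item difficulty, and $\gamma_{i,d(j)}$ the person-specific effect of the testlet $d(j)$ that contains item $j$. Item pairs whose Q3 values stand out clearly from the average residual correlation are flagged as locally dependent.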



Acknowledgements

The author wishes to thank Professor Dr. Rüdiger Tiemann (Humboldt-Universität zu Berlin, Germany) for his conceptual support in conducting this study. This research was partly funded by a grant from the German Academic Exchange Service (DAAD).


Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  1. Faculty of Educational Sciences, Centre for Educational Measurement (CEMO), University of Oslo, Oslo, Norway
