
Detecting Person Heterogeneity in a Large-Scale Orthographic Test Using Item Response Models

  • Christine Hohensinn
  • Klaus D. Kubinger
  • Manuel Reif
Conference paper
Part of the Studies in Classification, Data Analysis, and Knowledge Organization book series (STUDIES CLASS)

Abstract

Achievement tests for students are constructed with the aim of measuring a specific competency uniformly for all examinees. This requires students to work on the items in a homogeneous way. The dichotomous logistic Rasch model is the model of choice for assessing these assumptions during test construction. However, it is also possible that various subgroups of the population either apply different strategies for solving the items or make specific types of mistakes, or that different items measure different latent traits. These assumptions can be evaluated with extensions of the Rasch model or other item response models. In this paper, the construction of a new large-scale German orthographic test for eighth-grade students is presented. During test construction and calibration, a pilot version was administered to 3,227 students in Austria. In a first step of the analysis, the items showed poor fit to the dichotomous logistic Rasch model. Further analyses identified homogeneous subgroups in the sample that are characterized by different orthographic error patterns.
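The two models the abstract contrasts can be sketched briefly: under the dichotomous logistic Rasch model, the probability of a correct response depends only on the difference between a person parameter and an item parameter, while a mixed Rasch model allows each latent class of examinees its own set of item difficulties. The following is a minimal illustrative sketch, not the authors' analysis code; all parameter values are hypothetical.

```python
import math

def rasch_prob(theta, beta):
    """P(X = 1 | theta, beta) under the dichotomous logistic Rasch model:
    exp(theta - beta) / (1 + exp(theta - beta))."""
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

def mixed_rasch_likelihood(response, theta, class_betas, class_weights):
    """Likelihood of one response pattern under a mixed Rasch model:
    a weighted sum over latent classes, each with its own item
    difficulty vector (hypothetical two-class setup for illustration)."""
    total = 0.0
    for weight, betas in zip(class_weights, class_betas):
        p = 1.0
        for x, beta in zip(response, betas):
            prob = rasch_prob(theta, beta)
            p *= prob if x == 1 else (1.0 - prob)
        total += weight * p
    return total

# A person of average ability facing an item of average difficulty
# succeeds with probability 0.5 under the Rasch model.
print(rasch_prob(0.0, 0.0))

# Two hypothetical latent classes with different difficulty orderings
# for the same two items, mixed with equal weights.
print(mixed_rasch_likelihood([1, 0], 0.0,
                             class_betas=[[0.0, 0.0], [1.0, -1.0]],
                             class_weights=[0.5, 0.5]))
```

If the data stem from such a mixture, a single Rasch calibration over the pooled sample will show the poor item fit described above, which is why class-specific analyses are needed.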


Copyright information

© Springer International Publishing Switzerland 2013

Authors and Affiliations

  • Christine Hohensinn (1)
  • Klaus D. Kubinger (1)
  • Manuel Reif (1)

  1. Faculty of Psychology, Department of Psychological Assessment and Applied Psychometrics, University of Vienna, Vienna, Austria
