Differential Item Functioning in the National Tests in Religious Education in Sweden

Chapter
Part of the Evaluating Education: Normative Systems and Institutional Practices book series (ENSIP)

Abstract

The Mantel-Haenszel method is used to investigate whether any items in the 2013 national tests in religious education exhibit differential item functioning (DIF) between groups of students. An item exhibits DIF when it functions differently for two groups after adjusting for the groups' overall abilities. Two comparisons are made: between boys and girls, and between native speakers and pupils with Swedish as their second language. The analysis leads, for example, to the speculation that closed-format items exhibiting DIF are more likely to favour boys than girls, while the reverse holds for open-format items. Since data are available from only two tests, these speculations need to be investigated further with data from later tests.
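As a rough illustration of the method (a minimal sketch, not the chapter's own code): pupils are stratified on a matching variable such as total test score, a 2×2 table of group membership against item success is formed per stratum, and the Mantel-Haenszel common odds ratio pools these tables. The function name and data layout below are illustrative assumptions.

```python
from collections import defaultdict

def mantel_haenszel_odds_ratio(correct, group, stratum):
    """Mantel-Haenszel common odds ratio across score strata.

    correct: 0/1 item scores, one per pupil
    group:   'ref' or 'focal' label per pupil
    stratum: matching value per pupil (e.g. total test score)
    """
    # Per-stratum 2x2 counts: [A, B, C, D] =
    # [ref correct, ref wrong, focal correct, focal wrong]
    tables = defaultdict(lambda: [0, 0, 0, 0])
    for c, g, s in zip(correct, group, stratum):
        t = tables[s]
        if g == 'ref':
            t[0 if c else 1] += 1
        else:
            t[2 if c else 3] += 1

    # Pool the per-stratum odds ratios, weighting by stratum size
    num = den = 0.0
    for a, b, c, d in tables.values():
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# Single-stratum example: reference group answers 3/4 correctly,
# focal group 1/4, so the pooled odds ratio is (3*3)/(1*1) = 9.
correct = [1, 1, 1, 0, 1, 0, 0, 0]
group = ['ref'] * 4 + ['focal'] * 4
stratum = [5] * 8
print(mantel_haenszel_odds_ratio(correct, group, stratum))  # 9.0
```

A ratio near 1 indicates no DIF; values far from 1 indicate that, at comparable ability levels, the item favours one group. In practice the ratio is often reported on the ETS delta scale as −2.35 · ln(α), as in the Holland and Thayer flagging rules.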

In addition to the DIF analysis, some descriptive statistics concerning the pupils’ results on the tests are presented, in particular the results on the items relating to ethics.

Keywords

Differential Item Functioning · Item Response Theory · National Test · Religious Education · Dichotomous Item


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg, Gothenburg, Sweden
