
METRON, Volume 77, Issue 3, pp 227–238

Bayesian treatment of non-standard problems in test analysis

  • Rajitha M. Silva
  • Yuping Guan
  • Tim B. Swartz

Abstract

This paper extends the methods of [10] to handle non-standard problems in test analysis. The approach is based on a Bayesian framework in which test characteristics are treated as random parameters for which posterior probability assessments are available. The generality of the approach permits straightforward analyses of problems that may be difficult to handle with standard classical test theory and standard item response theory. We first illustrate the methods on aviation test scores where the test outcomes are not dichotomous (i.e., not simply correct or incorrect responses); instead, the approach is modified to handle questions answered on a five-point ordinal scale. The second problem adds the complication of assessing instructors in addition to assessing test questions and students.
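The model details themselves are not reproduced on this page, so the sketch below is only a rough illustration of the kind of ordinal extension the abstract describes: a graded-response-style item response model for five-category answers, written in the JAGS language named in the keywords. The variable names (y, theta, alpha, kappa), the logistic link, and the priors are illustrative assumptions, not the authors' specification.

    model {
      for (i in 1:N) {                         # students
        for (j in 1:J) {                       # test questions
          # cumulative probabilities of answering in category k+1 or higher
          for (k in 1:4) {
            logit(Q[i, j, k]) <- alpha[j] * (theta[i] - kappa[j, k])
          }
          # category probabilities for the five-point ordinal response
          p[i, j, 1] <- 1 - Q[i, j, 1]
          for (k in 2:4) {
            p[i, j, k] <- Q[i, j, k - 1] - Q[i, j, k]
          }
          p[i, j, 5] <- Q[i, j, 4]
          y[i, j] ~ dcat(p[i, j, 1:5])         # observed answer in 1, ..., 5
        }
        theta[i] ~ dnorm(0, 1)                 # latent student ability
      }
      for (j in 1:J) {
        alpha[j] ~ dlnorm(0, 1)                # positive question discrimination
        for (k in 1:4) {
          kappa0[j, k] ~ dnorm(0, 0.1)
        }
        kappa[j, 1:4] <- sort(kappa0[j, 1:4])  # ordered category thresholds
      }
    }

Given an N x J matrix y of answers coded 1 to 5, a model of this form can be compiled and sampled with a standard JAGS front end such as rjags, with posterior summaries of theta, alpha and kappa then playing the role of the test characteristics treated as random parameters in the abstract.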

Keywords

Empirical Bayes · Markov chain Monte Carlo · JAGS programming language


References

  1. Fan, X.: Item response theory and classical test theory: an empirical comparison of their item/person statistics. Educ. Psychol. Meas. 58(3), 357–381 (1998)
  2. Fox, J.-P.: Bayesian Item Response Modeling: Theory and Applications. In: Fienberg, S.E., van der Linden, W.J. (eds.) Statistics for Social and Behavioral Sciences Series. Springer, New York (2010)
  3. Guler, N., Uyanik, G.K., Teker, G.T.: Comparison of classical test theory and item response theory in terms of item parameters. Eur. J. Res. Educ. 2(1), 1–6 (2014)
  4. Hambleton, R.K., Jones, R.W.: Comparison of classical test theory and item response theory and their application to test development. Educ. Meas. Issues Pract. 12(3), 38–47 (1993)
  5. Kohli, N., Koran, J., Henn, L.: Relationships among classical test theory and item response theory frameworks via factor analytic models. Educ. Psychol. Meas. 75(3), 389–405 (2015)
  6. Levy, R., Mislevy, R.J.: Bayesian Psychometric Modeling. Chapman & Hall/CRC Statistics in the Behavioral Science Series, Boca Raton (2016)
  7. Lunn, D., Jackson, C., Best, N., Thomas, A., Spiegelhalter, D.: The BUGS Book: A Practical Introduction to Bayesian Analysis. Chapman & Hall/CRC, Boca Raton (2013)
  8. Plummer, M.: JAGS version 4.0 user manual. http://www.uvm.edu/bbeckage/Teaching/DataAnalysis/Manuals/manual.jags.pdf (2015). Accessed 5 Jun 2017
  9. Raykov, T., Marcoulides, G.A.: On the relationship between classical test theory and item response theory: from one to the other and back. Educ. Psychol. Meas. 76(2), 325–338 (2016)
  10. Silva, R., Guan, Y., Swartz, T.B.: Bayesian diagnostics for test design and analysis. J. Effic. Responsib. Educ. Sci. 10, 44–50 (2017)
  11. Swartz, T.B.: Bayesian clustering with priors on partitions. Stat. Neerl. 65(4), 371–386 (2011)

Copyright information

© Sapienza Università di Roma 2019

Authors and Affiliations

  • Rajitha M. Silva (1)
  • Yuping Guan (2)
  • Tim B. Swartz (3)

  1. Department of Statistics, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
  2. Astrom Aviation Big Data Inc., Richmond, Canada
  3. Department of Statistics and Actuarial Science, Simon Fraser University, Burnaby, Canada
