The Content Balancing Method for Item Selection in CAT

  • Peng Lu
  • Dongdai Zhou
  • Xiao Cong
  • Wei Wang
  • Da Xu
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6249)

Abstract

Compared with traditional testing, Computerized Adaptive Testing (CAT) offers considerable advantages, such as flexibility, reduced test length, and improved measurement accuracy. A CAT system consists of several components, of which the most important is the item selection algorithm. The most frequently adopted method selects items based on their maximum information (MI), with a view to obtaining the most accurate estimate of the examinee's ability. This method, however, suffers from unbalanced item exposure and uneven usage of the item pool. In this paper, we propose a new item selection algorithm, CBIS, to solve these problems, and compare it with the MI method in an experiment. The experimental results are promising.
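To make the MI baseline concrete, the sketch below selects the unadministered item with maximum Fisher information at the current ability estimate under the common two-parameter logistic (2PL) IRT model. This is a minimal illustration of the method the abstract refers to, not the paper's CBIS algorithm; the function names and the item-pool representation are assumptions made for the example.

```python
import numpy as np

def fisher_information(theta, a, b):
    # 2PL item information: I(theta) = a^2 * P(theta) * (1 - P(theta)),
    # where P(theta) is the probability of a correct response.
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def select_max_information(theta, pool, administered):
    # Pick the unadministered item whose information is largest at theta.
    # pool: list of (a, b) parameter pairs; administered: set of item indices.
    best, best_info = None, -np.inf
    for idx, (a, b) in enumerate(pool):
        if idx in administered:
            continue
        info = fisher_information(theta, a, b)
        if info > best_info:
            best, best_info = idx, info
    return best

# Hypothetical usage with a tiny pool of (a, b) item parameters.
pool = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7)]
print(select_max_information(theta=0.0, pool=pool, administered={0}))
```

Because this criterion repeatedly favors the most informative items, a small subset of the pool gets high exposure rates while the rest is rarely used, which is exactly the imbalance the abstract describes.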

Keywords

Adaptive Testing · IRT · Content Balancing · Exposure Rate

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Peng Lu (1, 3)
  • Dongdai Zhou (1, 2, 3, 4)
  • Xiao Cong (1, 3)
  • Wei Wang (1, 3)
  • Da Xu (1, 3)

  1. Ideal Institute of Information and Technology, Northeast Normal University, China
  2. School of Software, Northeast Normal University, China
  3. Engineering & Research Center of E-learning, China
  4. E-learning Laboratory of Jilin Province, Changchun, Jilin, China