Abstract
This article presents an original model of computer adaptive testing (CAT) and grade formation, based on scientifically recognized theories. The core of the model is a personalized algorithm that selects each question according to the accuracy of the answer to the previous one. The test is divided into three basic difficulty levels, and the student moves automatically from one level to another according to the level of knowledge currently demonstrated. This gives the student the impression that the test was set up specifically for his or her level of knowledge. The final grade is formed from the responses by applying Bayes’ theorem and the maximum a posteriori (MAP) approach: based on empirical probabilities that correlate the accuracy of the answer to each individual question with the attainment of a particular final grade, the model produces a score corresponding to the student’s current level of knowledge. After each answer, the empirical probability values are updated, which further contributes to the statistical stability of the evaluation model. Testing stops when the student has answered a minimum number of questions, set by the teacher, or when the evaluations show clear convergence towards a single value. The research method, some results of hypothesis testing, and the authors’ conclusions about CAT as a tool for student evaluation are presented at the end of the article.
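To make the described loop concrete, the following is a minimal sketch of an adaptive test of this kind. The grade scale, the likelihood table, the one-level-up/one-level-down transition rule, and the convergence threshold are illustrative assumptions for exposition, not the authors’ exact parameters.

```python
# Hypothetical sketch of the adaptive testing loop described in the abstract.
# All numeric values below are assumed for illustration only.
import random

GRADES = [6, 7, 8, 9, 10]            # candidate final grades (assumed scale)
LEVELS = ["easy", "medium", "hard"]  # the three basic difficulty levels

# Assumed empirical P(correct answer at this level | final grade).
LIKELIHOOD = {
    "easy":   {6: 0.70, 7: 0.80, 8: 0.90, 9: 0.95, 10: 0.98},
    "medium": {6: 0.40, 7: 0.55, 8: 0.70, 9: 0.85, 10: 0.95},
    "hard":   {6: 0.15, 7: 0.25, 8: 0.45, 9: 0.65, 10: 0.85},
}

def map_grade(posterior):
    """Maximum a posteriori estimate: the grade with the largest posterior mass."""
    return max(posterior, key=posterior.get)

def run_test(ask, min_questions=10, max_questions=50, eps=0.95):
    """ask(level) -> True/False; returns the MAP grade when the test stops."""
    posterior = {g: 1.0 / len(GRADES) for g in GRADES}  # uniform prior
    level = 1                                           # start at "medium"
    answered = 0
    while True:
        difficulty = LEVELS[level]
        correct = ask(difficulty)
        answered += 1
        # Bayes' theorem: weight each grade hypothesis by the likelihood
        # of the observed answer, then renormalize.
        for g in GRADES:
            p = LIKELIHOOD[difficulty][g]
            posterior[g] *= p if correct else (1.0 - p)
        total = sum(posterior.values())
        posterior = {g: v / total for g, v in posterior.items()}
        # Personalized selection: one level up after a correct answer,
        # one level down after an incorrect one (assumed transition rule).
        level = min(level + 1, 2) if correct else max(level - 1, 0)
        # Stop once the teacher-set minimum is reached and the posterior
        # has clearly converged, or at a hard cap on test length.
        if answered >= max_questions or (
            answered >= min_questions and max(posterior.values()) >= eps
        ):
            return map_grade(posterior)

# Example: simulate a student who answers 70 % of questions correctly.
if __name__ == "__main__":
    random.seed(1)
    print(run_test(lambda lvl: random.random() < 0.7))
```

The posterior update is the heart of the model: each answer multiplies the current grade probabilities by the empirical likelihood of that answer, so the distribution concentrates on one grade as evidence accumulates, which is exactly the convergence criterion used to stop the test.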
Notes
- 1.
One logit is equivalent to a difficulty coefficient of 0.73, i.e., a 73 % probability of a correct answer.
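This conversion follows from the logistic function of standard item response theory (notation assumed): with ability \(\theta\) and item difficulty \(b\),

\[
P(\text{correct}) = \frac{e^{\theta - b}}{1 + e^{\theta - b}},
\qquad
\theta - b = 1 \text{ logit} \;\Rightarrow\; P = \frac{e}{1 + e} \approx 0.73 .
\]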
Cite this article
Andjelic, S., Cekerevac, Z. CAT model with personalized algorithm for evaluation of estimated student knowledge. Educ Inf Technol 19, 173–191 (2014). https://doi.org/10.1007/s10639-012-9208-x
Key words
- Estimation of knowledge
- CAT
- Criteria function
- Bayes’ theorem
- MAP approach