Two-Phase Item Assignment in Adaptive Testing Using Norm Referencing and Bayesian Classification
Due to advances in information technology and increasingly varied learner groups, e-learning has become popular, and computer-based assessment has consequently become a prevalent method of administering tests. Randomizing test items, however, can affect test takers unfairly and undermine the outcome of the test. There is therefore a need for an Intelligent Tutoring System that assigns items intelligently depending on the student's responses during the testing session. Testing is more productive when questions are matched to the learner's ability from the earliest stage. Moreover, if only standard multiple-choice questions are used, the richer possibilities of computer-based assessment are sacrificed; items with different constrained constructs are therefore included to elicit the complex, analytical, and comprehensive skills of learners. This study focuses on building a framework that automatically assigns items with different constructs based on the learner's ability at entry. Using norm referencing, questions are classified by item difficulty. Item discrimination is then computed, and only items that discriminate between performers are retained in the item pool, so that the tutoring system's intelligence has maximum effect. The level of a new learner is predicted by means of Naïve Bayesian classification, and the corresponding item is posed. The objective of an Intelligent Tutoring System is thereby achieved by combining adaptivity and intelligence in testing.
Keywords: Intelligent Item Classification · Adaptivity in ITS · Norm Referencing in ITS
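The two phases described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: phase 1 computes the classical item difficulty index (proportion correct) and the upper/lower-group discrimination index, keeping only discriminating items in the pool; phase 2 predicts a new learner's level with a categorical Naïve Bayes classifier. The 27% group split, the 0.3 discrimination threshold, and the feature set used for classification are all illustrative assumptions.

```python
import math
from collections import Counter, defaultdict

def item_statistics(responses):
    """responses: one row per learner, one 0/1 score per item.
    Returns (difficulty, discrimination) lists, one value per item."""
    n_items = len(responses[0])
    # Rank learners by total score; take the top and bottom 27% groups
    # (a common convention in classical item analysis).
    ranked = sorted(responses, key=sum, reverse=True)
    k = max(1, round(0.27 * len(ranked)))
    upper, lower = ranked[:k], ranked[-k:]
    difficulty, discrimination = [], []
    for i in range(n_items):
        p = sum(r[i] for r in responses) / len(responses)  # proportion correct
        p_upper = sum(r[i] for r in upper) / k
        p_lower = sum(r[i] for r in lower) / k
        difficulty.append(p)
        discrimination.append(p_upper - p_lower)           # D index
    return difficulty, discrimination

def build_item_pool(responses, min_discrimination=0.3):
    """Phase 1: keep only items that separate high and low performers."""
    _, discrimination = item_statistics(responses)
    return [i for i, d in enumerate(discrimination) if d >= min_discrimination]

class NaiveBayes:
    """Phase 2: categorical Naive Bayes with Laplace smoothing, used to
    predict a new learner's level from categorical entry features."""

    def fit(self, X, y):
        self.classes = Counter(y)
        self.n = len(y)
        self.counts = defaultdict(lambda: defaultdict(Counter))
        self.values = defaultdict(set)  # distinct values seen per feature
        for features, label in zip(X, y):
            for j, v in enumerate(features):
                self.counts[label][j][v] += 1
                self.values[j].add(v)
        return self

    def predict(self, features):
        best, best_lp = None, -math.inf
        for c, n_c in self.classes.items():
            lp = math.log(n_c / self.n)  # class prior
            for j, v in enumerate(features):
                # Laplace-smoothed conditional probability P(v | c)
                lp += math.log((self.counts[c][j][v] + 1)
                               / (n_c + len(self.values[j])))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```

For example, an item answered correctly by both strong and weak learners gets a D index near zero and is excluded from the pool; the classifier then poses the first item at the difficulty level predicted for the entering learner.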