Clustering Students to Generate an Ensemble to Improve Standard Test Score Predictions
In typical assessments, students are not given feedback, since it is harder to predict student knowledge if that knowledge changes during testing. Intelligent Tutoring Systems, which offer assistance while the student works, provide a clear benefit to students, but how well can they assess them? What is the trade-off in assessment accuracy if we allow students to be assisted on an exam? In a prior study, we showed that assessment quality with assistance was equal to that without. In this work, we introduce a more sophisticated method that ensembles multiple models based on clustering students. We show that, in fact, the assessment quality determined from the assistance data is a better estimator of student knowledge. These results suggest that by using computer tutors for assessment, we can save much of the instructional time that is currently used solely for assessment.
Keywords: Clustering · Ensemble Learning · Intelligent Tutoring Systems · Regression · Dynamic Assessment · Educational Data Mining
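The idea of clustering students and ensembling cluster-specific predictors, as described in the abstract, can be sketched as follows. This is a minimal illustration, not the authors' actual method: the features, the choice of K-means with linear regression, and the equal-weight averaging of the cluster-specific and global models are all assumptions made for the example, and the data is synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Synthetic stand-in data: rows are students, columns are tutor-derived
# features (e.g., per-skill correctness rates, hint usage); y is a test score.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=200)

# 1. Cluster the students into K groups.
K = 3
km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(X)

# 2. Fit one regression model per cluster, plus one global model.
global_model = LinearRegression().fit(X, y)
cluster_models = [
    LinearRegression().fit(X[km.labels_ == k], y[km.labels_ == k])
    for k in range(K)
]

# 3. Ensemble: average each student's cluster-specific prediction
#    with the global prediction (equal weights are an assumption here).
def predict(X_new):
    labels = km.predict(X_new)
    per_cluster = np.array([
        cluster_models[k].predict(X_new[i:i + 1])[0]
        for i, k in enumerate(labels)
    ])
    return 0.5 * (per_cluster + global_model.predict(X_new))

preds = predict(X[:10])
print(preds.shape)
```

In practice the cluster count and the blending weights would be chosen by cross-validation on held-out students, since a poorly chosen K can leave clusters too small to fit reliable per-cluster models.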