Better Student Assessing by Finding Difficulty Factors in a Fully Automated Comprehension Measure

  • Brooke Soden Hensler
  • Joseph Beck
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4053)

Abstract

The multiple choice cloze (MCC) question format is commonly used to assess students’ comprehension. It is an especially useful format for intelligent tutoring systems (ITS) because it is fully automatable and can be applied to any text. Unfortunately, very little is known about the factors that influence MCC question difficulty and student performance on such questions. To better understand student performance on MCC questions, we developed a model of MCC questions. Our model shows that the difficulty of the answer and the student’s response time are the most important predictors of student performance. In addition to showing the relative impact of the terms in our model, the model provides evidence of a developmental trend in syntactic awareness beginning around the 2nd grade. Our model also accounts for 10% more variance in students’ external test scores than the standard scoring method for MCC questions.
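
The abstract does not specify the model’s functional form. As a minimal, purely illustrative sketch (not the authors’ method), assuming a logistic regression that predicts whether a response is correct from the two predictors the abstract highlights, answer difficulty and response time, it might look like the following; all feature names and data below are hypothetical:

```python
# Minimal illustrative sketch, NOT the authors' actual model: a logistic
# regression predicting whether a student answers an MCC question
# correctly from answer difficulty and response time. All feature names,
# coefficients, and data here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Hypothetical per-response features:
#   answer_difficulty -- e.g. a frequency-based difficulty estimate of
#                        the correct answer, scaled to [0, 1]
#   response_time     -- seconds the student took to respond
answer_difficulty = rng.uniform(0.0, 1.0, n)
response_time = rng.uniform(1.0, 30.0, n)

# Synthetic labels: harder answers and very fast (possibly disengaged)
# responses are made less likely to be correct.
logit = 2.0 - 3.0 * answer_difficulty + 0.5 * np.log(response_time)
correct = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Fit the model; log response time is used as the time feature.
X = np.column_stack([answer_difficulty, np.log(response_time)])
model = LogisticRegression().fit(X, correct)

print("coefficients (relative impact of each term):", model.coef_[0])
print("P(correct) for first 3 responses:", model.predict_proba(X[:3])[:, 1])
```

A probability-weighted score of this kind, rather than a simple right/wrong count, is one way such a model could explain more variance in external test scores than standard MCC scoring.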

Keywords

Target Word, Reading Comprehension, Intelligent Tutoring System, Proficiency Reader, Student Identity

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Brooke Soden Hensler, Robotics Institute, Carnegie Mellon University, Pittsburgh, USA
  • Joseph Beck, Machine Learning Department, Carnegie Mellon University, Pittsburgh, USA
