
Predicting Comprehension from Students’ Summaries

  • Mihai Dascalu
  • Larise Lucia Stavarache
  • Philippe Dessus
  • Stefan Trausan-Matu
  • Danielle S. McNamara
  • Maryse Bianco
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9112)

Abstract

Comprehension is a key component of young students' growth throughout the learning process. Moreover, scaffolding students as they learn to coherently link information while organically constructing a solid knowledge base is crucial to their development, yet it requires regular assessment and progress tracking. To this end, our aim is to provide an automated solution for analyzing and predicting students' comprehension levels by extracting a combination of reading strategies and textual complexity factors from students' summaries. Building upon previous research and enhancing it with new heuristics and factors, we used Support Vector Machine classification models to validate our assumption that automatically identified reading strategies, combined with textual complexity indices computed on students' summaries, are reliable estimators of comprehension.
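As a rough illustration of the classification step described in the abstract (not the authors' actual ReaderBench pipeline), the Python sketch below trains an SVM on per-summary features; the feature layout, placeholder data, and hyper-parameter grid are assumptions made for illustration only.

```python
# Minimal sketch (assumed setup, not the paper's implementation): predict a
# comprehension class for each student summary from already-extracted features
# such as reading-strategy counts and textual complexity indices.
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature matrix: one row per summary, columns such as paraphrase
# count, bridging/causal inference count, average sentence length, word entropy.
X = np.random.rand(120, 8)             # placeholder feature values
y = np.random.randint(0, 3, size=120)  # placeholder comprehension classes (low/mid/high)

# Scale features, then fit an RBF-kernel SVM; C and gamma are tuned by grid search.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.01, 0.1, 1]}
search = GridSearchCV(model, param_grid, cv=5)
search.fit(X, y)

print("best params:", search.best_params_)
print("cross-validated accuracy:", cross_val_score(search.best_estimator_, X, y, cv=5).mean())
```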

Keywords

Reading strategies · Textual complexity · Summaries assessment · Comprehension prediction · Support vector machines

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Mihai Dascalu (1)
  • Larise Lucia Stavarache (1)
  • Philippe Dessus (2)
  • Stefan Trausan-Matu (1)
  • Danielle S. McNamara (3)
  • Maryse Bianco (2)

  1. Computer Science Department, University Politehnica of Bucharest, Bucharest, Romania
  2. LSE, University Grenoble Alpes, Grenoble, France
  3. LSI, Arizona State University, Tempe, USA