Abstract
Teachers may wish to use open-ended learning activities and tests, but these are burdensome to assess compared to forced-choice instruments. At the same time, forced-choice assessments suffer from guessing (when used as tests) and may not encourage the valuable behaviors of constructing and generating understanding (when used as learning activities). Previous work demonstrates that automated scoring of constructed responses such as summaries and essays using latent semantic analysis (LSA) can successfully predict human scoring. The goal of this study was to test whether LSA can generate predictive indices when students are learning from social science texts that describe theories and provide evidence for them. The corpus consisted of written responses generated while reading textbook excerpts about a psychological theory. Automated scoring indices based on response length, the lexical diversity of the response, the LSA match of the response to the original text, and the LSA match to an idealized peer were all predictive of human scoring. In addition, student understanding (as measured by a posttest) was uniquely predicted by the LSA match to an idealized peer.
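The four automated indices named in the abstract can be sketched as follows. This is a minimal illustration, not the study's pipeline: the toy corpus, the raw term counts, and the k = 2 latent space are all assumptions made for the sketch, and the actual study would have used an LSA space trained on a much larger corpus.

```python
import numpy as np

def lsa_space(docs, k=2):
    """Build a term-document count matrix and reduce it with truncated SVD."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    idx = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((len(idx), len(docs)))
    for j, d in enumerate(docs):
        for w in d.lower().split():
            A[idx[w], j] += 1.0
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    return idx, U[:, :k], s[:k]

def fold_in(text, idx, Uk, sk):
    """Project a new text into the latent space (standard LSA fold-in)."""
    v = np.zeros(len(idx))
    for w in text.lower().split():
        if w in idx:
            v[idx[w]] += 1.0
    return (Uk.T @ v) / sk

def lsa_match(a, b, idx, Uk, sk):
    """LSA match index: cosine similarity of two texts in the latent space."""
    x, y = fold_in(a, idx, Uk, sk), fold_in(b, idx, Uk, sk)
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def response_length(text):
    """Length index: number of word tokens."""
    return len(text.split())

def lexical_diversity(text):
    """Lexical diversity as a type-token ratio: unique words over total words."""
    toks = text.lower().split()
    return len(set(toks)) / len(toks)

# Hypothetical stand-ins for the source text and an "idealized peer" response.
source = "encoding and retrieval processes shape memory for studied material"
peer = "retrieval practice strengthens memory because encoding is elaborated"
docs = [source, peer, "memory improves when retrieval is practiced"]
idx, Uk, sk = lsa_space(docs, k=2)

student = "memory gets better with retrieval practice during encoding"
match_to_peer = lsa_match(student, peer, idx, Uk, sk)
```

The same `lsa_match` call against `source` instead of `peer` gives the match-to-original-text index; the idealized-peer variant simply swaps in a model response as the comparison target.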
Acknowledgements
This research was supported by grants from the Institute of Education Sciences (R305A160008) and the National Science Foundation (GRFP to the first author). The authors thank Grace Li for her support in scoring the student responses, and Thomas D. Griffin and Marta K. Mielicki for their contributions as part of the larger project from which these data were derived.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Guerrero, T.A., Wiley, J. (2019). Using “Idealized Peers” for Automated Evaluation of Student Understanding in an Introductory Psychology Course. In: Isotani, S., Millán, E., Ogan, A., Hastings, P., McLaren, B., Luckin, R. (eds) Artificial Intelligence in Education. AIED 2019. Lecture Notes in Computer Science(), vol 11625. Springer, Cham. https://doi.org/10.1007/978-3-030-23204-7_12
Print ISBN: 978-3-030-23203-0
Online ISBN: 978-3-030-23204-7