Automated Model of Comprehension V2.0


Part of the Lecture Notes in Computer Science book series (LNAI, volume 12749)

Abstract

Reading comprehension is key to knowledge acquisition and to reinforcing memory for previously encountered information. While reading, a mental representation is constructed in the reader’s mind. This mental model comprises the words in the text, the relations between the words, and inferences linking them to concepts in prior knowledge. The Automated Model of Comprehension (AMoC) simulates the construction of readers’ mental representations of text by building syntactic and semantic relations between words, coupled with inferences of related concepts that rely on various automated semantic models. This paper introduces the second version of AMoC, which builds upon the initial model with a revised processing pipeline in Python leveraging state-of-the-art NLP models, additional heuristics for improved representations, and a new radiant graph visualization of the comprehension model.
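To make the abstract's idea concrete, the sketch below mimics, in plain Python, the kind of structure AMoC builds: words become nodes, syntactic/semantic relations become weighted edges, and activation spreading from the text's words to neighboring concepts stands in for inference of related prior-knowledge concepts. This is an illustrative toy, not the AMoC implementation; the class name, edge weights, decay factor, and threshold are all invented for demonstration.

```python
from collections import defaultdict

class ComprehensionGraph:
    """Toy word graph: nodes are words, weighted edges are relations."""

    def __init__(self):
        self.edges = defaultdict(dict)  # word -> {neighbor: weight}

    def add_link(self, a, b, weight):
        # Links are undirected here for simplicity.
        self.edges[a][b] = weight
        self.edges[b][a] = weight

    def spread_activation(self, text_words, decay=0.5, threshold=0.2):
        # Words present in the text start fully active; neighbors receive
        # decayed activation, and only sufficiently strong links fire,
        # simulating which related concepts get inferred.
        activation = {w: 1.0 for w in text_words}
        for word in text_words:
            for neighbor, weight in self.edges[word].items():
                boost = decay * weight
                if boost >= threshold:
                    activation[neighbor] = max(activation.get(neighbor, 0.0), boost)
        return activation

graph = ComprehensionGraph()
graph.add_link("dog", "barked", 0.9)   # syntactic: subject-verb relation
graph.add_link("dog", "animal", 0.8)   # semantic: related concept
graph.add_link("dog", "kennel", 0.3)   # weak link: too faint to fire here

active = graph.spread_activation(["dog", "barked"])
print(sorted(active))  # "animal" is inferred; "kennel" is not
```

In AMoC itself, the relations come from dependency parsing and semantic models rather than hand-coded weights, but the resulting representation is the same shape: a graph over text words enriched with inferred concepts.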

Keywords

  • Comprehension model
  • Natural language processing
  • Semantic links
  • Lexical dependencies
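The "semantic links" keyword refers to relations derived from distributional semantic models (word2vec- or LSA-style embeddings), where two words are linked when their vectors are sufficiently similar under cosine similarity. A minimal stdlib sketch of that decision rule follows; the 3-dimensional vectors and the 0.7 cutoff are made-up values for demonstration only.

```python
import math

def cosine(u, v):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy "embeddings"; real models produce hundreds of dimensions.
embeddings = {
    "dog": [0.9, 0.1, 0.3],
    "puppy": [0.8, 0.2, 0.35],
    "carburetor": [0.05, 0.9, 0.1],
}

def semantically_linked(w1, w2, cutoff=0.7):
    # Link two words when their embedding similarity clears the cutoff.
    return cosine(embeddings[w1], embeddings[w2]) >= cutoff

print(semantically_linked("dog", "puppy"))       # similar vectors: linked
print(semantically_linked("dog", "carburetor"))  # dissimilar: not linked
```

Lexical dependencies, by contrast, come from a syntactic parser rather than a similarity threshold; the two relation types are combined in the comprehension graph.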



Acknowledgments

This research was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS – UEFISCDI, project number TE 70 PN-III-P1-1.1-TE-2019-2209, ATES – “Automated Text Evaluation and Simplification”, the Institute of Education Sciences (R305A180144 and R305A180261), and the Office of Naval Research (N00014-17-1-2300; N00014-20-1-2623). The opinions expressed are those of the authors and do not represent views of the IES or ONR.

Author information


Corresponding author

Correspondence to Mihai Dascalu.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Corlatescu, D.G., Dascalu, M., McNamara, D.S. (2021). Automated Model of Comprehension V2.0. In: Roll, I., McNamara, D., Sosnovsky, S., Luckin, R., Dimitrova, V. (eds.) Artificial Intelligence in Education. AIED 2021. Lecture Notes in Computer Science, vol. 12749. Springer, Cham. https://doi.org/10.1007/978-3-030-78270-2_21


  • DOI: https://doi.org/10.1007/978-3-030-78270-2_21


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-78269-6

  • Online ISBN: 978-3-030-78270-2

  • eBook Packages: Computer Science, Computer Science (R0)