Using Production to Assess Learning: An ILE That Fosters Self-Regulated Learning

  • Philippe Dessus
  • Benoît Lemaire
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2363)

Abstract

Current systems that aim to engage students in Self-Regulated Learning processes are often prompt-based and domain-dependent. Such metacognitive prompts are either difficult for novices to interpret or ignored by experts. Although domain-dependence per se is not a drawback, it often stems from a rigid structure that prevents the system from being moved to another domain. We detail Apex, a two-loop system in which texts are learned through summarization. In the first loop, called Reading, the student formulates a query and is presented with texts related to that query; the student then judges whether each text could be summarized. In the second loop, called Writing, the student writes a summary of the texts and receives an assessment from the system. To perform these comprehension-centered tasks automatically (i.e., retrieving texts that match queries and assessing summaries), Apex uses Latent Semantic Analysis (LSA), a technique devised for the semantic comparison of texts.
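
The abstract does not include an implementation; the sketch below only illustrates the kind of LSA comparison Apex relies on. It is not the authors' code: it uses scikit-learn's TF-IDF plus truncated-SVD pipeline as a modern stand-in for classical LSA, and the corpus, query, and summary are invented placeholders. A real deployment would build the semantic space from a large domain corpus with a few hundred dimensions.

    # Sketch of LSA-style semantic comparison (assumption: scikit-learn
    # stands in for classical LSA; all texts below are placeholders).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    texts = [
        "Self-regulated learners monitor and adjust their own comprehension.",
        "Latent Semantic Analysis represents words and texts as vectors.",
        "Summarizing a text means selecting and condensing its main ideas.",
    ]

    # Build a reduced semantic space from the corpus.
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(texts)
    lsa = TruncatedSVD(n_components=2)  # toy size; LSA typically uses 100-300 dims
    text_vecs = lsa.fit_transform(tfidf)

    # Reading loop: rank candidate texts by semantic closeness to the query.
    query = "how students regulate their learning"
    query_vec = lsa.transform(vectorizer.transform([query]))
    print(cosine_similarity(query_vec, text_vecs))  # one score per text

    # Writing loop: assess a student summary against the text it covers.
    summary = "Learners keep track of how well they understand."
    summary_vec = lsa.transform(vectorizer.transform([summary]))
    print(cosine_similarity(summary_vec, text_vecs[:1]))  # similarity to text 0

In both loops the decision reduces to a cosine between vectors in the reduced semantic space, which is the kind of comparison LSA was devised for.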

Keywords

Latent Semantic Analysis · Summarizable Text · Interactive Learning Environment · Semantic Comparison · Student Essay

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Philippe Dessus (1)
  • Benoît Lemaire (1)

  1. Laboratoire des Sciences de l'Éducation, Université Pierre-Mendès-France, Grenoble Cedex 9, France
