Soft Computing in Intelligent Tutoring Systems and Educational Assessment

  • Rodney D. Nielsen
  • Wayne Ward
  • James H. Martin
Part of the Studies in Fuzziness and Soft Computing book series (STUDFUZZ, volume 230)

Abstract

The need for soft computing technologies to facilitate effective automated tutoring is pervasive – from machine learning techniques to predict content significance and generate appropriate questions, to interpretation of noisy spoken responses and statistical assessment of the response quality, through user modeling and determining how best to respond to the learner in order to optimize learning gains. This chapter focuses primarily on the domain-independent semantic analysis of learner responses, reviewing prior work in intelligent tutoring systems and educational assessment. We present a new framework for assessing the semantics of learner responses and the results of our initial implementation of a machine learning approach based on this framework.
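As a rough illustration of the kind of domain-independent response assessment discussed here, the sketch below compares a learner response to a reference answer using bag-of-words cosine similarity, a crude stand-in for latent-semantic or entailment-based assessment. It is a minimal sketch, not the authors' system; the function names, threshold, and example sentences are illustrative assumptions.

```python
# Minimal sketch (not the chapter's implementation): score a learner response
# against a reference answer with bag-of-words cosine similarity and a toy
# accept/reject threshold. All names and the example texts are hypothetical.
import math
import re
from collections import Counter


def _vectorize(text: str) -> Counter:
    """Lowercase, tokenize on word characters, and count term frequencies."""
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine of the angle between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def score_response(reference_answer: str, learner_response: str, threshold: float = 0.5) -> str:
    """Label a response 'understood' or 'needs follow-up' (toy decision rule)."""
    sim = cosine_similarity(_vectorize(reference_answer), _vectorize(learner_response))
    return "understood" if sim >= threshold else "needs follow-up"


if __name__ == "__main__":
    ref = "The circuit is closed, so current flows through the bulb and it lights."
    ans = "Electricity can flow because the circuit is complete, so the bulb lights up."
    print(score_response(ref, ans))
```

A lexical overlap score of this kind ignores negation, paraphrase, and partial understanding, which is precisely why the chapter argues for richer semantic analysis of learner responses.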

Keywords

Soft Computing · Latent Semantic Analysis · Intelligent Tutoring System · Text Fragment · Educational Assessment


Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Rodney D. Nielsen (1, 2, 3)
  • Wayne Ward (1, 2, 3)
  • James H. Martin (1, 2, 3, 4)

  1. Center for Spoken Language Research
  2. Institute of Cognitive Science
  3. Department of Computer Science
  4. Department of Linguistics, University of Colorado, Boulder