Opportunities for Natural Language Processing Research in Education

  • Jill Burstein
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5449)

Abstract

This paper discusses emerging opportunities for natural language processing (NLP) researchers in the development of educational applications for writing, reading and content knowledge acquisition. A brief historical perspective is provided, and existing and emerging technologies are described in the context of research related to content, syntax, and discourse analyses. Two systems, e-rater® and Text Adaptor, are discussed as illustrations of NLP-driven technology. The development of each system is described, as well as how continued development provides significant opportunities for NLP research.
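As a rough illustration of the kind of feature-based analysis that systems such as e-rater® perform, the sketch below extracts a few shallow essay features (word count, average sentence length, and type-token ratio) and combines them with weights into a score. This is a hypothetical, simplified example, not ETS's implementation: the feature names and weights here are invented for illustration, whereas a real system would use many more features (grammar, usage, discourse structure) and fit the weights by regression on human-scored essays.

```python
import re

def essay_features(text):
    """Extract a few shallow surface features of the kind used in
    feature-based automated essay scoring."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = len(words)
    # Average sentence length is a classic proxy for syntactic complexity.
    avg_sent_len = n_words / max(len(sentences), 1)
    # Type-token ratio is a crude proxy for vocabulary diversity.
    type_token_ratio = len(set(words)) / max(n_words, 1)
    return {
        "n_words": n_words,
        "avg_sent_len": avg_sent_len,
        "type_token_ratio": type_token_ratio,
    }

# Illustrative weights only (not trained); a deployed scorer would
# estimate these from human-scored training essays.
WEIGHTS = {"n_words": 0.01, "avg_sent_len": 0.1, "type_token_ratio": 2.0}

def score(text):
    """Combine features into a single score via a weighted sum."""
    feats = essay_features(text)
    return sum(WEIGHTS[name] * feats[name] for name in WEIGHTS)
```

The weighted-sum form mirrors the regression-based scoring models described in the automated essay scoring literature; the interesting research questions lie in designing linguistically meaningful features, not in the combination step.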

Keywords

Natural language processing · Automated essay scoring and evaluation · Text adaptation · English language learning · Educational technology



Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Jill Burstein
  1. Educational Testing Service, Princeton, USA