Leveraging C-Rater’s Automated Scoring Capability for Providing Instructional Feedback for Short Constructed Responses

  • Jana Sukkarieh
  • Eleanor Bolge
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5091)

Abstract

Progress in natural language processing (NLP) has enabled researchers to pursue automatic content assessment of free-text responses with some success. In particular, the concept-based scoring method implemented in c-rater, Educational Testing Service's (ETS) technology for content scoring of short free-text answers, enables c-rater to give instantaneous, individualized formative feedback without resorting to a fully dialog-based system or restricting itself to canned hints and corrective prompts.
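To make the idea of concept-based scoring concrete, the following is a minimal illustrative sketch: a response is checked against a rubric of required concepts, and each unmatched concept drives a targeted feedback message. The rubric, the pattern-matching, and the hint strings here are hypothetical simplifications; c-rater's actual matching relies on much deeper NLP analysis than substring lookup.

```python
# Hypothetical rubric: concept name -> (evidence patterns, feedback hint).
# Simple substring patterns stand in for c-rater's linguistic analysis.
RUBRIC = {
    "evaporation": (
        ["evaporat", "vapor"],
        "Mention what happens to the water as it is heated.",
    ),
    "condensation": (
        ["condens", "droplet"],
        "Explain how the vapor turns back into liquid.",
    ),
}

def score_and_feedback(answer: str):
    """Return (points, feedback hints) for a short constructed response.

    One point is awarded per concept found; each missing concept
    contributes an individualized formative hint.
    """
    text = answer.lower()
    points, hints = 0, []
    for concept, (patterns, hint) in RUBRIC.items():
        if any(p in text for p in patterns):
            points += 1
        else:
            hints.append(hint)
    return points, hints
```

Because feedback is tied to the specific concepts a student missed, the hints are individualized without requiring a dialog system: the same rubric that drives the score also drives the feedback.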



Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Jana Sukkarieh
    • 1
  • Eleanor Bolge
    • 1
  1. Educational Testing Service, Princeton, NJ, USA
