A Novel Semantic Similarity Based Technique for Computer Assisted Automatic Evaluation of Textual Answers

  • Udit Kr. Chakraborty
  • Samir Roy
  • Sankhayan Choudhury
Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 27)

Abstract

In this paper we propose a novel approach to the automatic evaluation of free-text answers. A question-answering module has been developed to evaluate the free-text responses provided by learners. Given a question Q, its model text-based answer MA, and the learner's answer SA, the module automatically scores SA against MA on the scale [0, 1]. The approach takes into consideration not only the important keywords but also the stop words and the positional expressions present in the learner's response, where a positional expression refers to the pre-expression and post-expression appearing immediately before and after a keyword. The results obtained with this approach are promising enough to justify further work in this direction.
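The abstract does not reproduce the scoring formula, but the general idea of combining keyword matching with pre-/post-expression matching can be illustrated with a short sketch. The following Python snippet is a hypothetical, minimal illustration only: the stop-word list, the credit weights (0.6 for the keyword, 0.2 each for the matching pre- and post-expression), and the helper names are assumptions made for demonstration and do not reproduce the authors' actual method.

```python
# Minimal sketch: score a learner answer (SA) against a model answer (MA)
# on [0, 1] using keywords plus their pre- and post-expressions.
# Weights and stop-word list are illustrative assumptions, not the paper's.

import re

STOP_WORDS = {"a", "an", "the", "is", "are", "of", "to", "in", "and", "on"}

def tokenize(text):
    """Lower-case the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def context(tokens, keyword):
    """Return the (pre-expression, post-expression) pair around the first
    occurrence of `keyword`, or (None, None) if the keyword is absent."""
    if keyword not in tokens:
        return None, None
    i = tokens.index(keyword)
    pre = tokens[i - 1] if i > 0 else ""
    post = tokens[i + 1] if i + 1 < len(tokens) else ""
    return pre, post

def score_answer(model_answer, student_answer):
    """Score the student answer against the model answer on [0, 1].

    Each distinct model-answer keyword contributes equally; a matched
    keyword earns partial credit, and matching pre-/post-expressions
    earn the remainder.
    """
    ma_tokens = tokenize(model_answer)
    sa_tokens = tokenize(student_answer)
    keywords = list(dict.fromkeys(t for t in ma_tokens if t not in STOP_WORDS))
    if not keywords:
        return 0.0

    total = 0.0
    for kw in keywords:
        sa_pre, sa_post = context(sa_tokens, kw)
        if sa_pre is None:          # keyword missing from the student answer
            continue
        ma_pre, ma_post = context(ma_tokens, kw)
        credit = 0.6                                   # keyword itself
        credit += 0.2 if sa_pre == ma_pre else 0.0     # pre-expression match
        credit += 0.2 if sa_post == ma_post else 0.0   # post-expression match
        total += credit
    return total / len(keywords)

if __name__ == "__main__":
    MA = "Photosynthesis converts light energy into chemical energy in plants."
    SA = "Plants use photosynthesis to convert light energy into chemical energy."
    print(round(score_answer(MA, SA), 3))
```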

Keywords

evaluation, learners’ response evaluation, keywords, pre-expression, post-expression

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Udit Kr. Chakraborty (1)
  • Samir Roy (2)
  • Sankhayan Choudhury (3)

  1. Department of Computer Science & Engineering, Sikkim Manipal Institute of Technology, Gangtok, India
  2. Department of Computer Science & Engineering, National Institute of Technical Teachers’ Training & Research, Kolkata, India
  3. Department of Computer Science & Engineering, University of Calcutta, Kolkata, India