Considering Misconceptions in Automatic Essay Scoring with A-TEST - Amrita Test Evaluation and Scoring Tool

  • Prema Nedungadi
  • Jyothi L
  • Raghu Raman
Conference paper
Part of the Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering book series (LNICST, volume 135)


In large classrooms with limited teacher time, there is a need for automatic evaluation of text answers and real-time personalized feedback during the learning process. In this paper, we discuss the Amrita Test Evaluation & Scoring Tool (A-TEST), a text evaluation and scoring tool that learns from course materials, from human-rater-scored text answers, and directly from teacher input. We use latent semantic analysis (LSA) to identify the key concepts. While most AES systems use LSA to compare students’ responses with a set of ideal essays, this ignores the common misconceptions that students may have about a topic. A-TEST also uses LSA to learn misconceptions from the lowest-scoring essays and uses these as an additional factor for scoring. A-TEST was evaluated using two datasets of 1,400 and 1,800 pre-scored text answers, each manually scored by two teachers. The scoring accuracy and kappa scores between the derived A-TEST model and the human raters were comparable to those between the human raters themselves.
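The approach outlined above, projecting answers into a latent semantic space and comparing a student response against both high-scoring references and low-scoring (misconception-bearing) answers, can be sketched as follows. This is a minimal illustration, not A-TEST itself: the toy corpus, the plain count matrix, and the rank-2 truncation are all assumptions made for the example.

```python
import numpy as np

# Toy corpus: high-scoring reference answers, low-scoring answers that
# encode a common misconception, and one student response to score.
high = ["photosynthesis converts light energy into chemical energy",
        "plants use light energy to make glucose from carbon dioxide and water"]
low = ["plants eat soil to grow and get energy from the soil",
       "plants get their food energy directly from the soil"]
student = "plants absorb soil to obtain their energy"
docs = high + low + [student]

# Build a simple term-document count matrix.
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}
A = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        A[index[w], j] += 1

# LSA: a truncated SVD projects each document into a k-dimensional
# latent concept space.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one row per document

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

sv = doc_vecs[-1]  # the student response
sim_high = np.mean([cos(sv, doc_vecs[i]) for i in range(len(high))])
sim_low = np.mean([cos(sv, doc_vecs[len(high) + i]) for i in range(len(low))])

# A response closer to the low-scoring cluster than to the reference
# cluster suggests the misconception and can lower the assigned score.
flag_misconception = sim_low > sim_high
```

Here the misconception cluster acts as a negative anchor: similarity to it reduces the score rather than raising it, which is the key difference from comparing against ideal essays alone.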


Feature extraction · Essay scoring · Text analysis · Text mining · Latent semantic analysis (LSA) · SVD · Natural language processing (NLP) · AES



This work derives direction and inspiration from the Chancellor of Amrita University, Sri Mata Amritanandamayi Devi. We thank Dr. Ramachandra Kaimal for his valuable feedback.



Copyright information

© Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2014

Authors and Affiliations

  1. Amrita CREATE, Amrita University, Vallikavu, Kollam, India
  2. Department of Computer Science, Amrita University, Vallikavu, Kollam, India
