Unsupervised Word Sense Disambiguation for Automatic Essay Scoring

Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 27)


The reliability of automated essay scoring (AES) has been the subject of debate among educators. Most systems treat an essay as a bag of words and evaluate it using LSA, LDA, or similar techniques; many also incorporate surface features such as the number of spelling mistakes or the word count. A key challenge in improving reliability is correctly understanding the semantics of the essay being evaluated, so that the intended meaning of each term can be distinguished from its other senses in the context of a sentence. We incorporate an unsupervised word sense disambiguation (WSD) algorithm, which measures similarity between sentences, as a preprocessing step in our existing AES system. We evaluate the enhanced AES model on the Kaggle AES dataset of 1,400 pre-scored text answers, each scored manually by two human raters. Based on kappa scores, both models achieved weighted kappa values comparable to the agreement between the human raters, but the model with WSD outperformed the model without it.
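The flavor of similarity-based unsupervised WSD described above can be illustrated with a minimal sketch in the spirit of the simplified Lesk algorithm: each candidate sense is scored by the word overlap between its gloss and the surrounding sentence. The tiny sense inventory here is hypothetical, not the paper's actual lexical resource, and real systems would use WordNet glosses and richer similarity measures.

```python
# Minimal sketch of similarity-based WSD (simplified Lesk style).
# The sense inventory below is a hypothetical stand-in for a real
# lexical resource such as WordNet.

SENSES = {
    "bank": {
        "bank#1": "sloping land beside a body of water river",
        "bank#2": "financial institution that accepts deposits money",
    },
}

def overlap(a: str, b: str) -> int:
    """Count distinct words shared between two strings."""
    return len(set(a.split()) & set(b.split()))

def disambiguate(word: str, sentence: str) -> str:
    """Pick the sense whose gloss overlaps most with the sentence context."""
    glosses = SENSES[word]
    return max(glosses, key=lambda s: overlap(glosses[s], sentence))

print(disambiguate("bank", "she sat on the bank of the river"))  # -> bank#1
print(disambiguate("bank", "he deposits money at the bank"))     # -> bank#2
```

In an AES pipeline, the chosen sense labels (rather than raw surface forms) would feed the downstream LSA/LDA representation, so that essays using a word in different senses are no longer conflated.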


Latent Semantic Analysis (LSA) · SVD · AES · Word Sense Disambiguation





Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Amrita Vishwa Vidyapeetham, Coimbatore, India
