Automatic assessment of descriptive answers in online examination system using semantic relational features

Article

Abstract

Technological advances have reduced manual effort in many areas. Rapid developments in the education industry have created rich learning environments, offering qualifications and credit from the desktop through online courses and assessment. Prevailing examination systems, however, have limitations in terms of volume, staffing, and variation in assessment strategies. At present, only objective-type questions can be administered and assessed through online examinations. Researchers strive to build systems for evaluating descriptive answers, a task that is challenging and has not yet reached complete automation. The challenge lies in recognizing natural-language answers and extracting their precise meaning so as to appropriately evaluate the knowledge the student has acquired. The proposed method comprises stages of question classification, answer classification, and answer evaluation, and grades each student answer with an appropriate score. A syntactic relation-based feature extraction technique is proposed for the automatic evaluation of descriptive-type answers. The system also adopts a cognitive approach in which student answers are judged for correctness based on the phrases used to answer the questions. A score and feedback are provided to make the student aware of their level of understanding of the subject. The experimental analysis shows 0.85% higher precision and recall when compared to earlier systems.
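To make the grading idea concrete, the following is a minimal sketch of such a pipeline in Python. It is illustrative only and not the authors' implementation: the function names (`grade_answer`, `key_phrase_coverage`), the 0.6/0.4 score weighting, and the use of plain bag-of-words cosine similarity in place of the paper's syntactic relational features are assumptions introduced for this example.

```python
import math
import re
from collections import Counter


def tokenize(text):
    """Lowercase the text and split it on non-alphanumeric characters."""
    return [t for t in re.split(r"\W+", text.lower()) if t]


def cosine_similarity(tokens_a, tokens_b):
    """Cosine similarity between two bag-of-words term-frequency vectors."""
    va, vb = Counter(tokens_a), Counter(tokens_b)
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0


def key_phrase_coverage(student_answer, key_phrases):
    """Fraction of the expected key phrases that occur in the student answer.

    Stands in for the paper's cognitive, phrase-based judgment of correctness.
    """
    text = student_answer.lower()
    hits = sum(1 for p in key_phrases if p.lower() in text)
    return hits / len(key_phrases) if key_phrases else 0.0


def grade_answer(student_answer, model_answer, key_phrases,
                 w_sim=0.6, w_phrase=0.4):
    """Combine lexical similarity and phrase coverage into a 0-10 score.

    The weights are arbitrary placeholders, not values from the paper.
    """
    sim = cosine_similarity(tokenize(student_answer), tokenize(model_answer))
    cov = key_phrase_coverage(student_answer, key_phrases)
    return round(10 * (w_sim * sim + w_phrase * cov), 2)


if __name__ == "__main__":
    model = ("Photosynthesis converts light energy into chemical energy "
             "stored in glucose.")
    student = ("Plants use light energy and convert it into chemical "
               "energy as glucose.")
    phrases = ["light energy", "chemical energy", "glucose"]
    print(grade_answer(student, model, phrases))  # prints a score such as 7.7
```

A system along the lines the paper describes would replace the bag-of-words similarity with features derived from syntactic relations (for example, dependency parses of the model and student answers) and would first classify the question type to select the appropriate evaluation strategy.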

Keywords

Question classification · Descriptive answers · Answer classification · Automatic assessment


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Sona College of Technology, Salem, India
  2. Anna University, Chennai, India
