A Machine Learning Prediction of Automatic Text Based Assessment for Open and Distance Learning: A Review

Conference paper in: Innovations in Bio-Inspired Computing and Applications (IBICA 2019)

Abstract

In this systematic literature review, we investigate automatic grading systems for text-based and essay-type assessments that use Machine Learning and Natural Language Processing (NLP) techniques. The major focus is on text-based and essay-type assessment in Open and Distance Learning (ODL) courses. Compared with objective question types such as single choice questions (SCQ), multiple choice questions (MCQ) and true/false questions, text-based and essay-type questions are an important tool for quality examination and assessment: they help students gain mastery over a task, widen their horizon of knowledge, and better support the learner's development and learning. An automatic grading system for text-based and essay-type assessments can therefore serve as an important tool in ODL institutions, where assessments and examinations can be evaluated quickly and easily to provide efficient feedback. We carried out this study using quality, inclusion and exclusion criteria, selecting only studies that focus on NLP and Machine Learning techniques for the automatic grading of text-based and essay-type assessments. Searches were performed in the ACM Digital Library, Semantic Scholar, Scopus, IEEE Xplore, Google Scholar, Microsoft Academic, Learn Tech Library and Springer to retrieve relevant literature in this research domain. Conference papers, journal articles and other articles published between 2011 and 2019 were considered. Of the 1260 articles that met our search criteria, this study found 34 published articles describing automatic grading of text-based and essay-type assessments and examinations.
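To make the grading task concrete, the sketch below (not taken from any of the reviewed papers) shows the simplest form of automatic short-answer grading: scoring a student's free-text response by its TF-IDF cosine similarity to a model answer and scaling that similarity to the marking range. It assumes Python with scikit-learn installed; the systems surveyed here typically go further, adding semantic and syntactic features and training supervised models on human-graded answers.

# Minimal illustrative sketch, assuming Python 3 and scikit-learn.
# It is not the method of any specific reviewed system; it only shows the
# similarity-based core that many text-based grading approaches build on.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def grade_answer(model_answer: str, student_answer: str, max_marks: float = 5.0) -> float:
    """Scale the TF-IDF cosine similarity between the two texts to the marking range."""
    vectorizer = TfidfVectorizer(stop_words="english")
    vectors = vectorizer.fit_transform([model_answer, student_answer])
    similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
    return round(float(similarity) * max_marks, 2)


if __name__ == "__main__":
    reference = ("Open and distance learning delivers instruction to students "
                 "who are separated from the instructor in place or time.")
    response = ("ODL provides teaching to learners who are not in the same "
                "place or at the same time as their teacher.")
    # Prints a partial mark; the overlap is purely lexical, so paraphrases score low.
    print(grade_answer(reference, response))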

Author information


Corresponding author

Correspondence to Sanjay Misra.


Copyright information

© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Blessing, G., Azeta, A., Misra, S., Chigozie, F., Ahuja, R. (2021). A Machine Learning Prediction of Automatic Text Based Assessment for Open and Distance Learning: A Review. In: Abraham, A., Panda, M., Pradhan, S., Garcia-Hernandez, L., Ma, K. (eds) Innovations in Bio-Inspired Computing and Applications. IBICA 2019. Advances in Intelligent Systems and Computing, vol 1180. Springer, Cham. https://doi.org/10.1007/978-3-030-49339-4_38
