Semiautomatic Grading of Short Texts for Open Answers in Higher Education

  • Conference paper
  • In: Higher Education Learning Methodologies and Technologies Online (HELMeTO 2021)

Abstract

Grading student activities in online courses is a time-consuming task, especially in courses with a large number of students. To avoid a bottleneck in the continuous evaluation process, quizzes with multiple-choice questions are frequently used. However, such quizzes fail to provide formative feedback to the student. This work presents PLeNTaS, a system for the automatic grading of short open-domain answers that reduces the time required for grading and offers formative feedback to students. It is based on the analysis of the text at three levels: orthography, syntax, and semantics. The validation of the system will consider the correlation of the assigned grade with the human grade, the usefulness of the automatically generated feedback, and the pedagogical impact of using the system in the course.
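
As a concrete illustration of the three-level analysis described above, the following Python sketch shows one way such a pipeline could be wired together. Every name, weight, and heuristic in it (WEIGHTS, score_orthography, the Jaccard overlap standing in for semantic similarity) is an illustrative assumption, not the actual PLeNTaS implementation.

```python
import re

# Assumed weights for combining the three levels; purely illustrative.
WEIGHTS = {"orthography": 0.2, "syntax": 0.3, "semantics": 0.5}


def score_orthography(answer: str, vocabulary: set) -> float:
    """Toy spell check: fraction of tokens found in a reference vocabulary."""
    tokens = re.findall(r"[a-záéíóúñü]+", answer.lower())
    if not tokens:
        return 0.0
    return sum(t in vocabulary for t in tokens) / len(tokens)


def score_syntax(answer: str) -> float:
    """Toy readability proxy: shorter average sentence length scores higher."""
    sentences = [s for s in re.split(r"[.!?]+", answer) if s.strip()]
    if not sentences:
        return 0.0
    avg_words = sum(len(s.split()) for s in sentences) / len(sentences)
    return min(1.0, 25.0 / avg_words)


def score_semantics(answer: str, reference: str) -> float:
    """Toy stand-in for semantic similarity: Jaccard overlap with a reference."""
    a, r = set(answer.lower().split()), set(reference.lower().split())
    return len(a & r) / len(a | r) if (a | r) else 0.0


def grade(answer: str, reference: str, vocabulary: set):
    """Weighted grade plus per-level scores that can drive formative feedback."""
    scores = {
        "orthography": score_orthography(answer, vocabulary),
        "syntax": score_syntax(answer),
        "semantics": score_semantics(answer, reference),
    }
    return sum(WEIGHTS[k] * v for k, v in scores.items()), scores
```

The per-level scores returned by grade could be mapped to feedback messages for the student, and during validation the weighted totals could be correlated with human grades (e.g., Pearson's r), in line with the evaluation plan sketched in the abstract.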



Acknowledgements

This work is partially funded by the PLeNTaS project, “Proyectos I+D+i 2019”, PID2019-111430RB-I00, by the PL-NETO project, Proyecto PROPIO UNIR, projectId B0036, and by Universidad Internacional de la Rioja (UNIR), through the Research Institute for Innovation & Technology in Education (UNIR iTED, http://ited.unir.net).

Author information

Correspondence to Luis de-la-Fuente-Valentín.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

de-la-Fuente-Valentín, L., Verdú, E., Padilla-Zea, N., Villalonga, C., Blanco Valencia, X.P., Baldiris Navarro, S.M. (2022). Semiautomatic Grading of Short Texts for Open Answers in Higher Education. In: Casalino, G., et al. (eds.) Higher Education Learning Methodologies and Technologies Online. HELMeTO 2021. Communications in Computer and Information Science, vol 1542. Springer, Cham. https://doi.org/10.1007/978-3-030-96060-5_4

  • DOI: https://doi.org/10.1007/978-3-030-96060-5_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-96059-9

  • Online ISBN: 978-3-030-96060-5

  • eBook Packages: Computer Science, Computer Science (R0)
