Measuring the Quality of Test-based Exercises Based on the Performance of Students

A Correction to this article was published on 07 March 2021

Abstract

To be effective, a learning process requires valid and suitable educational resources. However, measuring the quality of an educational resource is not an easy task for a teacher. Data on student performance can be used to measure how appropriate didactic resources are, but adequate metrics and statistics are also needed. This paper presents TEA, a Visual Learning Analytics tool for measuring the quality of a particular type of educational resource: test-based exercises. TEA is a teacher-oriented tool that helps teachers improve the quality of the learning material they have created by analyzing and visualizing student performance. It evaluates not only the adequacy of individual items but also the appropriateness of a whole test, and it presents the results of the evaluation so that they are easily interpretable by teachers and developers of educational material. The development of TEA required a thorough analysis and classification of metrics and statistics to identify those that are useful for measuring the quality of test-based exercises from student performance data. The tool provides visual representations of student performance that allow teachers to evaluate the appropriateness of the test-based exercises they have created. The experimentation carried out with TEA at higher education level is also presented.
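
As a rough illustration of the kind of item-level and test-level indicators that can be derived from student performance data, the sketch below computes classical test theory statistics (item difficulty, item discrimination as a point-biserial correlation, and Cronbach's alpha for the test as a whole) from a binary response matrix. This is only a minimal example of standard indicators; the abstract does not specify which metrics TEA actually implements, so the function names and the synthetic data here are illustrative assumptions.

    # Illustrative sketch (assumption, not TEA's actual implementation):
    # classical test theory indicators computed from a binary response matrix
    # in which rows are students, columns are items, 1 = correct, 0 = incorrect.
    import numpy as np

    def item_statistics(responses):
        """Per-item difficulty (proportion correct) and discrimination."""
        responses = np.asarray(responses, dtype=float)
        n_items = responses.shape[1]
        difficulty = responses.mean(axis=0)
        discrimination = np.empty(n_items)
        for j in range(n_items):
            # Point-biserial correlation between the item score and the
            # rest-of-test score (total score excluding item j).
            rest_score = responses.sum(axis=1) - responses[:, j]
            discrimination[j] = np.corrcoef(responses[:, j], rest_score)[0, 1]
        return difficulty, discrimination

    def cronbach_alpha(responses):
        """Internal-consistency estimate for the test as a whole."""
        responses = np.asarray(responses, dtype=float)
        k = responses.shape[1]
        item_variances = responses.var(axis=0, ddof=1).sum()
        total_variance = responses.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        data = (rng.random((50, 10)) < 0.7).astype(int)  # synthetic: 50 students, 10 items
        difficulty, discrimination = item_statistics(data)
        print("item difficulty:", np.round(difficulty, 2))
        print("item discrimination:", np.round(discrimination, 2))
        print("Cronbach's alpha:", round(cronbach_alpha(data), 2))

In such a setting, items with very high or very low difficulty, or with low discrimination, would be candidates for revision, while the whole-test statistic summarizes the consistency of the exercise set.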

Notes

  1. Basque language translation of “Test Exercises Examiner”.

Acknowledgements

This work was supported by the Basque Government through ADIAN, under Grant IT980-16, and by MCIU/AEI/FEDER, UE, under Grant RTI2018-096846-B-C21.

Author information

Corresponding author

Correspondence to Jon A. Elorriaga.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original online version of this article was revised: The last part of the Conclusion section (see below), which appeared in the proofs reviewed by the author, did not appear in the published article. “In honor of Jim: We acknowledge the help and inspiration of Professor Greer during the time we were fortunate to share with him. The academic and friendship relationship between Professor Jim Greer and several members of the Ga-Lan group began in the early 1990s. In those years Jim acted as a mentor to some of us, then PhD students, and instilled in us a formal, yet pragmatic, way of researching. In 1993, he welcomed us warmly at the ARIES laboratory in the cold Saskatoon winter. Then, we got to know his dedication to the students who worked with him, the closeness with which he treated those students, how open-minded he was with their ideas and how he improved them with his great experience and insights. Thereafter, informal meetings with Jim during conferences were a good reason to attend them. Jim’s influence prompted us to work on learning systems adapted to the needs of the students. Like him, we believe in the importance of student models for this adaptation to be possible. We also share other lines of research, such as learning analytics, the focus of this article. Eskerrik asko, Jim! (Thanks, Jim).”

About this article

Cite this article

Arruarte, J., Larrañaga, M., Arruarte, A. et al. Measuring the Quality of Test-based Exercises Based on the Performance of Students. Int J Artif Intell Educ (2020). https://doi.org/10.1007/s40593-020-00208-0

Keywords

  • Test-based exercises
  • Visual Learning Analytics
  • Quality evaluation of test-based exercises