
Explainable AI for Intelligent Tutoring Systems

  • Conference paper
  • Published in: Frontiers of Artificial Intelligence, Ethics, and Multidisciplinary Applications (FAIEMA 2023)

Abstract

This paper investigates the significance of Explainable AI (XAI) in educational technologies, using the iRead project as a case study. iRead, an adaptive learning technology designed to support language learning, demonstrates the pivotal role of explainability in fostering effective, trustworthy, and engaging learning environments. Its transparent, learner-adaptive design further shows how explainability supports personalized instruction and helps teachers manage classroom diversity. Observations across multiple pilot programs reveal that iRead significantly improves learners’ motivation, engagement, and literacy skills. Moreover, the clarity of its adaptive algorithms has eased teachers’ concerns, promoting its effective integration into classroom instruction. Beyond literacy, the project has also improved digital literacy among both teachers and students. This case study substantiates the need for transparent and understandable AI tools in fostering effective and inclusive educational experiences, highlighting their value in modern pedagogical approaches.

This work has been funded by the Horizon Europe Road-STEAMER project (https://www.road-steamer.eu) and the H2020 iRead project (https://iread-project.eu).
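To make the notion of a transparent adaptive algorithm concrete, the sketch below shows a rule-based content-selection step that can justify its choice in plain language to a teacher or learner. It is a hypothetical illustration only, not the iRead implementation: the feature names, prerequisite structure, and mastery threshold are assumptions introduced for this example.

```python
# Hypothetical sketch of an explainable adaptation rule: pick the next language
# feature to practise and return a human-readable reason for the choice.
# Feature names, prerequisites, and the 0.8 threshold are illustrative assumptions.

MASTERY_THRESHOLD = 0.8  # assumed cutoff for treating a feature as mastered


def next_feature(mastery: dict[str, float],
                 prerequisites: dict[str, list[str]]) -> tuple[str | None, str]:
    """Return (feature_to_practise, plain-language explanation).

    mastery maps feature names to estimated mastery in [0, 1];
    prerequisites maps each feature to the features it depends on.
    """
    for feature, prereqs in prerequisites.items():
        if mastery.get(feature, 0.0) >= MASTERY_THRESHOLD:
            continue  # already mastered, skip
        unmet = [p for p in prereqs if mastery.get(p, 0.0) < MASTERY_THRESHOLD]
        if unmet:
            continue  # prerequisites not yet mastered, defer this feature
        reason = (
            f"'{feature}' selected: current mastery {mastery.get(feature, 0.0):.2f} "
            f"is below {MASTERY_THRESHOLD} and all prerequisites "
            f"({', '.join(prereqs) or 'none'}) are mastered."
        )
        return feature, reason
    return None, "All features with satisfied prerequisites are already mastered."


if __name__ == "__main__":
    mastery = {"letter-sounds": 0.9, "blending": 0.55, "digraphs": 0.2}
    prereqs = {"letter-sounds": [], "blending": ["letter-sounds"], "digraphs": ["blending"]}
    print(next_feature(mastery, prereqs))
```

Because every decision is a readable rule plus an explanation string, a teacher can inspect why a given item was assigned, which is the kind of transparency the case study argues for.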



Author information

Correspondence to Kostas Karpouzis.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Karpouzis, K. (2024). Explainable AI for Intelligent Tutoring Systems. In: Farmanbar, M., Tzamtzi, M., Verma, A.K., Chakravorty, A. (eds) Frontiers of Artificial Intelligence, Ethics, and Multidisciplinary Applications. FAIEMA 2023. Frontiers of Artificial Intelligence, Ethics and Multidisciplinary Applications. Springer, Singapore. https://doi.org/10.1007/978-981-99-9836-4_6

