
Trustworthy Academic Risk Prediction with Explainable Boosting Machines

  • Conference paper
Artificial Intelligence in Education (AIED 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13916)


Abstract

The use of predictive models in education promises individual support and personalization for students. To develop trustworthy models, we need to understand which factors and causes contribute to a prediction. It is therefore necessary to develop models that are not only accurate but also explainable. Moreover, we need to conduct holistic model evaluations that quantify explainability and other relevant metrics alongside established performance metrics. This paper explores the use of Explainable Boosting Machines (EBMs) for the task of academic risk prediction. EBMs are an extension of Generalized Additive Models and promise state-of-the-art performance on tabular datasets while being inherently interpretable. We demonstrate the benefits of using EBMs for academic risk prediction trained on online learning behavior data and show the explainability of the model. Our study shows that EBMs are as accurate as other state-of-the-art approaches while being competitive on metrics relevant to trustworthy academic risk prediction, such as earliness, stability, fairness, and faithfulness of explanations. These results encourage the broader use of EBMs for other Artificial Intelligence in Education tasks.
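The property the abstract relies on can be sketched in a few lines: like a Generalized Additive Model, an EBM learns one shape function per feature, and a prediction is the inverse-link of the sum of per-feature contributions plus an intercept, so each feature's contribution is an exact, inspectable explanation. A minimal illustration with hand-picked bins and weights for two hypothetical online-learning features (a real EBM learns these curves from data, e.g. via the InterpretML library):

```python
import math

# Hypothetical binned shape functions (not trained; for illustration only).
# A real EBM learns one such curve per feature via cyclic gradient boosting.
SHAPE_CLICKS = {(0, 10): -1.2, (10, 50): 0.1, (50, 10**9): 0.9}  # total clicks
SHAPE_DAYS = {(0, 2): -0.8, (2, 5): 0.2, (5, 10**9): 0.7}        # active days/week

def shape(shape_fn, x):
    """Look up the additive contribution of one feature value."""
    for (lo, hi), weight in shape_fn.items():
        if lo <= x < hi:
            return weight
    raise ValueError(f"value {x} outside all bins")

def predict_proba(clicks, days, intercept=-0.1):
    """EBM-style prediction: sigmoid of a SUM of per-feature terms.

    Because the terms add up, each one is an exact explanation of
    how that feature moved this prediction, with no approximation.
    """
    contributions = {
        "clicks": shape(SHAPE_CLICKS, clicks),
        "days": shape(SHAPE_DAYS, days),
    }
    logit = intercept + sum(contributions.values())
    return 1 / (1 + math.exp(-logit)), contributions

prob, terms = predict_proba(clicks=60, days=6)
# `terms` shows exactly why the model predicted `prob`:
# {'clicks': 0.9, 'days': 0.7}
```

This additive structure is what distinguishes EBM explanations from post-hoc attributions: the per-feature terms are the model, not an approximation of it.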


Notes

  1. The code is available at https://gitlab.com/vegeedsilva/trustworthy-academic-risk-prediction-with-explainable-boosting-machine.git.


Author information

Correspondence to Johannes Schleiss.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Dsilva, V., Schleiss, J., Stober, S. (2023). Trustworthy Academic Risk Prediction with Explainable Boosting Machines. In: Wang, N., Rebolledo-Mendez, G., Matsuda, N., Santos, O.C., Dimitrova, V. (eds) Artificial Intelligence in Education. AIED 2023. Lecture Notes in Computer Science, vol 13916. Springer, Cham. https://doi.org/10.1007/978-3-031-36272-9_38


  • DOI: https://doi.org/10.1007/978-3-031-36272-9_38

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-36271-2

  • Online ISBN: 978-3-031-36272-9

  • eBook Packages: Computer Science, Computer Science (R0)
