An Interpretability Evaluation Framework for Decision Tree Surrogate Model-Based XAIs

  • Conference paper
  • First Online:
Frontiers of Artificial Intelligence, Ethics, and Multidisciplinary Applications (FAIEMA 2023)


Explainable Artificial Intelligence (XAI) plays a pivotal role in facilitating users’ understanding of AI model predictions. Among the various branches of XAI, the decision tree surrogate model has gained considerable popularity due to its ability to approximate predictions of black-box models while maintaining interpretability through its tree structure. Despite the abundance of proposed XAI methods, evaluating the interpretability of these methods remains challenging. Traditional subjective evaluation methods heavily rely on users’ domain knowledge, leading to potential biases and costly processes. Additionally, objective evaluation methods only address limited aspects of XAI interpretability and are rarely tailored for decision tree surrogate models. This paper proposes a comprehensive framework for evaluating the interpretability of decision tree surrogate model-based XAIs. The framework encompasses six quantitative properties, namely complexity, clarity, stability, consistency, sufficiency, and causality, and provides calculation methods for each indicator. Furthermore, we conduct extensive experiments on several classical decision tree-based XAIs using the proposed framework. Lastly, we demonstrate the practical application of the framework through a case study on methods aimed at enhancing interpretability. Our objective is to provide practitioners with valuable guidance in selecting appropriate XAI methods for their specific use cases and to assist developers in improving the performance of their XAI systems.
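As an illustrative sketch only (not the authors' exact formulas), two of the properties named above can be approximated for a decision tree surrogate with standard scikit-learn tooling: complexity as the structural size of the tree, and a fidelity score of the kind often used as a proxy for sufficiency. The model choices, depth limit, and dataset here are assumptions for demonstration.

```python
# Illustrative sketch: complexity and a fidelity-style score for a
# decision tree surrogate of a black-box classifier. The specific
# indicator definitions in the paper may differ.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Black-box model whose predictions are to be explained.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Surrogate: a shallow tree trained to mimic the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Complexity: structural size of the explanation (node count, depth).
complexity = {
    "n_nodes": surrogate.tree_.node_count,
    "depth": surrogate.get_depth(),
}

# Fidelity: fraction of inputs on which the surrogate reproduces the
# black box's prediction, a common proxy for explanation sufficiency.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()

print(complexity, round(fidelity, 3))
```

A lower node count and depth make the surrogate easier to read but typically trade off against fidelity, which is exactly the kind of tension a multi-property framework is meant to surface.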






Author information

Corresponding author

Correspondence to Hai Huang.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Yang, X., Huang, H., Zuo, X. (2024). An Interpretability Evaluation Framework for Decision Tree Surrogate Model-Based XAIs. In: Farmanbar, M., Tzamtzi, M., Verma, A.K., Chakravorty, A. (eds) Frontiers of Artificial Intelligence, Ethics, and Multidisciplinary Applications. FAIEMA 2023. Frontiers of Artificial Intelligence, Ethics and Multidisciplinary Applications. Springer, Singapore.
