Towards Explainability of Tree-Based Ensemble Models. A Critical Overview

  • Conference paper
  • In: New Advances in Dependability of Networks and Systems (DepCoS-RELCOMEX 2022)

Abstract

Tree-based ensemble models are widely applied in artificial intelligence systems due to their robustness and generality. However, these models are not transparent. To make such systems trustworthy and dependable, numerous explanation techniques have been developed.

This paper presents selected explainability techniques for tree-based ensemble models. First, the notion of black-boxness and the definition of explainability are discussed. Then, the predominant model-agnostic techniques (LIME, SHAP, counterfactual explanations) as well as model-specific ones (fusion into a single decision tree, iForest) are described. Other methods are briefly mentioned as well. Finally, a short summary is given.
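For orientation, below is a minimal sketch, not taken from the paper, of how the two predominant model-agnostic techniques named in the abstract (SHAP and LIME) are typically applied to a tree-based ensemble. It assumes the third-party shap and lime Python packages together with scikit-learn; the dataset and all parameter choices are purely illustrative.

```python
# Illustrative sketch only; assumes the `shap`, `lime`, and `scikit-learn`
# packages. None of this code comes from the paper itself.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Fit an opaque tree-based ensemble on an example dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP: TreeExplainer exploits the tree structure to compute exact
# additive feature attributions for individual predictions.
shap_values = shap.TreeExplainer(model).shap_values(X.iloc[:5])

# LIME: fit a sparse local surrogate model around one instance by
# perturbing its features and querying the black-box predictor.
lime_explainer = LimeTabularExplainer(
    X.values,
    feature_names=list(X.columns),
    class_names=["malignant", "benign"],
    mode="classification",
)
explanation = lime_explainer.explain_instance(
    X.values[0], model.predict_proba, num_features=5
)
print(explanation.as_list())  # top local feature contributions
```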



Author information


Correspondence to Dominik Sepiolo.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Sepiolo, D., Ligęza, A. (2022). Towards Explainability of Tree-Based Ensemble Models. A Critical Overview. In: Zamojski, W., Mazurkiewicz, J., Sugier, J., Walkowiak, T., Kacprzyk, J. (eds) New Advances in Dependability of Networks and Systems. DepCoS-RELCOMEX 2022. Lecture Notes in Networks and Systems, vol 484. Springer, Cham. https://doi.org/10.1007/978-3-031-06746-4_28
