
Relevance Metric for Counterfactuals Selection in Decision Trees

  • Conference paper
Intelligent Data Engineering and Automated Learning – IDEAL 2019 (IDEAL 2019)

Abstract

Explainable Machine Learning is an emerging field within the Machine Learning domain. It addresses the explicability of Machine Learning models and the rationale behind their predictions. Example-based explanation methods use particular instances, either previously observed or newly created, to explain the behaviour of models or of individual predictions. Counterfactual-based explanation is one such method. A counterfactual is a hypothetical instance that is similar to an example whose explanation is of interest, but that is assigned a different predicted class. This paper presents a relevance metric for counterfactual selection, called sGower, designed to induce sparsity in Decision Tree models. It handles both categorical and continuous features, taking into account the number of feature changes and the distance between the counterfactual and the example. The proposed metric is evaluated against previous relevance metrics on several categorical and continuous datasets, obtaining better results on average than previous approaches.
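The abstract describes a metric that combines a Gower-style dissimilarity over mixed feature types with a sparsity term based on the number of changed features. The sketch below is a minimal illustration of that idea, not the paper's exact formulation: the function names `gower_distance` and `sgower_score`, the `sparsity_weight` parameter, and the additive way the two terms are combined are all assumptions made here for clarity.

```python
import numpy as np

def gower_distance(x, y, cat_mask, ranges):
    """Gower (1971) dissimilarity between two instances.

    cat_mask[i] is True when feature i is categorical; ranges[i] is the
    observed range of continuous feature i, used for normalisation.
    """
    d = np.empty(len(x))
    for i, (xi, yi) in enumerate(zip(x, y)):
        if cat_mask[i]:
            # Categorical: simple mismatch indicator.
            d[i] = 0.0 if xi == yi else 1.0
        else:
            # Continuous: range-normalised absolute difference.
            d[i] = abs(float(xi) - float(yi)) / ranges[i]
    return d.mean()

def sgower_score(x, cf, cat_mask, ranges, sparsity_weight=1.0):
    """Illustrative sGower-style relevance: Gower distance plus a
    penalty on the fraction of features the counterfactual changes.
    Lower scores mark more relevant (sparser, closer) counterfactuals.
    """
    changed = sum(xi != ci for xi, ci in zip(x, cf)) / len(x)
    return gower_distance(x, cf, cat_mask, ranges) + sparsity_weight * changed
```

Under this sketch, selecting a counterfactual for an example amounts to scoring each candidate with `sgower_score` and keeping the minimum; the sparsity term breaks ties in favour of candidates that change fewer features.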



Acknowledgements

Research supported by grant from the Spanish Ministry of Economy and Competitiveness: SABERMED (Ref: RTC-2017-6253-1); Retos-Investigación program: MODAS-IN (Ref: RTI2018-094269-B-I00); and NVIDIA Corporation.

Author information

Correspondence to Rubén R. Fernández.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Fernández, R.R., de Diego, I.M., Aceña, V., Moguerza, J.M., Fernández-Isabel, A. (2019). Relevance Metric for Counterfactuals Selection in Decision Trees. In: Yin, H., Camacho, D., Tino, P., Tallón-Ballesteros, A., Menezes, R., Allmendinger, R. (eds) Intelligent Data Engineering and Automated Learning – IDEAL 2019. IDEAL 2019. Lecture Notes in Computer Science, vol 11871. Springer, Cham. https://doi.org/10.1007/978-3-030-33607-3_10

  • DOI: https://doi.org/10.1007/978-3-030-33607-3_10
  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-33606-6

  • Online ISBN: 978-3-030-33607-3

  • eBook Packages: Computer Science (R0)
