Post-hoc Explanation Options for XAI in Deep Learning: The Insight Centre for Data Analytics Perspective

  • Conference paper
  • In: Pattern Recognition. ICPR International Workshops and Challenges (ICPR 2021)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12663)

Abstract

This paper profiles recent research work on eXplainable AI (XAI) at the Insight Centre for Data Analytics. This work concentrates on post-hoc, explanation-by-example solutions to XAI as one approach to explaining black-box deep-learning systems. Three different methods of post-hoc explanation are outlined for image and time-series datasets: factual, counterfactual, and semi-factual methods. The future landscape for XAI solutions is discussed.
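To ground the explanation-by-example idea, here is a minimal, illustrative sketch (not the authors' implementation) of the factual case: a black-box prediction is explained by retrieving the most similar training instance in the model's latent space and presenting it as a precedent. The `extract_features` hook (mapping inputs to, say, penultimate-layer activations) is a hypothetical placeholder.

```python
# Illustrative sketch of post-hoc, factual explanation-by-example:
# explain a black-box prediction by retrieving the most similar
# training case in the model's latent space.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_case_base(extract_features, X_train):
    """Index the training set by its latent representations."""
    latents = extract_features(X_train)           # shape: (n_cases, n_dims)
    return NearestNeighbors(n_neighbors=1).fit(latents)

def factual_explanation(extract_features, index, X_train, y_train, query):
    """Return the training case (and its label) nearest to `query` in
    latent space: 'the model treats your query like this known case'."""
    z = extract_features(query[np.newaxis, ...])  # latent code, shape (1, n_dims)
    _, idx = index.kneighbors(z)
    i = int(idx[0, 0])
    return X_train[i], y_train[i]
```

Counterfactual and semi-factual methods instead search for a minimally edited version of the query that does (counterfactual) or does not (semi-factual) change the model's prediction.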


Notes

  1. Here, we consider factual examples as explanations; but LIME [36] also gives factual information about the current test instance, via feature-importance scores.
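For comparison, a feature-importance explanation of the kind LIME produces can be obtained with the open-source lime package. The sketch below is illustrative only: the random-forest classifier, toy data, and feature names are placeholders standing in for a real black box.

```python
# Minimal sketch of a LIME feature-importance explanation for one
# test instance; the model and data are placeholders for illustration.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["f0", "f1", "f2", "f3"],
    class_names=["neg", "pos"],
    mode="classification",
)
# Per-feature weights for this one prediction: a factual,
# feature-level account of the current test instance.
exp = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=4)
print(exp.as_list())
```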

References

  1. Ala-Pietilä, P.: High-Level Expert Group on Artificial Intelligence. https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence. Accessed 10 Oct 2020

  2. Ates, E., et al.: Counterfactual explanations for machine learning on multivariate time series data. arXiv:2008.10781 (2020)

  3. Bagnall, A., et al.: The great time series classification bake off: an experimental evaluation of recently proposed algorithms. Extended Version. arXiv:1602.01711 (2016)

  4. Byrne, R.M.J.: Counterfactuals in explainable artificial intelligence (XAI): evidence from human reasoning. In: Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI 2019) (2019)

  5. Chen, C., et al.: This looks like that. In: NeurIPS (2020)

  6. Dau, H.A., et al.: The UCR time series archive. arXiv:1810.07758 (2019)

  7. Delaney, E., et al.: Instance-based counterfactual explanations for time series classification. arXiv:2009.13211 (2020)

  8. Ford, C., et al.: Play MNIST for me! User studies on the effects of post-hoc, example-based explanations & error rates on debugging a deep learning, black-box classifier. In: IJCAI 2020 XAI Workshop (2020)

  9. Forestier, G., et al.: Generating synthetic time series to augment sparse datasets. In: 2017 IEEE International Conference on Data Mining (2017)

  10. Frosst, N., Hinton, G.: Distilling a neural network into a soft decision tree. arXiv:1711.09784 (2017)

  11. Gilpin, L.H., et al.: Explaining explanations: an approach to evaluating interpretability of machine learning. arXiv:1806.00069 (2018)

  12. Hahn, T.: Strategic Research, Innovation and Deployment Agenda. https://ai-data-robotics-partnership.eu/wp-content/uploads/2020/09/AI-Data-Robotics-Partnership-SRIDA-V3.0.pdf. Accessed 10 Oct 2020

  13. Karlsson, I., et al.: Explainable time series tweaking via irreversible and reversible temporal transformations. arXiv:1809.05183 (2018)

  14. Keane, M., Kenny, E.: How case-based reasoning explains neural networks: a theoretical analysis of XAI using post-hoc explanation-by-example from a survey of ANN-CBR twin-systems. In: Bach, K., Marling, C. (eds.) ICCBR 2019. LNCS (LNAI), vol. 11680, pp. 155–171. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29249-2_11

  15. Keane, M.T., Kenny, E.M.: The twin-system approach as one generic solution for XAI. In: IJCAI 2019 XAI Workshop (2019)

  16. Keane, M.T., Smyth, B.: Good counterfactuals and where to find them: a case-based technique for generating counterfactuals for explainable AI (XAI). In: Watson, I., Weber, R. (eds.) ICCBR 2020. LNCS (LNAI), vol. 12311, pp. 163–178. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58342-2_11

  17. Kenny, E.M., et al.: Bayesian case-exclusion and personalized explanations for sustainable dairy farming. In: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI 2020) (2020)

  18. Kenny, E., et al.: Predicting grass growth for sustainable dairy farming: a CBR system using Bayesian case-exclusion and post-hoc, personalized explanation-by-example (XAI). In: Bach, K., Marling, C. (eds.) ICCBR 2019. LNCS (LNAI), vol. 11680, pp. 172–187. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29249-2_12

  19. Kenny, E.M., Keane, M.T.: On generating plausible counterfactual and semi-factual explanations for deep learning. arXiv:2009.06399 (2020)

  20. Kenny, E.M., Keane, M.T.: Twin-systems to explain artificial neural networks using case-based reasoning. In: Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI 2019) (2019)

  21. Labaien, J., Zugasti, E., De Carlos, X.: Contrastive explanations for a deep learning model on time-series data. In: Song, M., Song, I.-Y., Kotsis, G., Tjoa, A.M., Khalil, I. (eds.) DaWaK 2020. LNCS, vol. 12393, pp. 235–244. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-59065-9_19

  22. Laugel, T., et al.: Defining locality for surrogates in post-hoc interpretability. arXiv:1806.07498 (2018)

  23. Laugel, T., et al.: The dangers of post-hoc interpretability: unjustified counterfactual explanations. In: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI 2019) (2019)

  24. Leavy, S., et al.: Data, power and bias in artificial intelligence. arXiv:2008.0734 (2020)

  25. Leavy, S., Meaney, G., Wade, K., Greene, D.: Mitigating gender bias in machine learning data sets. In: Boratto, L., Faralli, S., Marras, M., Stilo, G. (eds.) BIAS 2020. CCIS, vol. 1245, pp. 12–26. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-52485-2_2

  26. Yang, L., et al.: Generating plausible counterfactual explanations for deep transformers in financial text classification. In: Proceedings of the 28th International Conference on Computational Linguistics (2020)

  27. Lipton, Z.C.: The mythos of model interpretability. arXiv:1606.03490 (2017)

  28. Mittelstadt, B., et al.: Explaining explanations in AI. In: Proceedings of the Conference on Fairness, Accountability, and Transparency (2019)

  29. Mueen, A., Keogh, E.: Extracting optimal performance from dynamic time warping. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2016)

  30. Nguyen, T.T., Le Nguyen, T., Ifrim, G.: A model-agnostic approach to quantifying the informativeness of explanation methods for time series classification. In: Lemaire, V., Malinowski, S., Bagnall, A., Guyet, T., Tavenard, R., Ifrim, G. (eds.) AALTD 2020. LNCS (LNAI), vol. 12588, pp. 77–94. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-65742-0_6

  31. Nugent, C., et al.: Gaining insight through case-based explanation. J. Intell. Inf. Syst. 32(3), 267–295 (2009). https://doi.org/10.1007/s10844-008-0069-0

  32. O’Sullivan, B.: Towards a Magna Carta for Data: Expert Opinion Piece, Engineering and Computer Science Committee. https://www.ria.ie/sites/default/files/ria_magna_carta_data.pdf. Accessed 10 Oct 2020

  33. Papernot, N., McDaniel, P.: Deep k-Nearest neighbors: towards confident, interpretable and robust deep learning. arXiv:1803.04765 (2018)

  34. Petitjean, F., et al.: A global averaging method for dynamic time warping, with applications to clustering. Pattern Recogn. 44, 678–693 (2011)

  35. Prabhu, V.U., Birhane, A.: Large image datasets: a pyrrhic win for computer vision? arXiv:2006.16923 (2020)

  36. Ribeiro, M.T., et al.: “Why should I trust you?”: explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD 2016 (2016)

  37. Rudin, C.: Please stop explaining black box models for high stakes decisions. arXiv:1811.10154 (2018)

  38. Seah, J.C.Y., et al.: Chest radiographs in congestive heart failure: visualizing neural network learning. Radiology 290(2), 514–522 (2019)

  39. Sørmo, F., et al.: Explanation in case-based reasoning – perspectives and goals. Artif. Intell. Rev. 24, 109–143 (2005). https://doi.org/10.1007/s10462-005-4607-7

  40. Wachter, S., et al.: Counterfactual explanations without opening the black box: automated decisions and the GDPR. SSRN J. 31 (2017)

  41. Horta, V.A.C., Mileo, A.: Towards explaining deep neural networks through graph analysis. In: Anderst-Kotsis, G., et al. (eds.) DEXA 2019. CCIS, vol. 1062, pp. 155–165. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-27684-3_20

  42. Hohman, F., Kahng, M., Pienta, R., Chau, D.H.: Visual analytics in deep learning. IEEE Trans. Visual. Comput. Graphics 25, 2674–2693 (2018)


Acknowledgements

This paper emanated from research funded by (i) Science Foundation Ireland (SFI) to the Insight Centre for Data Analytics (12/RC/2289-P2), and (ii) SFI and DAFM, on behalf of the Government of Ireland, to the VistaMilk SFI Research Centre (16/RC/3835).

Author information

Correspondence to Mark T. Keane.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Kenny, E.M., Delaney, E.D., Greene, D., Keane, M.T. (2021). Post-hoc Explanation Options for XAI in Deep Learning: The Insight Centre for Data Analytics Perspective. In: Del Bimbo, A., et al. (eds.) Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, vol 12663. Springer, Cham. https://doi.org/10.1007/978-3-030-68796-0_2

  • DOI: https://doi.org/10.1007/978-3-030-68796-0_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-68795-3

  • Online ISBN: 978-3-030-68796-0

  • eBook Packages: Computer Science, Computer Science (R0)
