Abstract
Organisations face growing legal requirements and ethical responsibilities to ensure that decisions made by their intelligent systems are explainable. However, the provision of an explanation is often application-dependent, causing an extended design phase and delayed deployment. In this paper we present an explainability framework formed of a catalogue of explanation methods, allowing integration into a range of projects within a telecommunications organisation. These methods are split into low-level explanations, high-level explanations and co-created explanations. We motivate and evaluate this framework using the specific case study of explaining the conclusions of field engineering experts to non-technical planning staff. Feedback from an iterative co-creation process and a qualitative evaluation indicates that this is a valuable development tool for use in future company projects.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Martin, K., Liret, A., Wiratunga, N., Owusu, G., Kern, M. (2019). Developing a Catalogue of Explainability Methods to Support Expert and Non-expert Users. In: Bramer, M., Petridis, M. (eds) Artificial Intelligence XXXVI. SGAI 2019. Lecture Notes in Computer Science(), vol 11927. Springer, Cham. https://doi.org/10.1007/978-3-030-34885-4_24
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-34884-7
Online ISBN: 978-3-030-34885-4
eBook Packages: Computer Science (R0)