Abstract
In recent years, XAI research has mainly been concerned with developing new technical approaches to explain deep learning models. Only recently has research started to acknowledge the need to tailor explanations to the contexts and requirements of different stakeholders. Explanations must suit not only model developers, but also domain experts and end users. Hence, to satisfy different stakeholders, explanation methods need to be combined. While multi-modal explanations have been used to make model predictions more transparent, less research has treated explanation as a process in which users can ask for information according to the level of understanding they have gained at a certain point in time. Consequently, besides multi-modal explanations, users should be given the opportunity to explore explanations at different levels of abstraction. We present a process-based approach that combines multi-level and multi-modal explanations: through conversational interaction, the user can request textual explanations or visualizations in a drill-down manner. We use Inductive Logic Programming, an interpretable machine learning approach, to learn a comprehensible model. Furthermore, we present an algorithm that creates an explanatory tree for each example whose classification is to be explained. The user can navigate this explanatory tree to obtain answers at different levels of detail. We provide a proof-of-concept implementation for concepts induced from a semantic net about living beings.
The work presented in this paper is part of the BMBF ML-3 project Transparent Medical Expert Companion (TraMeExCo), FKZ 01IS18056 B, 2018-2021.
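Since the abstract only summarizes the mechanism, a minimal sketch may help readers picture the explanatory tree. Assuming the learned model is a set of ordinary, inspectable Prolog clauses, as in the paper's ILP setting, a textbook proof-tree meta-interpreter captures the idea; every predicate and functor name below is illustrative and not taken from the authors' implementation.

% Minimal sketch, not the authors' code: prove/2 proves Goal against
% the learned clauses and records the proof as an explanatory tree
% whose nodes a dialogue can unfold one level at a time.
prove(true, leaf(true)) :- !.
prove((A, B), and(TA, TB)) :- !,   % a conjunction yields two subtrees
    prove(A, TA),
    prove(B, TB).
prove(Goal, node(Goal, SubTree)) :-
    clause(Goal, Body),            % select a rule whose head unifies
    prove(Body, SubTree).          % recursively justify its body

A query such as prove(mammal(kim), Tree), with mammal/1 and kim as invented examples, would bind Tree to a nested term whose root answers the initial "why?" and whose subtrees hold the next levels of detail.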
Change history
18 December 2021
The original version of this chapter was inadvertently published with a misspelling in the explanatory dialogue of Fig. 5; it has since been corrected.
Notes
1. Gitlab repository of the proof-of-concept implementation (an illustrative sketch of one drill-down step follows these notes): https://gitlab.rz.uni-bamberg.de/cogsys/public/multi-level-multi-modal-explanation.
2. Retrieval through semantic search can be performed, for example, via Semantic MediaWiki: https://www.semantic-mediawiki.org/wiki/Help:Semantic_search.
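Complementing the construction sketch above, and purely as an illustration of the drill-down realized by the proof-of-concept in note 1 (the helpers below are hypothetical, not code from the repository), one navigation step can be read as returning the immediate sub-explanations of a node:

% Hypothetical drill-down step: why/2 returns the direct
% sub-explanations of a node, i.e. exactly one more level of detail.
why(node(_Goal, Sub), Children) :-
    children(Sub, Children).

children(leaf(true), []).            % a fact: nothing left to unfold
children(node(G, T), [node(G, T)]).  % a single sub-goal
children(and(A, B), Cs) :-           % flatten conjunctions of subproofs
    children(A, CA),
    children(B, CB),
    append(CA, CB, Cs).

Repeatedly applying why/2 to the returned children walks the tree top-down, mirroring a user who keeps asking for more detail.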
Acknowledgements
The authors would like to thank the anonymous referees, who provided useful comments on the submission version of the paper.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Finzel, B., Tafler, D.E., Scheele, S., Schmid, U. (2021). Explanation as a Process: User-Centric Construction of Multi-level and Multi-modal Explanations. In: Edelkamp, S., Möller, R., Rueckert, E. (eds) KI 2021: Advances in Artificial Intelligence. KI 2021. Lecture Notes in Computer Science(), vol 12873. Springer, Cham. https://doi.org/10.1007/978-3-030-87626-5_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-87625-8
Online ISBN: 978-3-030-87626-5
eBook Packages: Computer Science, Computer Science (R0)