Explanation as a Process: User-Centric Construction of Multi-level and Multi-modal Explanations

  • Conference paper
  • KI 2021: Advances in Artificial Intelligence (KI 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12873)

Abstract

In recent years, XAI research has mainly been concerned with developing new technical approaches to explaining deep learning models. Only recently has research started to acknowledge the need to tailor explanations to the different contexts and requirements of stakeholders. Explanations must suit not only the developers of models, but also domain experts and end users. Thus, to satisfy different stakeholders, explanation methods need to be combined. While multi-modal explanations have been used to make model predictions more transparent, less research has treated explanation as a process in which users can ask for information according to the level of understanding they have gained at a certain point in time. Consequently, besides multi-modal explanations, an opportunity to explore explanations at different levels of abstraction should be provided. We present a process-based approach that combines multi-level and multi-modal explanations. The user can ask for textual explanations or visualizations through conversational interaction in a drill-down manner. We use Inductive Logic Programming, an interpretable machine learning approach, to learn a comprehensible model. Furthermore, we present an algorithm that creates an explanatory tree for each example whose classifier decision is to be explained. The user can navigate the explanatory tree to obtain answers at different levels of detail. We provide a proof-of-concept implementation for concepts induced from a semantic net about living beings.
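The drill-down idea sketched in the abstract can be pictured as a tree whose root gives the most abstract answer and whose children refine it on demand. The following is a minimal illustrative sketch, not the authors' implementation; the class name `ExplanationNode` and the Tweety example are hypothetical stand-ins for the explanatory trees built from the induced logic program.

```python
# Illustrative sketch of a drill-down explanatory tree (hypothetical,
# not the implementation from the paper's Gitlab repository).
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class ExplanationNode:
    statement: str                                   # textual explanation at this level
    children: List["ExplanationNode"] = field(default_factory=list)

    def drill_down(self, path: Tuple[int, ...] = ()) -> List[str]:
        """Follow a sequence of child indices and return the statements
        encountered, one per level of detail."""
        node, trace = self, [self.statement]
        for i in path:
            node = node.children[i]
            trace.append(node.statement)
        return trace


# Toy tree for a concept induced over a semantic net about living beings.
tree = ExplanationNode(
    "Tweety is classified as a bird.",
    [
        ExplanationNode(
            "Because Tweety has feathers.",
            [ExplanationNode("Having feathers is a defining attribute of birds.")],
        ),
        ExplanationNode("Because Tweety lays eggs."),
    ],
)

# A user asking "why?" twice along the first branch sees three levels of detail.
print(tree.drill_down((0, 0)))
```

In a conversational interface, each "why?" request would correspond to descending one level in such a tree, so the user controls how much detail is revealed.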

The work presented in this paper is part of the BMBF ML-3 project Transparent Medical Expert Companion (TraMeExCo), FKZ 01IS18056 B, 2018-2021.

Change history

  • 18 December 2021

    The original version of this chapter was inadvertently published with a misspelling in the explanatory dialogue of Fig. 5; the figure has since been corrected.

Notes

  1. Gitlab repository of the proof-of-concept implementation: https://gitlab.rz.uni-bamberg.de/cogsys/public/multi-level-multi-modal-explanation.

  2. Retrieval through semantic search can be performed, for example, over Semantic MediaWiki: https://www.semantic-mediawiki.org/wiki/Help:Semantic_search.


Acknowledgements

The authors would like to thank the anonymous referees, who provided useful comments on the submission version of the paper.

Author information

Corresponding author

Correspondence to Bettina Finzel.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Finzel, B., Tafler, D.E., Scheele, S., Schmid, U. (2021). Explanation as a Process: User-Centric Construction of Multi-level and Multi-modal Explanations. In: Edelkamp, S., Möller, R., Rueckert, E. (eds) KI 2021: Advances in Artificial Intelligence. KI 2021. Lecture Notes in Computer Science(), vol 12873. Springer, Cham. https://doi.org/10.1007/978-3-030-87626-5_7

  • DOI: https://doi.org/10.1007/978-3-030-87626-5_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87625-8

  • Online ISBN: 978-3-030-87626-5

  • eBook Packages: Computer Science, Computer Science (R0)
