Abstract
Understanding when and why to apply any given eXplainable Artificial Intelligence (XAI) technique is not a straightforward task: no single approach is best suited to every context. This paper addresses the challenge of selecting the most appropriate explainer for the context in which an explanation is required. For AI explainability to be effective, explanations and their presentation need to be oriented towards the stakeholder receiving the explanation. If, in general, no single explanation technique surpasses the rest, then reasoning over the available methods is required in order to select one that is context-appropriate. Due to the transparency they afford, we propose employing argumentation techniques to reach agreement on the most suitable explainer from a given set of candidates.
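To make this concrete, here is a minimal sketch (not the paper's implementation) of the kind of reasoning involved: candidate explainers are treated as arguments in a Dung-style abstract argumentation framework, attacks encode context-specific objections, and the grounded extension identifies the explainer that survives scrutiny. The explainer names and attack relation below are hypothetical.

```python
def grounded_extension(arguments: set[str],
                       attacks: set[tuple[str, str]]) -> set[str]:
    """Compute the grounded extension of an abstract argumentation
    framework by iterating the characteristic function from the empty
    set until it reaches a fixed point."""
    extension: set[str] = set()
    while True:
        # An argument is defended if each of its attackers is in turn
        # attacked by some argument already in the extension.
        defended = {
            a for a in arguments
            if all(any((c, b) in attacks for c in extension)
                   for b in arguments if (b, a) in attacks)
        }
        if defended == extension:
            return extension
        extension = defended

# Hypothetical setup: each argument claims one explainer suits the
# current context; attacks record context-specific objections.
arguments = {"use_lime", "use_shap", "use_counterfactuals"}
attacks = {
    ("use_shap", "use_lime"),             # e.g. stability concerns
    ("use_counterfactuals", "use_shap"),  # e.g. lay audience
    ("use_counterfactuals", "use_lime"),  # e.g. lay audience
}
print(grounded_extension(arguments, attacks))  # {'use_counterfactuals'}
```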
In this paper, we propose a modular reasoning system consisting of a mental model of the relevant stakeholder, a reasoner component that solves the argumentation problem generated by a multi-explainer component, and the AI model that is to be explained suitably to the stakeholder of interest. By formalizing supporting premises and inferences, we can map stakeholder characteristics to those of explanation techniques. This allows us to reason over the techniques and prioritize the best one for the given context, while also offering transparency into the selection decision.
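As an illustration of how a multi-explainer component could generate such an argumentation problem, the sketch below maps a stakeholder profile to arguments and attacks over candidate explainers, reusing grounded_extension from the sketch above. The explainer catalogue, its fields, and the single preference rule are assumptions for illustration only, a deliberately simple stand-in for the formalized premises described in the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Explainer:
    name: str
    audience: str   # "technical" or "lay": whom the output suits

@dataclass(frozen=True)
class Stakeholder:
    role: str       # e.g. "end_user", "data_scientist"
    expertise: str  # "technical" or "lay"

# Hypothetical catalogue of candidate explainers.
CANDIDATES = [
    Explainer("lime", audience="technical"),
    Explainer("shap", audience="technical"),
    Explainer("dice_counterfactuals", audience="lay"),
]

def build_argumentation_problem(s: Stakeholder):
    """One argument per candidate; an explainer whose audience matches
    the stakeholder's expertise attacks every mismatched candidate."""
    arguments = {e.name for e in CANDIDATES}
    attacks = {
        (a.name, b.name)
        for a in CANDIDATES for b in CANDIDATES
        if a.audience == s.expertise != b.audience
    }
    return arguments, attacks

args, atts = build_argumentation_problem(Stakeholder("end_user", "lay"))
print(grounded_extension(args, atts))  # {'dice_counterfactuals'}
```

Because both the premises and the resulting attack graph are explicit, the selection itself can be inspected and contested, which is the transparency benefit the abstract appeals to.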
Notes
1. It is worth noting that as decision trees scale, their interpretability may also decline due to the sheer size of the structure.
Ethics declarations
Ethical Statement
The authors have no competing interests to disclose; this work did not involve human participants or animals, and no issues of informed consent arise.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Methnani, L., Dignum, V., Theodorou, A. (2024). Clash of the Explainers: Argumentation for Context-Appropriate Explanations. In: Nowaczyk, S., et al. Artificial Intelligence. ECAI 2023 International Workshops. ECAI 2023. Communications in Computer and Information Science, vol 1947. Springer, Cham. https://doi.org/10.1007/978-3-031-50396-2_1
DOI: https://doi.org/10.1007/978-3-031-50396-2_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-50395-5
Online ISBN: 978-3-031-50396-2