Clash of the Explainers: Argumentation for Context-Appropriate Explanations

  • Conference paper
  • In: Artificial Intelligence. ECAI 2023 International Workshops (ECAI 2023)

Abstract

Understanding when and why to apply any given eXplainable Artificial Intelligence (XAI) technique is not a straightforward task: no single approach is best suited to every context. This paper addresses the challenge of selecting the most appropriate explainer for the context in which an explanation is required. For AI explainability to be effective, explanations, and how they are presented, need to be oriented towards the stakeholder receiving them. If, in general, no single explanation technique surpasses the rest, then reasoning over the available methods is required in order to select one that is context-appropriate. Owing to the transparency they afford, we propose employing argumentation techniques to reach agreement on the most suitable explainer from a given set of candidates.

In this paper, we propose a modular reasoning system consisting of a mental model of the relevant stakeholder, a reasoner component that solves the argumentation problem generated by a multi-explainer component, and the AI model that is to be explained suitably to the stakeholder of interest. By formalising supporting premises and inferences, we can map stakeholder characteristics to those of explanation techniques. This allows us to reason over the available techniques, prioritise the one best suited to the given context, and offer transparency into the selection decision.
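To make the proposed pipeline concrete, the sketch below phrases the selection step as an abstract argumentation problem in the style of Dung (1995): each candidate explainer becomes an argument, the stakeholder's mental model induces attacks between arguments, and computing the grounded extension yields the context-appropriate choice. This is a minimal sketch only; the stakeholder profile, explainer property sheets, and attack rule are invented for illustration, and the paper's premise-based reasoning is only caricatured by this bare abstract framework.

```python
# Minimal sketch: candidate explainers are arguments, stakeholder
# premises induce attacks, and the grounded extension of the
# resulting Dung (1995) framework selects a winner. All concrete
# names below are hypothetical illustrations, not the authors'
# implementation.

# Hypothetical mental model: a domain expert who needs a local,
# instance-level explanation and has low statistical literacy.
stakeholder = {"needs_local": True, "stat_literacy": "low"}

# Hypothetical property sheets for three candidate explainers.
explainers = {
    "shap": {"scope": "local", "output": "feature_attribution"},
    "dice": {"scope": "local", "output": "counterfactual"},
    "surrogate_tree": {"scope": "global", "output": "rules"},
}

def violates(props: dict) -> bool:
    """True if an explainer breaks a stakeholder premise."""
    if stakeholder["needs_local"] and props["scope"] != "local":
        return True
    # Attribution scores presuppose statistical literacy.
    if stakeholder["stat_literacy"] == "low" \
            and props["output"] == "feature_attribution":
        return True
    return False

# "Use x" attacks "use y" when x satisfies the premises y breaks.
arguments = set(explainers)
attacks = {(x, y) for x in arguments for y in arguments
           if x != y and not violates(explainers[x])
           and violates(explainers[y])}

def grounded(args: set, atts: set) -> set:
    """Grounded extension: least fixed point of the characteristic
    function F(S) = {a | every attacker of a is attacked by S}."""
    ext: set = set()
    while True:
        defended = {a for a in args
                    if all(any((d, b) in atts for d in ext)
                           for b in args if (b, a) in atts)}
        if defended == ext:
            return ext
        ext = defended

print("Context-appropriate explainer(s):", grounded(arguments, attacks))
# -> {'dice'}: local and example-based, so no premise is violated
```

Because the attack relation is derived from explicit premises, the winning argument carries its own justification, which is the transparency property the paper argues for.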


Notes

  1. It is worth noting that as decision trees scale, their interpretability may also decline due to the sheer size of the structure.


Author information

Corresponding author

Correspondence to Leila Methnani.

Ethics declarations

Ethical Statement

The authors have no competing interests to disclose; the work did not involve human participants or animals, and no issues of informed consent arise.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Methnani, L., Dignum, V., Theodorou, A. (2024). Clash of the Explainers: Argumentation for Context-Appropriate Explanations. In: Nowaczyk, S., et al. Artificial Intelligence. ECAI 2023 International Workshops. ECAI 2023. Communications in Computer and Information Science, vol 1947. Springer, Cham. https://doi.org/10.1007/978-3-031-50396-2_1

  • DOI: https://doi.org/10.1007/978-3-031-50396-2_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-50395-5

  • Online ISBN: 978-3-031-50396-2

  • eBook Packages: Computer Science, Computer Science (R0)
