Explanation via Machine Arguing

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12258)

Abstract

As AI becomes ever more ubiquitous in our everyday lives, its ability to explain to and interact with humans is evolving into a critical research area. Explainable AI (XAI) has therefore emerged as a popular topic, but its research landscape is currently very fragmented. Explanations in the literature have generally been aimed at addressing individual challenges and are often ad hoc, tailored to specific AIs and/or narrow settings. Further, the extraction of explanations is no simple task; the design of the explanations must be fit for purpose, with considerations including, but not limited to: Is the model or a result being explained? Is the explanation suited to skilled or unskilled explainees? By which means is the information best exhibited? How may users interact with the explanation? As these considerations rise in number, it quickly becomes clear that a systematic way to obtain a variety of explanations for a variety of users and interactions is much needed. In this tutorial we will overview recent approaches showing how these challenges can be addressed by utilising forms of machine arguing as the scaffolding underpinning explanations that are delivered to users. Machine arguing amounts to the deployment of methods from computational argumentation in AI with suitably mined argumentation frameworks, which provide abstractions of “debates”. Computational argumentation has been widely used to support applications requiring information exchange between AI systems and users, facilitated by the fact that the capability of arguing is pervasive in human affairs and arguing is core to a multitude of human activities: humans argue to explain, interact and exchange information. Our lecture will focus on how machine arguing can serve as the driving force of explanations in AI in different ways, namely: by building explainable systems with argumentative foundations from linguistic data (focusing on reviews), or by extracting argumentative reasoning from existing systems (focusing on a recommender system).
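
As a purely illustrative aid, not drawn from the chapter itself, the short Python sketch below encodes the kind of abstract argumentation framework introduced by Dung [16] that such “debates” are abstracted into: a set of arguments plus an attack relation, evaluated here by iterating the characteristic function to obtain the grounded extension. The argument names and the toy debate are invented for the example.

```python
# Illustrative sketch only: a Dung-style abstract argumentation framework [16]
# given by a set of arguments and an attack relation, evaluated by iterating
# the characteristic function until the grounded extension is reached.
from typing import Dict, Set, Tuple

def grounded_extension(arguments: Set[str],
                       attacks: Set[Tuple[str, str]]) -> Set[str]:
    """Return the grounded extension of the framework (arguments, attacks)."""
    attackers: Dict[str, Set[str]] = {a: set() for a in arguments}
    for attacker, target in attacks:
        attackers[target].add(attacker)

    extension: Set[str] = set()
    while True:
        # An argument is acceptable w.r.t. the current extension if each of
        # its attackers is counter-attacked by some member of the extension.
        acceptable = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension)
                   for b in attackers[a])
        }
        if acceptable == extension:
            return extension
        extension = acceptable

# Toy "debate": a attacks b, b attacks c; the grounded extension is {a, c}.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```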

Notes

  1. These alternative notations are used interchangeably in the literature, as we do here.

  2. Note that several other notions could be used, as overviewed in [7]; we have chosen this specific notion because it satisfies some desirable properties [7] and performs well in practice [11].

  3. Note that this recursively defined notion treats the strengths of attackers and supporters as sets, but needs to consider them in sequence (hence the mention of ‘an arbitrary permutation’); an illustrative sketch of such a notion follows this list.

  4. https://www.rottentomatoes.com.
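
To make note 3 more concrete, here is a hedged Python sketch of one well-known gradual semantics from this line of work, in the style of DF-QuAD [25]: attacker and supporter strengths are aggregated with the probabilistic sum, applied sequentially over an arbitrary permutation but with a permutation-invariant result, and then combined with the argument’s base score. The function names, the toy base scores and the acyclicity assumption are illustrative only; the notion chosen in the chapter may differ in detail.

```python
# Hedged sketch of a DF-QuAD-style gradual semantics [25] for a bipolar
# (attack + support) framework: each argument has a base score in [0, 1],
# attacker and supporter strengths are aggregated with the probabilistic sum,
# and the base score is then moved towards 0 or 1 accordingly. This is only
# an illustration of note 3; the chapter's exact definition may differ.
from functools import reduce
from typing import Dict, List

def aggregate(strengths: List[float]) -> float:
    """Probabilistic-sum aggregation; order-independent despite the fold."""
    return reduce(lambda x, y: x + y - x * y, strengths, 0.0)

def combine(base: float, att: float, sup: float) -> float:
    """Move the base score down if attackers dominate, up if supporters do."""
    if att >= sup:
        return base - base * (att - sup)
    return base + (1.0 - base) * (sup - att)

def strength(arg: str,
             base: Dict[str, float],
             attackers: Dict[str, List[str]],
             supporters: Dict[str, List[str]]) -> float:
    """Recursively evaluate an argument's strength (assumes an acyclic graph)."""
    att = aggregate([strength(a, base, attackers, supporters)
                     for a in attackers.get(arg, [])])
    sup = aggregate([strength(s, base, attackers, supporters)
                     for s in supporters.get(arg, [])])
    return combine(base[arg], att, sup)

# Toy example: argument "x" with one attacker "y" and one supporter "z".
base_scores = {"x": 0.5, "y": 0.8, "z": 0.4}
print(strength("x", base_scores, {"x": ["y"]}, {"x": ["z"]}))
```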

References

  1. Albini, E., Rago, A., Baroni, P., Toni, F.: Relation-based counterfactual explanations for Bayesian network classifiers. In: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI (2020, to appear)

  2. Atkinson, K., et al.: Towards artificial argumentation. AI Mag. 38(3), 25–36 (2017)

  3. Balog, K., Radlinski, F., Arakelyan, S.: Transparent, scrutable and explainable user models for personalized recommendation. In: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR, pp. 265–274 (2019)

  4. Baroni, P., Comini, G., Rago, A., Toni, F.: Abstract games of argumentation strategy and game-theoretical argument strength. In: An, B., Bazzan, A., Leite, J., Villata, S., van der Torre, L. (eds.) PRIMA 2017. LNCS (LNAI), vol. 10621, pp. 403–419. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-69131-2_24

  5. Baroni, P., Gabbay, D., Giacomin, M., van der Torre, L. (eds.): Handbook of Formal Argumentation. College Publications (2018)

  6. Baroni, P., Rago, A., Toni, F.: How many properties do we need for gradual argumentation? In: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, The 30th innovative Applications of Artificial Intelligence (IAAI), and The 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI), pp. 1736–1743 (2018)

  7. Baroni, P., Rago, A., Toni, F.: From fine-grained properties to broad principles for gradual argumentation: a principled spectrum. Int. J. Approximate Reasoning 105, 252–286 (2019)

  8. Briguez, C.E., Budán, M.C., Deagustini, C.A.D., Maguitman, A.G., Capobianco, M., Simari, G.R.: Argument-based mixed recommenders and their application to movie suggestion. Expert Syst. Appl. 41(14), 6467–6482 (2014)

  9. Cayrol, C., Lagasquie-Schiex, M.C.: On the acceptability of arguments in bipolar argumentation frameworks. In: ECSQARU, pp. 378–389 (2005)

  10. Chen, X., Zhang, Y., Qin, Z.: Dynamic explainable recommendation based on neural attentive models. In: The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI, pp. 53–60 (2019)

  11. Cocarascu, O., Rago, A., Toni, F.: Extracting dialogical explanations for review aggregations with argumentative dialogical agents. In: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS, pp. 1261–1269 (2019)

  12. Cocarascu, O., Stylianou, A., Cyras, K., Toni, F.: Data-empowered argumentation for dialectically explainable predictions. In: Proceedings of European Conference on Artificial Intelligence, ECAI 2020 (2020)

  13. Cohen, A., Gottifredi, S., García, A.J., Simari, G.R.: A survey of different approaches to support in argumentation systems. Knowl. Eng. Rev. 29(5), 513–550 (2014)

  14. Cohen, A., Parsons, S., Sklar, E.I., McBurney, P.: A characterization of types of support between structured arguments and their relationship with support in abstract argumentation. Int. J. Approximate Reasoning 94, 76–104 (2018)

  15. Cyras, K., Letsios, D., Misener, R., Toni, F.: Argumentation for explainable scheduling. In: The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI, pp. 2752–2759. AAAI Press (2019)

  16. Dung, P.M.: On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artif. Intell. 77(2), 321–357 (1995)

  17. Gabbay, D.M.: Logical foundations for bipolar and tripolar argumentation networks: preliminary results. J. Logic Comput. 26(1), 247–292 (2016)

  18. Gedikli, F., Jannach, D., Ge, M.: How should I explain? A comparison of different explanation types for recommender systems. Int. J. Hum. Comput. Stud. 72(4), 367–382 (2014)

  19. Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D.: A survey of methods for explaining black box models. ACM Comput. Surv. 51(5), 93:1–93:42 (2019). https://doi.org/10.1145/3236009

  20. Herlocker, J.L., Konstan, J.A., Riedl, J.: Explaining collaborative filtering recommendations. In: Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW, pp. 241–250 (2000)

  21. Miller, T.: Explanation in artificial intelligence: Insights from the social sciences. Artif. Intell. 267, 1–38 (2019). https://doi.org/10.1016/j.artint.2018.07.007

  22. Naveed, S., Donkers, T., Ziegler, J.: Argumentation-based explanations in recommender systems: conceptual framework and empirical results. In: Adjunct Publication of the 26th Conference on User Modeling, Adaptation and Personalization, UMAP, pp. 293–298 (2018)

  23. Rago, A., Cocarascu, O., Toni, F.: Argumentation-based recommendations: fantastic explanations and how to find them. In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI, pp. 1949–1955 (2018)

  24. Rago, A., Toni, F.: Quantitative argumentation debates with votes for opinion polling. In: An, B., Bazzan, A., Leite, J., Villata, S., van der Torre, L. (eds.) PRIMA 2017. LNCS (LNAI), vol. 10621, pp. 369–385. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-69131-2_22

  25. Rago, A., Toni, F., Aurisicchio, M., Baroni, P.: Discontinuity-free decision support with quantitative argumentation debates. In: KR, pp. 63–73 (2016)

  26. Vig, J., Sen, S., Riedl, J.: Tagsplanations: explaining recommendations using tags. In: Proceedings of the 14th International Conference on Intelligent User Interfaces, IUI, pp. 47–56 (2009)

  27. Čyras, K., Karamlou, A., Lee, M., Letsios, D., Misener, R., Toni, F.: AI-assisted schedule explainer for nurse rostering - Demonstration. In: International Conference on Autonomous Agents and Multi-Agent Systems (2020)

Author information

Correspondence to Francesca Toni.

Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Cocarascu, O., Rago, A., Toni, F. (2020). Explanation via Machine Arguing. In: Manna, M., Pieris, A. (eds.) Reasoning Web. Declarative Artificial Intelligence. Reasoning Web 2020. Lecture Notes in Computer Science, vol. 12258. Springer, Cham. https://doi.org/10.1007/978-3-030-60067-9_3

  • DOI: https://doi.org/10.1007/978-3-030-60067-9_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-60066-2

  • Online ISBN: 978-3-030-60067-9

  • eBook Packages: Computer Science, Computer Science (R0)
