
Expectation: Personalized Explainable Artificial Intelligence for Decentralized Agents with Heterogeneous Knowledge

  • Conference paper
  • In: Explainable and Transparent AI and Multi-Agent Systems (EXTRAAMAS 2021)

Abstract

Explainable AI (XAI) has emerged in recent years as a set of techniques and methodologies for interpreting and explaining machine learning (ML) predictors. Many initiatives have been proposed to date; nevertheless, current research efforts mainly focus on methods tailored to specific ML tasks and algorithms, such as image classification and sentiment analysis. Moreover, explanation techniques are still embryonic, and they mainly target ML experts rather than heterogeneous end users. Furthermore, existing solutions assume data to be centralised, homogeneous, and fully/continuously accessible; these circumstances are seldom found altogether in practice. Arguably, a system-wide perspective is currently missing.

The project “Personalized Explainable Artificial Intelligence for Decentralized Agents with Heterogeneous Knowledge” (Expectation) aims at overcoming such limitations. This manuscript presents the overall objectives and approach of the Expectation project, focusing on advancing the state of the art of XAI, both theoretically and practically, towards the construction of personalised explanations despite the decentralisation and heterogeneity of knowledge, agents, and explainees (both human and virtual).

To tackle the challenges posed by personalisation, decentralisation, and heterogeneity, the project fruitfully combines abstractions, methods, and approaches from the multi-agent systems, knowledge extraction/injection, negotiation, argumentation, and symbolic reasoning communities.


Notes

  1. https://www.chistera.eu/projects-call-2019.


Acknowledgments

This work has been partially supported by the CHIST-ERA grant CHIST-ERA-19-XAI-005, and by (i) the Swiss National Science Foundation (G.A. 20CH21_195530), (ii) the Italian Ministry for Universities and Research, (iii) the Luxembourg National Research Fund (G.A. INTER/CHIST/19/14589586 and INTER/Mobility/19/13995684/DLAl/van), and (iv) the Scientific and Technological Research Council of Turkey (TÜBİTAK, G.A. 120N680).

Author information


Corresponding author

Correspondence to Davide Calvaresi.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Calvaresi, D., et al. (2021). Expectation: Personalized Explainable Artificial Intelligence for Decentralized Agents with Heterogeneous Knowledge. In: Calvaresi, D., Najjar, A., Winikoff, M., Främling, K. (eds.) Explainable and Transparent AI and Multi-Agent Systems. EXTRAAMAS 2021. Lecture Notes in Computer Science, vol. 12688. Springer, Cham. https://doi.org/10.1007/978-3-030-82017-6_20


  • DOI: https://doi.org/10.1007/978-3-030-82017-6_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-82016-9

  • Online ISBN: 978-3-030-82017-6

  • eBook Packages: Computer Science, Computer Science (R0)
