Explainable and Ethical AI: A Perspective on Argumentation and Logic Programming

  • Conference paper
  • In: AIxIA 2020 – Advances in Artificial Intelligence (AIxIA 2020)

Abstract

In this paper we sketch a vision of explainability for intelligent systems as a logic-based approach that, once integrated with sub-symbolic techniques, can be injected into and exploited by the system's actors.

In particular, we show how argumentation can be combined with different extensions of logic programming – namely, abduction, inductive logic programming, and probabilistic logic programming – to address the issues of explainable AI as well as some ethical concerns about AI.
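As a minimal, self-contained illustration of the argumentation machinery the abstract refers to (a hypothetical Python sketch of Dung-style grounded semantics, not the authors' Arg2P/tuProlog implementation), the grounded extension of an abstract argumentation framework can be computed by iterating the characteristic function from the empty set to its least fixpoint:

```python
def grounded_extension(args, attacks):
    """Grounded extension of an abstract argumentation framework:
    least fixpoint of the characteristic function F(S), where F(S)
    is the set of arguments whose every attacker is counter-attacked
    by some member of S."""
    # Map each argument to the set of its attackers.
    attackers = {a: {b for (b, c) in attacks if c == a} for a in args}

    def defended_by(s):
        # An argument is defended by s if each of its attackers
        # is itself attacked by some argument in s.
        return {a for a in args
                if all(any((d, b) in attacks for d in s)
                       for b in attackers[a])}

    s = set()
    while True:
        nxt = defended_by(s)
        if nxt == s:
            return s
        s = nxt


# a attacks b, b attacks c: "a" is unattacked and defends "c",
# so the grounded extension is {"a", "c"}.
framework = ({"a", "b", "c"}, {("a", "b"), ("b", "c")})
print(sorted(grounded_extension(*framework)))  # prints ['a', 'c']
```

The same skeptical, fixpoint-based construction is what makes argumentation attractive for explanation: the iteration itself is a step-by-step justification of why each argument is accepted.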

Roberta Calegari and Giovanni Sartor have been supported by the H2020 ERC Project “CompuLaw” (G.A. 833647). Andrea Omicini has been supported by the H2020 Project “AI4EU” (G.A. 825619).


Notes

  1. http://arg2p.apice.unibo.it.


Author information

Corresponding author: Roberta Calegari.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Calegari, R., Omicini, A., Sartor, G. (2021). Explainable and Ethical AI: A Perspective on Argumentation and Logic Programming. In: Baldoni, M., Bandini, S. (eds) AIxIA 2020 – Advances in Artificial Intelligence. AIxIA 2020. Lecture Notes in Computer Science, vol 12414. Springer, Cham. https://doi.org/10.1007/978-3-030-77091-4_2

  • DOI: https://doi.org/10.1007/978-3-030-77091-4_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-77090-7

  • Online ISBN: 978-3-030-77091-4

  • eBook Packages: Computer Science, Computer Science (R0)
