Abstract
Through the General Data Protection Regulation (GDPR), the European Union has set out its vision for Automated Decision-Making (ADM) and AI, which must be reliable and human-centred. In particular, we are interested in the Right to Explanation, which requires industry to produce explanations of ADM. The High-Level Expert Group on Artificial Intelligence (AI-HLEG), set up to support the implementation of this vision, has produced guidelines discussing the types of explanations that are appropriate for user-centred (interactive) explanatory tools. In this paper we propose our version of Explanatory Narratives (ENs), based on user-centred concepts drawn from ISO 9241, as a model for user-centred explanations aligned with the GDPR and the AI-HLEG guidelines. Through ENs we convert the problem of generating explanations for ADM into that of identifying an appropriate path over an Explanatory Space, allowing explainees to explore it interactively and produce the explanation best suited to their needs. To this end we list suitable exploration heuristics, study the properties and structure of explanations, and discuss the proposed model, identifying its weaknesses and strengths.
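The abstract's core move, recasting explanation generation as finding a path over an Explanatory Space, can be illustrated with a minimal sketch. Everything here is a hypothetical illustration, not the authors' implementation: the space is abstracted as a weighted graph whose nodes are explanatory aspects of an automated decision and whose edge weights stand in for an exploration heuristic's cost (lower meaning more relevant to the explainee).

```python
import heapq

def best_explanatory_path(space, start, goal):
    """Best-first search over a toy 'Explanatory Space'.

    `space` maps each explanatory aspect to its neighbours, weighted
    by a heuristic cost. Returns the cheapest path, i.e. the sequence
    of explanatory steps forming one candidate narrative, plus its cost.
    """
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for neighbour, step_cost in space.get(node, {}).items():
            if neighbour not in visited:
                heapq.heappush(
                    frontier, (cost + step_cost, neighbour, path + [neighbour])
                )
    return None, float("inf")

# Hypothetical space: nodes and weights are invented for illustration only.
toy_space = {
    "decision": {"input features": 1, "legal basis": 2},
    "input features": {"feature relevance": 1},
    "legal basis": {"feature relevance": 3},
    "feature relevance": {"counterfactual": 1},
    "counterfactual": {},
}
path, cost = best_explanatory_path(toy_space, "decision", "counterfactual")
print(path, cost)
```

In the paper's terms, an interactive explanatory tool would let the explainee steer this search rather than run it to completion, so different explainees extract different narratives from the same space.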
Notes
- 1.
These ontologies do not necessarily have to be explicit, formal or complete.
- 2.
It is not excluded that the original purposes might change during the explanatory process.
References
Arya, V., et al.: One explanation does not fit all: a toolkit and taxonomy of AI explainability techniques. arXiv preprint arXiv:1909.03012 (2019)
Athan, T., Boley, H., Governatori, G., Palmirani, M., Paschke, A., Wyner, A.Z.: OASIS LegalRuleML. In: ICAIL, vol. 13, pp. 3–12 (2013)
Bennett, W.L., Feldman, M.S.: Reconstructing Reality in the Courtroom. Quid Pro Books, Tavistock (1981)
Berland, L.K., Reiser, B.J.: Making sense of argumentation and explanation. Sci. Educ. 93(1), 26–55 (2009)
Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., Floridi, L.: Artificial intelligence and the ‘good society’: the US, EU, and UK approach. Sci. Eng. Ethics 24(2), 505–528 (2017). https://doi.org/10.1007/s11948-017-9901-7
Cocarascu, O., Rago, A., Toni, F.: Extracting dialogical explanations for review aggregations with argumentative dialogical agents. In: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pp. 1261–1269. International Foundation for Autonomous Agents and Multiagent Systems (2019)
Čyras, K., et al.: Explanations by arbitrated argumentative dispute. Expert Syst. Appl. 127, 141–156 (2019)
Driver, R., Newton, P., Osborne, J.: Establishing the norms of scientific argumentation in classrooms. Sci. Educ. 84(3), 287–312 (2000)
Floridi, L., et al.: AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach. 28(4), 689–707 (2018)
Fox, M., Long, D., Magazzeni, D.: Explainable planning. arXiv preprint arXiv:1709.10256 (2017)
AI-HLEG: Ethics guidelines for trustworthy AI (2019)
AI-HLEG: Policy and investment recommendations (2019)
ICO: Project explain interim report (2019). https://ico.org.uk/about-the-ico/research-and-reports/project-explain-interim-report/. Accessed 05 Jan 2020
Lipton, P.: What good is an explanation? In: Hon, G., Rakover, S.S. (eds.) Explanation. Synthese Library (Studies in Epistemology, Logic, Methodology, and Philosophy of Science), vol. 302, pp. 43–59. Springer, Dordrecht (2001). https://doi.org/10.1007/978-94-015-9731-9_2
Meyer, J.J.C.: Deontic logic: a concise overview. In: Deontic Logic in Computer Science: Normative System Specification, pp. 3–16. Wiley (1993)
Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2018)
Norris, S.P., Guilbert, S.M., Smith, M.L., Hakimelahi, S., Phillips, L.M.: A theoretical framework for narrative explanation in science. Sci. Educ. 89(4), 535–563 (2005)
Palmirani, M., Governatori, G.: Modelling legal knowledge for GDPR compliance checking. In: JURIX, pp. 101–110 (2018)
Passmore, J.: Explanation in everyday life, in science, and in history. Hist. Theory 2(2), 105–123 (1962)
Pearl, J.: The seven tools of causal inference, with reflections on machine learning. Commun. ACM 62(3), 54–60 (2019)
Prakken, H.: An argumentation-based analysis of the Simonshaven case. In: Topics in Cognitive Science (2019)
Raymond, A., Gunes, H., Prorok, A.: Culture-based explainable human-agent deconfliction. arXiv preprint arXiv:1911.10098 (2019)
Sandoval, W.A., Reiser, B.J.: Explanation-driven inquiry: integrating conceptual and epistemic scaffolds for scientific inquiry. Sci. Educ. 88(3), 345–372 (2004)
Suthers, D.D., Toth, E.E., Weiner, A.: An integrated approach to implementing collaborative inquiry in the classroom. In: Proceedings of the 2nd International Conference on Computer Support for Collaborative Learning, pp. 275–282. International Society of the Learning Sciences (1997)
Verheij, B., et al.: Arguments, scenarios and probabilities: connections between three normative frameworks for evidential reasoning. Law Probab. Risk 15(1), 35–70 (2015)
Wachter, S., Mittelstadt, B., Russell, C.: Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv. JL Tech. 31, 841 (2017)
WP29: Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679 (WP251rev.01). European Commission (2016)
Zhong, Q., Fan, X., Luo, X., Toni, F.: An explainable multi-attribute decision model based on argumentation. Expert Syst. Appl. 117, 42–61 (2019)
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Sovrano, F., Vitali, F., Palmirani, M. (2020). Modelling GDPR-Compliant Explanations for Trustworthy AI. In: Kő, A., Francesconi, E., Kotsis, G., Tjoa, A., Khalil, I. (eds) Electronic Government and the Information Systems Perspective. EGOVIS 2020. Lecture Notes in Computer Science(), vol 12394. Springer, Cham. https://doi.org/10.1007/978-3-030-58957-8_16
Print ISBN: 978-3-030-58956-1
Online ISBN: 978-3-030-58957-8
eBook Packages: Computer Science, Computer Science (R0)