
Beyond explainability: justifiability and contestability of algorithmic decision systems

Original Article · AI & SOCIETY

Abstract

In this paper, we point out that explainability is useful but not sufficient to ensure the legitimacy of algorithmic decision systems. We argue that the key requirements for high-stakes decision systems should be justifiability and contestability. We highlight the conceptual differences between explanations and justifications, provide dual definitions of justifications and contestations, and suggest different ways to operationalize justifiability and contestability.
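
To make these distinctions concrete, the sketch below separates the three objects discussed in the paper as simple data structures. This is a minimal illustration in Python under our own naming assumptions (the classes, fields, and the `contest` helper are not the paper's formalism): an explanation relates the inputs of an algorithmic decision system (ADS) to its output, a justification relates the output to an external norm, and a contestation is the dual object that challenges a justification.

```python
# Minimal sketch of the explanation / justification / contestation split.
# All names are illustrative assumptions, not the authors' formal framework.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    inputs: dict    # feature values the ADS consumed
    outcome: str    # e.g. "loan_denied"

@dataclass
class Explanation:
    """Input-output account: which factors drove the outcome. No norm involved."""
    salient_factors: dict  # e.g. {"debt_ratio": 0.8, "income": -0.6}

@dataclass
class Justification:
    """Normative account: why the outcome is appropriate under an external norm."""
    norm: str       # hypothetical norm identifier, e.g. "lending guideline LG-42"
    argument: str   # how the decision satisfies that norm

@dataclass
class Contestation:
    """Dual of a justification: a challenge to the norm or to its application."""
    target: Justification
    ground: str     # e.g. "norm misapplied: debt ratio computed before taxes"

def contest(justification: Justification,
            challenge: Callable[[Justification], Optional[str]]) -> Optional[Contestation]:
    """Return a Contestation if the challenger articulates a ground, else None."""
    ground = challenge(justification)
    return Contestation(justification, ground) if ground else None
```

The dual structure is the point: a contestation is well formed only relative to a justification, since it must name the norm-based argument it attacks, whereas an explanation merely describes the input-output behaviour of the system.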

Notes

  1. Or, more generally, that the outcomes of an ADS are appropriate (global justification).

  2. This is also the case for “causal explanations”: even though the notion of cause is complex and used with a variety of meanings in the literature, causal explanations are generally based on relations between ADS inputs and outputs, without reference to any external norm (Alvarez-Melis and Jaakkola 2017).

  3. Mireille Hildebrandt takes as an illustration the example of courts of justice: “When a court decides a case, it cannot justify its decision by spelling out the heuristics of the judge(s) involved, such as their political preferences, what they had for breakfast or how they prepared the case.”

  4. For example, the factors (input values) that had the strongest impact on the outcome.

  5. For example, in the form of a decision tree or a list of rules.

  6. Global versus local legitimacy.

  7. More precisely “a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her.”

  8. It should be noted, however, that the interpretation of the GDPR regarding explainability requirements has stimulated some debate among legal experts (Wachter et al. 2016; Malgieri and Comandé 2017). Some lawyers have also suggested that the GDPR may provide for certain types of justifications, but this idea requires further analysis (Hamon et al. 2021).

  9. Together with respect for human autonomy, prevention of harm and fairness.

  10. In Sect. 5, we discuss contexts, such as autonomous agents, in which an ADS can incorporate certain norms.

  11. The punishment must fit the crime and be proportionate to the severity of the infraction.

  12. The punishment discourages people from committing crimes.

  13. The punishment positively prevents someone from offending, for example through imprisonment.

  14. Which emphasizes instead the potential recovery of offenders and their inclusion in the social body.

  15. John Monahan and Jennifer L. Skeem make a similar argument in their analysis of risk assessment in criminal sentencing (Monahan and Skeem 2016). Chelsea Barabas and her co-authors go further, suggesting that machine learning should be used not for risk prediction but for risk mitigation, because empirical analysis has demonstrated that it is “ineffective at lowering near-term risks (failure to appear and new criminal activity) and long-term recidivism rates”.

  16. Reuben Binns’ example (Binns 2018) illustrates the fact that justifications and contestations are essential parts of accountability: “For instance, a bank deploying an automated credit scoring system might be held accountable by a customer whose loan application has been automatically denied. Accountability in this scenario might consist of a demand by the customer that the bank provide justification for the decision; […] and a final step, in which the customer either accepts the justification, or rejects it, in which case the bank might have to revise or reprocess their decision with a human agent, or face some form of sanction.” A minimal code sketch of this accountability loop appears after these notes.

  17. Because they use it, explicitly or implicitly, or because they are subject to decisions taken by professionals.

  18. As stated by Finale Doshi-Velez and Been Kim (2017), “for complex tasks, the end-to-end system is almost never completely testable; one cannot create a complete list of scenarios in which the system may fail. Enumerating all possible outputs given all possible inputs [may] be computationally or logistically infeasible, and we may be unable to flag all undesirable outputs”.

  19. For example, as mentioned in (Doshi-Velez and Kim 2017), “the human may want to guard against certain kinds of discrimination, and their notion of fairness may be too abstract to be completely encoded into the system”.

  20. With standard keywords, such as first, furthermore, however, etc.

  21. Note that the word “explanation” is used in the sense of “justification” by the authors of Doshi-Velez et al. (2019), or “motivation” in legal parlance.

  22. The norm, in our terminology.

  23. As an illustration, a recent survey (BEUC The European Consumer Organization 2019) conducted by BEUC across nine EU countries shows that in all of them, a majority of people “agree or strongly agree that companies are using AI to manipulate consumer decisions”.

  24. In the proposal, “user” is defined as “any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity”.
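
As a complement to note 16, the following minimal sketch renders Binns’s accountability loop in code. It is illustrative Python under our own naming assumptions (neither Binns nor this paper specifies such an interface): the customer demands a justification, then either accepts or rejects it; rejection forces the bank to revise or reprocess the decision with a human agent.

```python
# Illustrative sketch of the accountability loop described by Binns (2018);
# function and type names are our assumptions, not a prescribed interface.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class LoanDecision:
    applicant_id: str
    granted: bool
    justification: Optional[str] = None  # norm-based reasons, supplied on demand

def accountability_loop(
    decision: LoanDecision,
    provide_justification: Callable[[LoanDecision], str],
    customer_accepts: Callable[[str], bool],
    human_review: Callable[[LoanDecision], LoanDecision],
) -> LoanDecision:
    # Step 1: the customer demands a justification; the bank supplies one.
    decision.justification = provide_justification(decision)
    # Step 2: the customer either accepts or rejects the justification.
    if customer_accepts(decision.justification):
        return decision               # the justification stands
    # Step 3: rejection triggers revision or reprocessing by a human agent.
    return human_review(decision)
```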

References

  • Abdul A, Vermeulen J, Wang D et al (2018) Trends and trajectories for explainable, accountable and intelligible systems: an HCI research agenda. Proceedings of the 2018 CHI conference on human factors in computing systems-CHI ’18. ACM Press, London, pp 1–18

  • Adadi A, Berrada M (2018) Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6:52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052

  • Alvarez-Melis D, Jaakkola TS (2017) A causal framework for explaining the predictions of black-box sequence-to-sequence models. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP'17). https://www.aclweb.org/anthology/D17-1042

  • Ananny M, Crawford K (2018) Seeing without knowing: limitations of the transparency ideal and its application to algorithmic accountability. New Media Soc 20:973–989. https://doi.org/10.1177/1461444816676645

  • Arrieta AB, Díaz-Rodríguez N, Del Ser J et al (2019) Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf Fusion 58:82–115

  • Atkinson K, Baroni P, Giacomin M et al (2017) Towards artificial argumentation. AI Mag 38:25–36. https://doi.org/10.1609/aimag.v38i3.2704

  • ter Beek MH, Gnesi S, Knapp A (2018) Formal methods for transport systems. Int J Softw Tools Technol Transf 20:237–241. https://doi.org/10.1007/s10009-018-0487-4

  • Berk R, Heidari H, Jabbari S, et al (2017) Fairness in criminal justice risk assessments: the state of the art. Sociol Methods Res. https://doi.org/10.1177/0049124118782533

  • Bernstein S (2005) Legitimacy in global environmental governance. J Int Law Int Relat 1:139–166

  • BEUC The European Consumer Organization (2019) Artificial Intelligence: what consumers say. Findings and policy recommendations of a multi-country survey on AI. https://www.beuc.eu/publications/beuc-x-2020-078_artificial_intelligence_what_consumers_say_report.pdf

  • Bex F, Walton D (2011) Combining explanation and argumentation in dialogue. Argum Comput 7:55–68

  • Binns R (2018) Algorithmic accountability and public reason. Philos Technol 31:543–556

  • Biran O, Cotton C (2017) Explanation and justification in machine learning: a survey. In: IJCAI-17 Workshop on Explainable AI (XAI), p 8

  • Biran O, McKeown KR (2014) Justification narratives for individual classifications. In: ICML

  • Biran O, McKeown KR (2017) Human-centric justification of machine learning predictions. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pp 1461–1467

  • Black J (2008) Constructing and contesting legitimacy and accountability in polycentric regulatory regimes. Regul Gov 2:137–164. https://doi.org/10.1111/j.1748-5991.2008.00034.x

  • Bovens M (2007) Analysing and assessing accountability: a conceptual framework. Eur Law J 13(4):447–468

  • Bovens M (2006) Analysing and assessing public accountability: a conceptual framework. CONNEX and EUROGOV networks. https://www.ihs.ac.at/publications/lib/ep7.pdf

  • Castelluccia C, Le Métayer D (2019) Understanding algorithmic decision-making: opportunities and challenges. Report for the European Parliament (Panel for the Future of Science and Technology-STOA)

  • Castelluccia C, Le Métayer D (2020) Position paper: analyzing the impacts of facial recognition. In: Antunes L, Naldi M, Italiano GF et al (eds) Privacy technologies and policy. 8th Annual Privacy Forum, APF 2020. Springer International Publishing, Cham, pp 43–57

  • Center for Data Ethics and Innovation (CDEI) (2020) AI Barometer Report

  • Chetali B, Nguyen Q-H (2008) Industrial use of formal methods for a high-level security evaluation. In: Cuellar J, Maibaum T, Sere K (eds) FM 2008: formal methods. Springer, Berlin, pp 198–213

  • Christin A, Rosenblat A, Boyd D (2015) Courts and predictive algorithms. Primer for the Data and Civil Rights Conference: a new era of policing and justice

  • Corfield D (2010) Varieties of justification in machine learning. Mind Mach 20:291–301. https://doi.org/10.1007/s11023-010-9191-1

  • Cowls J, Floridi L (2018) Prolegomena to a white paper on an ethical framework for a good AI society. SSRN Electron J. https://doi.org/10.2139/ssrn.3198732

  • Crawford K, Schultz J (2014) Big data and due process: toward a framework to redress predictive privacy harms. Boston Coll Law Rev 55:93

  • Danaher J (2016) The threat of algocracy: reality, resistance and accommodation. Philos Technol 29:245–268

  • de Fine Licht K, de Fine Licht J (2020) Artificial intelligence, transparency, and public decision-making. AI Soc 35:917–926

  • Doshi-Velez F, Kim B (2017) Towards a rigorous science of interpretable machine learning. https://arxiv.org/abs/1702.08608

  • Doshi-Velez F, Kortz M, Budish R, et al (2019) Accountability of AI under the law: the role of explanation. https://arxiv.org/ftp/arxiv/papers/1711/1711.01134.pdf

  • European Commission (2020) Proposal for a regulation of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31/EC.

  • European Commission (2021) Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.

  • European Parliament (2020) Report with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies.

  • Fredriksson M, Tritter J (2017) Disentangling patient and public involvement in healthcare decisions: why the difference matters. Sociol Health Illn 39(1):95–111

  • Gebru T, Morgenstern J, Vecchione B, et al (2020) Datasheets for datasets. https://arxiv.org/pdf/1803.09010.pdf

  • Government of Canada (2019) Directive on automated decision-making. https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592

  • Guidotti R, Monreale A, Ruggieri S et al (2018) A survey of methods for explaining black box models. ACM Comput Surv (CSUR) 51:93

  • Hamon R, Junklewitz H, Malgieri G et al (2021) Impossible explanations? Beyond explainable AI in the GDPR from a COVID-19 use case scenario. Proc ACM Conf Fairness Account Transpar. https://doi.org/10.1145/3442188.3445917

  • Henin C, Le Métayer D (2020) A generic framework for black-box Explanations. In: Proceedings of the International Workshop on Fair and Interpretable Learning Algorithms (FILA 2020), IEEE

  • Henin C, Le Métayer D (2021a) A multi-layered approach for tailored black-box explanations. In: Pattern recognition. ICPR international workshops and challenges. Lecture Notes in Computer Science, vol 12663. Springer, Cham

  • Henin C, Le Métayer D (2021b) A framework to contest and justify algorithmic decisions. AI Ethics. https://doi.org/10.1007/s43681-021-00054-3

  • Hildebrandt M (2019) Privacy as protection of the incomputable self: from agnostic to agonistic machine learning. Theor Inq Law 20:83–122

  • Hirsch T, Merced K, Narayanan S et al (2017) Designing contestability: interaction design, machine learning, and mental health. In: Proceedings of the 2017 Conference on Designing Interactive Systems. Association for Computing Machinery, New York, NY, USA, pp 95–99

  • HLEG-AI (2019) Ethics guidelines for trustworthy AI. European Commission High-Level Expert Group on Artificial Intelligence. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

  • Irving G, Christiano P, Amodei D (2018) AI safety via debate. https://arxiv.org/abs/1805.00899

  • Kaminski ME (2019) Binary governance: lessons from the GDPR’s approach to algorithmic accountability. SSRN J. https://doi.org/10.2139/ssrn.3351404

  • Kaminski ME, Malgieri G (2020) Algorithmic impact assessments under the GDPR: producing multi-layered explanations. Int Data Priv Law. https://doi.org/10.1093/idpl/ipaa020

  • Kim B (2015) Interactive and interpretable machine learning models for human machine collaboration. PhD Thesis, Massachusetts Institute of Technology

  • Kluttz DN, Kohli N, Mulligan DK (2020) Shaping our tools: contestability as a means to promote responsible algorithmic decision making in the professions. In: Werbach K (ed) After the digital tornado: networks, algorithms, humanity. Cambridge University Press, Cambridge, pp 137–152

  • Langley P (2019) Explainable, normative, and justified agency. Proc AAAI Conf Artif Intell 33:9775–9779. https://doi.org/10.1609/aaai.v33i01.33019775

  • Laugel T, Lesot M-J, Marsala C, et al (2019) The dangers of post-hoc interpretability: unjustified counterfactual explanations. In: Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI-19). https://www.ijcai.org/proceedings/2019/0388.pdf

  • Lei T, Barzilay R, Jaakkola T (2016) Rationalizing neural predictions. In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pp 107–117

  • Liao B, Anderson M, Anderson SL (2020) Representation, justification, and explanation in a value-driven agent: an argumentation-based approach. AI Ethics. https://doi.org/10.1007/s43681-020-00001-8

  • Loi M, Ferrario A, Vigano E (2020) Transparency as design publicity: explaining and justifying inscrutable algorithms. Ethics Inf Technol. https://doi.org/10.1007/s10676-020-09564-w

  • Madumal P, Miller T, Sonenberg L, Vetere F (2019) A grounded interaction protocol for explainable artificial intelligence. In: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems. International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, pp 1033–1041

  • Malgieri G, Comandé G (2017) Why a right to legibility of automated decision-making exists in the general data protection regulation. Int Data Priv Law. https://doi.org/10.1093/idpl/ipx019

  • Miller T (2017) Explanation in artificial intelligence: insights from the social sciences. Artif Intell. https://doi.org/10.1016/j.artint.2018.07.007

  • Miller T, Howe P, Sonenberg L (2017) Explainable AI: beware of inmates running the asylum. In: IJCAI-17 Workshop on Explainable AI (XAI)

  • Mitchell M, Wu S, Zaldivar A et al (2019) Model cards for model reporting. Proc Conf Fairness Account Transp. https://doi.org/10.1145/3287560.3287596

  • Mittelstadt B, Russell C, Wachter S (2018) Explaining explanations in AI. Proc Conf Fairness Account Transp. https://doi.org/10.1145/3287560.3287574

  • Mohseni S, Zarei N, Ragan ED (2020) A multidisciplinary survey and framework for design and evaluation of explainable AI systems. ACM Trans Interact Intell Syst 1:1

  • Monahan J, Skeem J (2016) Risk assessment in criminal sentencing. Annu Rev Clin Psychol 12:489–513

  • Morley J, Floridi L, Kinsey L, Elhalal A (2020) From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci Eng Ethics 26:2141–2168

  • Mueller ST, Hoffman RR, Clancey W, Emrey A, Klein G (2019) Explanation in human-AI systems: a literature meta-review synopsis of key ideas and publications and bibliography for explainable AI. https://arxiv.org/abs/1902.01876

  • Narayanan A (2019) How to recognize AI snake oil.

  • Opdebeek I, De Somer S (2016) The duty to give reasons in the European legal area: a mechanism for transparent and accountable administrative decision-making? A comparison of Belgian, Dutch, French and EU administrative law. Rocznik Administracji Publicznej 2

  • Persad G, Wertheimer A, Emanuel EJ (2009) Principles for allocation of scarce medical interventions. Lancet 373:423–431. https://doi.org/10.1016/S0140-6736(09)60137-9

  • Peter F (2017) Political legitimacy. In: Zalta EN (ed) The Stanford encyclopedia of philosophy. Metaphysics Research Lab, Stanford University, Stanford

  • Reisman D, Schultz J, Crawford K, Whittaker M (2018) Algorithmic impact assessments: a practical framework for public agency accountability. AI Now Institute Report

  • Robbins S (2019) A misdirected principle with a catch: explicability for AI. Mind Mach 29:495–514

  • Rouvroy A (2013) The end(s) of critique: data-behaviourism vs. due process. In: Hildebrandt M, de Vries K (eds) Privacy, due process and the computational turn: the philosophy of law meets the philosophy of technology. Routledge, London

  • Rouvroy A (2015) A few thoughts in preparation for the discrimination and big data conference organized by constant at the CPDP. https://www.academia.edu/10177775/A_few_thoughts_in_preparation_for_the_Discrimination_and_Big_Data_conference_organized_by_Constant_at_the_CPDP_Brussels_22_january_2015_paper_video_

  • Suchman MC (1995) Managing legitimacy: strategic and institutional approaches. Acad Manag Rev 20:571–610

  • Swartout WR (1981) Producing explanations and justifications of expert consulting programs. MIT Laboratory for Computer Science, Technical Report MIT/LCS/TR-251.

  • Taddeo M, Floridi L (2018) How AI can be a force for good. Science 361:751–752. https://doi.org/10.1126/science.aat5991

  • van Kersbergen K, van Waarden F (2004) ‘Governance’ as a bridge between disciplines: cross-disciplinary inspiration regarding shifts in governance and problems of governability, accountability, and legitimacy. Eur J Polit Res 43:143–171

  • Wachter S, Mittelstadt B, Floridi L (2016) Why a right to explanation of automated decision-making does not exist in the general data protection regulation. Int Data Priv Law. https://doi.org/10.1093/idpl/ipx005

  • Waldman AE (2019) Power, process, and automated decision-making. Fordham Law Rev 88:613

  • Wroblewski J (1971) Legal decision and its justification. Logique Et Anal (NS) 14:409–419

Author information

Correspondence to Clément Henin.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cite this article

Henin, C., Le Métayer, D. Beyond explainability: justifiability and contestability of algorithmic decision systems. AI & Soc 37, 1397–1410 (2022). https://doi.org/10.1007/s00146-021-01251-8
