Algorithmic and human decision making: for a double standard of transparency


Should decision-making algorithms be held to higher standards of transparency than human beings? How we answer this question directly shapes what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency, and that a double standard of transparency is hardly justified. We give two arguments to the contrary and specify two kinds of situations in which algorithmic decisions must meet higher standards of transparency than human ones. Our arguments have direct implications for what we should demand from explainable algorithms in decision-making contexts such as automated transportation.



  1.

    Zerilli et al. do not consider the different reasons that might justify a double standard. One such reason might be that, unlike machines, human beings have a right to privacy and so are protected from intrusive forms of transparency. It may turn out that AI systems are demanded to be transparent not because the transparency of human decision making is overestimated, but because humans enjoy rights that machines do not. Here, however, we do not develop this possibility any further.

  2.

    We suspect that our example generalises: whenever an artefact is malfunctioning due to a technical detail, design-level explanations are called for. Otherwise, we will not understand the artefact’s defective behavior.

  3.

    Zerilli et al. use the terms ‘action’ and ‘behavior’ interchangeably in their paper. For the purpose of this paper, we do the same.

  4.

    See Guidotti et al. (2018) for a survey of the methods of explainable AI and Kasirzadeh (2021) for a critical discussion.


  1. Binns R, Van Kleek M, Veale M, Lyngs U, Zhao J, Shadbolt N (2018) ‘It’s reducing a human being to a percentage’: perceptions of justice in algorithmic decisions. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM New York, NY, USA

  2. Corbett-Davies S, Pierson E, Feller A, Goel S, Huq A (2017) Algorithmic decision making and the cost of fairness. In: Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, p 797–806

  3. Creel KA (2020) Transparency in complex computational systems. Philos Sci 87(4):568–589


  4. Davis RH, Edelman D, Gammerman A (1992) Machine-learning algorithms for credit-card applications. IMA J Manag Math 4(1):43–51


  5. de Fine Licht K, de Fine Licht J (2020) Artificial intelligence, transparency, and public decision-making. AI Soc 1–10

  6. Dennett DC (1987) The intentional stance. MIT Press


  7. Feller A, Pierson E, Corbett-Davies S, Goel S (2016) A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear. The Washington Post, vol 17

  8. Gonzalez MF, Capman JF, Oswald FL, Theys ER, Tomczak DL (2019) “Where’s the IO?” Artificial intelligence and machine learning in talent management systems. Pers Assess Decis 5(3):5


  9. Guidotti R, Monreale A, Ruggieri S, Turini F, Giannotti F, Pedreschi D (2018) A survey of methods for explaining black box models. ACM Comput Surv 51(5):1–42


  10. Johnston P, Harris R (2019) The Boeing 737 MAX saga: lessons for software organizations. Softw Qual Prof 21(3):4–12


  11. Kasirzadeh A (2021) Reasons, values, stakeholders: a philosophical framework for explainable artificial intelligence. In: Proceedings of the 2021 ACM conference on Fairness, Accountability, and Transparency (FAccT 2021): 14

  12. Obermeyer Z, Powers B, Vogeli C, Mullainathan S (2019) Dissecting racial bias in an algorithm used to manage the health of populations. Science 366(6464):447–453


  13. Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1(5):206–215


  14. Schroeder T (2005) Moral responsibility and tourette syndrome. Philos Phenomenol Res 71(1):106–123


  15. Walmsley J (2020) Artificial intelligence and the value of transparency. AI Soc 1–11

  16. Zerilli J, Knott A, Maclaurin J, Gavaghan C (2019) Transparency in algorithmic and human decision-making: is there a double standard? Philos Technol 32(4):661–683




We would like to express our deep gratitude to Alistair Knott, James Maclaurin, and Colin Gavaghan for valuable discussions and suggestions at the University of Otago, New Zealand. Special thanks go to John Zerilli for extensive comments on an earlier draft of this paper. Furthermore, we are very grateful for the opportunity to present this work at a seminar of the Humanising Machine Intelligence project at the Australian National University. We are thankful for insightful feedback from the members of the project, in particular Seth Lazar, Sylvie Thiebaux, Damian Clifford, Pamela Robinson, and Jenny Davis. Finally, we would like to thank each other.

Author information



Corresponding author

Correspondence to Atoosa Kasirzadeh.



About this article


Cite this article

Günther, M., Kasirzadeh, A. Algorithmic and human decision making: for a double standard of transparency. AI & Soc (2021).



  • Algorithmic decision making
  • Transparency
  • Explainable AI