Should decision-making algorithms be held to higher standards of transparency than human beings? The way we answer this question directly impacts what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency and that a double standard of transparency is hardly justified. We give two arguments to the contrary and specify two kinds of situations in which higher standards of transparency are required from algorithmic decisions than from human ones. Our arguments have direct implications for what we demand from explainable algorithms in decision-making contexts such as automated transportation.
Zerilli et al. do not consider different reasons that might justify a double standard. One such reason might be that, unlike machines, human beings have a right to privacy and so are protected from intrusive forms of transparency. It may turn out that transparency is demanded of AI systems not because the transparency of human decision making is overestimated, but because humans enjoy rights that machines do not. Here, however, we will not develop this possibility any further.
We suspect that our example generalises: whenever an artefact malfunctions due to a technical detail, design-level explanations are called for. Otherwise we will not understand the artefact's defective behaviour.
Zerilli et al. use the terms 'action' and 'behaviour' interchangeably in their paper. For the purposes of this paper, we do the same.
Binns R, Van Kleek M, Veale M, Lyngs U, Zhao J, Shadbolt N (2018) ‘It’s reducing a human being to a percentage’: perceptions of justice in algorithmic decisions. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM New York, NY, USA
Corbett-Davies S, Pierson E, Feller A, Goel S, Huq A (2017) Algorithmic decision making and the cost of fairness. In: Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, p 797–806
Creel KA (2020) Transparency in complex computational systems. Philos Sci 87(4):568–589
Davis RH, Edelman D, Gammerman A (1992) Machine-learning algorithms for credit-card applications. IMA J Manag Math 4(1):43–51
de Fine Licht K, de Fine Licht J (2020) Artificial intelligence, transparency, and public decision-making. AI Soc 1–10
Dennett DC (1987) The intentional stance. MIT Press
Feller A, Pierson E, Corbett-Davies S, Goel S (2016) A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear. The Washington Post, vol 17
Gonzalez MF, Capman JF, Oswald FL, Theys ER, Tomczak DL (2019) “Where’s the IO?” Artificial intelligence and machine learning in talent management systems. Pers Assess Decis 5(3):5
Guidotti R, Monreale A, Ruggieri S, Turini F, Giannotti F, Pedreschi D (2018) A survey of methods for explaining black box models. ACM Comput Surv 51(5):1–42
Johnston P, Harris R (2019) The Boeing 737 MAX saga: lessons for software organizations. Softw Qual Prof 21(3):4–12
Kasirzadeh A (2021) Reasons, values, stakeholders: a philosophical framework for explainable artificial intelligence. In: Proceedings of the 2021 ACM conference on Fairness, Accountability, and Transparency (FAccT 2021), p 14
Obermeyer Z, Powers B, Vogeli C, Mullainathan S (2019) Dissecting racial bias in an algorithm used to manage the health of populations. Science 366(6464):447–453
Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1(5):206–215
Schroeder T (2005) Moral responsibility and tourette syndrome. Philos Phenomenol Res 71(1):106–123
Walmsley J (2020) Artificial intelligence and the value of transparency. AI Soc 1–11
Zerilli J, Knott A, Maclaurin J, Gavaghan C (2019) Transparency in algorithmic and human decision-making: is there a double standard? Philos Technol 32(4):661–683
We would like to express our deep gratitude to Alistair Knott, James Maclaurin, and Colin Gavaghan for valuable discussions and suggestions at the University of Otago, New Zealand. Special thanks go to John Zerilli for extensive comments on an earlier draft of this paper. Furthermore, we are very grateful for the opportunity to present this work at a seminar of the Humanising Machine Intelligence project at the Australian National University. We are thankful for insightful feedback from the members of the project, in particular Seth Lazar, Sylvie Thiebaux, Damian Clifford, Pamela Robinson, and Jenny Davis. Finally, we would like to thank each other.
Günther, M., Kasirzadeh, A. Algorithmic and human decision making: for a double standard of transparency. AI & Soc (2021). https://doi.org/10.1007/s00146-021-01200-5
- Algorithmic decision making
- Explainable AI