Abstract
The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of "epistemic double standards": ought the standards for transparency in AI be higher than, or equivalent to, our standards for ordinary human reasoners? I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest, however, that a more fruitful exploration of this question will involve a different comparison class. We routinely treat judgments made by highly trained experts in specialized fields as fair or well grounded even though, by the nature of the expert/layperson division of epistemic labor, an expert will not be able to provide an explanation of the reasoning behind those judgments that makes sense to most other people. Nevertheless, laypeople are thought to act reasonably, and ethically, in deferring to the judgments of experts on matters within their areas of specialization. I suggest that we reframe our question regarding the appropriate standards of transparency in AI as one that asks when, why, and to what degree it would be ethical to accept opacity in AI. I argue that our epistemic relation to certain opaque AI technologies may be relevantly similar to the layperson's epistemic relation to the expert, such that the successful expert/layperson division of epistemic labor can serve as a blueprint for the ethical use of opaque AI.
Notes
See Barocas (2018).
These principles concern failure transparency (if an AI system causes harm, it should be possible to ascertain why), and judicial transparency (any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority).
Harm here is broadly construed to include (at minimum) opportunity costs, as well as intangible/unquantifiable harms such as rights violations, insufficient or inaccurate representation, harm to social reputation, and harm to self-esteem.
See Skerker, Purves, and Jenkins (2015) on the anti-codifiability problem in robot and machine ethics.
See Robbins (2019) on valuing efficiency rather than transparency in certain non-trivial cases.
There are epistemic advantages to increasing transparency in AI models, but for the purposes of this paper we focus solely on the ethical goals of requiring transparency in AI.
While “opaque” has a standard meaning in the literature on this topic, “transparent” has several common meanings when used in the context of AI models. A satisfactorily transparent AI model might be an interpretable model, or an explicable model, or it may be comprehensible to the relevant practitioner or stakeholder, etc. A thorough account of how “transparency” has been interpreted in the literature on AI regulation is beyond the scope of this discussion, but see Chen et al. 2018; Li et al. 2018; Lipton 2016; Miller 2017; Mittelstadt et al. 2019; Molnar 2019; Ribeiro et al. 2016; Rudin 2019; Zerilli 2022.
to whatever extent is required such that it would be ethically responsible to utilize that opaque technology in the particular ethically significant context in question.
References
Ahmed M (2018) Aided by Palantir, the LAPD uses predictive policing to monitor specific people and neighborhoods. The Intercept. https://theintercept.com/2018/05/11/predictive-policing-surveillance-los-angeles/
Barocas S (2018) Accounting for artificial intelligence: rules, reasons, rationales. In: Human rights, ethics, and artificial intelligence, 30 Nov. Harvard Kennedy School Carr Center for Human Rights Policy. Lecture
Barry-Jester A, Casselman B, Goldstein D (2015) The new science of sentencing. The Marshall Project. https://www.themarshallproject.org/2015/08/04/the-new-science-of-sentencing
Berk RA, Sorenson SB, Barnes G (2016) Forecasting domestic violence: a machine learning approach to help inform arraignment decisions. J Empir Leg Stud 13(1):94–115. https://doi.org/10.1111/jels.12098
Chen C et al (2018) This looks like that: deep learning for interpretable image recognition. Preprint at https://arxiv.org/abs/1806.10574
de Bruijne M (2016) Machine learning approaches in medical image analysis: from detection to diagnosis. Med Image Anal 33:94–97. https://doi.org/10.1016/j.media.2016.06.032
Dhar J, Ranganathan A (2015) Machine learning capabilities in medical diagnosis applications: computational results for hepatitis disease. Int J Biomed Eng Technol 17(4):330–340. https://doi.org/10.1504/IJBET.2015.069398
Erickson BJ, Korfiatis P, Akkus Z, Kline TL (2017) Machine learning for medical imaging. Radiographics 37(2):505–515. https://doi.org/10.1148/rg.2017160130
Ensign D, Friedler SA, Neville S, Scheidegger C, Venkatasubramanian S (2017) Runaway feedback loops in predictive policing. Proc Mach Learn Res 81:1–12. http://arxiv.org/abs/1706.09847
European Commission (2019) Ethics guidelines for trustworthy AI. https://ec.europa.eu/futurium/en/ai-allianceconsultation
Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Vayena E (2018) AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Mind Mach 28(4):689–707. https://doi.org/10.1007/s11023-018-9482-5
Future of Life Institute (2017) Asilomar AI Principles. https://futureoflife.org/ai-principles/
Goldman AI (2001) Experts: which ones should you trust? Philos Phenomenol Res 63(1):85–110
Goldman AI (2014) Social process reliabilism: solving justification problems in collective epistemology. In: Lackey J (ed) Essays in collective epistemology. Oxford University Press, Oxford, pp 11–41. https://doi.org/10.1093/acprof:oso/9780199665792.003.0002
Günther M, Kasirzadeh A (2022) Algorithmic and human decision making: for a double standard of transparency. AI & Soc. https://doi.org/10.1007/s00146-021-01200-5
Hardwig J (1985) Epistemic dependence. J Philos 82(7):335–349
Joh EE (2017) Feeding the machine: policing, crime data, & algorithms. William Mary Bill Rights J 26:287
Lackey J (2016) What is justified group belief? Philos Rev 125(3):341–396. https://doi.org/10.1215/00318108-3516946
Li O, Liu H, Chen C, Rudin C (2018) Deep learning for case-based reasoning through prototypes: a neural network that explains its predictions. In: Proceedings of the AAAI conference on artificial intelligence, pp 3530–3537
Lipton ZC (2016) The mythos of model interpretability. In: ICML workshop on human interpretability in machine learning (WHI), pp 96–100
London AJ (2019) Artificial intelligence and black-box medical decisions: accuracy versus explainability. Hastings Cent Rep 49(1):15–21. https://doi.org/10.1002/hast.973
Miller T (2017) Explanation in artificial intelligence: insights from the social sciences. arXiv preprint
Mittelstadt B, Russell C, Wachter S (2019) Explaining explanations in AI. In: Proceedings of fairness, accountability, and transparency (FAT*) (ACM, 2019)
Molnar C (2019) Interpretable machine learning: a guide for making black box models explainable. https://christophm.github.io/interpretable-ml-book/
Nadella S (2016) Microsoft’s CEO explores how humans and AI Can solve society’s challenges—together. Slate. https://slate.com/technology/2016/06/microsoft-ceo-satya-nadella-humans-and-a-i-can-work-together-to-solve-societyschallenges.html
O’Neil C (2016) Weapons of math destruction: how big data increases inequality and threatens democracy. Crown
Ribeiro MT, Singh S, Guestrin C (2016) “Why should I trust you?”: explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (KDD ’16), pp 1135–1144
Robbins S (2019) A misdirected principle with a catch: explicability for AI. Mind Mach 29:495–514. https://doi.org/10.1007/s11023-019-09509-3
Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1:206–215
Skerker M, Purves D, Jenkins R (2015) Autonomous machines, moral judgment, and acting for the right reasons. Ethical Theory Moral Pract 18(4):851–872 (Special Issue: BSET-2014)
Vincent J (2018) AI that detects cardiac arrests during emergency calls will be tested across Europe this summer. The Verge. https://www.theverge.com/2018/4/25/17278994/ai-cardiac-arrest-corti-emergency-call-response
Zerilli J (2022) Explaining machine learning decisions. Philos Sci 89(1):1–19
Zerilli J, Knott A, Maclaurin J et al (2019) Transparency in algorithmic and human decision-making: is there a double standard? Philos Technol 32:661–683. https://doi.org/10.1007/s13347-018-0330-6
Funding
No funding was received to assist with the preparation of this manuscript.
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Ross, A. AI and the expert; a blueprint for the ethical use of opaque AI. AI & Soc (2022). https://doi.org/10.1007/s00146-022-01564-2