
AI and the expert; a blueprint for the ethical use of opaque AI

  • Original Article
  • Published:
AI & SOCIETY

Abstract

The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of “epistemic double standards”: whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest, however, that a more fruitful exploration of this question will involve a different comparison class. We routinely treat judgments made by highly trained experts in specialized fields as fair or well grounded even though—by the nature of the expert/layperson division of epistemic labor—an expert will not be able to provide an explanation of the reasoning behind these judgments that makes sense to most other people. Nevertheless, laypeople are thought to be acting reasonably—and ethically—in deferring to the judgments of experts that concern their areas of specialization. I suggest that we reframe our question regarding the appropriate standards of transparency in AI as one that asks when, why, and to what degree it would be ethical to accept opacity in AI. I argue that our epistemic relation to certain opaque AI technology may be relevantly similar to the layperson’s epistemic relation to the expert in certain respects, such that the successful expert/layperson division of epistemic labor can serve as a blueprint for the ethical use of opaque AI.


Notes

  1. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.

  2. See Barocas (2018).

  3. These principles concern failure transparency (if an AI system causes harm, it should be possible to ascertain why), and judicial transparency (any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority).

  4. Harm here is broadly construed to include (at minimum) opportunity costs, as well as intangible/unquantifiable harms such as rights violations, insufficient or inaccurate representation, harm to social reputation, and harm to self-esteem.

  5. See Skerker, Purves, and Jenkins (2015) on the anti-codifiability problem in robot and machine ethics.

  6. See London (2019) and Vincent (2018).

  7. See Robbins (2019) on valuing efficiency rather than transparency in certain non-trivial cases.

  8. There are epistemic advantages to increasing transparency in AI models, but for the sake of this paper we are focusing solely on the ethical goals of requiring transparency in AI.

  9. See Goldman (2001), Goldman (2014), and Lackey (2016).

  10. While “opaque” has a standard meaning in the literature on this topic, “transparent” has several common meanings when used in the context of AI models. A satisfactorily transparent AI model might be an interpretable model, or an explicable model, or it may be comprehensible to the relevant practitioner or stakeholder, etc. A thorough account of how “transparency” has been interpreted in the literature on AI regulations is beyond the scope of this discussion, but see Chen 2018; Li et al. 2018; Lipton 2016; Miller 2017; Mittelstadt et al. 2019; Molnar 2019; Ribeiro 2016; Rudin 2019; Zerilli 2019.

  11. To whatever extent is required such that it would be ethically responsible to utilize that opaque technology in the particular ethically significant context in question.

References


Funding

No funding was received to assist with the preparation of this manuscript.

Author information


Corresponding author

Correspondence to Amber Ross.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Ross, A. AI and the expert; a blueprint for the ethical use of opaque AI. AI & Soc (2022). https://doi.org/10.1007/s00146-022-01564-2


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s00146-022-01564-2

