Transparency and the Black Box Problem: Why We Do Not Trust AI

  • Research Article
  • Published in Philosophy & Technology

Abstract

As routine decisions are increasingly automated, and as the information architectures operating this automation grow more intricate and complex, concerns about the trustworthiness of these systems are mounting. These concerns are exacerbated by a class of artificial intelligence (AI) that uses deep learning (DL), an algorithmic system of deep neural networks whose operations remain largely opaque to human comprehension. This situation is commonly referred to as the black box problem in AI. Without understanding how AI reaches its conclusions, it is an open question to what extent we can trust these systems. The question of trust becomes more urgent as we delegate more and more decision-making to AI and increasingly rely on it to safeguard significant human goods, such as security, healthcare, and safety. Models that “open the black box” by making the non-linear and complex decision process understandable to human observers are promising solutions to the black box problem in AI, but they are limited, at least in their current state, in their ability to make these processes less opaque to most observers. A philosophical analysis of trust shows why transparency is a necessary condition for trust and, ultimately, for judging AI to be trustworthy. A more fruitful route to establishing trust in AI is to acknowledge that AI is situated within a socio-technical system that mediates trust; by increasing the trustworthiness of these systems, we thereby increase trust in AI.


Notes

  1. Opacity and the black box problem are not exclusive to DL, since other forms of machine learning can also be opaque. Because DL is paradigmatic of the black box problem, it is the focus of this paper.

  2. ProPublica’s analysis of COMPAS is useful for heuristic purposes but not without criticism. Subsequent analyses have raised questions about ProPublica’s conclusions regarding racial bias but have uncovered other serious concerns that remain hidden due to a lack of transparency (Fisher et al., 2019; Rudin et al., 2020). Another analysis suggests that COMPAS is no fairer or more accurate than human judgment (Dressel & Farid, 2018).


Author information


Corresponding author

Correspondence to Warren J. von Eschenbach.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

von Eschenbach, W.J. Transparency and the Black Box Problem: Why We Do Not Trust AI. Philos. Technol. 34, 1607–1622 (2021). https://doi.org/10.1007/s13347-021-00477-0
