Collection

Article Symposium: Trustworthy AI (Mona Simion & Chris Kelp)

This paper develops an account of trustworthy AI. Its central idea is that whether AIs are trustworthy is a matter of whether they live up to their function-based obligations. We argue that this account advances the literature in two important ways. First, it provides a rationale for why a range of properties widely assumed in the scientific literature, as well as in policy, to be required of trustworthy AI, such as safety, justice, and explainability, are properties (often) instantiated by trustworthy AI. Second, it connects the discussion of trustworthy AI in policy, industry, and the sciences with the philosophical discussion of trustworthiness. We argue that extant accounts of trustworthiness in the philosophy literature cannot make proper sense of trustworthy AI and that our account compares favourably with its competitors on this front. Critical engagement by J. Adam Carter, Fei Song and Shane Ryan, Dong Yong Choi, and Rune Nyrup. Replies by Mona Simion and Chris Kelp.

Editors

  • Nikolaj Jang Lee Linding Pedersen

    Nikolaj Jang Lee Linding Pedersen is Underwood Distinguished Professor and Professor of Philosophy at Underwood International College, Yonsei University. His main research areas are epistemology, truth, metaphysics, and the philosophies of logic, mathematics, and technology. He is the founder of the Veritas Research Center (Yonsei University) and a co-founder of the Asian Epistemology Network and Eastern Hemisphere Language and Metaphysics Network.

Articles (5 in this collection)