Ethics and Information Technology, Volume 13, Issue 1, pp 17–27

Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?”

Authors

  • F. S. Grodzinsky
    • Sacred Heart University
  • K. W. Miller
    • University of Illinois Springfield
  • M. J. Wolf
    • Bemidji State University
Original Paper

DOI: 10.1007/s10676-010-9255-1

Cite this article as:
Grodzinsky, F.S., Miller, K.W. & Wolf, M.J. Ethics Inf Technol (2011) 13: 17. doi:10.1007/s10676-010-9255-1

Abstract

There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in this area has been on the artificial agents and the humans they may encounter after they are deployed. We contend that the humans who design, implement, and deploy the artificial agents are crucial to any discussion of e-trust and to understanding the distinctions among the concepts of trust, e-trust and face-to-face trust.

Keywords

Artificial agents, Trust, E-trust, Electronic trust, Modeling trust

Copyright information

© Springer Science+Business Media B.V. 2010