Ethics and Information Technology, Volume 13, Issue 1, pp 17–27

Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?”

Original Paper

DOI: 10.1007/s10676-010-9255-1

Cite this article as:
Grodzinsky, F.S., Miller, K.W. & Wolf, M.J. Ethics Inf Technol (2011) 13: 17. doi:10.1007/s10676-010-9255-1


There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in this area has been on the artificial agents and the humans they may encounter after they are deployed. We contend that the humans who design, implement, and deploy the artificial agents are crucial to any discussion of e-trust and to understanding the distinctions among the concepts of trust, e-trust and face-to-face trust.


Keywords: Artificial agents, Trust, E-trust, Electronic trust, Modeling trust

Copyright information

© Springer Science+Business Media B.V. 2010

Authors and Affiliations

  1. Sacred Heart University, Fairfield, USA
  2. University of Illinois Springfield, Springfield, USA
  3. Bemidji State University, Bemidji, USA
