Original Paper

Ethics and Information Technology, Volume 13, Issue 1, pp 17–27

Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?”

  • F. S. Grodzinsky, Sacred Heart University
  • K. W. Miller, University of Illinois Springfield
  • M. J. Wolf, Bemidji State University (corresponding author)


There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature on e-trust alongside the presentation of our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, research in this area has focused primarily on artificial agents and the humans they may encounter after deployment. We contend that the humans who design, implement, and deploy artificial agents are crucial to any discussion of e-trust and to understanding the distinctions among the concepts of trust, e-trust, and face-to-face trust.


Keywords: Artificial agents · Trust · E-trust · Electronic trust · Modeling trust