Ethics and Information Technology, Volume 13, Issue 1, pp. 17–27

Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?”

Original Paper


There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in this area has been on the artificial agents and the humans they may encounter after they are deployed. We contend that the humans who design, implement, and deploy the artificial agents are crucial to any discussion of e-trust and to understanding the distinctions among the concepts of trust, e-trust and face-to-face trust.


Keywords: Artificial agents · Trust · E-trust · Electronic trust · Modeling trust


  1. Baier, A. (1986). Trust and antitrust. Ethics, 96(2), 231–260.
  2. Benbasat, I., Gefen, D., & Pavlou, P. A. (2008). Special issue: Trust in online environments. Journal of Management Information Systems, 24(4), 5–11.
  3. Bok, S. (1978). Lying (p. 31n). New York: Pantheon Books.
  4. Bruemmer, D., Few, D., Goodrich, M., Norman, D., Sarkar, N., Scholtz, J., Smart, B., Swinson, M. L., & Yanco, H. (2004). How to trust robots further than we can throw them. In CHI '04 extended abstracts on human factors in computing systems (Vienna, Austria, April 24–29, 2004), CHI '04 (pp. 1576–1577). New York: ACM.
  5. Castelfranchi, C., & Falcone, R. (2008). Socio-cognitive model of trust: Basic ingredients. Online; accessed March 2008.
  6. Clarke, E. M., & Wing, J. M. (1996). Formal methods: State of the art and future directions. ACM Computing Surveys, 28(4), 626–643.
  7. DiSalvo, C. F., Gemperle, F., Forlizzi, J., & Kiesler, S. (2002). All robots are not created equal: The design and perception of humanoid robot heads. In Proceedings of the 4th conference on designing interactive systems: Processes, practices, methods, and techniques (London, England, June 25–28, 2002), DIS '02 (pp. 321–326). New York: ACM.
  8. Durante, M. (2008). What model of trust for networked cooperation? Online social trust in the production of common goods (knowledge sharing). In Ethicomp 2008 conference proceedings (pp. 211–223). University of Pavia, Mantua, Italy.
  9. Grodzinsky, F. S., Miller, K., & Wolf, M. J. (2010). Toward a model of trust and e-trust processes using object-oriented methodologies. In Ethicomp 2010 proceedings, April 14–16, 2010.
  10. Falcone, R., & Castelfranchi, C. (2001). Social trust—A cognitive approach. Online; accessed March 2008.
  11. Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379.
  12. Fule, P., & Roddick, J. F. (2004). Detecting privacy and ethical sensitivity in data mining results. In Proceedings of the 27th Australasian conference on computer science, Vol. 26 (Dunedin, New Zealand), Estivill-Castro (Ed.), ACM international conference proceeding series (Vol. 56, pp. 159–166). Darlinghurst, Australia: Australian Computer Society.
  13. Gambetta, D. (1998). Can we trust trust? In D. Gambetta (Ed.), Trust: Making and breaking cooperative relations (pp. 213–238).
  14. Greiner, R., Darken, C., & Santoso, N. I. (2001). Efficient reasoning. ACM Computing Surveys, 33(1), 1–30.
  15. Grodzinsky, F. S., Miller, K., & Wolf, M. J. (2008). The ethics of designing artificial agents. Ethics and Information Technology, 10, 115–121.
  16. Grodzinsky, F. S., Miller, K., & Wolf, M. J. (2009). Developing artificial agents worthy of trust: "Would you buy a used car from this artificial agent?" In Proceedings of the computer ethics, philosophical enquiry conference, June 26–29, 2009.
  17. Gunkel, D. J. (2007). Thinking otherwise: Ethics, technology and other subjects. Ethics and Information Technology, 9, 165–177.
  18. Howe, D., & Nissenbaum, H. (2009). TrackMeNot. Online; accessed April 7, 2009.
  19. Kloth, R. (2009). List of bad bots. Online; accessed March 31, 2009.
  20. Luhmann, N. (1979). Trust and power. Chichester: Wiley.
  21. McKnight, D. H., & Chervany, N. L. (1996). The meanings of trust. Online; accessed June 1, 2010.
  22. Miller, J. (1988). Discrete and continuous models of human information processing: Theoretical distinctions and empirical results. Acta Psychologica, 67(3), 191–257.
  23. Miller, K., Grodzinsky, F., & Wolf, M. J. (2009). Why Turing shouldn't have to guess. The fifth Asia-Pacific computing and philosophy conference, October 1–2, 2009.
  24. Nissenbaum, H. (2001). Securing trust online: Wisdom or oxymoron. Boston University Law Review, 81(3), 635–664.
  25. Schumann, J., & Nelson, S. (2002). Toward V&V of neural network based controllers. In D. Garlan, J. Kramer, & A. Wolf (Eds.), Proceedings of the first workshop on self-healing systems (Charleston, South Carolina, November 18–19, 2002), WOSS '02 (pp. 67–72). New York: ACM.
  26. Sun, Y., Councill, I. G., & Giles, C. L. (2008). BotSeer: An automated information system for analyzing web robots. In Proceedings of the 2008 eighth international conference on web engineering (July 14–18, 2008) (pp. 108–114). Washington, DC: IEEE Computer Society.
  27. Sun, Y., Zhuang, Z., & Giles, C. L. (2007). A large-scale study of robots.txt. In Proceedings of the 16th international conference on World Wide Web (Banff, Alberta, Canada, May 8–12, 2007), WWW '07 (pp. 1123–1124). New York: ACM.
  28. Taddeo, M. (2009). Defining trust and e-trust: From old theories to new problems. International Journal of Technology and Human Interaction, 5(2), April–June 2009.
  29. Taddeo, M. (2010). Modeling trust in artificial agents, a first step toward the analysis of e-trust. Minds and Machines, 20(2), 243–257.
  30. Tuomela, M., & Hofmann, S. (2003). Simulating rational social normative trust, predictive trust, and predictive reliance between agents. Ethics and Information Technology, 5(3), 163–176.
  31. Weckert, J. (2005). Trust in cyberspace. In R. J. Cavalier (Ed.), The impact of the internet on our moral lives (pp. 95–120). Albany: State University of New York Press.
  32. Wolf, M. J., & Grodzinsky, F. S. (2006). Good/fast/cheap: Contexts, relationships and professional responsibility during software development. In Proceedings of the 2006 ACM symposium on applied computing (Dijon, France, April 23–27, 2006), SAC '06 (pp. 261–266). New York: ACM.
  33. Zheng, J., Veinott, E., Bos, N., Olson, J. S., & Olson, G. M. (2002). Trust without touch: Jumpstarting long-distance trust with initial social activities. In Proceedings of the SIGCHI conference on human factors in computing systems: Changing our world, changing ourselves (Minneapolis, MN, USA, April 20–25, 2002), CHI '02 (pp. 141–146). New York: ACM.

Copyright information

© Springer Science+Business Media B.V. 2010

Authors and Affiliations

  1. Sacred Heart University, Fairfield, USA
  2. University of Illinois Springfield, Springfield, USA
  3. Bemidji State University, Bemidji, USA
