Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?”

  • Original Paper
  • Published in Ethics and Information Technology

Abstract

There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. In conjunction with presenting our model, we review important recent contributions to this literature on e-trust. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in this area has been on artificial agents and the humans they may encounter after deployment. We contend that the humans who design, implement, and deploy artificial agents are crucial to any discussion of e-trust and to understanding the distinctions among the concepts of trust, e-trust, and face-to-face trust.


Notes

  1. See the overview article by Benbasat et al. (2008) for a list of influences and reflections on state-of-the-art research in this area.

  2. We will not replay the arguments behind this analysis here; interested readers should see Taddeo’s paper, as well as the ideas she criticizes in Luhmann (1979), Gambetta (1998), Nissenbaum (2001), Tuomela and Hofmann (2003), and Weckert (2005).

  3. In the information systems literature this type of risk is referred to as channel, web, or Internet risk.

  4. See Floridi and Sanders (2004).

  5. See Grodzinsky et al. (2008).

  6. The idea that AAs “make decisions” is philosophically controversial; we are not claiming here that AAs make decisions in a way that is identical or comparable to the way humans make decisions. Instead, we are only positing that the program running inside an AA will take different courses of action based on decision control structures and the state of the computation at a given moment (a minimal sketch appears after these notes). Although such issues as whether AAs have free will are fascinating and related to trust, we are explicitly not exploring those issues in this paper.

  7. See Castelfranchi and Falcone (2001) and the work of the Unit of AI Cognitive Modelling and Interaction, National Research Council Institute of Psychology, Rome, Italy.

  8. See Kloth (2009).

  9. There are both theoretical and practical reasons why neural net decisions are unlikely to be easily explained to humans. For example, systems that can give a human a comprehensible explanation of why a decision was reached are far more resource-intensive than systems that are less expressive about their reasoning (Greiner et al. 2001); the second sketch following these notes illustrates this contrast.

  10. The learning that can take place by studying AA trust decisions will be facilitated if that software is widely available. Thus, software whose source code is available (including Free Software) will be of particular interest.

  11. See Miller et al. (2009).
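
What follows are two illustrative sketches of our own, not material from the paper; every name and value in them is invented. The first is a minimal sketch of the point in note 6: an AA’s program can take different courses of action purely through decision control structures operating on the state of the computation at a given moment, with no claim of human-like deliberation.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    """The state of the computation at a given moment (hypothetical fields)."""
    battery_level: float
    task_queue: list = field(default_factory=list)

def next_action(state: AgentState) -> str:
    """Select a course of action from the current state via control structures."""
    if state.battery_level < 0.2:   # a decision control structure
        return "recharge"
    if state.task_queue:
        return "perform:" + state.task_queue[0]
    return "idle"

print(next_action(AgentState(0.1, ["deliver"])))  # recharge
print(next_action(AgentState(0.9, ["deliver"])))  # perform:deliver
```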
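
The second sketch, again hypothetical, illustrates note 9: a rule-based decision procedure can report the rule it fired, while a trained network reduces to weighted arithmetic whose rationale is not directly legible to a human; extracting a comprehensible explanation from the latter requires additional, costly machinery.

```python
def rule_based_trust(price_ratio: float):
    """Transparent: the decision comes with a human-readable reason."""
    if price_ratio < 0.5:
        return False, "rejected: offer is under half the market value"
    return True, "accepted: offer is within the expected range"

def network_trust(features, weights):
    """Opaque: a one-layer stand-in for a trained network; the weights encode
    the 'reasons' in a form a human cannot readily inspect or explain."""
    return sum(f * w for f, w in zip(features, weights))

print(rule_based_trust(0.4))                   # (False, 'rejected: ...')
print(network_trust([0.4, 1.0], [2.1, -0.7]))  # ~0.14, an unexplained score
```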

References

  • Baier, A. (1986). Trust and antitrust. Ethics, 96(2), 231–260.

  • Benbasat, I., Gefen, D., & Pavlou, P. A. (2008). Special issue: Trust in online environments. Journal of Management Information Systems, 24(4), 5–11.

  • Bok, S. (1978). Lying (p. 31n). New York: Pantheon Books.


  • Bruemmer, D., Few, D., Goodrich, M., Norman, D., Sarkar, N., Scholtz, J., Smart, B., Swinson, M. L., & Yanco, H. (2004). How to trust robots further than we can throw them. In CHI ‘04 Extended Abstracts on Human Factors in Computing Systems (Vienna, Austria, April 24–29, 2004). CHI ’04 (pp. 1576–1577). ACM, New York.

  • Castelfranchi, C., & Falcone, R. (2008). Socio-cognitive model of trust: Basic ingredients. Online at http://istc.cnr.it/T3/trust. Accessed March 2008.

  • Clarke, E. M., & Wing, J. M. (1996). Formal methods: State of the art and future directions. ACM Computing Surveys, 28(4), 626–643.

  • DiSalvo, C. F., Gemperle, F., Forlizzi, J., & Kiesler, S. (2002). All robots are not created equal: The design and perception of humanoid robot heads. In Proceedings of the 4th conference on designing interactive systems: processes, practices, methods, and techniques (London, England, June 25–28, 2002). DIS ’02 (pp. 321–326). ACM, New York, NY.

  • Durante, M. (2008). What model of trust for networked cooperation? Online social trust in the production of common goods (knowledge sharing). In Ethicomp 2008 conference proceedings (pp. 211–223). University of Pavia, Mantua, Italy.

  • Falcone, R., & Castelfranchi, C. (2001). Social trust—A cognitive approach. Online at http://istc.cnr.it/T3/trust. Accessed March 2008.

  • Floridi, L. & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379.


  • Fule, P., & Roddick, J. F. (2004). Detecting privacy and ethical sensitivity in data mining results. In Proceedings of the 27th Australasian conference on computer science, Vol. 26 (Dunedin, New Zealand), Estivill-Castro (Ed.). ACM international conference proceeding series (Vol. 56, pp. 159–166). Australian Computer Society, Darlinghurst, Australia.

  • Gambetta, D. (1998). Can we trust trust? In D. Gambetta (Ed.), Trust: Making and breaking cooperative relations (pp. 213–238).

  • Greiner, R., Darken, C., & Santoso, N. I. (2001). Efficient reasoning. ACM Computing Surveys, 33(1), 1–30.

  • Grodzinsky, F. S., Miller, K., & Wolf, M. J. (2008). The ethics of designing artificial agents. Ethics and Information Technology, 10, 115–121.


  • Grodzinsky, F. S., Miller, K., & Wolf, M. J. (2009). Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?” In Proceedings of the Computer Ethics: Philosophical Enquiry conference, June 26–29, 2009.

  • Grodzinsky, F. S., Miller, K., & Wolf, M. J. (2010). Toward a model of trust and e-trust processes using object-oriented methodologies. In Ethicomp 2010 proceedings, April 14–16, 2010.

  • Gunkel, D. J. (2007). Thinking otherwise: Ethics, technology and other subjects. Ethics and Information Technology, 9, 165–177.


  • Howe, D., & Nissenbaum, H. (2009). TrackMeNot. http://mrl.nyu.edu/~dhowe/TrackMeNot/. Accessed April 7, 2009.

  • Kloth, R. (2009). List of bad bots. http://www.kloth.net/internet/badbots.php. Accessed March 31, 2009.

  • Luhmann, N. (1979). Trust and power. Chichester: Wiley.


  • McKnight, D. H., & Chervany, N. L. (1996). The meanings of trust. http://misrc.umn.edu/wpaper/WorkingPapers/9604.pdf. Accessed June 1, 2010.

  • Miller, J. (1988). Discrete and continuous models of human information processing: Theoretical distinctions and empirical results. Acta Psychologica, 67(3), 191–257.

  • Miller, K., Grodzinsky, F., & Wolf, M. J. (2009). Why Turing shouldn’t have to guess. The Fifth Asia-Pacific Computing and Philosophy Conference, October 1–2, 2009.

  • Nissenbaum, H. (2001). Securing trust online: Wisdom or oxymoron? Boston University Law Review, 81(3), 635–664.


  • Schumann, J. & Nelson, S. (2002). Toward V&V of neural network based controllers. In D. Garlan, J. Kramer, & A. Wolf (Eds.). Proceedings of the first workshop on self-healing systems (Charleston, South Carolina, November 18–19, 2002), WOSS ’02 (pp. 67–72). ACM, New York, NY.

  • Sun, Y., Councill, I. G., & Giles, C. L. (2008). BotSeer: An automated information system for analyzing web robots. In Proceedings of the 2008 eighth international conference on web engineering (July 14–18, 2008) (pp. 108–114). IEEE Computer Society, Washington, DC.

  • Sun, Y., Zhuang, Z., & Giles, C. L. (2007). A large-scale study of robots.txt. In Proceedings of the 16th international conference on world wide web (Banff, Alberta, Canada, May 8–12, 2007). WWW ’07 (pp. 1123–1124). ACM, New York, NY.

  • Taddeo, M. (2009). Defining trust and e-trust: From old theories to new problems. International Journal of Technology and Human Interaction, 5(2).

  • Taddeo, M. (2010). Modeling trust in artificial agents, a first step toward the analysis of e-trust. Minds and Machines, 20(2), 243–257.


  • Tuomela, M., & Hofmann, S. (2003). Simulating rational social normative trust, predictive trust, and predictive reliance between agents. Ethics and Information Technology, 5(3), 163–176.


  • Weckert, J. (2005). Trust in cyberspace. In R. J. Cavalier (Ed.), The impact of the internet on our moral lives (pp. 95–120). Albany: State University of New York Press.


  • Wolf, M. J., & Grodzinsky, F. S. (2006). Good/fast/cheap: Contexts, relationships and professional responsibility during software development. In Proceedings of the 2006 ACM symposium on applied computing (Dijon, France, April 23–27, 2006). SAC ’06 (pp. 261–266). ACM, New York, NY.

  • Zheng, J., Veinott, E., Bos, N., Olson, J. S., & Olson, G. M. (2002). Trust without touch: Jumpstarting long-distance trust with initial social activities. In Proceedings of the SIGCHI conference on human factors in computing systems: Changing our world, changing ourselves (Minneapolis, MN, USA, April 20–25, 2002). CHI ’02 (pp. 141–146). ACM, New York, NY.


Author information

Corresponding author

Correspondence to M. J. Wolf.



Cite this article

Grodzinsky, F.S., Miller, K.W. & Wolf, M.J. Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?”. Ethics Inf Technol 13, 17–27 (2011). https://doi.org/10.1007/s10676-010-9255-1
