Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?”
There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in this area has been on the artificial agents and the humans they may encounter after they are deployed. We contend that the humans who design, implement, and deploy the artificial agents are crucial to any discussion of e-trust and to understanding the distinctions among the concepts of trust, e-trust and face-to-face trust.
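The paper's object-oriented model itself is not reproduced on this page. Purely as an illustrative sketch — every class name, field, and trust-interaction label below is an assumption, not taken from the paper — the abstract's idea of distinct trust interactions, plus the developer link the authors argue is crucial, might be encoded along these lines:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Any party that can stand in a trust relation."""
    name: str

class HumanAgent(Agent):
    """A human trustor/trustee; inherits the generated Agent constructor."""
    pass

@dataclass
class ArtificialAgent(Agent):
    """An AA keeps a link back to the humans who designed and deployed it,
    reflecting the abstract's claim that developers matter to e-trust."""
    developers: list = field(default_factory=list)

@dataclass
class TrustRelation:
    """A directed trust relation from trustor to trustee."""
    trustor: Agent
    trustee: Agent
    kind: str  # hypothetical labels, e.g. "face-to-face" vs. "e-trust"

# Hypothetical instances: a human trusting a human face-to-face,
# and a human extending e-trust to a deployed artificial agent.
alice = HumanAgent("Alice")
bob = HumanAgent("Bob")
bot = ArtificialAgent("CarBot", developers=[alice])

relations = [
    TrustRelation(alice, bob, "face-to-face"),
    TrustRelation(bob, bot, "e-trust"),
]
```

In this sketch the `developers` field is what distinguishes an artificial trustee from a human one: any analysis of a relation whose trustee is an `ArtificialAgent` can follow that link back to the humans behind it.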
Ethics and Information Technology, Volume 13, Issue 1, pp. 17–27. Springer Netherlands.
- Keywords: Artificial agents; Electronic trust; Modeling trust
- Author Affiliations
- 1. Sacred Heart University, 5151 Park Avenue, Fairfield, CT, USA
- 2. University of Illinois Springfield, One University Plaza, Springfield, IL, USA
- 3. Bemidji State University, 1500 Birchmont Drive NE, #23, Bemidji, MN, USA