Minds and Machines, Volume 20, Issue 2, pp 243–257

Modelling Trust in Artificial Agents, A First Step Toward the Analysis of e-Trust

Article

DOI: 10.1007/s11023-010-9201-3

Cite this article as:
Taddeo, M. Minds & Machines (2010) 20: 243. doi:10.1007/s11023-010-9201-3

Abstract

This paper provides a new analysis of e-trust, that is, trust occurring in digital contexts among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis of this phenomenon. The analysis first focuses on an agent’s trustworthiness, which is presented as the necessary requirement for e-trust to occur. Then, a new definition of e-trust as a second-order property of first-order relations is presented. It is shown that this second-order property has the effect of minimising an agent’s effort and commitment in the achievement of a given goal. On this basis, a method is provided for the objective assessment of the levels of e-trust occurring among the artificial agents of a distributed artificial system.
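The abstract reports, without detailing it here, a method for objectively assessing e-trust levels among artificial agents. The following Python sketch is a purely hypothetical illustration of the general idea rather than the paper’s method: e-trust is treated as a property of a first-order trustor–trustee relation, trustworthiness is approximated by the observed success ratio of delegated tasks, and the resulting level reflects the supervision effort the trustor saves by trusting. The class TrustRelation, the success-ratio proxy, and the supervision_cost parameter are all assumptions introduced for illustration.

```python
from dataclasses import dataclass


@dataclass
class TrustRelation:
    """A first-order relation between a trustor and a trustee agent.

    In this hypothetical sketch, e-trust is a property of this relation:
    the trustor delegates tasks without supervising them, so its own
    effort and commitment are reduced.
    """
    trustor: str
    trustee: str
    delegations: int = 0  # tasks delegated without supervision
    successes: int = 0    # delegated tasks the trustee achieved as expected

    def record(self, success: bool) -> None:
        """Record the outcome of one unsupervised delegation."""
        self.delegations += 1
        if success:
            self.successes += 1

    def trustworthiness(self) -> float:
        """Hypothetical proxy: ratio of achieved goals to delegated tasks."""
        return self.successes / self.delegations if self.delegations else 0.0


def e_trust_level(relation: TrustRelation, supervision_cost: float = 1.0) -> float:
    """Toy 'objective' e-trust level: observed trustworthiness weighted by the
    total supervision effort the trustor saved across its delegations."""
    return relation.trustworthiness() * supervision_cost * relation.delegations


if __name__ == "__main__":
    # Two artificial agents (hypothetical names) interacting in a distributed system.
    r = TrustRelation(trustor="AA1", trustee="AA2")
    for outcome in (True, True, False, True):
        r.record(outcome)
    print(f"trustworthiness={r.trustworthiness():.2f}, e-trust level={e_trust_level(r):.2f}")
```

In this toy reading, higher e-trust levels correspond to relations in which more work is delegated and more of it succeeds, so the trustor can forgo costly supervision; the specific weighting is an assumption, not a claim about the assessment method developed in the paper.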

Keywords

Artificial agent · Artificial distributed system · e-Trust · Trust · Trustworthiness

Copyright information

© Springer Science+Business Media B.V. 2010

Authors and Affiliations

  1. Information Ethics Group, University of Oxford, Oxford, UK
  2. Department of Philosophy, University of Hertfordshire, Hatfield, UK
