This paper provides a new analysis of e-trust, trust occurring in digital contexts, among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself given a specific scenario and a goal to achieve. The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis of this phenomenon. The analysis first focuses on an agent’s trustworthiness, which is presented as the necessary requirement for e-trust to occur. A new definition of e-trust as a second-order property of first-order relations is then presented. It is shown that the second-order property of e-trust has the effect of minimising an agent’s effort and commitment in the achievement of a given goal. On this basis, a method is provided for the objective assessment of the levels of e-trust occurring among the artificial agents of a distributed artificial system.
AAs are computational systems situated in a specific environment and able to adapt themselves to changes in it. They are also able to interact with the environment and with other agents, both human and artificial, and to act autonomously to achieve their goals. AAs are not endowed with mental states, feelings or emotions. For a more in-depth analysis of the features of AAs, see Floridi and Sanders (2004).
These AAs are assumed to comply with the axioms of rational choice theory. The axioms are: (1) completeness: for any pair of alternatives (x and y), the AA either prefers x to y, prefers y to x, or is indifferent between x and y. (2) Transitivity: if an AA prefers x to y and y to z, then it necessarily prefers x to z. If it is indifferent between x and y, and indifferent between y and z, then it is necessarily indifferent between x and z. (3) Priority: the AA will choose the most preferred alternative. If the AA is indifferent between two or more alternatives that are preferred to all others, it will choose one of those alternatives, with the specific choice among them remaining indeterminate.
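Purely as an illustration (not part of the original paper), the three axioms can be sketched in code. The preference relation is assumed here to be induced by hypothetical utility scores; the alternatives and values are invented for the example.

```python
# Illustrative sketch: checking the rational-choice axioms for an AA whose
# preferences are encoded as utility scores. All values are assumptions.
from itertools import permutations

utility = {"x": 3, "y": 2, "z": 2}  # hypothetical valuations

def prefers(a, b):
    return utility[a] > utility[b]

def indifferent(a, b):
    return utility[a] == utility[b]

# (1) Completeness: for any pair, some relation (preference or indifference) holds.
def complete(alts):
    return all(prefers(a, b) or prefers(b, a) or indifferent(a, b)
               for a, b in permutations(alts, 2))

# (2) Transitivity of strict preference and of indifference.
def transitive(alts):
    for a, b, c in permutations(alts, 3):
        if prefers(a, b) and prefers(b, c) and not prefers(a, c):
            return False
        if indifferent(a, b) and indifferent(b, c) and not indifferent(a, c):
            return False
    return True

# (3) Priority: the AA chooses a most-preferred alternative; among ties the
# specific choice remains indeterminate, so the full tied set is returned.
def choose(alts):
    best = max(utility[a] for a in alts)
    return [a for a in alts if utility[a] == best]

print(complete(["x", "y", "z"]), transitive(["x", "y", "z"]))  # True True
print(choose(["x", "y", "z"]))  # ['x']
print(choose(["y", "z"]))       # ['y', 'z'] — indeterminate between ties
```

Any utility-based preference relation of this kind satisfies completeness and transitivity by construction; the sketch merely makes the axioms operational.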
These systems are widely diffused; there is a plethora of MAS able to perform tasks such as product brokering, merchant brokering and negotiation. Such systems are also able to address problems such as security, trust, reputation, law, payment mechanisms, and advertising (Guttman et al. 1998; Nwana et al. 1998).
The reader may consider this process similar to the one that occurs in e-commerce contexts where HAs are involved, such as eBay.
Note that “action” indicates here any performance of an AA, from, for example, controlling an unmanned vehicle to communicating information or data to another AA. For the role of trust in informative processes see (reference removed for double blind review).
As the reader might already know, a minimax rule is a decision rule used in decision theory and game theory. The rule is used to maximise the minimum gain or, equivalently, to minimise the maximum loss.
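As a minimal illustration (not drawn from the paper), the rule can be sketched as follows; the actions and payoff values are hypothetical.

```python
# Illustrative minimax sketch: the agent picks the action whose worst-case
# payoff is largest. The payoff table is an invented example.
payoffs = {
    "cooperate": [3, 1, 2],  # payoffs across possible states of the world
    "defect":    [5, 0, 1],
}

def minimax_choice(table):
    # Maximise the minimum gain: rank each action by its worst payoff.
    return max(table, key=lambda action: min(table[action]))

print(minimax_choice(payoffs))  # cooperate (worst case 1 beats worst case 0)
```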
In the theory of levels of abstraction (LoA), discrete mathematics is used to specify and analyse the behaviour of information systems. The definition of a LoA is this: given a well-defined set X of values, an observable of type X is a variable whose value ranges over X. A LoA consists of a collection of observables, each of a given type with a well-defined set of possible values or outcomes. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. Each LoA makes possible an analysis of the system, the result of which is called a model of the system. Evidently, a system may be described at a range of LoAs and so can have a range of models. More intuitively, a LoA is comparable to an ‘interface’, which consists of a set of features, the observables.
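Purely as an informal illustration (not part of the LoA method itself), a LoA can be sketched as a set of typed observables, and a model as the restriction of a system's state to those observables. The observables, types and the example system are all assumptions made for the sketch.

```python
# Illustrative sketch: two different LoAs for observing the same system,
# each a collection of typed observables. All names are hypothetical.
loa_safety = {              # observing a car at a "safety" LoA
    "speed_kmh": float,
    "doors_locked": bool,
}
loa_cost = {                # observing the same car at a "cost" LoA
    "price_eur": float,
    "fuel_l_per_100km": float,
}

def model(system_state, loa):
    # A model is the system state restricted to the LoA's observables,
    # with each value coerced to the observable's declared type.
    return {obs: t(system_state[obs]) for obs, t in loa.items()}

car = {"speed_kmh": 50, "doors_locked": True,
       "price_eur": 20000, "fuel_l_per_100km": 6.5}

print(model(car, loa_safety))  # {'speed_kmh': 50.0, 'doors_locked': True}
print(model(car, loa_cost))    # {'price_eur': 20000.0, 'fuel_l_per_100km': 6.5}
```

The two calls yield two different models of the same system, matching the point that a system described at a range of LoAs has a range of models.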
Castelfranchi, C., & Falcone, R. (1998). Principles of trust for MAS: Cognitive anatomy, social importance, and quantification. In Third International Conference on Multi-Agent Systems (ICMAS’98). Paris: IEEE Computer Society.
Corritore, C. L., Kracher, B., Wiedenbeck, S. (2003). On-line trust: Concepts, evolving themes, a model. International Journal of Human-Computer Studies, 58(6), 737–758.
de Vries, P. (2006). Social presence as a conduit to the social dimensions of online trust. In W. IJsselsteijn, Y. de Kort, C. Midden, B. Eggen, & E. van den Hoven (Eds.), Persuasive technology (pp. 55–59). Berlin: Springer.
Floridi, L. (2008). The method of levels of abstraction. Minds and Machines, 18(3), 303–329.
Floridi, L., & Sanders, J. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379.
Gambetta, D. (1998). Can we trust trust? In D. Gambetta (Ed.), Trust: Making and breaking cooperative relations (pp. 213–238). Oxford: Blackwell.
Guttman, R., Moukas, A., Maes, P. (1998). Agent-mediated electronic commerce: A survey. Knowledge Engineering Review, 13(2), 147–159.
Lagerspetz, O. (1992). Legitimacy and trust. Philosophical Investigations, 15(1), 1–21.
Luhmann, N. (1979). Trust and power. Chichester: Wiley.
Nissenbaum, H. (2001). Securing trust online: Wisdom or oxymoron. Boston University Law Review, 81(3), 635–664.
Nwana, H., Rosenschein, J., et al. (1998). Agent-mediated electronic commerce: Issues, challenges and some viewpoints. In Autonomous Agents ’98. New York: ACM Press.
Papadopoulou, P. (2007). Applying virtual reality for trust-building e-commerce environments. Virtual Reality, 11(2–3), 107–127.
Seamons, K. E., Winslett, M., Yu, T., Lu, L., Jarvis, R. (2003). Protecting privacy during on-line trust negotiation. In R. Dingledine, P. Syverson, et al. (Eds.), Privacy enhancing technologies (pp. 249–253). Berlin: Springer.
Taddeo, M. (2009). Defining trust and e-trust: Old theories and new problems. International Journal of Technology and Human Interaction (IJTHI), 5(2), 23–35.
Tuomela, M., & Hofmann, S. (2003). Simulating rational social normative trust, predictive trust, and predictive reliance between agents. Ethics and Information Technology, 5(3), 163–176.
Weckert, J. (2005). Trust in cyberspace. In R. J. Cavalier (Ed.), The impact of the internet on our moral lives (pp. 95–120). Albany: University of New York Press.
Wooldridge, M. (2002). An introduction to multiagent systems. Chichester: Wiley.
I am very grateful to Terrell W. Bynum, Charles M. Ess, Luciano Floridi, and Matteo Turilli for their helpful suggestions and conversations on the previous drafts on which this article is based. They are responsible only for the improvements, not for any remaining mistakes.
Taddeo, M. Modelling Trust in Artificial Agents, A First Step Toward the Analysis of e-Trust. Minds & Machines 20, 243–257 (2010). https://doi.org/10.1007/s11023-010-9201-3
Keywords: Artificial agent, Artificial distributed system