Learning Initial Trust Among Interacting Agents
- Cite this paper as:
- Rettinger A., Nickles M., Tresp V. (2007) Learning Initial Trust Among Interacting Agents. In: Klusch M., Hindriks K.V., Papazoglou M.P., Sterling L. (eds) Cooperative Information Agents XI. CIA 2007. Lecture Notes in Computer Science, vol 4676. Springer, Berlin, Heidelberg
Trust learning is a crucial aspect of information exchange, negotiation, and any other kind of social interaction among autonomous agents in open systems. However, most current probabilistic models for computational trust learning cannot take context into account when predicting the future behavior of interacting agents, nor can they transfer knowledge gained in one context to a related context. Humans, by contrast, have proven to be especially skilled at perceiving traits like trustworthiness in such so-called initial-trust situations. A similar restriction applies to most multiagent learning problems: in complex scenarios, most algorithms do not scale well to large state spaces and need numerous interactions to learn. We argue that trust-related scenarios are best represented as a system of relations in order to capture semantic knowledge. Following recent work on nonparametric Bayesian models, we propose a flexible and context-sensitive way to model and learn multidimensional trust values that is particularly well suited to establishing trust among strangers without a prior relationship. To evaluate our approach, we extend a multiagent framework by allowing agents to retrospectively break an agreed interaction outcome. The results suggest that the model's inherent ability to discover the clusters, and the relationships between clusters, that are best supported by the data makes it possible to predict the future behavior of agents, especially when initial trust is involved.
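The core intuition of the abstract, that the likely behavior of a stranger can be estimated from known agents that are similar in context, can be illustrated with a deliberately simple sketch. Note that this is not the paper's method (the paper uses a nonparametric Bayesian relational model); the agent names, features, and reliability values below are invented purely for illustration, and a nearest-neighbor average stands in for the model-based clustering.

```python
# Toy illustration (NOT the paper's model): estimate "initial trust" in a
# stranger agent from the observed reliability of the most similar known
# agents in a context-feature space. The paper instead infers clusters with
# a nonparametric Bayesian relational model; this sketch only conveys the
# intuition that behavior generalizes across similar agents and contexts.
from statistics import mean

# (context_feature_vector, observed_reliability) for agents we have already
# interacted with; all values here are invented for illustration.
known_agents = {
    "a1": ((1.0, 0.0), 0.9),   # e.g. fast, cooperative responder -> reliable
    "a2": ((0.9, 0.1), 0.8),
    "a3": ((0.0, 1.0), 0.2),   # e.g. slow, defecting responder -> unreliable
    "a4": ((0.1, 0.9), 0.3),
}

def dist(u, v):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def initial_trust(features, k=2):
    """Predict trust in a stranger as the mean reliability of its k
    nearest known agents in context-feature space."""
    nearest = sorted(known_agents.values(),
                     key=lambda fv: dist(fv[0], features))[:k]
    return mean(r for _, r in nearest)

stranger = (0.95, 0.05)                    # resembles the reliable group
print(round(initial_trust(stranger), 2))   # -> 0.85
```

A model-based approach improves on this sketch in exactly the ways the abstract claims: it discovers how many groups the data support rather than fixing `k`, and it can relate groups to each other across contexts.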
Keywords: Trust in Multiagent Systems, Information Agents, Agent Negotiation, Initial Trust, Relational Learning