
Experiments in Building Experiential Trust in a Society of Objective-Trust Based Agents

  • Mark Witkowski
  • Alexander Artikis
  • Jeremy Pitt
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2246)

Abstract

In this paper we develop a notion of “objective trust” for Software Agents, that is, trust of, or between, Agents based on the actual experiences between those Agents. Experiential objective trust allows Agents to make decisions about how to select other Agents when a choice has to be made. We define a mechanism for such an “objective Trust-Based Agent” (oTB-Agent), and present experimental results in a simulated trading environment based on an Intelligent Networks (IN) scenario. The trust one Agent places in another is dynamic, updated on the basis of each experience. We use this to investigate three questions related to trust in Multi-Agent Systems (MAS): first, how trust affects the formation of trading partnerships; second, whether trust developed over a period can equate to “loyalty”; and third, whether a less than scrupulous Agent can exploit the individual nature of trust to its advantage.
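The paper's own oTB-Agent update rule is not reproduced on this page; as a rough illustration of the idea of experiential trust that is updated after every interaction and then used for partner selection, the sketch below assumes an exponential moving-average update and trust-weighted choice. The class and method names (OTBAgent, update_trust, choose_partner) and the smoothing factor alpha are illustrative assumptions, not the authors' formulation.

```python
# Illustrative sketch only: assumes a simple exponential moving-average
# trust update and trust-weighted partner selection; the paper defines
# its own oTB-Agent mechanism.
import random


class OTBAgent:
    """Hypothetical agent keeping a per-partner experiential trust score."""

    def __init__(self, name: str, alpha: float = 0.3, initial_trust: float = 0.5):
        self.name = name
        self.alpha = alpha                  # assumed weight given to the latest experience
        self.initial_trust = initial_trust  # assumed trust in a partner never dealt with
        self.trust: dict[str, float] = {}   # partner name -> trust in [0, 1]

    def update_trust(self, partner: str, outcome: float) -> None:
        """Blend the latest experience (outcome in [0, 1]) into the running score."""
        prior = self.trust.get(partner, self.initial_trust)
        self.trust[partner] = (1 - self.alpha) * prior + self.alpha * outcome

    def choose_partner(self, candidates: list[str]) -> str:
        """Pick a trading partner, weighting the choice by accumulated trust."""
        weights = [self.trust.get(c, self.initial_trust) for c in candidates]
        return random.choices(candidates, weights=weights, k=1)[0]


if __name__ == "__main__":
    buyer = OTBAgent("buyer")
    for _ in range(20):
        buyer.update_trust("seller_a", 1.0)   # consistently good experiences
        buyer.update_trust("seller_b", 0.2)   # mostly poor experiences
    print(buyer.trust)
    print(buyer.choose_partner(["seller_a", "seller_b"]))
```

Under these assumptions, trust in each partner drifts toward the average quality of recent dealings, so repeatedly reliable partners come to dominate selection, which is the kind of dynamic, per-experience trust the abstract describes.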

Keywords

MultiAgent System, Trading Partner, Trust Relationship, Agent Society, Intelligent Network


References

  1. Artikis, A., Kamara, L. and Pitt, J. (2001) “Towards an Open Society Model and Animation”, in Proceedings of the Agent-Based Simulation II Workshop, Passau, pp. 48–55
  2. Axelrod, R. (1990) “The Evolution of Cooperation”, London: Penguin Books
  3. Barber, K.S. and Kim, J. (2000) “Belief Revision Process Based on Trust: Agents Evaluating Reputation of Information Sources”, Workshop on “Deception, Fraud and Trust in Agent Societies”, Autonomous Agents 2000, pp. 15–26
  4. Biswas, A., Debnath, S. and Sen, S. (1999) “Believing Others: Pros and Cons”, Proc. IJCAI-99 Workshop “Agents Learning About, From and With Other Agents”
  5. Cantwell, J. (1998) “Resolving Conflicting Information”, Journal of Logic, Language and Information, Vol. 7, No. 2, pp. 191–220
  6. Castelfranchi, C., Conte, R. and Paolucci, M. (1998) “Normative Reputation and the Costs of Compliance”, Journal of Artificial Societies and Social Simulation, Vol. 1, No. 3, http://jasss.soc.surrey.ac.uk/1/3/3.html
  7. Castelfranchi, C. and Falcone, R. (1998) “Principles of Trust for MAS: Cognitive Anatomy, Social Importance, and Quantification”, ICMAS-98, pp. 72–79
  8. Dignum, F. and Linder, B. (1996) “Modelling Social Agents: Communication as Action”, in “Intelligent Agents III: Agent Theories, Architectures, and Languages”, pp. 205–217
  9. FIPA (1997) “FIPA-97 Specification, Part 2: Agent Communication Language”, http://www.fipa.org
  10. Gambetta, D. (ed.) (1988) “Trust: Making and Breaking Cooperative Relations”, Basil Blackwell, Oxford
  11. Griffiths, N. and Luck, M. (1999) “Cooperative Plan Selection Through Trust”, in Proc. 9th European Workshop on Multi-Agent Systems Engineering (MAAMAW’99), Springer Verlag, Berlin, pp. 162–174
  12. He, Q., Sycara, K. and Su, Z. (1998) “Security Infrastructure for Software Agent Society”, in Castelfranchi and Hua-Tan (eds.) “Trust and Deception in Virtual Societies”, pp. 137–154
  13. Jennings, N., et al. (1999) “FIPA-compliant Agents for Real-time Control of Intelligent Network Traffic”, Computer Networks, Vol. 31, pp. 2017–2036
  14. Jones, A. and Firozabadi, B. (1998) “On the Characterisation of a Trusting Agent: Aspects of a Formal Approach”, in Castelfranchi, C. and Hua-Tan (eds.) “Trust and Deception in Virtual Societies”, pp. 155–166
  15. Jonker, C.M. and Treur, J. (1999) “Formal Analysis of Models for the Dynamics of Trust Based on Experiences”, in Proc. 9th European Workshop on Multi-Agent Systems Engineering (MAAMAW’99), Springer Verlag, Berlin, pp. 221–232
  16. Lorenz, E.H. (1988) “Neither Friends nor Strangers: Informal Networks of Subcontracting in French Industry”, in [10], pp. 194–209
  17. Marsh, S. (1994) “Trust in Distributed Artificial Intelligence”, in Proc. 4th European Workshop on Multi-Agent Systems Engineering (MAAMAW’92), Springer Verlag, Berlin, pp. 94–113
  18. Patel, A., Prouskas, K., Barria, J. and Pitt, J. (2000) “IN Load Control using a Competitive Market-based Multi-Agent System”, in Proc. Intelligence and Services in Networks 2000 (IS&N-2000), pp. 239–254
  19. Pham, X.H. and Betts, R. (1994) “Congestion Control in Intelligent Networks”, Computer Networks and ISDN Systems, Vol. 26, No. 5, pp. 511–524
  20. Prouskas, K., Patel, A., Pitt, J. and Barria, J. (2000) “A Multi-Agent System for Intelligent Network Load Control Using a Market-based Approach”, in Proc. 4th Int. Conf. on Multi-Agent Systems (ICMAS-2000), pp. 231–238
  21. Schaerf, A., Shoham, Y. and Tennenholtz, M. (1995) “Adaptive Load Balancing: A Study in Multi-Agent Learning”, Journal of Artificial Intelligence Research, Vol. 2, pp. 475–500
  22. Schillo, M. and Funk, P. (1999) “Learning from and about other Agents in Terms of Social Metaphors”, in Proc. “Agents Learning about, from and with other Agents” Workshop
  23. Sholtz, F. and Hanrahan, H. (1999) “Market Based Control of SCP Congestion in Intelligent Networks”, South African Telecommunications Networks and Applications Conference (SATNAC-99)
  24. Thirunavukkarasu, C., Finin, T. and Mayfield, J. (1995) “Secret Agents: A Security Architecture for the KQML”, Proc. ACM CIKM Intelligent Information Agents Workshop, Baltimore, December 1995
  25. Watkins, C.J.C.H. (1989) “Learning from Delayed Rewards”, Ph.D. thesis, King’s College, Cambridge University
  26. Williams, B. (1988) “Formal Structures and Social Reality”, in [10], pp. 3–13
  27. Wong, H.C. and Sycara, K. (1999) “Adding Security and Trust to Multi-Agent Systems”, in Proc. Autonomous Agents’99 Workshop on Deception, Fraud, and Trust in Agent Societies, pp. 149–161
  28. Zacharia, G. (1999) “Trust Management Through Reputation Mechanisms”, Workshop on “Deception, Fraud and Trust in Agent Societies”, Autonomous Agents 1999, pp. 163–167

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Mark Witkowski (1)
  • Alexander Artikis (1)
  • Jeremy Pitt (1)
  1. Intelligent and Interactive Systems Group, Department of Electrical and Electronic Engineering, Imperial College of Science, Technology and Medicine, London, United Kingdom
