
A Trust/Honesty Model in Multiagent Semi-competitive Environments

  • Ka-man Lam
  • Ho-fung Leung
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3371)

Abstract

Much research has been done on the calculation of trust, impression, and reputation, as well as on using this information to decide whether to cooperate with other agents in cooperative environments. However, little work addresses how to use this information to help agents decide whether to believe a particular message when the message sender may intend to be either honest or dishonest, or to help agents decide whether to lie. In this paper, we describe a framework that helps agents make these decisions in a semi-competitive environment, and show that agents adopting the proposed model perform better than agents adopting previous models or strategies.
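The abstract does not specify the framework's formulas, but the core decision it describes — believing a message based on the sender's track record of honesty — can be sketched as an expected-utility test. The following is a minimal illustration, not the paper's actual model: the trust estimate (a Laplace-smoothed honesty frequency) and the payoff parameters `gain_if_true` and `loss_if_false` are hypothetical choices made here for concreteness.

```python
from dataclasses import dataclass

@dataclass
class TrustRecord:
    """Running count of a sender's truthful vs. deceptive messages."""
    truths: int = 0
    lies: int = 0

    def trust(self) -> float:
        # Laplace-smoothed estimate of the probability the sender is honest.
        return (self.truths + 1) / (self.truths + self.lies + 2)

def should_believe(record: TrustRecord,
                   gain_if_true: float,
                   loss_if_false: float) -> bool:
    """Believe iff the expected utility of acting on the message is positive."""
    p = record.trust()
    return p * gain_if_true - (1 - p) * loss_if_false > 0

# Example: a sender who has been honest 3 times and lied once.
record = TrustRecord(truths=3, lies=1)
print(should_believe(record, gain_if_true=10, loss_if_false=5))
```

A richer model in the spirit of the paper would also let the agent decide whether to lie, by comparing the short-term gain from deception against the long-term cost of a damaged reputation; the one-shot test above captures only the believing side.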



Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Ka-man Lam
  • Ho-fung Leung
  1. Department of Computer Science and Engineering, The Chinese University of Hong Kong, Sha Tin, Hong Kong, China
