
Trust Alignment: A Sine Qua Non of Open Multi-agent Systems

  • Andrew Koster
  • Jordi Sabater-Mir
  • Marco Schorlemmer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7044)

Abstract

In open multi-agent systems, trust is necessary to improve cooperation by enabling agents to choose good partners. Most trust models take into account, in addition to direct experiences, the evaluations communicated by other agents. In an open multi-agent system, however, other agents may use different trust models, so the evaluations they communicate are based on different principles. This article shows that trust alignment is a crucial tool for making such communication useful. Furthermore, we show that alignment improves significantly when the description of the evidence on which a trust evaluation is based is taken into account.
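The idea can be illustrated with a toy sketch. The snippet below is a minimal, hypothetical illustration and not the alignment method developed in the paper: it assumes agent A has a set of shared experiences for which it knows both B's communicated evaluation and its own, plus a simple numeric description of the evidence, and it fits two linear translations (with and without the evidence description) so that B's later gossip can be re-interpreted on A's own scale. All names, features and data are illustrative.

# Hypothetical sketch of trust alignment between two agents, A and B.
# Not the authors' method; purely illustrative data and features.
import numpy as np

# Shared experiences: B's communicated evaluation, a description of the
# evidence (e.g. [delivery delay, price deviation]), and A's own evaluation
# of the same interaction.
b_evaluations = np.array([0.9, 0.8, 0.4, 0.7, 0.2, 0.6])
evidence = np.array([[0.1, 0.0],
                     [0.2, 0.1],
                     [0.6, 0.3],
                     [0.3, 0.0],
                     [0.9, 0.5],
                     [0.4, 0.2]])
a_evaluations = np.array([0.8, 0.7, 0.5, 0.6, 0.3, 0.5])

# Alignment without evidence: regress A's evaluations on B's alone.
X_plain = np.column_stack([b_evaluations, np.ones(len(b_evaluations))])
w_plain, *_ = np.linalg.lstsq(X_plain, a_evaluations, rcond=None)

# Alignment with evidence: also use the description of the evidence.
X_full = np.column_stack([b_evaluations, evidence, np.ones(len(b_evaluations))])
w_full, *_ = np.linalg.lstsq(X_full, a_evaluations, rcond=None)

def align(b_eval, evid, with_evidence=True):
    """Translate a new evaluation communicated by B into A's own scale."""
    if with_evidence:
        return float(np.concatenate([[b_eval], evid, [1.0]]) @ w_full)
    return float(np.array([b_eval, 1.0]) @ w_plain)

# When B later reports about an unknown partner, A re-interprets the value:
print(align(0.75, np.array([0.25, 0.1])))

In this toy setting the evidence-aware translation can distinguish cases where B's high rating stems from easy interactions (low delay, small price deviation) from cases where it reflects genuinely good behaviour, which is the intuition behind taking the evidence description into account.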

Keywords

Trust model · Multiagent system · Trust evaluation · Alignment method · Random strategy



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Andrew Koster (1, 2)
  • Jordi Sabater-Mir (1)
  • Marco Schorlemmer (1, 2)
  1. IIIA - CSIC, Spain
  2. Universitat Autònoma de Barcelona, Bellaterra, Spain
