Robust Trust: Prior Knowledge, Time and Context

  • John Debenham
  • Carles Sierra
Part of the Lecture Notes in Business Information Processing book series (LNBIP, volume 123)

Abstract

Trust is an agent’s expectation of the value it will observe when it evaluates the enactment of another agent’s commitment. Trust involves two steps: first, estimating the action that the other agent will enact given that it has made a commitment, and second, estimating the valuation of that action when its result is eventually consumed. A computational model of trust is presented that takes account of prior knowledge of other agents, the evolution of trust estimates over time, and the evolution of trust estimates in response to changes in context. The model is founded on the information-based agency principle that every utterance contains valuable information. Its computational basis is substantially simpler, and more theoretically grounded, than those previously reported.
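
The abstract only sketches the model. As an illustration of the two-step reading of trust it describes, the following minimal Python sketch (not the authors' formulation; the function names, the uniform prior, and the decay rate `rho` are assumptions introduced here, with the maximum-entropy prior suggested only by the paper's keywords) treats trust as the expected valuation under a distribution over possible enactments, and ages that distribution toward a maximum-entropy prior as evidence gets older.

```python
# Illustrative sketch only: a toy version of the two-step trust estimate
# described in the abstract, (1) a distribution over how a commitment will
# be enacted and (2) the expected valuation of that enactment, with the
# estimate decaying over time toward a maximum-entropy (uniform) prior.

import numpy as np

def expected_valuation(p_enact: np.ndarray, valuation: np.ndarray) -> float:
    """Trust as the expectation of the valuation under the enactment distribution."""
    return float(np.dot(p_enact, valuation))

def decay_toward_prior(p_enact: np.ndarray, prior: np.ndarray, rho: float) -> np.ndarray:
    """Age the enactment distribution by mixing it toward the prior.

    rho in [0, 1] is a hypothetical decay rate: 0 keeps the current estimate,
    1 forgets it entirely in a single step.
    """
    return (1.0 - rho) * p_enact + rho * prior

# Example: three possible enactments of a commitment, valued 1.0 (as promised),
# 0.6 (partial fulfilment) and 0.1 (defection).
valuation = np.array([1.0, 0.6, 0.1])
p_enact = np.array([0.7, 0.2, 0.1])   # current estimate from past observations
prior = np.full(3, 1.0 / 3.0)         # maximum-entropy prior over enactments

print(expected_valuation(p_enact, valuation))        # trust estimate now
aged = decay_toward_prior(p_enact, prior, rho=0.2)   # after one idle period
print(expected_valuation(aged, valuation))           # trust estimate after decay
```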

Keywords

Maximum Entropy · Multiagent System · Autonomous Agent · Reputation Model · Interaction History

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • John Debenham, QCIS, University of Technology, Sydney, Australia
  • Carles Sierra, Institut d’Investigació en Intel·ligència Artificial (IIIA), Spanish Scientific Research Council (CSIC), Bellaterra, Spain
