Validation and Verification of Agent Models for Trust: Independent Compared to Relative Trust

  • Mark Hoogendoorn
  • S. Waqar Jaffry
  • Peter-Paul van Maanen
Part of the IFIP Advances in Information and Communication Technology book series (IFIPAICT, volume 358)


In this paper, the results of a validation experiment for two existing computational trust models describing human trust are reported. The first model uses experiences of performance to estimate trust in different trustees. The second model additionally incorporates the notion of relative trust: trust in a certain trustee does not depend solely on the experiences with that trustee, but also on the experiences with trustees that are considered its competitors. To validate the models, parameter adaptation has been used to tailor the models to human behavior. The two models have also been compared to determine whether the notion of relative trust describes human trust behavior more accurately. The results show that taking trust relativity into account indeed leads to a higher accuracy of the trust model. Finally, a number of assumptions underlying the two models are verified using an automated verification tool.
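The contrast between the two model families can be illustrated with a minimal sketch. The code below is not the authors' exact formulation; it assumes a standard experience-driven update (trust moves toward the latest experience at a learning rate) and, for the relative variant, a normalization step so that each trustee's trust is expressed relative to its competitors. The function names and the rate parameter `gamma` are hypothetical.

```python
# Hedged sketch, not the paper's exact equations: independent trust
# follows experiences directly; relative trust renormalizes across
# competing trustees after each update.

def update_independent(trust, experience, gamma=0.1):
    """Move trust toward the latest experience at learning rate gamma."""
    return trust + gamma * (experience - trust)

def update_relative(trusts, experiences, gamma=0.1):
    """Update every trustee independently, then renormalize so the
    values express trust relative to the competing trustees."""
    updated = [update_independent(t, e, gamma)
               for t, e in zip(trusts, experiences)]
    total = sum(updated)
    return [t / total for t in updated] if total > 0 else updated

# Example: two trustees, where the second consistently performs better.
trusts = [0.5, 0.5]
for _ in range(10):
    trusts = update_relative(trusts, [0.2, 0.9])
```

Under this sketch, repeated better experiences with the second trustee raise its relative trust at the expense of the first, even though both updates use the same rule; in an independent model the two values would evolve without influencing each other.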


Keywords: Trust Model · Multiagent System · Parameter Adaptation · Agent Model · Software Agent



Copyright information

© International Federation for Information Processing 2011

Authors and Affiliations

  • Mark Hoogendoorn¹
  • S. Waqar Jaffry¹
  • Peter-Paul van Maanen¹,²
  1. Department of Artificial Intelligence, VU University Amsterdam, Amsterdam, The Netherlands
  2. Department of Cognitive Systems Engineering, TNO Human Factors, Soesterberg, The Netherlands
