A Framework for Using Trust to Assess Risk in Information Sharing

  • Chatschik Bisdikian
  • Yuqing Tang
  • Federico Cerutti
  • Nir Oren
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8068)


Abstract

In this paper we describe a decision-process framework that allows an agent to decide which information it should reveal to its neighbours within a communication graph in order to maximise its utility. We assume that these neighbours can pass information on to others within the graph, and that the communicating agent gains and loses utility based on the information that specific agents can infer following the original communicative act. To this end, we construct an initial model of information propagation and describe an optimal decision procedure for the agent.
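The abstract's setting can be sketched in code. The following is an illustrative toy model, not the paper's actual procedure: each agent forwards a revealed fact to a neighbour with some probability (standing in for trust), the probability that each agent eventually learns the fact is estimated by Monte Carlo simulation, and the revealing agent enumerates subsets of its neighbours to find the disclosure with the highest expected utility. All names (`learn_probabilities`, `best_disclosure`) and the utility/trust numbers are hypothetical.

```python
import itertools
import random

def learn_probabilities(neighbours, trust, source, revealed_to, trials=5000, rng=None):
    """Monte Carlo estimate of the probability that each agent ends up
    knowing a fact that `source` initially reveals to `revealed_to`,
    assuming agent i forwards it to neighbour j with probability
    trust[(i, j)] (trust here doubles as a forwarding probability)."""
    rng = rng or random.Random(0)
    counts = {a: 0 for a in neighbours}
    for _ in range(trials):
        known = set(revealed_to) | {source}
        frontier = list(revealed_to)
        while frontier:
            i = frontier.pop()
            for j in neighbours[i]:
                if j not in known and rng.random() < trust.get((i, j), 0.0):
                    known.add(j)
                    frontier.append(j)
        for a in known:
            counts[a] += 1
    return {a: counts[a] / trials for a in neighbours}

def best_disclosure(neighbours, trust, source, utility, trials=5000):
    """Brute-force the subset of source's neighbours to inform that
    maximises expected utility, where utility[a] is the gain (or loss,
    if negative) incurred when agent a learns the fact."""
    best, best_eu = frozenset(), float("-inf")
    for k in range(len(neighbours[source]) + 1):
        for subset in itertools.combinations(neighbours[source], k):
            p = learn_probabilities(neighbours, trust, source, subset, trials)
            eu = sum(utility.get(a, 0.0) * p[a]
                     for a in neighbours if a != source)
            if eu > best_eu:
                best, best_eu = frozenset(subset), eu
    return best, best_eu
```

For example, if telling neighbour `a` is directly valuable but `a` is likely to leak the fact to a hostile agent `c`, the procedure prefers telling only the harmless neighbour `b`. The brute-force subset search is exponential in the number of neighbours, so it only illustrates the decision criterion, not a scalable algorithm.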


Keywords: Average Impact · Communication Graph · Army Research Laboratory · Disclosure Level · Transitive Function
(These keywords were machine-generated, not supplied by the authors.)





Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Chatschik Bisdikian ¹
  • Yuqing Tang ²
  • Federico Cerutti ³
  • Nir Oren ³
  1. Thomas J. Watson Research Center, IBM Research Division, USA
  2. Robotics Institute, Carnegie Mellon University, Pittsburgh, USA
  3. School of Natural and Computing Science, University of Aberdeen, Aberdeen, UK
