Reasoning about the impacts of information sharing
Shared information can benefit an agent by allowing others to aid it in pursuing its goals. However, such information can also cause harm, for example when malicious agents become aware of these goals and thereby subvert the goal-maker’s plans. In this paper we describe a decision-process framework that allows an agent to decide what information it should reveal to its neighbours within a communication network in order to maximise its utility. We assume that these neighbours can pass information on to others within the network. The inferences made by agents receiving the messages can have a positive or negative impact on the information-providing agent, and our decision process seeks to assess how a message should be modified in order to be most beneficial to the information producer. The decision process is based on the provider’s subjective beliefs about others in the system, and therefore makes extensive use of the notion of trust, with regard both to the likelihood that a message will be passed on by a receiver and to the likelihood that an agent will use the information against the provider. Our core contributions are the construction of a model of information propagation; the description of the agent’s decision procedure; and an analysis of some of its properties.
Keywords: Information sharing · Impacts · Trust · Risk
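The decision procedure outlined above can be illustrated with a minimal sketch. All names, the max-product propagation model, and the linear benefit/harm utility terms below are illustrative assumptions, not the paper's actual formalism: each candidate message variant carries a (value, sensitivity) pair, each neighbour's forwarding trust gives a receipt probability, and the provider picks the variant maximising its subjective expected utility.

```python
def receipt_probabilities(forward_prob, source):
    """Approximate the probability that each agent eventually receives a
    message, taking the most reliable forwarding path from the source
    (a max-product propagation; a simplifying assumption, not the
    paper's model)."""
    prob = {source: 1.0}
    frontier = [source]
    while frontier:
        nxt = []
        for a in frontier:
            for b, p_fwd in forward_prob.get(a, {}).items():
                p_new = prob[a] * p_fwd
                if p_new > prob.get(b, 0.0):
                    prob[b] = p_new
                    nxt.append(b)
        frontier = nxt
    return prob


def expected_utility(value, sensitivity, prob, benefit, harm, source):
    """Expected utility to the provider: each potential receiver
    contributes a benefit scaled by the message's value, minus an
    expected harm scaled by its sensitivity, weighted by the
    probability that the receiver actually gets the message."""
    return sum(p * (benefit[a] * value - harm[a] * sensitivity)
               for a, p in prob.items() if a != source)


def best_message(candidates, forward_prob, source, benefit, harm):
    """Choose the message variant (e.g. full, obfuscated, withheld)
    maximising the provider's subjective expected utility."""
    prob = receipt_probabilities(forward_prob, source)
    return max(candidates, key=lambda m: expected_utility(
        candidates[m][0], candidates[m][1], prob, benefit, harm, source))


# Hypothetical example: provider "A" trusts "B", but "B" may forward
# to a risky agent "D", so an obfuscated variant can beat full disclosure.
forward_prob = {"A": {"B": 1.0, "C": 1.0}, "B": {"C": 0.5, "D": 0.8}}
benefit = {"B": 2.0, "C": 1.0, "D": 0.0}   # gain per unit of value
harm = {"B": 0.0, "C": 0.1, "D": 2.0}      # expected loss per unit of sensitivity
candidates = {"full": (1.0, 1.0), "obfuscated": (0.6, 0.2),
              "withhold": (0.0, 0.0)}
choice = best_message(candidates, forward_prob, "A", benefit, harm)
```

Here the malicious agent `D` receives the message with probability 0.8 via `B`, which is enough to make the obfuscated variant the utility-maximising choice, mirroring the trade-off between benefit and misuse risk that the framework formalises.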