
Reasoning about the impacts of information sharing


Abstract

Shared information can benefit an agent, allowing others to aid it in its goals. However, such information can also cause harm, for example when malicious agents are aware of these goals and can thereby subvert the goal-maker’s plans. In this paper we describe a decision process framework allowing an agent to decide what information it should reveal to its neighbours within a communication network in order to maximise its utility. We assume that these neighbours can pass information on to others within the network. The inferences made by agents receiving the messages can have a positive or negative impact on the information-providing agent, and our decision process seeks to assess how a message should be modified in order to be most beneficial to the information producer. Our decision process is based on the provider’s subjective beliefs about others in the system, and therefore makes extensive use of the notion of trust with regard to the likelihood that a message will be passed on by the receiver, and the likelihood that an agent will use the information against the provider. Our core contributions are therefore the construction of a model of information propagation; the description of the agent’s decision procedure; and an analysis of some of its properties.


Notes

  1. We can treat the possibility that an agent does not forward a message by assuming a special “empty message” within \(\mathcal {M}\).

  2. The second column of BI’s disclosure policy deals with the case where \(m_{2}\) is the goal message to be shared. Including this second column as a placeholder in the producer’s disclosure policy unifies the notion of disclosure policy for the producer and the other agents.

  3. Please note that the matrix multiplication order is \({\uppi }_{k,j} \times {\uppi }_{i,k}\), which is the reverse of the message propagation direction (from \(ag_{i}\) to \(ag_{k}\), and then from \(ag_{k}\) to \(ag_{j}\)). This is due to our choice of left stochastic matrices to represent the disclosure policies \({\uppi }\) in the Markov chain over message passing; with right stochastic matrices it would be \({\uppi }_{i, j}^{T} = {\uppi }_{i, k}^{T} \times {\uppi }_{k,j}^{T}\). A minimal numerical sketch of this composition follows these notes.

  4. Please note again that the matrix multiplication order in Eq. 3 is the reverse of the message propagation direction, in the same manner as in the discount operator (Eq. 2), due to the left stochastic matrix representation of disclosure policies.

  5. Although not considered in Wang and Williams (2011), the definition provided in Castelfranchi and Falcone (2010) follows the others.
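
To make Notes 3 and 4 concrete, the following is a minimal numerical sketch (in Python, using NumPy) of composing disclosure policies along a propagation path. The three-message space (including the “empty message” of Note 1) and the policy values are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical disclosure policies over three messages (m1, m2 and the "empty
# message" of Note 1). Each column gives the distribution over the message that
# is forwarded when the column's message is received, so every column sums to 1
# (the left stochastic convention of Note 3).
pi_ik = np.array([[0.7, 0.1, 0.0],   # forward m1
                  [0.1, 0.6, 0.0],   # forward m2
                  [0.2, 0.3, 1.0]])  # forward the empty message (do not forward)

pi_kj = np.array([[0.5, 0.2, 0.0],
                  [0.2, 0.5, 0.0],
                  [0.3, 0.3, 1.0]])

# Note 3: the end-to-end policy along ag_i -> ag_k -> ag_j multiplies the later
# hop on the left, i.e. in the reverse of the message propagation direction.
pi_ij = pi_kj @ pi_ik

assert np.allclose(pi_ij.sum(axis=0), 1.0)   # the composition is still left stochastic
print(pi_ij)
```

Because the product of left stochastic matrices is again left stochastic, the composed policy remains a valid distribution over forwarded messages for every received message.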

References

  • Bisdikian, C., Tang, Y., Cerutti, F., & Oren, N. (2013). A framework for using trust to assess risk in information sharing. In Chesñevar, C., Onaindia, E., Ossowski, S., & Vouros, G. (Eds.), Agreement Technologies, Lecture Notes in Computer Science, vol. 8068 (pp. 135–149). Springer Berlin Heidelberg. doi:10.1007/978-3-642-39860-5_11.

  • Burnett, C., Norman, T.J., & Sycara, K. (2011). Trust decision-making in multi-agent systems. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI’11), Volume One (pp. 115–120). AAAI Press.

  • Caminada, M.W. (2009). Truth, lies and bullshit: distinguishing classes of dishonesty. In Social Simulation Workshop (SS@IJCAI) (pp. 39–50).

  • Castelfranchi, C., & Falcone, R. (2010). Trust theory: A socio-cognitive and computational model. Wiley Series in Agent Technology. Wiley.

  • Chakraborty, S., Raghavan, K.R., Srivastava, M.B., Bisdikian, C., & Kaplan, L.M. (2012). Balancing value and risk in information sharing through obfuscation. In Proceedings of the 15th International Conference on Information Fusion (FUSION ’12).

  • Das, T.K., & Teng, B.S. (2004). The risk-based view of trust: A conceptual framework. Journal of Business and Psychology, 19(1), 85–116. doi:10.1023/B:JOBU.0000040274.23551.1b.


  • Goffman, E. (1970). Strategic interaction. Oxford: Basil Blackwell.

  • Jøsang, A. (2001). A logic for uncertain probabilities. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 9(3), 279–311.


  • Jøsang, A., & Ismail, R. (2002). The beta reputation system. In Proceedings of the 15th Bled Electronic Commerce Conference.

  • Kaplan, S., & Garrick, B.J. (1981). On the quantitative definition of risk. Risk Analysis, 1(1), 11–27.


  • Mardziel, P., Magill, S., Hicks, M., Srivatsa, M. (2011). Dynamic enforcement of knowledge-based security policies. In Proceedings of the 24th IEEE Computer Security Foundations Symposium, (pp. 114–128).

  • Sentz, K., & Ferson, S. (2002). Combination of evidence in Dempster-Shafer theory. Tech. Rep. SAND 2002-0835, Sandia National Laboratories.

  • Tan, Y.H., & Thoen, W. (2002). Formal aspects of a generic model of trust for electronic commerce. Decision Support Systems, 33(3), 233–246. doi:10.1016/S0167-9236(02)00014-3.


  • Tang, Y., Cai, K., McBurney, P., Sklar, E., Parsons, S. (2011). Using argumentation to reason about trust and belief. Journal of Logic and Computation. doi:10.1093/logcom/exr038.

  • Teacy, W.T.L., Patel, J., Jennings, N.R., Luck, M. (2006). Travos: Trust and reputation in the context of inaccurate information sources. Autonomous Agents and Multi-Agent Systems, 12(2), 183–198.


  • Urbano, J., Rocha, A., & Oliveira, E. (2013). A socio-cognitive perspective of trust. In Ossowski, S. (Ed.), Agreement Technologies, Law, Governance and Technology Series, vol. 8 (pp. 419–429). Springer Netherlands. doi:10.1007/978-94-007-5583-3_23.

  • Wang, X., & Williams, M.A. (2011). Risk, uncertainty and possible worlds. In 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust (PASSAT) and IEEE Third International Conference on Social Computing (SocialCom) (pp. 1278–1283). doi:10.1109/PASSAT/SocialCom.2011.130.

Download references

Acknowledgments

We would like to thank Leon van der Torre for his insightful comments on an early draft of this work. This research was sponsored by the US Army Research Laboratory and the UK Ministry of Defence and was accomplished under Agreement Number W911NF-06-3-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the US Army Research Laboratory, the US Government, the UK Ministry of Defence, or the UK Government. The US and UK Governments are authorised to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.

Author information

Corresponding author

Correspondence to Yuqing Tang.

Additional information

This paper is dedicated to the memory of Chatschik Bisdikian who recently passed away.

Appendix: The Case of Continuous Random Variables

By utilising Definitions 8 and 9 we can describe the impact on the producer \(\mathit{ag}_{0}\) of disclosing a message to the consumers.

Proposition 1

Given an FCA \( {\langle {\mathcal {A}}, {\mathcal {C}}, \mathcal {M}, m \rangle }\); a consumer \( { {\mathit {ag}_{q}}} \in {\mathcal {A}}\); and the equivalent degree of disclosure \(x_{0,q}\) of the producer over \(\mathit{ag}_{q}\). Let \(y_{q}\) be the information inferred by \(\mathit{ag}_{q}\) according to the r.v. \(I_{q}(x_{0,q})\) (with probability \(\approx f_{I_{q}}(y_{q};{x}_{0,q}) \, dy_{q}\)). Then, assuming that the impact \(z_{q}\) is independent of the degree of disclosure \(x_{0,q}\) given the inferred information \(y_{q}\), \(\mathit{ag}_{0}\) expects an impact \(z_{q}\) described by the r.v. \(Z_{q}(x_{0,q})\) with density:

$$f_{Z_{q}}(z_{q} ; x_{0,q}) = {{\int}_{0}^{1}} f_{Z_{q}} (z_{q} ; y_{q}) \, f_{I_{q}} (y_{q} ; {x}_{0,q} ) \, d y_{q} . $$

Proof

$$\begin{array}{@{}rcl@{}} F_{Z_{q}}(z_{q} ; {x}_{0,q}) &=& \Pr \{ Z_{q} \leq z_{q} | {x}_{0,q} \} \\ &=& {\int_{0}^{1}} \Pr \{ Z_{q} \leq z_{q}, I_{q} = y_{q} | {x}_{0,q} \} \, dy_{q} \\ &=& {\int_{0}^{1}} \Pr \{ Z_{q} \leq z_{q} | I_{q} = y_{q} , {x}_{0,q} \} f_{I_{q}}(y_{q};{x}_{0,q}) \, dy_{q} \\ &=& {\int_{0}^{1}} F_{Z_{q}} (z_{q} ; y_{q}) \, f_{I_{q}} (y_{q} ; {x}_{0,q} ) \, d y_{q} , \end{array} $$

The density function is easily derived from the distribution \(F_{Z_{q}}(z_{q} ; {{x_{0,q}}})\) since \(f_{Z_{q}}(z_{q} ; {x}_{0,q}) = \frac { d }{d z_{q}}F_{Z_{q}} (z_{q} ; {x}_{0,q})\).
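
For concreteness, the following is a minimal numerical sketch (in Python, using NumPy and SciPy) of the marginalisation in Proposition 1. The Beta densities chosen for the inference r.v. and for the conditional impact are purely illustrative assumptions, not part of the paper’s model.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

# Illustrative assumptions (not from the paper): the inference r.v. I_q(x_{0,q})
# is Beta-distributed with its mean pulled towards the degree of disclosure, and
# the impact Z_q given an inferred level y_q is Beta-distributed with mean near y_q.
def f_I(y, x):
    return stats.beta.pdf(y, 1 + 8 * x, 1 + 8 * (1 - x))

def f_Z_given_y(z, y):
    return stats.beta.pdf(z, 1 + 4 * y, 1 + 4 * (1 - y))

def f_Z(z, x, n=400):
    """Proposition 1: f_{Z_q}(z; x) = integral_0^1 f_{Z_q}(z; y) f_{I_q}(y; x) dy."""
    y = np.linspace(0.0, 1.0, n)
    return trapezoid(f_Z_given_y(z, y) * f_I(y, x), y)

z_grid = np.linspace(0.0, 1.0, 201)
density = np.array([f_Z(z, x=0.3) for z in z_grid])
print("total probability mass ≈", trapezoid(density, z_grid))  # should be close to 1
```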

Moreover, whenever we need a single-value characterisation of a distribution, we can exploit the same idea of descriptors of a random variable by introducing descriptors for trust and risk.

Definition 13

Let h(⋅) be a function defined on [0,1], and y∈[0,1] be a level of inference. We define

$$ t_{h}^{Z_{q}} (y) = {{\int}_{0}^{1}} h(w) \, f_{Z_{q}} (w;y) \, d w , $$
(9)

to be the y-trust descriptor induced by h(⋅).

We can do the same to obtain an impact descriptor:

Definition 14

Let h(⋅) be a function defined on [0,1], and x∈[0,1] be a level of disclosure. We define

$$ t_{h}^{Z_{q}} (x) = {{\int}_{0}^{1}} h(w) \, f_{Z_{q}} (w;x) \, d w , $$
(10)

to be the x-impact descriptor induced by h(⋅).

Typical choices of h(⋅) include the moment functions, such as h(k) = k, \(h(k) = k^{2}\), etc., and the entropy \(h(k)= - \ln \bigl (\allowbreak f_{K} (k) \bigr )\) for the density of some r.v. K. In the following we use the expectation as the risk descriptor, leaving consideration of other possible functions for future work.
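
Continuing the illustrative sketch above, the x-impact descriptor of Definition 14 can be approximated on a grid; with the default identity h it reduces to the expected impact used as the risk descriptor in what follows.

```python
import numpy as np
from scipy.integrate import trapezoid

def impact_descriptor(f_Z, x, h=lambda w: w, n=400):
    """x-impact descriptor t_h(x) = integral_0^1 h(w) f_Z(w; x) dw (Definition 14),
    approximated on a grid; the default h is the identity, i.e. the expectation."""
    w = np.linspace(0.0, 1.0, n)
    f_w = np.array([f_Z(wi, x) for wi in w])
    return trapezoid(h(w) * f_w, w)

# Reusing the illustrative f_Z from the previous sketch:
# impact_descriptor(f_Z, x=0.3)                      # expected impact, h(w) = w
# impact_descriptor(f_Z, x=0.3, h=lambda w: w**2)    # second moment
```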

Finally, let us illustrate two notable properties of our model. The first concerns the case where a consumer can derive the full original message, which, unsurprisingly, leads to the worst-case impact.

Proposition 2

When a consumer is capable of gaining maximum knowledge, then \(f_{I}(y;x)=\delta(y-1)\), where δ(⋅) is the Dirac delta function, and \(F_{Z} (z ; x ) = F_{Z} (z) \triangleq F_{Z} (z;1)\), i.e., the risk coincides with the 1-trust (Definition 9).

Proof

By the definition of the inference r.v. I(x), when the consumer is believed to gain maximum knowledge, the density \(f_{I}(y;x)\) carries all its weight at the point y=1 for all x. Hence \(f_{I}(y;x) = \delta(y-1)\), and it follows from the definition of the Dirac delta function (see also Proposition 1) that

$$\begin{array}{@{}rcl@{}} F_{Z} ( z ; x ) &=& {\int_{0}^{1}} F_{Z} (z ; y) \, f_{I} (y ; x ) \, d y \\ &=& {\int_{0}^{1}} F_{Z} (z ; y) \, \delta (y-1) \, d y = F_{Z} (z;1) . \end{array} $$
(11)
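
A quick numerical check of Proposition 2, reusing the illustrative conditional impact density from the first sketch and approximating the Dirac delta with a Beta density that concentrates its mass at y = 1 (again, an assumption made purely for illustration):

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

def f_Z_given_y(z, y):
    return stats.beta.pdf(z, 1 + 4 * y, 1 + 4 * (1 - y))

# Approximate delta(y - 1) by Beta(K, 1), which concentrates at y = 1 as K grows.
K = 500.0
y = np.linspace(0.0, 1.0, 20001)
f_I_delta = stats.beta.pdf(y, K, 1.0)

z = 0.7
compound = trapezoid(f_Z_given_y(z, y) * f_I_delta, y)
print(compound, "≈", f_Z_given_y(z, 1.0))   # compound density ≈ f_Z(z; 1)
```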

The second property pertains to the case where agent \(\mathit{ag}_{0}\) shares information with more than one consumer. Such situations are typically non-homogeneous, as the trust and impact levels with regard to each consumer differ. Clearly, it is beneficial to identify conditions under which these impacts balance (and hence indicate crossover thresholds) across the multiple agents.

For two agents \(\mathit{ag}_{1}\) and \(\mathit{ag}_{2}\) with corresponding inference and behavioural trust distributions \(F_{I_{j}} (y;x)\) and \(F_{Z_{j}} (z;y)\), \(j\in\{1,2\}\), the shared information has a similar impact when \(x_{1}\) and \(x_{2}\) are selected such that the following holds.

$$\begin{array}{@{}rcl@{}} F_{Z_{1}}(z ; x_{1}) & =& F_{Z_{2}}(z ; x_{2}) \Leftrightarrow \\ {{\int}_{0}^{1}} F_{Z_{1}} (z;y) \, f_{I_{1}} (y;x_{1}) \, d y & =& {{\int}_{0}^{1}} F_{Z_{2}}(z;y) \, f_{I_{2}} (y;x_{2}) \, d y . \end{array} $$
(12)

Note that the above relationship requires the r.v.s \(Z_{1}\) and \(Z_{2}\) to be drawn from the same distribution, which is typically unrealistic. Therefore, in general one may instead consider equality on average, i.e., finding \(x_{1}\) and \(x_{2}\) satisfying the following for appropriate functions g(⋅).

$$ \mathbb{E}\{ g (Z_{1} (x_{1}) ) \} = \mathbb{E}\{ g (Z_{2} (x_{2}) ) \} . $$
(13)

Proposition 3

Given that g(z) = z, in order to attain the same level of impact when \(\mathit{ag}_{0}\) shares information with \(\mathit{ag}_{1}\) and \(\mathit{ag}_{2}\), the degrees of disclosure \(x_{1}\) and \(x_{2}\) for \(\mathit{ag}_{1}\) and \(\mathit{ag}_{2}\) respectively must satisfy the following.

$$ \mathbb{E}_{I_{1}} \bigl \{ \mathbb{E} \{ Z_{1} (x_{1}) | I_{1} \} \bigr \} = \mathbb{E}_{I_{2}} \bigl \{ \mathbb{E} \{ Z_{2} (x_{2}) | I_{2} \} \bigr \} . $$
(14)

Proof

The case where g(z) = z corresponds to the regular averaging operator, and Eq. 13 becomes:

$$\begin{array}{@{}rcl@{}} &&{\int_{0}^{1}} {\int_{0}^{1}} z f_{Z_{1}} (z ; y) \, f_{I_{1}} (y ; x_{1} ) \, d y \, dz \\ &&\kern1.5pc= {\int_{0}^{1}} {\int_{0}^{1}} z f_{Z_{2}} (z ; y) \, f_{I_{2}} (y ; x_{2} ) \, d y \, dz \\ &&\Leftrightarrow \\ &&{\int_{0}^{1}} f_{I_{1}} (y ; x_{1} ) \biggl [ {\int_{0}^{1}} z f_{Z_{1}} (z ; y) \, d z \biggr ] \, dy \\ &&\kern1.5pc= {\int_{0}^{1}} f_{I_{2}} (y ; x_{2} ) \biggl [ {\int_{0}^{1}} z f_{Z_{2}} (z ; y) \, d z \biggr ] \, dy \\ && \Leftrightarrow \\ && \mathbb{E}_{I_{1}} \bigl \{ \mathbb{E} \{ Z_{1} (x_{1}) | I_{1} \} \bigr \} \\ && \kern1.5pc= \mathbb{E}_{I_{2}} \bigl \{ \mathbb{E} \{ Z_{2} (x_{2}) | I_{2} \} \bigr \} . \end{array} $$
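
As an illustration of how Proposition 3 might be used, the sketch below (again with purely illustrative Beta densities and hypothetical `strength`/`sharpness` parameters that are not part of the paper’s model) fixes the degree of disclosure \(x_{1}\) for \(\mathit{ag}_{1}\) and then solves Eq. 14 numerically for the \(x_{2}\) that gives \(\mathit{ag}_{2}\) the same expected impact.

```python
import numpy as np
from scipy import stats, optimize
from scipy.integrate import trapezoid

# Illustrative, consumer-specific densities: `strength` controls how sharply a
# consumer's inference concentrates around the disclosed degree, `sharpness` how
# sharply the impact concentrates around the inferred level.
def f_I(y, x, strength):
    return stats.beta.pdf(y, 1 + strength * x, 1 + strength * (1 - x))

def f_Z_given_y(z, y, sharpness):
    return stats.beta.pdf(z, 1 + sharpness * y, 1 + sharpness * (1 - y))

def expected_impact(x, strength, sharpness, n=300):
    """E_I{ E{ Z(x) | I } } of Eq. 14, approximated as a double integral on a grid."""
    y = np.linspace(0.0, 1.0, n)
    z = np.linspace(0.0, 1.0, n)
    inner = np.array([trapezoid(z * f_Z_given_y(z, yi, sharpness), z) for yi in y])
    return trapezoid(inner * f_I(y, x, strength), y)

x1 = 0.4
target = expected_impact(x1, strength=8, sharpness=4)    # impact accepted with ag_1

# ag_2 is assumed to be better at inference (strength 12); find the x_2 that
# yields the same expected impact as disclosing x_1 to ag_1.
x2 = optimize.brentq(lambda x: expected_impact(x, 12, 4) - target, 0.0, 1.0)
print(f"x_1 = {x1:.2f}  ->  x_2 = {x2:.3f} gives the same expected impact")
```

Because the expected impact is monotone in the degree of disclosure for these illustrative densities, a simple root-finder such as Brent’s method suffices to locate the crossover threshold.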


Cite this article

Tang, Y., Cerutti, F., Oren, N. et al. Reasoning about the impacts of information sharing. Inf Syst Front 17, 725–742 (2015). https://doi.org/10.1007/s10796-014-9521-6
