
1 Introduction

Among the debates currently open in AI research, some have a notoriously long tradition and a variety of methodological approaches and solutions. First, resource-bounded rationality aims to account for agents who may have limited inferential abilities or informational resources, like humans in their interactions with computational agents. Second, models of dynamic rationality consider how externally received information can aid knowledge and computational processes. Third, a large number of models for trustworthy communication are emerging, in which information may be considered reliable if consensus among a sufficiently large or relevant set of sources is reached. While logics that address these aspects individually abound in the literature, a model formalizing trustworthy communication within a resource-bounded context is yet to be offered, and would be highly desirable. A logic for coordinated reasoning in a multi-agent system, in which agents may suffer from limited abilities but can rely on reputable external sources to receive information, would be a useful tool for knowledge representation, planning, and learning in complex environments. The present work aims at offering a semantics with these features.

Regarding resource-bounded rationality, Depth-Bounded Boolean Logic (DBBL) [13] is a logic for single-agent reasoning characterized by an informational semantics that distinguishes between actual and virtual information. The former is information actually held by the agent; when an agent limits herself to actual information, she is said to reason at 0-depth. The latter is best explained proof-theoretically as the information that an agent might assume, and then discharge, in order to derive new knowledge through an application of the rule of bivalence (RB):

$$\frac{\begin{array}{ccc} [\phi ] &{} &{} [\lnot \phi ]\\ \vdots &{} &{} \vdots \\ \psi &{} &{} \psi \end{array}}{\psi }\ (\text {RB})$$

When an agent employs k nested instances of RB, she is said to reason at depth k. At each depth k a tractable inference relation is obtained, and the limit of this sequence is the classical entailment relation. We take DBBL as our basis for modelling agents with limited inferential capabilities.
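As an illustration (our own example, assuming the standard introduction and elimination rules of DBBL [13]), consider the classically valid inference from \(p\vee q\), \(p\rightarrow r\), and \(q\rightarrow r\) to r. It is not available at 0-depth, since the actual information held by the agent determines the truth-value of neither p nor q; a single application of RB suffices to recover it:

$$\begin{aligned}&\text {Case } p\text {: from } p \text { and } p\rightarrow r, \text { infer } r;\\&\text {Case } \lnot p\text {: from } p\vee q \text { and } \lnot p, \text { infer } q; \text { from } q \text { and } q\rightarrow r, \text { infer } r;\\&\text {hence, by RB, } r \text { follows at depth } 1. \end{aligned}$$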

As for dynamic rationality, Multi-Agent Depth-Bounded Boolean Logic (MA-DBBL) [12] is an extension of DBBL modelling a multi-agent setting by shifting the interpretation of the bound from the cognitive abilities of the agents to their ability to acquire information through external resources. Under this interpretation, the depth k at which an agent is able to infer measures the number of distinct external sources that offer information necessary for the inference. Accordingly, MA-DBBL accounts for dynamic contexts where agents share information, via a modal operator of “becoming informed” inspired by [20, 21], simulating the epistemic action of a private announcement. The interpretation of the bound offered by MA-DBBL is a good fit for complex networks of agents exchanging information, when the modeler wants to keep track of the reliability of received information in terms of the intermediate transmissions between source and receiver.

Finally, regarding trustworthy communication: the notion of trust has received great attention as a tool for reasoning about the security of a system [3, 22, 23]. MA-DBBL itself contains a policy of trust: an additional operator for “being informed”, inspired by [6, 15], holds when an agent receives the same message from every other agent more informed than herself, thus expressing truthful information as content on which consensus among agents holds [26]. Hence an agent is truthfully informed only in the case of trustworthy information. Such a distinction seems crucial in contexts of information exchange where biases or disinformation campaigns may be in place, including through artificial agents.

In this light, MA-DBBL seems well-placed to model systems with the above requirements. But it suffers from a major limitation: it allows no secrecy, since every agent transmits every piece of information she possesses. This might be a welcome feature when the aim is to model communication in highly collaborative settings, but it is unrealistic in many ordinary contexts. Moreover, MA-DBBL is developed only proof-theoretically. In this paper we present DBBL-BI\(_n\), a variant logic equipped with a relational semantics that accounts for a multi-agent system where agents are ordered hierarchically and have access to increasingly extensive information states. In the tradition of role-based access control theory [24], we understand such a hierarchy as defined among agents with shared competencies but different degrees of access to sensitive or relevant information. Agents whose epistemic states do not allow them to infer the truth value of given contents can obtain information externally and are free to share it or keep it private. Truthful and trustworthy information for an agent is characterised as content shared by all agents higher in the hierarchy. Given the condition imposed on the hierarchy, trustworthiness does not reduce to a form of democratic consensus. Indeed, when an agent evaluates whether she can trust some information, she considers only the most reputable sources, i.e. those agents higher than herself with respect to the aforementioned hierarchy. In this sense, it is neither unanimity (which might be considered too strong a condition for trustworthiness) nor majority: it is entirely possible that an agent trusts information justified only by a minority of agents, provided these represent reputable sources as far as the agent is concerned. Hence we believe it is appropriate in this context to qualify such information as trustworthy.

The paper is structured as follows. In Sect. 2 we draw comparisons with related work; in Sect. 3 we consider some scenarios that our logic models; in Sect. 4 we introduce the syntax and in Sect. 5 the semantics. In Sect. 6 we return to the examples to show their formalization. Finally, in Sect. 7 we suggest some possible extensions of the logic.

2 Related Work

Resource-bounded reasoning is a long-standing and crucial problem in knowledge representation and reasoning, with extensive research especially on variations of temporal logics such as RBTL [4, 5, 8]. More specifically for our model, the kind of bound agents may suffer can be due to cognitive or computational capabilities, like the inability to iterate applications of a specific inferential rule, described above, which motivates the static, single-agent setting of DBBL. Overcoming such a limitation may be possible in a multi-agent, dynamic setting through communication, which is what we set out to do in the present paper with DBBL-BI\(_n\). Notice that this type of resolution is not a core task in RBTL; as such, our work answers not just a modelling task for resource-bounded agents, but one where such agents are able to overcome their limitations through communication.

When considering aids to reasoning in the form of dynamic information modeling, Dynamic Epistemic Logic (DEL) is one of the main frameworks dealing with information update. Within DEL, standard epistemic models are updated with action models in order to represent how knowledge changes when a certain action takes place [7]. Notwithstanding the success of DEL, our choice of a different framework is motivated by three reasons.

First, DBBL-BI\(_n\) interprets states differently from DEL. In the latter, states represent possible ways the world could be according to someone’s knowledge. In the former, states are distinct repositories containing pieces of information. Therefore, we are able to represent within our logic the authorizations that each agent possesses for reading the content of a particular source. Representing distinct partitions of an agent’s memory space is a way to embed a notion of secrecy without heavy technical machinery, as an agent becomes able to share only parts of her memory space while preserving others. This makes the relational semantics of DBBL-BI\(_n\) able to account for important phenomena (e.g. in security protocols and cryptography) that have so far been modelled in terms of, e.g., typed logic calculi or untyped logic programs for authentication [1], semantics for security [18] and, most recently, session types for information flow control type systems [14].

The second reason regards how actions are described. Even though DEL is able to account for highly complex and structured actions via appropriate action models, it lacks a feature that we are interested in. Indeed, an action model is underspecified with respect to the agent (or group of agents) responsible for the occurrence of an action. On the contrary, we include formulae which make explicit the agent involved in the information-changing action. This is particularly relevant when trust assessments enter the picture: it does make a difference whether information is sent by a trusted or by an untrusted source.

The third point regards depth-boundedness. In the present work, differently from DBBL [13], virtual information is interpreted as information received from a distinct agent, and the depth of the reasoning is the distance between the receiver of a content and its original source. Therefore, depth measures how widely a content has been shared among the agents. This is another parameter, missing in DEL, that can be important for trust assessment.

Trust is a central notion for the analysis of secure computational systems. In particular, the cognitive theory of Castelfranchi and Falcone [9], which analyses the notion of trust in terms of goals, beliefs, capabilities, and intentions, has received great attention from logicians. For example, [17] builds BDI-like [19] logical models of trust based on [9]. A distinct approach to trust is taken in [2, 16]. These works are characterized by a “speaks for” modality which expresses delegation among agents. Moreover, [16] formalizes a “says” modality, and in this logic a content is trusted if it is said by an agent who is delegated via the “speaks for” modality. Finally, in [22] trust is conceived as a consistency-checking function: an agent trusts an incoming message if it is consistent with her own profile; when this is not the case, two distinct policies may apply. On the one hand, if the message was issued by a less reputable agent, then that content is distrusted, i.e. rejected. On the other hand, if the message was issued by a more reputable agent, then an operation of mistrust follows, leading to acceptance of the message and to a contraction of the reader’s initial profile. In the present work, we conceive trust as agreement within a relevant set of agents: an agent needs to check whether everyone more informed than herself agrees on a content in order to trust it.

3 Examples

We start by providing an example to illustrate the type of situations our logic aims to model. Anne, Bob, and Charles have distinct authorizations to access three servers: \(s_1\), \(s_2\), and \(s_3\). Charles is authorized to read from all of them, Bob from all but \(s_3\), and Anne only from \(s_1\). This is reflected by the order \(c\prec b\prec a\). Each of them is at liberty to share information with the others. In this context the transmission of information is represented by the acquisition of new authorizations to access more servers.

Example 1

Bob knows that \(\lnot (p\wedge u)\), since p and u cannot both be true at the same time, but he does not know whether \(\lnot p\) or \(\lnot u\) is the case. This gap is filled by Charles, who decides to share the information p and r stored on \(s_3\) with Bob, who can read these contents through his access to \(s_2\). As no one in the hierarchy above Bob disagrees on those formulae, he decides to write them on \(s_2\), and now Bob is in a position to determine a previously unnoticed fact, i.e. that \(\lnot u\) is true. Bob writes p and r on different parts of \(s_2\), depending on whether or not he wants to share those contents, as each partition can be granted different accessibility rights for other agents.

Example 2

Bob decides to share p with Anne, who can now read this content through her access to \(s_1\). But as long as she gets it only from Bob, she might be in doubt whether to trust it. If Charles himself also shares p from \(s_3\) in a way that Anne can read it from \(s_1\), then every agent above Anne will have shared p with her directly; now she trusts p, and writes that content on \(s_1\).

Example 3

Bob shares \(\lnot q\) from \(s_2\) in a way that Anne can read it from \(s_1\). But Anne does not trust \(\lnot q\), because she does not receive it from every agent more informed than herself. Finally, Charles also shares r from \(s_3\) in a way that Bob can read it from \(s_2\). Bob trusts r. He is therefore allowed to share r on his own, but he chooses not to transmit r to Anne, possibly because r is reserved information. Indeed, Bob gave Anne only the authorization to access the part of \(s_2\) containing \(\lnot q\), and not the part with r.

A few remarks are worth making about the above scenarios. The ability to save data on separate partitions of an agent’s available memory is a crucial way to model selective access granted to other agents: this is what happens for Bob in Example 1, where he wants to make p accessible to others, but keep r private. Another important aspect of communication between resource-bounded agents is that they might receive redundant information and use it as a confirmation proxy: this is what happens to Anne in Example 2, who receives p from Bob and then separately from Charles, both agents who she knows have access to more information than herself, thereby obtaining the kind of trustworthiness she seeks in order to accept p. Finally, the combination of the above characteristics makes it clear that although information may be trustworthy (in the sense of all agents being potentially able to transmit it to others), secrecy can reduce the amount of information available to the group as a whole, as happens to Anne in Example 3.

4 Syntax

In the present section we specify the elements of the syntax. First, we declare the symbols employed within the language; then we explain how formulae are built from those sets of symbols.

For the language of DBBL-BI\(_n\) we need the following objects: a finite set \(\mathcal {A}\) of variables for agents; a finite set \(\mathcal {S}^0\) of variables for atomic informational states; a symbol for a composition function \(+\) which takes as arguments a pair of states and returns as output the state which is the composition of the two. The composition function allows us to distinguish between the separate partitions of an agent’s memory, so as to preserve the separate access granted to other agents, as in Example 1 above. \(\mathcal {R}\) is a set of relation symbols for accessibility relations: \(R_{i}\) is for single-agent accessibility among states for agent i, and \(R_{i,j}\) is for information transmission from agent i to agent j. \(R_G\) and \(R_{G,j}\) are the corresponding relations for accessibility and transmission for a group of agents \(G\subseteq \mathcal {A}\). \(\mathcal {P}\) is a finite set of propositional variables. Formation of complex formulae is closed under the set \(\mathcal {C}\) of classical connectives, and under the set \(\mathcal {K}\) of epistemic operators.

Definition 1

(Syntax of DBBL-BI\(_n\)). The following sets constitute the elements of the syntax.

$$\begin{aligned} \mathcal {A}=\{i, j, \dots ,l\} \end{aligned}$$
$$\begin{aligned} \mathcal {S}^0=\{s,s',s'',\dots , s^n\} \end{aligned}$$
$$\begin{aligned} \mathcal {F}=\{+\} \end{aligned}$$
$$\begin{aligned} \mathcal {R}=\{R_i,R_{i,j}, R_G, R_{G,j}\} \end{aligned}$$
$$\begin{aligned} \mathcal {P}=\{p,q,\dots , r\} \end{aligned}$$
$$\begin{aligned} \mathcal {C}=\{\wedge , \vee , \rightarrow , \lnot \} \end{aligned}$$
$$\begin{aligned} \mathcal {K}=\{\Diamond , BI_i, DBI_i, I_i\} \end{aligned}$$

Definition 2

(Language of DBBL-BI\(_n\)). Formulae of the language of DBBL-BI\(_n\) are inductively defined by the following grammar in BNF:

\( \begin{array}{llll} &{} s:\phi _j &{}::= &{} s:p_j\mid s:(\lnot \phi )_j\mid s:(\phi \wedge \phi )_j\mid s:(\phi \vee \phi )_j\mid s:(\phi \rightarrow \phi )_j\mid \\ &{} &{} &{} s:\Diamond \phi _j\mid s:BI(\phi _i)_j\mid s:DBI(\phi _G)_j\mid s:I(\phi _{i})_j \\ &{}\mathfrak {r} &{}::= &{} R_j(s,s')\mid R_{i,j}(s,s') \end{array} \)

Note that, as is standard in labelled logics [27], relational formulae are also introduced within the syntax. Labelled formulae of the form \(s:\phi _j\) are read as “agent j has access to information \(\phi \) at state s”. Moreover, since labelled formulae can be indexed by groups of agents as well, \(s:\phi _G\) is read as “group G distributively holds information \(\phi \) at state s”. In the following, to aid readability we rewrite \(s:BI(\phi _i)_j\) as \(s:BI_j\phi _i\), \(s:DBI(\phi _G)_j\) as \(s:DBI_j\phi _G\), and \(s:I(\phi _{i})_j\) as \(s:I_j\phi _{i}\).

Relational formulae express accessibility between states:

  • we read \(R_j(s,s')\) as “state \(s'\) is accessible from state s for agent j”;

  • we read \(R_{i,j}(s',s)\) as “agent i gives the authorization to agent j to access state \(s'\) from state s”.

Modal formulae allow us to reason about processes of information access and transmission:

  • we read \(s:\Diamond \phi _j\) as “agent j can access from state s a state that contains information \(\phi \)”;

  • we read \(s:BI_j\phi _i\) as “agent j becomes informed at state s of information \(\phi \) by agent i”;

  • we read \(s:DBI_j\phi _G\) as “agent j becomes distributively informed at state s of information \(\phi \) by group G”;

  • we read \(s:I_j\phi _i\) as “agent j is informed at state s of information \(\phi \)”.

The meanings of the last three sentences differ substantially. If \(s:BI_j\phi _i\) holds, then there is a channel across which agent i makes information \(\phi \) available to agent j. The DBI operator is analogous to the notion of distributed knowledge in standard epistemic logic: in this case a channel is established by possibly many agents in the group G for a single agent j to access information available to them. Note that in the case of DBI it is the group G that possesses distributed information about \(\phi \). The difference between becoming informed and being informed lies in the degree of warrant that an agent has towards some information. If \(s:BI_j\phi _i\) holds, then agent j has just received access to a piece of information \(\phi \) from i at state s. If \(s:I_j\phi _i\) holds, the agent additionally possesses a sufficient amount of warrant that the information \(\phi \) she received access to at state s is trustworthy, in the sense that the same information becomes accessible from all agents that stand in a certain relation with her, namely those higher in a shared hierarchy. For this reason, if agent j is informed of \(\phi \), then she may make that content available to other agents. This is not the case if \(s:BI_j\phi _i\) holds but \(s:I_j\phi _i\) does not.
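To fix ideas, the two syntactic categories of Definition 2 can be rendered as a small abstract syntax. The following Python sketch is ours and purely illustrative: all class names are our own, and the DBI and I operators (which would be encoded analogously to BI) are omitted for brevity.

```python
from dataclasses import dataclass
from typing import Optional, Union

Agent = str   # elements of A
State = str   # labels for states in S

@dataclass(frozen=True)
class Prop:                    # propositional variable p, q, ...
    name: str

@dataclass(frozen=True)
class Neg:                     # ¬φ
    sub: "Formula"

@dataclass(frozen=True)
class Bin:                     # φ ∧ ψ, φ ∨ ψ, φ → ψ
    op: str                    # one of "and", "or", "imp"
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class Diamond:                 # ◇φ
    sub: "Formula"

@dataclass(frozen=True)
class BI:                      # BI(φ_i): the sender i makes φ available
    sender: Agent
    sub: "Formula"

Formula = Union[Prop, Neg, Bin, Diamond, BI]

@dataclass(frozen=True)
class Labelled:                # s : φ_j — "agent j has access to φ at s"
    state: State
    formula: Formula
    agent: Agent

@dataclass(frozen=True)
class Rel:                     # R_j(s, s') or R_{i,j}(s, s')
    sender: Optional[Agent]    # None encodes plain accessibility R_j
    agent: Agent
    src: State
    dst: State

# s1 : BI_a p_b — Anne (a) becomes informed of p by Bob (b) at s1
example = Labelled("s1", BI(sender="b", sub=Prop("p")), agent="a")
```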

5 Semantics

In the semantics, the elements of the syntax are interpreted as agents, states, functions, and accessibility relations. We do not use a separate notation to distinguish the symbols of the syntax from their interpretation in the semantics, in order to avoid unnecessary burden on the reader. The choice of model determines which set of states is selected and, accordingly, which formulae are valid at those states.

Definition 3

(Model). A model for DBBL-BI\(_n\) is a tuple:

$$\mathcal {M}=((\mathcal {S}^0,+), \mathcal {A},\{R_i\}_{i\in \mathcal {A}},\{R_{i,j}\}_{\{i,j\}\subseteq \mathcal {A}}, \preceq , \mathcal {P}, v)$$

where:

  • \(+\) is a commutative, associative, and idempotent dyadic function such that:

    $$\begin{aligned}&\mathcal {S}^{n+1} = \mathcal {S}^{n}\cup \{s'+s''\mid s',s''\in \mathcal {S}^{n}\};\\&\mathcal {S} = \underset{n\in \mathbb {N}}{\bigcup }\mathcal {S}^n; \end{aligned}$$

    i.e. \(\mathcal {S}^0\) is the set of atomic states; \(\mathcal {S}^{n+1}\) is the union of \(\mathcal {S}^n\) and of the set of all states composed from elements of \(\mathcal {S}^n\); \(\mathcal {S}\) is the union of all \(\mathcal {S}^{n}\);

  • \(\mathcal {A}\) is a finite set of agents;

  • \(R_i\subseteq \mathcal {S}\times \mathcal {S}\) is a preorder such that:

    $$\begin{aligned}&\text {if } (s, s'+s'')\in R_i, \text { then } (s'+s'',s')\in R_i \text { and } (s'+s'',s'')\in R_i; \\&\text {if } (s, s')\in R_i \text { and } (s, s'')\in R_i, \text { then } (s,s'+s'')\in R_i; \end{aligned}$$

i.e. if an agent is able to access a composite state \(s'+s''\), then she can also access its parts \(s'\) and \(s''\). Moreover, if an agent is able to access two distinct states \(s'\) and \(s''\), then she can also access their composition \(s'+s''\).

  • \(R_{i,j}\subseteq \{(s',s)\mid (s',s')\in R_i, (s,s)\in R_j\}\) such that:

    $$\begin{aligned}&\text {if } (s,s)\in R_i, \text { then } (s,s)\in R_{i,i}; \end{aligned}$$

i.e. \(R_{i,j}\) satisfies a trivial condition of self-information: if an agent is authorized to access s, then she receives from herself all information stored at s.

  • \(\preceq \subseteq \mathcal {A}\times \mathcal {A}\) is a preorder;

  • \(\mathcal {P}=\{p,q,\dots , r\}\);

  • \(v:\mathcal {S}\mapsto (\mathcal {P}\rightharpoonup \{1,0\})\) is the valuation function (with \(\rightharpoonup \) denoting a partial function) s.t. for all \(s, s', s''\in \mathcal {S}\):

    • if \((p,1)\in v(s)\), then \((p,0)\not \in v(s')\);

    • if \((p,0)\in v(s)\), then \((p,1)\not \in v(s')\);

    • if \(s'+s''=s\), then \(v(s')\subseteq v(s) \text { and } v(s'')\subseteq v(s)\).

The function v associates to each state a partial valuation over \(\mathcal {P}\). The valuation function satisfies three constraints. The first two impose monotonicity: if a proposition is true (resp. false) at some state, then it cannot be false (resp. true) elsewhere. Accordingly, the present work does not consider the transmission of contradictory information. The third condition is needed in order to correctly represent the fact that some states are part of other states.
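A computational reading of Definition 3 may be helpful. The sketch below is our own minimal encoding, not part of the formal system: composite states are modelled as frozensets of atomic states, which makes \(+\) commutative, associative, and idempotent for free, and the constraints on v are checked explicitly.

```python
from itertools import combinations

def compose(s1, s2):
    # '+' on states: a composite state is the set of its atomic parts,
    # so composition is commutative, associative, and idempotent
    def as_set(s):
        return s if isinstance(s, frozenset) else frozenset([s])
    return as_set(s1) | as_set(s2)

def consistent_valuation(val):
    # val[s] is a partial map from propositional variables to {1, 0};
    # first two constraints: no variable is 1 at one state and 0 at another
    for s, t in combinations(val, 2):
        shared = set(val[s]) & set(val[t])
        if any(val[s][p] != val[t][p] for p in shared):
            return False
    return True

def parts_included(val, s, s1, s2):
    # third constraint: if s = s1 + s2, then v(s1) ⊆ v(s) and v(s2) ⊆ v(s)
    return all(val.get(s, {}).get(p) == b
               for part in (s1, s2)
               for p, b in val.get(part, {}).items())
```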

Via these elements of the semantics, we are now in a position to define new objects: the hierarchy \(\preceq \subseteq \mathcal {A}\times \mathcal {A}\), and the accessibility relations for groups of agents. We start with the former.

Definition 4

Agent i is informed at least as much as agent j if and only if i has access to every state to which j has access: \(i\preceq j\) iff \(S_j\subseteq S_i\), where \(S_i=\{s\in \mathcal {S}\mid R_i(s,s)\}\).
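Definition 4 admits a direct check. Continuing the illustrative encoding (with `R[i]` the set of pairs in \(R_i\)):

```python
def accessible_states(R, i):
    # S_i = { s ∈ S | R_i(s, s) }
    return {s for (s, t) in R[i] if s == t}

def at_least_as_informed(R, i, j):
    # i ⪯ j  iff  S_j ⊆ S_i
    return accessible_states(R, j) <= accessible_states(R, i)

def strictly_more_informed(R, i, j):
    # i ≺ j: the strict counterpart of ⪯
    return at_least_as_informed(R, i, j) and not at_least_as_informed(R, j, i)
```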

Both \(R_i\) and \(R_{i,j}\) can be extended to groups of agents. \((s,s')\in R_G\) means that group G is authorized to access state \(s'\) from s; \((s,s')\in R_{G,j}\) means that the group G gives the authorization to j to access the composite state s from \(s'\). As is standard, we denote by \(\prec \) the strict counterpart of \(\preceq \). The extension to groups of agents is obtained as follows.

Definition 5

(Accessibility relations for groups of agents).

$$\begin{aligned}&(s,s')\in R_G \text { iff } (s,s') \text { is an element of the transitive closure of } \bigcup _{i\in G} R_i; \\&(s_1+\cdots + s_n, s)\in R_{G,j} \text { iff for every } 1\le m\le n \text { there is } i\in G \text { s.t. } (s_m,s)\in R_{i,j}. \end{aligned}$$

These definitions say that a group G can access a state if and only if some chain of accesses by agents \(i\in G\) leads to it. Moreover, a group G gives the authorization to access a composite state if and only if every component of that state is made accessible to j by some agent \(i\in G\). Note that when a group is formed by a single agent we treat \(R_{\{i\},j}\) and \(R_{i,j}\) as equivalent.
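Under the same illustrative conventions, with `Rij` a dictionary mapping a pair of agents (i, j) to the set of pairs in \(R_{i,j}\), Definition 5 can be sketched as follows:

```python
def group_access(R, G):
    # R_G: transitive closure of the union of the R_i for i in G
    rel = set().union(*(R[i] for i in G))
    changed = True
    while changed:
        changed = False
        for (a, b) in list(rel):
            for (c, d) in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d))
                    changed = True
    return rel

def group_transmits(Rij, G, j, parts, s):
    # (s_1 + ... + s_n, s) ∈ R_{G,j} iff every component s_m is made
    # accessible to j at s by some agent i in G
    return all(any((sm, s) in Rij.get((i, j), set()) for i in G)
               for sm in parts)
```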

Below we use A as a meta-variable for both labelled and relational formulae.

5.1 Satisfiability Relations

In this subsection we introduce the satisfiability relations. \(\mathcal {M}\Vdash ^k \texttt{A}\) means that model \(\mathcal {M}\) makes A true at depth k. Falsity is standard, via negation. A model makes a labelled formula undetermined (\(*\)) just in case it makes it neither true nor false, i.e. \(\mathcal {M}\Vdash _*^k s:\phi _i\) iff \(\mathcal {M}\nVdash ^k s:\phi _i\) and \(\mathcal {M}\nVdash ^k s:\lnot \phi _i\), where \(s:\phi _i\) means that i holds \(\phi \) true at s, and \(s:\lnot \phi _i\) means that i holds \(\lnot \phi \) true at s, i.e. \(\phi \) false at s. When a model \(\mathcal {M}\) satisfies neither of these two formulae at depth k, agent i lacks any information about the truth-value of \(\phi \) at s, which remains undetermined for her.

Recall that, informally, the depth at which a formula is validated is a parameter measuring the distance between the agent who evaluates the formula and its original source. Hence, for example, \(\mathcal {M}\Vdash ^k s:\phi _i\) means that \(\phi \) is true at state s for agent i after the formula has gone through at most k informational channels. We consider this depth as meta-information not available to the agents, but known to the modeller. Nonetheless, agents are aware of the lowest k-bound, by counting the nested operators in formulae in which their index occurs as the outermost one of a BI operator. For example, if an agent h holds at some state that \(BI_hBI_jBI_lp_i\), she knows that the distance between herself and the agent who issued p, i.e. agent i, is at least 3. Satisfaction of relational formulae is not qualified by a depth, since they do not express epistemic states of the agents but properties of the model.

In the following we employ two special function symbols, \(\mathcal {F}_{\lnot }^{v}\) and \(\mathcal {F}_{\bullet }^{v}\), with \(\bullet \) ranging over \(\{\wedge ,\vee ,\rightarrow \}\). \(\mathcal {F}_\lnot ^v\) is the deterministic function that computes the truth-value of the negation of formulae given valuation v, and \(\mathcal {F}_\bullet ^v\) is the non-deterministic function that computes the truth-value of formulae whose main connective is one of \(\{\wedge , \vee , \rightarrow \}\) given valuation v. These functions agree with the truth-tables of Table 1. Note that in the following clauses s and \(s'\) range over the full set of states \(\mathcal {S}\).

Definition 6

(Satisfaction of formulae).

  1. \(\mathcal {M}\Vdash R_i(s,s')\) iff \((s,s')\in R_i\)

  2. \(\mathcal {M}\Vdash R_{i,j}(s,s')\) iff \((s,s')\in R_{i,j}\)

  3. \(\mathcal {M}\Vdash ^0 s:p_i\) iff \((p,1)\in v(s)\) and \(\mathcal {M}\Vdash R_i(s,s)\)

  4. \(\mathcal {M}\Vdash ^k s:\lnot \phi _i\) iff \(\mathcal {F}_\lnot ^v(s:\phi _i)=1\)

  5. \(\mathcal {M}\Vdash ^k s:(\phi \bullet \psi )_i\) iff \(\mathcal {F}_{\bullet }^{v}(s:\phi _i,s:\psi _i)=1\), with \(\bullet \in \{\wedge , \vee , \rightarrow \}\)

  6. \(\mathcal {M}\Vdash ^k s:\Diamond \phi _j\) iff \(\mathcal {M}\Vdash ^k s':\phi _j\) for some \(s'\) s.t. \(\mathcal {M}\Vdash R_j(s,s')\)

  7. \(\mathcal {M}\Vdash ^{k+1} s:BI_j\phi _i\) iff \(\mathcal {M}\Vdash ^{k} s':\phi _i\) for some \(s'\) s.t. \(\mathcal {M}\Vdash R_{i,j}(s',s)\)

  8. \(\mathcal {M}\Vdash ^{k+1} s:DBI_j\phi _G\) iff \(\mathcal {M}\Vdash ^{k} s':\phi _G\) for some \(s'\) s.t. \(\mathcal {M}\Vdash R_{G,j}(s',s)\)

  9. \(\mathcal {M}\Vdash ^{k+1} s:I_j\phi _i\) iff \(\mathcal {M}\Vdash ^{k'+1} s:BI_j\phi _i\) for all (at least one) \(i\prec j\), with \(k'\le k\)

The formula \(R_i(s,s')\) is true in a model \(\mathcal {M}\) if an access relation for agent i holds in \(\mathcal {M}\) from state s to state \(s'\) in \(\mathcal {S}\). The semantic clause for \(R_{i,j}\) is similar.

The formula \(s:p_{i}\) is true at depth 0 in \(\mathcal {M}\) iff (p, 1) is in the valuation at s and agent i has access to s. Negation and the other connectives behave as given by the truth-tables of Table 1.

Table 1. Informational truth-tables.

Clause 6 is for the standard modal operator \(\Diamond \). Informally, if \(\Diamond \phi _i\) holds at s, then agent i has access to a state reachable from s where \(\phi _i\) holds.

Clause 7 introduces the BI operator. Agent j becomes informed at depth \(k+1\) and at state s of \(\phi \) from agent i iff, at the lower depth k, agent i gives the authorization to j to access a state \(s'\) where \(\phi _i\) is true. By this definition and clause 4, the interpretation of \(\mathcal {M}\Vdash ^{k+1} s:\lnot BI_j\phi _i\) is that \(\mathcal {F}_\lnot ^v(s:BI_{j}\phi _i)=1\), and this holds iff \(\mathcal {M}\Vdash ^{k} s':\lnot \phi _i\) holds for all \(s'\) s.t. \(\mathcal {M}\Vdash R_{i,j}(s',s)\). The same reasoning holds for the other modal operators. Note that redundant and trivial information transmissions are allowed: an agent might become informed of a formula she already holds, and since \(R_i(s,s)\) implies \(R_{i,i}(s,s)\) (see Definition 3), every agent becomes informed by herself of every formula she holds. Moreover, this clause also accounts for satisfaction of formulae with nested BI operators: for example, \(\mathcal {M}\Vdash ^{k+2} s:BI_hBI_jp_i\) is satisfied when h becomes informed at s by j that j becomes informed by i that p. As before, an analogous reasoning holds for the other modal operators.

Clause 8 introduces the DBI operator of distributed becoming informed. This operator works as a closure under connectives for BI formulae. Suppose \(s:BI_h\phi _i\) and \(s:BI_h\psi _j\). It seems reasonable to hold also \(s:BI_h(\phi \wedge \psi )_{i,j}\). However, the semantics of BI forbids this inference, because BI represents the transmission of information as a one-to-one relation between agents: exactly one agent is the access provider and exactly one other agent is the access recipient. On the contrary, DBI represents a many-to-one transmission of information: there is exactly one agent who is the recipient of the access authorization, but there are possibly many providers in \(\mathcal {A}\). In other words, an agent j is distributively informed of \(\phi \) by a group G when G is distributively informed that \(\phi \) is true, and \(\phi \) is sent by G to j.

Finally, clause 9 says that an agent is informed that \(\phi \) at s and at depth \(k+1\) iff she becomes informed that \(\phi \) at s, at depth at most \(k+1\), by every agent (at least one) higher than herself in the hierarchy imposed by \(\prec \). For conceptual clarity, note that we assume that each agent is aware of this hierarchy.
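The interplay of depths in clauses 7 and 9 can be made explicit in code. In the sketch below, `holds` (an evaluator for the non-modal clauses), `M.Rij`, `M.R`, and `M.A` are assumed from the earlier sketches; the depth decreases by one across each transmission channel, and being informed quantifies over all agents strictly above the receiver.

```python
def holds_BI(M, depth, s, phi, sender, receiver):
    # clause 7: M ⊩^{k+1} s : BI_receiver φ_sender iff
    # M ⊩^k s' : φ_sender for some s' with R_{sender,receiver}(s', s)
    if depth < 1:
        return False
    channels = M.Rij.get((sender, receiver), set())
    return any(holds(M, depth - 1, s2, phi, sender)
               for (s2, t) in channels if t == s)

def holds_I(M, depth, s, phi, receiver):
    # clause 9: the receiver is informed iff she becomes informed of φ,
    # at some depth ≤ the current one, by every agent i ≺ receiver
    # (and there is at least one such agent)
    above = {i for i in M.A if strictly_more_informed(M.R, i, receiver)}
    return bool(above) and all(
        any(holds_BI(M, d, s, phi, i, receiver) for d in range(1, depth + 1))
        for i in above)
```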

Definition 7

(Structural conditions).

The structural conditions continue the numbering of the satisfaction clauses of Definition 6:

  10. (State Composition) if \(\mathcal {M}\Vdash ^{k} s:\phi _i\), then \(\mathcal {M}\Vdash ^{k} (s+s'):\phi _i\);

  11. (Grouping) if \(\mathcal {M}\Vdash ^{k} s:\phi _i\) and \(i\in G\subseteq \mathcal {A}\), then \(\mathcal {M}\Vdash ^{k} s:\phi _G\);

  12. (Depth-Monotonicity) if \(\mathcal {M}\Vdash ^{k} s:\phi _i\), then \(\mathcal {M}\Vdash ^{k+1} s:\phi _i\);

  13. (Trust) if \(\mathcal {M}\Vdash ^{k} s:I_j\phi _i\), then \(\mathcal {M}\Vdash ^{k} s:\phi _j\);

  14. (Restricted Transitivity) if \(\mathcal {M}\Vdash ^{k} s:I_j\phi _i\) for every \(\phi _i\) s.t. \(\mathcal {M}\Vdash ^{k'} s'':\phi _i\), then \(\mathcal {M}\Vdash R_{i,j}(s'',s)\) and \(\mathcal {M}\Vdash R_{j,h}(s,s')\) entail \(\mathcal {M}\Vdash R_{i,h}(s'',s')\).

The clause of state composition (10) says that if an arbitrary \(\phi _i\) is true at state s, then it is also true at \(s+s'\).

The grouping clause (11) says that if agent i holds that \(\phi \), then any group \(\{i,j\}\) including i also distributively holds that \(\phi \).

It is worth highlighting the importance of Depth-Monotonicity (12): if a formula is determined after at most k steps of information transmission, it remains determined after \(k+1\) steps. What this condition says is that the transmission of information is conservative (no information is lost) and cumulative (indeterminacy may eventually be reduced). Moreover, it produces a desirable side-effect: it permits the manipulation of formulae that hold at different depths. For example, suppose \(\mathcal {M}\Vdash ^0 s:p_i\) and \(\mathcal {M}\Vdash ^1 s:q_i\). By Depth-Monotonicity, \(\mathcal {M}\Vdash ^1 s:p_i\), and finally \(\mathcal {M}\Vdash ^1 s:(p\wedge q)_i\). Without Depth-Monotonicity, this kind of inference would require more complex reasoning.

The I operator yields the policy of trust expressed by clause 13: when an agent is informed that \(\phi \), she can write \(\phi \) within her own state.

Finally, clause 14 produces a kind of restricted transitivity for the \(R_{i,j}\) relations. It says that when an agent j is informed at s of every formula \(\phi _i\) satisfied at a state \(s''\), then \(R_{i,j}(s'',s)\) and \(R_{j,h}(s,s')\) entail \(R_{i,h}(s'',s')\). Informally, if there is a channel from agent i to j and one from j to h, and if j has checked that every content from the former channel is trustworthy, then there is also an indirect channel from i to h. We give a simple example to make the idea clear.

Fig. 1. Model \(\mathcal {M}_{1}\)

In model \(\mathcal {M}_1\) (see Fig. 1) there are two channels, represented by the following statements: \(\mathcal {M}_1\Vdash R_{i,j}(s_3,s_2)\) and \(\mathcal {M}_1\Vdash R_{j,h}(s_2,s_1)\) (these relations are represented by dashed lines in the model). Therefore, at \(s_2\) agent j receives p from i, and at \(s_1\) agent h receives \(BI_jp_i\) from j. Hence, \(\mathcal {M}_1\Vdash ^1 s_2:BI_jp_i\) and \(\mathcal {M}_1\Vdash ^2 s_1:BI_hBI_jp_i\). Suppose that we are interested in the satisfaction of the formula \(s_1:I_hBI_jp_i\). By clause 9, \(\mathcal {M}_1\Vdash ^2 s_1:I_hBI_xp_i\) iff \(\mathcal {M}_1\Vdash ^2 s_1:BI_hBI_xp_i\) for all \(x\prec h\), i.e. iff \(\mathcal {M}_1\Vdash ^2 s_1:BI_hBI_jp_i\) and \(\mathcal {M}_1\Vdash ^2 s_1:BI_hBI_ip_i\). \(\mathcal {M}_1\Vdash ^2 s_1:BI_hBI_jp_i\) holds by clause 7 alone. But clause 14 is needed for the satisfaction of \(\mathcal {M}_1\Vdash ^2 s_1:BI_hBI_ip_i\). Indeed, in this case h must receive \(BI_ip_i\) from i, and by clause 7 this means that model \(\mathcal {M}_1\) should satisfy \(R_{i,h}(s_3,s_1)\). We know that agent j considers \(s_3\) a trusted source, i.e. \(\mathcal {M}_1\Vdash ^1 s_2:I_j\phi _i\) for every \(\phi _i\) that holds at \(s_3\), and that \(s_2\) is accessed by h via the authorization granted by j (\(\mathcal {M}_1\Vdash R_{j,h}(s_2,s_1)\)). These conditions are sufficient to establish a new indirect informational channel from i to h through the mediation of j. Therefore, clause 14 entails \(\mathcal {M}_1\Vdash R_{i,h}(s_3,s_1)\). Now both \(\mathcal {M}_1\Vdash ^2 s_1:BI_hBI_jp_i\) and \(\mathcal {M}_1\Vdash ^2 s_1:BI_hBI_ip_i\) hold. Then it is also the case that \(\mathcal {M}_1\Vdash ^2 s_1:I_hBI_jp_i\), and by Trust (clause 13) \(\mathcal {M}_1\Vdash ^2 s_1:BI_hp_i\). This conclusion is perfectly consistent with the semantic clause for BI, since clause 14 entails \(\mathcal {M}_1\Vdash R_{i,h}(s_3,s_1)\).
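The construction just described can be phrased as a closure operation on the transmission relations. In the following sketch (ours), `fully_trusted(M, j, i, src, s)` is an assumed predicate expressing that \(\mathcal {M}\Vdash ^k s:I_j\phi _i\) holds for every \(\phi _i\) satisfied at src:

```python
def close_channels(M):
    # clause 14: if j trusts everything arriving from state src via
    # R_{i,j}(src, s), and j grants h access via R_{j,h}(s, s1), then
    # the indirect channel R_{i,h}(src, s1) is added
    changed = True
    while changed:
        changed = False
        for (i, j) in list(M.Rij):
            for (src, s) in list(M.Rij[(i, j)]):
                if not fully_trusted(M, j, i, src, s):
                    continue
                for h in M.A:
                    for (t, s1) in list(M.Rij.get((j, h), set())):
                        if t == s:
                            M.Rij.setdefault((i, h), set())
                            if (src, s1) not in M.Rij[(i, h)]:
                                M.Rij[(i, h)].add((src, s1))
                                changed = True
```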

Note that when two agents are unrelated, e.g. \(j\not \preceq i\) and \(i\not \preceq j\), there is no propagation of trust. Indeed, according to clause 14 this may occur only when a hierarchy can be established. Consider for example the variant model \(\mathcal {M}_{1b}\) (see Fig. 2), in which i and j are unrelated by \(\preceq \). It is easy to check that in this case there is no propagation of trust from i through j to h as there is in \(\mathcal {M}_1\), because there is no trust at all between j and i. In order to trust a formula issued by i, agent j needs to be lower than i in the hierarchy imposed by \(\prec \), i.e. it is required that \(i \prec j\). Since they are unrelated, j is not able to trust any information coming from agent i, i.e. she is not able to infer any formula \(s_2:I_{j}\phi _{i}\), for any \(\mathcal {M}_{1b}\Vdash ^k s_{3}: \phi _{i}\).

Fig. 2. Model \(\mathcal {M}_{1b}\)

6 Back to the Example

We now provide a detailed analysis of model \(\mathcal {M}_{2}\), which represents the transmission of information between Anne, Bob, and Charles in the Examples of Sect. 3 (see Fig. 3). Dotted lines in the model represent state composition, e.g. \(s_{2.1}\) and \(s_{2.2}\) jointly compose \(s_2\).

Consider Example 1. Bob knows that \(\lnot (p\wedge u)\), but he does not know whether \(\lnot p\) or \(\lnot u\) is the case (this is legitimate, since propositional connectives have a non-deterministic semantics). Charles gives Bob the authorization to access \(s_3\) from his access to \(s_2\); Bob thus receives both p and r. However, Bob has distinct plans for these two pieces of information: he is prepared to share p, but not r. Therefore, he decides to store the incoming information on different parts of \(s_2\). This is reflected by the satisfaction of the following relational formulae:

  • \(\mathcal {M}_2\Vdash R_{c,b}(s_{3.2},s_{2.2})\) says that the information stored at \(s_{3.2}\), i.e. r, is made available for access at \(s_{2.2}\);

  • \(\mathcal {M}_2\Vdash R_{c,b}(s_{3.1},s_{2.1})\) says that the information stored at \(s_{3.1}\), i.e. p, is made available for access at \(s_{2.1}\).

Since p is information that Charles owns on his own, it holds at depth 0 for him, i.e. \(\mathcal {M}_2\Vdash ^0 s_{3.1}:p_c\). Bob receives that formula at depth 1: \(\mathcal {M}_2\Vdash ^1 s_{2.1}:BI_bp_c\). As Bob receives p from Charles, and no one else in the hierarchy above Bob disagrees with p, Bob is informed of p at depth 1: \(\mathcal {M}_2\Vdash ^1 s_{2.1}:I_b p\). Now Bob satisfies the constraint for trusting p, so he writes that content on \(s_2\). Hence \(\mathcal {M}_2\Vdash ^1 s_{2.1}:p_b\). An analogous analysis holds for r. Before receiving information from Charles, Bob knew that \(\lnot (p\wedge u)\) but lacked any information about the truth-values of \(\lnot p\) and \(\lnot u\), i.e. \(\mathcal {M}_2\Vdash ^0 s_2:\lnot (p\wedge u)_b\), \(\mathcal {M}_2\Vdash ^0_*s_2:\lnot p_b\), and \(\mathcal {M}_2\Vdash ^0_*s_2:\lnot u_b\). But having trusted p, at depth 1 Bob is able to fill these truth-value gaps, concluding that \(\lnot u\) is the case: \(\mathcal {M}_2\Vdash ^1 s_2:\lnot u_b\).

Fig. 3. Model \(\mathcal {M}_{2}\)

Consider now Example 2. After having written p on \(s_{2.1}\), Bob shares that content with Anne. Therefore \(\mathcal {M}_2\Vdash R_{b,a}(s_{2.1},s_1)\). Since the original source of p is Charles, Anne receives p after it has gone through 2 channels (the first from Charles to Bob, the second from Bob to Anne). Therefore \(\mathcal {M}_2\Vdash ^2 s_1:BI_a p_b\). Additionally, Anne receives p at depth 1 from Charles, i.e. \(\mathcal {M}_2\Vdash R_{c,a}(s_{3.1},s_{1})\). Then every agent above Anne has shared p with her. We can conclude that Anne is informed of p at depth 2 and trusts p at the same depth: \(\mathcal {M}_2\Vdash ^2 s_1:I_a p\) and \(\mathcal {M}_2\Vdash ^2 s_1:p_a\).
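For concreteness, Anne's situation can be replayed mechanically. The encoding below is ours (state and agent names follow Fig. 3), and it checks exactly the consensus condition of clause 9:

```python
# transmission relations of M2 relevant to Example 2
Rij = {
    ("c", "b"): {("s3.1", "s2.1")},  # Charles shares p with Bob
    ("b", "a"): {("s2.1", "s1")},    # Bob shares p with Anne
    ("c", "a"): {("s3.1", "s1")},    # Charles shares p with Anne directly
}
hierarchy = {("c", "b"), ("c", "a"), ("b", "a")}  # c ≺ b ≺ a

def above(j):
    # agents strictly more informed than j
    return {i for (i, k) in hierarchy if k == j}

def becomes_informed(receiver, sender, state):
    # some channel R_{sender,receiver}(s', state) exists
    return any(t == state for (_, t) in Rij.get((sender, receiver), set()))

# Anne is informed of p at s1: every agent above her has shared it with her
print(all(becomes_informed("a", i, "s1") for i in above("a")))  # True
```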

Example 3 is now straightforward. Anne receives \(\lnot q\) from Bob, but not from Charles. This is reflected in the relational formulae satisfied by the model: \(\mathcal {M}_2\Vdash R_{b,a}(s_{2.1},s_1)\) and \(\mathcal {M}_2\nVdash R_{c,a}(s_{2.1},s_1)\). For this reason, \(\mathcal {M}_2\nVdash ^1 s_1:BI_a \lnot q_c\), and therefore Anne is not able to trust \(\lnot q\).

Finally, we highlight two further facts. The first is about DBI. At \(s_1\), Anne receives \(\lnot q\) from Bob and p from Charles. So \(\mathcal {M}_2\Vdash ^1 s_1:BI_a\lnot q_b\) and \(\mathcal {M}_2\Vdash ^1 s_1:BI_a p_c\). Thanks to the structural rules and to the definition of transmission by a group (Definition 5), we infer that the group formed by Bob and Charles has distributed information that \((p\wedge \lnot q)\), and that they transmit that information from the composite state \(s_{2.1}+s_{3.1}\) to Anne, who can read it at \(s_1\): \(\mathcal {M}_2\Vdash R_{\{b,c\},a}(s_{2.1}+s_{3.1},s_1)\). Hence Anne receives distributed information at \(s_1\) that \((p\wedge \lnot q)\) from Bob and Charles: \(\mathcal {M}_2\Vdash ^1 s_1:DBI_a (p\wedge \lnot q)_{b,c}\). As for the second fact, note that information may or may not be preserved at reachable states. For example, Bob can read t at \(s_1\), but that information is not preserved when Bob reaches \(s_2\). Therefore \(s_1\) might be a state that is only temporarily accessible, and Bob loses the authorization to read \(s_1\) when he accesses \(s_2\). On the contrary, Charles does not lose any piece of information when he reaches \(s_3\) from \(s_2\), because in this case the accessibility relation between these two states is symmetric for Charles.

7 Conclusions and Future Works

DBBL-BI\(_n\) models information transmission between agents through access authorization to parts of the available memory, preserving secrecy when required. Agents can receive private communications from a single agent or from a group of agents, via the operators of becoming informed (BI) and distributed becoming informed (DBI) respectively. The I operator models a policy of trust: when the same information is received from all agents with more access, the receiver is safe in trusting the message.

Several extensions are foreseen. Firstly, DBBL-BI\(_n\) has an appropriate proof theory formulated in natural deduction style, with standard soundness and completeness results; these are not included here for reasons of space. Since the aim of depth-bounded logics is to account for computationally tractable consequence relations, it is desirable to study the computational complexity of DBBL-BI\(_n\), devising a decision procedure that works in polynomial time.

Secondly, DBBL-BI\(_n\) can be extended with an additional parameter expressing different degrees of inferential ability as standardly understood in DBBL [13], to complement the measure of the distance between source and receiver presented here.

Finally, we aim at enriching DBBL-BI\(_n\) with a suitable way to compute trustworthiness, by means of a threshold function and degrees of belief as in [3, 10, 11, 25]. Moreover, it would be desirable to model updates with contradictory information, and to have a method for eliminating inconsistencies by means of operations of negative trust as in [22].