Verification and Strategy Synthesis for Coalition Announcement Logic

Coalition announcement logic (CAL) is one of the family of logics of quantified announcements. It allows us to reason about what a coalition of agents can achieve by making announcements in a setting where the anti-coalition may make a simultaneous announcement of their own to preclude the coalition from reaching its epistemic goals. In this paper, we describe a model checking algorithm for CAL that runs in polynomial space and produces winning strategies for coalitions, and we show that the model checking problem for CAL is PSPACE-complete. The algorithm is implemented in a proof-of-concept model checker.


Introduction
In the multi-agent logic of knowledge we investigate what agents know about their factual environment and what they know about each other's knowledge (Hintikka 1962). Public announcement logic extends the logic of knowledge with modalities for truthful public announcements. Such modalities model the event of incorporating trusted information that is similarly observed by all agents (Plaza 2007). The 'truthful' part relates to the trusted aspect of the information: we assume that the novel information is true.
In Ågotnes and van Ditmarsch (2008) the authors propose two generalisations of public announcement logic, GAL (group announcement logic) and CAL (coalition announcement logic). These logics allow for quantification over public announcements made by agents modelled in the system. In particular, the GAL quantifier ⟨G⟩ϕ (parametrised by a subset G of the set of all agents A) says 'there is a truthful announcement made by the agents in G, after which ϕ holds'. Here, the truthful aspect means that the agents in G only announce what they know: if a ∈ G announces ϕ_a, this is interpreted as a public announcement of K_a ϕ_a, and a truthful group announcement by agents in G is a conjunction of such known announcements. The CAL quantifier ⟨[G]⟩ϕ is motivated by game logic (Pauly 2002; Parikh 1985) and van Benthem's playability operator (van Benthem 2014). Here, the modality means 'there is a truthful announcement made by the agents in G such that no matter what the agents not in G simultaneously announce, ϕ holds afterwards'. In Ågotnes and van Ditmarsch (2008) it is, for example, shown that CAL subsumes game logic.
CAL has been far less investigated than other logics of quantified announcements, such as APAL (Balbiani et al. 2008) and GAL, although some combined results have been achieved (Ågotnes et al. 2016;French et al. 2019;van Ditmarsch et al. 2021). In particular, model checking for CAL, which has potential practical implications, has not been studied. For example, in CAL it is possible to express that a group of agents (for instance, a subset of bidders in an auction) can make an announcement such that no matter what other agents announce simultaneously, after this announcement certain knowledge is increased (all agents know that G has won the bid) but certain ignorance also remains (for example, the maximal amount of money G could have offered). The main contribution of this paper is a thorough analysis of the model checking problem for CAL and a description of an implemented open source model checker for CAL and GAL formulas. This paper is a revised and extended version of Galimullin et al. (2018), with detailed proofs and a new section on the implementation of the model-checking algorithm, which also contains a large worked example. The structure of the paper is as follows. Section 2 provides the necessary background on GAL and CAL, and in Sect. 3 we use distinguishing formulas to make a shift from an infinite number of agents' announcements to a finite number of strategies available to them. The model checking algorithm is given in Section 4. The algorithm presented here differs from the one presented in Galimullin et al. (2018) in several respects. Instead of iterating over an explicit list of strategies, it generates and tests strategies one at a time, so that it only uses polynomial space. Instead of returning true and false, the version presented here returns a set of states satisfying the formula; for true formulas starting with a GAL or CAL modality, it also outputs a strategy. 
The model checking algorithm and the proof of PSPACE-completeness build on those for GAL (Ågotnes et al. 2010), but the algorithm for CAL requires some modifications; in particular, the algorithm in Ågotnes et al. (2010) runs in APTIME by 'guessing' strategies, while our algorithm is deterministic. We also describe an efficient (PTIME) special case. The algorithm is implemented in a proof-of-concept model checker MCCAL available at https://github.com/Twelvelines/MCCAL. The implementation and its performance are described in detail in Wang (2019), and briefly in Sect. 5 of this paper.

Introductory Example
Two agents, a and b, want to buy the same item, and whoever offers the greatest sum gets it. Agents may have 5, 10, or 15 pounds, and they do not know which sum the opponent has. Let agent a have 15 pounds, and agent b have 5 pounds. This situation is presented in Fig. 1. In this model (let us call it M), state names denote the money distribution. Thus, 10_a5_b means that agent a has 10 pounds, and agent b has 5 pounds. Labelled edges connect the states that the corresponding agent cannot distinguish. For example, in the actual state (boxed), agent a knows that she has 15 pounds, but she does not know how much money agent b has. Formally, M_{15a5b} ⊨ K_a 15_a ∧ ¬(K_a 5_b ∨ K_a 10_b ∨ K_a 15_b), where M_{15a5b} ⊨ ϕ means that ϕ is satisfied in state 15_a5_b of M, K_i ϕ stands for 'agent i knows that ϕ', ∧ is logical 'and', ¬ is 'not', and ∨ is 'or'. Note that edges represent equivalence relations, and in the figure we omit transitive and reflexive edges.
Next, suppose that the agents bid in order to buy the item. Once one of the agents, say a, announces her bid, she also wants the other agent to remain ignorant of the total sum at her disposal. Formally, we can express this goal as the formula ϕ := K_b(10_a ∨ 15_a) ∧ ¬(K_b 10_a ∨ K_b 15_a) (for bid 10 by agent a). Informally, if a commits to pay 10 pounds, agent b knows that a has 10 or more pounds, but b does not know the exact amount. If agent b does not participate in announcing (bidding), a can achieve the target formula ϕ by announcing K_a 10_a ∨ K_a 15_a. In other words, agent a commits to pay 10 pounds, which signals that she has at least that sum at her disposal. In general, this means that there is an announcement by a such that after this announcement ϕ holds. Formally, M_{15a5b} ⊨ ⟨a⟩ϕ. The updated model M^{K_a 10_a ∨ K_a 15_a}_{15a5b}, which is the restriction of the original model to the states where K_a 10_a ∨ K_a 15_a holds, is presented in Fig. 2.
Indeed, in the updated model agent b knows that a has at least 10 pounds, but not the exact sum. The same holds if agent b announces her bid simultaneously with a in the initial situation. Moreover, a can achieve ϕ no matter what agent b announces, since every announcement by b made in conjunction with a's announcement K_a 10_a ∨ K_a 15_a results in an updated model satisfying ϕ. Formally, M_{15a5b} ⊨ ⟨[a]⟩ϕ.
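To make the example concrete, the model of Fig. 1 and the reasoning above can be replayed in a few lines of Python. The representation (states as pairs of sums, the `knows` helper) is our own illustration, not the paper's:

```python
# A minimal sketch of the bidding model of Fig. 1: a state is a pair
# (a's sum, b's sum), and each agent distinguishes states only by her own sum.
SUMS = [5, 10, 15]
STATES = {(x, y) for x in SUMS for y in SUMS}

def knows(agent, state, prop, states=STATES):
    """K_agent prop: prop holds at every state the agent cannot
    distinguish from `state` within the (possibly restricted) model."""
    i = 0 if agent == 'a' else 1
    return all(prop(s) for s in states if s[i] == state[i])

actual = (15, 5)
# M_{15a5b} |= K_a 15_a: a knows her own sum ...
assert knows('a', actual, lambda s: s[0] == 15)
# ... but not b's: ~(K_a 5_b v K_a 10_b v K_a 15_b)
assert not any(knows('a', actual, lambda s, m=m: s[1] == m) for m in SUMS)

# The announcement K_a 10_a v K_a 15_a restricts the model to states
# where a has at least 10 pounds (the model of Fig. 2).
updated = {s for s in STATES if s[0] >= 10}
# Afterwards b knows a has 10 or 15 pounds, but not which: phi holds.
assert knows('b', actual, lambda s: s[0] in (10, 15), updated)
assert not knows('b', actual, lambda s: s[0] == 10, updated)
assert not knows('b', actual, lambda s: s[0] == 15, updated)
```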

Syntax and Semantics of CAL
Let A denote a finite set of agents, and P denote a countable set of propositional variables.

Definition 1
The language of coalition announcement logic L_CAL is defined by the following BNF:

ϕ ::= p | ¬ϕ | (ϕ ∧ ϕ) | K_a ϕ | [ϕ]ϕ | [⟨G⟩]ϕ,

where p ∈ P, a ∈ A, and G ⊆ A. Let L^G_EL denote the set of formulas of the form ⋀_{i∈G} K_i ϕ_i, where for every i ∈ G it holds that ϕ_i ∈ L_EL (the purely epistemic fragment without announcement operators).

Definition 4
Let M_w be a pointed epistemic model. The truth clauses for the propositional and epistemic operators are standard; a public announcement [ψ]ϕ holds in M_w if M_w ⊨ ψ implies M^ψ_w ⊨ ϕ, where M^ψ is the restriction of M to the states satisfying ψ. Note that, in order to avoid circularity, quantification in the condition for coalition announcements is restricted to formulas of epistemic logic. Since in the model checking procedure we will also be considering GAL modalities, we provide a truth definition for [G]ϕ as well: [G]ϕ is read as 'whatever agents from G announce, ϕ holds.' The operator for coalition announcements [⟨G⟩]ϕ is read as 'whatever agents from G announce, there is a simultaneous announcement by agents from A \ G such that ϕ holds.' The 'diamond' versions of the group and coalition announcement operators, ⟨G⟩ϕ and ⟨[G]⟩ϕ, are read as 'there is a simultaneous announcement by agents from G such that ϕ holds' and 'there is an announcement by agents from G such that whatever agents from A \ G announce at the same time, ϕ holds' correspondingly.
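The inductive truth definition for the epistemic core can be captured by a small recursive evaluator. The tuple encoding of formulas (`'atom'`, `'not'`, `'and'`, `'K'`, `'ann'`) and the model representation are our own hypothetical sketch, not the paper's notation:

```python
# A hedged sketch of the truth clauses for atoms, negation, conjunction,
# knowledge, and public announcement.
def sat(model, w, phi):
    """model = (states, eq, val); eq[a][v] is the set of states agent a
    cannot distinguish from v; val[v] is the set of atoms true at v."""
    states, eq, val = model
    op = phi[0]
    if op == 'atom':                  # M_w |= p  iff  p in V(w)
        return phi[1] in val[w]
    if op == 'not':
        return not sat(model, w, phi[1])
    if op == 'and':
        return sat(model, w, phi[1]) and sat(model, w, phi[2])
    if op == 'K':                     # K_a psi: psi at all states a considers possible
        return all(sat(model, v, phi[2]) for v in eq[phi[1]][w] & states)
    if op == 'ann':                   # [psi]chi: chi in the restriction to psi-states
        if not sat(model, w, phi[1]):
            return True               # vacuously true if psi is false
        kept = {v for v in states if sat(model, v, phi[1])}
        return sat((kept, eq, val), w, phi[2])
    raise ValueError('unknown operator: %r' % (op,))
```

For example, in a two-state model where agent a cannot tell p from ¬p, K_a p fails, but [p]K_a p holds.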

Definition 5
We call a formula ϕ a validity if and only if for any pointed model M_w it holds that M_w ⊨ ϕ. Given M_w and ϕ, we say that ϕ is satisfied in M_w if and only if M_w ⊨ ϕ.

Bisimulation
The notion of bisimulation (Blackburn et al. 2001, Chapter 2) plays a significant role in the paper.
If there is a bisimulation between models M and N linking states w and t, we say that M_w and N_t are bisimilar, and write M_w ↔ N_t.
Next, we show an extension of the well-known result that bisimulation between states implies that these states satisfy the same formulas.

Proof
The proof is by induction on the structure of ϕ. Note that it is straightforward to define a size relation between formulas in such a way that the quantifier depth of formulas is considered before the modal depth and subformula relation.
The boolean cases are immediate, and the proof for the case of public announcements can be found, for example, in van Ditmarsch and French (2020). Here we prove only the case for coalition announcements, since the case for group announcements is similar (and simpler).
The bisimulation contraction of a model is, informally, the most compact representation of that model. It is a standard result that a model is bisimilar to its bisimulation contraction: M_w ↔ ‖M‖_[w], where ‖M‖ denotes the bisimulation contraction of M and [w] the class of w in it (see, for example, Goranko and Otto 2007).

Corollary 1 For all ϕ ∈ L_GAL ∪ L_CAL, ‖M‖_[w] ⊨ ϕ if and only if M_w ⊨ ϕ.
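A bisimulation contraction can be computed by standard partition refinement. The following sketch is our own (assuming finite S5 models given by equivalence classes, as in the rest of the paper); it merges states that the Atoms/Forth/Back checks cannot separate:

```python
# Partition refinement: start from blocks with equal valuations (Atoms),
# then repeatedly split blocks whose states can reach different blocks
# (Forth/Back), until a fixpoint. Each final block is one contracted state.
def contract(states, val, eq):
    blocks = {}
    for s in states:
        blocks.setdefault(frozenset(val[s]), set()).add(s)
    partition = list(blocks.values())
    changed = True
    while changed:
        changed = False
        def signature(s):
            # which blocks each agent can reach from s
            return tuple(frozenset(i for i, b in enumerate(partition) if b & eq[a][s])
                         for a in sorted(eq))
        new_partition = []
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)
            new_partition.extend(groups.values())
        if len(new_partition) != len(partition):
            changed = True
        partition = new_partition
    return partition
```

On a model with two states satisfying the same atoms and related to the same classes, the two states collapse into one block.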

Distinguishing Formulas
In this section we introduce distinguishing formulas that are satisfied in only one (up to bisimulation) state in a finite model. The discussion is based on van Ditmarsch et al. (2014). Although agents know and can possibly announce an infinite number of formulas, using distinguishing formulas allows us to consider only finitely many different announcements. This is done by associating strategies of agents with corresponding distinguishing formulas, where a strategy of agent a is a union of a-equivalence classes.
Without loss of generality, we assume that the set of propositional variables P is finite. This is justified by the fact that in a finite epistemic model M = (W, ∼, V) there are only 2^|W| distinct truth assignments for a propositional variable, so the truth assignment of any variable p_{2^|W|+1} will repeat that of one of p_1, ..., p_{2^|W|}.
We continue with the formal definition of distinguishing formulas.

Definition 8 Let a finite epistemic model
If a formula distinguishes state w from all other non-bisimilar states in M, we abuse the notation and write δ_w. Instead of giving full technical detail we sketch the argument. Let M_w be a finite pointed epistemic model. Without loss of generality we assume that M is bisimulation contracted. A distinguishing formula δ^k_w is constructed recursively as follows, where k ∈ N:

δ^0_w := ⋀{p : p ∈ V(w)} ∧ ⋀{¬p : p ∉ V(w)},
δ^{k+1}_w := δ^0_w ∧ ⋀_{a∈A} (⋀_{w∼_a v} K̂_a δ^k_v ∧ K_a ⋁_{w∼_a v} δ^k_v),

where K̂_a is the dual of K_a. The conjuncts δ^0_w, K̂_a δ^k_v, and K_a ⋁_{w∼_a v} δ^k_v respectively emulate conditions Atoms, Forth, and Back of the definition of bisimulation. Indeed, it is then easy to see that the binary relation Z on W defined by wZv if and only if M_w ⊨ δ^max_v is a bisimulation on finite models. Therefore, for each w ∈ W that is not bisimilar to v there is a max ∈ N such that M_w ⊭ δ^max_v. One can take max = |W|^2, as also observed in van Benthem (1998, Section 5).
The assumptions that the given model is finite and bisimulation contracted are of vital importance for the construction of distinguishing formulas. If the model is infinite, then we may either need an infinite number of propositional variables to describe the given state, or there may be infinite branches of accessibility relations. If the model is not bisimulation contracted, i.e. there are bisimilar states in the model, then distinguishing formulas cease to be unique: the same formula describes all bisimilar states in the model. This becomes a problem if we want to switch from agents announcing formulas to agents 'choosing' a definable submodel: in the latter case agents may distinguish between bisimilar states.
Having defined distinguishing formulas for states, we can define distinguishing formulas for sets of states.

Definition 9
Let M_w be a finite model and S be a set of states in M. A distinguishing formula for S is δ_S := ⋁_{v∈S} δ_v.

Let us recall the bidding example from Sect. 2.1 and construct the distinguishing formula δ_{15a5b}. Note that for this particular example it is enough to construct distinguishing formulas of depth 0 only, since each state in the example has a unique valuation of propositional variables. We, however, proceed with the full construction for illustrative purposes.
First, we start with the propositional description δ^0_{15a5b} of the state, and assume that the δ^0's have been calculated in the same fashion for all other states. Next, we proceed with the first iteration δ^1_{15a5b}. The process continues for |W|^2 iterations; informally, each iteration of the distinguishing formula construction adds one layer for each state in the model. Hence, in our example with 9 states, the distinguishing formula δ_{15a5b} is δ^{|W|^2}_{15a5b} (assuming that all previous δ^k_{15a5b}'s with k ≤ |W|^2 − 1 have been calculated). Note that since the models we are dealing with in this paper are finite, distinguishing formulas always exist.
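The recursive construction of δ^k_w can be sketched in code. Building formulas as strings and rendering the dual of K_a as `<a>` are our own illustrative choices:

```python
# A sketch of delta_w^k on a finite, bisimulation-contracted model.
# model = (states, eq, val, atoms): eq[a][w] is the a-class of w,
# val[w] the atoms true at w, atoms the finite set of variables.
def delta(model, w, k):
    states, eq, val, atoms = model
    base = ' & '.join(sorted(p if p in val[w] else '~' + p for p in atoms))
    if k == 0:
        return '(' + base + ')'          # Atoms: literal description of w
    parts = ['(' + base + ')']
    for a in sorted(eq):
        reach = eq[a][w]
        # Forth: every a-successor v is witnessed by a diamond <a>delta^{k-1}_v
        for v in sorted(reach):
            parts.append('<' + a + '>' + delta(model, v, k - 1))
        # Back: a can only reach states satisfying one of these descriptions
        parts.append('K_' + a + '(' +
                     ' | '.join(delta(model, v, k - 1) for v in sorted(reach)) + ')')
    return '(' + ' & '.join(parts) + ')'
```

In the bidding model the depth-0 formulas already differ, which is why the example only needs δ^0.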

Strategies
In this section we introduce strategies and connect them to public announcements using distinguishing formulas. In the setting of GAL, strategies are sets of states that agents can ensure to be in the updated model by announcing a formula that holds exactly in those states. For CAL, however, this is not always the case, as the anti-coalition may have a counter-strategy that further reduces the set initially chosen by the coalition. Still, we use the word 'strategy' in both contexts. The formal definition of a strategy is presented below.
Definition 10 A strategy for an agent a in a finite model M_w is a union of equivalence classes of a containing the a-equivalence class of w. Let S(a, w) be the set of all strategies for agent a in M_w. A strategy for a group G is defined as ⋂_{i∈G} X_i such that for all i ∈ G, X_i ∈ S(i, w). The set of available strategies for a group of agents G in M_w is denoted by S(G, w).
Strategies are implemented by agents, and in general public announcements do not correspond to strategies. Consider model M_{15a5b} in Fig. 1: it is easy to find a formula ϕ whose public announcement does not correspond to any strategy of a and b, that is, W^ϕ ∉ S({a, b}, 15_a5_b). Note that for any M_w and G ⊆ A, S(G, w) is not empty, since the trivial strategy that includes all the states of the current model is available to all agents. We denote the trivial strategy by X.

Proposition 3 In a finite model M_w, for any G ⊆ A, S(G, w) is finite.
Proof This is due to the fact that in a finite model there is a finite number of equivalence classes for each agent.
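Since a strategy is just a union of equivalence classes containing [w]_a, the finite set S({a}, w) is easy to enumerate. The following helper is our own sketch, not from the paper, for a partition given as a list of classes:

```python
from itertools import combinations

# A sketch of Definition 10: every strategy of agent a at w is the class
# [w]_a together with any subset of the remaining classes, so there are
# 2^(n_a - 1) strategies in total.
def strategies(classes, w):
    """classes: disjoint frozensets partitioning W; yields each strategy."""
    home = next(c for c in classes if w in c)      # [w]_a, always included
    others = [c for c in classes if c is not home]
    for r in range(len(others) + 1):
        for combo in combinations(others, r):
            yield home.union(*combo)
```

For three classes this yields 2^2 = 4 strategies, each containing [w]_a, in agreement with |S({a}, w)| = 2^{n_a − 1} below.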
Now we tie together announcements and strategies. Each of the infinitely many possible announcements by agents in a finite model corresponds to a set of states where it is true (a strategy). In a finite bisimulation contracted model, each strategy is definable by a distinguishing formula, hence it corresponds to an announcement. This allows us to consider finitely many strategies instead of infinitely many possible announcements: there are only finitely many non-equivalent (in terms of model updates) announcements for each finite model, and each of them corresponds to the distinguishing formula of some strategy.
Given a finite and bisimulation contracted model M_w and a strategy X ∈ S(G, w), a distinguishing formula δ_X for X can be obtained from Definition 9 as δ_X := ⋁_{v∈X} δ_v.
Next, we show that agents know their strategies and thus can make corresponding announcements.
Proposition 4 Let M_w be a finite bisimulation contracted model, and X ∈ S(a, w). Then M_w ⊨ K_a δ_X; more generally, for a group strategy ⋂_{i∈G} X_i with X_i ∈ S(i, w), M_w ⊨ ⋀_{i∈G} K_i δ_{X_i}.
Proof We show just the first part of the proposition, since the second part follows easily. By the definition of a strategy, X is a union of a-equivalence classes that contains [w]_a, and δ_X holds at every state of [w]_a; the same holds for the other equivalence classes of a included in X, and we have M_w ⊨ K_a δ_X.

The following proposition states that, given a strategy, the corresponding public announcement yields exactly the model with the states specified by the strategy.
Proposition 5 Given a finite bisimulation contracted model M = (W, ∼, V) and a strategy X ∈ S(a, w), W^{K_a δ_X} = X. More generally, W^{⋀_{i∈G} K_i δ_{X_i}} = X_G, where X_G := ⋂_{i∈G} X_i such that for all i ∈ G, X_i ∈ S(i, w).
Proof In order to prove that W^{K_a δ_X} = X for X ∈ S(a, w), we need to show that K_a δ_X holds exactly at the states of X. Since M is bisimulation contracted, δ_X holds exactly on X; and since X is a union of a-equivalence classes, K_a δ_X holds at a state if and only if the whole a-equivalence class of that state is included in X, that is, exactly at the states of X. Finally, let us consider the case of a group G. It is clear that W^{⋀_{i∈G} K_i δ_{X_i}} = ⋂_{i∈G} W^{K_i δ_{X_i}} = ⋂_{i∈G} X_i. The latter is equivalent to X_G by the definition of a group strategy.
We also show that true group announcements correspond to group strategies.

Proposition 6 Let M_w be a finite bisimulation contracted epistemic model, and let ψ ∈ L^G_EL be such that M_w ⊨ ψ. Then W^ψ ∈ S(G, w).
Proof Let ψ = ⋀_{i∈G} K_i ψ_i, and consider some particular conjunct K_a ψ_a. By the semantics, K_a ψ_a holds at a state if and only if ψ_a holds at all states reachable from it via ∼_a. Note that all states reachable from the given state via ∼_a form an a-equivalence class [w]_a. In the same way, K_a ψ_a may be true in other a-equivalence classes [u]_a, ..., [t]_a. Hence, formula K_a ψ_a holds in a union of equivalence classes, and since M_w ⊨ K_a ψ_a, this union contains [w]_a, i.e. it is a strategy in S(a, w). Thus W^ψ, as the intersection of such strategies over all a ∈ G, is a group strategy X ∈ S(G, w). Now, let us reformulate the semantics for the group and coalition announcement operators in terms of strategies.

Proposition 7 For a finite bisimulation contracted model M_w: (i) M_w ⊨ ⟨G⟩ϕ if and only if there is an X ∈ S(G, w) such that M^X_w ⊨ ϕ; (ii) M_w ⊨ ⟨[G]⟩ϕ if and only if there is an X ∈ S(G, w) such that for all Y ∈ S(A \ G, w), M^{X∩Y}_w ⊨ ϕ.
⇒: By the semantics, M_w ⊨ ⟨G⟩ϕ means that ∃ψ ∈ L^G_EL : M_w ⊨ ⟨ψ⟩ϕ. The latter is equivalent to M_w ⊨ ψ and M^ψ_w ⊨ ϕ. By Definition 3 and Proposition 6, this implies M^X_w ⊨ ϕ for some X ∈ S(G, w).
⇐: Let X ∈ S(G, w) be a group strategy such that M^X_w ⊨ ϕ. Then, by Propositions 4 and 5, there is a true announcement ⋀_{i∈G} K_i δ_{X_i} by the agents from G whose extension is exactly X, and hence M_w ⊨ ⟨G⟩ϕ.

For the coalition operator, ⇒: suppose M_w ⊨ ⟨[G]⟩ϕ, i.e. there is a true ψ ∈ L^G_EL such that for every true χ ∈ L^{A\G}_EL we have M^{ψ∧χ}_w ⊨ ϕ. Hence, by Proposition 6, we have M^{X∩Y}_w ⊨ ϕ for some X ∈ S(G, w) and all Y ∈ S(A \ G, w). ⇐: Assume that there is some strategy X ∈ S(G, w) such that for all strategies Y ∈ S(A \ G, w) it holds that M^{X∩Y}_w ⊨ ϕ. We need to show that M_w ⊨ ⟨[G]⟩ϕ. Let ψ = ⋀_{i∈G} K_i δ_{X_i} (assuming X = ⋂_{i∈G} X_i). By Proposition 4, M_w ⊨ ψ, and by Proposition 5, W^ψ = X. Take an arbitrary true announcement χ by the agents from A \ G; by Proposition 6 it corresponds to some strategy Y ∈ S(A \ G, w), and therefore M^{ψ∧χ}_w = M^{X∩Y}_w ⊨ ϕ.

Sometimes we may be interested in situations where it is beneficial for agents to be as informative as possible (or, equivalently, to leave as little uncertainty as possible). We recall the Maxim of Quantity postulated by Grice (1975; 1989): Make your contribution as informative as is required.

However, he also adds
Do not make your contribution more informative than is required.
What is as informative as required depends on the goal of the communication. So, in terms of epistemic logic, it depends on the epistemic goal to be satisfied in the model restriction resulting from the announcement, or from the sequence of announcements (as in a conversation consisting of various statements by different people, exactly the CAL setting). If the epistemic goal is full information on the value of all propositional variables, then the most informative announcement is the adequate announcement. However, there are other settings wherein the most informative announcement is not adequate. Typical settings of that kind are security protocols wherein the communicating principals want to be as informative as required (namely satisfying the information goal) but not more than that. They should guarantee safety: the eavesdropper should not be able to learn the information. The most informative announcement may then backfire. For example, a Bridge player had better not declare that she has the Queen of Hearts. This is very informative for her partner, but equally informative for the opposing team.
The type of announcements that fulfill the requirement that they are as informative as possible is defined in Definition 11.

Definition 11
Let M_w be a finite bisimulation contracted model. A maximally informative announcement by G is a formula ψ ∈ L^G_EL such that w ∈ W^ψ and for all χ ∈ L^G_EL with w ∈ W^χ it holds that W^ψ ⊆ W^χ. For finite models such an announcement always exists (Ågotnes and van Ditmarsch 2011). We will call the corresponding strategy X ∈ S(G, w) the strongest strategy on a given model.
Intuitively, the strongest strategy is the smallest available strategy. Note that in a bisimulation contracted model M_w, the strongest strategy of a group G is X = ⋂_{i∈G} [w]_i; that is, each agent's strategy consists of the single equivalence class that includes the current state.
In model M_{15a5b} in Fig. 1, a's strongest strategy is {15_a5_b, 15_a10_b, 15_a15_b}, and b's strongest strategy is {15_a5_b, 10_a5_b, 5_a5_b}. So the strongest strategy of group {a, b} is the intersection of the strongest strategies of the agents in the group: {15_a5_b, 15_a10_b, 15_a15_b} ∩ {15_a5_b, 10_a5_b, 5_a5_b} = {15_a5_b}.
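This computation is immediate in code. A small check of the above claim, using the (a, b)-pair representation of the bidding model assumed in our earlier sketches:

```python
# The strongest group strategy in the bidding model is the intersection
# of each agent's equivalence class of the actual state (15, 5).
SUMS = [5, 10, 15]
STATES = {(x, y) for x in SUMS for y in SUMS}
actual = (15, 5)

strongest_a = {s for s in STATES if s[0] == actual[0]}   # [w]_a: a has 15
strongest_b = {s for s in STATES if s[1] == actual[1]}   # [w]_b: b has 5
strongest_ab = strongest_a & strongest_b                 # strongest strategy of {a, b}

assert strongest_a == {(15, 5), (15, 10), (15, 15)}
assert strongest_b == {(15, 5), (10, 5), (5, 5)}
assert strongest_ab == {(15, 5)}
```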

Model Checking for CAL
Employing strategies allows for a rather simple model checking algorithm for CAL. We switch from quantification over an infinite number of epistemic formulas to quantification over a finite set of strategies (Sect. 4.1). Moreover, we show that if the target formula is a positive PAL formula, then model checking is even more efficient (Sect. 4.2).

General Case
First, let us define the model checking problem.

Definition 12
Let M_w be a finite epistemic model, and ϕ ∈ L_CAL ∪ L_GAL. The model checking problem is the problem of determining whether ϕ is satisfied in M_w.
We are going to solve this problem by providing an algorithm mc that, given a finite epistemic model M = (W, ∼, V) and some formula ϕ, computes W^ϕ. Then the answer to the model checking problem for M_w is yes if w ∈ W^ϕ, and no otherwise.
As a side effect, for formulas of the form ⟨G⟩ψ or ⟨[G]⟩ψ and for each state in W^ϕ, mc also writes out a strategy of G (a set of states) that ensures ψ. We could have defined mc to return a pair consisting of W^ϕ and a strategy (or an empty set, for formulas that are not of the form ⟨G⟩ϕ or ⟨[G]⟩ϕ), but we have decided to output the strategy as a side effect for ease of presentation.
Algorithm 1 takes a finite model M and ϕ_0 ∈ L_CAL ∪ L_GAL as input, and returns W^{ϕ_0}, while also writing out a list of 'witness' strategies for group and coalition announcement operators. The case for GAL modalities is treated similarly to the model checking algorithm introduced in Ågotnes et al. (2010), apart from also printing out the witness strategy. The case for CAL modalities requires checking each strategy against all possible strategies of the opponents. Unlike the algorithm in Ågotnes et al. (2010), which runs in APTIME, we state a deterministic PSPACE algorithm.
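The generate-and-test scheme for the CAL case can be illustrated end to end on the bidding example of Sect. 2.1: some strategy of a works against every simultaneous strategy of b, witnessing M_{15a5b} ⊨ ⟨[a]⟩ϕ. The representation and helper names below are our own sketch, not Algorithm 1 itself:

```python
from itertools import combinations

# States are (a's sum, b's sum); the actual state is (15, 5).
SUMS = [5, 10, 15]
STATES = frozenset((x, y) for x in SUMS for y in SUMS)
actual = (15, 5)

def classes(i):  # equivalence classes of agent i (0 = a, 1 = b)
    return [frozenset(s for s in STATES if s[i] == m) for m in SUMS]

def strategies(i):  # unions of i's classes containing the class of `actual`
    home = next(c for c in classes(i) if actual in c)
    others = [c for c in classes(i) if c != home]
    for r in range(len(others) + 1):
        for combo in combinations(others, r):
            yield home.union(*combo)

def knows_b(states, prop):  # K_b prop at `actual` within the restriction
    return all(prop(s) for s in states if s[1] == actual[1])

def phi(states):  # K_b(10_a v 15_a) & ~K_b 10_a & ~K_b 15_a
    return (knows_b(states, lambda s: s[0] in (10, 15))
            and not knows_b(states, lambda s: s[0] == 10)
            and not knows_b(states, lambda s: s[0] == 15))

# <[a]>phi: some strategy X of a works against every strategy Y of b.
witness = next((X for X in strategies(0)
                if all(phi(X & Y) for Y in strategies(1))), None)
assert witness == frozenset(s for s in STATES if s[0] >= 10)
```

The witness found is exactly the strategy corresponding to the announcement K_a 10_a ∨ K_a 15_a.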
But before providing the algorithm, we first need to introduce a function next(G, M, w, X) that, given a group of agents G, a model M, a state w, and a strategy X, returns the strategy following X in S(G, w).
We assume that in the input M = (W, ∼, V), each ∼_a is given as a set of equivalence classes of states, and that for each agent a there are n_a such classes (clearly n_a ≤ |W|; observe also that this way of specifying the equivalence relation is linear rather than quadratic in |W|). Each strategy in S({a}, w) must include the equivalence class [w]_a. There are 2^{n_a − 1} subsets of the set of the remaining ∼_a-equivalence classes, and hence |S({a}, w)| = 2^{n_a − 1}.
The set S(G, w) can be ordered using the order on the set of agents A and on the equivalence classes of each agent a in G. For example, if an agent a has equivalence classes e_1, ..., e_m in M, and e_1 contains w, then the order on S({a}, w), shortest unions first and then lexicographic, is:

e_1 = [w]_a ≺ e_1 ∪ e_2 ≺ e_1 ∪ e_3 ≺ ... ≺ e_1 ∪ e_m ≺ e_1 ∪ e_2 ∪ e_3 ≺ e_1 ∪ e_2 ∪ e_4 ≺ ... ≺ e_1 ∪ e_2 ∪ ... ∪ e_m = W.

Note that the first and the last strategies of a can be computed in time and space at most linear in the size of the model. Given an arbitrary element X in this order, the next one in the order (the function next({a}, M, w, X)) can be computed in time and space polynomial in the size of the model. For a union of length j, we first check whether its last element can be 'incremented' (i.e. whether it is not e_m), and if so, increment it. If it cannot be incremented, we check whether element j − 1 can be incremented (i.e. whether it is not e_{m−1}). If it can be incremented to the next equivalence class e′, we increment element j − 1 to e′ and change element j to e″, where e″ follows e′ in the order of equivalence classes. If it cannot be incremented, we repeat, until we either produce the next union of length j or produce the first union of length j + 1, which is e_1 ∪ e_2 ∪ ... ∪ e_{j+1}.
Similarly, given the order on the agents in G, say a_1, ..., a_k, and each agent's strategies s^i_1, ..., s^i_{N_i}, where N_i = 2^{n_i − 1} and s^i_{N_i} = W is the last strategy of agent i, the set S(G, w) can be ordered lexicographically. Again, the first and the last strategies of G can be computed in time at most linear in the size of the model, and the next element in time and space polynomial in the size of the model. Similarly to the single agent case, if the kth agent's strategy X is not W, we increment it by calling next({a_k}, M, w, X); otherwise we attempt to increment the strategy of a_{k−1} and reset the strategy of a_k to its first strategy [w]_{a_k}, and so on. For a group G of k agents, each with 2^{n_{a_i} − 1} strategies, we have that |S(G, w)| = 2^{Σ_{i≤k}(n_{a_i} − 1)}. Hence the size of S(G, w) is bounded by an exponential in the size of the model (although not by an exponential in |G|). A straightforward model-checking algorithm would generate S(G, w) (and S(A \ G, w) for coalition announcements) and iterate over it to check whether the group or coalition announcement formula is satisfied. However, generating S(G, w) explicitly requires an exponential amount of space. Instead, we use the function next(G, M, w, X) to generate the strategy that follows X in the ordering of S(G, w). Generating and testing strategies one at a time requires only a polynomial amount of space. For technical convenience, we also define next(G, M, w, X) to return a special value once X is the last strategy in the order.
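A minimal sketch of next({a}, M, w, X) for a single agent, in our own encoding: the choice of non-[w]_a classes is treated as a binary counter, so the enumeration order differs from the shortest-first order described above, but any fixed total order suffices for generate-and-test, and only polynomial space is used:

```python
# Successor of strategy X in an enumeration of S({a}, w): increment the
# bit-vector recording which non-[w]_a classes X includes; return None
# once X is the trivial strategy W (the last in this order).
def next_strategy(classes, w, X):
    """classes: ordered list of a's equivalence classes; X: current strategy."""
    home = next(c for c in classes if w in c)     # [w]_a, always included
    others = [c for c in classes if c != home]
    bits = [c <= X for c in others]               # classes are disjoint, so
    for i, b in enumerate(bits):                  # c <= X iff c was chosen
        if not b:
            bits[i] = True                        # binary-counter increment
            result = set(home)
            for c, used in zip(others, bits):
                if used:
                    result |= c
            return result
        bits[i] = False                           # carry
    return None                                   # X was already W
```

Iterating from [w]_a until None visits each of the 2^{n_a − 1} strategies exactly once, using space linear in the model.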

Proposition 9 Model checking for CAL is PSPACE-complete.
Proof All the cases of the model checking algorithm apart from those for ⟨[G]⟩ (and ⟨G⟩ for GAL) require polynomial time, both in the size of the model and the size of the formula (and hence polynomial space).
The cases for ⟨G⟩ and ⟨[G]⟩ generate and test exponentially many strategies. The running time of the algorithm is therefore exponential in the size of the model (but polynomial in the size of the formula).
However, the cases for ⟨G⟩ and ⟨[G]⟩ use only a polynomial amount of space. Observe that next(G, M, w, X) can be implemented to generate and return the successor strategy of X in time and space polynomial in M and G. Each check of a particular strategy can be computed using only a polynomial amount of space: we represent the bisimulation contraction ‖M‖_[w] (which contains at most as many states as the input model M, and can be computed in polynomial time; see Appendix A) and the result of the update (which is at most the size of ‖M‖_[w]), and make a recursive call to check whether ϕ holds in the update.
Hardness can be obtained by a slight modification of the proof of PSPACE-hardness of the model-checking problem for GAL in Ågotnes et al. (2010). The proof encodes satisfiability of a quantified boolean formula (QBF) as the problem of whether a particular GAL formula is true in a model corresponding to the QBF (the model M corresponding to a QBF is depicted in Fig. 3). We highlight just some parts of the proof from Ågotnes et al. (2010). Given some QBF Ψ := Q_1x_1 ... Q_nx_n Φ(x_1, ..., x_n), the authors construct a model that depends on the number of variables in the formula. We have depicted the model in Fig. 3, wherein the variables, possibly indexed with 0 or 1, have become the names of the states. Apart from agent i, whose relation is universal, there is also agent g, whose relation is the identity. Next, the authors define properties q_j ('only one of x^0_j and x^1_j is in the model') and r_j ('both x^0_j and x^1_j are in the model'). These properties are used to recursively define a GAL formula ψ(Ψ) that is then evaluated in the model: M_x ⊨ ψ(Ψ). (An example of the corresponding GAL formula for the QBF ∀x_1∃x_2∀x_3 Φ(x_1, x_2, x_3) is given in Ågotnes et al. (2010).) For our proof, however, it is enough to notice the following. Since the encoding uses only two agents, an omniscient g and a universal i, we can replace the GAL operators [g] and ⟨g⟩ with the CAL operators [⟨g⟩] and ⟨[g]⟩ (since i's only strategy is the trivial one, and no other GAL operators are used in the encoding) and obtain a CAL encoding.

Positive Case
In this section we demonstrate the following result: if in a given formula the subformulas within the scope of coalition and group announcement operators are positive PAL formulas, then the complexity of model checking is polynomial.
Allowing coalition announcement modalities to bind only positive formulas is a natural restriction. Positive formulas have a special property: if the sum of knowledge of the agents in G (their distributed knowledge) includes a positive formula ϕ, then ϕ can be made common knowledge by a group or coalition announcement by G. Formally, for a positive ϕ, M_w ⊨ D_G ϕ implies M_w ⊨ ⟨[G]⟩C_G ϕ, where D_G stands for distributed knowledge, interpreted via the intersection of all ∼_a relations for a ∈ G, and C_G stands for common knowledge, interpreted via the transitive closure of the union of all ∼_a relations for a ∈ G. See van Ditmarsch and Kooi (2006), and also Ågotnes and Wáng (2017), where the process of making distributed knowledge common knowledge is called resolving distributed knowledge. In other words, positive epistemic formulas can always be resolved by cooperative communication.
Negative formulas do not have this property. For example, it can be distributed knowledge of agents a and b that p and ¬K_b p: D_{a,b}(p ∧ ¬K_b p). However, it is impossible to achieve common knowledge of this formula: C_{a,b}(p ∧ ¬K_b p) is inconsistent, since it implies both K_b p and ¬K_b p. Going back to the example in Sect. 2.1, it is distributed knowledge of a and b that K_a 15_a and K_b 5_b. Both formulas are positive and can be made common knowledge if a and b honestly report the amounts of money they have. However, it is also distributed knowledge that ¬K_a 5_b and ¬K_b 15_a. The conjunction is distributed knowledge, but it cannot be made common knowledge for the same reasons as above.
We should also observe that positive formulas are perhaps not as rare as they may appear at first sight. In the first place, in models where all states have different valuations, every announcement is equivalent to the disjunction of the characteristic formulas of depth 0 of the states in the denotation of the announcement. In particular, this is the case for the model in Fig. 1.
However, in the second place, there are still other cases where announcement formulas are equivalent to positive formulas on some given model. This is not well-explored territory. A very relevant result by Van Benthem is that on finite models any epistemic formula ψ is equivalent to a formula ϕ that remains true after being announced. Such formulas ϕ are now often known as successful formulas (van Benthem 2006; van Ditmarsch and Kooi 2006) (the term employed in van Benthem (2006) is persistent).
The formula constructed in Van Benthem's proof is a disjunction of characteristic formulas of states in the original and in the restricted models. This successful formula contains epistemic diamonds K̂_i and may not be positive (another problematic issue is that it also contains common knowledge modalities). The standard example of a successful formula that is not positive is the formula ¬K_a p. However, this and similar constructions may well lead to an expanded use of the positive fragment. It is further relevant to observe that such positive formulas are a good candidate to characterise what are known as the preserved formulas (those that remain true after any update; see Definition 13 below), which is also shown in van Benthem (2006), but for the slightly smaller positive fragment excluding the clause [¬ψ]ϕ for announcements given below.
The positive formulas are also relevant, in an entirely different way, for logics with quantification over announcements, namely in the logic APAL+, wherein the quantification is over positive formulas only. (This contrasts with the situation investigated in this section: the CAL quantifier ranges over all known formulas, which need not be positive, whereas the formula bound by the CAL quantifier must be positive.) This logic is investigated in van Ditmarsch et al. (2020). It is incomparable in expressivity to APAL, and it is also reported to be decidable. To our knowledge, no version of CAL quantifying over known positive formulas has been investigated.

Definition 13
The language L_PAL+ of the positive fragment of public announcement logic PAL is defined by the following BNF:

ϕ ::= p | ¬p | (ϕ ∧ ϕ) | (ϕ ∨ ϕ) | K_a ϕ | [¬ϕ]ϕ

where p ∈ P and a ∈ A.
Definition 14 Formula ϕ is preserved under submodels if for any models M and N such that N is a submodel of M, and any state w of N: M_w ⊨ ϕ implies N_w ⊨ ϕ. A known result that we use in this section states that the formulas of L_PAL+ are preserved under submodels (van Ditmarsch and Kooi 2006).
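Preservation under submodels can be checked by brute force on small models. The following is a minimal Python sketch (the representation and all names are ours, not those of any existing tool): a positive formula that is true in a model stays true in any submodel containing the evaluation state, while a negative formula such as ¬K_a p need not.

```python
# Models as (states, val, rel): val[w] is the set of atoms true at w,
# rel[agent][w] is the set of states the agent cannot distinguish from w.
# Formulas are nested tuples from the positive fragment
# p | ~p | (f & g) | (f | g) | K_a f; 'negK' is included only as a
# counterexample of a non-positive formula.

def holds(model, w, f):
    """Evaluate formula f at state w of model."""
    states, val, rel = model
    op = f[0]
    if op == 'p':    return f[1] in val[w]
    if op == 'negp': return f[1] not in val[w]
    if op == 'and':  return holds(model, w, f[1]) and holds(model, w, f[2])
    if op == 'or':   return holds(model, w, f[1]) or holds(model, w, f[2])
    if op == 'K':    # agent f[1] knows f[2]: true in all indistinguishable states
        return all(holds(model, v, f[2]) for v in rel[f[1]][w])
    if op == 'negK': # not positive: used only to show failure of preservation
        return not holds(model, w, ('K', f[1], f[2]))
    raise ValueError(op)

def submodel(model, keep):
    """Restrict a model to the states in `keep`."""
    states, val, rel = model
    return (keep,
            {w: val[w] for w in keep},
            {a: {w: rel[a][w] & keep for w in keep} for a in rel})

# Two states; p holds only at state 1; agent a cannot tell 1 and 2 apart.
M = ({1, 2}, {1: {'p'}, 2: set()}, {'a': {1: {1, 2}, 2: {1, 2}}})
N = submodel(M, {1})

pos = ('p', 'p')          # positive: preserved under submodels
neg = ('negK', 'a', pos)  # ¬K_a p: true in M at 1, false in N at 1
```

Here ¬K_a p holds at state 1 of M (agent a considers the p-less state 2 possible) but fails in the submodel N, illustrating why preservation is restricted to the positive fragment.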
Proposition 10 Let M_w be a finite epistemic model, and let ϕ ∈ L_CAL ∪ L_GAL be a formula such that for all its subformulas of the form [G]ψ and ⟨G⟩ψ, ψ belongs to the positive fragment L_PAL+. It is possible to decide by means of a deterministic algorithm working in polynomial time whether M_w ⊨ ϕ.
Proof For positive formulas, we can replace Algorithm 1 by Algorithm 2.
Algorithm 2 Model checking for positive formulas: function mcp(M, ϕ_0). For all subformulas of ϕ_0, the algorithm runs in polynomial time. Consider the modified calls for ⟨G⟩ϕ and [G]ϕ. Instead of checking all possible strategies as in the general case, they require constructing a single updated model given a single (strongest) strategy, which is a simple matter of restricting the input model to the set of states in the strategy. This can be done in polynomial time. Then we call the algorithm on the updated model for ϕ, which by assumption requires polynomial time.
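To illustrate the positive-fragment case, here is a minimal Python sketch (our own representation and function names, not MCCAL's): the strongest strategy of G at state w restricts the model to ⋂_{a∈G} [w]_a, the intersection of the G-members' equivalence classes at w, after which the positive goal formula is evaluated once.

```python
# Models as (states, val, rel): val[w] is the set of atoms true at w,
# rel[agent][w] is the agent's equivalence class [w]_agent.

def holds(model, w, f):
    """Evaluate a positive formula (atoms, and, K) at state w."""
    states, val, rel = model
    op = f[0]
    if op == 'p':   return f[1] in val[w]
    if op == 'and': return holds(model, w, f[1]) and holds(model, w, f[2])
    if op == 'K':   return all(holds(model, v, f[2]) for v in rel[f[1]][w])
    raise ValueError(op)

def strongest_update(model, w, group):
    """Update with G's strongest strategy: keep only the states in ⋂_{a∈G} [w]_a."""
    states, val, rel = model
    keep = set.intersection(*(rel[a][w] for a in group))
    return (keep,
            {u: val[u] for u in keep},
            {a: {u: rel[a][u] & keep for u in keep} for a in rel})

def mc_diamond_positive(model, w, group, phi):
    """<G> phi (equivalently [G] phi) for positive phi: one update, one check."""
    return holds(strongest_update(model, w, group), w, phi)

# Three states; p holds at 1 and 2; a's classes are {1,2},{3}; b's are {1,3},{2}.
M = ({1, 2, 3},
     {1: {'p'}, 2: {'p'}, 3: set()},
     {'a': {1: {1, 2}, 2: {1, 2}, 3: {3}},
      'b': {1: {1, 3}, 2: {2}, 3: {1, 3}}})
goal = ('K', 'b', ('p', 'p'))  # K_b p: false at 1 before, true after the update
```

At state 1, K_b p fails in M (b considers state 3 possible), but the strongest joint announcement of {a, b} restricts the model to {1, 2} ∩ {1, 3} = {1}, after which K_b p holds; no opposing announcement can undo this for a positive goal.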
Observe that the cases of CAL and GAL modalities for the positive fragment are treated in an identical way: we check whether the strongest strategy of G can be used to make the goal formula true. Intuitively, this is because every positive formula that can be made true with some strategy can be made true with the strongest strategy. And in the case of CAL, the announcement by the opponents does not matter, since [G]ϕ implies ⟨G⟩ϕ, and further restrictions of a model do not change the valuation of positive ϕ. Now, let us show that Algorithm 2 is correct.

A special-purpose model checker was implemented in (Hagland 2018) to check for the existence of group strategies in the Russian Cards problem (van Ditmarsch 2003). There are no general-purpose model checkers for GAL and CAL. The model checker MCCAL is implemented in Java by Wang (2019). The code is available at https://github.com/Twelvelines/MCCAL. The model checker implementation is not optimised and is intended as a proof of concept. A non-trivial example from Galimullin (2019) is presented in the next section.

Households and Burglars: An Example
In the city of N, the local authorities have decided to gather information about, and publish statistics on, electricity consumption in each neighbourhood. Consumption information is submitted by each neighbourhood in the city, indicating the total number of households that have been using electricity in the last month. Data about neighbourhoods is public, and data about individual households is private, i.e., particular users of electricity are not revealed, but the total number of such users in the area is common knowledge. And there is a reason for such a requirement.
A group of local burglars is also interested in the public report on electricity consumption: they hope to deduce which households have not used electricity recently since it is an indication that property occupiers are not in their houses (most probably, they are on vacation). However, the burglars want to be certain that a house is empty, and will not risk burglary unless they know for sure that the property occupiers are away. They are also very reluctant to lurk around a neighbourhood trying to learn who is away, as such behaviour is very suspicious. Therefore, the only way to know about 'vacant' households is through the public energy consumption report.
In N, there is a small neighbourhood of only four houses: a, b, c, and d. They are situated around a park in a circular fashion, such that the neighbours on the left and on the right are equidistant. The park is quite large, and the occupants of each house know only their immediate neighbours on the left and on the right. Thus, for example, the occupant of c knows the occupants of b and d, and about their plans, but she is unaware of the plans of the occupant of a.
The epistemic model TES describing the neighbourhood containing a, b, c, and d is shown in Fig. 4. In the model, the names of states indicate who is at home; for instance, 1001 means that the occupants of a and d are at home, and that the occupants of b and c are not. Burglars v (for 'villains') do not have any information regarding occupancy, and their epistemic relation is universal. We do not present the v-relation in the figure, for readability. We will refer to the occupant of house i as agent i.
Let the actual state be 0101, and let 0101 also abbreviate ¬p_a ∧ p_b ∧ ¬p_c ∧ p_d, where p_i stands for 'agent i is at home'. Note that neither the burglars nor the householders possess full information about the neighbourhood: TES_0101 ⊨ ¬(K_a 0101 ∨ K_b 0101 ∨ K_c 0101 ∨ K_d 0101 ∨ K_v 0101). Also note that householders are aware of their own state and of the states of their left- and right-hand-side neighbours, but not of the state of the furthest house. E.g. TES_0101 ⊨ K_c ¬p_c ∧ K_c p_b ∧ K_c p_d ∧ ¬(K_c ¬p_a ∨ K_c p_a). Consider the announcement mis, where mis stands for 'mutual informative statement'. We have that TES_0101 ⊨ ⟨mis⟩ sofa. Since mis is an announcement of agents' knowledge, we can conclude that there is an announcement by a, b, c and d such that sofa holds in the resulting model, i.e. TES_0101 ⊨ ⟨{a, b, c, d}⟩ sofa. The result of updating TES_0101 with mis is presented in Fig. 5.
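The knowledge facts above can be verified mechanically. Below is a minimal Python sketch of TES (our own encoding, independent of MCCAL): states are occupancy strings in {0,1}^4, householder i distinguishes states by the three bits she can observe (her own house and her two neighbours'), and the burglars v distinguish nothing.

```python
from itertools import product

HOUSES = 'abcd'
# Houses sit around the park in a circle, so each householder observes
# everything except the opposite house.
OPPOSITE = {'a': 'c', 'b': 'd', 'c': 'a', 'd': 'b'}
STATES = [''.join(bits) for bits in product('01', repeat=4)]

def visible(i):
    """Indices of the occupancy bits householder i can observe."""
    return [j for j, h in enumerate(HOUSES) if h != OPPOSITE[i]]

def indist(i, s, t):
    """s ~_i t iff s and t agree on every bit i can observe; v sees nothing."""
    if i == 'v':
        return True
    return all(s[j] == t[j] for j in visible(i))

def knows_state(i, s):
    """K_i s: agent i can rule out every state other than s."""
    return all(not indist(i, s, t) for t in STATES if t != s)

actual = '0101'
# Nobody -- householder or burglar -- knows the full state:
nobody_knows = not any(knows_state(i, actual) for i in 'abcdv')
```

For instance, c cannot distinguish 0101 from 1101 (they differ only in house a, which c cannot see), and each householder's equivalence class at 0101 contains exactly two states.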
All the relations in the model are v-equivalence relations. Hence, indeed, in TES^mis_0101 exactly two households have been using electricity recently, and although the public (and the burglars as well) knows this fact, it cannot name the particular houses that are 'vacant'. A 'side effect' of the group announcement mis is that all residents in the neighbourhood know exactly who is on vacation, and this is common knowledge. Note that we can state a fact stronger than TES_0101 ⊨ ⟨{a, b, c, d}⟩ sofa. Since v's relation is universal, the burglars cannot prevent the group from making sofa true, whatever they (i.e. v) announce. In other words, TES_0101 ⊨ [{a, b, c, d}] sofa.
Interestingly, in this particular example even two agents can make an announcement such that sofa holds in the resulting model. Consider the announcement mis_{a,b} by agents a and b. The resulting updated model is shown in Fig. 6 (all the relations are v-relations).
The reader can verify that TES^{mis_{a,b}}_0101 ⊨ sofa, and hence TES_0101 ⊨ ⟨{a, b}⟩ sofa. Note that compared to model TES^mis_0101 (Fig. 5), model TES^{mis_{a,b}}_0101 has fewer states. This means that the householders gave a bit more information than necessary, but they still managed to inform the authorities that exactly two households have been using electricity, while not revealing the exact state of affairs.
Even though two householders can make a successful announcement, they must ensure that none of the other agents is conspiring with the burglars. For assume that agent c, for example, decides to reveal to the burglars which houses are empty. She can pass additional information along with a's and b's submission. Such an announcement, made in conjunction with mis_{a,b}, results in a singleton model with 0101 as the only state. Moreover, whatever a and b announce, c always has an announcement that makes sofa false in the resulting model (and, alas, lets the burglars know that she is on vacation).

We have seen that an announcement by two householders is enough to make sofa true. What about the single-agent case? As householders possess information only about themselves and their two closest neighbours, they do not know the actual state of the world, i.e. they do not have enough information about their furthest neighbour. However, it is possible for some agents to make an announcement informing the public that at least two of the households have been using electricity recently, while particular users and non-users remain incognito. Let sofa_1 denote such a target formula. Agent a, for instance, can make sofa_1 true in TES_0101 by an announcement of her knowledge. The result of such an announcement is presented in Fig. 7 (the relation v is universal).

Experiments
For trivial examples with two or three agents and two or three states, the running time of MCCAL is less than the time needed to print the output to the screen. The experiment was carried out on a quad-core 64-bit processor running at 2.2 GHz with 16 GB of memory. The results of model checking these formulas and the average runtime (including outputting lists of strategies) over 10 computations are presented in Table 1. The times taken to check ⟨{a, b, c, d}⟩ sofa and [{a, b, c, d}] sofa are significantly longer because the current implementation explicitly computes the set of all strategies for {a, b, c, d}, and this set is larger than the sets of {a, b} and {a} strategies. However, the set of strategies does not grow exponentially with the size of the group. For formulas with an outermost occurrence of the diamond versions of GAL and CAL modalities (formulas 2, 3, 4, and 6), MCCAL returns the corresponding group and individual strategies. The output of the model checker for state 0101 is presented in Table 2.
The reader can verify that the strategies for formulas ⟨{a, b, c, d}⟩ sofa and [{a, b, c, d}] sofa are identical. Indeed, the only agent outside of the group {a, b, c, d} is v, whose relation is universal. For formula [{a}] sofa_1, the strategy of the group consisting of a single agent coincides with the agent's individual strategy. The strategies in the table differ from the ones presented in Sect. 5.1. Our algorithm can easily be modified to return all successful strategies.

Concluding Remarks
We have shown that the model checking problem for CAL is PSPACE-complete, just like the one for GAL (Ågotnes et al. 2010) and APAL (Balbiani et al. 2008). We also presented a model checker for both CAL and GAL formulas. An interesting direction for future work is to optimise the performance of MCCAL.
In the special case when formulas within scopes of coalition modalities are positive PAL formulas, the model checking problem is in P. The same result would apply to GAL and APAL; in fact, in those cases the formulas in the scope of group and arbitrary announcement modalities can belong to a larger positive fragment (the positive fragment of GAL and of APAL, respectively, rather than of PAL). The latter is due to the fact that GAL and APAL operators are purely universal, while CAL operators combine universal and existential quantification, and CAL does not appear to have a non-trivial positive fragment extending that of PAL.
An interesting special case we would like to consider in the future is the case of models where each state has a different assignment of propositional variables such that the models are already bisimulation contracted.

Appendix: Bisimulation Contraction Algorithm
To state the bisimulation contraction algorithm, we need the following operations:

- For two sets X and Y, split(X, Y) = {X ∩ Y, X \ Y} if both of these sets are non-empty, and {X} otherwise. We will refer to Y as a splitter of X if split(X, Y) = {X ∩ Y, X \ Y}.
- If Q is a set of sets, and Y a set, then split(Q, Y) = ⋃_{X ∈ Q} split(X, Y). We will refer to Y as a splitter of Q if for some X ∈ Q, split(X, Y) ≠ {X}.
- If Y is a set of states, and ∼_a an indistinguishability relation, ∼_a(Y) = {x | ∃y ∈ Y such that x ∼_a y} (the preimage of Y with respect to ∼_a). Clearly, Y ⊆ ∼_a(Y), since ∼_a is reflexive.

The algorithm below is essentially the naive version of the relational coarsest partition algorithm by Paige and Tarjan (1987). It starts with all states placed in a single block of the partition, and repeatedly splits the blocks until the states in the same block are bisimilar (that is, until the blocks correspond to bisimulation equivalence classes).
The first loop of the algorithm makes sure that the states in the same block satisfy the same propositional variables. In this loop, each set V ( p) is used as a (potential) splitter. This enforces the Atoms condition of the bisimulation relation.
The second loop enforces the property that the Paige-Tarjan relational coarsest partition algorithm was designed to achieve: for every indistinguishability relation ∼_a, and every pair of blocks X, Y, either X ⊆ ∼_a(Y) or X ∩ ∼_a(Y) = ∅. This ensures that either every element of X has an a-successor in Y, or none does (the Back and Forth conditions of the bisimulation relation). We repeatedly use splitters of the form ∼_a(Y) until the partition no longer changes.
The algorithm returns a partition of W into bisimulation equivalence classes, which correspond to the states of the bisimulation contraction.

1: function contract(M)
2:   Q ← {W}
3:   for p ∈ P do
4:     Q ← split(Q, V(p))
5:   repeat
6:     pick Y ∈ Q and a ∈ A such that ∼_a(Y) is a splitter for Q
7:     Q ← split(Q, ∼_a(Y))
8:   until there is no change to Q
9:   return Q

The algorithm runs in time polynomial in |W|, |P| and |∼|. The first loop in the algorithm iterates over P and in the worst case terminates when every block in Q is a singleton set; that is, the size of Q at the end of the loop is at most |W|, so at most |W| splits are performed. This means that the time complexity of the first loop is O(|W| × |P|).
If the sets [w]_a are given as part of the model, then computing the splitters is O(|∼|), and the number of splits performed is again O(|W|). So the time complexity of the second loop is O(|W| × |∼|).
The total time complexity of the algorithm is O(|W| × (|P| + |∼|)). A more efficient version (logarithmic in |W|) is possible, as shown by Paige and Tarjan (1987), but here we state the simplest polynomial algorithm, which is implemented in the model-checking tool.
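The algorithm above can be rendered directly in Python. The sketch below uses our own representation (a model as (states, val, rel), with rel[a] mapping each state to its ∼_a-class); all function names are ours, not the tool's.

```python
def split(block, Y):
    """Split one block by splitter Y; a proper split yields two non-empty parts."""
    inside, outside = block & Y, block - Y
    return [b for b in (inside, outside) if b]

def preimage(rel_a, Y):
    """~_a(Y): all states a-related to some state in Y."""
    return {x for x, cls in rel_a.items() if cls & Y}

def contract(model):
    """Naive relational coarsest partition: blocks = bisimulation classes."""
    states, val, rel = model
    Q = [set(states)]                                  # line 2: one block
    props = {p for ps in val.values() for p in ps}
    for p in props:                                    # first loop: Atoms
        Vp = {w for w in states if p in val[w]}
        Q = [part for b in Q for part in split(b, Vp)]
    changed = True
    while changed:                                     # second loop: Back/Forth
        changed = False
        for a in rel:
            for Y in [set(b) for b in Q]:              # snapshot of candidate Ys
                S = preimage(rel[a], Y)
                newQ = [part for b in Q for part in split(b, S)]
                if len(newQ) != len(Q):                # S was a proper splitter
                    Q, changed = newQ, True
    return Q

# Three states, p true only at 1, one agent with the universal relation:
# states 2 and 3 are bisimilar, state 1 is not bisimilar to either.
M = ({1, 2, 3},
     {1: {'p'}, 2: set(), 3: set()},
     {'a': {1: {1, 2, 3}, 2: {1, 2, 3}, 3: {1, 2, 3}}})
```

On this model the first loop already separates {1} from {2, 3}, and the second loop finds no further splitters, so the contraction has two blocks.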