1 Introduction

1.1 The framework

We consider discrete concurrent multi-agent transition systems, i.e. multi-agent systems (MAS) in which the transitions take place in a discrete succession of steps, as a result of simultaneous (or, at least, mutually independent) actions performed by all agents. Such MAS are typically modelled as concurrent game models (cf. [1] or [6]).

Here we focus on a special type of concurrent MAS, which are homogeneous and dynamic, in a sense explained below.

The homogeneity means that all agents are essentially indistinguishable from each other, as their possible behaviours are determined by the same protocol. In particular, they have the same available actions at each state and the effect of these actions depends not on which agents perform them, but only on how many agents perform each action. Thus, the transitions in such systems are determined not by the specific action profiles, but only by the vector of numbers of agents that perform each of the possible actions in these action profiles. The latter can be regarded as an abstraction of the action profile. The transitions are specified symbolically, by means of conditions on these vectors, definable in Presburger arithmetic.

Typical examples of such homogeneous systems include:

  • voting procedures where the outcome only depends on how many agents vote for each possible alternative, but not on who votes for what. These include voting procedures where anonymity is required and the identities of agents should not be inferable by observing the system’s evolution [14, 18];

  • sensor networks of a type where protocols only depend on how many sensors send any given signal [21];

  • computer network servers, the functioning of which only depends on how many currently connected users are performing any given action (e.g. uploading or downloading data, sending printing jobs, communicating over common channels, etc);

  • markets, the dynamics of which only depends on how many agents are selling and how many are buying any given stock (assuming the transactions are per unit) but not exactly who does what.

The dynamicity of the systems that we consider means that the set (hence, the number) of agents present (or just acting) in the system may vary throughout the system evolution, possibly at every transition from one state to the next. All examples listed above naturally have that dynamic feature. There are different ways to interpret such dynamicity. In the extreme version, agents literally appear in and disappear from the system, e.g. users joining and leaving an open network. A less radical interpretation is where the agents are in the system all the time but may become active and inactive from time to time; e.g. voters, or members of a committee, may abstain from voting in one election or decision-making round, and then become active again in the next one. A more refined version is where, at every state of the system’s evolution, each agent decides either to act (i.e. take one of the available actions) or to pass/idle, formally by performing the ‘pass/idle’ action. Technically, all these interpretations seem to be reducible to the latter one. However, the way we model the dynamicity here is by assuming that there is an unbounded, possibly infinite, set of ‘potentially existing’ agents, of which only finitely many are ‘actually existing/present’ at each stage of the evolution of the system. Therefore, at each transition round only finitely many currently existing agents can possibly perform an action, and each of these may also choose not to perform any action (i.e., remain inactive in that round). The currently inactive (or ‘non-existing’) agents do not have any individual influence on the transitions. Thus, the number of currently active agents, who determine the next transition, can change from any instant to the next one, while always remaining finite.
We note, however, the difference between dynamic systems, in the sense described above, and simply parametric systems, where the number of agents is taken as a parameter but remains fixed during the whole evolution of the system. In that sense, the present study applies both to parametric and truly dynamic systems.

In this work we develop a logic-based framework for formal specification and algorithmic verification of the behaviour of homogeneous dynamic multi-agent systems (hdmas) of the type described above. We focus, in particular, on scenarios where the agents are divided into controllable (by the system supervisor or controller) and uncontrollable, representing the environment or an adversary. Both numbers, of controllable and uncontrollable agents, may be fixed or varying throughout the system evolution, possibly at every transition. The controllable agents are assumed to act according to a joint strategy prescribed by the supervisor/controller, with the objective to ensure the desired behaviour of the system (e.g. reaching an outcome in the voting procedure, or keeping the demand and supply of a given stock within desired bounds, or ensuring that the server will not be deadlocked by a malicious attack of adversary users, etc).

As a logical language for formal specification we introduce a suitably extended version, \({\mathcal {L}}_{\textsc {hdmas}}\), of the alternating time temporal logic (ATL) [1]. In \({\mathcal {L}}_{\textsc {hdmas}}\) one can specify properties of the type “A team of (at least) n controllable agents can ensure, against at most m active uncontrollable agents, that any possible evolution of the system satisfies a given objective \(\varphi\)”, where the objective \(\varphi\) is specified again as a formula of that language, and each of n and m is either a fixed number, a parameter, or a variable that can be quantified over.

To summarise the comparison: in the standard concurrent game models of MAS agents are explicitly distinguished and in the logic ATL they are explicitly referred to by their names (individually, or in coalitions). In the HDMAS framework developed here, the only distinction between the agents is whether they are controllable or not, and in the language both are referred to only by numbers.

Here is an indicative, yet generic scenario, where our framework is readily applicable for both modelling and verification.

A military fortress has k protected points of entry: \(A_1, A_2, \ldots , A_k\), with \(k > 2\). The commander of the fortress has C soldiers, hereafter called ‘defenders’, that can be deployed to protect these points of entry against an invading army. For each \(A_i\), a number \(c_i\) of defenders, with \(m_i \le c_i \le M_i\), can be deployed against \(n_i\) ‘invaders’. If \(c_i = M_i\), then the defenders successfully protect \(A_i\) against any number of invaders; if \(c_i < M_i\), then entry point \(A_i\) is lost when \(n_i > c_i\). Moreover, both the defending and the invading commander may receive reinforcements and re-deploy their soldiers among the entry points once a day (say, at noon), whereas the attacks can only take place at night. However, neither of them can observe the precise distribution of the soldiers of the other party, but they can observe which points of entry are currently “outpowered” by not being sufficiently protected by defenders. It is also known that the enemy must outpower more than 2 points of entry at the same time in order to successfully invade the fortress.

The framework hdmas that we develop here will enable modelling the scenario above as well as specifying and algorithmically verifying claims of the kind: “The fortress commander has a strategy to protect the fortress for at least d days, for a given d (or, forever) with C defenders against at most V (or, against any number of) invaders”.

1.2 Structure and content of the paper

In Sect. 2 we introduce the hdmas framework, provide a running example, and prove some technical results needed to introduce counting abstractions of joint actions and strategy profiles. Using these counting abstractions, in Sect. 3 we provide formal semantics in hdmas models for the logic \({\mathcal {L}}_{\textsc {hdmas}}\), which we introduce there. We then define a normal form for formulae of \({\mathcal {L}}_{\textsc {hdmas}}\) and the fragment \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\), consisting of the formulae in normal form. The key technical result obtained in that section is that every formula in \({\mathcal {L}}_{\textsc {hdmas}}\) is equivalent on finite models to one in \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\). In Sect. 4 we develop an algorithm for global model checking of formulae in \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\) in finite hdmas models, which invokes model checking truth of their respective translations into Presburger formulae, and illustrate that algorithm on the running examples. In Sect. 5 we establish some refined complexity estimates for the model checking algorithm, using recent complexity results obtained in [11] for fragments of Presburger arithmetic. We end with some concluding remarks on extensions and possible applications of our work in Sect. 6.

1.3 Related work

The framework most closely related to ours is Open Multi-Agent Systems (OMAS) [17]. hdmas shares with it the characteristic ‘dynamic’ feature of agents, which can leave and join the system at runtime. However, hdmas differs from OMAS in several essential aspects. First, although any finite number of agents can perform actions at each step, the evolution of an OMAS depends only on the projection of those actions on the set of action names or, in other words, on whether each action is performed by at least one agent; in contrast, hdmas makes use of the full expressivity of Presburger arithmetic. Next, the verification formalism of OMAS is a temporal epistemic logic with (universally quantified) indices ranging over agents, while ours includes strategic operators. Lastly, decidability of model checking OMAS is obtained by restricting the semantics of the models and by using cutoff techniques, whereas we ultimately invoke model checking truth of Presburger formulas.

We are aware of other threads of related work, of varying degrees of relevance; however, none of them considers formal models and verification methods for the type of homogeneous and dynamic multi-agent scenarios studied here. Therefore, we only mention them briefly: in all frameworks listed below, the number of agents is fixed along system executions, possibly as a parameter, and the formal specification languages do not explicitly allow quantification over the number of agents.

  • Counting abstraction for verification of parametric systems has been studied in [10] and [4], where techniques based on Petri nets or Vector Addition Systems with States (VASS) are used to obtain decidability of model checking.

  • The work in [19] is closer to ours, as strategic reasoning is considered, but only for a restricted set of properties such as reachability, coverability and deadlock avoidance. Also, assumptions on the system evolutions are made, in particular monotonicity with respect to a well-quasi-ordering.

  • In [15] temporal epistemic properties of parametric interpreted systems are checked irrespective of the number of agents by using cutoff techniques.

  • Modular Interpreted Systems [13] is a MAS framework that achieves a decoupling between the local descriptions of agents and the global system description, which possibly makes it amenable to modelling dynamic MAS.

  • Homogeneous MAS with transitions determined by the number of acting agents have been introduced in [18].

  • Population protocols [2] are parametric systems of homogeneous agents, and decidability of model checking against probabilistic linear-time specifications is studied in [9].

  • In [7], instead of verifying MAS with unknown number of agents, the authors propose a technique to find the minimal number of agents which, once deployed and suitably orchestrated, can carry out a manufacturing task.

  • Lastly, as noted above, our specification logic builds on the alternating-time temporal logic ATL [1] and extends the model checking algorithm for ATL to hdmas.

2 Preliminaries and modelling framework

We start by introducing the basic ingredients of our framework. We assume a hereafter fixed (finite, or possibly countably infinite) universe of potential agents \(Ag = \{ ag _1, ag _2, \ldots \}\), but only finitely many of them will be assumed currently present, or ‘currently existing’, at any time instant or stage of the evolution of the system. Alternatively, the universe of agents can be assumed always finite but unbounded.

Next, we consider a finite set of action names \(Act =\{{ act _1, \ldots , act _n}\}\). We extend this set with a specific ‘idle’ action \(\varepsilon\) and define \(Act ^+= Act \cup \{{\varepsilon }\}\). We also fix a set of distinct variables \(X = \{{x_1, \ldots , x_n}\}\) extended to \(X ^+= X \cup \{{x_{\varepsilon }}\}\), called action counters, associated to \(Act\) and \(Act ^+\) respectively. Formally, we relate these by a mapping \({\mu }: Act ^+ \rightarrow X ^+\) such that for each \(i \in \{{1, \ldots , n}\}\), \({\mu }( act _i)=x_i\) and \({\mu }(\varepsilon )=x_{\varepsilon }\). Hereafter, \(Act\), \(Act ^+\), \(X\), \(X ^+\), and \({\mu }\) are assumed fixed, as above.

An action profile over a given set of actions \(Act ' \subseteq Act ^+\) is defined as a function \(\mathsf {p}_{}: Ag \rightarrow Act '\), assigning an action from \(Act '\) to each agent in \(Ag\). More generally, for any subset of agents \(A \subseteq Ag\), a joint action of A over a set of actions \(Act ' \subseteq Act ^+\) is a function \(\mathsf {p}_{A}\) assigning an action from \(Act '\) to each agent in A.

Given a function f, we will write: \(dom (f)\) for the domain of f; \(f|_{Z}\) for the restriction of f to a domain \(Z \subseteq dom(f)\); and f[Z] for the image of Z under f. For technical purposes, we also consider a (unique) function \(f_{\emptyset }\) with an empty domain.

To express relevant conditions on the number of agents performing actions in \(X\), we make use of Presburger arithmetic (the first-order theory of natural numbers with addition and \(=\)). This is a fairly expressive yet decidable theory, which makes it very natural and suitable for many computational tasks related to verification of various discrete infinite-state systems (see, e.g., [12] for an introduction).

Definition 1

(Guards) A (transition) guard \(g\) is an open (quantifier-free)Footnote 1 formula of Presburger arithmetic \(\mathsf {PrA}\) with predicates \(=\) and < over variables from the set of action counters \(X\). We denote by \(G\) the set of all guards and by \(Var(g)\) the set of variables occurring in a guard \(g \in G\), and we use the following standard abbreviations in Presburger formulas: \(n := 1+ \cdots +1\) (\(n\) times 1) and \(n x:= x+\cdots + x\) (\(n\) times \(x\)) for any \(n \in \mathbb {N}\) and \(x\in X ^+\).

Definition 2

An action distribution is any function \(\mathbf {act}: X' \rightarrow \mathbb {N}\), where \(X' \subseteq X ^+\). The domain \(X'\) is denoted, as usual, by \(dom (\mathbf {act})\). Intuitively, an action distribution assigns to every action \(act\), through the value of the action counter \({\mu }( act )\), the number of agents who perform \(act\).

Given an action distribution \(\mathbf {act}\) we define:

  • \(\mathbf {act} \models g\), for a given guard \(g\), if \(\mathbf {act}\) satisfies \(g\) with the expected standard semantics of \(\mathsf {PrA}\), namely:

    \(\mathbf {act} \models x_1 = x_2\) if \(\mathbf {act} (x_1) = \mathbf {act} (x_2)\), and \(\mathbf {act} \models x_1 < x_2\) if \(\mathbf {act} (x_1) < \mathbf {act} (x_2)\);

  • \(\mathsf {sum}(\mathbf {act}) := \sum _{x\in dom (\mathbf {act})} \mathbf {act} (x)\);

  • \(H|^{m} := \{{\mathbf {act} \mid \mathsf {sum}(\mathbf {act}) = m}\}\) is the set of action distributions where exactly m agents perform actions;

  • \(H:= \bigcup _{m \in \mathbb {N}} H|^{m}\) is the set of all action distributions.

We also define the partial mapping \(\oplus : H\times H\dashrightarrow H\) which, given two action distributions \(\mathbf {act} _{1}\) and \(\mathbf {act} _{2}\), is defined iff \(dom (\mathbf {act} _{1}) = dom (\mathbf {act} _{2}) := Z\), and returns a new action distribution \(\mathbf {act} _{1} \oplus \mathbf {act} _{2}\) with domain Z, defined component-wise as the sum of \(\mathbf {act} _{1}\) and \(\mathbf {act} _{2}\), i.e. \((\mathbf {act} _{1} \oplus \mathbf {act} _{2}) (z) = \mathbf {act} _{1}(z) + \mathbf {act} _{2}(z)\) for each \(z \in Z\).
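To make these definitions concrete, here is a minimal Python sketch (our own encoding, not part of the framework) that represents action distributions as dictionaries from action-counter names to natural numbers, with \(\mathsf{sum}\) and the partial operation \(\oplus\). A guard is simplified to a Python predicate over a distribution, standing in for a quantifier-free Presburger formula; the counter names x1, x2, x_eps are illustrative.

```python
# Action distributions as dicts: action counter name -> number of agents.

def dist_sum(act):
    """sum(act): total number of agents assigned an action (incl. idling)."""
    return sum(act.values())

def oplus(act1, act2):
    """act1 (+) act2: defined only when the two domains coincide."""
    if act1.keys() != act2.keys():
        raise ValueError("oplus is undefined for distributions with different domains")
    return {x: act1[x] + act2[x] for x in act1}

# A guard over X, e.g. x1 < x2. The idle counter x_eps never occurs in
# guards, so its value cannot affect whether a guard is satisfied.
g = lambda act: act["x1"] < act["x2"]

a1 = {"x1": 2, "x2": 3, "x_eps": 1}
a2 = {"x1": 1, "x2": 0, "x_eps": 4}
```

Note that \(a_1 \oplus a_2\) no longer satisfies the sample guard, since both counters become equal after the component-wise sum.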

Remark 1

Note that guards are defined over the set of variables \(X\), while the domain of action distributions can also include \(x_{\varepsilon }\). It follows that, for any action distribution \(\mathbf {act}\), the value \(\mathbf {act} (x_{\varepsilon })\) does not have any influence on the satisfaction of guards. More generally, for every \(\mathbf {act} \in H\) and \(g \in G\) we have \(\mathbf {act} \models g\) iff \(\mathbf {act} |_{ Var( g ) } \models g\).

We now relate action profiles with action distributions. Every action profile is associated with the action distribution that counts, for each action, the number of agents performing it. In that sense, action distributions are counting abstractions of action profiles. The formal definition follows, where we denote the set of all action profiles over \(Act ^+\) by \(\mathsf {P}_{}\) and define the inverse of an action profile \(\mathsf {p}_{}\) as the function \(\mathsf {p}_{}^{-1} : Act ^+ \rightarrow \wp ( Ag )\) such that \(\mathsf {p}_{}^{-1}( act ) = \{{ ag \in Ag \mid \mathsf {p}_{}( ag )= act }\}\).

Definition 3

The action profile abstraction is the function \(\alpha : \mathsf {P}_{} \rightarrow H\) where \(\alpha (\mathsf {p}_{})({\mu }( act )) = |\mathsf {p}_{}^{-1}( act )|\) for all \(\mathsf {p}_{} \in \mathsf {P}_{}\) and \(act \in Act ^+\).

The function \(\alpha {}\) partitions the set \(\mathsf {P}_{}\) into equivalence classes of action profiles having the same abstraction; that is, two action profiles \(\mathsf {p}_{1}\) and \(\mathsf {p}_{2}\) belong to the same equivalence class iff \(\alpha (\mathsf {p}_{1}) = \alpha (\mathsf {p}_{2})\).
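As an illustration, the counting abstraction \(\alpha\) and the induced equivalence can be sketched in Python as follows; the encoding of agents and actions as strings, and the helper mu, are our own assumptions:

```python
# Counting abstraction of an action profile (Definition 3).
from collections import Counter

def mu(act):
    """Action counter associated to an action name, e.g. 'act1' -> 'x1'."""
    return "x_eps" if act == "eps" else "x" + act[len("act"):]

def alpha(profile, actions=("act1", "act2", "eps")):
    """How many agents perform each action, keyed by action counters."""
    counts = Counter(profile.values())
    return {mu(a): counts.get(a, 0) for a in actions}

p1 = {"ag1": "act1", "ag2": "act2", "ag3": "act2", "ag4": "eps"}
p2 = {"ag1": "act2", "ag2": "act1", "ag3": "eps", "ag4": "act2"}  # permuted roles
```

Although p1 and p2 assign different actions to the individual agents, they have the same abstraction and hence belong to the same equivalence class.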

We now introduce the abstract models of our framework.

Definition 4

A homogeneous dynamic MAS (hdmas) is a structure \({\mathcal {M}}= \langle Ag , Act ^+, S , d , \delta , AP , \lambda \rangle\) where:

  • \(Ag = \{{ ag _1, ag _2, \ldots }\}\) is the countable set of agents.

  • \(Act ^+\) is the set of action names;

  • \(S\) is a set of states;Footnote 2

  • \(d : S \rightarrow \wp ( Act ^+)\) is the action availability function, which assigns to every state \(s\) the set \(d ( s )\) of actions available (to all agents) at \(s\) and is such that \(\varepsilon \in d ( s )\) for every \(s \in S\);

  • \(\delta : S \times S \rightarrow G\) is the transitions guard function, labelling possible transitions between states with guards such that:

    • \(Var(\delta ( s , s ')) \subseteq {\mu }[ d ( s )]\) for each \(s , s ' \in S\) (the guards at each state only involve action counters corresponding to actions available at that state),

    • and, for each \(s \in S\) and for each \(\mathbf {act} \in H|_{{\mu }[ d (s)]}\), there exists a unique \(s ' \in S\) such that \(\mathbf {act} \models \delta ( s , s ')\) (every possible action distribution over the set of actions available at the current state determines a unique transition).

  • \(AP = \{{p_1, p_2, \ldots }\}\) is a finite set of atomic propositions;

  • \(\lambda : S \rightarrow \wp ( AP )\) is a labelling function, assigning to any state \(s\) the set of atomic propositions that are true at \(s\).

Fig. 1

The fortress example modelled as a hdmas

Fig. 2

An abstract example of a hdmas

Example 1

The fortress example presented in the introduction, with \(k=3\) entry points, can be modelled as a hdmas as follows. The set \(S\) contains two states only, displayed as circles in Fig. 1: \(s _1\) and \(s _2\) represent, respectively, the fortress being under the control of the defenders and the fortress being captured. Next, we have two actions for each entry point \(A_i\): one modelling the defensive action \(act _i\) and the other the attacking action \({\overline{act}}_i\) for \(A_i\); therefore \(Act ^+ = \{{ act _1, {\overline{act}}_1, act _2, {\overline{act}}_2, act _3, {\overline{act}}_3, \varepsilon }\}\), with \({\mu }( act _i)=x_i\) and \({\mu }({\overline{act}}_i)={\overline{x}}_i\) for \(i \in \{{1, 2, 3}\}\). All of them are available in \(s _1\) and none of them in \(s _2\), formally: \(d ( s _1) = Act ^+\) and \(d ( s _2) = \{{\varepsilon }\}\). The guards \(g _1, g _2\) are listed next to the picture, and an arrow is drawn from \(s_i\) to \(s_j\) and labelled with \(g_k\) iff \(\delta (s_i, s_j)= g _k\). Formula \(g _1\) guards the transition from \(s _1\) to \(s _2\) and therefore defines when the fortress is captured. This happens when, for each of the entry points \(A_i\) with \(i \in \{{1, 2, 3}\}\), one of two conditions holds: 1) the number of defenders \(x_i\) is less than \(m_i\), or 2) it is less than \(M_i\) and also less than the number of attackers \({\overline{x}}_i\). If this is not the case, the defenders hold the fortress (loop at \(s _1\)), but once the fortress is conquered, it remains so regardless of the actions performed (\(g _2\) is a tautology). The label of each state, as given by the labelling function, is displayed next to it. We only have one atomic proposition, \(captured\), false in \(s _1\) and true in \(s _2\); therefore \(\lambda ( s _1) = \emptyset\) and \(\lambda ( s _2) = \{{captured}\}\).
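The guard \(g_1\) described above can be sketched as a Python predicate (our own encoding of the formula, not the paper's exact syntax); the bounds \(m_i, M_i\) and the counters \(x_i, {\overline{x}}_i\) are passed as 3-tuples:

```python
# Guard g1 of Example 1: the fortress is captured when all three entry
# points are simultaneously outpowered.

def outpowered(x_i, xbar_i, m_i, M_i):
    """A_i is outpowered if x_i < m_i, or x_i < M_i and x_i < xbar_i."""
    return x_i < m_i or (x_i < M_i and x_i < xbar_i)

def g1(x, xbar, m, M):
    """Guard labelling the transition from s1 (held) to s2 (captured)."""
    return all(outpowered(x[i], xbar[i], m[i], M[i]) for i in range(3))
```

For instance, with bounds \(m = (1,1,1)\) and \(M = (3,3,3)\), deploying the maximum \(x = (3,3,3)\) defends the fortress against any attack, while \(x = (0,2,1)\) loses it against \({\overline{x}} = (5,3,2)\).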

Example 2

A more abstract example is given in Fig. 2, which will be used to illustrate some technical points and the model checking algorithm later. The set of actions is \(Act = \{{ act _1, act _2, act _3}\}\) and the action availability function is defined by \(d ( s _1)= d ( s _3)= d ( s _4)= Act ^+\), \(d ( s _2)=\{{ act _1, act _3, \varepsilon }\}\), \(d ( s _5)=\{{ act _2, act _3, \varepsilon }\}\) and \(d ( s _6)=\{{ act _1, \varepsilon }\}\). Lastly, the labelling function is defined as: \(\lambda ( s _1)=\emptyset\), \(\lambda ( s _2)=\lambda ( s _3)=\lambda ( s _4)=\{{p}\}\) and \(\lambda ( s _5)=\lambda ( s _6)=\{{q}\}\).

The restriction on \(\delta\) ensures that, for any number of agents and any profile of actions available to them, the next state is uniquely defined. Thus, the dynamics of the system in terms of possible state transitions is fully determined symbolically by the transitions guard function \(\delta\), as defined formally below.

Definition 5

Given a hdmas \({\mathcal {M}}\), a transition in \({\mathcal {M}}\) is a triple \(( s , \mathsf {p}_{}, s ')\), where \(s , s ' \in S\) and \(\mathsf {p}_{} \in \mathsf {P}_{}\), such that:

  1. each agent \(ag\) performs an available action: \(\mathsf {p}_{}( ag ) \in d ( s )\);

  2. the abstraction \(\alpha (\mathsf {p}_{})\) satisfies the (unique) guard that labels the transition from \(s\) to \(s '\), i.e., \(\alpha (\mathsf {p}_{}) \models \delta ( s , s ')\).

Since transitions only depend on the abstractions of the action profiles, that is, on action distributions, it is immediate to see that action profiles with the same abstraction, applied at the same state, lead to the same successor state. Formally, the following holds.

Lemma 1

Given a hdmas \({\mathcal {M}}\) as above, for every \(s , s ' \in S\) and every \(\mathsf {p}_{1}, \mathsf {p}_{2} \in \mathsf {P}_{}\), if \(\alpha (\mathsf {p}_{1}) = \alpha (\mathsf {p}_{2})\), then \(( s , \mathsf {p}_{1}, s ')\) is a transition in \({\mathcal {M}}\) iff \(( s , \mathsf {p}_{2}, s ')\) is a transition in \({\mathcal {M}}\).

Lemma 1 enables us to define the transition functionFootnote 3 of \({\mathcal {M}}\) directly on action distributions, rather than on action profiles.

Definition 6

Let \({\mathcal {M}}\) be a hdmas. The transition function of \({\mathcal {M}}\) is the partial mapping \(\varDelta : S \times H\dashrightarrow S\) defined as follows. For each \(s \in S\) and \(\mathbf {act} \in H\), the outcome state \(\varDelta ( s , \mathbf {act})\) of \(\mathbf {act}\) at \(s\) is defined and equal to \(s ' \in S\) iff there exists \(\mathsf {p}_{} \in \mathsf {P}_{}\) such that \(( s , \mathsf {p}_{}, s ')\) is a transition and \(\alpha (\mathsf {p}_{}) = \mathbf {act}\); otherwise \(\varDelta ( s , \mathbf {act})\) is undefined.
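A sketch of the transition function \(\varDelta\) for a finite hdmas, assuming guards are encoded as Python predicates over action distributions. The two-state model and its guards below are hypothetical, chosen only to exercise the definition; d and delta encode the availability and transitions guard functions:

```python
# Transition function Delta of Definition 6, computed on action distributions.

def counter(a):
    """Action counter mu(act) associated to an action name."""
    return "x_eps" if a == "eps" else "x" + a[len("act"):]

def Delta(delta, d, s, act):
    """The unique s' with act |= delta(s, s'); None when act is not a
    distribution over the counters of the actions available at s
    (then Delta is undefined)."""
    if set(act) != {counter(a) for a in d[s]}:
        return None
    succs = [s2 for (s1, s2), g in delta.items() if s1 == s and g(act)]
    assert len(succs) == 1, "well-formedness: exactly one guard must hold"
    return succs[0]

# Hypothetical model: from s1, at least two agents doing act1 move the
# system to s2; otherwise it stays in s1. State s2 is a sink.
d = {"s1": {"act1", "eps"}, "s2": {"eps"}}
delta = {
    ("s1", "s2"): lambda act: act["x1"] >= 2,
    ("s1", "s1"): lambda act: act["x1"] < 2,
    ("s2", "s2"): lambda act: True,
}
```

The well-formedness condition of Definition 4 (every available distribution satisfies exactly one outgoing guard) is checked here at lookup time rather than once on the model, which suffices for illustration.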

Infinite sequences of successive states will be called ‘plays’. Formally, a play is a sequence \(\pi = s _0, s _1, \ldots\) in \(S ^{\omega }\) such that for every stage (of the play) \(i \in \mathbb {N}\) there is \(\mathbf {act} _i \in H\) with \(\varDelta ( s _i, \mathbf {act} _i) = s _{i+1}\). We denote by \(\pi [i]\) the state at the \(i\)th stage of the play, for each \(i \in \mathbb {N}\).

Since transitions from a given state \(s\) are defined only for action profiles that assign to every agent an action available at \(s\), we call these the available action profiles at \(s\). Formally, for each state \(s \in S\) we define the set of available action profiles at \(s\) as

$$\begin{aligned} \mathsf {P}_{ s } = \{{\mathsf {p}_{} \in \mathsf {P}_{} \mid \mathsf {p}_{}( ag ) \in d ( s ) \ \hbox {for each} \ ag \in Ag }\}. \end{aligned}$$

More generally, for each set of agents \(A \subseteq Ag\) we define likewise the set of joint actions for \(A\) available in \(s\) as

$$\begin{aligned} \mathsf {P}_{ s }|_{A} = \{{\mathsf {p}_{A} \in \mathsf {P}_{A} \mid \mathsf {p}_{A}( ag ) \in d ( s ) \ \hbox {for each} \ ag \in A}\}. \end{aligned}$$

where \(\mathsf {P}_{A}\) denotes (with a mild abuse of notation) the set of all possible joint actions for \(A\).

Next, we define a positional strategy for a given coalition of agents \(A\) as a mapping that assigns to each state \(s\) an available joint action for \(A\).

Definition 7

Let \(A\) be a (possibly empty) set of agents and \({\mathcal {M}}\) be a hdmas with a state space \(S\). A joint (positional) strategy for the coalition \(A\) is a function \(\sigma _{A} : S \rightarrow \mathsf {P}_{A}\) such that \(\sigma _{A}(s) \in \mathsf {P}_{ s }|_{A}\) for each \(s \in S\). The empty coalition has only one joint strategy, \(\sigma _{\emptyset }\), assigning the empty joint action at every state.

Hereafter we assume that at every stage of the play representing the evolution of the system, the set of all currently present agents is partitioned into two: the set of controllable agents, denoted by \(\varvec{C}\), and the set of uncontrollable agents, denoted by \(\varvec{N}\). Neither of these subsets, nor their sizes, is fixed initially or during the play; each of them can vary at every transition round.

Definition 8

Let \({\mathcal {M}}\) be a hdmas, \(s \in S\) be a state in it, \(\varvec{C}, \varvec{N}\subseteq Ag\) be the respective current sets of controllable and uncontrollable agents, and let \(\mathsf {p}_{\varvec{C}} \in \mathsf {P}_{ s }|_{\varvec{C}}\). The outcome set of \(\mathsf {p}_{\varvec{C}}\) at \(s\) is defined as follows:

$$\begin{aligned} out ( s , \mathsf {p}_{\varvec{C}}, \varvec{N}) := \, & \big\{ s ' \in S \mid s ' = \varDelta ( s , \alpha (\mathsf {p}_{})) \, \hbox {for some} \, \mathsf {p}_{} \in \mathsf {P}_{ s }|_{(\varvec{C}\cup \varvec{N})} \\ & \hbox {such that } \mathsf {p}_{} |_{\varvec{C}} = \mathsf {p}_{\varvec{C}}\big\}. \end{aligned}$$

Respectively, given a joint strategy \(\sigma _{\varvec{C}}\) for \(\varvec{C}\) we define the set of outcome plays of \(\mathsf {p}_{\varvec{C}}\) at \(s\) (against \(\varvec{N}\)) as

$$\begin{aligned} out ( s , \sigma _{\varvec{C}}, \varvec{N}):= \, & {} \big \{ \pi = s _0, s _1, \ldots \mid s _0= s \hbox { and for all } i \in \mathbb {N}\hbox { there exists } \mathsf {p}_{i} \in \mathsf {P}_{ s _i}|_{(\varvec{C}\cup \varvec{N})}\\ & \hbox { such that } \mathsf {p}_{i}|_{\varvec{C}}= {} \sigma _{\varvec{C}}( s _i) \hbox { and } \varDelta ( s _i, \alpha (\mathsf {p}_{i})) = s _{i+1} \big \}. \end{aligned}$$
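The outcome set of Definition 8 can be computed purely on counting abstractions: the controllable coalition contributes a fixed distribution of its agents over the action counters, and the uncontrollable agents may realize any distribution of their number over the same counters. The sketch below (our own encoding; the two-state model and its guards are hypothetical) enumerates those distributions by a stars-and-bars scheme:

```python
# Outcome set out(s, p_C, N), computed on counting abstractions.
from itertools import combinations

def compositions(n, keys):
    """All ways to distribute n agents over the given action counters
    (stars and bars: choose k-1 bar positions among n+k-1 slots)."""
    keys, k = list(keys), len(keys)
    for bars in combinations(range(n + k - 1), k - 1):
        parts, prev = [], -1
        for b in bars:
            parts.append(b - prev - 1)
            prev = b
        parts.append(n + k - 2 - prev)
        yield dict(zip(keys, parts))

def outcome_set(delta, s, ctrl, n_unc):
    """States reachable from s when the coalition plays the distribution
    ctrl and the n_unc uncontrollable agents play arbitrarily."""
    result = set()
    for u in compositions(n_unc, ctrl.keys()):
        joint = {x: ctrl[x] + u[x] for x in ctrl}  # controllable (+) uncontrollable
        result.update(s2 for (s1, s2), g in delta.items()
                      if s1 == s and g(joint))
    return result

# Hypothetical two-state model: at least two agents doing act1 capture s2.
delta = {
    ("s1", "s2"): lambda act: act["x1"] >= 2,
    ("s1", "s1"): lambda act: act["x1"] < 2,
    ("s2", "s2"): lambda act: True,
}
```

For instance, one controllable agent playing \(act_1\) against one uncontrollable agent cannot determine the successor of \(s_1\): the adversary may idle (staying in \(s_1\)) or also play \(act_1\) (reaching \(s_2\)).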

The abstraction \(\alpha\), although defined on action profiles, readily extends to joint actions and naturally induces an equivalence relation on them: two joint actions are equivalent whenever their abstractions are the same. Likewise for joint strategies, as the next definition formalizes.

Definition 9

Let \({\mathcal {M}}\) be a hdmas, \(\varvec{C}_1, \varvec{C}_2 \subseteq Ag\) and \(\mathsf {p}_{\varvec{C}_1}, \mathsf {p}_{\varvec{C}_2}\) be respective joint actions for \(\varvec{C}_1\) and \(\varvec{C}_2\). We say that \(\mathsf {p}_{\varvec{C}_1}\) and \(\mathsf {p}_{\varvec{C}_2}\) are equivalent, denoted \(\mathsf {p}_{\varvec{C}_1}~\equiv ~\mathsf {p}_{\varvec{C}_2}\), if \(\alpha (\mathsf {p}_{\varvec{C}_1}) = \alpha (\mathsf {p}_{\varvec{C}_2})\).

Likewise, we say that joint strategies \(\sigma _{\varvec{C}_1}\) and \(\sigma _{\varvec{C}_2}\) are equivalent, denoted \(\sigma _{\varvec{C}_1} \equiv \sigma _{\varvec{C}_2}\) if they prescribe equivalent joint actions for \(\varvec{C}_1\) and \(\varvec{C}_2\) at every state.

Note that if \(\mathsf {p}_{\varvec{C}_1} \equiv \mathsf {p}_{\varvec{C}_2}\) then \(|\varvec{C}_1| = |\varvec{C}_2|\) and \(\mathsf {p}_{\varvec{C}_1}\) and \(\mathsf {p}_{\varvec{C}_2}\) produce the same outcome sets.

Lemma 2

Let \({\mathcal {M}}\) be a hdmas and \(\varvec{C}_1, \varvec{C}_2, \varvec{N}_1, \varvec{N}_2 \subseteq Ag\) be such that \(|\varvec{C}_1| = |\varvec{C}_2|\), \(|\varvec{N}_1| = |\varvec{N}_2|\), \(\varvec{C}_1 \cap \varvec{N}_1 = \emptyset\), and \(\varvec{C}_2 \cap \varvec{N}_2 = \emptyset\). Then:

  1. For any \(s \in S\), if \(\mathsf {p}_{\varvec{C}_1}\) and \(\mathsf {p}_{\varvec{C}_2}\) are two equivalent joint actions available at \(s\), respectively for \(\varvec{C}_1\) and \(\varvec{C}_2\), then \(out ( s , \mathsf {p}_{\varvec{C}_1},\varvec{N}_1) = out ( s , \mathsf {p}_{\varvec{C}_2},\varvec{N}_2)\).

  2. If \(\sigma _{\varvec{C}_1}\) and \(\sigma _{\varvec{C}_2}\) are two equivalent joint strategies in \({\mathcal {M}}\), respectively for \(\varvec{C}_1\) and \(\varvec{C}_2\), then for each \(s \in S\), \(out ( s , \sigma _{\varvec{C}_1},\varvec{N}_1) = out ( s , \sigma _{\varvec{C}_2},\varvec{N}_2)\).

Proof

(1) Let \(s ' \in out ( s , \mathsf {p}_{\varvec{C}_1},\varvec{N}_1)\). Then \(s ' = \varDelta ( s , \alpha (\mathsf {p}_{1}))\) for some \(\mathsf {p}_{1} \in \mathsf {P}_{ s }|_{(\varvec{C}_1 \cup \varvec{N}_1)}\) such that \(\mathsf {p}_{1} |_{\varvec{C}_1} = \mathsf {p}_{\varvec{C}_1}\). Fix a bijection \(h: \varvec{C}_2 \rightarrow \varvec{C}_1\). It can be extended to a bijection \(f: Ag \rightarrow Ag\), such that \(f[\varvec{N}_2] = \varvec{N}_1\).

Define \(\mathsf {p}_{2} \in \mathsf {P}_{ s }|_{(\varvec{C}_2 \cup \varvec{N}_2)}\) by \(\mathsf {p}_{2}( ag ) := \mathsf {p}_{1}(f( ag ))\). Clearly, \(\alpha (\mathsf {p}_{2}) = \alpha (\mathsf {p}_{1})\). Also, \(\mathsf {p}_{2}|_{\varvec{C}_2} = \mathsf {p}_{1}|_{f[\varvec{C}_2]}\) as \(f[\varvec{C}_2] = \varvec{C}_1\), hence \(\alpha (\mathsf {p}_{2}|_{\varvec{C}_2}) = \alpha (\mathsf {p}_{1}|_{\varvec{C}_1}) = \alpha (\mathsf {p}_{\varvec{C}_1}) = \alpha (\mathsf {p}_{\varvec{C}_2})\) (since \(\mathsf {p}_{\varvec{C}_1} \equiv \mathsf {p}_{\varvec{C}_2}\)).

Therefore, we obtain that \(s ' = \varDelta ( s , \alpha (\mathsf {p}_{2})) \in out ( s , \mathsf {p}_{\varvec{C}_2},\varvec{N}_2)\).

Thus, \(out ( s , \mathsf {p}_{\varvec{C}_1},\varvec{N}_1) \subseteq out ( s , \mathsf {p}_{\varvec{C}_2},\varvec{N}_2)\).

The proof of the converse inclusion is completely symmetric.

(2) The claim follows easily by using (1). Indeed, every play \(\pi = s _0, s _1, \ldots\) in \(out ( s , \sigma _{\varvec{C}_1},\varvec{N}_1)\) can be generated step-by-step as a play in \(out ( s , \sigma _{\varvec{C}_2},\varvec{N}_2)\), by using the equivalence of \(\sigma _{\varvec{C}_1}\) and \(\sigma _{\varvec{C}_2}\) and applying (1) at every step of the construction. We leave out the routine details.

Thus, \(out ( s , \sigma _{\varvec{C}_1},\varvec{N}_1) \subseteq out ( s , \sigma _{\varvec{C}_2},\varvec{N}_2)\). Again, the converse inclusion is completely symmetric. \(\square\)

We now prove that, as expected, the outcome sets from joint actions and strategies do not depend on the actual sets of controllable and uncontrollable agents, but only on their sizes.

Lemma 3

Let \({\mathcal {M}}\) be a hdmas, \(s \in S\), let \(\varvec{C}, \varvec{N}\subseteq Ag\) be the respective current sets of controllable and uncontrollable agents (hence, assumed disjoint), and let \(\mathsf {p}_{\varvec{C}} \in \mathsf {P}_{ s }|_{\varvec{C}}\) be an available joint action for \(\varvec{C}\) at \(s\). Then for every \(\varvec{C}' \subseteq Ag\) such that \(|\varvec{C}'| = |\varvec{C}|\) there exists an available joint action \(\mathsf {p}_{\varvec{C}'}\) for \(\varvec{C}'\) at \(s\), such that for every \(\varvec{N}' \subseteq Ag\) with \(\varvec{C}' \cap \varvec{N}' = \emptyset\), if \(|\varvec{N}'| = |\varvec{N}|\), then \(out ( s , \mathsf {p}_{\varvec{C}'}, \varvec{N}') = out ( s , \mathsf {p}_{\varvec{C}}, \varvec{N})\).

Proof

Fix any \(\varvec{C}' \subseteq Ag\) such that \(|\varvec{C}'| = |\varvec{C}|\). Take a bijection \(h: \varvec{C}' \rightarrow \varvec{C}\). It transforms canonically the joint action \(\mathsf {p}_{\varvec{C}}\) to a joint action \(\mathsf {p}_{\varvec{C}'}\) available at \(s\), defined by \(\mathsf {p}_{\varvec{C}'}( ag ) := \mathsf {p}_{\varvec{C}}(h( ag ))\). Clearly, \(\alpha (\mathsf {p}_{\varvec{C}'}) = \alpha (\mathsf {p}_{\varvec{C}})\). Hence, by Lemma 2, \(out ( s , \mathsf {p}_{\varvec{C}'}, \varvec{N}') = out ( s , \mathsf {p}_{\varvec{C}}, \varvec{N})\) for every \(\varvec{N}' \subseteq Ag\) such that \(\varvec{C}' \cap \varvec{N}' = \emptyset\) and \(|\varvec{N}'| = |\varvec{N}|\). \(\square\)
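For illustration only (this encoding is ours, not part of the formal development), the transport of a joint action along a bijection used in the proof can be sketched in Python; joint actions are represented as dictionaries mapping agents to actions, and the abstraction \(\alpha\) counts how many agents perform each action:

```python
from collections import Counter

def alpha(joint_action):
    """Abstract a joint action (a map agent -> action) into the vector
    counting how many agents perform each action."""
    return Counter(joint_action.values())

def transport(joint_action, h):
    """Transport a joint action for coalition C along a bijection h: C' -> C,
    as in the proof of Lemma 3: agent ag in C' performs what h(ag) performs."""
    return {ag: joint_action[h[ag]] for ag in h}

# A joint action for C = {a1, a2, a3} and a bijection from C' = {b1, b2, b3}
# (agent and action names are hypothetical):
p_C = {"a1": "push", "a2": "push", "a3": "idle"}
h = {"b1": "a1", "b2": "a2", "b3": "a3"}

p_C_prime = transport(p_C, h)
# Renaming the agents leaves the abstraction unchanged:
assert alpha(p_C_prime) == alpha(p_C)
```

Since the transition function depends only on \(\alpha\), the transported joint action induces exactly the same outcomes.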

Lemma 3 easily extends to joint strategies, as follows.

Lemma 4

Let \({\mathcal {M}}\) be a hdmas, \(s \in S\), let \(\varvec{C}, \varvec{N}\subseteq Ag\) be the respective current (disjoint) sets of controllable and uncontrollable agents, and let \(\sigma _{\varvec{C}}\) be a joint strategy for \(\varvec{C}\). Then for every \(\varvec{C}' \subseteq Ag\) with \(|\varvec{C}'| = |\varvec{C}|\) there exists a joint strategy \(\sigma _{\varvec{C}'}\) such that for every \(\varvec{N}' \subseteq Ag\) where \(\varvec{C}' \cap \varvec{N}' = \emptyset\), if \(|\varvec{N}'| = |\varvec{N}|\), then \(out ( s , \sigma _{\varvec{C}'}, \varvec{N}') = out ( s , \sigma _{\varvec{C}}, \varvec{N})\).

Proof

The argument is similar to the previous proof.

Fix any \(\varvec{C}' \subseteq Ag\) such that \(|\varvec{C}'| = |\varvec{C}|\). Take a bijection \(h: \varvec{C}' \rightarrow \varvec{C}\). It transforms canonically the joint strategy \(\sigma _{\varvec{C}}\) to a joint strategy \(\sigma _{\varvec{C}'}\), defined by \(\sigma _{\varvec{C}'}( s )( ag ) := \sigma _{\varvec{C}}( s )(h( ag ))\).

Clearly, \(\alpha (\sigma _{\varvec{C}'}( s )) = \alpha (\sigma _{\varvec{C}}( s ))\) for every state \(s\), hence \(\sigma _{\varvec{C}} \equiv \sigma _{\varvec{C}'}\). Therefore, by Lemma 2, \(out ( s , \sigma _{\varvec{C}'}, \varvec{N}') = out ( s , \sigma _{\varvec{C}}, \varvec{N})\) for every \(\varvec{N}' \subseteq Ag\) such that \(\varvec{C}' \cap \varvec{N}' = \emptyset\) and \(|\varvec{N}'| = |\varvec{N}|\). \(\square\)

Lemmas 3 and 4 essentially say that the strategic abilities in a hdmas are determined not by the concrete sets of controllable and uncontrollable agents, but only by their respective sizes. This justifies abstracting the notions of coalitional actions and strategies in terms of action profile abstractions, to be used thereafter in our semantics and verification procedures.

Definition 10

Let \({\mathcal {M}}\) be a hdmas and \(C,N \in \mathbb {N}\).

1.1. An abstract joint action for a coalition of C agents at state \(s \in S\) is an action distribution \(\mathbf {act} _{C} \in H|^{C}\) such that \(dom (\mathbf {act} _{C}) = {\mu }[ d ( s )]\) (recall notation from Definition 2).

Thus, an abstract joint action for a given coalition at state \(s\) prescribes for each action available at \(s\) how many agents from the coalition take that action.

1.2. The outcome set of states of the abstract joint action \(\mathbf {act} _{C}\) of C controllable agents against N uncontrollable agents at \(s\) is the set of states

$$\begin{aligned} out( s , \mathbf {act} _{C}, N) := \, & \big \{ s ' \in S \mid s ' = \varDelta ( s , \mathbf {act} _{C} \oplus \mathbf {act} _{N}) \hbox { for some } \mathbf {act} _{N} \in H|^{N}\\ & \hbox { such that } dom (\mathbf {act} _{N}) = {\mu }[ d ( s )] \big \}. \end{aligned}$$

2.1. An abstract (positional) joint strategy for a coalition of C agents is a function \(\rho _{C}: S \rightarrow H|^{C}\) such that for each \(s \in S\), \(\rho _{C}( s )\) is an abstract joint action such that \(dom (\rho _{C}( s )) = {\mu }[ d ( s )]\).

2.2. The outcome set of plays of an abstract joint strategy \(\rho _{C}\) of C controllable agents against N uncontrollable agents is the set of plays

$$\begin{aligned} out( s , \rho _{C}, N) := & \, \big \{ \pi = s _0, s _1, \ldots \mid s _0= s \hbox { and for all } i \in \mathbb {N}\hbox { there is } \mathbf {act} _{i} \in H|^{N}\\ & \hbox { such that } dom (\mathbf {act} _{i}) = {\mu }[ d ( s )] \hbox { and } \varDelta ( s _i, \rho _{C}( s _i) \oplus \mathbf {act} _{i} ) = s _{i+1} \big \}. \end{aligned}$$
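On a finite model, the outcome set of an abstract joint action can be computed directly by enumerating all action distributions of the uncontrollable agents. The following self-contained Python sketch is our own illustration; the transition function `delta` and all names are hypothetical:

```python
def distributions(n, actions):
    """Enumerate all action distributions of n indistinguishable agents
    over the given list of available actions (stars and bars)."""
    if len(actions) == 1:
        yield {actions[0]: n}
        return
    first, rest_actions = actions[0], actions[1:]
    for k in range(n + 1):
        for rest in distributions(n - k, rest_actions):
            yield {first: k, **rest}

def outcome_states(s, act_C, N, actions, delta):
    """out(s, act_C, N): the states delta(s, act_C (+) act_N) obtained for
    every distribution act_N of the N uncontrollable agents over `actions`."""
    result = set()
    for act_N in distributions(N, actions):
        combined = {a: act_C.get(a, 0) + act_N[a] for a in actions}
        result.add(delta(s, combined))
    return result

# Tiny hypothetical example: two available actions "a" and "e"; the guard
# leads to state "s_hi" iff at least two agents perform "a".
def delta(state, counts):
    return "s_hi" if counts["a"] >= 2 else "s_lo"

succ = outcome_states("s0", {"a": 1, "e": 0}, 1, ["a", "e"], delta)
assert succ == {"s_hi", "s_lo"}
```

With two available actions there are \(N+1\) distributions of the uncontrollable agents, so the outcome set is obtained by \(N+1\) applications of the transition function.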

3 Logic for specification and verification of HDMAS

We now introduce a logic \({\mathcal {L}}_{\textsc {hdmas}}\) for specifying and verifying properties of hdmas, based on the Alternating-time Temporal Logic ATL. It features a strategic operator expressing the ability of a set of controllable agents to guarantee the satisfaction of a temporal objective, regardless of the actions taken by the set of uncontrollable agents. As shown in the previous section, such ability only depends on the sizes of these sets. Therefore, our strategic operator \(\langle \!\langle {*, *}\rangle \!\rangle _{_{\! }}\,\) takes two arguments: the first represents the number of controllable agents and the second the number of uncontrollable agents currently present in the system. Intuitively, a formula of the kind \(\langle \!\langle {C, N}\rangle \!\rangle _{_{\! }}\, \chi\), with \(C, N \in \mathbb {N}\) and \(\chi\) a (path) formula of \({\mathcal {L}}_{\textsc {hdmas}}\), specifies the property:

A coalition of C controllable agents has a joint strategy to guarantee satisfaction of the objective \(\chi\) against N uncontrollable agents on every play consistent with that strategy.

Each of the arguments C and N may be a concrete number, a parameter, or a variable that can be quantified over. Parameters are free variables that cannot be quantified over; allowing them adds expressive power to the language, because some syntactic restrictions will be imposed on the variables.

3.1 Formal syntax and semantics

We now fix a set of atomic propositions \(\varPhi = \{p_1,p_2,\ldots \}\) and a set of two special variables \(Y= \{{y_1, y_2}\}\), ranging over \(\mathbb {N}\), which we call agent counters. These will represent the numbers of controllable and uncontrollable agents respectively, and can be quantified over. We also fix a set of agent counting parameters \(Z= \{{z_1, z_2, \ldots }\}\), again ranging over \(\mathbb {N}\), and define the set of terms as \(T= Y\cup Z\cup \mathbb {N}\). These will be used as arguments of the strategic operators in the logical language defined below.

Definition 11

The logic \({\mathcal {L}}_{\textsc {hdmas}}\) has two sorts of formulae, defined by mutual induction with the following grammars, where free (and bound) occurrences of variables are defined as in first-order logic (FOL):

Path formulae: \(\chi {:}{:}= {} \mathsf {X}\, \varphi \mid \mathsf {G}\, \varphi \mid \psi \, \mathsf {U} \, \varphi\),

where \(\varphi , \psi\) are state formulae.

State formulae:

\(\varphi {:}{:}= {} \top \mid p \mid \lnot \varphi \mid (\varphi \wedge \varphi ) \mid (\varphi \vee \varphi ) \mid \langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\,{ \chi } \mid \forall y\varphi \mid \exists y\varphi\)

where \(p \in \varPhi\), \(t_1 \in T{\setminus } \{y_2\}\), \(t_2 \in T{\setminus } \{y_1\}\), \(y\in Y\), and \(\chi\) is a path formula.

The cases of \(\forall y\varphi\) and \(\exists y\varphi\) are subject to the following syntactic constraint: all free occurrences of \(y\) in \(\varphi\) must have a positive polarity, viz. must be in the scope of an even number of negations.

The propositional connectives \(\bot , \rightarrow , \leftrightarrow\) are defined as usual. Also, we define \({\mathsf {F}\, \psi } := {\top \, \mathsf {U} \, \psi }\).
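For illustration only (this Python encoding is ours, not part of the formal development), state formulae can be represented as an abstract syntax tree, and the polarity constraint on quantified variables checked recursively; only a few connectives are included in this sketch:

```python
from dataclasses import dataclass
from typing import Union

Term = Union[int, str]  # a number, a variable "y1"/"y2", or a parameter "z1", ...

@dataclass(frozen=True)
class Prop:
    name: str                      # atomic proposition p

@dataclass(frozen=True)
class Not:
    sub: "Formula"

@dataclass(frozen=True)
class And:
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class Next:
    sub: "Formula"                 # path formula X phi

@dataclass(frozen=True)
class Coop:
    t1: Term                       # <<t1, t2>> chi
    t2: Term
    obj: "Formula"

@dataclass(frozen=True)
class Exists:
    var: str                       # exists y . phi, with y in {"y1", "y2"}
    sub: "Formula"

Formula = Union[Prop, Not, And, Next, Coop, Exists]

def free_polarities(phi, y, pol=1, bound=frozenset()):
    """Yield +1 / -1 for each free occurrence of variable y in phi."""
    if isinstance(phi, Coop):
        for t in (phi.t1, phi.t2):
            if t == y and y not in bound:
                yield pol
        yield from free_polarities(phi.obj, y, pol, bound)
    elif isinstance(phi, Not):
        yield from free_polarities(phi.sub, y, -pol, bound)
    elif isinstance(phi, And):
        yield from free_polarities(phi.left, y, pol, bound)
        yield from free_polarities(phi.right, y, pol, bound)
    elif isinstance(phi, Next):
        yield from free_polarities(phi.sub, y, pol, bound)
    elif isinstance(phi, Exists):
        yield from free_polarities(phi.sub, y, pol, bound | {phi.var})

def may_quantify(phi, y):
    """Syntactic constraint of Definition 11: y may be quantified over phi
    only if all free occurrences of y in phi are positive."""
    return all(p == 1 for p in free_polarities(phi, y))
```

For instance, `may_quantify(Coop("y1", 5, Next(Prop("p"))), "y1")` holds, while wrapping that formula in a single negation violates the constraint.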

Remark 2

Some remarks on the formulae in \({\mathcal {L}}_{\textsc {hdmas}}\) are in order:

  1. 1.

    Note that \(y_1\) can only occur in the first position of \(\langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\,\) and \(y_2\) can only occur in the second position. However, the same parameter z may occur in both positions and this is one reason to allow the use of parameters, as the model checking algorithm will treat them uniformly.

  2. 2.

The restriction of quantification to positive free occurrences of variables is imposed for technical reasons. By the duality of \(\forall\) and \(\exists\), that restriction can readily be relaxed to the requirement that all free occurrences of the quantified variable be of the same polarity (all positive, or all negative). Further relaxation, allowing both positive and negative occurrences under some restrictions, is possible, but it would further complicate the syntax and the model checking algorithm, without essentially contributing to the useful expressiveness of the language. Indeed, one can argue that if a formula is to make a meaningful claim about the strategic abilities of a coalition of controllable agents, quantified over the number of these agents, then it is natural to assume that the controllable coalition appears only in a positive context in that claim.

  3. 3.

    Some additional useful syntactic restrictions can be imposed, which (as it will be shown in the next section) do not essentially restrict the expressiveness of the language. They lead to the notion of ‘normal form’, to be introduced shortly.

Hereafter, by \({\mathcal {L}}_{\textsc {hdmas}}\)-formulae we will mean, unless otherwise specified, state formulae of \({\mathcal {L}}_{\textsc {hdmas}}\), whereas we will call the path formulae in \({\mathcal {L}}_{\textsc {hdmas}}\) temporal objectives. In particular, for any \({\mathcal {L}}_{\textsc {hdmas}}\)-formula \(\phi\) of the type \(\langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\, \chi\), the path subformula \(\chi\) is called the temporal objective of \(\phi\).

Some examples of \({\mathcal {L}}_{\textsc {hdmas}}\) formulae:

  • with reference to the fortress example:

    • \(\langle \!\langle {C, N_1}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \langle \!\langle {C, N_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \langle \!\langle {C, N_3}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \lnot captured\), with \(N_1< N_2 < N_3\) and \(N_i \in \mathbb {N}\) for \(i \in \{{1, 2, 3}\}\), says that there is a strategy for \(C \in \mathbb {N}\) defenders to hold the fortress for three days against an increasing number of attackers.

    • \(\exists y_1 \langle \!\langle {y_1, N}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \lnot captured\) expresses that there is a number \(y_1\) of defenders that have a strategy to hold the fortress forever against N many invaders.

    • \(\forall y_2 \langle \!\langle {C, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \lnot captured\) expresses that, for any number \(y_2\) of invaders, there is a strategy for C defenders to hold the fortress forever against \(y_2\) invaders.

    • \(\forall y_2 \exists y_1 \langle \!\langle { y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \lnot captured\) expresses that for any number (\(y_2\)) of invaders there is a number (\(y_1\)) of defenders who have a joint strategy to hold the fortress forever.

  • lastly, an abstract example with nesting of strategic operators and quantifiers:

    \(\langle \!\langle {z_2, z_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, p \vee \exists y_1 (\langle \!\langle {y_1, z_1}\rangle \!\rangle _{_{\! }}\, \mathsf {F}\, \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \lnot p \wedge \lnot \forall y_2 \langle \!\langle {z_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \lnot \langle \!\langle {y_1, z_2}\rangle \!\rangle _{_{\! }}\, p \, \mathsf {U} \, q)\),

    for \(z_1,z_2 \in Z\).

The semantics of \({\mathcal {L}}_{\textsc {hdmas}}\) is based on the standard, positional strategy semantics of ATL (cf [1] or [6]), applied in hdmas models, but uses abstract joint actions and strategy profiles, rather than concrete ones. In order to evaluate formulae that contain free variables and parameters, we use a version of FOL assignment, here defined as a function \(\theta : T\rightarrow \mathbb {N}\), where \(\theta (i)=i\) for \(i \in \mathbb {N}\).

Definition 12

Let \({\mathcal {M}}\) be a hdmas, \(s\) be a state and \(\theta\) an assignment in it. The satisfaction relation \(\models\) is inductively defined on the structure of \({\mathcal {L}}_{\textsc {hdmas}}\)-formulae as follows:

  1. 1.

    \({\mathcal {M}}, s , \theta \models \top\);

  2. 2.

    \({\mathcal {M}}, s , \theta \models p\) iff \(p \in \lambda ( s )\);

  3. 3.

    \(\wedge\) and \(\lnot\) have the standard semantics;

  4. 4.

    \({\mathcal {M}}, s , \theta \models \langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\,{\chi }\) iff there exists an abstract strategy \(\rho _{C}\) for a coalition of \(C = \theta (t_1)\) agents such that for every play \(\pi\) in the outcome set \(out ( s , \rho _{C}, N)\) against \(N=\theta (t_2)\) uncontrollable agents the following hold:

    1. (a)

      if \(\chi = \mathsf {X}\, \varphi\) then \({\mathcal {M}}, \pi [1], \theta \models \varphi\);

    2. (b)

      if \(\chi = \mathsf {G}\, \varphi\) then \({\mathcal {M}}, \pi [i], \theta \models \varphi\) for every \(i \in \mathbb {N}\);

    3. (c)

      if \(\chi = \varphi _1 \, \mathsf {U} \, \varphi _2\) then \({\mathcal {M}}, \pi [i], \theta \models \varphi _2\) for some \(i \ge 0\) and \({\mathcal {M}}, \pi [j], \theta \models \varphi _1\) for all \(0 \le j < i\);

  5. 5.

    \({\mathcal {M}}, s , \theta \models \forall y\varphi\) iff \({\mathcal {M}}, s , \theta [y := m] \models \varphi\) for every \(m \in \mathbb {N}\), where the assignment \(\theta [y := m]\) assigns m to \(y\) and agrees with \(\theta\) on every other argument.

  6. 6.

    Likewise for \({\mathcal {M}}, s , \theta \models \exists y\varphi\).
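Clause 4(a) can be turned directly into a check over abstract joint actions on a finite model. The following self-contained Python sketch is our own illustration, not the paper's algorithm; the transition function `delta` and the set `sat_phi` of states satisfying \(\varphi\) are assumed given:

```python
def distributions(n, actions):
    """All action distributions of n indistinguishable agents over `actions`."""
    if len(actions) == 1:
        yield {actions[0]: n}
        return
    for k in range(n + 1):
        for rest in distributions(n - k, actions[1:]):
            yield {actions[0]: k, **rest}

def holds_coop_next(s, C, N, actions, delta, sat_phi):
    """M, s |= <<C, N>> X phi iff some abstract joint action of the C
    controllable agents guarantees a successor in sat_phi against every
    abstract joint action of the N uncontrollable agents."""
    return any(
        all(delta(s, {a: act_C[a] + act_N[a] for a in actions}) in sat_phi
            for act_N in distributions(N, actions))
        for act_C in distributions(C, actions)
    )

# Hypothetical guard-style transition: the controllable agents reach "win"
# iff action "a" strictly outnumbers the idle-like action "e".
def delta(state, counts):
    return "win" if counts["a"] > counts["e"] else "lose"
```

With this `delta`, three controllable agents can enforce "win" at the next step against one uncontrollable agent, while one against one cannot. The \(\mathsf {G}\) and \(\mathsf {U}\) objectives are then handled, as in ATL model checking, by fixpoint iterations of this one-step check.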

The notions of validity and (logical) equivalence in \({\mathcal {L}}_{\textsc {hdmas}}\) are defined as expected, and we will use the standard notation for them, viz. \(\models \varphi\) for validity and \(\varphi _1 \equiv \varphi _2\) for equivalence. We also say that two \({\mathcal {L}}_{\textsc {hdmas}}\)-formulae \(\varphi _1\) and \(\varphi _2\) are equivalent in the finite, denoted \(\varphi _1 \equiv _{\mathsf {fin}}\varphi _2\), if \({\mathcal {M}}, s , \theta \models \varphi _1\) iff \({\mathcal {M}}, s , \theta \models \varphi _2\) for every finite hdmas model \({\mathcal {M}}\), state \(s\), and assignment \(\theta\) in \({\mathcal {M}}\).

Remark 3

Note the following:

  1. 1.

    Defining the semantics in terms of abstract joint actions and strategies in the truth definitions of the strategic operators, rather than concrete ones, is justified by Lemmas 3 and 4 which imply that the ‘concrete’ and the ‘abstract’ semantics are equivalent.

  2. 2.

    Just like in FOL, the truth of any \({\mathcal {L}}_{\textsc {hdmas}}\)-formula \(\varphi\) only depends on the assignment of values to the parameters that occur in \(\varphi\) and to the variables that occur free in \(\varphi\). In particular, it does not depend at all on the assignment for closed formulae (containing no parameters and free variables). In such cases we simply write \({\mathcal {M}}, s \models \varphi\).

  3. 3.

    Again, just like in FOL, if \(y\) has no free occurrences in \(\varphi\), then \(\forall y\varphi \equiv \exists y\varphi \equiv \varphi\). Thus, in order to avoid such vacuous quantification, whenever it occurs we can assume that the formula is simplified automatically according to these equivalences.

Example 3

Consider the hdmas \({\mathcal {M}}\) in Example 2.

  1. 1.

    The closed formula \(\varphi = \langle \!\langle {7, 5}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, p\) is satisfied in state \(s _1\) of \({\mathcal {M}}\). Indeed, any abstract joint strategy \(\rho _{7}\) that prescribes \(\varepsilon\) to 3 of the controllable agents (\(\rho _{7} ( s _1)(\varepsilon )=3\)) and \(act _{3}\) to 4 of them (\(\rho _{7} ( s _1)( act _{3})=4\)) guarantees that guard \(g _2\) is satisfied, enforcing transition from \(s _1\) to \(s _3\).

  2. 2.

    \({\mathcal {M}}, s _1 \models \lnot \exists y_1 \langle \!\langle {y_1, 11}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, p\). Indeed, for any value of \(y_1\), the abstract joint action for the uncontrollable agents that prescribes that all of them perform \(act _3\) falsifies both \(g _1\) and \(g _2\), thus forcing a loop at \(s _1\), where p is false.

  3. 3.

    \({\mathcal {M}}, s _4 \models \langle \!\langle {7, 4}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, (\forall y_2 \exists y_1 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, p)\), as we show in Sect. 4.

3.2 Normal form and monotonicity properties

This is a technically important section, where we define the fragment \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\) of normal form formulae of \({\mathcal {L}}_{\textsc {hdmas}}\). The normal form imposes essential syntactic restrictions and therefore reduces the expressiveness of the language. However, the key technical result obtained here is that every formula in \({\mathcal {L}}_{\textsc {hdmas}}\) is equivalent on finite models to one in \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\). The importance of that result will be discussed further.

Definition 13

An \({\mathcal {L}}_{\textsc {hdmas}}\)-formula \(\psi\) is in normal form if:

  1. (NF1)

    There are no occurrences of \(\forall y_1\) or \(\exists y_2\) in \(\psi\).

  2. (NF2)

    Every subformula \(\langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\, \chi\) of \(\psi\) in which either \(t_1 = y_1\) or \(t_2 = y_2\) (but not both) and that variable occurrence is bound in \(\psi\) is immediately preceded by \(\exists y_1\) or \(\forall y_2\), respectively.

  3. (NF3)

    Every subformula \(\langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \chi\), where both variable occurrences are bound in \(\psi\), is immediately preceded either by \(\forall y_2 \exists y_1\) or \(\exists y_1 \forall y_2\).

Of the example formulae given after Definition 11, the first two are in normal form, while the last one is not.

We denote by \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\) the fragment of \({\mathcal {L}}_{\textsc {hdmas}}\) consisting of all formulae in normal form. We can give a more explicit definition of the formulae of \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\), by modifying the recursive definition of state formulae of \({\mathcal {L}}_{\textsc {hdmas}}\), where the clauses \(\forall y \varphi\) and \(\exists y \varphi\) are replaced with the following, where \(\chi\) is a temporal objective:

$$\begin{aligned} \begin{aligned}&\exists y_1 \langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\,\chi \mid \forall y_2 \exists y_1 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\,\chi \mid \\&\forall y_2 \langle \!\langle {t_1, y_2}\rangle \!\rangle _{_{\! }}\,\chi \mid \exists y_1 \forall y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\,\chi \end{aligned} \end{aligned}$$
(1)

The same syntactic constraints as before apply. In addition, in each case above no variable quantified in the prefix of the formula may occur free in \(\chi\).

The rest of the section is devoted to proving that every formula in \({\mathcal {L}}_{\textsc {hdmas}}\) is logically equivalent in the finite to one in \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\). That is of crucial importance, as our model checking algorithm works only on \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\) formulae. Indeed, the fact that quantification in formulae in normal form does not span across multiple temporal objectives enables us to obtain fixpoint characterizations for formulae of the types listed in (1) above, presented at the end of this section in Theorem 3. That, in turn, allows us to retain the basic structure of the recursive model checking algorithm for ATL (cf [1] or [6]).

A first important observation is that the semantics of the strategic operators in \({\mathcal {L}}_{\textsc {hdmas}}\) is monotonic with respect to the number of controllable and uncontrollable agents, in a sense formalized in the following lemma.

Hereafter, for a given formula \(\varphi\), term t and \(k\in \mathbb {N}\), we denote by \(\varphi [k/t]\) the result of the uniform substitution of all free occurrences of t in \(\varphi\) by k.

Lemma 5

For every \({\mathcal {L}}_{\textsc {hdmas}}\)-formula \(\varphi\) and term t, the following monotonicity properties hold.

(C-mon): Suppose \(C,C' \in \mathbb {N}\) are such that \(C'>C\). Then:

(C-mon)\(^+\): If all free occurrences of t are positive and occur only in the first position of strategic operators in \(\varphi\), then \(\models \varphi [C/t] \rightarrow \varphi [C'/t]\).

(C-mon)\(^-\): If all free occurrences of t are negative and occur only in the first position of strategic operators in \(\varphi\), then \(\models \varphi [C'/t] \rightarrow \varphi [C/t]\).

(N-mon): Suppose \(N,N' \in \mathbb {N}\) are such that \(N' < N\). Then:

(N-mon)\(^+\): If all free occurrences of t are positive and occur only in the second position of strategic operators in \(\varphi\), then \(\models \varphi [N/t] \rightarrow \varphi [N'/t]\).

(N-mon)\(^-\): If all free occurrences of t are negative and occur only in the second position of strategic operators in \(\varphi\), then \(\models \varphi [N'/t] \rightarrow \varphi [N/t]\).

Proof

(C-mon): Both claims are proved by simultaneous induction on the structure of \(\varphi\). We present the proof for (C-mon)\(^+\); the claim of (C-mon)\(^-\) is only needed in the case \(\varphi = \lnot \psi\), which is proved by applying the inductive hypothesis of (C-mon)\(^-\) to \(\psi\) and using contraposition.

The inductive cases where the main connective of \(\varphi\) is \(\wedge , \vee , \forall , \exists\) are easily proved by using the inductive hypothesis and the monotonicity of each of these logical connectives.

The only more essential inductive case is \(\varphi = \langle \!\langle {t, t_2}\rangle \!\rangle _{_{\! }}\, \chi\), where the inductive hypothesis is that the claim of (C-mon)\(^+\) holds for the main state subformulae of \(\chi\). Note that the semantics of the strategic and temporal operators is argument-monotone, in the sense that if \(\models \psi \rightarrow \psi '\) then \(\models \langle \!\langle {t, t_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \psi \rightarrow \langle \!\langle {t, t_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \psi '\) and \(\models \langle \!\langle {t, t_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi \rightarrow \langle \!\langle {t, t_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi '\), and likewise for Until. By using that and the inductive hypothesis, we obtain that \(\models \langle \!\langle {C', t_2}\rangle \!\rangle _{_{\! }}\, \chi [C/t] \rightarrow \langle \!\langle {C', t_2}\rangle \!\rangle _{_{\! }}\, \chi [C'/t]\). Thus, to establish \(\models \langle \!\langle {C, t_2}\rangle \!\rangle _{_{\! }}\, \chi [C/t] \rightarrow \langle \!\langle {C', t_2}\rangle \!\rangle _{_{\! }}\, \chi [C'/t]\), it remains to show that \(\models \langle \!\langle {C, t_2}\rangle \!\rangle _{_{\! }}\, \chi [C/t] \rightarrow \langle \!\langle {C', t_2}\rangle \!\rangle _{_{\! }}\, \chi [C/t]\). Let \({\mathcal {M}}, s , \theta \models \langle \!\langle {C, t_2}\rangle \!\rangle _{_{\! }}\, \chi [C/t]\) and let \(\rho _{C}\) be an abstract strategy for C controllable agents such that every play \(\pi\) in the outcome set \(out ( s , \rho _{C}, \theta (t_2))\) against \(\theta (t_2)\) uncontrollable agents satisfies the temporal objective \(\chi [C/t]\). Then, since \(C' > C\), the strategy \(\rho _{C}\) can be extended to a strategy \(\rho _{C'}\) whereby the additional \(C'-C\) agents always perform the idle action \(\varepsilon\). Clearly, \(\rho _{C'}\) ensures that \({\mathcal {M}}, s , \theta \models \langle \!\langle {C', t_2}\rangle \!\rangle _{_{\! }}\, \chi [C/t]\).

(N-mon): The proof is analogous to the one for (C-mon), so we only treat the inductive case \(\varphi = \langle \!\langle {t_1, t}\rangle \!\rangle _{_{\! }}\, \chi\) for the claim (N-mon)\(^+\). Similarly to the case of (C-mon)\(^+\), it boils down to proving the validity \(\models \langle \!\langle {t_1, N}\rangle \!\rangle _{_{\! }}\, \chi [N/t] \rightarrow \langle \!\langle {t_1,N'}\rangle \!\rangle _{_{\! }}\, \chi [N/t]\). Let \({\mathcal {M}}, s , \theta \models \langle \!\langle {t_1, N}\rangle \!\rangle _{_{\! }}\, \chi [N/t]\) and let \(\rho _{C}\) be an abstract strategy for \(C = \theta (t_1)\) controllable agents such that every play \(\pi\) in the outcome set \(out ( s , \rho _{C}, N)\) against N uncontrollable agents satisfies the temporal objective \(\chi [N/t]\). Then the same strategy ensures \({\mathcal {M}}, s , \theta \models \langle \!\langle {t_1,N'}\rangle \!\rangle _{_{\! }}\, \chi [N/t]\) for every \(N' < N\), since every joint action of the \(N'\) agents can be lifted to a joint action of N agents with the same outcome, by letting the remaining \(N-N'\) agents always perform the idle action \(\varepsilon\). \(\square\)
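The idle-padding construction used in both directions of the proof can be sketched in Python (our own illustration; the name of the idle action is an assumption):

```python
def pad_with_idle(rho, extra, idle="eps"):
    """Extend an abstract positional strategy for C agents to one for
    C + extra agents: the extra agents always perform the idle action,
    so the guaranteed outcomes are preserved (cf. (C-mon)+)."""
    def rho_padded(state):
        act = dict(rho(state))
        act[idle] = act.get(idle, 0) + extra
        return act
    return rho_padded

# A positional strategy for 2 agents, padded to a strategy for 5 agents:
rho2 = lambda state: {"push": 2, "eps": 0}
rho5 = pad_with_idle(rho2, 3)
assert rho5("s0") == {"push": 2, "eps": 3}
```

The same padding, applied to a single joint action of the uncontrollable agents, gives the lifting used in the proof of (N-mon)\(^+\).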

A key consequence of the monotonicity properties is that they allow the elimination of certain quantifier patterns, in the cases listed in the following lemma.

Lemma 6

For every term t and temporal objective \(\chi\) in \({\mathcal {L}}_{\textsc {hdmas}}\) , the following hold.

  1. 1.

    \(\forall y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \chi \equiv \langle \!\langle {0, t}\rangle \!\rangle _{_{\! }}\, \chi [0/y_1]\);

  2. 2.

    \(\exists y_2 \langle \!\langle {t, y_2}\rangle \!\rangle _{_{\! }}\, \chi \equiv \langle \!\langle {t, 0}\rangle \!\rangle _{_{\! }}\, \chi [0/y_2]\);

  3. 3.

    \(\forall y_1 \exists y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \chi \equiv \langle \!\langle {0, 0}\rangle \!\rangle _{_{\! }}\, \chi [0/y_1,0/y_2]\);

  4. 4.

    \(\exists y_2 \forall y_1 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \chi \equiv \langle \!\langle {0, 0}\rangle \!\rangle _{_{\! }}\, \chi [0/y_1,0/y_2]\);

  5. 5.

    \(\forall y_2 \forall y_1 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \chi \equiv \forall y_2 \langle \!\langle {0, y_2}\rangle \!\rangle _{_{\! }}\, \chi [0/y_1]\);

  6. 6.

    \(\forall y_1 \forall y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \chi \equiv \forall y_2 \langle \!\langle {0, y_2}\rangle \!\rangle _{_{\! }}\, \chi [0/y_1]\);

  7. 7.

    \(\exists y_1 \exists y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \chi \equiv \exists y_1 \langle \!\langle {y_1, 0}\rangle \!\rangle _{_{\! }}\, \chi [0/y_2]\);

  8. 8.

    \(\exists y_2 \exists y_1 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \chi \equiv \exists y_1 \langle \!\langle {y_1, 0}\rangle \!\rangle _{_{\! }}\, \chi [0/y_2]\).

Proof

The logically non-trivial implications of claims 1–6 follow immediately from the polarity constraint in the definition of formulae and Lemma 5. Claims 7 and 8 follow respectively from claims 5 and 6, by commuting the quantifiers. \(\square\)

Lemma 6 shows that the only non-trivial cases of quantification over formulae of the kind \(\langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\, \chi\) are those allowed in normal forms, listed in (1) (after Definition 13). We will make use of that to re-define the syntax of \({\mathcal {L}}_{\textsc {hdmas}}\) so as to better suit our further technical work. First, we define an admissible quantifier prefix \({\mathcal {Q}}\) to be a string of the form \(\mathsf {Q} y_i\) or \(\mathsf {Q} y_i\mathsf {Q} 'y_j\), where \(\mathsf {Q},\mathsf {Q} ' \in \{{\exists , \forall }\}\) and \(i,j \in \{{1, 2}\}\), \(i \ne j\). Now, we re-define the set of state formulae of \({\mathcal {L}}_{\textsc {hdmas}}\) to be generated by the following modified grammar:

$$\begin{aligned} \varphi {:}{:}= {} \top \mid p \mid \lnot \varphi \mid (\varphi \wedge \varphi ) \mid (\varphi \vee \varphi ) \mid \langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\,{ \chi } \mid {\mathcal {Q}} \varphi \end{aligned}$$

The same positive polarity requirements as before for applying the quantifier prefixes are imposed. Clearly, this grammar is equivalent to the original grammar, i.e. it generates the same set of formulae. In the rest of the paper we adopt the new grammar above.

Next, we define recursively a partial quantifier elimination function \(\textsc {pqe}\) on path and state formulae \(\xi \in {\mathcal {L}}_{\textsc {hdmas}}\) which produces formulae \(\textsc {pqe} (\xi )\) where all occurrences of subformulae in the left-hand sides of the equivalences in Lemma 6 are successively replaced with the corresponding right-hand sides.

[Algorithm figure a: the recursive definition of \(\textsc {pqe}\), not reproduced here.]
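As a concrete illustration of the rewriting performed by \(\textsc {pqe}\), the following Python sketch applies cases 1 and 2 of Lemma 6 to a tuple-encoded formula. The encoding, and the simplified handling of variable binding, are our own assumptions, not the paper's listing:

```python
def subst(phi, y, k):
    """Replace occurrences of variable y by the number k (this sketch does
    not track re-binding of y by nested quantifiers)."""
    if phi == y:
        return k
    if not isinstance(phi, tuple):
        return phi
    return tuple(subst(part, y, k) for part in phi)

def pqe(phi):
    """Partial quantifier elimination, sketched for two cases of Lemma 6.
    Assumed encoding: ("coop", t1, t2, chi) for <<t1,t2>> chi,
    ("forall"/"exists", y, phi) for quantifiers, other connectives as
    tuples, and atoms as strings."""
    if not isinstance(phi, tuple):
        return phi
    op = phi[0]
    # Lemma 6, case 1: forall y1 <<y1, t>> chi  ==  <<0, t>> chi[0/y1]
    if op == "forall" and phi[1] == "y1" \
            and phi[2][0] == "coop" and phi[2][1] == "y1":
        _, _, t2, chi = phi[2]
        return ("coop", 0, t2, subst(pqe(chi), "y1", 0))
    # Lemma 6, case 2: exists y2 <<t, y2>> chi  ==  <<t, 0>> chi[0/y2]
    if op == "exists" and phi[1] == "y2" \
            and phi[2][0] == "coop" and phi[2][2] == "y2":
        _, t1, _, chi = phi[2]
        return ("coop", t1, 0, subst(pqe(chi), "y2", 0))
    # Otherwise, recurse into the subformulae.
    return (op,) + tuple(pqe(part) for part in phi[1:])
```

For instance, `pqe(("forall", "y1", ("coop", "y1", "t", ("X", "p"))))` rewrites to `("coop", 0, "t", ("X", "p"))`.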

Lemma 7

For every formula \(\varphi\) in \({\mathcal {L}}_{\textsc {hdmas}}\), \(\textsc {pqe} (\varphi ) \equiv \varphi\).

Proof

By induction on the structure of \(\varphi\) (using the modified grammar), following the recursive definition of \(\textsc {pqe}\). The only non-trivial cases are those in lines 12–19 and they use the equivalences in Lemma 6. For instance, let \(\varphi = \forall y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \chi\). By definition, \(\textsc {pqe} (\forall y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \chi ) = \langle \!\langle {0, t}\rangle \!\rangle _{_{\! }}\, \textsc {pqe} (\chi )[0/y_1]\), and by the inductive hypothesis \(\textsc {pqe} (\chi ) \equiv \chi\), thus we get \(\langle \!\langle {0, t}\rangle \!\rangle _{_{\! }}\, \textsc {pqe} (\chi )[0/y_1] \equiv \langle \!\langle {0, t}\rangle \!\rangle _{_{\! }}\, \chi [0/y_1]\). The claim now follows from case 1 of Lemma 6. All other cases are proved analogously. \(\square\)

Lemma 8

For every formula \(\varphi\) in \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\), \(\textsc {pqe} (\varphi ) = \varphi\).

Proof

Again, by induction on the structure of \(\varphi\) in normal form, following the recursive definition of \(\textsc {pqe}\). Note that the only cases that apply to formulae in normal form are those in lines 5–9, 20 and 22–24, which do not modify \(\varphi\). \(\square\)

Note that after applying \(\textsc {pqe}\), the resulting formula satisfies condition (NF1) in the definition of normal form.
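On the same kind of tuple encoding as above (again a hypothetical illustration of ours), condition (NF1) can be checked by a one-pass recursion:

```python
def satisfies_nf1(phi):
    """Condition (NF1): the formula contains no occurrence of
    'forall y1' or 'exists y2'. Assumed encoding: connectives as tuples
    ("op", arg1, ...), quantifiers as ("forall"/"exists", y, sub),
    atoms and terms as strings or numbers."""
    if not isinstance(phi, tuple):
        return True
    if (phi[0], phi[1]) in {("forall", "y1"), ("exists", "y2")}:
        return False
    return all(satisfies_nf1(part) for part in phi[1:])
```

For instance, a formula starting with \(\forall y_1\) is rejected, while \(\exists y_1 \langle\!\langle y_1, 3\rangle\!\rangle \mathsf{X}\, p\) passes the check.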

3.3 Transformation to normal forms and fixpoint equivalences

Next, we show that, up to equivalence in the finite, quantification can always be distributed over conjunctions and disjunctions and pushed inside subformulae, so that every bound variable is immediately preceded by a quantifier binding it. This will be used further on for transforming \({\mathcal {L}}_{\textsc {hdmas}}\) formulae to normal form.

We define by recursion a 2-argument function \(\textsc {push}\), applied to pairs consisting of an admissible quantifier prefix \({\mathcal {Q}}\) and a formula \(\varphi\) in \({\mathcal {L}}_{\textsc {hdmas}}\), such that \(\textsc {push} ({\mathcal {Q}}, \varphi )\) is a formula in \({\mathcal {L}}_{\textsc {hdmas}}\) which satisfies conditions (NF2) and (NF3) of the definition of normal form, and which we will prove to be equivalent to \({\mathcal {Q}} \varphi\). For the purpose of defining \(\textsc {push}\) as described, we will need to define it on a wider scope, viz. applied to any state or path formula \(\xi\), even though \({\mathcal {Q}} \xi\) may not be a legitimate formula of \({\mathcal {L}}_{\textsc {hdmas}}\).

In what follows, we denote by \(\overline{{\mathcal {Q}}}\) the swap of the quantifiers in the prefix \({\mathcal {Q}}\) with their duals, i.e. \(\exists\) with \(\forall\) and vice versa.

figure b

It is quite easy to see that \(\textsc {push} ({\mathcal {Q}}, \varphi )\) is a formula in \({\mathcal {L}}_{\textsc {hdmas}}\) whenever \({\mathcal {Q}} \varphi\) is a formula in \({\mathcal {L}}_{\textsc {hdmas}}\). Intuitively, the function \(\textsc {push}\) recursively pushes the quantifier prefix \({\mathcal {Q}}\) inside the formula, either by swapping it with its dual when a negation occurs, or by distributing it over the other boolean connectives, until it vanishes. When a strategic operator is reached, possibly with arguments among the variables quantified by \({\mathcal {Q}}\), then \({\mathcal {Q}}\) is placed in front of that strategic operator and is also distributed over its temporal objective, while any vacuous quantification arising in the process is removed. Lastly, when the formula begins with another quantifier \(\mathsf {Q} '' y_k\), it is prefixed by \({\mathcal {Q}}\), the resulting vacuous quantification, if any, is removed, and the resulting prefix is pushed inside.
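The distribution-and-swap discipline just described can be sketched as follows. This is a rough toy (single-quantifier prefix, tuple ASTs of our own devising; the actual function of the figure above also handles two-quantifier prefixes, temporal objectives and removal of vacuous quantification):

```python
# Rough sketch of the quantifier-pushing discipline (single quantifier only;
# the paper's push also distributes into temporal objectives and removes
# vacuous quantification arising along the way).

DUAL = {"forall": "exists", "exists": "forall"}

def push(q, y, phi):
    """Push quantifier (q, y) inward over boolean connectives."""
    if not isinstance(phi, tuple):
        return phi                          # atom: quantification is vacuous
    if phi[0] == "not":                     # negation swaps q with its dual
        return ("not", push(DUAL[q], y, phi[1]))
    if phi[0] in ("and", "or"):             # distribute over both arguments
        return (phi[0], push(q, y, phi[1]), push(q, y, phi[2]))
    return (q, y, phi)                      # e.g. a strategic subformula

# forall y1 not( <<y1,5>> G p  and  q )  ~~>  not( exists y1 <<y1,5>> G p  and  q )
coop = ("coop", "y1", 5, ("G", "p"))
print(push("forall", "y1", ("not", ("and", coop, "q"))))
```

Note that, as in the proof of Theorem 1 below, distributing \(\exists\) over \(\wedge\) and \(\forall\) over \(\vee\) is only sound in the finite, so this syntactic transformation preserves equivalence in the finite, not validity outright.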

Example 4

Let \(\varphi \in {\mathcal {L}}_{\textsc {hdmas}}\) be

$$\begin{aligned} \begin{array}{c} \forall y_1 \big ( \langle \!\langle {y_1, 5}\rangle \!\rangle _{_{\! }}\, (\forall y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, p_1) \; \, \mathsf {U} \, \; (\exists y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {F}\, p_2 ) \; \vee {} \\ \exists y_1 (\forall y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {F}\, p_3 \; \wedge \; \lnot \forall y_2 \langle \!\langle {3, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, p_1 ) \big ) \end{array} \end{aligned}$$

where \(p_1 , p_2, p_3 \in \varPhi\). Then \(\textsc {push} (\forall y_1, \varphi ) =\)

$$\begin{aligned} \begin{array}{c} \forall y_1 \langle \!\langle {y_1, 5}\rangle \!\rangle _{_{\! }}\, \big ( (\forall y_1 \forall y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, p_1) \;\; \, \mathsf {U} \, \;\; (\forall y_1 \exists y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {F}\, p_2 ) \big ) \vee {} \\ (\exists y_1 \forall y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {F}\, p_3 \;\; \wedge \;\; \lnot \forall y_2 \langle \!\langle {3, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, p_1) \end{array} \end{aligned}$$

Theorem 1

Let \({\mathcal {Q}}\) be an admissible quantifier prefix and let \({\mathcal {Q}} \varphi\) be a formula of \({\mathcal {L}}_{\textsc {hdmas}}\).

Then \(\textsc {push} ({\mathcal {Q}},\varphi )\) is logically equivalent in the finite to \({\mathcal {Q}}\varphi\).

Proof

We prove the claim by induction on the nesting depth \(\mathbf {nd} (\varphi )\) of strategic operators in the state formula \(\varphi\), defined as expected.

When \(\mathbf {nd} (\varphi ) = 0\) the claim is straightforward because any quantification over \(\varphi\) is vacuous, hence \({\mathcal {Q}}\varphi \equiv \varphi\) and \(\textsc {push} ({\mathcal {Q}},\varphi ) = \varphi\). Suppose now that \(\mathbf {nd} (\varphi ) > 0\) and the claim holds for all state formulae of \({\mathcal {L}}_{\textsc {hdmas}}\) with lower nesting depth. We will do a nested induction on the structure of \(\varphi\), following the recursive definition of \(\textsc {push}\).

  1. 1.

\(\varphi = \top \mid p\). This case does not apply here, but it is in any case trivial for every \({\mathcal {Q}}\).

  2. 2.

    \(\varphi = \lnot \psi\) follows from FOL and the inductive hypothesis (IH) for \(\psi\).

  3. 3.

    \(\varphi = \psi _1 \wedge \psi _2\).

    1. (a)

      When \({\mathcal {Q}}=\forall y_i\), the claim follows immediately from the valid equivalence (proved just like in FOL) \(\forall y_i (\psi _1 \wedge \psi _2) \equiv \forall y_i \psi _1 \wedge \forall y_i \psi _2\) and the IH for each of \(\psi _1\) and \(\psi _2\).

    2. (b)

      When \({\mathcal {Q}} =\exists y_i\), it suffices to prove that \(\exists y_i (\psi _1 \wedge \psi _2) \equiv _{\mathsf {fin}}\exists y_i \psi _1 \wedge \exists y_i \psi _2\), and then use the IH for each of \(\psi _1\) and \(\psi _2\). The implication from left to right is by the validity of the implication \(\exists y_i(\psi _1 \wedge \psi _2) \rightarrow \exists y_i \psi _1 \wedge \exists y_i \psi _2\). To prove the converse implication, first note that, since \(\exists y_i (\psi _1 \wedge \psi _2)\) is a formula of \({\mathcal {L}}_{\textsc {hdmas}}\), all free occurrences of \(y_i\) in \(\psi _1\) and in \(\psi _2\) must be positive. Now, suppose first that \(i=1\) and let \({\mathcal {M}}, s , \theta \models \exists y_1 \psi _1 \wedge \exists y_1 \psi _2\) for some finite \({\mathcal {M}}\). Then, \({\mathcal {M}}, s , \theta \models \psi _1[C_1/ y_1]\) and \({\mathcal {M}}, s , \theta \models \psi _2[C_2/ y_1]\) for some \(C_1,C_2 \in \mathbb {N}\). Let \(C = \max (C_1,C_2)\). By the monotonicity property (C-mon)\(^+\) from Lemma 5, we obtain that \({\mathcal {M}}, s , \theta \models \psi _1[C/ y_1]\) and \({\mathcal {M}}, s , \theta \models \psi _2[C/ y_1]\). Therefore, \({\mathcal {M}}, s , \theta \models (\psi _1 \wedge \psi _2)[C/ y_1]\), hence \({\mathcal {M}}, s , \theta \models \exists y_1 (\psi _1 \wedge \psi _2)\). This proves the validity of the converse implication \((\exists y_1 \psi _1 \wedge \exists y_1 \psi _2) \rightarrow \exists y_1 (\psi _1 \wedge \psi _2)\). The proof of the case where \(i=2\) is analogous, using the monotonicity property (N-mon)\(^+\) from Lemma 5.

    3. (c)

      Lastly, the case when \({\mathcal {Q}} = \mathsf {Q} y_i\mathsf {Q} 'y_j\) is readily reducible to the previous 2 cases, by distributing first \(\mathsf {Q} 'y_j\) and then \(\mathsf {Q} y_i\).

  4. 4.

    \(\varphi = \psi _1 \vee \psi _2\). This case is dually analogous to the previous one.

    1. (a)

      When \({\mathcal {Q}} =\exists y_i\), the claim follows immediately from the valid equivalence \(\exists y_i(\psi _1 \vee \psi _2) \equiv \exists y_i \psi _1 \vee \exists y_i \psi _2\) and the IH for each of \(\psi _1\) and \(\psi _2\).

    2. (b)

When \({\mathcal {Q}}=\forall y_i\), it suffices to prove that \(\forall y_i (\psi _1 \vee \psi _2) \equiv _{\mathsf {fin}}\forall y_i \psi _1 \vee \forall y_i \psi _2\), and then use the IH for each of \(\psi _1\) and \(\psi _2\). The implication from right to left \((\forall y_i \psi _1 \vee \forall y_i \psi _2) \rightarrow \forall y_i(\psi _1 \vee \psi _2)\) is a validity, proved just like in FOL. For the converse implication, suppose first that \(i=1\) and let \({\mathcal {M}}, s , \theta \models \forall y_1 (\psi _1 \vee \psi _2)\) for some finite \({\mathcal {M}}\). Then, \({\mathcal {M}}, s , \theta \models (\psi _1 \vee \psi _2)[0/y_1]\), hence \({\mathcal {M}}, s , \theta \models \psi _1[0/y_1]\) or \({\mathcal {M}}, s , \theta \models \psi _2[0/y_1]\). Suppose w.l.o.g. the former. Then, by the monotonicity property (C-mon)\(^+\) from Lemma 5, we obtain that \({\mathcal {M}}, s , \theta \models \psi _1[C/y_1]\) for any \(C\in \mathbb {N}\), hence \({\mathcal {M}}, s , \theta \models \forall y_1 \psi _1\), so \({\mathcal {M}}, s , \theta \models \forall y_1 \psi _1 \vee \forall y_1 \psi _2\).

      For the case that \(i=2\), assuming that \({\mathcal {M}}, s , \theta \models \forall y_2 (\psi _1 \vee \psi _2)\), it follows that at least one of \({\mathcal {M}}, s , \theta \models \psi _1[N/y_2]\) and \({\mathcal {M}}, s , \theta \models \psi _2[N/y_2]\) holds for infinitely many values of \(N \in \mathbb {N}\). Suppose w.l.o.g. the former. Then, by the monotonicity property (N-mon)\(^+\) from Lemma 5, we obtain that \({\mathcal {M}}, s , \theta \models {\psi _1}[N/y_2]\) for any \(N\in \mathbb {N}\), hence \({\mathcal {M}}, s , \theta \models \forall y_2 \psi _1\), so \({\mathcal {M}}, s , \theta \models \forall y_2 \psi _1 \vee \forall y_2 \psi _2\).

    3. (c)

      Lastly, the case when \({\mathcal {Q}} = \mathsf {Q} y_i\mathsf {Q} 'y_j\) is readily reducible to the previous 2 cases, by distributing first \(\mathsf {Q} 'y_j\) and then \(\mathsf {Q} y_i\).

  5. 5.

\(\varphi = \mathsf {Q} '' y_k \psi\). Again, we consider the subcases depending on \({\mathcal {Q}}\).

    1. (a)

      \({\mathcal {Q}} = \mathsf {Q} y_i\), where \(i=k\).

      We are to show that \(\mathsf {Q} y_i \mathsf {Q} '' y_i \psi \equiv _{\mathsf {fin}}\textsc {push} (\mathsf {Q} '' y_i, \psi )\), which follows from \(\mathsf {Q} y_i \mathsf {Q} '' y_i \psi \equiv \mathsf {Q} '' y_i \psi\) and the IH.

    2. (b)

      \({\mathcal {Q}} = \mathsf {Q} y_i\), where \(i \ne k\).

      We are to show that \(\mathsf {Q} y_i \mathsf {Q} '' y_k \psi \equiv _{\mathsf {fin}}\textsc {push} (\mathsf {Q} y_i\mathsf {Q} '' y_k, \psi )\), which follows from the IH for \({\mathcal {Q}} = \mathsf {Q} y_i \mathsf {Q} '' y_k\) and \(\psi\).

    3. (c)

      \({\mathcal {Q}} = \mathsf {Q} y_i \mathsf {Q} ' y_j\), where \(i = k\).

      We are to show that \(\mathsf {Q} y_k \mathsf {Q} ' y_j \mathsf {Q} '' y_k \psi \equiv _{\mathsf {fin}}\textsc {push} (\mathsf {Q} ' y_j\mathsf {Q} '' y_k, \psi )\), which follows from \(\mathsf {Q} y_k \mathsf {Q} ' y_j \mathsf {Q} '' y_k \psi \equiv \mathsf {Q} ' y_j \mathsf {Q} '' y_k \psi\) and the IH.

    4. (d)

      The case \({\mathcal {Q}} = \mathsf {Q} y_i \mathsf {Q} ' y_j\), where \(j = k\), is analogous.

  6. 6.

    \(\varphi = \langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\,\chi\).

    This inductive case is the most involved one, for both inductions: the external one, on \(\mathbf {nd} (\varphi )\), and the nested one, on the structure of \(\varphi\). It is also where the finiteness of the models over which we prove the equivalence is used essentially. There are several subcases, depending on \({\mathcal {Q}}\) and on the main temporal connective of \(\chi\). The proof for each case is technical and some cases are longer than others, but they all use a similar approach, which essentially hinges on the finiteness of the model and the monotonicity properties from Lemma 5. These allow us to obtain uniformly large enough values of the quantified variables, beyond which the truth values of all strategic subformulae stabilise, and thus to establish the truth of the non-trivial implications.

    We will provide a representative selection of proofs for some of the cases and will leave out the rest, which are essentially analogous, though possibly even longer.

    1. (a)

      \({\mathcal {Q}} = \mathsf {Q} y_i\), where \(t_i = y_i\), for \(i=1\) or \(i=2\).

      We are to show that \(\mathsf {Q} y_i \langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\,\chi \equiv _{\mathsf {fin}}\mathsf {Q} y_i \langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\, \textsc {push} (\mathsf {Q} y_i, \chi )\), assuming the inductive hypothesis for the main state subformulae of \(\chi\). We consider the subcases depending on \(\mathsf {Q}\), i, and the main temporal connective of \(\chi\).

      Case (\(\forall y_1 \mathsf {G}\,\)): to prove \(\forall y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi \equiv _{\mathsf {fin}}\forall y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \textsc {push} (\forall y_1, \psi )\).

      By the IH for \(\psi\), we have that \(\forall y_1 \psi \equiv _{\mathsf {fin}}\textsc {push} (\forall y_1, \psi )\).

      So, it suffices to prove that \(\forall y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi \equiv _{\mathsf {fin}}\forall y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \forall y_1 \psi\). By Lemma 6, \(\forall y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi \equiv \langle \!\langle {0, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi [0/y_1]\) and \(\forall y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \forall y_1 \psi \equiv \langle \!\langle {0, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \forall y_1 \psi\).

      So, we have to prove that \(\langle \!\langle {0, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi [0/y_1] \equiv \langle \!\langle {0, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \forall y_1 \psi\), which follows immediately, since \(\forall y_1 \psi \equiv \psi [0/y_1]\), by (C-mon)\(^+\) from Lemma 5.

      Case (\(\exists y_1 \mathsf {G}\,\)): to prove \(\exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi \equiv _{\mathsf {fin}}\exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \textsc {push} (\exists y_1, \psi )\).

      By the IH for \(\psi\), we have that \(\exists y_1 \psi \equiv _{\mathsf {fin}}\textsc {push} (\exists y_1, \psi )\). So, it suffices to prove that \(\exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi \equiv _{\mathsf {fin}}\exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \exists y_1 \psi\). Since \(\models \psi \rightarrow \exists y_1 \psi\), we obtain validity of the implication \(\exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi \rightarrow \exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \exists y_1 \psi\).

      For the converse, suppose \({\mathcal {M}}, s , \theta \models \exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \exists y_1 \psi\) for some finite \({\mathcal {M}}\) with state space \(S\), assignment \(\theta\) and \(s \in S\). Fix any \(C\in \mathbb {N}\) such that \({\mathcal {M}}, s , \theta \models \langle \!\langle {C, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \exists y_1 \psi\). Since \(\theta\) fixes the values of all terms, we can treat \(\exists y_1 \psi\) as a closed formula. Note that, according to the syntax of \({\mathcal {L}}_{\textsc {hdmas}}\), all occurrences of \(y_1\) in \(\psi\) are positive. Let \(W = [\![ \exists y_1 \psi ]\!]_{{\mathcal {M}}}^{\theta }\) be its extension in \({\mathcal {M}}\) (which depends on \(\theta\)). Let \(f: W \rightarrow \mathbb {N}\) be a mapping assigning to every \(u \in W\) a number \(f(u)\) such that \({\mathcal {M}}, u, \theta \models \psi [f(u)/y_1]\). Now, let \(f^* := \max _{u \in W} f(u)\) and \(C^* := \max (f^*,C)\). Then, by (C-mon)\(^+\) from Lemma 5, we obtain that \({\mathcal {M}}, u, \theta \models \psi [C^*/y_1]\) for each \(u \in W\), hence \({\mathcal {M}}, s , \theta \models \langle \!\langle {C^*, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi [C^*/y_1]\). Therefore, \({\mathcal {M}}, s , \theta \models \exists y_1 \langle \!\langle {y_1,t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi\). Thus, \(\exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \exists y_1 \psi \rightarrow \exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi\) is valid in the finite, whence the claim.

      Case (\(\forall y_2 \mathsf {G}\,\)): to prove \(\forall y_2 \langle \!\langle {t,y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi \equiv _{\mathsf {fin}}\forall y_2 \langle \!\langle {t,y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \textsc {push} (\forall y_2, \psi )\).

      By the IH for \(\psi\), we have that \(\forall y_2 \psi \equiv _{\mathsf {fin}}\textsc {push} (\forall y_2, \psi )\). So, it suffices to prove that \(\forall y_2 \langle \!\langle {t,y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi \equiv _{\mathsf {fin}}\forall y_2 \langle \!\langle {t,y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \forall y_2 \psi\). The implication \(\models \forall y_2 \langle \!\langle {t,y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \forall y_2 \psi \rightarrow \forall y_2 \langle \!\langle {t,y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi\) follows from \(\models \forall y_2 \psi \rightarrow \psi\), proved just like in FOL. For the converse, suppose \({\mathcal {M}}, s , \theta \models \forall y_2 \langle \!\langle {t,y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi\) for some finite \({\mathcal {M}}\) with state space \(S\), assignment \(\theta\) and \(s \in S\). Then, for every \(N\in \mathbb {N}\), it holds that \({\mathcal {M}}, s , \theta \models \langle \!\langle {t,N}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi [N/y_2]\), i.e., there is an abstract positional joint strategy \(\sigma _N\) for \(\theta (t)\) many controllable agents, such that \(\psi [N/y_2]\) is true at every state on every outcome play enabled by \(\sigma _N\) against \(N\) uncontrollable agents. Since there are only finitely many abstract positional joint strategies for \(\theta (t)\) controllable agents in \({\mathcal {M}}\), there is at least one such joint strategy which works for infinitely many values of \(N\), and therefore, by (N-mon), it works for all \(N\in \mathbb {N}\). Let us fix such a strategy \(\sigma ^{{\mathbf {c}}}\). We will show that \({\mathcal {M}}, s , \theta \models \forall y_2 \langle \!\langle {t,y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \forall y_2 \psi\) by proving that, for every \(N\in \mathbb {N}\), if \(\sigma ^{{\mathbf {c}}}\) is played by the \(\theta (t)\) many controllable agents, it ensures the truth of \({\mathcal {M}}, s , \theta \models \langle \!\langle {t,N}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \forall y_2 \psi\). Suppose this is not the case for some \(N'\in \mathbb {N}\). Then, there is an abstract positional joint strategy \(\sigma ^{{\mathbf {n}}}\) for \(N'\) uncontrollable agents that guarantees reaching a state \(w\) where \(\forall y_2 \psi\) fails on the unique play \(\pi\) generated by the pair of joint strategies \((\sigma ^{{\mathbf {c}}}, \sigma ^{{\mathbf {n}}})\). Thus, \({\mathcal {M}}, w, \theta \not \models \forall y_2 \psi\), i.e., \({\mathcal {M}}, w, \theta \models \lnot \forall y_2 \psi\). Therefore, \({\mathcal {M}}, w, \theta \models \lnot \psi [N''/y_2]\) for some \(N''\in \mathbb {N}\). Let \(N^* := \max (N',N'')\). Then, by (N-mon)\(^-\) from Lemma 5, we have that \({\mathcal {M}}, w, \theta \models \lnot \psi [N^*/y_2]\). Furthermore, the strategy \(\sigma ^{{\mathbf {n}}}\) can be trivially extended to a strategy \(\sigma ^{{\mathbf {n}}*}\) for \(N^*\) uncontrollable agents (by letting the extra \(N^* - N'\) uncontrollable agents idle), hence the play \(\pi\) is still generated by the resulting pair of joint strategies \((\sigma ^{{\mathbf {c}}}, \sigma ^{{\mathbf {n}}*})\) and the state \(w\) as above is still reached on it. On the other hand, by the choice of \(\sigma ^{{\mathbf {c}}}\), when it is played by the \(\theta (t)\) many controllable agents against \(N^*\) uncontrollable agents, it guarantees maintaining forever the truth of \(\psi\), i.e., \({\mathcal {M}}, s , \theta \models \langle \!\langle {t,N^*}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi [N^*/y_2]\). In particular, this implies \({\mathcal {M}}, w, \theta \models \psi [N^*/y_2]\), a contradiction. Therefore, the assumption that such \(N'\) exists is wrong, whence the claim.

      Case (\(\exists y_2 \mathsf {G}\,\)): to prove \(\exists y_2 \langle \!\langle {t,y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi \equiv _{\mathsf {fin}}\exists y_2 \langle \!\langle {t,y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \textsc {push} (\exists y_2, \psi )\).

      This case is quite analogous to Case (\(\forall y_1 \mathsf {G}\,\)) and is proved by using the IH for \(\psi\), the equivalences \(\exists y_2 \langle \!\langle {t,y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi \equiv \langle \!\langle {t,0}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi [0/y_2]\) and \(\exists y_2 \langle \!\langle {t,y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \exists y_2 \psi \equiv \langle \!\langle {t,0}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \exists y_2 \psi\) from Lemma 6, and the monotonicity property (N-mon)\(^+\) from Lemma 5.

      Cases \((\mathsf {Q} y_i \mathsf {X}\, )\) are analogous, but a little simpler than those above.

      Cases \((\mathsf {Q} y_i \, \mathsf {U} \, )\) are analogous, though a little longer than those above.

    2. (b)

      \({\mathcal {Q}} = \mathsf {Q} y_i\), where \(t_1 \not = y_i\) and \(t_2 \not = y_i\).

      We are to show that \(\mathsf {Q} y_i \langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\,\chi \equiv _{\mathsf {fin}}\langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\, \textsc {push} (\mathsf {Q} y_i, \chi )\), assuming the IH for the main state subformulae of \(\chi\). For that, it suffices to prove that the quantifier \(\mathsf {Q}\) can be equivalently pushed inside through \(\langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\,\) and the main temporal connective of \(\chi\), e.g., that \(\mathsf {Q} y_i \langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\,\mathsf {G}\, \psi \equiv _{\mathsf {fin}}\langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\, \mathsf {Q} y_i \mathsf {G}\, \psi\). The non-trivial implications follow from the fact that there are only finitely many abstract positional strategies for the controllable agents in any given finite model, plus the monotonicity properties from Lemma 5. The argument for that is essentially the same as that in the proof of Case (\(\forall y_2 \mathsf {G}\,\)) above.

    3. (c)

      \({\mathcal {Q}} = \mathsf {Q} y_1 \mathsf {Q} ' y_2\), where \(t_1 \ne y_1\) and \(t_2 \ne y_2\).

      We are to show that \(\mathsf {Q} y_1 \mathsf {Q} ' y_2 \langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\,\chi \equiv _{\mathsf {fin}}\langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\, \textsc {push} (\mathsf {Q} y_1 \mathsf {Q} ' y_2, \chi )\), assuming the IH for all state formulae of lower nesting depth, including the main state subformulae of \(\chi\).

      This equivalence follows by applying case (b) twice, first for \({\mathcal {Q}} = \mathsf {Q} ' y_2\) and then for \({\mathcal {Q}} = \mathsf {Q} y_1\) (the IH on the nesting of strategic operators is used here), and each time using the IH.

      The case \({\mathcal {Q}} = \mathsf {Q} ' y_2 \mathsf {Q} y_1\), where \(t_1 \ne y_1\) and \(t_2 \ne y_2\) is completely analogous.

    4. (d)

      \({\mathcal {Q}} = \mathsf {Q} y_1 \mathsf {Q} ' y_2\), where \(t_1 = y_1\) and \(t_2 \ne y_2\).

      We are to show that

      $$\begin{aligned} \mathsf {Q} y_1 \mathsf {Q} ' y_2 \langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\,\chi \equiv _{\mathsf {fin}}\mathsf {Q} y_1 \langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\, \textsc {push} (\mathsf {Q} y_1 \mathsf {Q} ' y_2, \chi ), \end{aligned}$$

      assuming the IH for all state formulae of lower nesting depth, incl. the main state subformulae of \(\chi\).

      E.g., when \(\chi = \mathsf {G}\, \psi\), we are to prove \(\mathsf {Q} y_1 \mathsf {Q} ' y_2 \langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi \equiv _{\mathsf {fin}}\mathsf {Q} y_1 \langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \textsc {push} (\mathsf {Q} y_1 \mathsf {Q} ' y_2, \psi )\). By the IH, \(\textsc {push} (\mathsf {Q} y_1 \mathsf {Q} ' y_2, \psi ) \equiv _{\mathsf {fin}}\mathsf {Q} y_1 \mathsf {Q} ' y_2 \psi\), so we are to prove that \(\mathsf {Q} y_1 \mathsf {Q} ' y_2 \langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi \equiv _{\mathsf {fin}}\mathsf {Q} y_1 \langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \mathsf {Q} y_1 \mathsf {Q} ' y_2 \psi\).

      This follows by first applying case (b) for \({\mathcal {Q}} = \mathsf {Q} ' y_2\) and the IH to obtain

      \(\mathsf {Q} ' y_2 \langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi \equiv _{\mathsf {fin}}\langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \textsc {push} (\mathsf {Q} ' y_2, \psi ) \equiv _{\mathsf {fin}}\langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \mathsf {Q} ' y_2 \psi\), and then applying \(\mathsf {Q} y_1\) to both sides, then case (a) for \({\mathcal {Q}} = \mathsf {Q} y_1\), and again the IH.

    5. (e)

      The case \({\mathcal {Q}} = \mathsf {Q} ' y_2 \mathsf {Q} y_1\), where \(t_1 = y_1\) and \(t_2 \ne y_2\) is similar.

    6. (f)

      The cases \({\mathcal {Q}} = \mathsf {Q} y_1 \mathsf {Q} ' y_2\) and \({\mathcal {Q}} = \mathsf {Q} ' y_2 \mathsf {Q} y_1\) where \(t_1 \ne y_1\) and \(t_2 = y_2\) are completely analogous.

    7. (g)

      \({\mathcal {Q}} = \mathsf {Q} y_1 \mathsf {Q} ' y_2\), where \(t_1 = y_1\) and \(t_2 = y_2\).

      We have to prove \(\mathsf {Q} y_1 \mathsf {Q} ' y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\,\chi \equiv _{\mathsf {fin}}\mathsf {Q} y_1 \mathsf {Q} ' y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \textsc {push} (\mathsf {Q} y_1 \mathsf {Q} ' y_2, \chi )\), assuming the IH for all state formulae of lower nesting depth, incl. the main state subformulae of \(\chi\).

      E.g., when \(\chi = \mathsf {G}\, \psi\), we are to prove \(\mathsf {Q} y_1 \mathsf {Q} ' y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi \equiv _{\mathsf {fin}}\mathsf {Q} y_1 \mathsf {Q} ' y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \textsc {push} (\mathsf {Q} y_1 \mathsf {Q} ' y_2, \psi )\).

      By the IH, \(\textsc {push} (\mathsf {Q} y_1 \mathsf {Q} ' y_2, \psi ) \equiv _{\mathsf {fin}}\mathsf {Q} y_1 \mathsf {Q} ' y_2 \psi\), so we are to prove that

      \(\mathsf {Q} y_1 \mathsf {Q} ' y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi \equiv _{\mathsf {fin}}\mathsf {Q} y_1 \mathsf {Q} ' y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \mathsf {Q} y_1 \mathsf {Q} ' y_2 \psi\).

      By case (a), we have already shown that

      \(\mathsf {Q} ' y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi \equiv _{\mathsf {fin}}\mathsf {Q} ' y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \mathsf {Q} ' y_2 \psi\).

      By applying \(\mathsf {Q} y_1\) to both sides we obtain

      \(\mathsf {Q} y_1\mathsf {Q} ' y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi \equiv _{\mathsf {fin}}\mathsf {Q} y_1\mathsf {Q} ' y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \mathsf {Q} ' y_2 \psi\), so it remains to prove

      \(\mathsf {Q} y_1\mathsf {Q} ' y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \mathsf {Q} ' y_2 \psi \equiv _{\mathsf {fin}}\mathsf {Q} y_1 \mathsf {Q} ' y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \mathsf {Q} y_1 \mathsf {Q} ' y_2 \psi\).

      For each case of \(\mathsf {Q}\) the argument for the non-trivial implication uses the monotonicity properties from Lemma 5 and is respectively similar to that in the proof of Case (\(\exists y_2 \mathsf {G}\,\)) and Case (\(\forall y_2 \mathsf {G}\,\)) above.

      The other cases for \(\chi\) are similar.

    8. (h)

      The case \({\mathcal {Q}} = \mathsf {Q} ' y_2 \mathsf {Q} y_1\), where \(t_1 = y_1\) and \(t_2 = y_2\) is completely analogous to the previous one.

    This completes the proof for all cases in the definition of \(\textsc {push} ({\mathcal {Q}}, \langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\, \chi )\) and, therefore, the last inductive case in both inductions.

\(\square\)

Now we will define a recursive function nf that transforms any state or path formula \(\xi\) of \({\mathcal {L}}_{\textsc {hdmas}}\) respectively into a state or path formula \(\xi ^{\mathsf {NF}}\) in \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\), while preserving equivalence in the finite.

figure c

Intuitively, \(\textsc {nf}\) transforms the input formula by first applying \(\textsc {push}\) and then \(\textsc {pqe}\) whenever a quantifier prefix is to be applied, thus producing a formula in a normal form.
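To make the composition order concrete, the following self-contained toy mirrors the recursion \(\textsc {nf} ({\mathcal {Q}} \psi ) = \textsc {pqe} (\textsc {push} ({\mathcal {Q}}, \textsc {nf} (\psi )))\) on tuple ASTs. Both helpers are stubbed down to a single case each (the \(\forall y_1\) elimination of Lemma 6, case 1); they are assumptions for illustration, not the functions of the figures:

```python
# Self-contained toy of the nf pipeline: at a quantifier prefix, first recurse,
# then push the prefix inward, then partially eliminate it. Only one rewrite
# case is stubbed in for each helper (toy tuple ASTs, not the paper's figures).

def subst(phi, var, val):
    """Replace every occurrence of `var` by `val`."""
    if phi == var:
        return val
    if isinstance(phi, tuple):
        return tuple(subst(a, var, val) for a in phi)
    return phi

def push(q, y, phi):
    """Distribute (q, y) over boolean connectives, stop at strategic operators."""
    if not isinstance(phi, tuple):
        return phi                       # vacuous quantification over an atom
    if phi[0] in ("and", "or"):
        return (phi[0], push(q, y, phi[1]), push(q, y, phi[2]))
    return (q, y, phi)

def pqe(phi):
    # one elimination case: forall y <<y, t>> chi ~~> <<0, t>> chi[0/y]
    if isinstance(phi, tuple) and phi[0] == "forall":
        _, y, body = phi
        if isinstance(body, tuple) and body[0] == "coop" and body[1] == y:
            return ("coop", 0, body[2], subst(body[3], y, 0))
    return phi

def nf(phi):
    """Normal-form transformation: nf(Q psi) = pqe(push(Q, nf(psi)))."""
    if isinstance(phi, tuple) and phi[0] in ("forall", "exists"):
        q, y, body = phi
        return pqe(push(q, y, nf(body)))
    if isinstance(phi, tuple):
        return tuple(nf(a) if isinstance(a, tuple) else a for a in phi)
    return phi

# forall y1 <<y1, 5>> G p   ~~>   <<0, 5>> G p
print(nf(("forall", "y1", ("coop", "y1", 5, ("G", "p")))))
# -> ('coop', 0, 5, ('G', 'p'))
```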

Example 5

Let \(\varphi\) be as in Example 4. Then \(\textsc {nf} (\varphi )\) is:

$$\begin{aligned} \begin{array}{c} \langle \!\langle {0, 5}\rangle \!\rangle _{_{\! }}\, \big ( (\forall y_2 \langle \!\langle {0, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, p_1) \;\; \, \mathsf {U} \, \;\; (\forall y_2 \langle \!\langle {0, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {F}\, p_2 ) \big ) \vee {} \\ ( \exists y_1\forall y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {F}\, p_3 \;\; \wedge \;\; \lnot \forall y_2 \langle \!\langle {3, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, p_1 ) \end{array} \end{aligned}$$

Lemma 9

Let \(\varphi\) be a state formula of \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\). For every admissible quantifier prefix \({\mathcal {Q}}\), if the variables occurring in \({\mathcal {Q}}\) do not occur free in \(\varphi\), then \(\textsc {push} ({\mathcal {Q}}, \varphi ) = \varphi\).

Proof

The argument is by structural induction on \(\varphi\) in normal form, following the recursive definition of \(\textsc {push}\). The non-trivial cases are those involving quantifiers. We consider \(\varphi = \exists y_1\langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \chi\) and \({\mathcal {Q}} = \mathsf {Q} y_1\mathsf {Q} 'y_2\); the other cases are proved analogously. Since \(\varphi \in {\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\) by hypothesis, we have that \(t_2 \not = y_2\), thus by definition \(\textsc {push} (\mathsf {Q} y_1\mathsf {Q} 'y_2, \exists y_1 \langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \chi ) =\) \(\textsc {push} (\mathsf {Q} 'y_2 \exists y_1, \langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \chi ) =\) \(\exists y_1 \langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \textsc {push} (\mathsf {Q} 'y_2 \exists y_1, \chi )\). By hypothesis, \(y_2\) does not occur free in \(\varphi\), which entails that \(y_2\) is not free in \(\chi\), and the same holds for \(y_1\) by (NF2) in the definition of normal form. We can therefore apply the inductive hypothesis to \(\chi\) to get \(\exists y_1 \langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \textsc {push} (\mathsf {Q} 'y_2 \exists y_1, \chi ) = \exists y_1 \langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \chi\). \(\square\)

Lemma 10

If \(\varphi \in {\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\), then \(\textsc {nf} (\varphi )=\varphi\).

Proof

By induction on the structure of \(\varphi\) in normal form, following the recursive definition of \(\textsc {nf}\). All cases are straightforward, except \(\varphi = {\mathcal {Q}} \psi\). We consider \(\varphi = \exists y_1 \langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \chi\), all other cases being analogous. By the IH, \(\textsc {nf} (\langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \chi ) = \langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \chi\). Also, note that \(y_1\) does not occur free in \(\chi\) since \(\varphi \in {\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\). Therefore, using Lemmas 9 and  8, we successively obtain:

$$\begin{aligned} \textsc {nf} (\varphi )= & {} \textsc {nf} (\exists y_1 \langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \chi ) = \textsc {pqe} (\textsc {push} (\exists y_1, \textsc {nf} (\langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \chi ))) \\= & {} \textsc {pqe} (\textsc {push} (\exists y_1, \langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \chi )) = \textsc {pqe} (\exists y_1 \langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \chi ) = \exists y_1 \langle \!\langle {y_1, t_2}\rangle \!\rangle _{_{\! }}\, \chi = \varphi . \end{aligned}$$

\(\square\)

Theorem 2

Let \(\varphi\) be any formula in \({\mathcal {L}}_{\textsc {hdmas}}\). Then:

  1.

    \(\textsc {nf} (\varphi ) \in {\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\).

  2.

    \(\textsc {nf} (\varphi ) \equiv _{\mathsf {fin}}\varphi\).

  3.

    \(\textsc {nf} (\varphi )\) can be computed effectively and has length linearly bounded above by \(|\varphi |\).

Proof

The first claim follows by straightforward induction on the structure of \(\varphi\), or just by direct inspection of the function \(\textsc {nf}\).

Claim 2 is proved by induction on the structure of \(\varphi\), following the cases of the recursive definition of \(\textsc {nf}\). The only non-trivial case is \(\varphi = {\mathcal {Q}} \psi\), which follows immediately from the IH, Theorem 1, and Lemma 8.

Lastly, Claim 3 follows by direct inspection of all cases in the definitions of the functions \(\textsc {pqe}\), \(\textsc {push}\) and \(\textsc {nf}\). \(\square\)

We conclude the section by presenting fixpoint characterizations of the formulae in (1), which yield an effective procedure at the core of the model checking algorithm.

Theorem 3

For all terms \(t, t',t'' \in T\) the following equivalences hold, where the formulae on the left are in \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\).

  1.

    \(\langle \!\langle {t', t''}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi \equiv \varphi \wedge \langle \!\langle {t', t''}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \langle \!\langle {t', t''}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi\)

  2.

    \(\langle \!\langle {t', t''}\rangle \!\rangle _{_{\! }}\, \psi \, \mathsf {U} \, \varphi \equiv \varphi \vee (\psi \wedge \langle \!\langle {t', t''}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \langle \!\langle {t', t''}\rangle \!\rangle _{_{\! }}\, \psi \, \mathsf {U} \, \varphi )\)

  3.

    \(\exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi \equiv _{\mathsf {fin}}\varphi \wedge \exists y_1 \langle \!\langle {y_1,t}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \exists y_1 \langle \!\langle {y_1,t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi\)

  4.

    \(\forall y_2 \langle \!\langle {t, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi \equiv _{\mathsf {fin}}\varphi \wedge \forall y_2 \langle \!\langle {t, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \forall y_2 \langle \!\langle {t, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi\)

  5.

    \(\exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \psi \, \mathsf {U} \, \varphi \equiv _{\mathsf {fin}}\varphi \vee (\psi \wedge \exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \psi \, \mathsf {U} \, \varphi )\)

  6.

    \(\forall y_2 \langle \!\langle {t, y_2}\rangle \!\rangle _{_{\! }}\, \psi \, \mathsf {U} \, \varphi \equiv _{\mathsf {fin}}\varphi \vee (\psi \wedge \forall y_2 \langle \!\langle {t, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \forall y_2 \langle \!\langle {t, y_2}\rangle \!\rangle _{_{\! }}\, \psi \, \mathsf {U} \, \varphi )\)

  7.

    \(\forall y_2 \exists y_1 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi \equiv _{\mathsf {fin}}\varphi \wedge \forall y_2 \exists y_1 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \forall y_2 \exists y_1 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi\).

  8.

    \(\exists y_1 \forall y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi \equiv _{\mathsf {fin}}\varphi \wedge \exists y_1 \forall y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \exists y_1 \forall y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi\).

  9.

    \(\forall y_2 \exists y_1 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \psi \, \mathsf {U} \, \varphi \equiv _{\mathsf {fin}}\varphi \vee ( \psi \wedge \forall y_2 \exists y_1 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \forall y_2 \exists y_1 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \psi \, \mathsf {U} \, \varphi )\).

  10.

    \(\exists y_1 \forall y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \psi \, \mathsf {U} \, \varphi \equiv _{\mathsf {fin}}\varphi \vee (\psi \wedge \exists y_1 \forall y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \exists y_1 \forall y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \psi \, \mathsf {U} \, \varphi )\).

Proof

  1.

    Follows directly from the semantics, just like the respective fixpoint equivalence for \(\langle \!\langle {A}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\,\) in ATL, cf. [8].

  2.

Likewise, following the respective fixpoint equivalence for \(\langle \!\langle {A}\rangle \!\rangle _{_{\! }}\,\! \, \mathsf {U} \,\) in ATL.

  3.

    First, note that \(y_1\) does not occur free in \(\varphi\) since \(\exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi \in {\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\).

    Now, we take equivalence 1 with \(t' = y_1\) and quantify both sides with \(\exists y_1\), obtaining:

    \(\exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi \equiv \exists y_1 (\varphi \wedge \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi ) \equiv \varphi \wedge \exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi\).

    By applying Theorem 2 to both sides above and then using Lemmas 9,  10 and 8 we obtain:

    $$\begin{aligned}&\exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi \\&\quad \equiv _{\mathsf {fin}}\textsc {nf} (\varphi \wedge \exists y_1\langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi ) \\&\quad = \textsc {nf} (\varphi ) \wedge \textsc {nf} (\exists y_1\langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi ) \\&\quad = \varphi \wedge \textsc {pqe} (\textsc {push} (\exists y_1, \textsc {nf} (\langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi ))) \\&\quad = \varphi \wedge \textsc {pqe} (\textsc {push} (\exists y_1, \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \textsc {nf} (\varphi ))) \\&\quad = \varphi \wedge \textsc {pqe} (\textsc {push} (\exists y_1, \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi )) \\&\quad = \varphi \wedge \textsc {pqe} (\exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \textsc {push} (\exists y_1, \mathsf {X}\, \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi )) \\&\quad = \varphi \wedge \textsc {pqe} (\exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \exists y_1\langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \textsc {push} (\exists y_1, \mathsf {G}\, \varphi )) \\&\quad = \varphi \wedge \textsc {pqe} (\exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \exists y_1\langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi ) \\&\quad \equiv \varphi \wedge \exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \exists y_1 \langle \!\langle {y_1, t}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi . \end{aligned}$$

    The other cases are analogous.

\(\square\)
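To illustrate how these equivalences are used computationally, here is a minimal Python sketch (illustrative only, not from the paper) of the greatest-fixpoint iteration that equivalence 1 justifies: the extension of \(\langle \!\langle {t', t''}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \varphi\) is the largest set \(Z\) with \(Z = [\![ \varphi ]\!] \cap \textsc {preImg}(Z)\). The transition map `succ` and the function `pre_img` below are toy stand-ins for the controllable pre-image computed later by Algorithm 1.

```python
def g_fixpoint(phi_states, pre_img):
    """Greatest fixpoint of Z -> [[phi]] ∩ preImg(Z), by downward iteration."""
    z = set(phi_states)
    while True:
        new_z = set(phi_states) & pre_img(z)
        if new_z == z:
            return z
        z = new_z

# Toy stand-in: pre_img(Q) = states with some successor in Q.
succ = {0: {0, 1}, 1: {2}, 2: {2}}
pre_img = lambda q: {s for s, ts in succ.items() if ts & q}
```

With \([\![ \varphi ]\!] = \{0, 1\}\) this toy example yields \(\{0\}\): state 1 satisfies \(\varphi\) but is forced out of \(\{0, 1\}\) after one step, so it is eliminated in the first iteration.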

4 Model checking

In this section we develop an algorithm for model checking the fragment \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\). By virtue of Theorem 2, it will provide a model checking procedure for the whole \({\mathcal {L}}_{\textsc {hdmas}}\).

Let \(\varphi\) be any state formula of \({\mathcal {L}}_{\textsc {hdmas}}\), \({\mathcal {M}}\) a hdmas, \(s\) a state and \(\theta\) an assignment in \({\mathcal {M}}\). The local model checking problem is the problem of deciding whether \({\mathcal {M}}, s , \theta \models \varphi\), while the global model checking problem is that of computing the set of states in \({\mathcal {M}}\) at which the input formula \(\varphi\) is satisfied, i.e. the state extension of \(\varphi\) in \({\mathcal {M}}\) given \(\theta\), formally defined as:

$$\begin{aligned}{}[\![ \varphi ]\!]_{{\mathcal {M}}}^{\theta } = \{{ s \in S \mid {\mathcal {M}}, s , \theta \models \varphi }\}. \end{aligned}$$

For closed formulae \(\varphi\), \([\![ \varphi ]\!]_{{\mathcal {M}}}^{\theta }\) does not depend on the assignment \(\theta\), so we omit it and write \([\![ \varphi ]\!]_{{\mathcal {M}}}^{}\).

Algorithm 4 presented here solves the global model checking problem for all \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\) formulae. Its core sub-procedure is the function \(\textsc {preImg}\) which, given a set \(Q\) of states in \(S\) and \(C, N \in \mathbb {N}\), returns the set of states from which C controllable agents have a joint action which, when played against any joint action of N uncontrollable agents, produces an outcome state in \(Q\). We call that set the (C, N)-controllable pre-image of \(Q\), and we omit (C, N) when it is unspecified or fixed by the context, writing simply “the controllable pre-image of \(Q\)”. We also extend that notion to the “\((t_1,t_2)\)-controllable pre-image”, for any terms \(t_1,t_2\) whose values are given by the assignment. When \(Q= [\![ \psi ]\!]_{{\mathcal {M}}}^{{\theta }}\), the function \(\textsc {preImg}\) computes the state extension of \(\langle \!\langle {t_1,t_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \psi\), parameterised by the terms \(t_1,t_2\) (by means of their values \(\theta (t_1)\) and \(\theta (t_2)\)). We then extend that further to quantified versions of \(\langle \!\langle {t_1,t_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \psi\), by adding the respective quantification to the result. In all cases, we reduce the computation of controllable pre-images to checking the truth of Presburger formulae.

We now proceed with some technical preparation. Recall that \(X ^+\) is the set of \(n+1\) action counters. We will also be using auxiliary integer variables \(k_1, \ldots , k_n, k_{\varepsilon }\) and \(\ell _1, \ldots , \ell _n, \ell _{\varepsilon }\) not contained in \(X ^+\). Each \(k_i\) (respectively, \(\ell _i\)) represents the number of controllable (respectively, uncontrollable) agents performing action \(act _i\); likewise for \(k_{\varepsilon }\) (resp., \(\ell _{\varepsilon }\)) for the number of controllable (resp., uncontrollable) agents performing the idle action. Also, for each \(s\) in \(S\) and \(i \in \{{1, \ldots , n}\}\) we introduce an auxiliary propositional constant \(d ^i_s\) which is true if and only if action \(act _i\) is available in \(s\), i.e., \(act _i \in d ( s )\).

Definition 14

Given a hdmas \({\mathcal {M}}\) with a finite state space \(S\), state \(s\) in \(S\), a subset \(Q\) of \(S\), and terms \(t_1\), \(t_2\), we define the following Presburger formulae:

$$\begin{aligned}&g ^{ s }_{Q}(x_1,\ldots , x_n) := \bigvee _{ s ' \in Q} \delta ( s , s ')(x_1,\ldots , x_n).\\&\mathsf {PrF}({\mathcal {M}}, s , t_1, t_2, Q) := \exists k_1 \ldots \exists k_n\, \exists k_{\varepsilon } \Bigg ( \bigwedge _{i=1}^n (k_i \not = 0 \rightarrow d ^i_s) \wedge {} \sum _{i =1}^n k_i + k_{\varepsilon } = t_1 \wedge {} \\&\forall \ell _1\ldots \forall \ell _n\, \forall \ell _{\varepsilon } \, \bigg ( \Big (\bigwedge _{i=1}^n (\ell _i \not = 0 \rightarrow d ^i_s ) \wedge {} \sum _{i =1}^n \ell _i + \ell _{\varepsilon } = t_2 \Big ) \rightarrow g ^{ s }_{Q}\big ((k_1+\ell _1),\ldots ,(k_n+\ell _n)\big ) \bigg ) \Bigg ) \end{aligned}$$

The formula \(\mathsf {PrF}({\mathcal {M}}, s , t_1, t_2, Q)\) intuitively says that there is a tuple of available actions at \(s\) such that, when played by \(t_1\) many (controllable) agents and combined with any tuple of available actions for \(t_2\) many (uncontrollable) agents, it satisfies a guard of a transition leading to a state in \(Q\). (The formula can be shortened somewhat if the quantification is restricted to the \(k\)- and \(\ell\)-variables that correspond to action counters appearing in the guard \(g ^{ s }_{Q}\), which improves the complexity estimates, as shown in Sect. 5.) That formula and its extensions with quantifiers over \(t_1\) (when equal to \(y_1\)) and \(t_2\) (when equal to \(y_2\)) will be used by the global model checking algorithm to compute the controllable pre-images of state extensions.
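For concrete values \(C = t_1\) and \(N = t_2\), what \(\mathsf {PrF}\) expresses can be checked by brute force over distributions of agents, as the following Python sketch shows. This is an illustration under simplifying assumptions, not the paper's Presburger-based procedure: `avail` is the set of actions available at \(s\), `guard` plays the role of the disjunction \(g ^{ s }_{Q}\) over the summed action counters, and all names are ours.

```python
from collections import Counter
from itertools import combinations_with_replacement

def distributions(total, actions):
    """All ways to split `total` agents over `actions` plus the idle action."""
    opts = sorted(actions) + ['<idle>']
    for combo in combinations_with_replacement(opts, total):
        counts = Counter(combo)
        # Idle counts never reach the guards, so only real actions are kept.
        yield {a: counts[a] for a in actions}

def prf(avail, guard, c_agents, n_agents):
    """Does some distribution of the C controllable agents satisfy the guard
    against every distribution of the N uncontrollable agents?"""
    return any(
        all(guard({a: k[a] + l[a] for a in avail})
            for l in distributions(n_agents, avail))
        for k in distributions(c_agents, avail))
```

For instance, with the guard \(x_a > x_b\), two controllable agents can beat one uncontrollable agent (all play `a`), but one controllable agent cannot beat two, who may both play `b`.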

Example 6

Let us compute the state extension of the formula \(\varphi = \exists y_1 \forall y_2 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, (p \vee q)\) in the model \({\mathcal {M}}\) of Example 2. First, we compute \([\![ p \vee q]\!]_{{\mathcal {M}}}^{} = \{{ s _2, s _3, s _4, s _5, s _6}\}\). Then, for each state \(s \in {\mathcal {M}}\) we check the truth of the closed Presburger formula \(\exists y_1 \forall y_2 \mathsf {PrF}({\mathcal {M}}, s , y_1, y_2, [\![ p \vee q]\!]_{{\mathcal {M}}}^{})\) in \({\mathcal {M}}\).

  • \(\exists y_1 \forall y_2 \, \mathsf {PrF}({\mathcal {M}}, s _1, y_1, y_2, [\![ p \vee q]\!]_{{\mathcal {M}}}^{})\) is false, thus \(s _1\) does not belong to the \(\exists y_1\forall y_2(y_1,y_2)\)-controllable pre-image of \([\![ p \vee q]\!]_{{\mathcal {M}}}^{}\). Indeed 11 uncontrollable agents can force the system to stay in \(s _1\) when they all perform \(act _3\);

  • \(\exists y_1 \forall y_2 \, \mathsf {PrF}({\mathcal {M}}, s _2, y_1, y_2, [\![ p \vee q]\!]_{{\mathcal {M}}}^{})\) is true, hence \(s _2\) belongs to the \(\exists y_1\forall y_2(y_1,y_2)\)-controllable pre-image of \([\![ p \vee q]\!]_{{\mathcal {M}}}^{}\) trivially because all outgoing transitions from \(s _2\) lead to states in \([\![ p \vee q]\!]_{{\mathcal {M}}}^{}\);

  • checking all other states likewise produces the final result:

    \([\![ \varphi ]\!]_{{\mathcal {M}}}^{} = \{{ s _2, s _4, s _5, s _6}\}\).

(Figures d–g here contain the pseudocode listings of Algorithms 1–4.)

We now present the global model checking Algorithm 4. From here on, we denote by \(\mathsf {pfix}\) any string from the set \(\{{\epsilon , \exists y_1, \forall y_2, \exists y_1 \forall y_2, \forall y_2 \exists y_1}\}\), where \(\epsilon\) is the empty string. In each of the cases of the algorithms, \(\mathsf {pfix}\) is assumed to be the longest quantifier prefix that matches the input (sub)-formula.

  1.

    The base case in Algorithm 4 (line 3) of \(\varphi\) being an atomic proposition p simply returns the set of states, the labels of which contain p.

  2.

    The boolean cases are straightforward.

  3.

    In the case of Nexttime formula \(\mathsf {pfix}\langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\,\mathsf {X}\, \psi\), the algorithm first computes the state extension \(Q\) of the subformula \(\psi\) with a recursive call, and then the controllable pre-image of \(Q\). The computation of the respective controllable pre-image is shown in Algorithm 1. First, if any of \(t_1\) and \(t_2\) is not a variable that appears (i.e., is bound) in the quantifier prefix \(\mathsf {pfix}\), the assignment \(\theta\) is applied to assign its value. Then, for each state \(s\), if the formula \(\mathsf {pfix}\, \mathsf {PrF}({\mathcal {M}}, s , t_1, t_2, Q)\) is true, the algorithm adds \(s\) to the set of controllable states to be returned.

  4.

    Algorithms 2 and 3 compute the extension of closed formulae of the type \(\mathsf {pfix}\langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\, \chi\) with temporal objective \(\chi\) starting with \(\mathsf {G}\,\) and \(\, \mathsf {U} \,\) respectively. Their structure is similar to that of the global model checking algorithms for such formulae in ATL (cf. e.g. the algorithm presented in [8, Chapter 9]). They apply the iterative procedures of computing controllable pre-images yielded by the fixpoint characterizations of the temporal operators \(\mathsf {G}\,\) and \(\, \mathsf {U} \,\) (ibid.). This is possible for quantified formulae because the quantifiers in formulae from \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\) are propagated inside the temporal operators according to the respective fixpoint equivalences, proved in Theorem 3.
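The recursive structure just described can be sketched as follows. This is a simplified, hypothetical rendering, not the paper's Algorithms 1–4: formulae are encoded as tuples, e.g. `('G', t1, t2, sub)` for \(\mathsf {pfix}\langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\,\) applied to a subformula, and `pre_img(Q, t1, t2)` abstracts the \(\textsc {preImg}\) procedure (quantifier prefixes and assignments are folded into it).

```python
def global_mc(states, label, pre_img, phi):
    """Global model checking sketch: returns the state extension of phi."""
    rec = lambda f: global_mc(states, label, pre_img, f)
    op = phi[0]
    if op == 'atom':                 # states whose label contains the proposition
        return {s for s in states if phi[1] in label[s]}
    if op == 'not':
        return set(states) - rec(phi[1])
    if op == 'and':
        return rec(phi[1]) & rec(phi[2])
    if op == 'X':                    # controllable pre-image of the subformula
        return pre_img(rec(phi[3]), phi[1], phi[2])
    if op == 'G':                    # greatest fixpoint: Z = Q ∩ preImg(Z)
        q = rec(phi[3])
        z, prev = set(q), None
        while z != prev:
            prev, z = z, q & pre_img(z, phi[1], phi[2])
        return z
    if op == 'U':                    # least fixpoint: Z = Q2 ∪ (Q1 ∩ preImg(Z))
        q1, q2 = rec(phi[3]), rec(phi[4])
        z, prev = set(q2), None
        while z != prev:
            prev, z = z, q2 | (q1 & pre_img(z, phi[1], phi[2]))
        return z
    raise ValueError(f"unknown operator {op!r}")
```

Plugged together with a toy one-step pre-image, this reproduces the shape of the iteration traces worked out in Examples 7 and 8 below.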

Theorem 4

Let \({\mathcal {M}}\) be a hdmas, \(\varphi\) an \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\)-formula and \(\theta\) an assignment. Then

$$\begin{aligned}{}[\![ \varphi ]\!]_{{\mathcal {M}}}^{\theta } = \textsc {globalMC}({\mathcal {M}}, \varphi , \theta ) \end{aligned}$$

Proof

By induction on the structure of \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\) formulae. The boolean cases are straightforward. For nexttime formulae \(\mathsf {pfix}\langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \psi\) the claim immediately follows from the correctness of Algorithm 1, implied by the semantics of \(\mathsf {pfix}\, \mathsf {PrF}({\mathcal {M}}, s , t_1, t_2, [\![ \psi ]\!]_{{\mathcal {M}}}^{})\). For formulae of the type \(\mathsf {pfix}\langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, \psi\) and \(\mathsf {pfix}\langle \!\langle {t_1, t_2}\rangle \!\rangle _{_{\! }}\, \psi _1 \, \mathsf {U} \, \psi _2\), it follows from the correctness of Algorithms 2 and 3, justified by Theorem 3. \(\square\)

For model checking of the full language \({\mathcal {L}}_{\textsc {hdmas}}\), Algorithm 4 is combined with function \(\textsc {nf}\), transforming constructively any \({\mathcal {L}}_{\textsc {hdmas}}\)-formula \(\varphi\) to \(\varphi ^{\mathsf {NF}}\) in \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\), equivalent in the finite to \(\varphi\) by virtue of Theorem 2.

Example 7

We illustrate Algorithm 4 by sketching its application to the formula \(\psi = \langle \!\langle {7, 4}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, (\forall y_2 \exists y_1 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, p)\) in the hdmas model \({\mathcal {M}}\) in Fig. 2. We fix an arbitrary assignment \(\theta\) (it plays no role, since \(\psi\) is closed). The outer formula is an \(\mathsf {X}\,\) formula, so line 18 recursively calls the global model checking on the subformula in the temporal objective. Line 4 of \(\textsc {G-fixpoint}\) initializes \(Z\leftarrow \{{ s _2, s _3, s _4}\}\), viz., the states labeled with p, and \(W\leftarrow S = \{{ s _1, \ldots , s _6}\}\). Since \(W\not \subseteq Z\), we enter the while cycle computing the fixpoint. In the numbered list below, item i corresponds to the ith iteration of the cycle.

  1.
    • \(W\leftarrow \{{ s _2, s _3, s _4}\}\);

    • \(\textsc {preImg}({\mathcal {M}}, y_1, y_2, \{{ s _2, s _3, s _4}\}, \theta , \forall y_2 \exists y_1)=\{{ s _2, s _4, s _5}\}\);

    • \(Z\leftarrow \{{ s _2, s _4, s _5}\} \cap \{{ s _2, s _3, s _4}\} = \{{ s _2, s _4}\}\).

  2.
    • \(W\leftarrow \{{ s _2, s _4}\}\);

    • \(\textsc {preImg}({\mathcal {M}}, y_1, y_2, \{{ s _2, s _4}\}, \theta , \forall y_2 \exists y_1) = \{{ s _2, s _4, s _5}\}\);

    • \(Z\leftarrow \{{ s _2, s _4, s _5}\} \cap \{{ s _2, s _3, s _4}\} = \{{ s _2, s _4}\}\).

    Now \(W\leftarrow Z\) and the fixpoint is reached.

The set \(Z\) is then returned, so \([\![ \forall y_2 \exists y_1 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, p]\!]_{{\mathcal {M}}}^{} = \{{ s _2, s _4}\}\). We now move to the outer Nexttime formula, for which line 19 of the \(\textsc {globalMC}\) algorithm calls the \(\textsc {preImg}\) procedure, checking for each \(s \in S\) the truth of the formula \(\mathsf {PrF}({\mathcal {M}}, s , 7, 4, \{{ s _2, s _4}\})\). The final result is \([\![ \psi ]\!]_{{\mathcal {M}}}^{}=\{{ s _4, s _5}\}\).

Example 8

Consider \(\varphi =\) \(\langle \!\langle {6, 3}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \big (\exists y_1 \langle \!\langle {y_1, 10}\rangle \!\rangle _{_{\! }}\, (\forall y_2\exists y_1 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, p) \, \mathsf {U} \, (\forall y_2 \langle \!\langle {0, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, q) \big )\). We start by computing the extension of \(\forall y_2 \langle \!\langle {0, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, q\), following Algorithm 2.

From lines 4–6: \(Q\leftarrow [\![ q]\!]_{{\mathcal {M}}}^{} = \{{ s _5, s _6}\}\); \(W\leftarrow \{{ s _1, \ldots , s _6}\}\), and \(Z\leftarrow \{{ s _5, s _6}\}\).

Since \(W\not \subseteq Z\), we enter the iteration cycle:

  1.
    • \(W\leftarrow \{{ s _5, s _6}\}\);

    • \(\textsc {preImg}({\mathcal {M}}, 0, y_2, \{{ s _5, s _6}\}, \theta , \forall y_2) = \{{ s _6}\}\)

    • \(Z\leftarrow \{{ s _6}\} \cap \{{ s _5, s _6}\} = \{{ s _6}\}\).

  2.
    • \(W\leftarrow \{{ s _6}\}\);

    • \(\textsc {preImg}({\mathcal {M}}, 0, y_2, \{{ s _6}\}, \theta , \forall y_2) = \{{ s _6}\}\);

    • \(Z\leftarrow \{{ s _6}\} \cap \{{ s _5, s _6}\} = \{{ s _6}\}\).

    The fixpoint is reached and \([\![ \forall y_2 \langle \!\langle {0, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, q]\!]_{{\mathcal {M}}}^{} = \{{ s _6}\}\).

From Example 7 we get \([\![ \forall y_2\exists y_1 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, p]\!]_{{\mathcal {M}}}^{}=\{{ s _2, s _4}\}\). We then move to computing the extension of the until formula, following Algorithm 3. From lines 4–7:

\(Q_1 \leftarrow \{{ s _2, s _4}\}\); \(Q_2 \leftarrow \{{ s _6}\}\); \(W\leftarrow \emptyset\) and \(Z\leftarrow \{{ s _6}\}\).

Since \(Z\not \subseteq W\), we enter the iteration cycle:

  1.
    • \(W\leftarrow \{{ s _6}\}\);

    • \(\textsc {preImg}({\mathcal {M}}, y_1, 10, \{{ s _6}\}, \theta , \exists y_1) = \{{ s _4, s _6}\}\).

      Indeed, from \(s _4\), e.g., 40 controllable agents performing \(act _1\) guarantee that guard \(g _4\) is satisfied.

    • \(Z\leftarrow \{{ s _6}\} \cup (\{{ s _4, s _6}\} \cap \{{ s _2, s _4}\} ) = \{{ s _4, s _6}\}\).

  2.
    • \(W\leftarrow \{{ s _4, s _6}\}\);

    • \(\textsc {preImg}({\mathcal {M}}, y_1, 10, \{{ s _4, s _6}\}, \theta , \exists y_1) = \{{ s _2, s _4, s _6}\}\);

    • \(Z\leftarrow \{{ s _6}\} \cup (\{{ s _2, s _4, s _6}\} \cap \{{ s _2, s _4}\}) = \{{ s _2, s _4, s _6}\}\).

  3.
    • \(W\leftarrow \{{ s _2, s _4, s _6}\}\);

    • \(\textsc {preImg}({\mathcal {M}}, y_1, 10, \{{ s _2, s _4, s _6}\}, \theta , \exists y_1) = \{{ s _2, s _4, s _5, s _6}\}\);

    • \(Z\leftarrow \{{ s _6}\} \cup (\{{ s _2, s _4, s _5, s _6}\} \cap \{{ s _2, s _4}\}) = \{{ s _2, s _4, s _6}\}\).

    The fixpoint is reached. Thus:

    $$\begin{aligned}{}[\![ \exists y_1 \langle \!\langle {y_1, 10}\rangle \!\rangle _{_{\! }}\, (\forall y_2\exists y_1 \langle \!\langle {y_1, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, p) \, \mathsf {U} \, (\forall y_2 \langle \!\langle {0, y_2}\rangle \!\rangle _{_{\! }}\, \mathsf {G}\, q)]\!]_{{\mathcal {M}}}^{} = \{{ s _2, s _4, s _6}\}. \end{aligned}$$

Lastly, we call \(\textsc {preImg}({\mathcal {M}}, 6, 3, \{{ s _2, s _4, s _6}\}, \theta , \epsilon )\) to compute \([\![ \varphi ]\!]_{{\mathcal {M}}}^{} = \{{ s _1, s _4, s _5, s _6}\}\).

5 Complexity estimates

As is well known from [1], the time complexity of model checking ATL formulae is linear in both the size of the model (see footnote 10) and the length of the formula. Note that in standard concurrent game models the number of agents is fixed and the transition relation is represented explicitly, by means of transitions from each state labelled with each action profile. In hdmas models, however, the transitions are represented symbolically, in terms of the guards that determine them. An explicit representation would, in general, be infinite. Thus, the question arises of how to measure the size of hdmas models. Given a hdmas \({\mathcal {M}}\), we consider the following parameters: the size \(| S |\) of the state space; the size n of the action set \(Act\), and the size \(|\delta |\) of the symbolic transition guard function. The latter is defined as the sum of the lengths of all guards appearing in \(\delta\), where we assume binary encoding of numbers.

Given an \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\) formula \(\varphi\) and a hdmas \({\mathcal {M}}\), the number of fixpoint computations in the global model checking algorithm is bounded by \(|\varphi |\). Each computation executes the while cycle at most \(| S |\) times, and at each iteration the function \(\textsc {preImg}\) is called. The pre-image algorithm cycles through all states again and each time invokes checking the truth of a \(\mathsf {PrA}\) formula \(\mathsf {PrF}\). In the worst case \(|\mathsf {PrF}| = |\delta |\), as \(g ^{ s }_{Q}\) could be the disjunction of almost all guards in \({\mathcal {M}}\). The complexity of checking the truth of a \(\mathsf {PrA}\)-formula depends not just on its size, but, more precisely, on the number of quantifier alternations and the number of quantified variables in any quantifier block (cf. [12]). In our case, the maximum number of quantifier alternations is 4, while the number of variables in any quantifier block is at most \(n+1\). By applying results from [11] (cf. also [12]), these yield a worst-case complexity \(\varSigma _3^{\textsf {EXP}}\), or more precisely \(\text {STA}(*, 2^{|\delta |^{O(1)}} , 3)\) when the model is not fixed, or at least n is unbounded, but it drops to \(\text {STA}(*, {|\delta |^{O(1)}}, 3)\) when n is fixed.

Thus, the number of variables and the quantifier alternation depth in \(\mathsf {PrF}\)-formulas crucially affect the complexity of model checking \({\mathcal {L}}_{\textsc {hdmas}} ^{\mathsf {NF}}\)-formulae. We can distinguish the following cases in which the complexity is lower:

  1.

    When no quantifier patterns \(\exists y_1 \forall y_2\) occur, the maximal alternation depth is 3, hence the complexity is reduced to \(\text {STA}(*, 2^{|\delta |^{O(1)}} , 2)\), respectively \(\text {STA}(*, {|\delta |^{O(1)}} , 2)\).

  2.

    If no quantification \(\forall y_2\) is allowed, but the number of uncontrollable agents is a parameter, the maximal alternation depth is 2, hence the complexity is reduced to \(\text {STA}(*, 2^{|\delta |^{O(1)}} , 1)\), respectively \(\text {STA}(*, {|\delta |^{O(1)}} , 1)\).

  3.

    In the case when the number of either controllable or uncontrollable agents is fixed or bounded, the resulting \(\mathsf {PrF}\)-formulas become either existential or universal (by replacing the quantifiers over the actions of the bounded set of agents with conjunctions, resp. disjunctions). In these cases, the complexity drops to NP-complete if the number of actions is unbounded, resp. P-complete if that number is fixed or bounded.

6 Concluding remarks

We have proposed and explored a new, generic framework for modelling, formal specification and verification of dynamic multi-agent systems, where agents can freely join and leave during the evolution of the system. We consider indistinguishable agents, so the system evolution is affected only by the numbers of agents performing the available actions. As none of the currently available logics is well-suited for expressing properties of such dynamic models, we have devised a variation of the alternating time temporal logic ATL to specify strategic abilities of coalitions of controllable versus non-controllable agents.

The framework and results presented here are amenable to various extensions, e.g. allowing any \(\mathsf {PrA}\)-formulae as guards in hdmas models; allowing more expressive languages, e.g. with arbitrary LTL or parity objectives, or with somewhat more liberal quantification patterns in \({\mathcal {L}}_{\textsc {hdmas}}\) (e.g., formulae of the type \(\forall y \langle \!\langle {y,y}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \varphi\) and \(\exists y \langle \!\langle {y,y}\rangle \!\rangle _{_{\! }}\, \mathsf {X}\, \varphi\) can be added easily); adding several super-agents with controllable sets of agents, etc. The main technical challenge for some of these extensions would be to lift or extend the model checking procedure for them. Still, in particular, extending the present framework to include any finite number of different agent “types”, with each type having a different protocol, is rather straightforward, as follows. Let us fix a set of agent types \(\{{T_1, \ldots , T_m}\}\). Now each agent belongs to one specific type. Definition 4 will then have \(d_1, \ldots , d_m\) action availability functions, one for each type, so that agents belonging to the same type have the same set of available actions in each system state, but agents belonging to different types might have different available actions. Lastly, the logic will now involve m variables for the controllable agents of each type, and m other variables for the non-controllable ones of each type. The same restrictions on the use of these variables will apply in this extended logic, and the notion of normal form, the technical results related to it, and the model checking algorithm for formulae in normal form extend as expected to the multi-type case.
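The multi-type extension just sketched could, for instance, be represented as follows. This is a hypothetical rendering with illustrative names (not from the paper): each of the m agent types carries its own action-availability function \(d_1, \ldots , d_m\) over a common action set, while agents of the same type remain interchangeable.

```python
from dataclasses import dataclass
from typing import Callable, List, Set

@dataclass
class TypedHdmas:
    """Sketch of a hdmas with m agent types (all names illustrative)."""
    states: Set[str]
    actions: List[str]                        # the common action set Act
    avail: List[Callable[[str], Set[str]]]    # d_1, ..., d_m, one per type

    def available(self, type_idx: int, state: str) -> Set[str]:
        """Actions an agent of type `type_idx` may perform in `state`."""
        return self.avail[type_idx](state)
```

Guards would then range over m vectors of action counters, one per type, and the logic would carry 2m term variables, as described above.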

Of the numerous possible applications we only mention a natural link with the Colonel Blotto games [5, 20], where two players simultaneously distribute military force units across n battlefields, and in each battlefield the player (if any) that has allocated the higher number of units wins. As suggested by our fortress example, our framework can be readily applied to model and solve algorithmically multi-player and multiple-round extensions of Colonel Blotto games, which we leave to future work. More generally, dynamic resource allocation games [3] as well as verification of parameterised fault-tolerance in multi-agent systems [16] seem naturally amenable to applications of the present work.