# State and path coalition effectivity models of concurrent multi-player games

## Abstract

We consider models of multi-player games where abilities of players and coalitions are defined in terms of sets of outcomes which they can effectively enforce. We extend the well-studied state effectivity models of one-step games in two different ways. On the one hand, we develop multiple state effectivity functions associated with different long-term temporal operators. On the other hand, we define and study coalitional path effectivity models where the outcomes of strategic plays are infinite paths. For both extensions we obtain representation results with respect to concrete models arising from concurrent game structures. We also apply state and path coalitional effectivity models to provide alternative, arguably more natural and elegant semantics to the alternating-time temporal logic ATL*, and discuss their technical and conceptual advantages.

### Keywords

Multi-step games · Coalitional effectivity models · Alternating-time temporal logic

## 1 Introduction

A wide variety of multi-player games can be modeled by so-called ‘multi-player game models’ [16, 29], a.k.a. ‘concurrent game models’ [6]. The models can be seen as a generalization of both extensive form games and repeated normal form games. Here, we view them as general models of *multi-step games*. Intuitively, such a game is based on a labelled transition system where every state is associated with a normal form game, with outcomes being possible successor states, and the transitions between states are labelled by tuples of actions,^{1} one for each player. Thus, the outcome of playing the normal form game at any given state is a transition to a new state, that is, to a new normal form game. In the quantitative version of such games, the outcome states are also associated with payoff vectors, while in the version that we consider here, the payoffs are *qualitative*—defined by properties of the outcome states, possibly expressed in a logical language. The players’ objectives in multi-step games can simply be about reaching a desired (‘winning’) state, or they can be more involved, such as forcing a desired *long-term behaviour* (transition path, run), again possibly formalized in a suitable logical language such as the linear time temporal logic LTL.

Various logics for reasoning about coalitional abilities in multi-player games have been proposed and studied in the last two decades—most notably, Coalition Logic (CL) [27] and Alternating-time Temporal Logic (ATL* and its fragment ATL) [6]. Coalition Logic can be seen as a logic for reasoning about abilities of coalitions in one-step games to bring about an outcome *state* with desired properties by means of single actions. On the other hand, ATL and ATL* allow to express statements about multi-step scenarios. For example, the ATL formula \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {F}\varphi \) says that the coalition of players or agents^{2}\(C\) can ensure that \(\varphi \) will become true at some future moment, no matter what the other players do. Likewise, \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {G}\varphi \) expresses that the coalition \(C\) can enforce \(\varphi \) to be always the case. More generally, the ATL* formula \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\gamma \) holds true iff \(C\) has a strategy to ensure that any resulting behavior of the system (i.e., any play of the game) will satisfy the property \(\gamma \).

One way to characterize the abilities of players and coalitions to achieve desirable outcome of the game is in terms of *coalition effectivity functions*, first introduced in cooperative game theory [25]. Intuitively, an effectivity function in a game model assigns, at every state of the model and for every coalition \(C\), the family of sets of possible outcomes \(X\) for which the coalition has a suitable collective action. The collective action must guarantee that the outcome would be in the set \(X\) regardless of what the other players choose to do at that state, i.e., that \(C\) is “effective” for the set \(X\) at that state. This concept is at the core of the “coalition effectivity models” studied in [27] and used there to provide semantics for CL. “Alternating transition systems”, originally used to provide semantics for ATL in [4], are closely related. Building on a result from [30], Pauly obtained in [27] an abstract characterization of “playable” coalition effectivity functions that correspond to the \(\alpha \)-effectivity functions in concrete models of one-step games. Later, that characterization was corrected and completed in the case of infinite state spaces in [19].

In this paper we show how *multi-step games* can be modeled and characterized in terms of effectivity of coalitions with respect to possible outcome states on one hand, and outcome behaviours on the other. We also show how such models can be used to provide conceptually simple and technically elegant semantics for logics of multi-player games such as ATL*. The paper has three main objectives:

- (i) To extend the semantics for CL based on one-step coalitional effectivity to semantics for ATL over state-based coalitional effectivity models;

- (ii) To develop the analogous notion of *coalitional path effectivity*, representing the powers of coalitions in multi-step games to ensure long-term behaviors, and to provide semantics for ATL* based on it;

- (iii) To obtain characterizations of multi-player game models in terms of abstract state and path coalitional effectivity models, analogous to the representation theorems for state effectivity functions cited above.

Path effectivity models have two main advantages. First, they capture the powers of coalitions with respect to *outcome paths* (plays), not just outcome states. Second, a *single* path effectivity function is sufficient to define the powers of coalitions in a multi-step game for all kinds of temporal patterns, through the standard semantics of temporal operators. This point is further supported by the fact that path effectivity models provide a conceptually straightforward semantics for the whole language of ATL* (which is not definable by alternation-free fixpoint operators on the one-step ability). Thus, the path-effectivity based semantics for multi-step games essentially simulates the state-effectivity based semantics for one-shot games. By encapsulating the notion of a play as primitive, it provides a clear and conceptually simple interpretation of the ATL(*) operators. Finally, we argue that path effectivity can just as well be applied to variants of ATL(*) with imperfect information, where even simple modalities do not have fixpoint characterizations [12].

*Motivation* Effectivity functions provide mathematically elegant semantics of interaction between agents, in which properties of interaction are “distilled” and abstracted away from concrete details of implementation. This makes them significantly different from concurrent game models that focus on how concrete actions interfere and give rise to transitions, and how they can be used to build long-term strategies. In contrast, coalitional effectivity models present abilities in a “pure” form. This does not mean that effectivity models are supposed to replace concurrent game models in the semantics of logics like ATL. On the contrary, the two kinds of structures occupy largely different niches. Concrete models of interaction (such as concurrent game models) are more appropriate when one wants to build a model of an *actual system*, and possibly verify some actual requirements in it. Abstract models (such as coalitional effectivity models) serve better when used to investigate properties of *classes of systems*. Moreover, correspondence results between concrete and abstract models reveal structural properties of the former in a way that is difficult to achieve otherwise.

First of all, they characterize the limitations of concrete models. That is, they show which structural conditions must inevitably hold in simple models that are constructed in terms of concrete states, actions, and their combinations.

Secondly, they characterize which abstract patterns of effectivity can be implemented by concrete models.

Thirdly, they characterize classes of models for which the concrete and abstract semantics of strategic logics can be used interchangeably.

Our discussion of games with imperfect information refers to *imperfect information concurrent game models* (iCGM).

*Related work* We study the correspondence between patterns of *coalitional effectivity* and standard models of long-term interaction, which are typically used in the field of multi-agent systems (cf. e.g. [17, 35]). Effectivity models originate from social choice theory [1, 25, 32]. More recently, they gained attention as models of ability in agent systems [27, 28]. On the other hand, multi-agent systems are often modeled by various kinds of transition systems [6, 17, 21, 27] that bear close resemblance to models of multi-step and repeated games from game theory. Multi-player game models (a.k.a. concurrent game structures) are the most typical example here.

Correspondence between “concrete” and “abstract” models of strategic power has been studied in a number of previous works. Characterizations of effectivity in simple cooperative games (voting games) were investigated e.g. in [25, 34]. Peleg and others characterized effectivity patterns arising in surjective normal form game frames [9, 31]. Pauly extended Peleg’s result to general normal form game frames, and provided a logical axiomatization of effectivity in such frames [27, 28]. In our previous work, we pointed out that Pauly’s result was in fact incorrect, and gave the correct characterization of the correspondence, both in structural and logical terms [19]. All the above results refer to one-shot games (either cooperative or noncooperative) where strategies are atomic.

While most models of multi-agent interaction are based on transition systems that resemble normal and/or extensive game frames, there is a smaller group of models that come closer to effectivity functions. In fact, alternating transition systems (ATS) from [5] can be seen as a special case of coalitional effectivity models where the aggregation of individual power into coalitional power is additive. The correspondence between ATS and multi-player game models was studied in [16, 17]. Another class of effectivity-like models is provided by *stit*, i.e., the logic of “seeing to it that” [7]. Models of “strategic stit” [8, 11, 20, 23] are especially relevant here. In classical stit models [8, 23], choices are primitive objects rather than sets of paths (which in turn are sequences of states constructed by discrete transitions). Still, in the more computation-friendly approaches to stit, choices can be directly mapped to infinite sequences of time moments [11, 20, 22], so they come very close to the effectivity patterns studied in this paper. Depending on the interpretation, they can be seen as classes of *path effectivity functions* or *state effectivity functions*. However, not all effectivity patterns can be represented by stit models. Moreover, some of the patterns that can be represented are not “playable”, i.e., they cannot be obtained in natural multi-step games. We investigate the relationship between stit models and effectivity models in more detail in Sect. 6.2. It is worth noting that, to the best of our knowledge, this is the first formal study of the modeling limitations of stit. Some simulation results connect stit structures to multi-player game models [11] but they focus on their logical rather than structural properties.

This article builds on the preliminary research reported in [18].

*Structure of the paper* The paper is structured as follows. We begin by introducing the basic notions in Sect. 2. In Sect. 3 we develop state-based effectivity models that suffice to define semantics of ATL. The models include three different effectivity functions, one for each basic modality \(\mathrm {X},\mathrm {G},\mathrm {U}\). Then, in Sect. 4 we develop and study effectivity models based on paths. We show how they provide semantics to ATL*, and identify appropriate “playability” conditions, which we use to establish correspondences between powers of coalitions in the abstract models and strategic abilities of coalitions in concurrent game models. Finally, in Sect. 6 we briefly discuss how the path-oriented view can be used to construct an alternative definition of state effectivity, and to facilitate reasoning about games with imperfect information. Moreover, we show an application of our characterization results to the well-known stit models of agency.

## 2 Preliminaries

We begin by introducing some basic game-theoretic and logical notions. In all definitions hereafter, the sets of players, game (outcome) states, and actions available to players are assumed non-empty. Moreover, the set of players is always assumed finite.

### 2.1 Concurrent game structures and models

Strategic games (a.k.a. normal form games) are basic models of non-cooperative game theory [26]. Following the tradition in the qualitative study of games, we focus on abstract game models, where the effect of strategic interaction between players is represented by abstract outcomes from a given set, and players’ preferences are not specified.

**Definition 1**

(*Strategic game*) A *strategic game* is a tuple \(G = ({\mathbb {A}\mathrm {gt}}, \{Act_i\mid i\in {\mathbb {A}\mathrm {gt}}\}, St, o)\), where \({\mathbb {A}\mathrm {gt}}\) is a finite, non-empty set of players, each \(Act_i\) is a non-empty set of actions (strategies) available to player \(i\), \(St\) is a non-empty set of outcome states, and \(o: \prod _{i\in {\mathbb {A}\mathrm {gt}}}Act_i \rightarrow St\) is an outcome function, assigning an outcome state to every profile of actions.

We define coalitional strategies \(\alpha _C\) in \(G\) as tuples of individual strategies \(\alpha _i\) for \(i\in C\), i.e., \(Act_C=\prod _{i\in C}Act_i\).

Strategic games are one-step encounters. They can be generalized to multi-step scenarios, in which every state is associated with a strategic game, as follows.

**Definition 2**

(*Concurrent game structures and models*) A *concurrent game structure* (CGS) (a.k.a. *multi-player game frame* [16, 29]) is a tuple \({\mathcal {M}} = ({\mathbb {A}\mathrm {gt}}, St, Act, d, o)\), where \({\mathbb {A}\mathrm {gt}} = \{a_1,\ldots ,a_k\}\) is a finite, non-empty set of players, \(St\) is a non-empty set of states, \(Act\) is a non-empty set of actions, \(d: {\mathbb {A}\mathrm {gt}}\times St\rightarrow \mathcal {P}({Act})\setminus \{\emptyset \}\) assigns to each player \(a\) and state \(q\) the non-empty set \(d_a(q)\) of actions available to \(a\) at \(q\), and \(o\) is a deterministic transition function assigning a successor state \(o(q,\alpha _{a_1},\ldots ,\alpha _{a_k})\) to every state \(q\) and every tuple of actions \(\alpha _{a_i}\in d_{a_i}(q)\).

A *concurrent game model* (CGM) \(M\) is a CGS endowed with a valuation \(V:St\rightarrow \mathcal {P}({Prop})\) for some fixed set of atomic propositions \(Prop\).

Note that in a CGS all players execute their actions synchronously and the combination of the actions, together with the current state, determines the transition in the CGS. We also observe that a CGS can be seen as a collection of strategic games, each assigned to a different state in the CGS.

**Example 1**

(*A model of aggressive play*) Consider two agents interacting in a common environment, for instance marketing similar products, building up reputation in a social network, or playing the same strategic online game. At any moment, each of them can choose to play aggressively (\(aggr\)) or conservatively (\(cons\)). It is well known that in many games (economic as well as recreational) playing aggressively against a conservative opponent is risky but—if lucky—it can also bring higher profits. Thus, it is usually advisable to play aggressively when one’s situation is relatively bad. If the player’s position is strong, conservative play is usually a better choice.

*Strategies in multi-step games* A *path* in a CGS/CGM is an infinite sequence of states that can result from subsequent transitions in the structure. A *strategy* of a player \(a\) in a CGS/CGM \({\mathcal {M}}\) is a conditional plan that specifies what \(a\) should do in each possible situation. Depending on the type of memory that we assume for the players, a strategy can range from a *memoryless (positional) strategy*, formally represented with a function \(s_a : St\rightarrow Act\), such that \(s_a(q)\in d_a(q)\), to a *perfect recall strategy*, represented with a function \(s_a : St^{+}\rightarrow Act\) such that \(s_a(\langle \dots , q\rangle )\in d_a(q)\), where \(St^{+}\) is the set of *histories*, i.e., finite prefixes of paths in \({\mathcal {M}}\) [6, 33]. The latter corresponds to players with perfect recall of the past states; the former to players whose memory is entirely encoded in the state of the system. A *collective strategy* for a group of players \(C=\{{a_1,...,a_r}\}\) is simply a tuple of strategies \(s_C = \langle {s_{a_1},...,s_{a_r}}\rangle \), one for each player from \(C\). We denote player \(a\)’s component of the collective strategy \(s_C\) by \(s_C[a]\).

The set of *outcome paths* of a collective memoryless strategy \(s_C\) at state \(q\) is defined as: \(out(q,s_C) =\)\(\{ \lambda =q_0,q_1,q_2\ldots \mid q_0=q\) and for each \(i=0,1,\ldots \) there exists \(\langle {\alpha ^{i}_{a_1},\ldots ,\alpha ^{i}_{a_k}}\rangle \) such that \(\alpha ^{i}_{a} \in d_a(q_{i})\) for every \(a\in {\mathbb {A}\mathrm {gt}}\), \(\alpha ^{i}_{a} = s_C[a](q_{i})\) for every \(a\in C\) and \(q_{i+1} = o(q_{i},\alpha ^{i}_{a_1},\ldots ,\alpha ^{i}_{a_k}) \}\).

For a collective perfect recall strategy \(s_C\), the outcome set is defined analogously, except that the strategy is applied to the whole history: \(out(q,s_C) =\)\(\{ \lambda =q_0,q_1,q_2\ldots \mid q_0=q\) and for each \(i=0,1,\ldots \) there exists \(\langle {\alpha ^{i}_{a_1},\ldots ,\alpha ^{i}_{a_k}}\rangle \) such that \(\alpha ^{i}_{a} \in d_a(q_{i})\) for every \(a\in {\mathbb {A}\mathrm {gt}}\), \(\alpha ^{i}_{a} = s_C[a](\langle q_0\ldots , q_{i}\rangle )\) for every \(a\in C\) and \(q_{i+1} = o(q_{i},\alpha ^{i}_{a_1},\ldots ,\alpha ^{i}_{a_k}) \}\).
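As a concrete illustration, the memoryless case can be sketched in Python: the snippet below computes the possible successors under a positional coalition strategy and checks, via graph search, whether every path in \(out(q,s_C)\) stays within a given set of states (i.e., whether the strategy enforces an invariant). The two players, their action names, and the transition function are illustrative assumptions, not taken from the paper.

```python
from itertools import product

# A toy CGS: states, per-player available actions d[player][state],
# and a transition function o(state, action_profile). All names here
# are illustrative, not from the paper.
states = ["q0", "q1"]
players = ["a1", "a2"]
d = {p: {q: ["aggr", "cons"] for q in states} for p in players}

def o(q, profile):
    # both aggressive in q0 -> move to q1, otherwise stay put
    if q == "q0" and profile == ("aggr", "aggr"):
        return "q1"
    return q

def successors(q, s_C):
    """Possible next states at q when coalition members follow the
    memoryless strategy s_C and the remaining players act freely."""
    choices = [[s_C[p](q)] if p in s_C else d[p][q] for p in players]
    return {o(q, prof) for prof in product(*choices)}

def enforces_invariant(q, s_C, safe):
    """Check that every path in out(q, s_C) stays inside `safe`, by
    exploring the transition graph restricted by s_C."""
    seen, frontier = set(), {q}
    while frontier:
        state = frontier.pop()
        if state not in safe:
            return False
        seen.add(state)
        frontier |= successors(state, s_C) - seen
    return True
```

Since a memoryless strategy induces a finite restricted transition graph, invariance-type objectives reduce to simple reachability over that graph.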

### 2.2 Abstract models of coalitional effectivity

**Definition 3**

(*Effectivity functions and models*) A *local effectivity function*\(E: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})})\) associates a family of sets of states with each set of players. A *global effectivity function*\(E: St\times \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})})\) assigns a local effectivity function to every state \(q\in St\). We will use the notations \(E(q)(C)\) and \(E_q(C)\) interchangeably.

Finally, a *coalitional effectivity model* consists of a global effectivity function, plus a valuation of atomic propositions.

Intuitively, the elements of \(E(C)\) correspond to choices of collective actions available to the coalition \(C\): if \(X \in E(C)\), then by choosing \(X\) the coalition \(C\) can force the outcome of the game to be in \(X\). Hereafter, the elements of \(E(C)\) will be called *(collective) action choices* of the coalition \(C\). The idea to represent a choice (of a collective action) of a coalition by the set of possible outcomes which can be effected by that choice was also captured by the notions of “coalition effectivity models” [27] and “alternating transition systems” [4].

**Definition 4**

(*True playability* [19, 27]) A local effectivity function \(E\) is *truly playable* iff the following hold:

- *Outcome Monotonicity:* \(X \in E(C)\) and \(X \subseteq Y\) implies \(Y \in E(C)\);
- *Liveness:* \(\emptyset \notin E(C)\);
- *Safety:* \(St\in E(C)\);
- *Superadditivity:* if \(C \cap D = \emptyset \), \(X \in E(C)\) and \(Y \in E(D)\), then \(X \cap Y \in E(C \cup D)\);
- \({\mathbb {A}\mathrm {gt}}\)*-Maximality:* \(\overline{X} \not \in E(\emptyset )\) implies \(X \in E({\mathbb {A}\mathrm {gt}})\);
- *Determinacy:* if \(X \in E({\mathbb {A}\mathrm {gt}})\) then \(\{x\} \in E({\mathbb {A}\mathrm {gt}})\) for some \(x \in X\).
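For finite state spaces, the six conditions can be checked mechanically. The following Python sketch assumes an encoding of a local effectivity function as a dictionary from frozenset coalitions to families of frozensets; it tests each condition by brute force.

```python
from itertools import combinations

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def truly_playable(E, agents, states):
    """Check the six true-playability conditions for a local effectivity
    function E: a dict mapping frozenset coalitions to sets of frozensets."""
    St, Agt = frozenset(states), frozenset(agents)
    coalitions, outcome_sets = subsets(agents), subsets(states)
    for C in coalitions:
        fam = E[C]
        # Outcome monotonicity: fam is closed under supersets
        if any(X in fam and X <= Y and Y not in fam
               for X in outcome_sets for Y in outcome_sets):
            return False
        # Liveness and safety
        if frozenset() in fam or St not in fam:
            return False
    # Superadditivity
    for C in coalitions:
        for D in coalitions:
            if not (C & D):
                if any(X & Y not in E[C | D] for X in E[C] for Y in E[D]):
                    return False
    # Agt-maximality: complement(X) not in E(empty) implies X in E(Agt)
    if any(St - X not in E[frozenset()] and X not in E[Agt]
           for X in outcome_sets):
        return False
    # Determinacy: every X in E(Agt) contains a singleton in E(Agt)
    return all(any(frozenset({x}) in E[Agt] for x in X) for X in E[Agt])
```

The brute-force check is exponential in \(|St|\) and \(|{\mathbb {A}\mathrm {gt}}|\), which is unproblematic for the small examples considered here.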

\(\alpha \)-*Effectivity* Each strategic game \(G\) can be canonically associated with an effectivity function, called the \(\alpha \)-effectivity function of \(G\) and denoted with \(E^{\alpha }_{G}\) [27].

**Definition 5**

(\(\alpha \)-*effectivity in strategic games*) For a strategic game G, the (coalitional) \(\alpha \)-*effectivity function*\(E^{\alpha }_{G}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})})\) is defined as follows: \(X \in E^{\alpha }_{G}(C)\) if and only if there exists \(\sigma _C\) such that for all \(\sigma _{\overline{C}}\) we have \(o(\sigma _C,\sigma _{\overline{C}}) \in X\).

**Example 2**

The \(\alpha \)-effectivity for \(M_1,q_0\) is:

\(E(\{1,2\})\ =\ \{\{q_0\}, \{q_1\}, \{q_2\}, \{q_0,q_1\}, \{q_0,q_2\}, \{q_1,q_2\}, \{q_0,q_1,q_2\}\}\);

\(E(\{1\})\ =\ E(\{2\})\ =\ \{\{q_0,q_1\}, \{q_0,q_2\}, \{q_0,q_1,q_2\}\}\);

\(E(\emptyset )\ =\ \{\{q_0,q_1,q_2\}\}\).

Clearly, \(E\) is truly playable.
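The computation behind Definition 5 can be sketched for finite games as follows. The two-player game and its outcome table below are our own illustrative assumptions, not the paper's model \(M_1\).

```python
from itertools import combinations, product

def up_closure(family, states):
    """Close a family of outcome sets under supersets (outcome monotonicity)."""
    all_sets = [frozenset(c) for r in range(len(states) + 1)
                for c in combinations(states, r)]
    return {Y for Y in all_sets if any(X <= Y for X in family)}

def alpha_effectivity(players, actions, outcome, states):
    """Definition 5 on a finite strategic game: X is in E_alpha(C) iff some
    joint action of C guarantees the outcome lands in X, whatever the rest do."""
    E = {}
    for r in range(len(players) + 1):
        for combo in combinations(players, r):
            C = frozenset(combo)
            guaranteed = set()
            # fix C's part of the profile, let the opponents range freely
            for sC in product(*(actions[p] if p in C else [None] for p in players)):
                outs = {outcome(full)
                        for full in product(*(actions[p] for p in players))
                        if all(a is None or a == b for a, b in zip(sC, full))}
                guaranteed.add(frozenset(outs))
            E[C] = up_closure(guaranteed, states)
    return E

# A small illustrative 2-player game (not the paper's M_1):
players = ["1", "2"]
actions = {"1": [0, 1], "2": [0, 1]}
table = {(0, 0): "q0", (0, 1): "q1", (1, 0): "q1", (1, 1): "q2"}
E = alpha_effectivity(players, actions, table.get, ["q0", "q1", "q2"])
```

As the theory predicts, the resulting function is outcome monotone and superadditive; e.g., here each player alone can only guarantee the two-element sets determined by her own action.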

### 2.3 Logical reasoning about multi-step games

The Alternating-time Temporal Logic ATL* [4, 6] is a multimodal logic with strategic modalities \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\) and temporal operators \(\mathrm {X}\) (“at the next state”), \(\mathrm {G}\) (“always from now on”), and \(\mathrm {U}\) (“until”).

There are two sorts of formulae in ATL*: *state formulae* and *path formulae*, respectively defined by the following grammar:

$$\begin{aligned} \varphi&::= \mathsf {p} \mid \lnot \varphi \mid \varphi \wedge \varphi \mid \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\gamma , \\ \gamma&::= \varphi \mid \lnot \gamma \mid \gamma \wedge \gamma \mid \mathrm {X}\gamma \mid \mathrm {G}\gamma \mid \gamma \mathrm {U}\gamma , \end{aligned}$$

where \(\mathsf {p}\) ranges over \(Prop\) and \(C\) over \(\mathcal {P}({{\mathbb {A}\mathrm {gt}}})\).

Let \(M\) be a CGM, \(q\) a state in \(M\), and \(\lambda = q_{0}, q_{1}, \ldots \) a path in \(M\). For every \(i \in {\mathbb {N}}\) we denote \(\lambda [i] = q_{i}\); \(\lambda [0..i]\) is the prefix \(q_{0}, q_{1}, \ldots , q_{i}\), and \(\lambda [i..\infty ]\) is the respective suffix of \(\lambda \).

Truth of state and path formulae is defined by the following clauses (the remaining boolean and temporal cases are standard):

\(M, q\models \mathsf {{p}}\) iff \(q\in V(\mathsf {{p}})\), for \(\mathsf {{p}}\in Prop\);

\(M, q\models \varphi _1\wedge \varphi _2\) iff \(M, q\models \varphi _1\) and \(M, q\models \varphi _2\);

\(M, q\models \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\gamma \) iff there is a strategy \(s_C\) for the players in \(C\) such that for each path \(\lambda \in out(q,s_C)\) we have \(M,\lambda \models \gamma \);

\(M,\lambda \models \gamma _1\wedge \gamma _2\) iff \(M,\lambda \models \gamma _1\) and \(M,\lambda \models \gamma _2\);

\(M,\lambda \models \mathrm {G}\gamma \) iff \(M,\lambda [i..\infty ]\models \gamma \) for every \(i\ge 0\); and

\(M,\lambda \models \gamma _1\mathrm {U}\gamma _2\) iff there is \(i\) such that \(M,\lambda [i..\infty ]\models \gamma _2\) and \(M,\lambda [j..\infty ]\models \gamma _1\) for all \(0\le j< i\).

**Example 3**

Consider again the model of aggressive vs. conservative play from Fig. 1. No player has a sure strategy to reach a good position in the game if they start from a bad position. Also, no player can ensure on their own that the other player will eventually be at a disadvantage. On the other hand, if the player’s initial position is good, she can keep being well off forever; the right strategy is to always play conservatively. Moreover, when both players are in a good position, each of them can maintain the good position of the other one in the next moment (by playing aggressively). Finally, if the players cooperate then they control the game completely.

*ATL and CL as fragments of* ATL* The most important fragment of ATL* is ATL, where each strategic modality is directly followed by a single temporal operator. Thus, the semantics of ATL can be given entirely in terms of states, cf. [6] for details. Consequently, for ATL the two notions of strategy (memoryless vs. perfect recall) yield the same semantics.

Furthermore, the Coalition Logic (CL) from [27] can be seen as the fragment of ATL involving only booleans and operators \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {X}\), and thus it inherits the semantics of ATL on CGMs [16].

## 3 State effectivity in multi-step games

Recall that the semantics of CL over coalitional effectivity models [27] is given by the clause: \(M, q\models \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {X}\varphi \) iff \(\varphi ^M\in E_q(C)\), where \(\varphi ^M = \{q'\in St\mid M, q'\models \varphi \}\).

The semantics of ATL has never been explicitly defined in terms of abstract effectivity models. An informal outline of such semantics has been suggested in [17], essentially by representation of the modalities \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {G}\) and \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {U}\) as appropriate fixpoints of \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {X}\), cf. also [6, 16]. In this section, we properly extend state-based effectivity models to provide semantics for ATL. For that, as pointed out earlier, a different effectivity function will be needed for each temporal pattern.
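For finite CGMs, the fixpoint representation mentioned above yields a standard model-checking procedure: \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {G}\varphi \) is computed as a greatest fixpoint, and \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\varphi _1\mathrm {U}\varphi _2\) as a least fixpoint, of the one-step ability operator. A minimal Python sketch over an assumed toy CGS follows; the transition table and player names are ours, not from the paper.

```python
from itertools import product

# An illustrative three-state CGS with two players "a" and "b".
states = ["q0", "q1", "q2"]
acts = [0, 1]
table = {("q0", 0, 0): "q0", ("q0", 0, 1): "q1",
         ("q0", 1, 0): "q1", ("q0", 1, 1): "q2"}

def step(q, pa, pb):
    return table.get((q, pa, pb), q)   # q1 and q2 are sink states

def pre(C, X):
    """One-step ability: states where coalition C (a tuple over {"a","b"})
    has a joint action forcing the successor into X."""
    good = set()
    for q in states:
        for joint in product(acts, repeat=len(C)):
            fixed = dict(zip(C, joint))
            outs = {step(q, fixed.get("a", fa), fixed.get("b", fb))
                    for fa in acts for fb in acts}
            if outs <= X:
                good.add(q)
                break
    return good

def ability_G(C, phi):
    """<<C>>G phi as the greatest fixed point of Z -> phi & pre(C, Z)."""
    Z = set(phi)
    while True:
        Znew = set(phi) & pre(C, Z)
        if Znew == Z:
            return Z
        Z = Znew

def ability_U(C, phi1, phi2):
    """<<C>> phi1 U phi2 as the least fixed point of Z -> phi2 | (phi1 & pre(C, Z))."""
    Z = set()
    while True:
        Znew = set(phi2) | (set(phi1) & pre(C, Z))
        if Znew == Z:
            return Z
        Z = Znew
```

Note that `pre` is exactly the semantic clause for \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {X}\), read off the \(\alpha \)-effectivity of the one-step game at each state.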

We note that an effectivity function for the “always” modality \(\mathrm {G}\) was already constructed in [27]. Moreover, an effectivity function for reachability, i.e., for the \(\mathrm {F}\) modality, has recently been presented in [3]. Our construction here is algebraic and differs significantly from both of these approaches. Moreover, it allows us to cover all kinds of effectivity that can be addressed in ATL (though not in ATL*!).

### 3.1 Operations on state effectivity functions

First, we define basic operations and relations on effectivity functions, reflecting their meaning as operations on games.

**Definition 6**

*(Operations and relations on effectivity functions)* Let \({E},{F}:St\times \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})})\) be effectivity functions for the set of agents \({\mathbb {A}\mathrm {gt}}\) on a state space \(St\). Then:

- *Composition* of the effectivity functions \({E},{F}\) is the effectivity function \({E}\circ {F}\) where, for all \(q\in St\), \(Y\subseteq St\) and \(C \in \mathcal {P}({{\mathbb {A}\mathrm {gt}}})\), it holds that \(Y\in ({E}\circ {F})_{q}(C)\) iff there exists a subset \(Z\) of \(St\) such that \(Z\in {E}_{q}(C)\) and \(Y\in {F}_{z}(C)\) for every \(z\in Z\).
- *Union* of the effectivity functions \({E},{F}\) is the effectivity function \({E}\cup {F}\) where, for all \(q\in St\), \(Y\subseteq St\) and \(C \in \mathcal {P}({{\mathbb {A}\mathrm {gt}}})\), it holds that \(Y\in ({E}\cup {F})_{q}(C)\) iff \(Y\in {E}_{q}(C)\) or \(Y\in {F}_{q}(C)\). *Intersection* of effectivity functions is defined analogously.
- Likewise, we define union and intersection of any family of effectivity functions. For instance, given a family of effectivity functions \(\{E^{j}\}_{j\in J}\), its union is the effectivity function $$\begin{aligned} E = \bigcup _{j\in J} E^{j} \end{aligned}$$ such that \(Y\in E_{q}(C)\) iff there exists a \(j\in J\) such that \(Y\in E^{j}_{q}(C)\), for all \(q\in St\), \(Y\subseteq St\) and \(C \in \mathcal {P}({{\mathbb {A}\mathrm {gt}}})\).
- *Inclusion* of effectivity functions: \({E}\subseteq {F}\) iff \({E}_{q}(C)\subseteq {F}_{q}(C)\) for every \(q\in St\) and \(C\subseteq {\mathbb {A}\mathrm {gt}}\).

Lastly, the

*idle effectivity function*\(I\) is defined as follows:\(I_{q}(C)=\{Y\subseteq St\mid q\in Y \}\) for every \(q\in St\) and \(C\subseteq {\mathbb {A}\mathrm {gt}}.\)

Hereafter, we assume that \(\circ \) has a stronger binding power than \(\cup \) and \(\cap \).
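On a finite state space, the operations of Definition 6 can be implemented directly, representing a global effectivity function as a nested dictionary from states to coalitions to families of outcome sets. The encoding below is an illustrative sketch, not a prescribed data structure.

```python
from itertools import combinations

def powerset(states):
    """All subsets of the state space, as frozensets."""
    s = list(states)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def compose(E, F, states, coalitions):
    """(E o F)_q(C): Y is achievable iff some Z in E_q(C) has Y in F_z(C)
    for every z in Z."""
    PS = powerset(states)
    return {q: {C: {Y for Y in PS
                    if any(all(Y in F[z][C] for z in Z) for Z in E[q][C])}
                for C in coalitions} for q in states}

def union(E, F, states, coalitions):
    """(E ∪ F)_q(C) = E_q(C) ∪ F_q(C)."""
    return {q: {C: E[q][C] | F[q][C] for C in coalitions} for q in states}

def idle(states, coalitions):
    """The idle effectivity function: I_q(C) = { Y ⊆ St | q ∈ Y }."""
    PS = powerset(states)
    return {q: {C: {Y for Y in PS if q in Y} for C in coalitions}
            for q in states}
```

For an outcome monotone \(E\), one can check on small instances that composing with the idle function on either side returns \(E\) itself, in line with Proposition 1 below.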

**Proposition 1**

- 1.
\({E}\circ I = I\circ {E} = {E}\).

- 2.
If \({F}_{1}\subseteq {F}_{2}\) then \({E}\circ {F}_{1}\subseteq {E}\circ {F}_{2}\).

- 3.
\(({E}\cup {F})\circ {G} = ({E}\circ {G})\cup ({F}\circ {G})\).

- 4.
\(({E}\cap {F})\circ {G} = ({E}\circ {G})\cap ({F}\circ {G})\).

*Proof*

Routine. \(\square \)

*Remark 1*

- 1.
We note that, e.g., item 2 in Proposition 1 does not require the effectivity function to be outcome monotone. However, we will only apply this proposition to outcome monotone effectivity functions, so the monotonicity assumption is unproblematic.

- 2.
The identities \({E}\circ ({F}\cup {G}) = ({E}\circ {F})\cup ({E}\circ {G})\) and \({E}\circ ({F}\cap {G}) = ({E}\circ {F})\cap ({E}\circ {G})\) are not valid. However, by Proposition 1.2, the inclusions \({E}\circ ({F}\cup {G}) \supseteq ({E}\circ {F})\cup ({E}\circ {G})\) and \({E}\circ ({F}\cap {G}) \subseteq ({E}\circ {F})\cap ({E}\circ {G})\) hold.

**Definition 7**

For any effectivity function \({E}\) we define inductively the effectivity functions \({E}^{(n)}\) and \({E}^{[n]}\) as follows:

\({E}^{(0)}=I\), \({E}^{(n+1)}=I\cup {E}\circ E^{(n)}\),

\({E}^{[0]}=I\), \({E}^{[n+1]}=I\cap {E}\circ E^{[n]}\).

**Proposition 2**

For every \(n\ge 0:\)\({E}^{(n)}\subseteq {E} ^{(n+1)} \) and \({E}^{[n+1]}\subseteq {E}^{[n]}\).

*Proof*

Routine, by induction on \(n\). \(\square \)

**Definition 8**

Given an effectivity function \({E}: St\times \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})})\), the *weak iteration* of \({E}\) is the function \({E}^{(*)}=\bigcup \limits _{k=0}^{\infty } {E}^{(k)}\), i.e., \(Y\in {E}_{q}^{(*)}(C)\) iff \(\exists n.\ Y\in {E}_{q}^{(n)}(C)\).

The *strong iteration* of \({E}\) is the function \({E}^{{[*]}}=\bigcap \limits _{k=0}^{\infty } {E}^{[k]}\),

i.e., \(Y\in {E}_{q}^{{[*]}}(C)\) iff \(\forall n.\ Y\in {E}_{q}^{[n]}(C)\).
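On a finite state space, the weak and strong iterations can be computed by iterating the corresponding operators until stabilization, since the sequences \({E}^{(n)}\) and \({E}^{[n]}\) are monotone (Proposition 2). The following self-contained Python sketch uses an assumed deterministic effectivity function purely for illustration.

```python
from itertools import combinations

states = ["q0", "q1", "q2"]
Cs = [frozenset(), frozenset({"a"})]
PS = [frozenset(c) for r in range(len(states) + 1)
      for c in combinations(states, r)]

def idle():
    """I_q(C) = { Y | q in Y }."""
    return {q: {C: frozenset(Y for Y in PS if q in Y) for C in Cs}
            for q in states}

def compose(E, F):
    return {q: {C: frozenset(
                Y for Y in PS
                if any(all(Y in F[z][C] for z in Z) for Z in E[q][C]))
                for C in Cs} for q in states}

def union(E, F):
    return {q: {C: E[q][C] | F[q][C] for C in Cs} for q in states}

def inter(E, F):
    return {q: {C: E[q][C] & F[q][C] for C in Cs} for q in states}

def weak_iteration(E):
    """E^(*): iterate F -> I ∪ E∘F from I until stabilization (finite St)."""
    I = idle()
    cur = I
    while True:
        nxt = union(I, compose(E, cur))
        if nxt == cur:
            return cur
        cur = nxt

def strong_iteration(E):
    """E^[*]: iterate F -> I ∩ E∘F from I until stabilization (finite St)."""
    I = idle()
    cur = I
    while True:
        nxt = inter(I, compose(E, cur))
        if nxt == cur:
            return cur
        cur = nxt

# Illustrative deterministic effectivity: E_q(C) = { Z | succ[q] in Z }
succ = {"q0": "q1", "q1": "q2", "q2": "q2"}
E = {q: {C: frozenset(Z for Z in PS if succ[q] in Z) for C in Cs}
     for q in states}
```

For this deterministic \(E\), the weak iteration at \(q\) collects all sets intersecting the forward orbit of \(q\), and the strong iteration all sets containing the whole orbit, matching the intended "eventually" vs. "henceforth" readings.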

**Proposition 3**

Unions, intersections, compositions, and weak and strong iterations preserve outcome-monotonicity of effectivity functions.

*Proof*

Routine. \(\square \)

**Proposition 4**

- 1.
\({E}^{(*)}\) is the least fixed point of the monotone operator \({\mathfrak {F}}_{w}\) defined by \({\mathfrak {F}}_{w}({F})=I\cup E\circ F.\)

- 2.
\({E}^{{[*]}}\) is the greatest fixed point of the monotone operator \({\mathfrak {F}}_{q}\) defined by \({\mathfrak {F}}_{q}({F})= I\cap E \circ F\).

*Proof*

(1) First, we show by induction on \(k\) that \({E}^{(k)}\subseteq I\cup E\circ {E}^{(*)}\) for every \(k\). Indeed, \({E}^{(0)}=I\subseteq I\cup E\circ {E}^{(*)}\), and \({E}^{(k+1)}=I\cup E\circ {E}^{(k)}\subseteq I\cup E\circ {E}^{(*)}\) by the inductive hypothesis and Proposition 1. Thus, \({E}^{(*)}\subseteq I\cup E\circ {E}^{(*)}\).

For the converse inclusion, let \(Y\in (I\cup E\circ E^{(*)})_{q}(C)\). If \(Y\in I_{q}(C)\), then \(Y\in {E}_{q}^{(*)}(C)\) by definition. Suppose \(Y\in ({E}\circ {E}^{(*)})_{q}(C)\). Then there is \(Z\in {E}_{q}(C)\) such that for every \(z\in Z\), \(Y\in {E}^{(*)}_{z}(C)\), hence \(Y\in {E}_{z}^{(k_{z})}(C)\) for some \(k_{z}\ge 0\). Let \(m=\max \limits _{z\in Z}k_{z}\). Then, by Proposition 2, \(Y\in {E}_{z}^{(m)}(C)\) for every \(z\in Z\). Therefore, \(Y\in ({E}\circ {E}^{(m)})_{q}(C)\subseteq {E}_{q}^{(m+1)}(C)\subseteq {E}_{q}^{(*)}(C)\).

Thus, \({E}^{(*)}\) is a fixed point of the operator \({\mathfrak {F}}_{w}\).

Now, suppose that \({F}\) is a fixed point of \({\mathfrak {F}}_{w}\), i.e., \(F=I\cup E\circ F\). We show by induction on \(k\) that \({E}^{(k)}\subseteq {F}\) for every \(k\). Indeed, \({E}^{(0)}=I\subseteq I\cup E\circ F=F\). Suppose \({E}^{(k)}\subseteq {F}\). Then \({E}^{(k+1)}= I\cup E\circ {E}^{(k)}\subseteq I\cup E\circ F=F\) by the inductive hypothesis and Proposition 1. Thus, \({E}^{(*)}\subseteq {F}\). Therefore, \({{E}^{(*)}}\) is the least fixed point of \({\mathfrak {F}}_{w}\).

(2) The argument is dually analogous. \(\square \)

The proof above only works when the state space \(St\) is finite. However, the operators \({\mathfrak {F}}_{w}\) and \({\mathfrak {F}}_{q}\) are monotone in the general case and the result above suggests that \({E}^{(*)}\) and \({E}^{{[*]}}\) can be defined in general as the respective fixed points.

### 3.2 Binary effectivity functions

Binary effectivity functions will be used to provide a fixed point characterization of, and semantics for, the binary temporal connective Until.

**Definition 9**

Given a set of players \({\mathbb {A}\mathrm {gt}}\) and a set of states \(St\), a *local binary effectivity function* for \({\mathbb {A}\mathrm {gt}}\) on \(St\) is a mapping \({U}:\mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})\times \mathcal {P}({St})})\) associating with each set of players a family of pairs of outcome sets.

A *global binary effectivity function* associates a local binary effectivity function with each state from \(St\).

Now we define some global binary effectivity functions and operations and relations on them.

**Definition 10**

*Left-idle*binary effectivity function \(\mathbf {L}:St{\times }\mathcal {P}({{\mathbb {A}\mathrm {gt}}}) {\rightarrow } \mathcal {P}({\mathcal {P}({St}){\times }\mathcal {P}({St})})\), where \(\mathbf {L}_{q}(C)=\{(X,Y)\mid q\in X\}\) for any \(q\in St\) and \(C\subseteq {\mathbb {A}\mathrm {gt}}\). Respectively,*right-idle*binary effectivity function \(\mathbf {R}\) is defined by \(\mathbf {R}_{q}(C)=\{(X,Y)\mid q\in Y\}\) for any \(q\in St\) and \(C\subseteq {\mathbb {A}\mathrm {gt}}.\)*Union*of binary effectivity functions \({U},{W}:St\times \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})\times \mathcal {P}({St})})\) is the binary effectivity function \({U}\cup {W}\) where \((X,Y)\in ({U}\cup {W})_{q}(C)\) iff \((X,Y)\in {U}_{q}(C)\) or \((X,Y)\in {V}_{q}(C)\).*Intersection*of binary effectivity functions is defined analogously.*Right projection*of \(U\) is the unary effectivity function \(E\) such that \(E_q(C) = \{ Y \mid (X,Y)\in U_q(C) \text{ for } \text{ some } X \in \mathcal {P}({St}) \} \}\) for all \(q,C\).Likewise, we define union, intersection, and right projection of any family of binary effectivity functions.

- The *composition* of a unary effectivity function \({E}\) with a binary effectivity function \({U}\) is the binary effectivity function \({E}\circ {U}\) such that \((X,Y)\in ({E}\circ {U})_{q}(C)\) iff there exists a subset \(Z\) of \(St\) such that \(Z\in {E}_{q}(C)\) and \((X,Y)\in {U}_{z}(C)\) for every \(z\in Z\).
- *Inclusion* of binary effectivity functions: \({U}\subseteq {W}\) iff \({U}_{q}(C)\subseteq {W}_{q}(C)\) for every \(q\in St\) and \(C\subseteq {\mathbb {A}\mathrm {gt}}\).
- *Binary iteration.* For any unary effectivity function \({E}\) we define the binary effectivity functions \({E}^{\left\{ n\right\} }\), \(n\ge 0\), inductively as follows: \({E}^{\left\{ 0\right\} }=\mathbf {R}\); \({E}^{\left\{ n+1\right\} } =\mathbf {R}\cup (\mathbf {L}\cap {E}\circ {E}^{\left\{ n\right\} } )\). Then, the *binary iteration* of \({E}\) is defined as the binary effectivity function \({E}^{\left\{ *\right\} } =\bigcup \limits _{k=0}^{\infty } {E} ^{\left\{ k\right\} }\), i.e. \((X,Y)\in {E}_{q}^{\left\{ *\right\} } (C)\) iff \((X,Y)\in {E}_{q}^{\left\{ n\right\} } (C)\) for some \(n\).
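On a finite state space, the binary iteration can be computed directly from the operations of Definition 10 and the recursion above. The Python sketch below is only an illustration: it fixes a single coalition (so effectivity functions become maps from states to sets of pairs of outcome sets) and uses a toy two-state unary effectivity function.

```python
from itertools import chain, combinations

St = [0, 1]

def powerset(xs):
    return [frozenset(s) for s in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

SUBSETS = powerset(St)
PAIRS = [(X, Y) for X in SUBSETS for Y in SUBSETS]

# Left-idle and right-idle binary effectivity functions (Definition 10).
L = {q: frozenset(p for p in PAIRS if q in p[0]) for q in St}
R = {q: frozenset(p for p in PAIRS if q in p[1]) for q in St}

def compose(E, U):
    """(X,Y) in (E o U)_q iff some Z in E_q has (X,Y) in U_z for every z in Z."""
    return {q: frozenset(p for p in PAIRS
                         if any(all(p in U[z] for z in Z) for Z in E[q]))
            for q in St}

def step(E, U):
    """The operator F_b of Proposition 5: R union (L intersect E o U)."""
    EU = compose(E, U)
    return {q: R[q] | (L[q] & EU[q]) for q in St}

def binary_iteration(E):
    """E^{*}: iterate U_{n+1} = step(E, U_n) from U_0 = R to a fixed point."""
    U = R
    while (U2 := step(E, U)) != U:
        U = U2
    return U

# Toy unary effectivity: from each state, the coalition can force the other one.
E = {0: frozenset({frozenset({1})}), 1: frozenset({frozenset({0})})}
U_star = binary_iteration(E)
```

Reading \((X,Y)\in U^{\{*\}}_q\) as "the coalition can maintain \(X\) until reaching \(Y\)", the computed `U_star` confirms that from state 0 the coalition can stay in \(\{0\}\) until reaching \(\{1\}\), while no Until-pair with an empty target set is ever achievable.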

**Definition 11**

A binary effectivity function \({U}\) is *outcome-monotone* if every \({U}_{q}(C)\) is upwards closed, i.e. \((X,Y)\in {U}_{q}(C)\) and \(X\subseteq X^{\prime }\), \(Y\subseteq Y^{\prime }\) imply \((X^{\prime } ,Y^{\prime } )\in {U}_{q}(C)\).

**Proposition 5**

For any finite state space \(St\) and unary effectivity function \({E}\) in it, \({E}^{\left\{ *\right\} }\) is the least fixed point of the monotone operator \({\mathfrak {F}}_{b}\) defined by \({\mathfrak {F}}_{b}({U})=\mathbf {R}\cup (\mathbf {L}\cap E\circ U).\)

*Proof*

Analogous to the proof of Proposition 4. \(\square \)

Again, the operator \({\mathfrak {F}}_{b}\) is monotone for any (finite or infinite) state space \(St\) and the result above suggests how \({E}^{\left\{ *\right\} }\) can be defined in general.

The next result follows immediately from Propositions 3, 4 and 5.

**Proposition 6**

\({E}^{(*)}\), \({E}^{[*]}\) and \({E}^{\left\{ *\right\} }\) are outcome-monotone. Moreover, \({E}^{(*)}\) is the right projection of \({E}^{\left\{ *\right\} }\).

### 3.3 State-based effectivity models for ATL

The semantics of ATL can now be given in terms of models that are more abstract and technically simpler than CGM.

**Definition 12**

A *state-based effectivity frame (SEF) for ATL* is a tuple \(({\mathbb {A}\mathrm {gt}},St,\mathbf {E},\mathbf {G},\mathbf {U})\), where \(\mathbf {E}\) and \(\mathbf {G}\) are global unary effectivity functions and \(\mathbf {U}\) is a global binary effectivity function for \({\mathbb {A}\mathrm {gt}}\) on \(St\).

A *state-based effectivity model (SEM) for ATL* is a SEF plus a valuation of atomic propositions.

That is, an effectivity frame/model for ATL includes not one but *three* effectivity functions: one for each temporal modality in the language.

**Definition 13**

A SEF \(({\mathbb {A}\mathrm {gt}},St,\mathbf {E},\mathbf {G},\mathbf {U})\) is *standard* iff

- 1.
\(\mathbf {E}\) is truly playable,

- 2.
\(\mathbf {G}=\mathbf {E}^{\mathbf {[*]}}\),

- 3.
\(\mathbf {U}=\mathbf {E}^{\left\{ *\right\} }\).

A SEM based on a SEF \({\mathcal {F}}\) is *standard* if \({\mathcal {F}}\) is standard.

### 3.4 State-based effectivity semantics for ATL

*Extending \(\alpha \)-effectivity to SEM* Given a CGM \(M=({\mathbb {A}\mathrm {gt}}, St, Act, d, o, V)\), we construct its corresponding SEM as follows: \(\mathrm {SEM}(M) = ({\mathbb {A}\mathrm {gt}},St,\mathbf {E},\mathbf {G},\mathbf {U})\) where \(\mathbf {E}_q = E_{M,q}^\alpha \) for all \(q\in St\), \(\mathbf {G}=\mathbf {E}^{\mathbf {[*]}}\), and \(\mathbf {U}=\mathbf {E}^{\left\{ *\right\} }\).

**Example 4**

The “always” effectivity in state \(q_0\) of the model of aggressive vs. conservative play from Example 1 can be written as follows:

\(\mathbf {G}_{q_0}(\emptyset ) = \{\{q_0,q_1,q_2\}\}\), \(\mathbf {G}_{q_0}(\{1\}) = \mathbf {G}_{q_0}(\{2\}) = \{\{q_0,q_1\}, \{q_0,q_2\}, \{q_0,q_1,q_2\}\}\),

\(\mathbf {G}_{q_0}(\{1,2\}) = \{\{q_0\}, \{q_0,q_1\}, \{q_0,q_2\}, \{q_0,q_1,q_2\}\}\).

The next result easily follows from Theorem 1:

**Theorem 2**

(Representation Theorem) A state effectivity model \({\mathcal {M}}\) for ATL is standard iff there exists a CGM \(M\) such that \({\mathcal {M}}=\mathrm {SEM}(M)\).

Moreover, we note that the ATL semantics in CGMs and in their associated standard SEMs coincide.

**Theorem 3**

For every CGM \(M\), state \(q\) in \(M\), and ATL formula \(\varphi \), we have that \(M,q \models \varphi \) iff \(\mathrm {SEM}(M),q \models \varphi \).

*Proof*

Routine, by structural induction on formulae. \(\square \)

**Corollary 1**

Any ATL formula \(\varphi \) is valid (resp., satisfiable) in concurrent game models iff \(\varphi \) is valid (resp., satisfiable) in standard state-based effectivity models.

## 4 Coalitional path effectivity

State-based effectivity models for ATL partly characterize coalitional powers for achieving long-term objectives. However, the applicability of such models is limited by the fact that they characterize effectivity with respect to outcome states, while effectivity for outcome *paths (i.e., plays)* is only captured when such paths are described by the specific temporal patterns definable in ATL. Thus, in particular, state-based effectivity models are not suitable for providing semantics of the whole ATL*.

In this section we aim at getting to the core of the notion of effectivity in multi-step games, regardless of the temporal pattern that defines the winning condition, by re-defining it in terms of outcome *paths*, rather than states. The idea is natural: every collective strategy of the grand coalition in a multi-step game determines a unique path (play) through the state space of the game. Consequently, the outcome of following an individual or coalitional strategy in such game is a set of paths (plays) that can result from execution of the strategy, depending on the moves of the remaining players. Hence, powers of players and coalitions in multi-step games can be characterized by sets of sets of paths. Our main conceptual motivation is precisely that a strategy of a player, or a collective strategy of a coalition, determines a set of paths (plays), not states, which can be effected by such strategy. Viewing outcomes of a strategy as infinite paths seems appropriate for reasoning about repeated (or extensive) games that run in infinitely many steps.

We also claim that the notion of path effectivity captures adequately the meaning of strategic operators in ATL(*). Moreover, it provides correct semantics for the whole ATL*, and not only its limited fragment ATL.

### 4.1 Path effectivity functions, frames and models

**Definition 14**

(*Path effectivity function*) Let \({\mathbb {A}\mathrm {gt}}\) be a set of players, and \(St\) a set of states. A *path* in \(St\) is an infinite sequence of states, i.e., an element of \(St^\omega \). A *path effectivity function* is a mapping \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\) that assigns to each coalition a non-empty family of sets of paths.

The intuition is analogous to that for state effectivity: the inclusion of a set of paths \({\mathcal {X}}\) in \(\mathcal {E}(C)\) means that the coalition \(C\) can choose a strategy that ensures that the game will develop along one of the paths in \({\mathcal {X}}\).

Note that the definition above refers to global effectivity, in the sense that \({\mathcal {X}}\in \mathcal {E}(C)\) can (in fact, must) include paths starting from different states. Local path effectivity (for each initial state separately) is easily extractable from the global one. This is in line with the concept of a strategy as a complete conditional plan: in particular, the strategy must prescribe collective actions of the coalition from all possible initial states of the game.

By analogy with identifying action choices as sets of outcome states in state effectivity models, we refer to the elements of \(\mathcal {E}(C)\) for a path effectivity function \(\mathcal {E}\) as *(global) strategic choices* of the coalition \(C\). The intuition is that every strategic choice \({\mathcal {F}} \in \mathcal {E}(C)\) is the set of paths in \(St\) that \(C\) can enforce when playing the chosen collective strategy represented by \({\mathcal {F}}\).

Not every sequence of states is a *feasible* path in a given concrete model (i.e., a CGM): only those that follow the transitions in the model are. Likewise, for an abstract path effectivity function \(\mathcal {E}\), it is not required that all the sequences of states appear in \(\mathcal {E}\). We denote the set of feasible paths in \(\mathcal {E}\) by \(\mathsf {Paths}_{\mathcal {E}}\), and henceforth consider effectivity *over the set of feasible paths* \(\mathsf {Paths}_{\mathcal {E}}\).

Hereafter, we will assume that \(\mathcal {E}\) captures the *outcome monotone effectivity*, i.e., it collects the actual outcome paths of choices available to \(C\), and then it takes all their supersets, i.e., closes under upwards monotonicity.

**Definition 15**

(*Path effectivity frames/models*) A *path effectivity frame (PEF)* is a structure \({\mathcal {F}} = ({\mathbb {A}\mathrm {gt}},St,\mathcal {E})\) consisting of a set of players \({\mathbb {A}\mathrm {gt}}\), a set of states \(St\) and a path effectivity function \(\mathcal {E}\) on these. A *path effectivity model (PEM)*\({\mathcal {M}}\) expands a PEF with a valuation of the propositions \(V: Prop\rightarrow \mathcal {P}({St})\).

*Notation* Clearly, not every path effectivity frame corresponds to a concrete game structure. To capture "playability" conditions for path effectivity functions and frames, we will need some additional notation. Let \(q\in St\), \(h,h'\in St^{+}\), \({\mathcal {X}}\in \mathcal {P}({St^\omega })\), and \(\mathcal {E}\) be a path effectivity function. We define the following:

\(h \preceq h'\) if \(h'\) is an extension of \(h\);

\({\mathcal {X}}[i] \,{:}{=}\, \{\lambda [i] \mid \lambda \in {\mathcal {X}}\}\) collects states that appear on the \(i\)th position of paths in \({\mathcal {X}}\);

\({\mathcal {X}}(q) \,{:}{=}\,\{\lambda \in {\mathcal {X}}\mid \lambda [0]= q\}\) selects the paths in \({\mathcal {X}}\) starting from \(q\);

\({\mathcal {X}}(h) \,{:}{=}\,\{\lambda \mid \lambda \!\in \! {\mathcal {X}}, \ \text{ and } \ \lambda [0..k]= h \ \text{ for } \text{ some } k \}\) is the set of paths in \({\mathcal {X}}\) starting with \(h\);

\({\mathcal {X}}| h \,{:}{=}\,\{\lambda [k..\infty ] \mid \lambda \!\in \! {\mathcal {X}}\ \text{ and } \ \lambda [0..k]= h\}\) is the set of suffixes of paths in \({\mathcal {X}}\), extending \(h\);

Consequently, for sets of sets of paths:

\(\mathcal {E}(C)(q) = \{{\mathcal {X}}(q) \mid {\mathcal {X}}\in \mathcal {E}(C)\}\),

\(\mathcal {E}(C)(h) = \{{\mathcal {X}}(h) \mid {\mathcal {X}}\in \mathcal {E}(C)\}\),

\(\mathcal {E}(C)| h = \{{\mathcal {X}}| h \mid {\mathcal {X}}\in \mathcal {E}(C)\}\).

The initial segments \(\lambda [0..k]\) of feasible paths of a path effectivity function \(\mathcal {E}\) will be called *(initial) feasible histories* of \(\mathcal {E}\).
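The path operations above are easy to mechanize for *ultimately periodic* paths, represented finitely as a pair (prefix, cycle). The following Python sketch is an illustration under that representational assumption only, not part of the formal development:

```python
def at(path, i):
    """i-th state of an ultimately periodic path (prefix, cycle)."""
    prefix, cycle = path
    return prefix[i] if i < len(prefix) else cycle[(i - len(prefix)) % len(cycle)]

def positions(X, i):          # X[i]: states on the i-th position of paths in X
    return {at(p, i) for p in X}

def starting_from(X, q):      # X(q): paths in X starting from q
    return {p for p in X if at(p, 0) == q}

def with_history(X, h):       # X(h): paths in X starting with the history h
    return {p for p in X if all(at(p, i) == h[i] for i in range(len(h)))}

def drop(path, k):
    """The suffix path[k..], again as a (prefix, cycle) pair."""
    prefix, cycle = path
    if k < len(prefix):
        return (prefix[k:], cycle)
    shift = (k - len(prefix)) % len(cycle)
    return ((), cycle[shift:] + cycle[:shift])

def suffixes(X, h):           # X|h: suffixes lambda[k..] where lambda[0..k] = h
    return {drop(p, len(h) - 1) for p in with_history(X, h)}

# Two paths over St = {0,1,2}:  0 1 2 1 2 ...  and  0 0 0 ...
X = {((0,), (1, 2)), ((), (0,))}
```

Note that, since \(\lambda [0..k] = h\) has length \(k+1\), the suffix \(\lambda [k..\infty ]\) overlaps \(h\) in its last state, which `suffixes` reflects by dropping only the first \(|h|-1\) states.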

### 4.2 Generating state effectivity from path effectivity functions and vice versa

We will now define two natural mappings between path and state effectivity functions. First, a path effectivity function can be transformed into a state effectivity function by extracting from paths their initial segments (the “opening moves”). Secondly, a state effectivity function can be transformed into a path effectivity function by “unfolding” all possible paths that arise from a given subset of state transitions.

**Definition 16**

(*State projection*) The *(successor) state projection* of a global strategic choice \({\mathcal {X}}\subseteq St^\omega \) is the mapping \({\mathcal {X}}^{S}: St\rightarrow \mathcal {P}({St})\), called the *(global) action choice* corresponding to \({\mathcal {X}}\), defined as follows:

\({\mathcal {X}}^{S}(q) \,{:}{=}\, \{\lambda [1] \mid \lambda \in {\mathcal {X}}\ \text{ and } \ \lambda [0]=q\}\).

The *state projection* of a path effectivity function \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({Paths})})\) is the global state effectivity function \(\mathcal {E}^{S}: St\times \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})})\) that assigns to every \(C\subseteq {\mathbb {A}\mathrm {gt}}\) and \(q \in St\) the family \(\mathcal {E}^{S}_{q}(C) \,{:}{=}\, \{{\mathcal {X}}^{S}(q) \mid {\mathcal {X}}\in \mathcal {E}(C)\}\).

\({\mathcal {X}}^{S}(q)\) includes all the states that are immediate successors of \(q\) at the beginning

of a path in \({\mathcal {X}}\). Thus, \({\mathcal {X}}^{S}\) assigns possible successors to each state, so it can be seen as a representation of a possible transition relation between states in \(St\). Moreover, \(\mathcal {E}^{S}\) collects all such transition relations that “approximate” the choices available in \(\mathcal {E}\).

Note that if \({\mathcal {X}}\) is *suffix closed*, i.e., contains all paths \(\lambda [i..\infty ]\) for every path \(\lambda \in {\mathcal {X}}\), then the definition of the state projection of \({\mathcal {X}}\) is equivalent to \({\mathcal {X}}^{S}(q) = \{\lambda [i{+}1] \mid \lambda \in {\mathcal {X}}\ \text{ and } \ \lambda [i]=q \ \text{ for some } i\}\).

**Definition 17**

(*Path closure*) Given a global action choice \(X : St\rightarrow \mathcal {P}({St})\), we define its *path closure* \(X^{P}\subseteq St^\omega \) as follows:

\(X^{P} \,{:}{=}\, \{\lambda \in St^\omega \mid \lambda [i{+}1]\in X(\lambda [i]) \ \text{ for every } i\ge 0\}\).

That is, \(X^{P}\) collects the paths generated by the transition function represented by \(X\). Moreover, \(E^{P}\) is the outcome-monotone closure of the family of strategic choices generated this way from the state effectivity function \(E\).
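For ultimately periodic paths, both state projection and membership in a path closure are finitely checkable: a path has only finitely many distinct transitions, so it suffices to test the first \(|prefix|+|cycle|\) of them. The Python sketch below is illustrative only, representing an ultimately periodic path as a (prefix, cycle) pair:

```python
def at(path, i):
    """i-th state of an ultimately periodic path (prefix, cycle)."""
    prefix, cycle = path
    return prefix[i] if i < len(prefix) else cycle[(i - len(prefix)) % len(cycle)]

def state_projection(X):
    """X^S: maps q to the immediate successors of q at the start of paths in X."""
    proj = {}
    for p in X:
        proj.setdefault(at(p, 0), set()).add(at(p, 1))
    return proj

def in_path_closure(path, choice):
    """Is the ultimately periodic path in choice^P, i.e., does it satisfy
    path[i+1] in choice(path[i]) for every i?  Checking the first
    len(prefix) + len(cycle) transitions covers every distinct one."""
    prefix, cycle = path
    return all(at(path, i + 1) in choice.get(at(path, i), set())
               for i in range(len(prefix) + len(cycle)))

# A global action choice over St = {0,1}: from 0 one may go to 0 or 1, from 1 to 0.
choice = {0: {0, 1}, 1: {0}}
```

For instance, the path \(0\,1\,0\,1\,0\ldots \), i.e. `((0,), (1, 0))`, follows `choice`, whereas the path \(1\,1\,1\ldots \) does not.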

### 4.3 Path effectivity in concurrent game structures

In this section, we propose an analogue of \(\alpha \)-effectivity from Sect. 2.2 for distilling abstract path effectivity from CGMs. Not every set of feasible paths in a CGM is a feasible choice for a coalition, and the powers of players and coalitions in a game crucially depend on their available strategies. There are different notions of strategy, e.g., depending on the amount of memory that the players can use. We will parameterize our concept of effectivity in multi-step games with a type (class) of strategies. Two types of strategies were already introduced in Sect. 2.1, namely deterministic memoryless and deterministic perfect recall strategies, and we will focus on these classes henceforth. However, one can easily imagine other types of strategies, such as bounded memory strategies, finite memory strategies, nondeterministic strategies, and so on. Our concept of effectivity in multi-step games is well defined for all these classes, under the mild conditions set out below.

**Definition 18**

(*Normal class of strategies*) A class \(\varSigma \) of individual and coalitional strategies is *normal* iff:

- 1.
Every player has at least one strategy in \(\varSigma \),

- 2.
Coalitional strategies are obtained by freely combining the individual strategies of the participating players,^{3} and

- 3.
No strategy in \(\varSigma \) (individual or coalitional) ever yields an empty set of successor states.

It is easy to see that the classes of *perfect recall* and *memoryless* strategies from Sect. 2.1 are normal. We will refer to them with \(\mathfrak {FulMem}\) and \(\mathfrak {NoMem}\), respectively.

For a CGM \(M\), by \(\mathsf {Paths}_{M}\) we denote the set of all paths feasible in \(M\), that is, the set of infinite sequences of states that can be obtained by subsequent transitions in \(M\). We leave out the details of the formal definition.

**Definition 19**

(*Path \(\varSigma \)-effectivity*) Let \(M\) be a CGM and \(\varSigma = \bigcup _{C\subseteq {\mathbb {A}\mathrm {gt}}}\varSigma _C\) be a normal set of coalitional strategies in \(M\). The *path \(\varSigma \)-effectivity function of* \(M\) assigns to every coalition \(C\) the outcome-monotone closure of the family of sets of outcome paths of the collective strategies in \(\varSigma _C\), i.e. \(\mathcal {E}^{\varSigma }_{M}(C) = \{{\mathcal {X}}\subseteq \mathsf {Paths}_{M} \mid \mathit {out}_{M}(\sigma _C)\subseteq {\mathcal {X}} \ \text{ for some } \sigma _C\in \varSigma _C\}\), where \(\mathit {out}_{M}(\sigma _C)\) denotes the set of paths in \(M\) consistent with \(\sigma _C\).

Specifically, we denote by \(\mathcal {E}^\mathfrak {FulMem} _M\) and \(\mathcal {E}^\mathfrak {NoMem} _M\) the effectivity of coalitions respectively for perfect recall strategies and for memoryless strategies in \(M\).

**Example 5**

Below we collect some observations that will be used further.

**Proposition 7**

- 1.
Every coalition has a collective strategy, and therefore for every state \(q\) in \(M\) it can enforce at least one set of outcome paths starting from \(q\). *(Safety)*

- 2.
For any coalition \(C\) and state \(q\) in \(M\), every coalitional strategy produces a non-empty set of outcome paths starting from \(q\). *(Liveness)*

- 3.
All the supersets of a choice in \(\mathcal {E}_M^\varSigma (C)\) belong to \(\mathcal {E}_M^\varSigma (C)\), too. *(Outcome-Monotonicity)*

- 4.
\(\mathcal {E}^\varSigma _M(\emptyset )\) is a singleton. More precisely, \(\mathcal {E}^\varSigma _M(\emptyset ) = \{\mathsf {Paths}_{M}\}\).

- 5.
Every two disjoint coalitions can join their chosen coalitional strategies to enforce the intersection of the outcome paths enforced by each of the coalitions following its respective strategy. Together with outcome-monotonicity, this implies that, if \(C\cap D=\emptyset \), \({\mathcal {X}}\in \mathcal {E}^\varSigma _M(C)\), and \(\mathcal {Y}\in \mathcal {E}^\varSigma _M(D)\), then \({\mathcal {X}}\cap \mathcal {Y}\in \mathcal {E}^\varSigma _M(C\cup D)\). *(Superadditivity)*

- 6.
\(\mathcal {E}^\varSigma _M({\mathbb {A}\mathrm {gt}})\) is the outcome-monotone closure of the family of all the sets of paths that contain a path from \(\mathsf {Paths}_{M}\) starting from each initial state. Consequently, \(\mathcal {E}^\varSigma _M({\mathbb {A}\mathrm {gt}}) = \{ {\mathcal {X}}\subseteq \mathsf {Paths}_{M} \mid {\mathcal {X}}(q)\ne \emptyset \text { for every }q\in St\}\). *(Determinacy)*

*Proof*

Straightforward. \(\square \)

### 4.4 Path effectivity semantics of ATL*

**Example 6**

Let us apply the path effectivity semantics of ATL* to our model of aggressive vs. conservative play \(M_1\) from Fig. 1. Analogously to the standard semantics, the grand coalition \(\{1,2\}\) is effective for reaching \(q_i\) and staying there forever, for every \(i=0,1,2\), under both types of strategies. This can be demonstrated e.g. by the choice \(\{q_0(q_i)^\omega \}\) that belongs to \(\mathcal {E}_{M_1}^\mathfrak {FulMem} (\{1,2\})\) as well as \(\mathcal {E}_{M_1}^\mathfrak {NoMem} (\{1,2\})\).

## 5 Characterizing path effectivity functions

The path effectivity semantics for ATL* defined above is very general, and allows for reasoning about quite abstract—one may even say contrived—patterns of effectivity. Here we identify the characteristic properties of path effectivity functions arising in CGSs, and define an analogue of the notion of (truly) playable state effectivity functions. We begin with generic conditions that must apply to any pattern of effectivity, regardless of the type of strategies being used. Then, we proceed to characterize additional conditions that are necessary (and sufficient) in the special cases of memoryless and perfect recall strategies.

### 5.1 General playability conditions

**Definition 20**

*(Playability in path effectivity)* Let \(\mathsf {Paths}\subseteq St^\omega \). A path effectivity function \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\) is *truly playable* over the set of feasible paths \(\mathsf {Paths}\) if it satisfies the following conditions:

- *P-Safety:* \(\mathcal {E}(C)(q)\) is non-empty for every \(C\subseteq {\mathbb {A}\mathrm {gt}},q\in St\).
- *P-Liveness:* \(\emptyset \notin \mathcal {E}(C)(q)\) for every \(C\subseteq {\mathbb {A}\mathrm {gt}},q\in St\).
- *P-Outcome Monotonicity:* For every \(C\subseteq {\mathbb {A}\mathrm {gt}}\) the set \(\mathcal {E}(C)\) is upwards closed: if \({\mathcal {X}}\in \mathcal {E}(C)\) and \({\mathcal {X}}\subseteq \mathcal {Y}\subseteq \mathsf {Paths}\) then \(\mathcal {Y}\in \mathcal {E}(C)\).
- *P-Superadditivity:* For every \(C,D\subseteq {\mathbb {A}\mathrm {gt}}\), if \(C \cap D = \emptyset \), \({\mathcal {X}}\in \mathcal {E}(C)\) and \(\mathcal {Y}\in \mathcal {E}(D)\), then \({\mathcal {X}}\cap \mathcal {Y}\in \mathcal {E}(C \cup D)\).
- *P-\(\emptyset \)-Minimality:* \(\mathcal {E}(\emptyset )\) is the singleton \(\{\mathsf {Paths}\}\).
- *P-Determinacy:* For every \(q\in St\), if \({\mathcal {X}}\in \mathcal {E}({\mathbb {A}\mathrm {gt}})\) then \(\{\lambda \} \in \mathcal {E}({\mathbb {A}\mathrm {gt}})(q)\) for some \(\lambda \in {\mathcal {X}}(q)\).^{4}

We note that the playability conditions above are variants of true playability for path-based effectivity.
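Since these conditions are purely set-theoretic, they can be checked mechanically on small finite instances. The Python sketch below is illustrative only: it replaces \(St^\omega \) by a finite universe of abstract path tokens, where a token's first component plays the role of its initial state.

```python
from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return [frozenset(s) for s in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def truly_playable(E, Agt, St, Paths):
    """Check the conditions of Definition 20 on a finite universe of
    abstract 'path' tokens (token[0] acts as the initial state)."""
    sel = lambda X, q: frozenset(l for l in X if l[0] == q)
    paths = frozenset(Paths)
    for C in powerset(Agt):
        if not E[C]:                                    # P-Safety
            return False
        for X in E[C]:
            if any(not sel(X, q) for q in St):          # P-Liveness
                return False
            for Y in powerset(Paths):                   # P-Outcome Monotonicity
                if X <= Y and Y not in E[C]:
                    return False
        for D in powerset(Agt):
            if C & D:
                continue
            if any(X & Y not in E[C | D]                # P-Superadditivity
                   for X in E[C] for Y in E[D]):
                return False
    if E[frozenset()] != {paths}:                       # P-emptyset-Minimality
        return False
    grand = frozenset(Agt)
    for X in E[grand]:                                  # P-Determinacy
        for q in St:
            if not any(sel(Z, q) == frozenset({l})
                       for l in sel(X, q) for Z in E[grand]):
                return False
    return True

# A tiny instance: one player, two states, three abstract path tokens.
St, Agt = {0, 1}, {1}
p0, p1, p2 = (0, 'x'), (0, 'y'), (1, 'x')
Paths = frozenset({p0, p1, p2})
grand_choices = {frozenset({p0, p2}), frozenset({p1, p2}), Paths}
E = {frozenset(): {Paths}, frozenset({1}): grand_choices}
```

Here `E` satisfies all six conditions, in line with Proposition 7(6): the grand coalition's choices are exactly the sets containing a token from each initial state; removing the upward closure (e.g., keeping only `{p0, p2}`) violates P-Outcome Monotonicity.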

**Proposition 8**

Let \(\mathsf {Paths}\subseteq St^\omega \) be a set of feasible paths.^{5} Then:

- 1.
If the path effectivity function \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\) is truly playable over \(\mathsf {Paths}\) then its state projection \(\mathcal {E}^{S}\) is truly playable, too.

- 2.
If the state effectivity function \(E: St\times \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})})\) is truly playable then its path closure \(E^{P}\) is truly playable over \(\mathsf {Paths}\).

*Proof*

Checking the respective playability conditions is straightforward, and we leave it to the interested reader. \(\square \)

Besides the general conditions in Definition 20, we need additional conditions which are specific to the underlying class of strategies, and relate local choices with global strategies in path effectivity frames.

### 5.2 Path effectivity with memoryless strategies

Here we will obtain an abstract characterization of the path effectivity functions in concurrent game structures corresponding to memoryless strategies.

#### 5.2.1 Preparation

**Definition 21**

(*State-transition closed choices and effectivity functions*) A global choice \({\mathcal {X}}\) of a path effectivity function \(\mathcal {E}\) is *state-transition closed* iff \(({\mathcal {X}}^{S})^{P}= {\mathcal {X}}\). That is, \({\mathcal {X}}\) coincides with the set of paths that follow the state-based transition relation projected from \({\mathcal {X}}\).

Respectively, \(\mathcal {E}\) is *state-transition closed* iff \((\mathcal {E}^{S})^{P}= \mathcal {E}\).

Clearly, every path effectivity function generated by memoryless strategies of any coalition \(C\) in any CGS is state-transition closed. Moreover, the set of paths in any global choice \({\mathcal {X}}\) determined by memoryless strategies of a coalition in a CGS \(M\) corresponds to the set of *all paths* along a transition relation in \(M\), suitably restricted by these memoryless strategies. By a result of Emerson [15], every such set of paths is precisely characterized by three simple closure conditions, defined below.

**Definition 22**

(*Closure conditions for sets of paths*) A set of paths \({\mathcal {X}}\) in a state space \(St\) is:

- 1.
*suffix closed* if every suffix path \(\lambda [i..\infty ]\) of a path in \({\mathcal {X}}\) belongs to \({\mathcal {X}}\);

- 2.
*fusion closed* if for all \(\lambda ,\lambda ' \in {\mathcal {X}}\) such that \(\lambda [i] = \lambda '[0]\), the “fusion path” \(\lambda ''\) such that \(\lambda ''[0..i] = \lambda [0..i]\) and \(\lambda ''[i..\infty ] = \lambda '[i..\infty ]\) belongs to \({\mathcal {X}}\);

- 3.
*limit closed* if for every path \(\lambda \), if there is a sequence of paths \(\{\lambda _{i}\}_{i\in {\mathbb {N}}}\) in \({\mathcal {X}}\) such that \(\lambda [0..i] = \lambda _{i}[0..i]\) for every \(i\in {\mathbb {N}}\), then \(\lambda \) belongs to \({\mathcal {X}}\), too.

We obtain the following characterization of state-transition closed global choices of a path effectivity function \(\mathcal {E}\).

**Proposition 9**

A global choice \({\mathcal {X}}\) of a path effectivity function \(\mathcal {E}\) is state-transition closed iff it is suffix, fusion, and limit closed.

*Proof*

As proved in [15], a set of paths in a state space \(St\) is suffix, fusion, and limit closed iff it is the set of all paths along some transition relation in \(St\). Thus, every state-transition closed global choice satisfies these closure conditions. Conversely, if a global choice \({\mathcal {X}}\) satisfies these closure conditions then it is the set of paths generated by some transition relation \(R\) in \(St\). Because of the suffix closure, \(R\) is precisely the state projection of \({\mathcal {X}}\), hence \({\mathcal {X}}\) is state-transition closed. \(\square \)

**Definition 23**

(*State-transition closed core of* \(\mathcal {E}\)) The *state-transition closed core of* a path effectivity function \(\mathcal {E}\) is the path effectivity function \(\mathcal {E}^{core}\) that selects only the state-transition closed choices from \(\mathcal {E}\), i.e.:

\(\mathcal {E}^{core}(C) \,{:}{=}\, \{{\mathcal {X}}\in \mathcal {E}(C) \mid ({\mathcal {X}}^{S})^{P}= {\mathcal {X}}\}\).

Intuitively, if all players in \(C\) are following a collective memoryless strategy, while the others are free to execute any available actions, then the same set of possible successor states should be available whenever the system is in state \(q\), regardless of the path that leads to that state. Ideally, these should be exactly the *feasible* successors, i.e., ones that can be effected by a transition consistent with the strategy. Every such “feasible” global choice is by definition state-transition closed. However, Definition 23 allows also for state-transition closed choices that properly extend feasible choices by adding superfluous successor states in a uniform way (that is, the same superfluous successors are added whenever \(q\) occurs).

We note in passing that \(\mathcal {E}^{core}\) is never outcome-monotone except in trivial cases, even if \(\mathcal {E}\) is.

**Lemma 1**

Let \(\mathcal {E}\) be a state-transition closed path effectivity function. Then, for every coalition \(C\), \(\mathcal {E}(C)\) is the outcome-monotone closure of \(\mathcal {E}^{core}(C)\).

*Proof*

First, note that the path closure \(X^{P}\) of any global action choice \(X : St\rightarrow \mathcal {P}({St})\) is state-transition closed.

Now, let \({\mathcal {X}}\in \mathcal {E}(C)\). Then, also \({\mathcal {X}}\in (\mathcal {E}^{S})^{P}(C)\), and hence there must exist \(\mathcal {Y}\subseteq {\mathcal {X}}\) such that \(\mathcal {Y}\) is the path closure of some global action choice in \(\mathcal {E}^{S}\).

Then, \(\mathcal {Y}= (\mathcal {Y}^{S})^{P}\), hence \(\mathcal {Y}\in \mathcal {E}^{core}(C)\). Consequently, \({\mathcal {X}}\) is in the outcome-monotone closure of \(\mathcal {E}^{core}(C)\). The converse direction is analogous. \(\square \)

#### 5.2.2 Characterization

Now we can proceed with our characterization of path effectivity functions that correspond to concurrent game structures. We begin with a proposition that structurally characterizes state-transition closed effectivity functions. Intuitively, \(\mathfrak {NoMem}\)-grounding specifies that every strategic choice is an outcome-monotone extension of some “internally consistent” (that is, state-transition closed) choice. Moreover, \(\mathfrak {NoMem}\)-convexity requires that any consistent collection of “locally applied” strategies for a given coalition \(C\) can be pieced together into a global memoryless strategy for \(C\).

**Proposition 10**

A path effectivity function \(\mathcal {E}\) is state-transition closed if and only if, for every coalition \(C\subseteq {\mathbb {A}\mathrm {gt}}\), it satisfies the following two conditions:

- (\(\mathfrak {NoMem}\)-*Grounding*) \(\mathcal {E}(C)\) is the outcome-monotone closure of \(\mathcal {E}^{core}(C)\), i.e., for every \(\mathcal {Y}\in \mathcal {E}(C)\) there is \({\mathcal {X}}\in \mathcal {E}^{core}(C)\) such that \({\mathcal {X}}\subseteq \mathcal {Y}\).

- (\(\mathfrak {NoMem}\)-*Convexity*) For every family \(\{{\mathcal {X}}^{q} \in \mathcal {E}^{core}(C) \mid q \in St\}\) of state-transition closed global choices, if \(\mathcal {Y}\in \mathcal {E}(C)\) is such that \(\mathcal {Y}(q) = {\mathcal {X}}^{q}(q)\) for every \(q \in St\), then \((\mathcal {Y}^{S})^{P}\in \mathcal {E}^{core}(C)\).

Equivalently, the \(\mathfrak {NoMem}\)-Convexity condition can be formulated as follows: For every family \(\{{\mathcal {X}}^{q} \in \mathcal {E}^{core}(C) \mid q \in St\}\) of state-transition closed global choices, we have that \(\big (\big (\bigcup _{q\in St}{\mathcal {X}}^{q}(q)\big )^{S}\big )^{P}\in \mathcal {E}^{core}(C)\).

*Proof*

**“**\(\Rightarrow \)**”:** Let \(\mathcal {E}(C)\) be state-transition closed. Then, \(\mathfrak {NoMem}\)-Grounding holds by Lemma 1. Moreover, take any family \(\{{\mathcal {X}}^{q} \in \mathcal {E}^{core}(C) \mid q \in St\}\) of state-transition closed global choices. For every \(q\in St\), the set of immediate successors of the initial state \(q\) in \({\mathcal {X}}^q(q)\) is in \(({\mathcal {X}}^q)^{S}\). Consider the global action choice \(Y\) such that, for every \(q\in St\), \(Y(q)=({\mathcal {X}}^q)^{S}\). Clearly, \(Y\in \mathcal {E}^{S}(C)\), hence \(Y^{P}\in (\mathcal {E}^{S})^{P}(C)\). Finally, we observe that \(Y = \big (\bigcup _{q\in St}{\mathcal {X}}^{q}(q)\big )^{S}\) and \((\mathcal {E}^{S})^{P}(C)= \mathcal {E}(C)\) by assumption. Thus, \(\big (\big (\bigcup _{q\in St}{\mathcal {X}}^{q}(q)\big )^{S}\big )^{P}\in \mathcal {E}(C)\). Since that choice is closed by construction, it must also be in \(\mathcal {E}^{core}(C)\), which concludes this part of the proof.

**“**\(\Leftarrow \)**”:** Let \(\mathcal {E}\) be \(\mathfrak {NoMem}\)-grounded and \(\mathfrak {NoMem}\)-convex, and let \({\mathcal {X}}\in \mathcal {E}(C)\). Then, there is \(\mathcal {Y}\subseteq {\mathcal {X}}\) which is state-transition closed, by \(\mathfrak {NoMem}\)-Grounding. But then also \(\mathcal {Y}\in (\mathcal {E}^{S})^{P}(C)\), hence \({\mathcal {X}}\in (\mathcal {E}^{S})^{P}(C)\) because path closure is closed under supersets.

Conversely, let \({\mathcal {X}}\in (\mathcal {E}^{S})^{P}(C)\). Then, it is a superset of the path closure of a global action choice generated from a combination of state projections of strategic choices \(\{{\mathcal {X}}^{q} \mid q \in St\}\) in \(\mathcal {E}(C)\). More precisely: \({\mathcal {X}}\supseteq \big (\big (\bigcup _{q\in St}{\mathcal {X}}^{q}(q)\big )^{S}\big )^{P}\). By \(\mathfrak {NoMem}\)-grounding, for each \({\mathcal {X}}^{q}\), there must be a state-transition closed strategic choice \(\mathcal {Y}^{q}\in \mathcal {E}^{core}(C)\) such that \(\mathcal {Y}^{q}\subseteq {\mathcal {X}}^{q}\). Now, take the family \(\{\mathcal {Y}^{q} \mid q \in St\}\). By \(\mathfrak {NoMem}\)-convexity, we get that \(\big (\big (\bigcup _{q\in St}\mathcal {Y}^{q}(q)\big )^{S}\big )^{P}\in \mathcal {E}(C)\). Since (i) \(\bigcup _{q\in St}\mathcal {Y}^{q}(q) \subseteq \bigcup _{q\in St}{\mathcal {X}}^{q}(q)\), (ii) the operations of state projection and path closure are monotonic wrt sets of outcomes from the effectivity functions, and (iii) \(\mathcal {E}(C)\) is closed under supersets, we finally obtain that \({\mathcal {X}}\in \mathcal {E}(C)\). \(\square \)

**Theorem 4**

(\(\mathfrak {NoMem}\)-Representation theorem) A path effectivity function \(\mathcal {E}\) over a set of feasible paths \(\mathsf {Paths}\) equals the path effectivity function with memoryless strategies \(\mathcal {E}^\mathfrak {NoMem} _M\) for some concurrent game structure \(M\) if and only if \(\mathsf {Paths}\) is state-transition closed and \(\mathcal {E}\) is truly playable and state-transition closed.

*Proof*

By Proposition 10 it suffices to prove that \(\mathcal {E}\) is representable in concurrent game structures with memoryless strategies iff \(\mathsf {Paths}\) is state-transition closed and \(\mathcal {E}\) is truly playable, \(\mathfrak {NoMem}\)-grounded, and \(\mathfrak {NoMem}\)-convex.

“\(\Rightarrow \)”: Take any CGS \(M\) and its path effectivity function \(\mathcal {E}^\mathfrak {NoMem} _M\). State-transition closedness of \(\mathsf {Paths}_M\) is obvious. Further, we observe that \(\mathcal {E}^\mathfrak {NoMem} _M\) is the path closure of the state \(\alpha \)-effectivity function of \(M\), i.e., \(\mathcal {E}^\mathfrak {NoMem} _M = (E_M^\alpha )^{P}\). Thus, by Proposition 8(2) \(\mathcal {E}^\mathfrak {NoMem} _M\) must be truly playable. Therefore, every choice \(\mathcal {Y}\in \mathcal {E}^\mathfrak {NoMem} _M(C)\) is a superset of some state-transition closed choice \({\mathcal {X}}\) generated by some collective memoryless strategy of \(C\), and hence \(\mathcal {E}^\mathfrak {NoMem} _M\) is also \(\mathfrak {NoMem}\)-grounded. Finally, for a family of state-transition closed choices \(\{{\mathcal {X}}^{q} \mid q\in St\}\) in \(\mathcal {E}^\mathfrak {NoMem} _M(C)\), let us take \(\widehat{{\mathcal {X}}}^q \subseteq {\mathcal {X}}^q\) to be a choice generated by an actual collective strategy of \(C\) (it must exist by construction of \(\mathcal {E}^\mathfrak {NoMem} _M\)). Let \(\mathcal {Y}= \bigcup _{q\in St}{\mathcal {X}}^{q}(q)\) and \(\widehat{\mathcal {Y}} = \bigcup _{q\in St}\widehat{{\mathcal {X}}}^{q}(q)\). Clearly, \(\widehat{\mathcal {Y}}\) is the set of paths generated by a collective strategy of \(C\) that combines the opening moves from \(\{\widehat{{\mathcal {X}}}^{q}\}\). In consequence, \(\widehat{\mathcal {Y}}\in \mathcal {E}^\mathfrak {NoMem} _M(C)\). Moreover, \(\widehat{\mathcal {Y}}\subseteq \mathcal {Y}\), so \(\mathcal {Y}\in \mathcal {E}^\mathfrak {NoMem} _M(C)\) by the outcome-monotonicity of \(\mathcal {E}^\mathfrak {NoMem} _M\). That proves \(\mathfrak {NoMem}\)-convexity.

“\(\Leftarrow \)”: Suppose that \(\mathsf {Paths}\) is state-transition closed and \(\mathcal {E}\) is truly playable, \(\mathfrak {NoMem}\)-grounded, and \(\mathfrak {NoMem}\)-convex. We construct a representing CGS in four steps.

- 1.

We construct the global state effectivity function \(\mathcal {E}^{S}\) as the state projection of \(\mathcal {E}\) (Definition 16). By Proposition 8(1), \(\mathcal {E}^{S}\) is truly playable.

- 2.
Using the representation theorem in [19] we construct a CGS \(M\) for the same set of agents \({\mathbb {A}\mathrm {gt}}\) and state space \(St\), such that the state effectivity function \(E_M^\alpha \) of \(M\) coincides with \(\mathcal {E}^{S}\).

- 3.
Using \(E_M^\alpha \) we construct the respective path effectivity function \(\mathcal {E}^\mathfrak {NoMem} _M\) as the path closure of \(E_M^\alpha \), according to Definition 17.

- 4.

Finally, we show that \(\mathcal {E}^\mathfrak {NoMem} _M\) coincides with \(\mathcal {E}\) by using the \(\mathfrak {NoMem}\)-grounding and \(\mathfrak {NoMem}\)-convexity of each of \(\mathcal {E}\) and \(\mathcal {E}^\mathfrak {NoMem} _M\). For that we fix any coalition \(C\) and prove both inclusions:
\( \mathcal {E}(C) \subseteq \mathcal {E}^\mathfrak {NoMem} _M(C)\): Take any global choice \(\mathcal {Y}\in \mathcal {E}(C)\). Then there is \({\mathcal {X}}\in \mathcal {E}^{core}(C)\) such that \({\mathcal {X}}\subseteq \mathcal {Y}\) (by Grounding). The state projection \({\mathcal {X}}^{S}\) in \(\mathcal {E}^{S}\) is a global action choice in \(\mathcal {E}^{S}(C) = E_M^\alpha (C)\). Thus, there is a collective memoryless strategy \(\sigma _{C}\) for \(C\) in \(M\) that generates an actual global action choice \(\widehat{X} \in E_M^\alpha (C)\) of which \({\mathcal {X}}^{S}\) is an extension (i.e., \(\widehat{X}(q)\subseteq {\mathcal {X}}^{S}(q)\) for every \(q\in St\)). Clearly, the path closure of \(\widehat{X}\) corresponds to the set of actual outcome paths of \(\sigma _C\), therefore \(\widehat{X}^{P}\in \mathcal {E}^\mathfrak {NoMem} _M(C)\). By monotonicity of path closure, we also have that \(\widehat{X}^{P}\subseteq ({\mathcal {X}}^{S})^{P}\). Moreover, \(({\mathcal {X}}^{S})^{P}={\mathcal {X}}\) because \({\mathcal {X}}\) is state-transition closed. Thus, \(\mathcal {Y}\supseteq {\mathcal {X}}\supseteq \widehat{X}^{P}\in \mathcal {E}^\mathfrak {NoMem} _M(C)\), and hence \(\mathcal {Y}\in \mathcal {E}^\mathfrak {NoMem} _M(C)\) by the outcome-monotonicity of \(\mathcal {E}^\mathfrak {NoMem} _M(C)\).

\( \mathcal {E}^\mathfrak {NoMem} _M(C) \subseteq \mathcal {E}(C)\): Take any global choice \(\mathcal {Y}\in \mathcal {E}^\mathfrak {NoMem} _M(C)\). By construction of \(\mathcal {E}^\mathfrak {NoMem} _M\), there must be a global state choice \(X\in E_M^\alpha (C)\), hence \(X\in \mathcal {E}^{S}(C)\) (by step 2 above), that corresponds to an actual collective strategy of \(C\) in \(M\), and \(\mathcal {Y}\) extends the set of paths generated by \(X\) (that is, \(X^{P}\subseteq \mathcal {Y}\)). By the definition of state projection, there must be a strategic choice \(\widehat{{\mathcal {X}}}\in \mathcal {E}(C)\) such that \(X(q) = \{q' \mid \lambda [0]=q\text { and }\lambda [1]=q'\text { for some }\lambda \in \widehat{{\mathcal {X}}}\}\) for every \(q\in St\). Moreover, by the \(\mathfrak {NoMem}\)-groundedness of \(\mathcal {E}\), we have that for every \(q\) there is a state-transition closed \({\mathcal {X}}^q\) such that \({\mathcal {X}}^q(q)\subseteq \widehat{{\mathcal {X}}}(q)\). Take \(\widehat{\mathcal {Y}} = \big (\big (\bigcup _{q\in St}{\mathcal {X}}^{q}(q)\big )^{S}\big )^{P}\). By \(\mathfrak {NoMem}\)-convexity of \(\mathcal {E}\), we have that \(\widehat{\mathcal {Y}}\in \mathcal {E}(C)\). Summarizing, we have \(\mathcal {Y}\supseteq X^{P}= \big (\big (\bigcup _{q\in St}{\mathcal {X}}^{q}(q)\big )^{S}\big )^{P}= \widehat{\mathcal {Y}}\in \mathcal {E}(C)\). Thus, by the outcome-monotonicity of \(\mathcal {E}\), we obtain that \(\mathcal {Y}\in \mathcal {E}(C)\).

### 5.3 Path effectivity with perfect recall strategies

Our characterization of representability for perfect recall strategies is analogous, but now the requirements on a valid strategy are more relaxed. As a consequence, more sets of paths (strategic choices) in a path effectivity function correspond to actual strategies in the CGS. In fact, *every* sequence of collective actions at the states of an infinite play by a group of agents can be regarded as determined by a perfect recall strategy of that group. The difference from the case of memoryless strategies is that every pass through the same state allows a different choice, and hence determines a possibly different set of successor states. That difference can be captured in two different but equivalent ways: by using history-based rather than state-based effectivity functions, or by considering state effectivity functions and memoryless strategies in the *tree unfolding* of the CGS. We will present both approaches.

#### 5.3.1 Preparation

We begin by updating the mappings between path and state effectivity functions, which were defined in Sect. 4.2 with memoryless strategies in mind. First, recall some notation introduced in Sect. 4.1. For \({\mathcal {X}}\subseteq St^\omega , h\in St^+\), we have:

\({\mathcal {X}}(h) \,{:}{=}\, \{\lambda \mid \lambda \in {\mathcal {X}}, \ \text{ and } \ \lambda [0..k]= h \ \text{ where } k = |h| \}\);

\(\mathcal {E}(C)(h) = \{{\mathcal {X}}(h) \mid {\mathcal {X}}\in \mathcal {E}(C)\}\).

**Definition 24**

(*History-based state effectivity functions*) A *history-based state effectivity function* on a state space \(St\) is a mapping \(E^{H}: St^+ \times \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})})\), assigning to every history \(h \in St^+\) and coalition \(C \subseteq {\mathbb {A}\mathrm {gt}}\) a family \(E^{H}(h, C)\) of choices of sets of states. The mappings \(X : St^+ \rightarrow \mathcal {P}({St})\) such that \(X(h) \in E^{H}(h, C)\) for every \(h \in St^+\) are called the *history-based global strategic choices* of the coalition \(C\) in \(E^{H}\).

Every CGS \(M\) for a set of agents \({\mathbb {A}\mathrm {gt}}\) over a state space \(St\) defines the history-based state effectivity function \(E^{H}_{M}\). The function assigns to every coalition \(C\) and history \(h\) the family of possible sets of successors of the last state of \(h\), corresponding to the possible perfect recall strategies of \(C\) that produce \(h\) following a suitable collective behavior of the remaining agents. Thus, every perfect recall strategy \(\sigma _{C}\) determines a history-based global strategic choice of \(C\) that assigns to every history \(h\) the set of possible continuations of \(h\) resulting from the agents in \(C\) following the strategy \(\sigma _{C}\).

**Definition 25**

(*History-based state projection*) For a global strategic choice \({\mathcal {X}}\subseteq St^\omega \), we define its *history-based state projection* as the history-based global action choice \({\mathcal {X}}^{HS}: St^+ \rightarrow \mathcal {P}({St})\) where \({\mathcal {X}}^{HS}(h)\) includes all the states that can appear right after the prefix \(h\) in the set of paths \({\mathcal {X}}\), i.e., \({\mathcal {X}}^{HS}(h) \,{:}{=}\, \{\lambda [|h|+1] \mid \lambda \in {\mathcal {X}}(h)\}\). For a path effectivity function \(\mathcal {E}\), we put \(\mathcal {E}^{HS}(C) \,{:}{=}\, \{{\mathcal {X}}^{HS} \mid {\mathcal {X}}\in \mathcal {E}(C)\}\).

Thus, \({\mathcal {X}}^{HS}\) assigns possible successors to each finite sequence of states that can occur in the system. This can be seen as a representation of a *tree* of possible finite histories admitted by a fixed perfect recall collective strategy of the agents in \(C\). Moreover, \(\mathcal {E}^{HS}\) collects all such trees that can be “extracted” from the strategic choices of \(C\) in \(\mathcal {E}\).

**Definition 26**

(*History-based path closure*) Given a history-based action choice \(X : St^+ \rightarrow \mathcal {P}({St})\), we define its *history-based path closure* as \(X^{HP}\,{:}{=}\, \{\lambda \in St^\omega \mid \lambda [i+1] \in X(\lambda [0..i]) \text { for every } i \ge 0\}\). The *history-based path closure* of a history-based state effectivity function \(E: St^+ \times \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})})\) is defined as the path effectivity function \(E^{HP}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\) with \(E^{HP}(C) \,{:}{=}\, \{\mathcal {Y}\subseteq St^\omega \mid X^{HP}\subseteq \mathcal {Y}\text { for some history-based global strategic choice } X \text { of } C \text { in } E\}\).

That is, \(X^{HP}\) collects the paths generated by the transition tree represented by \(X\), and \(E^{HP}\) is the outcome-monotone closure of the family of strategic choices generated this way from the history-based state effectivity function \(E\).

**Definition 27**

(*History-transition closed choices and effectivity functions*) A strategic choice \({\mathcal {X}}\) is *history-transition closed* iff \(({\mathcal {X}}^{HS})^{HP}= {\mathcal {X}}\).

A path effectivity function \(\mathcal {E}\) is *history-transition closed* iff \((\mathcal {E}^{HS})^{HP}= \mathcal {E}\).

As it turns out, the analogues of state-transition closed *choices* and of the state-transition closed core for perfect recall strategies require only Emerson’s limit closure condition [15], which we already presented in Sect. 5.2 and recall again below.

**Definition 28**

(*Limit closure, limit-closed core*) A strategic choice \({\mathcal {X}}\subseteq St^\omega \) is *limit-closed* iff for every infinite sequence of paths \(\{\lambda _{i}\}_{i\in {\mathbb {N}}}\) in \({\mathcal {X}}\) such that \(\lambda _{i}[0..i] = \lambda [0..i]\) for every \(i\in {\mathbb {N}}\), the limit path \(\lambda \) belongs to \({\mathcal {X}}\), too.

The *limit-closed core of* \(\mathcal {E}\) is defined as the effectivity function \(\mathcal {E}^{lcore}\) that selects only the limit-closed choices from \(\mathcal {E}\): \(\mathcal {E}^{lcore}(C) \,{:}{=}\, \{{\mathcal {X}}\in \mathcal {E}(C) \mid {\mathcal {X}}\text { is limit-closed}\}\).

**Proposition 11**

For any global strategic choice \({\mathcal {X}}\subseteq St^\omega \), \({\mathcal {X}}\) is history-transition closed iff \({\mathcal {X}}\) is limit-closed.

*Proof*

We prove that \(({\mathcal {X}}^{HS})^{HP}= {\mathcal {X}}\) iff \({\mathcal {X}}\) is limit-closed. Essentially by definition, \(X^{HP}\) is limit-closed for every history-based action choice \(X\), hence the implication from left to right. Conversely, let \({\mathcal {X}}\) be limit-closed. First, note that \({\mathcal {X}}\subseteq ({\mathcal {X}}^{HS})^{HP}\), immediately from the definition of \( ({\mathcal {X}}^{HS})^{HP}\). For the other inclusion, let \(\lambda \in ({\mathcal {X}}^{HS})^{HP}\). Then, for every \(i \ge 0\), we have \(\lambda [i+1] \in {\mathcal {X}}^{HS}(\lambda [0..i])\), hence \(\lambda [i+1] = \lambda '[i+1]\) for some \(\lambda ' \in {\mathcal {X}}\) such that \(\lambda '[0..i] = \lambda [0..i]\). Put \(\lambda _{i+1} = \lambda '\). Thus, we have defined an infinite sequence \(\{\lambda _{j}\}_{j>0}\) of paths in \({\mathcal {X}}\) such that \(\lambda _{j}[0..j] = \lambda [0..j]\) for each \(j\). By limit closure of \({\mathcal {X}}\) it follows that \(\lambda \in {\mathcal {X}}\). Thus, we have also proved that \(({\mathcal {X}}^{HS})^{HP}\subseteq {\mathcal {X}}\).\(\square \)
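The round trip \({\mathcal {X}}\mapsto ({\mathcal {X}}^{HS})^{HP}\) can be illustrated computationally on bounded-depth approximations, where infinite paths are stood in for by length-\(n\) state sequences. A minimal sketch (the encoding and function names are ours, not the paper's machinery); note that at any finite depth the round trip simply regenerates the prefix tree of \({\mathcal {X}}\), so the genuinely non-trivial cases of Proposition 11 arise only in the limit:

```python
# Bounded-depth sketch of the HS projection and HP closure operators.
# Infinite paths are approximated by length-n tuples of states; this
# finite stand-in is our assumption, for illustration only.

def hs_projection(paths):
    """X^HS restricted to the prefixes occurring in `paths`:
    map each history (proper prefix) to its set of next states."""
    proj = {}
    for lam in paths:
        for i in range(len(lam) - 1):
            proj.setdefault(lam[:i + 1], set()).add(lam[i + 1])
    return proj

def hp_closure(proj, n):
    """(X^HS)^HP up to depth n: all length-n sequences lam with
    lam[i+1] in proj(lam[0..i]) for every i."""
    seqs = {h for h in proj if len(h) == 1}
    for _ in range(n - 1):
        seqs = {h + (q,) for h in seqs for q in proj.get(h, set())}
    return seqs

paths = {('q0', 'q1', 'q0', 'q1'), ('q0', 'q2', 'q0', 'q2')}
assert hp_closure(hs_projection(paths), 4) == paths   # closed at depth 4
```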

**Lemma 2**

Let \(\mathcal {E}\) be a history-transition closed path effectivity function. Then, for every coalition \(C\): \({\mathcal {X}}\in \mathcal {E}(C)\) iff \({\mathcal {X}}\supseteq \mathcal {Y}\) for some \(\mathcal {Y}\in \mathcal {E}^{lcore}(C)\).

*Proof*

Let \((\mathcal {E}^{HS})^{HP}= \mathcal {E}\). Now, let \({\mathcal {X}}\in \mathcal {E}(C)\). Then, \({\mathcal {X}}\in (\mathcal {E}^{HS})^{HP}(C)\), hence there exists an \(X \in \mathcal {E}^{HS}(C)\) such that \(X^{{HP}}\subseteq {\mathcal {X}}\). Since \(X^{{HP}}\) is limit-closed, we have that \(X^{{HP}} \in \mathcal {E}^{lcore}(C)\). Conversely, let \({\mathcal {X}}\supseteq \mathcal {Y}\) for some limit-closed \(\mathcal {Y}\in \mathcal {E}(C)\). Then \({\mathcal {X}}\in \mathcal {E}(C)\) because \(\mathcal {E}(C)\) is outcome-monotone. \(\square \)

#### 5.3.2 Path effectivity functions in tree-like structures

Here we carry out some technical preparation for reducing the characterization of path effectivity functions with perfect recall strategies to the case of memoryless strategies in tree-like concurrent game structures.

**Definition 29**

(*Tree-like concurrent game structures*) A CGS is:

*injective*, if for every state, any two different action profiles applied at that state result in different successor states;

*tree-like*, if it is injective and all states have pairwise disjoint sets of successor states.

Equivalently, a CGS is tree-like if every state has a unique maximal (i.e., not properly extendable) history, that is, a unique maximal path along the transition relation ending at that state. Note that in the definition above we do not assume the existence of a root, so the maximal history of a state may have no initial state, and hence be infinite.
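For finite fragments, the two conditions of Definition 29 are easy to check mechanically. A small sketch under our own encoding (states, a list of action profiles per state, and a transition function `o`, all invented for illustration):

```python
def is_injective(profiles, o):
    """Different action profiles at a state yield different successors."""
    return all(len({o(q, p) for p in profs}) == len(profs)
               for q, profs in profiles.items())

def is_tree_like(profiles, o):
    """Injective, and distinct states have disjoint successor sets."""
    succs = {q: {o(q, p) for p in profs} for q, profs in profiles.items()}
    disjoint = all(succs[q1].isdisjoint(succs[q2])
                   for q1 in succs for q2 in succs if q1 < q2)
    return is_injective(profiles, o) and disjoint

# A toy CGS with self-loops: injective, but not tree-like.
profiles = {'q0': [(0, 0), (0, 1)], 'q1': [(0, 0)], 'q2': [(0, 0)]}
trans = {'q0': {(0, 0): 'q1', (0, 1): 'q2'},
         'q1': {(0, 0): 'q1'}, 'q2': {(0, 0): 'q2'}}
o = lambda q, p: trans[q][p]

assert is_injective(profiles, o) and not is_tree_like(profiles, o)
```

Self-loops already violate tree-likeness, which matches the observation that a state of a tree-like CGS can be visited at most once during a play.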

*Remark 2*

Any state in a tree-like CGS can be visited at most once during a play, and therefore memoryless and perfect recall strategies in tree-like CGSs coincide.

**Definition 30**

(*Tree unfolding of concurrent game structures* [2, 12]) The *tree unfolding* of a CGS \(F = ({\mathbb {A}\mathrm {gt}}, St, Act, d, o)\) is the CGS \(\widehat{F} = ({\mathbb {A}\mathrm {gt}}, \widehat{St}, Act, \widehat{d}, \widehat{o})\), where:

\(\widehat{St}\) is the set of all initial feasible histories \(\lambda [0..i]\) of the feasible paths \(\lambda \) in \(F\);

\(\widehat{d} : {\mathbb {A}\mathrm {gt}}\times \widehat{St} \rightarrow \mathcal {P}({Act})\) assigns to each agent \(\mathsf a \) and history \(\lambda [0..i]\) the set of actions available to \(\mathsf a \) at the last state of that history: \(\widehat{d}(\mathsf a , \lambda [0..i]) \,{:}{=}\, d(\mathsf a ,\lambda [i])\);

\(\widehat{o}\) is the transition function defined on every history and action profile as \(o\) applied to the last state of the history and the same action profile: \(\widehat{o}(\lambda [0..i],\alpha _1,\dots ,\alpha _k) \,{:}{=}\, o(\lambda [i],\alpha _1,\dots ,\alpha _k)\).
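Definition 30 can be prototyped by unrolling a small CGS to a bounded depth: the states of the (partial) unfolding are histories, and the lifted transition function applies \(o\) to the last state of a history. The CGS below is an invented toy, and bounding the depth is our concession to finiteness:

```python
from itertools import product

def unfold(agents, states, d, o, depth):
    """Histories (states of the unfolding) up to `depth`, plus the lifted
    transition function o_hat mapping (history, profile) to a history."""
    hists = {(q,) for q in states}
    frontier, o_hat = set(hists), {}
    for _ in range(depth):
        nxt = set()
        for h in frontier:
            for prof in product(*(d(a, h[-1]) for a in agents)):
                h2 = h + (o(h[-1], prof),)   # o applied to the last state
                o_hat[h, prof] = h2
                nxt.add(h2)
        hists |= nxt
        frontier = nxt
    return hists, o_hat

# Toy two-agent CGS: at q0 the agents play matching pennies; q1 is absorbing.
agents = ['a1', 'a2']
d = lambda a, q: ['H', 'T'] if q == 'q0' else ['*']
o = lambda q, prof: ('q1' if prof[0] == prof[1] else 'q0') if q == 'q0' else 'q1'

hists, o_hat = unfold(agents, ['q0', 'q1'], d, o, 2)
assert o_hat[('q0',), ('H', 'T')] == ('q0', 'q0')   # successors are histories
```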

Now, we define the liftings of strategies, paths, choices and effectivity functions from concurrent game structures to their tree unfoldings.

**Definition 31**

(*Liftings of strategies, paths and choices*) Consider the *tree unfolding* \(\widehat{F} = ({\mathbb {A}\mathrm {gt}}, \widehat{St}, Act, \widehat{d}, \widehat{o})\) of a CGS \(F = ({\mathbb {A}\mathrm {gt}}, St, Act, d, o)\).

Every perfect recall strategy \(\sigma _\mathsf{a }\) of an agent \(\mathsf a \) in \(F\) defines a strategy \(\widehat{\sigma }_\mathsf{a }\) in \(\widehat{F}\) that prescribes at every state in \(\widehat{F}\) (i.e., history \(h\) in \(F\)) the action \(\sigma _\mathsf{a }(h)\). Likewise for coalitional strategies.

For every path \(\lambda \) in \(F\) we define its *lifting* as the path of its initial histories \(\widehat{\lambda } = \lambda [0..0], \lambda [0..1], \ldots , \lambda [0..n], \ldots \) in \(\widehat{F}\). Note that \(\widehat{\lambda }\) is a feasible path (play) in \(\widehat{F}\) iff \(\lambda \) is a feasible path (play) in \(F\).

Likewise, for every set of paths \({\mathcal {X}}\) in \(F\) we define its lifting in \(\widehat{F}\) as \(\widehat{{\mathcal {X}}} = \{\widehat{\lambda } \mid \lambda \in {\mathcal {X}}\}\).

Every (abstract) path effectivity function \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\) is lifted accordingly to a path effectivity function \(\widehat{\mathcal {E}} : \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({\widehat{St}^\omega })})\).

The following properties of tree unfoldings are straightforward:

- 1.
Every tree unfolding of a CGS is tree-like.

- 2.
The tree unfolding of a tree-like CGS \(F\) is isomorphic to \(F\).

- 3.
The mapping \(\widehat{\cdot }\) defined above is a bijection between the feasible paths (plays) in the CGS \(F\) and those in its tree unfolding \(\widehat{F}\).

**Proposition 12**

Let \(\widehat{F} = ({\mathbb {A}\mathrm {gt}}, \widehat{St}, Act, \widehat{d}, \widehat{o})\) be the tree unfolding of the CGS \(F = ({\mathbb {A}\mathrm {gt}}, St, Act, d, o)\). Then the lifting of the path effectivity function with perfect recall strategies \(\mathcal {E}^\mathfrak {FulMem} _F\) is precisely the path effectivity function with memoryless strategies \(\widehat{\mathcal {E}}^\mathfrak {NoMem} _{\widehat{F}}\) in \(\widehat{F}\).

*Proof*

First, every perfect recall strategy \(\sigma \) of an agent or coalition in \(F\) is lifted to the memoryless strategy \(\widehat{\sigma }\) in \(\widehat{F}\) as defined above. Conversely, every memoryless strategy in \(\widehat{F}\) is a lifting of a respective perfect recall strategy in \(F\). Furthermore, a play \(\lambda \) in \(F\) is consistent with a perfect recall strategy \(\sigma \) in \(F\) iff its lifting \(\widehat{\lambda }\) is consistent with the corresponding memoryless strategy in \(\widehat{F}\). Consequently, the global strategic choices in \(\widehat{\mathcal {E}}^\mathfrak {NoMem} _{\widehat{F}}\) in \(\widehat{F}\) are precisely the liftings of the global strategic choices in \(\mathcal {E}^\mathfrak {FulMem} _F\). \(\square \)
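The bijection used in the proof is direct enough to spell out in code: since the unfolding's states *are* histories of \(F\), a perfect recall strategy becomes memoryless simply by reading its argument as a state. A minimal sketch (the encoding is ours):

```python
def lift(sigma):
    """A perfect recall strategy on F (history -> action), read as a
    memoryless strategy on the tree unfolding (state -> action)."""
    return lambda hist_state: sigma(hist_state)

# A perfect recall strategy that alternates with the parity of the history:
sigma = lambda h: 'H' if len(h) % 2 == 1 else 'T'
sigma_hat = lift(sigma)
assert sigma_hat(('q0',)) == 'H' and sigma_hat(('q0', 'q1')) == 'T'
```

The inverse direction is equally direct: any memoryless strategy on the unfolding already is a function from histories to actions, i.e., a perfect recall strategy in \(F\).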

#### 5.3.3 Characterization

Now we obtain characterizations of path effectivity functions that correspond to concurrent game structures with perfect recall strategies. Instead of repeating the work done for the case of memoryless strategies, we can reduce that characterization to the one with memoryless strategies in tree-like CGSs using the definitions and results from Sect. 5.3.2.

**Proposition 13**

Let \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\) be a path effectivity function over a set of feasible paths \(\mathsf {Paths}\). Then \(\mathcal {E}\) is history-transition closed and truly playable iff its lifting \(\widehat{\mathcal {E}} : \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({\widehat{St}^\omega })})\) is state-transition closed and truly playable.

*Proof*

First, suppose \(\mathcal {E}\) is history-transition closed and truly playable. Then \(\widehat{\mathcal {E}}\) is state-transition closed, immediately from the definitions, as the lifting transforms histories into states. Furthermore, the playability conditions from Definition 20 for the path effectivity function \(\mathcal {E}\) are directly lifted to the playability conditions for the global state effectivity function \(\widehat{\mathcal {E}}\) from Definition 4. Conversely, assume that \(\widehat{\mathcal {E}}\) is state-transition closed and the playability conditions from Definition 4 hold globally for \(\widehat{\mathcal {E}}\). Then, again immediately from the definitions, \(\mathcal {E}\) is history-transition closed. Furthermore, *P-Safety* and *P-Liveness* for \(\mathcal {E}\) follow immediately. Likewise for *P-outcome Monotonicity*, *P-Superadditivity*, *P*-\(\emptyset \)-*Minimality* and *P-Determinacy*, using the fact that \(\mathcal {E}\) is history-transition closed and Lemma 2. We omit the routine details. \(\square \)

**Theorem 5**

(\(\mathfrak {FulMem}\)- Representation theorem) A path effectivity function \(\mathcal {E}\) over a state space \(St\) and a set of feasible paths \(\mathsf {Paths}\) equals \(\mathcal {E}^\mathfrak {FulMem} _F\) for some concurrent game structure \(F\) if and only if \(\mathsf {Paths}\) is state-transition closed and \(\mathcal {E}\) is truly playable and history-transition closed.

*Proof*

First, if \(\mathcal {E}\) equals \(\mathcal {E}^\mathfrak {FulMem} _F\) for some CGS \(F\), then its lifting \(\widehat{\mathcal {E}}\) equals the path effectivity function with memoryless strategies \(\widehat{\mathcal {E}}^\mathfrak {NoMem} _{\widehat{F}}\), hence it satisfies the characterization of Theorem 4 (possibly simplified for tree-like structures). Note that the suffix, fusion and limit closure conditions are preserved both ways by liftings of sets of paths, and hence by liftings of path effectivity functions. Thus, \(\mathcal {E}\) is truly playable and history-transition closed, by Proposition 13.

Conversely, if the conditions are satisfied by \(\mathcal {E}\), then its lifting \(\widehat{\mathcal {E}}\) satisfies the characterization conditions of Theorem 4, hence it is equal to the path effectivity function with memoryless strategies for some tree-like CGS over \(\widehat{St}\). The latter can be regarded as the lifting of the path effectivity function with perfect recall strategies of a respective (tree-like) CGS over \(St\), which is equal to \(\mathcal {E}\). \(\square \)

We now provide an alternative, internal characterization of history-transition closed effectivity functions, in terms of the properties \(\mathfrak {FulMem}\)-grounding and \(\mathfrak {FulMem}\)-convexity, stated in Proposition 14 below. Intuitively, \(\mathfrak {FulMem}\)-grounding specifies that every strategic choice can be “grounded” in one that satisfies limit closure. \(\mathfrak {FulMem}\)-convexity requires that any collection of sub-strategies of a given coalition \(C\) can be pieced together into a global perfect recall strategy for \(C\).

**Proposition 14**

A path effectivity function \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\) is history-transition closed iff it satisfies the following two conditions for every coalition \(C\):

(\(\mathfrak {FulMem}\)-*Grounding*) For every \(\mathcal {Y}\in \mathcal {E}(C)\) there is \({\mathcal {X}}\in \mathcal {E}^{lcore}(C)\) such that \({\mathcal {X}}\subseteq \mathcal {Y}\).

(\(\mathfrak {FulMem}\)-*Convexity*) For every family \(\{{\mathcal {X}}^{h} \in \mathcal {E}^{lcore}(C) \mid h \in St^+ \}\) of strategic choices, if \(\mathcal {Y}(h) = {\mathcal {X}}^{h}(h)\) for every \(h\), then \((\mathcal {Y}^{HS})^{HP}\in \mathcal {E}^{lcore}(C)\). Equivalently: for every family \(\{{\mathcal {X}}^{h} \in \mathcal {E}^{lcore}(C) \mid h \in St^+ \}\), we have that \(\big (\big (\bigcup _{h\in St^+}{\mathcal {X}}^{h}(h)\big )^{HS}\big )^{HP}\in \mathcal {E}^{lcore}(C)\).

*Proof*

Follows from Propositions 10 and 13.

First, Proposition 13 and its proof can be simplified to only state that \(\mathcal {E}\) is history-transition closed iff its lifting \(\widehat{\mathcal {E}} : \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({\widehat{St}^\omega })})\) is state-transition closed.

Now, if \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\) is history-transition closed then (\(\mathfrak {FulMem}\)-Grounding) follows immediately from Lemma 2 and (\(\mathfrak {FulMem}\)-Convexity) follows from Proposition 10 and the simplified Proposition 13.

The converse direction follows the proof of Proposition 10 using the simplified Proposition 13. We omit the routine details. \(\square \)

## 6 Further remarks on path effectivity

In Sect. 4, we argued that path effectivity is conceptually the best match for representing effectivity in multi-step games, and for providing semantics to logics of long-term ability, such as ATL and ATL*. Here, we briefly show that a single path effectivity function can be used to derive state effectivity functions for any given temporal pattern (Sect. 6.1). Moreover, we show how our technical results from Sect. 5 can be applied to provide insight into existing theories of agency—in this case, the *stit* theory of “seeing to it that” (Sect. 6.2). Finally, we offer some speculation on how path effectivity functions can be used to model multi-step games with imperfect information (Sect. 6.3).

### 6.1 From path effectivity back to state effectivity

In Sect. 3, we showed how effectivity of agents and coalitions can be presented entirely in terms of states (positions) in the game. Essentially, one has to devote a separate effectivity function for each temporal pattern of interest. Thus, we need one function to describe what properties the agents are effective for *in the next moment*, another one to describe which properties can be *maintained* by whom forever from now on, etc. If the structures are used to give semantics to ATL, we need three effectivity functions (\(\mathbf {E}\) for “next”, \(\mathbf {G}\) for “always”, and \(\mathbf {U}\) for “until”, like in Sect. 3). However, in the richer language of ATL*, there are infinitely many possible temporal patterns. For instance, we can be interested in the properties that coalition \(C\) can enforce *infinitely often* (i.e., \(\varphi \) such that \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {G}\mathrm {F}\varphi \)), those that can be maintained from some moment on (\(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {F}\mathrm {G}\varphi \)), ones that can be achieved at two subsequent time points (\(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {F}(\varphi \wedge \mathrm {X}\varphi )\)), and so forth. Thus, using the framework of state effectivity leads to a fairly complicated picture if one is interested in coalitional effectivity with respect to anything beyond the three standard temporal operators. On the other hand, the path effectivity function can be used to derive state effectivity functions for all temporal patterns specifiable in ATL*. In this sense, a path effectivity function is not only an intuitive, but also a much more *complete* description of what the agents and coalitions can effect in the system. We begin by showing how to “distill” the state effectivity functions for “next” (\(\mathrm {X}\)), “eventually” (\(\mathrm {F}\)), “always” (\(\mathrm {G}\)), and “until” (\(\mathrm {U}\)). 
Then, we extend the treatment to some more sophisticated temporal patterns.

#### 6.1.1 Deriving state effectivity for standard temporal operators

**Definition 32**

(*From path to state effectivity*) Let \({\mathcal {X}}\subseteq St^\omega \) be a set of paths. The following sets of states can be derived from \({\mathcal {X}}\):

\({\mathcal {X}}^\mathrm {X}\,{:}{=}\, \{X \subseteq St\mid \lambda [1]\in X \text { for every } \lambda \in {\mathcal {X}}\}\);

\({\mathcal {X}}^\mathrm {F}\,{:}{=}\, \{X \subseteq St\mid \text {for every } \lambda \in {\mathcal {X}}\text { there is } i\ge 0 \text { such that } \lambda [i]\in X\}\);

\({\mathcal {X}}^\mathrm {G}\,{:}{=}\, \{X \subseteq St\mid \lambda [i]\in X \text { for every } \lambda \in {\mathcal {X}}\text { and every } i\ge 0\}\);

\({\mathcal {X}}^\mathrm {U}\,{:}{=}\, \{(X,Y) \mid \text {for every } \lambda \in {\mathcal {X}}\text { there is } i\ge 0 \text { such that } \lambda [i]\in Y \text { and } \lambda [j]\in X \text { for all } j<i\}\).

For a path effectivity function \(\mathcal {E}\) and each pattern \(\mathrm {T}\in \{\mathrm {X},\mathrm {F},\mathrm {G},\mathrm {U}\}\), we then put \(\mathcal {E}^{\mathrm {T}}_q(C)\,{:}{=}\, \bigcup \{{\mathcal {X}}^{\mathrm {T}} \mid {\mathcal {X}}\in \mathcal {E}(C)(q)\}\).

Note that \(\mathcal {E}^\mathrm {X}\) is exactly the state projection of \(\mathcal {E}\) (\(\mathcal {E}^\mathrm {X}=\mathcal {E}^{S}\)), cf. Definition 16. We also observe that the definitions of \({\mathcal {X}}^\mathrm {X}\), \({\mathcal {X}}^\mathrm {F}\), \({\mathcal {X}}^\mathrm {G}\), and \({\mathcal {X}}^\mathrm {U}\) are straightforward, and closely follow the semantic definitions of the corresponding temporal operators (\(\mathrm {X}\), \(\mathrm {F}\), \(\mathrm {G}\), and \(\mathrm {U}\)).
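The derived sets can be tested mechanically on ultimately periodic paths. Below, an infinite path is encoded as a lasso `(prefix, cycle)` standing for \(prefix\cdot cycle^\omega \); the encoding and function names are ours, and the four tests simply transcribe the semantic clauses for \(\mathrm {X}\), \(\mathrm {F}\), \(\mathrm {G}\) and \(\mathrm {U}\):

```python
# Lasso encoding of infinite paths (our illustration device, not the
# paper's machinery): (prefix, cycle) stands for prefix . cycle^omega.

def states_of(lasso):
    prefix, cycle = lasso
    return set(prefix) | set(cycle)

def in_next(paths, X):        # X in curly-X^X: second state always in X
    return all((p + c * 2)[1] in X for p, c in paths)

def in_eventually(paths, X):  # X in curly-X^F: every path visits X
    return all(states_of(l) & X for l in paths)

def in_always(paths, X):      # X in curly-X^G: every path stays in X
    return all(states_of(l) <= X for l in paths)

def in_until(paths, X, Y):    # (X, Y) in curly-X^U
    def ok(p, c):
        for q in p + c:       # the first Y-state, if any, occurs here
            if q in Y:
                return True
            if q not in X:
                return False
        return False          # Y is never visited on the path
    return all(ok(p, c) for p, c in paths)

paths = {(('q0',), ('q1', 'q2')), (('q0', 'q1'), ('q1',))}
assert in_next(paths, {'q1'}) and not in_always(paths, {'q0', 'q1'})
assert in_until(paths, {'q0'}, {'q1'})
```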

**Example 7**

The following proposition shows that Definition 32 provides an alternative characterization of standard state effectivity functions from Sect. 3.

**Proposition 15**

For every concurrent game model \(M\) and state \(q\) in it:

- 1.

\(\mathbf {E}_q\ =\ (\mathcal {E}_M^\mathfrak {NoMem})^\mathrm {X}(q)\ =\ (\mathcal {E}_M^\mathfrak {FulMem})^\mathrm {X}(q)\),

- 2.
\(\mathbf {G}_q\ =\ (\mathcal {E}_M^\mathfrak {NoMem})^\mathrm {G}(q)\ =\ (\mathcal {E}_M^\mathfrak {FulMem})^\mathrm {G}(q)\),

- 3.
\(\mathbf {U}_q\ =\ (\mathcal {E}_M^\mathfrak {NoMem})^\mathrm {U}(q)\ =\ (\mathcal {E}_M^\mathfrak {FulMem})^\mathrm {U}(q)\).

*Proof*

- 1.
Straightforward.

- 2.
First, we prove that \(X\in \mathbf {G}_q\) iff \(X\in (\mathcal {E}_M^\mathfrak {FulMem})^\mathrm {G}(q)\). Observe that \(\mathbf {G}_q\ =\ \mathbf {E}_q^{\mathbf {[*]}}\ =\ \bigcap \limits _{k=0}^{\infty } {E}_q^{[k]}\ =\ \{X\subseteq St\mid \forall k\;.\;X\in {E}_q^{[k]} \}\). Thus, \(X\in \mathbf {G}_q\) iff for all \(k\) there exists a mapping \(f(h) = Y_h\) such that: (i) \(f\) maps sequences of states \(h\) such that \(h[0]=q\) and \(|h|\le k\), to subsets of states \(Y_h\in \mathbf {E}_{last(h)}\); (ii) for every \(h\) with \(h[0]=q\) and \(|h|\le k\), if \(h[i]\in f(h[0..i-1])\) for all \(i=0,\dots ,k\) then \(f(h)\subseteq X\). But then, \(f\) specifies a perfect recall strategy in \(M\) such that the paths in \(out(q,f)\) contain only states in \(X\), which is equivalent to \(X\in (\mathcal {E}_M^\mathfrak {FulMem})^\mathrm {G}(q)\).

Secondly, \((\mathcal {E}_M^\mathfrak {NoMem})^\mathrm {G}(q)\ =\ (\mathcal {E}_M^\mathfrak {FulMem})^\mathrm {G}(q)\) follows from the fact that the perfect recall and memoryless semantics of \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {G}\varphi \) coincide [6, 33]. Take any \(X\subseteq St\). Let \(M_X\) be the model \(M\) with the valuation of propositions extended by an additional atomic proposition \(\mathsf {{p}}\) such that \(\mathsf {{p}}^{M_X} = X\) (i.e., \(\mathsf {{p}}\) holds exactly in the states from \(X\)). By [6, 33], \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {G}\mathsf {{p}}\) holds at \(q\) in \(M_X\) under the memoryless semantics iff it holds there under the perfect recall semantics. Thus, \(\mathsf {{p}}^{M_X} \in (\mathcal {E}_{M_X}^\mathfrak {NoMem})^\mathrm {G}(q)\) iff \(\mathsf {{p}}^{M_X} \in (\mathcal {E}_{M_X}^\mathfrak {FulMem})^\mathrm {G}(q)\). Note that \(M\) and \(M_X\) differ only in their valuations of propositions; hence, they must induce the same effectivity functions. In consequence, we get that \(X \in (\mathcal {E}_M^\mathfrak {NoMem})^\mathrm {G}(q)\) iff \(X \in (\mathcal {E}_M^\mathfrak {FulMem})^\mathrm {G}(q)\).

- 3.
Analogous.
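The \(\mathrm {G}\)-case admits a compact computational reading: the states from which \(C\) can keep the play inside \(X\) forever are the greatest fixpoint of \(Z = X \cap Pre_C(Z)\), where \(Pre_C\) is one-step \(\alpha \)-effectivity. A sketch on an invented toy CGS (the encoding of the action and transition functions is our assumption):

```python
from itertools import product

def make_can_force(agents, d, o):
    """One-step alpha-effectivity: can coalition C force the next state
    into T from q, whatever the remaining agents do?"""
    def can_force(q, C, T):
        others = [a for a in agents if a not in C]
        for mine in product(*(d(a, q) for a in C)):
            prof = dict(zip(C, mine))
            if all(o(q, {**prof, **dict(zip(others, th))}) in T
                   for th in product(*(d(a, q) for a in others))):
                return True
        return False
    return can_force

def maintain(can_force, C, X, states):
    """Greatest fixpoint of Z = X ∩ Pre_C(Z)."""
    Z = set(X)
    while True:
        Z2 = {q for q in Z if can_force(q, C, Z)}
        if Z2 == Z:
            return Z
        Z = Z2

# Toy CGS: at q0 agents a1, a2 pick bits; equal bits go to q1, else to q2.
agents = ['a1', 'a2']
states = ['q0', 'q1', 'q2']
d = lambda a, q: [0, 1] if q == 'q0' else [0]
o = lambda q, prof: ('q1' if prof['a1'] == prof['a2'] else 'q2') if q == 'q0' else q

cf = make_can_force(agents, d, o)
assert maintain(cf, ['a1', 'a2'], {'q0', 'q1'}, states) == {'q0', 'q1'}
assert maintain(cf, ['a1'], {'q0', 'q1'}, states) == {'q1'}   # alone, a1 fails at q0
```

Since each iteration only removes states, the loop terminates on any finite state space; for \(\mathrm {G}\), memoryless strategies suffice, matching the coincidence result cited in the proof.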

The following is an immediate consequence:

**Corollary 2**

- 1.
- 2.
\((\mathcal {E}, V), q \models \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {X}\varphi \) iff \(\varphi ^{(\mathcal {E},V)} \in \mathcal {E}^\mathrm {X}_q(C)\), where \(\varphi ^{(\mathcal {E},V)} \,{:}{=}\, \{q' \in St\mid (\mathcal {E}, V), q' \models \varphi \}\).

- 3.
\((\mathcal {E}, V), q \models \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {F}\varphi \) iff \(\varphi ^{(\mathcal {E},V)} \in \mathcal {E}^\mathrm {F}_q(C)\).

- 4.
\((\mathcal {E}, V), q \models \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {G}\varphi \) iff \(\varphi ^{(\mathcal {E},V)} \in \mathcal {E}^\mathrm {G}_q(C)\).

- 5.
\((\mathcal {E}, V), q \models \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\varphi \,\mathrm {U}\,\psi \) iff \((\varphi ^{(\mathcal {E},V)},\psi ^{(\mathcal {E},V)}) \in \mathcal {E}^\mathrm {U}_q(C)\).

#### 6.1.2 Obtaining state effectivity for other temporal patterns

Proposition 15 and Corollary 2 show that path effectivity functions for concurrent game models are at least as informative as state effectivity functions. Below, we show that the template from Definition 32 can be applied to obtain state effectivity functions that correspond to many other temporal patterns.

**Definition 33**

*(From path to state effectivity II)* For \({\mathcal {X}}\subseteq St^\omega \), we define:

\({\mathcal {X}}^{\mathrm {F}\mathrm {G}}\,{:}{=}\, \{X \subseteq St\mid \text {for every } \lambda \in {\mathcal {X}}\text { there is } i\ge 0 \text { such that } \lambda [j]\in X \text { for all } j\ge i\}\);

\({\mathcal {X}}^{\mathrm {G}\mathrm {F}}\,{:}{=}\, \{X \subseteq St\mid \text {for every } \lambda \in {\mathcal {X}}\text { and every } i\ge 0 \text { there is } j\ge i \text { with } \lambda [j]\in X\}\);

\({\mathcal {X}}^{\mathrm {F}+}\,{:}{=}\, \{X \subseteq St\mid \text {for every } \lambda \in {\mathcal {X}}\text { there is } i\ge 0 \text { with } \lambda [i]\in X \text { and } \lambda [i+1]\in X\}\).

The corresponding state effectivity functions \(\mathcal {E}^{\mathrm {F}\mathrm {G}}\), \(\mathcal {E}^{\mathrm {G}\mathrm {F}}\), and \(\mathcal {E}^{\mathrm {F}+}\) are obtained as in Definition 32.

\({\mathcal {X}}^{\mathrm {F}\mathrm {G}}\) collects sets of states \(X\) such that every path from \({\mathcal {X}}\) stays in \(X\) from some moment on. \({\mathcal {X}}^{\mathrm {G}\mathrm {F}}\) contains sets \(X\) such that every path from \({\mathcal {X}}\) visits \(X\) infinitely often. \({\mathcal {X}}^{\mathrm {F}+}\) collects sets \(X\) such that every path from \({\mathcal {X}}\) stays in \(X\) for at least two moments in a row. The following is straightforward:

**Proposition 16**

- 1.
\((\mathcal {E}, V), q \models \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {F}\mathrm {G}\varphi \) iff \(\varphi ^{(\mathcal {E},V)} \in \mathcal {E}^{\mathrm {F}\mathrm {G}}_q(C)\).

- 2.
\((\mathcal {E}, V), q \models \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {G}\mathrm {F}\varphi \) iff \(\varphi ^{(\mathcal {E},V)} \in \mathcal {E}^{\mathrm {G}\mathrm {F}}_q(C)\).

- 3.
\((\mathcal {E}, V), q \models \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {F}(\varphi \wedge \mathrm {X}\varphi )\) iff \(\varphi ^{(\mathcal {E},V)} \in \mathcal {E}^{\mathrm {F}+}_q(C)\).

Note that none of the formulae is expressible in ATL [14]. Thus, \(\mathcal {E}^{\mathrm {F}\mathrm {G}}\), \(\mathcal {E}^{\mathrm {G}\mathrm {F}}\), and \(\mathcal {E}^{\mathrm {F}+}\) cannot be obtained by a simple combination of \(\mathcal {E}^\mathrm {X}\), \(\mathcal {E}^\mathrm {G}\), and \(\mathcal {E}^\mathrm {U}\).
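As with the standard operators, the derived sets of Definition 33 are decidable exactly on ultimately periodic paths, encoding an infinite path as a lasso `(prefix, cycle)` for \(prefix\cdot cycle^\omega \) (the encoding is our illustration device, not the paper's machinery):

```python
def in_fg(paths, X):     # X in curly-X^FG: stays in X from some moment on
    return all(set(c) <= X for _, c in paths)

def in_gf(paths, X):     # X in curly-X^GF: visits X infinitely often
    return all(set(c) & X for _, c in paths)

def in_fplus(paths, X):  # X in curly-X^F+: in X at two consecutive moments
    def ok(p, c):
        w = p + c + c    # two cycle copies expose every adjacent pair
        return any(w[i] in X and w[i + 1] in X for i in range(len(w) - 1))
    return all(ok(p, c) for p, c in paths)

# The path q0 q1 (q2 q1)^omega: eventually alternates between q2 and q1.
paths = {(('q0', 'q1'), ('q2', 'q1'))}
assert in_fg(paths, {'q1', 'q2'}) and in_gf(paths, {'q2'})
assert not in_fg(paths, {'q1'}) and not in_fplus(paths, {'q0'})
```

Only the cycle matters for \(\mathrm {F}\mathrm {G}\) and \(\mathrm {G}\mathrm {F}\), which mirrors the fact that both patterns are insensitive to finite prefixes.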

### 6.2 Stit models vs. path effectivity

In this paper, we take coalitional effectivity models as the starting point, and show how they can be used to model long-term interaction. So, the inspiration comes from models that have been used in social choice theory for over 30 years. A major part of the paper is based on the observation that, in multi-step scenarios, the outcome of the game can be seen as the complete sequence of states (or worlds) that can possibly happen. The mathematical structure that we obtain is surprisingly similar to models of “seeing to it that”, which have been studied in philosophy since the late 1980s. In the subsequent paragraphs, we show that stit frames can be seen as a subclass of path effectivity functions. However, that subclass is at the same time too general and too restrictive. On the one hand, it allows for effectivity patterns that cannot be implemented in simple multi-step games based on concurrent game structures (cf. Sect. 6.2.2). On the other hand, it does not allow for modeling some natural patterns of coalitional effectivity (Sect. 6.2.3).

Alternatively, stit frames can be seen as a more complicated way of defining state effectivity functions. We look closer at this interpretation in Sect. 6.2.4.

We point out that the results presented in Sects. 6.2.2 and 6.2.3 are straightforward applications of the characterizations proposed in Sect. 4. In other words, our results on path effectivity directly expose some hitherto unknown (and important!) limitations of models that have been studied for 25 years. We believe that this makes a good case for the explanatory and analytical value of the structures and characterizations that we propose.

*Remark 3*

Our analysis in this section focuses on one of the existing semantics of stit, namely the “classical” semantics based on full trees [8, 11, 20, 22, 23]. Other approaches include the semantics based on the concept of bundled tree [13], a Kripke-style semantics based on the concept of Ockhamist frame [24], as well as the semantics based on the concept of Kamp frame [10]. Applying our results to the other semantics of stit is an interesting issue, but we leave it for another study.

#### 6.2.1 Models of “seeing to it that”

Models of “seeing to it that” have been defined in [7], taking branching time structures as the starting point, and enhancing them to give an account of how agents can influence the dynamics of the system. For a broader discussion and extensions of stit, we refer the reader to [8, 11, 20, 22, 23].

A *stit frame* is a tuple \((St,<,{\mathbb {A}\mathrm {gt}},Choice)\) where:

\((St,<)\) is a branching-time structure, i.e., a transition structure that forms a tree;

\({\mathbb {A}\mathrm {gt}}\) is a finite set of agents;

\(Choice : {\mathbb {A}\mathrm {gt}}\times St\rightarrow \mathcal {P}({\mathcal {P}({\mathsf {Paths}})})\), where \(\mathsf {Paths}\) is the set of all maximal linearly ordered sequences of points in \((St,<)\),^{6} such that for every \(q \in St\) and \(\mathsf{a } \in {\mathbb {A}\mathrm {gt}}\), \(Choice(\mathsf a ,q)\) is a partition of the set \(\mathsf {Paths}(q)\) of all paths passing through \(q\) into a family of non-empty sets. That partition represents the available choices for \(\mathsf a \) at \(q\) (as in alternating transition systems [5]).
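The partition requirement on \(Choice\) can be checked mechanically on a finite abstraction. The following sketch is purely illustrative (the tiny frame, the agent names, and the replacement of infinitely many paths through \(q\) by finitely many labels are all our assumptions, not from the paper):

```python
# Finite abstraction of Paths(q): path labels stand in for infinite histories.
paths_q = {"p1", "p2", "p3", "p4"}

# Choice(a, q): the available choices of each agent a at position q.
choice = {
    "a1": [{"p1", "p2"}, {"p3", "p4"}],
    "a2": [{"p1", "p3"}, {"p2", "p4"}],
}

def is_partition(cells, universe):
    """True iff `cells` are non-empty, pairwise disjoint, and cover `universe`."""
    if any(not c for c in cells):
        return False
    # Covering plus matching cardinality implies pairwise disjointness.
    return set().union(*cells) == universe and sum(map(len, cells)) == len(universe)

# Every Choice(a, q) must partition the paths through q.
assert all(is_partition(cells, paths_q) for cells in choice.values())
```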

A *stit model* extends a stit frame with a valuation of atomic propositions into sets of paths.

Note that, since \((St,<)\) is a tree, we can see the elements of \(St\) as both states and (finite) histories of interaction. To avoid confusion, they will be referred to in the remainder of this section as *positions*. Moreover, for stit models, the concepts of memoryless and perfect recall play coincide.

A *choice selection function at* \(q\) is a function \(s_q: {\mathbb {A}\mathrm {gt}}\rightarrow \mathcal {P}({\mathsf {Paths}(q)})\), such that \(s_q(\mathsf{a }) \in Choice(\mathsf a ,q)\) for each \(\mathsf{a } \in {\mathbb {A}\mathrm {gt}}\). The set of all selection functions \(s_q\), for a given \(q\), is denoted by \(Select_q\). Now, for any \(C\subseteq {\mathbb {A}\mathrm {gt}}\) and \(q\in St\), we define \(Choice(C,q) = \{\bigcap _{\mathsf{a }\in C} s_q(\mathsf{a }) \mid s_q \in Select_q\}\).

*Independence of agents’ choices* must hold for \(Choice\): for every \(q\in St\) and every selection function \(s_q \in Select_q\), we have \(\bigcap _{\mathsf{a }\in {\mathbb {A}\mathrm {gt}}} s_q(\mathsf{a }) \ne \emptyset \).
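On a finite abstraction, selection functions, coalitional choices, and the independence condition can be enumerated directly. A minimal sketch, with illustrative agent and path names of our own choosing:

```python
from itertools import product

# Finite abstraction of Paths(q) and of the agents' choice partitions at q.
paths_q = {"p1", "p2", "p3", "p4"}
choice = {
    "a1": [frozenset({"p1", "p2"}), frozenset({"p3", "p4"})],
    "a2": [frozenset({"p1", "p3"}), frozenset({"p2", "p4"})],
}
agents = sorted(choice)

# All choice selection functions at q: one cell per agent.
selections = [dict(zip(agents, combo))
              for combo in product(*(choice[a] for a in agents))]

# Independence of agents' choices: every combination of individual choices
# is consistent, i.e., has a non-empty intersection.
assert all(frozenset.intersection(*(s[a] for a in agents)) for s in selections)

def coalition_choice(C):
    """Choice(C, q): intersections of the members' cells, over all selections."""
    if not C:
        return {frozenset(paths_q)}
    return {frozenset.intersection(*(s[a] for a in C)) for s in selections}

print(sorted(map(sorted, coalition_choice(["a1", "a2"]))))
# → [['p1'], ['p2'], ['p3'], ['p4']]
```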

We observe that stit models come very close to coalitional path effectivity models. In fact, the function \(Choice\) looks very much like a path effectivity function. Whether it *does* represent path effectivity, however, depends on how it is interpreted. The informal explanation in most stit literature is that a choice \(X\in Choice(a,q)\) constrains the set of possible paths to the ones consistent with \(X\). In that case, the function \(Choice\) clearly represents path effectivity, and the differences from our approach are minor. We look closer at this interpretation in Sects. 6.2.2 and 6.2.3.

On the other hand, some texts in the existing literature suggest that \(Choice\) is but a more involved representation of state effectivity (cf. e.g. [11, 22]). We discuss the latter interpretation in Sect. 6.2.4.

#### 6.2.2 Stit models are too general

Assuming that \(X\in Choice(C,q)\) simply collects the paths that may result from agents \(C\) choosing \(X\) at position \(q\), we get that \(Choice(C,q)\) describes the effectivity of \(C\) in \(q\) in the following manner.

**Definition 34**

That is, \(\mathcal {E}(\mathcal {S})(C)\) is the outcome-monotone closure of the set of all global combinations of choices from \(Choice(C,\cdot )\).

**Proposition 17**

For every stit frame \(\mathcal {S}\), we have that \(\mathcal {E}(\mathcal {S})\) satisfies *P-Safety*, *P-Liveness*, *P-Outcome Monotonicity*, *P-Superadditivity*, and *P*-\(\emptyset \)-*Minimality*. It does not have to satisfy *P-Determinacy*.

*Proof*

Straightforward. \(\square \)

**Corollary 3**

Path effectivity in stit frames is playable, but not necessarily truly playable.

Thus, path effectivity in stit frames satisfies most, though not all, general playability conditions. More importantly, it does not have to satisfy the structural conditions that make effectivity patterns implementable in natural multi-step games. We focus on realizability under perfect recall, since realizability in memoryless strategies can be seen as its special case.

**Proposition 18**

\(\mathcal {E}(\mathcal {S})\) is generally not history-transition closed.

*Proof (sketch)*

Let us construct a stit frame \(\mathcal {S}\) as follows. Take an arbitrary nontrivial stit frame \((St,<,{\mathbb {A}\mathrm {gt}},Choice)\) and replace its \(Choice\) function with \(Choice'\) such that \(Choice'(a,q) = \{X\in Choice(a,q) \mid X\text { is not limit-closed}\}\) for every \(a\in {\mathbb {A}\mathrm {gt}}, q\in St\).

Suppose now that \(\mathcal {E}(\mathcal {S})\) is history-transition closed. By Proposition 11 and Lemma 2, it must include choices that are limit closed, which is not the case. \(\square \)

**Corollary 4**

There are stit frames whose path effectivity cannot be realized in concurrent game structures.

#### 6.2.3 Stit models are too restricted

On the one hand, stit frames describe effectivity patterns that can be non-truly playable and not realizable in either the \(\mathfrak {NoMem} \) or the \(\mathfrak {FulMem} \) set of strategies. On the other hand, the way they construct coalitional effectivity allows only for *strictly additive* aggregation of abilities. In other words, no synergy between members of a coalition can be modeled in a stit frame.

**Proposition 19**

*(P-Additivity)* For all \(C, D \subseteq {\mathbb {A}\mathrm {gt}}\) such that \(C \cap D = \emptyset \), we have:

- 1.
if \({\mathcal {X}}\in \mathcal {E}(\mathcal {S})(C)\) and \(\mathcal {Y}\in \mathcal {E}(\mathcal {S})(D)\) then \({\mathcal {X}}\cap \mathcal {Y}\in \mathcal {E}(\mathcal {S})(C \cup D)\);

- 2.
if \(\mathcal {Z}\in \mathcal {E}(\mathcal {S})(C \cup D)\) then there exist \({\mathcal {X}}\in \mathcal {E}(\mathcal {S})(C)\) and \(\mathcal {Y}\in \mathcal {E}(\mathcal {S})(D)\) such that \(\mathcal {Z}= {\mathcal {X}}\cap \mathcal {Y}\).

*Proof*

Follows by construction of \(Choice(C,q)\). \(\square \)

Since *P-Additivity* is strictly stronger than *P-Superadditivity*, by Theorems 4 and 5 we get the following:

**Corollary 5**

There are concurrent game structures generating path effectivity functions that cannot be obtained in stit frames.
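The synergy phenomenon behind this corollary already appears in one-step games. The following toy sketch (illustrative game and names, our own construction, not from the paper) computes the monotone effectivity of a matching-pennies-like game and shows that a jointly enforceable set need not be an intersection of individually enforceable sets, so additivity (item 2) fails:

```python
from itertools import combinations, product

def powerset(u):
    u = sorted(u)
    return [set(c) for r in range(len(u) + 1) for c in combinations(u, r)]

def up(bases, universe):
    """Outcome-monotone closure: every superset of some enforceable base set."""
    return {frozenset(s) for s in powerset(universe) if any(b <= s for b in bases)}

# Toy one-step game: the outcome records whether the players' actions match.
actions = ["heads", "tails"]
def outcome(a1, a2):
    return "match" if a1 == a2 else "mismatch"
universe = {"match", "mismatch"}

# Individually, each action leaves both outcomes possible ...
e1 = up([{outcome(a1, a2) for a2 in actions} for a1 in actions], universe)
e2 = up([{outcome(a1, a2) for a1 in actions} for a2 in actions], universe)
# ... but jointly the two players determine the outcome exactly.
e12 = up([{outcome(a1, a2)} for a1, a2 in product(actions, actions)], universe)

# Synergy: {"match"} is jointly enforceable, yet it is not the intersection
# of any individually enforceable X and Y.
target = frozenset({"match"})
assert target in e12
assert all(x & y != target for x in e1 for y in e2)
```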

#### 6.2.4 Stit models as representations of state effectivity

The stit literature typically assumes the condition of *no choice between undivided histories* (NCBUH): for every \(q,q'\in St\) such that \(q < q'\), all paths in \(\mathsf {Paths}^{q'}\) belong to the same choices of all agents at \(q\), i.e., for every \(\lambda , \lambda ' \in \mathsf {Paths}^{q'}\) and each \(X \in Choice({\mathbb {A}\mathrm {gt}},q)\), either \(\lambda , \lambda ' \in X\) or \(\lambda , \lambda ' \notin X\).

[11] assumes additionally that the choice function is deterministic in the following sense: for each \(q\) and each \(X \in Choice({\mathbb {A}\mathrm {gt}},q)\), there exists \(q'\) such that \(X = \mathsf {Paths}^{q'}\).

Strategies in stit frames correspond to what are called *selection functions* in the stit literature.^{7}

Formally, the function \(Choice\) can be transformed into a state effectivity function in the following way:

**Definition 35**

Under this interpretation, stit models are just a more complicated way of representing one-step effectivity. The role of strategies (a.k.a. selection functions) is to unfold state effectivity into path effectivity, similarly to Definitions 17, 19, and 26.

**Proposition 20**

For every NCBUH stit frame \(\mathcal {S}\), we have that \(E(\mathcal {S})\) satisfies *Safety*, *Liveness*, *Outcome Monotonicity*, *Superadditivity*, and \({\mathbb {A}\mathrm {gt}}\)-*Maximality*. *Determinacy* is satisfied for deterministic frames, but not in general.

*Proof*

Straightforward. \(\square \)

Moreover, stit frames do not enable modeling synergy within coalitions.

**Proposition 21**

*(Additivity)* For every \(q\in St\) and all \(C, D\) with \(C \cap D = \emptyset \), we additionally have:

if \(Z \in E(\mathcal {S})(C \cup D,q)\) then there exist \(X\in E(\mathcal {S})(C,q)\) and \(Y\in E(\mathcal {S})(D,q)\) such that \(Z = X \cap Y\).

*Proof*

Follows by construction of \(Choice(C,q)\). \(\square \)

In consequence, not every stit frame represents state effectivity that can be implemented with a strategic game (because effectivity in strategic games must satisfy *Determinacy*). Moreover, not every state effectivity function implementable in strategic games can be represented by a stit frame (because stit frames do not allow for non-additive coalitional effectivity patterns).

### 6.3 Beyond perfect information

So far, we have only been concerned with games where every player knows the global state of the system at any moment. Modeling and reasoning about imperfect information scenarios is more sophisticated. First, not all strategies are executable—even in the perfect recall case. This is because an agent cannot specify that she will execute two different actions in situations that look the same to her. Therefore, only *uniform* strategies are admissible here (for the definition of uniformity, see below). Moreover, it is often important to find a uniform strategy that succeeds in *all* indistinguishable states, rather than to be content with the existence of a successful strategy for the current global state of the system.

In this section, we briefly sketch how path effectivity models can be used to give an account of the powers of coalitions under imperfect information. This is by no means intended as an exhaustive analysis. Rather, we point out that the modeling power of path effectivity can be applied to more sophisticated scenarios than ones assuming complete knowledge.

#### 6.3.1 Reasoning about imperfect information games

We take Schobbens’ \(\hbox {ATL}_{ir}\) and \(\hbox {ATL}_{iR}\) [33] as the “core”, minimal ATL-based logics for strategic ability under imperfect information. The logics include the same formulae as ATL, only the cooperation modalities are presented with subscripts. The operator \(\langle \!\langle {C}\rangle \!\rangle _{_{\! ir }}\) indicates that we reason about agents with imperfect information and imperfect recall, while \(\langle \!\langle {C}\rangle \!\rangle _{_{\! iR }}\) indicates that agents have imperfect information and perfect Recall. Models of \(\hbox {ATL}_{ir}\) and \(\hbox {ATL}_{iR}\) are *imperfect information concurrent game models* (iCGM), which can be seen as concurrent game models augmented with a family of indistinguishability relations \(\sim _a \subseteq St\times St\), one per agent \(a\in {\mathbb {A}\mathrm {gt}}\). The relations describe agents’ uncertainty: \(q\sim _a q'\) means that, while the system is in state \(q\), agent \(a\) considers it possible that it is in \(q'\). Each \(\sim _a\) is an equivalence relation. It is also required that agents have the same choices in indistinguishable states: if \(q\sim _a q'\) then \(d(a,q)=d(a,q')\). Additionally, for two histories \(h,h'\), we define \(h\approx _a h'\) iff \(|h|=|h'|\) and for every \(i\) it holds that \(h[i]\sim _a h'[i]\).
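The lifting of \(\sim _a\) to the history relation \(\approx _a\) is straightforward to compute; a minimal sketch over a toy relation (the states and equivalence classes below are illustrative assumptions, not from the paper):

```python
# Equivalence classes of ~_a, listed per state, for one agent a.
sim_a = {
    "q0": {"q0", "q1"},
    "q1": {"q0", "q1"},
    "q2": {"q2"},
}

def indist_states(q1, q2):
    """q1 ~_a q2."""
    return q2 in sim_a[q1]

def indist_histories(h1, h2):
    """h1 ≈_a h2 iff |h1| = |h2| and h1[i] ~_a h2[i] for every i."""
    return len(h1) == len(h2) and all(indist_states(x, y) for x, y in zip(h1, h2))

assert indist_histories(["q0", "q1"], ["q1", "q0"])      # positionwise ~_a
assert not indist_histories(["q0"], ["q0", "q0"])        # different lengths
assert not indist_histories(["q0", "q2"], ["q0", "q0"])  # q2 is distinguishable
```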

A *uniform memoryless strategy* for agent \(a\) is a function \(s_a : St\rightarrow Act\), such that: (1) \(s_a(q)\in d(a,q)\); (2) if \(q\sim _a q'\) then \(s_a(q)=s_a(q')\).

A *uniform perfect recall strategy* for agent \(a\) is a function \(s_a : St^+\rightarrow Act\), such that: (1) \(s_a(h)\in d(a,last(h))\); (2) if \(h\approx _a h'\) then \(s_a(h)=s_a(h')\). Again, a collective strategy is uniform if it contains only uniform individual strategies. The function \(out(q,s_C)\) returns the set of all paths that may result from agents \(C\) executing strategy \(s_C\) from state \(q\) onward. The semantics of cooperation modalities in \(\hbox {ATL}_{ir}^*\) and \(\hbox {ATL}_{iR}^*\) is defined as follows:

\(M,q \models \langle \!\langle {C}\rangle \!\rangle _{_{\! ir }}\gamma \) iff there exists a *uniform memoryless strategy* \(s_C\) such that, for each \(a\in C\), each \(q'\) such that \(q\sim _a q'\), and each path \(\lambda \in out(q',s_C)\), we have \(M,\lambda \models \gamma \).

\(M,q \models \langle \!\langle {C}\rangle \!\rangle _{_{\! iR }}\gamma \) iff there exists a *uniform perfect recall strategy* \(s_C\) such that, for each \(a\in C\), each \(q'\) such that \(q\sim _a q'\), and each path \(\lambda \in out(q',s_C)\), we have \(M,\lambda \models \gamma \).
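Condition (2) of uniformity for memoryless strategies can be checked in a few lines. A sketch over a toy model (state names, actions, and the indistinguishability relation are illustrative assumptions):

```python
# Equivalence classes of ~_a per state, for one agent a.
sim_a = {"q0": {"q0", "q1"}, "q1": {"q0", "q1"}, "q2": {"q2"}}

def is_uniform(strategy):
    """A memoryless strategy (state -> action) is uniform iff it picks
    the same action in indistinguishable states."""
    return all(strategy[q1] == strategy[q2] for q1 in strategy for q2 in sim_a[q1])

assert is_uniform({"q0": "cons", "q1": "cons", "q2": "aggr"})
assert not is_uniform({"q0": "cons", "q1": "aggr", "q2": "aggr"})
```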

**Example 8**

Consider the model of aggressive vs. conservative play from Example 1 with the following twist: now, each player can only perceive his own situation in the game, and not the position of the other player. Thus, player \(1\) cannot distinguish between states \(q_0,q_1\), while player \(2\) cannot discern states \(q_0,q_2\). The resulting iCGM is presented in Fig. 2.

Now, no agent can make sure anymore that the other one remains in a good position: \(M,q_0 \not \models \langle \!\langle {1}\rangle \!\rangle _{_{\! ir }}\mathrm {G}\,\mathsf {{good_2}}\) and \(M,q_0 \not \models \langle \!\langle {2}\rangle \!\rangle _{_{\! ir }}\mathrm {G}\,\mathsf {{good_1}}\). This is because player \(1\) in state \(q_0\) must take into account the possibility of being in state \(q_1\), for which he has no sure strategy of getting to \(\{{q_0,q_2}\}\). The situation of player \(2\) is analogous. It is not even the case that the respective players can achieve the property in a finite number of steps: \(M,q_0 \not \models \langle \!\langle {1}\rangle \!\rangle _{_{\! ir }}\mathrm {F}\,\mathsf {{good_2}}\) and \(M,q_0 \not \models \langle \!\langle {2}\rangle \!\rangle _{_{\! ir }}\mathrm {F}\,\mathsf {{good_1}}\).

On the other hand, if the players cooperate then they can still completely control the successor state, from every state \(q\) in the game. We leave checking this to the interested reader, and only remark that such a tight control of the successor state is rather incidental to the scenario, and does not hold in general for imperfect information models.

#### 6.3.2 Path effectivity under imperfect information

The *effectivity function of* \(M\) is still defined as \(\mathcal {E}^\varSigma _M(C) = \{\bigcup _{q\in St}out(q,s_C) \mid s_C\in \varSigma _C \}\). We refer to uniform strategies as \(\mathfrak {uFulMem} \) (for perfect recall) and \(\mathfrak {uNoMem} \) (for memoryless strategies).
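The \(out(q,s_C)\) component can be approximated on finite horizons. A sketch for a memoryless strategy of player 1 against all behaviours of player 2 in a toy two-player CGM (the transition table and action names are illustrative assumptions, not the structure from the paper):

```python
# (state, action of player 1, action of player 2) -> successor state.
delta = {
    ("q0", "c", "c"): "q0", ("q0", "c", "a"): "q1",
    ("q0", "a", "c"): "q2", ("q0", "a", "a"): "q0",
    ("q1", "c", "c"): "q1", ("q1", "c", "a"): "q1",
    ("q1", "a", "c"): "q0", ("q1", "a", "a"): "q0",
    ("q2", "c", "c"): "q2", ("q2", "c", "a"): "q0",
    ("q2", "a", "c"): "q0", ("q2", "a", "a"): "q2",
}

def out_prefixes(q, s1, horizon):
    """All length-`horizon` path prefixes from q when player 1 follows the
    memoryless strategy s1, against every behaviour of player 2."""
    prefixes = {(q,)}
    for _ in range(horizon):
        prefixes = {
            p + (delta[(p[-1], s1[p[-1]], a2)],)
            for p in prefixes for a2 in ("c", "a")
        }
    return prefixes

print(sorted(out_prefixes("q0", {"q0": "c", "q1": "c", "q2": "c"}, 2)))
# → [('q0', 'q0', 'q0'), ('q0', 'q0', 'q1'), ('q0', 'q1', 'q1')]
```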

**Example 9**

Consider the iCGM from Example 8. The uniform memoryless strategies of player \(1\) generate the following choices:

- \({\mathcal {X}}_1 = (q_0\cup q_1)^\omega \ \cup \ q_2^\omega \ \cup \ q_2^+q_1(q_0\cup q_1)^\omega \) corresponds to player \(1\)’s strategy of playing conservatively in every state;

- \({\mathcal {X}}_2 = (q_0\cup q_1)^\omega \ \cup \ q_2^\omega \ \cup \ q_2^+q_0(q_0\cup q_1)^\omega \) corresponds to the strategy of playing conservatively in \(\{{q_0,q_1}\}\) and aggressively in \(q_2\);

- \({\mathcal {X}}_3 = q_0^\omega \ \cup q_1^\omega \ \cup \ (q_0^+\cup q_1^+\cup \epsilon )q_2^+(q_0\cup q_2)^\omega \) corresponds to the strategy of playing aggressively in every state;

- \({\mathcal {X}}_4 = q_0^\omega \ \cup q_1^\omega \ \cup \ (q_0^+\cup q_1^+\cup \epsilon )q_2^+(q_1\cup q_2)^\omega \) corresponds to the strategy of playing aggressively in \(\{{q_0,q_1}\}\) and conservatively in \(q_2\).

#### 6.3.3 Semantics of \(\hbox {ATL}_{ir/R}^*\) based on path effectivity

\(M,q \models \langle \!\langle {C}\rangle \!\rangle _{_{\! ir }}\gamma \) iff there is \({\mathcal {X}}\in \mathcal {E}^\mathfrak {uNoMem} _M(C)\) such that \({\mathcal {X}}([q]_C) \subseteq \gamma ^M\).

\(M,q \models \langle \!\langle {C}\rangle \!\rangle _{_{\! iR }}\gamma \) iff there is \({\mathcal {X}}\in \mathcal {E}^\mathfrak {uFulMem} _M(C)\) such that \({\mathcal {X}}([q]_C) \subseteq \gamma ^M\).

**Example 10**

Choice \({\mathcal {X}}_1\) from Example 9 can be used to demonstrate that \(M,q_0 \models \langle \!\langle {1}\rangle \!\rangle _{_{\! ir }}\mathrm {G}\,\mathsf {{good_1}}\), because \({\mathcal {X}}_1(\{{q_0,q_1}\}) = (q_0\cup q_1)^\omega \). On the other hand, \(M,q_2 \not \models \langle \!\langle {1}\rangle \!\rangle _{_{\! ir }}\mathrm {G}\,\mathsf {{good_2}}\) because \({\mathcal {X}}_1(\{{q_2}\}) = q_2^\omega \ \cup \ q_2^+q_1(q_0\cup q_1)^\omega \) does not guarantee \(\mathrm {G}\,\mathsf {{good_2}}\) (and similarly for \({\mathcal {X}}_2\), \({\mathcal {X}}_3\), and \({\mathcal {X}}_4\)). Still, a more sophisticated ATL* property holds: \(M,q_2 \models \langle \!\langle {1}\rangle \!\rangle _{_{\! ir }}\mathrm {F}\,(\mathrm {G}\,\mathsf {{good_1}} \vee \mathrm {G}\,\mathsf {{good_2}})\): the strategy behind \({\mathcal {X}}_1\) guarantees that, from some moment on, either player 1 or player 2 remains in a good position forever.

#### 6.3.4 Properties of path effectivity under uncertainty: general playability

- *P-Safety:* \(\mathcal {E}(C)(q)\) is non-empty for every \(C\subseteq {\mathbb {A}\mathrm {gt}},q\in St\).

- *P-Liveness:* \(\emptyset \notin \mathcal {E}(C)(q)\) for every \(C\subseteq {\mathbb {A}\mathrm {gt}},q\in St\).

- *P-Outcome Monotonicity:* For every \(C\subseteq {\mathbb {A}\mathrm {gt}}\) the set \(\mathcal {E}(C)\) is upwards closed: if \({\mathcal {X}}\in \mathcal {E}(C)\) and \({\mathcal {X}}\subseteq \mathcal {Y}\subseteq \mathsf {Paths}\) then \(\mathcal {Y}\in \mathcal {E}(C)\).

- *P-Superadditivity:* For every \(C,D\subseteq {\mathbb {A}\mathrm {gt}}\), if \(C \cap D = \emptyset \), \({\mathcal {X}}\in \mathcal {E}(C)\) and \(\mathcal {Y}\in \mathcal {E}(D)\), then \({\mathcal {X}}\cap \mathcal {Y}\in \mathcal {E}(C \cup D)\).

- *P*-\(\emptyset \)-*Minimality:* \(\mathcal {E}(\emptyset )\) is the singleton \(\{\mathsf {Paths}\}\).

- *P-Determinacy:* For every \(q\in St\), if \({\mathcal {X}}\in \mathcal {E}({\mathbb {A}\mathrm {gt}})\) then \(\{\lambda \} \in \mathcal {E}({\mathbb {A}\mathrm {gt}})(q)\) for some \(\lambda \in {\mathcal {X}}(q)\).
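Conditions of this kind can be verified mechanically on finite abstractions. A sketch for *P-Superadditivity* over a toy effectivity function with illustrative labels (path labels stand in for infinite paths; the coalitions and base sets are our assumptions):

```python
from itertools import combinations

universe = frozenset({"l1", "l2", "l3"})  # finite stand-ins for paths

def powerset(u):
    u = sorted(u)
    return [frozenset(c) for r in range(len(u) + 1) for c in combinations(u, r)]

def up(*bases):
    """Upward (outcome-monotone) closure of the given base sets."""
    return {s for s in powerset(universe) if any(set(b) <= s for b in bases)}

# A toy global effectivity function: coalition -> upward-closed family.
E = {
    frozenset(): up(universe),              # P-∅-Minimality: only the whole set
    frozenset({"a"}): up({"l1", "l2"}),
    frozenset({"b"}): up({"l1", "l3"}),
    frozenset({"a", "b"}): up({"l1"}),
}

def p_superadditive(E):
    """Intersections of disjoint coalitions' choices belong to their union."""
    for C, D in combinations(E, 2):
        if C & D:  # only disjoint coalitions are constrained
            continue
        if any(x & y not in E[C | D] for x in E[C] for y in E[D]):
            return False
    return True

assert p_superadditive(E)
```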

**Proposition 22**

For every iCGM \(M\), the induced path effectivity functions for uniform memoryless strategies (\(\mathcal {E}^\mathfrak {uNoMem} _M\)) and for uniform perfect recall strategies (\(\mathcal {E}^\mathfrak {uFulMem} _M\)) satisfy *P-Safety*, *P-Liveness*, *P-Outcome Monotonicity*, *P-Superadditivity*, *P*-\(\emptyset \)-*Minimality*, and *P-Determinacy*.

#### 6.3.5 Properties of path effectivity under uncertainty: realizability in memoryless strategies

We observe first that the grounding condition holds for memoryless strategies.

**Proposition 23**

For every iCGM \(M\) we have that effectivity in uniform memoryless strategies satisfies \(\mathfrak {NoMem}\)-*Grounding*.

Formally, for every \({\mathcal {X}}\in \mathcal {E}^\mathfrak {uNoMem} _M(C)\) there exists \(\mathcal {Y}\in \mathcal {E}^\mathfrak {uNoMem} _M(C)\) such that \(\mathcal {Y}\subseteq {\mathcal {X}}\) and \(\mathcal {Y}\) is state-transition closed (i.e., \((\mathcal {Y}^{S})^{P}= \mathcal {Y}\)).

*Proof*

Let \({\mathcal {X}}\in \mathcal {E}^\mathfrak {uNoMem} _M(C)\). Then, there must exist a uniform memoryless strategy \(s_C\) such that \(\bigcup _{q\in St}out(q,s_C) \subseteq {\mathcal {X}}\). By construction, \(\bigcup _{q\in St}out(q,s_C)\) is state-transition closed. \(\square \)

On the other hand, the convexity condition is no longer valid:

**Proposition 24**

There exists an iCGM \(M\) for which effectivity in memoryless strategies does not satisfy \(\mathfrak {NoMem}\)*-Convexity*. Formally, \(\mathcal {E}^\mathfrak {uNoMem} _M\) includes a family of state-transition closed global choices \(\{{\mathcal {X}}^{q} \in \mathcal {E}^\mathfrak {uNoMem} _M(C) \mid q \in St\}\) such that \(\big (\big (\bigcup _{q\in St}{\mathcal {X}}^{q}(q)\big )^{S}\big )^{P}\notin \mathcal {E}^\mathfrak {uNoMem} _M(C)\).

*Proof*

\(\mathcal {Y}^{S}(q_0) = \{q_0,q_1\}\)

\(\mathcal {Y}^{S}(q_1) = \{q_1,q_2\}\)

\(\mathcal {Y}^{S}(q_2) = \{q_1,q_2\}\).

As a consequence, the \(\mathfrak {NoMem} \)-Representation Theorem from Sect. 5.2 no longer holds for imperfect information:

**Corollary 6**

There are iCGMs whose path effectivity functions in uniform memoryless strategies are not state-transition closed.

#### 6.3.6 Properties of path effectivity under uncertainty: realizability in perfect recall strategies

Again, the grounding condition holds:

**Proposition 25**

For every iCGM \(M\) we have that effectivity in uniform perfect recall strategies satisfies \(\mathfrak {FulMem}\)-*Grounding*.

Formally, for every \({\mathcal {X}}\in \mathcal {E}^\mathfrak {uFulMem} _M(C)\) there exists \(\mathcal {Y}\in \mathcal {E}^\mathfrak {uFulMem} _M(C)\) such that \(\mathcal {Y}\subseteq {\mathcal {X}}\) and \(\mathcal {Y}\) is history-transition closed.

*Proof*

Analogous to Proposition 23. \(\square \)

On the other hand, the convexity condition is no longer valid:

**Proposition 26**

There exists an iCGM \(M\) for which effectivity in perfect recall strategies does not satisfy \(\mathfrak {FulMem}\)*-Convexity*. Formally, \(\mathcal {E}^\mathfrak {uFulMem} _M\) includes a family of history-transition closed choices \(\{{\mathcal {X}}^{h} \in \mathcal {E}^\mathfrak {uFulMem} _M(C) \mid h \in St^+ \}\) such that \(\big (\big (\bigcup _{h\in St^+}{\mathcal {X}}^{h}(h)\big )^{HS}\big )^{P}\notin \mathcal {E}^\mathfrak {uFulMem} _M(C)\).

*Proof*

\(\mathcal {Y}^{HS}(\dots q_0) = \{q_0,q_1\}\)

\(\mathcal {Y}^{HS}(\dots q_1) = \{q_1,q_2\}\)

\(\mathcal {Y}^{HS}(\dots q_2) = \{q_1,q_2\}\).

As a consequence, the \(\mathfrak {FulMem} \)-Representation Theorem from Sect. 5.3 no longer holds for imperfect information:

**Corollary 7**

There are iCGMs whose path effectivity functions in uniform perfect recall strategies are not history-transition closed.

*Summary* We have obtained a partial characterization of path effectivity in multi-step games of imperfect information. General playability conditions hold, as do the grounding conditions in both the memoryless and the perfect recall case. On the other hand, convexity fails for either type of uniform strategies. A complete characterization is outside the scope of this paper, and we leave a detailed study of sufficient realizability conditions under imperfect information for future research.

## 7 Conclusions

In this paper we have developed the idea of characterizing multi-player multi-step games in terms of which sets of outcomes—states or paths—coalitions can enforce by executing one or another collective strategy. These characterizations lead to respective notions of state-based and path-based coalition effectivity models. We believe the characterizations to be both conceptually important and technically interesting, as they extract the core game-theoretic “essence” from game models. They also provide alternative semantics for logics of such games, most notably for the game logics ATL and ATL*.

We show how the new characterizations can be applied to gain insight into properties of the well known stit models of agency. We also use path effectivity functions to highlight (and partially resolve) some technical issues arising in the semantics of ATL* for scenarios of incomplete and imperfect information. We would also like to point out that a better understanding of abstract realizability can lead to satisfiability checking procedures and complete axiomatic characterization for the variants of ATL where such results have not been established yet, e.g., for ATL* as well as all the variants of ATL/ATL* with imperfect information. We leave this final item for future work.

## Footnotes

- 1. Such actions are also called ‘strategies’ in normal form games, but we reserve the use of the term ‘strategy’ for a *global conditional plan* in a multi-step scenario.

- 2. Here we use the terms ‘agent’ and ‘player’ as synonyms, and use the term ‘coalition’ to refer to a set of agents that may be pursuing a common objective, but without assuming any explicit contract or even coordination between them.

- 3. Here we adhere to the assumption that the available strategies of one member of a coalition are independent of the actual choices of the other members.

- 4. Note that, unlike in the case of state effectivity functions, where the determinacy constraint is only needed for infinite state games (cf. [19]), it becomes essential here, because even very simple 2-state structures can generate uncountably many paths.

- 5. Later we will call such sets of paths *state-transition closed*, cf. Definition 21.

- 6. In the stit literature, such sequences are called *histories*, and their set is denoted by \(H\). We use the term *paths* here to be consistent with the terminology used throughout the paper.

- 7. Recall that on tree-like structures memoryless and perfect recall strategies coincide.

## Notes

### Acknowledgments

Wojciech Jamroga acknowledges the support of the National Research Fund (FNR) Luxembourg under the Project GALOT (INTER/DFG/12/06), as well as the support of the 7th Framework Programme of the European Union under the Marie Curie IEF Project ReVINK (PIEF-GA-2012-626398). Valentin Goranko partly worked on this paper during his visit to the Centre International de Mathématiques et Informatique de Toulouse. The authors also thank the anonymous reviewers of JAAMAS for their useful comments.

### References

- 1. Abdou, J., & Keiding, H. (1991). *Effectivity functions in social choice*. Heidelberg: Springer.
- 2. Ågotnes, T., Goranko, V., & Jamroga, W. (2007). Alternating-time temporal logics with irrevocable strategies. In D. Samet (Ed.), *Proceedings of TARK XI* (pp. 15–24).
- 3. Alechina, N., Logan, B., Nga, N., & Rakib, A. (2009). A logic for coalitions with bounded resources. In *Proceedings of IJCAI* (pp. 659–664).
- 4. Alur, R., Henzinger, T. A., & Kupferman, O. (1997). Alternating-time temporal logic. In *Proceedings of the 38th Annual Symposium on Foundations of Computer Science (FOCS)* (pp. 100–109). Los Alamitos: IEEE Computer Society Press.
- 5. Alur, R., Henzinger, T. A., & Kupferman, O. (1998). Alternating-time temporal logic. *Lecture Notes in Computer Science*, *1536*, 23–60.
- 6. Alur, R., Henzinger, T. A., & Kupferman, O. (2002). Alternating-time temporal logic. *Journal of the ACM*, *49*, 672–713. doi:10.1145/585265.585270.
- 7. Belnap, N., & Perloff, M. (1988). Seeing to it that: A canonical form for agentives. *Theoria*, *54*(3), 175–199.
- 8. Belnap, N., Perloff, M., & Xu, M. (2001). *Facing the future: Agents and choices in our indeterminist world*. Oxford: Oxford University Press.
- 9. Boros, E., Elbassioni, K., Gurvich, V., & Makino, K. (2010). On effectivity functions of game forms. *Games and Economic Behavior*, *68*(2), 512–531.
- 10. Broersen, J. (2011). Deontic epistemic stit logic distinguishing modes of mens rea. *Journal of Applied Logic*, *9*(2), 127–152.
- 11. Broersen, J., Herzig, A., & Troquard, N. (2006). Embedding alternating-time temporal logic in strategic STIT logic of agency. *Journal of Logic and Computation*, *16*(5), 559–578.
- 12. Bulling, N., & Jamroga, W. (2014). Comparing variants of strategic ability. *Journal of Autonomous Agents and Multi-Agent Systems*, *28*(3), 474–518.
- 13. Ciuni, R., & Zanardo, A. (2010). Completeness of a branching-time logic with possible choices. *Studia Logica*, *96*(3), 393–420.
- 14. Emerson, E., & Halpern, J. (1986). “Sometimes” and “not never” revisited: On branching versus linear time temporal logic. *Journal of the ACM*, *33*(1), 151–178.
- 15. Emerson, E. A. (1983). Alternative semantics for temporal logics. *Theoretical Computer Science*, *26*, 121–130.
- 16. Goranko, V. (2001). Coalition games and alternating temporal logics. In J. van Benthem (Ed.), *Proceedings of TARK VIII* (pp. 259–272). Siena: Morgan Kaufmann.
- 17. Goranko, V., & Jamroga, W. (2004). Comparing semantics of logics for multi-agent systems. *Synthese*, *139*(2), 241–280.
- 18. Goranko, V., & Jamroga, W. (2012). State and path effectivity models for logics of multi-player games. In *Proceedings of AAMAS* (pp. 1123–1130).
- 19. Goranko, V., Jamroga, W., & Turrini, P. (2013). Strategic games and truly playable effectivity functions. *Journal of Autonomous Agents and Multi-Agent Systems*, *26*(2), 288–314.
- 20. Herzig, A., & Troquard, N. (2006). Knowing how to play: Uniform choices in logics of agency. In *Proceedings of AAMAS’06* (pp. 209–216).
- 21. van der Hoek, W., & Wooldridge, M. (2002). Tractable multiagent planning for epistemic goals. In C. Castelfranchi & W. Johnson (Eds.), *Proceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-02)* (pp. 1167–1174). New York: ACM Press.
- 22. Horty, J., & Belnap, N. (1995). The deliberative stit: A study of action, omission, ability, and obligation. *Journal of Philosophical Logic*, *24*, 583–644.
- 23. Horty, J. F. (2001). *Agency and deontic logic*. Oxford: Oxford University Press.
- 24. Lorini, E. (2013). Temporal stit logic and its application to normative reasoning. *Journal of Applied Non-Classical Logics*, *23*(4), 372–399.
- 25. Moulin, H., & Peleg, B. (1982). Cores of effectivity functions and implementation theory. *Journal of Mathematical Economics*, *10*(1), 115–145.
- 26. Osborne, M., & Rubinstein, A. (1994). *A course in game theory*. Cambridge: MIT Press.
- 27. Pauly, M. (2001). *Logic for social software*. Ph.D. thesis, University of Amsterdam, Amsterdam.
- 28. Pauly, M. (2001). A logical framework for coalitional effectivity in dynamic procedures. *Bulletin of Economic Research*, *53*(4), 305–324.
- 29. Pauly, M. (2002). A modal logic for coalitional power in games. *Journal of Logic and Computation*, *12*(1), 149–166.
- 30. Peleg, B. (1997). Effectivity functions, game forms, games, and rights. *Social Choice and Welfare*, *15*(1), 67–80.
- 31. Peleg, B. (1998). Effectivity functions, game forms, games, and rights. *Social Choice and Welfare*, *15*, 67–80.
- 32. Rosenthal, R. (1972). Cooperative games in effectiveness form. *Journal of Economic Theory*, *5*, 88–101.
- 33. Schobbens, P. Y. (2004). Alternating-time logic with imperfect recall. *Electronic Notes in Theoretical Computer Science*, *85*(2), 82–93.
- 34. Storcken, T. (1997). Effectivity functions and simple games. *International Journal of Game Theory*, *26*, 235–248.
- 35. Wooldridge, M. (2002). *An introduction to multi agent systems*. Chichester: John Wiley & Sons.

## Copyright information

**Open Access**This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.