Abstract
We consider models of multiplayer games where the abilities of players and coalitions are defined in terms of sets of outcomes which they can effectively enforce. We extend the well-studied state effectivity models of one-step games in two different ways. On the one hand, we develop multiple state effectivity functions associated with different long-term temporal operators. On the other hand, we define and study coalitional path effectivity models, where the outcomes of strategic plays are infinite paths. For both extensions we obtain representation results with respect to concrete models arising from concurrent game structures. We also apply state and path coalitional effectivity models to provide alternative, arguably more natural and elegant semantics for the alternating-time temporal logic ATL*, and discuss their technical and conceptual advantages.
Introduction
A wide variety of multiplayer games can be modeled by so-called ‘multiplayer game models’ [16, 29], a.k.a. ‘concurrent game models’ [6]. These models can be seen as a generalization of both extensive form games and repeated normal form games. Here, we view them as general models of multi-step games. Intuitively, such a game is based on a labelled transition system where every state is associated with a normal form game whose outcomes are possible successor states, and the transitions between states are labelled by tuples of actions, one for each player. Thus, the outcome of playing the normal form game at any given state is a transition to a new state, and hence to a new normal form game. In the quantitative version of such games, the outcome states are also associated with payoff vectors, while in the version that we consider here the payoffs are qualitative, defined by properties of the outcome states, possibly expressed in a logical language. The players’ objectives in multi-step games can simply be about reaching a desired (‘winning’) state, or they can be more involved, such as forcing a desired long-term behaviour (transition path, run), again possibly formalized in a suitable logical language such as the linear time temporal logic LTL.
Various logics for reasoning about coalitional abilities in multiplayer games have been proposed and studied in the last two decades, most notably Coalition Logic (CL) [27] and Alternating-time Temporal Logic (ATL* and its fragment ATL) [6]. Coalition Logic can be seen as a logic for reasoning about the abilities of coalitions in one-step games to bring about an outcome state with desired properties by means of single actions. On the other hand, ATL and ATL* allow one to express statements about multi-step scenarios. For example, the ATL formula \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {F}\varphi \) says that the coalition of players or agents \(C\) can ensure that \(\varphi \) will become true at some future moment, no matter what the other players do. Likewise, \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {G}\varphi \) expresses that the coalition \(C\) can enforce \(\varphi \) to be always the case. More generally, the ATL* formula \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\gamma \) holds true iff \(C\) has a strategy to ensure that any resulting behavior of the system (i.e., any play of the game) will satisfy the property \(\gamma \).
One way to characterize the abilities of players and coalitions to achieve desirable outcomes of the game is in terms of coalition effectivity functions, first introduced in cooperative game theory [25]. Intuitively, an effectivity function in a game model assigns, at every state of the model and for every coalition \(C\), the family of sets of possible outcomes \(X\) for which the coalition has a suitable collective action. The collective action must guarantee that the outcome will be in the set \(X\) regardless of what the other players choose to do at that state, i.e., that \(C\) is “effective” for the set \(X\) at that state. This concept is at the core of the “coalition effectivity models” studied in [27] and used there to provide semantics for CL. “Alternating transition systems”, originally used to provide semantics for ATL in [4], are closely related. Building on a result from [30], Pauly obtained in [27] an abstract characterization of “playable” coalition effectivity functions that correspond to the \(\alpha\)-effectivity functions in concrete models of one-step games. Later, that characterization was corrected and completed for the case of infinite state spaces in [19].
In this paper we study how multi-step games can be modeled and characterized in terms of the effectivity of coalitions with respect to possible outcome states, on the one hand, and possible outcome behaviours, on the other. We also show how such models can be used to provide conceptually simple and technically elegant semantics for logics of multiplayer games such as ATL*. The paper has three main objectives:

(i)
To extend the semantics for CL based on one-step coalitional effectivity to semantics for ATL over state-based coalitional effectivity models;

(ii)
To develop the analogous notion of coalitional path effectivity, representing the powers of coalitions in multi-step games to ensure long-term behaviors, and to provide semantics for ATL* based on it;

(iii)
To obtain characterizations of multiplayer game models in terms of abstract state and path coalitional effectivity models, analogous to the representation theorems for state effectivity functions cited above.
We argue that characterizing the effectivity of coalitions in multi-step games in terms of paths (cf. points (ii) and (iii) above) is conceptually more natural and elegant than in terms of outcome states, in several respects. First, collective strategies in such games generate outcome paths (plays), not just outcome states. Second, a single path effectivity function suffices to define the powers of coalitions in a multi-step game for all kinds of temporal patterns, through the standard semantics of temporal operators. This point is further supported by the fact that path effectivity models provide a conceptually straightforward semantics for the whole language of ATL* (which is not definable by alternation-free fixpoint operators on the one-step ability). Thus, the path-effectivity-based semantics plays for multi-step games essentially the role that the state-effectivity-based semantics plays for one-shot games. By encapsulating the notion of a play as primitive, it provides a clear and conceptually simple interpretation of the ATL(*) operators. Finally, we argue that path effectivity can just as well be applied to variants of ATL(*) with imperfect information, where even simple modalities do not have fixpoint characterizations [12].
Motivation Effectivity functions provide mathematically elegant semantics of interaction between agents, in which properties of interaction are “distilled” and abstracted away from concrete details of implementation. This makes them significantly different from concurrent game models that focus on how concrete actions interfere and give rise to transitions, and how they can be used to build longterm strategies. In contrast, coalitional effectivity models present abilities in a “pure” form. This does not mean that effectivity models are supposed to replace concurrent game models in the semantics of logics like ATL. On the contrary, the two kinds of structures occupy largely different niches. Concrete models of interaction (such as concurrent game models) are more appropriate when one wants to build a model of an actual system, and possibly verify some actual requirements in it. Abstract models (such as coalitional effectivity models) serve better when used to investigate properties of classes of systems. Moreover, correspondence results between concrete and abstract models reveal structural properties of the former in a way that is difficult to achieve otherwise.
Such correspondence results are important for several reasons:

First of all, they characterize the limitations of concrete models. That is, they show which structural conditions must inevitably hold in simple models that are constructed in terms of concrete states, actions, and their combinations.

Secondly, they characterize which abstract patterns of effectivity can be implemented by concrete models.

Thirdly, they characterize classes of models for which the concrete and abstract semantics of strategic logics can be used interchangeably.
To make the motivation more tangible, we apply the characterizations obtained in this paper to gain insight into properties of two other classes of structures. In Sect. 6.2, we apply our results to the well-known models of “seeing to it that” (stit). We show that stit models are too general and too restricted at the same time. On the one hand, the stit framework allows for models that are not playable, in the sense that they cannot be implemented by concrete games. On the other hand, stit models accept only a very limited palette of coalitional ability patterns. Both features follow immediately from our characterization results from Sect. 5, which demonstrates the analytical power of the results. Moreover, in Sect. 6.3, we use path effectivity functions to expose properties of imperfect information scenarios, encoded in imperfect information concurrent game models (iCGMs).
Related work We study the correspondence between patterns of coalitional effectivity and standard models of long-term interaction, which are typically used in the field of multi-agent systems (cf. e.g. [17, 35]). Effectivity models originate from social choice theory [1, 25, 32]. More recently, they have gained attention as models of ability in agent systems [27, 28]. On the other hand, multi-agent systems are often modeled by various kinds of transition systems [6, 17, 21, 27] that bear close resemblance to models of multi-step and repeated games from game theory. Multiplayer game models (a.k.a. concurrent game structures) are the most typical example here.
Correspondence between “concrete” and “abstract” models of strategic power has been studied in a number of previous works. Characterizations of effectivity in simple cooperative games (voting games) were investigated e.g. in [25, 34]. Peleg and others characterized effectivity patterns arising in surjective normal form game frames [9, 31]. Pauly extended Peleg’s result to general normal form game frames, and provided a logical axiomatization of effectivity in such frames [27, 28]. In our previous work, we pointed out that Pauly’s result was in fact incorrect, and gave the correct characterization of the correspondence, both in structural and logical terms [19]. All the above results refer to one-shot games (either cooperative or non-cooperative) where strategies are atomic.
While most models of multi-agent interaction are based on transition systems that resemble normal and/or extensive game frames, there is a smaller group of models that come closer to effectivity functions. In fact, alternating transition systems (ATS) from [5] can be seen as a special case of coalitional effectivity models where the aggregation of individual powers into coalitional power is additive. The correspondence between ATS and multiplayer game models was studied in [16, 17]. Another class of effectivity-like models is provided by stit, i.e., the logic of “seeing to it that” [7]. Models of “strategic stit” [8, 11, 20, 23] are especially relevant here. In classical stit models [8, 23], choices are primitive objects rather than sets of paths (which in turn are sequences of states constructed by discrete transitions). Still, in the more computation-friendly approaches to stit, choices can be directly mapped to infinite sequences of time moments [11, 20, 22], so they come very close to the effectivity patterns studied in this paper. Depending on the interpretation, they can be seen as classes of path effectivity functions or state effectivity functions. However, not all effectivity patterns can be represented by stit models. Moreover, some of the patterns that can be represented are not “playable”, i.e., they cannot be obtained in natural multi-step games. We investigate the relationship between stit models and effectivity models in more detail in Sect. 6.2. It is worth noting that, to the best of our knowledge, this is the first formal study of the modeling limitations of stit. Some simulation results connect stit structures to multiplayer game models [11] but they focus on their logical rather than structural properties.
This article builds on the preliminary research reported in [18].
Structure of the paper The paper is structured as follows. We begin by introducing the basic notions in Sect. 2. In Sect. 3 we develop state-based effectivity models that suffice to define the semantics of ATL. The models include three different effectivity functions, one for each basic modality \(\mathrm {X},\mathrm {G},\mathrm {U}\). Then, in Sect. 4 we develop and study effectivity models based on paths. We show how they provide semantics for ATL*, and identify appropriate “playability” conditions, which we use in Sect. 5 to establish correspondences between the powers of coalitions in the abstract models and the strategic abilities of coalitions in concurrent game models. Finally, in Sect. 6 we briefly discuss how the path-oriented view can be used to construct an alternative definition of state effectivity, and to facilitate reasoning about games with imperfect information. Moreover, we show an application of our characterization results to the well-known stit models of agency.
Preliminaries
We begin by introducing some basic gametheoretic and logical notions. In all definitions hereafter, the sets of players, game (outcome) states, and actions available to players are assumed nonempty. Moreover, the set of players is always assumed finite.
Concurrent game structures and models
Strategic games (a.k.a. normal form games) are basic models of non-cooperative game theory [26]. Following the tradition in the qualitative study of games, we focus on abstract game models, where the effect of strategic interaction between players is represented by abstract outcomes from a given set, and players’ preferences are not specified.
Definition 1
(Strategic game) A strategic game is a tuple
$$G = \left({\mathbb {A}\mathrm {gt}},\ St,\ \{Act_i\}_{i\in {\mathbb {A}\mathrm {gt}}},\ o\right)$$
consisting of a set of players (agents) \({\mathbb {A}\mathrm {gt}}\), a set of outcome states \(St\), a set of actions (atomic strategies) \(Act_i\) for each player \(i \in {\mathbb {A}\mathrm {gt}}\), and an outcome function \(o: \prod _{i\in {\mathbb {A}\mathrm {gt}}} Act_i \rightarrow St\) which associates an outcome with every action profile.
We define coalitional strategies \(\alpha _C\) in \(G\) as tuples of individual strategies \(\alpha _i\) for \(i\in C\), i.e., \(Act_C=\prod _{i\in C}Act_i\).
Strategic games are one-step encounters. They can be generalized to multi-step scenarios, in which every state is associated with a strategic game, as follows.
Definition 2
(Concurrent game structures and models) A concurrent game structure (CGS) (a.k.a. multiplayer game frame [16, 29]) is a tuple
$$\left({\mathbb {A}\mathrm {gt}},\ St,\ Act,\ d,\ o\right)$$
which consists of a set of players \({\mathbb {A}\mathrm {gt}}= \{{1,\dots ,k}\}\), a set of states \(St\), a set of (atomic) actions \(Act\), a function \(d : {\mathbb {A}\mathrm {gt}}\times St\rightarrow \mathcal {P}({Act})\) that assigns a set of actions available to each player at each state, and a deterministic transition function \(o\) that assigns a unique outcome state \(o(q,\alpha _1,\dots ,\alpha _k)\) to every state \(q\) and every tuple of actions \(\langle \alpha _1, \dots , \alpha _k\rangle \), \(\alpha _i \in d(i,q)\), that can be executed by \({\mathbb {A}\mathrm {gt}}\) in \(q\).
A concurrent game model (CGM) \(M\) is a CGS endowed with a valuation \(V:St\rightarrow \mathcal {P}({Prop})\) for some fixed set of atomic propositions \(Prop\).
Note that in a CGS all players execute their actions synchronously and the combination of the actions, together with the current state, determines the transition in the CGS. We also observe that a CGS can be seen as a collection of strategic games, each assigned to a different state in the CGS.
Example 1
(A model of aggressive play) Consider two agents interacting in a common environment, for instance marketing similar products, building up reputation in a social network, or playing the same strategic online game. At any moment, each of them can choose to play aggressively (\(aggr\)) or conservatively (\(cons\)). It is well known that in many games (economic as well as recreational) playing aggressively against a conservative opponent is risky but—if lucky—it can also bring higher profits. Thus, it is usually advisable to play aggressively when one’s situation is relatively bad. If the player’s position is strong, conservative play is usually a better choice.
A very simple model of the scenario is presented in Fig. 1. Propositions \(\mathsf {{good_1}}\) (resp. \(\mathsf {{good_2}}\)) label states where player \(1\)’s (resp. \(2\)’s) situation is good. Of course, the CGM is not meant as a serious formalization of aggressive and conservative play. We will only need it to demonstrate how coalitional effectivity arises in multiplayer games with longterm interaction.
Strategies in multi-step games A path in a CGS/CGM is an infinite sequence of states that can result from subsequent transitions in the structure. A strategy of a player \(a\) in a CGS/CGM \({\mathcal {M}}\) is a conditional plan that specifies what \(a\) should do in each possible situation. Depending on the type of memory that we assume for the players, a strategy can range from a memoryless (positional) one, formally represented by a function \(s_a : St\rightarrow Act\) such that \(s_a(q)\in d_a(q)\) (where we write \(d_a(q)\) for \(d(a,q)\)), to a perfect recall strategy, represented by a function \(s_a : St^{+}\rightarrow Act\) such that \(s_a(\langle \dots , q\rangle )\in d_a(q)\), where \(St^{+}\) is the set of histories, i.e., finite prefixes of paths in \({\mathcal {M}}\) [6, 33]. The latter corresponds to players with perfect recall of the past states; the former to players whose memory is entirely encoded in the current state of the system. A collective strategy for a group of players \(C=\{{a_1,\dots,a_r}\}\) is simply a tuple of strategies \(s_C = \langle {s_{a_1},\dots,s_{a_r}}\rangle \), one for each player from \(C\). We denote player \(a\)’s component of the collective strategy \(s_C\) by \(s_C[a]\).
We define the function \(out(q,s_C)\) to return the set of all paths \(\lambda \in St^\omega \) that can be realised when the players in \(C\) follow the strategy \(s_C\) from state \(q\) onward. Formally, for memoryless strategies, it can be defined as below:

\(out(q,s_C) =\) \(\{ \lambda =q_0,q_1,q_2\ldots \mid q_0=q\) and for each \(i=0,1,\ldots \) there exists \(\langle {\alpha ^{i}_{a_1},\ldots ,\alpha ^{i}_{a_k}}\rangle \) such that \(\alpha ^{i}_{a} \in d_a(q_{i})\) for every \(a\in {\mathbb {A}\mathrm {gt}}\), \(\alpha ^{i}_{a} = s_C[a](q_{i})\) for every \(a\in C\) and \(q_{i+1} = o(q_{i},\alpha ^{i}_{a_1},\ldots ,\alpha ^{i}_{a_k}) \}\).
The definition for perfect recall strategies is analogous:

\(out(q,s_C) =\) \(\{ \lambda =q_0,q_1,q_2\ldots \mid q_0=q\) and for each \(i=0,1,\ldots \) there exists \(\langle {\alpha ^{i}_{a_1},\ldots ,\alpha ^{i}_{a_k}}\rangle \) such that \(\alpha ^{i}_{a} \in d_a(q_{i})\) for every \(a\in {\mathbb {A}\mathrm {gt}}\), \(\alpha ^{i}_{a} = s_C[a](\langle q_0\ldots , q_{i}\rangle )\) for every \(a\in C\) and \(q_{i+1} = o(q_{i},\alpha ^{i}_{a_1},\ldots ,\alpha ^{i}_{a_k}) \}\).
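Since the paths in \(out(q,s_C)\) are infinite, only their finite prefixes can be enumerated directly. The following minimal Python sketch does this for memoryless strategies on a small hypothetical two-state CGS; the structure, state names, and actions below are illustrative assumptions of ours, not the model of Fig. 1.

```python
from itertools import product

# A toy CGS: two players, two states, actions 'aggr'/'cons' everywhere.
# d[(player, state)] -> available actions; o[(state, joint_action)] -> successor.
St = ["q0", "q1"]
Agt = [1, 2]
d = {(a, q): ["aggr", "cons"] for a in Agt for q in St}
o = {(q, acts): ("q1" if acts == ("aggr", "aggr") else "q0")
     for q in St for acts in product(["aggr", "cons"], repeat=2)}

def out_prefixes(q, s_C, depth):
    """All length-`depth` prefixes of paths in out(q, s_C), for a memoryless
    collective strategy s_C given as {player: (state -> action)}."""
    prefixes = [[q]]
    for _ in range(depth):
        nxt = []
        for pre in prefixes:
            cur = pre[-1]
            # players in C follow s_C; the others may pick any available action
            choices = [[s_C[a](cur)] if a in s_C else d[(a, cur)] for a in Agt]
            for joint in product(*choices):
                nxt.append(pre + [o[(cur, joint)]])
        prefixes = nxt
    return {tuple(p) for p in prefixes}

# If player 1 always plays conservatively, the joint action (aggr, aggr)
# never occurs, so every outcome path stays in q0 forever.
s1 = {1: (lambda q: "cons")}
print(out_prefixes("q0", s1, 2))  # → {('q0', 'q0', 'q0')}
```

With the empty coalition (`s_C = {}`) all players are unconstrained, and the prefixes branch over every joint action, matching the universal quantification over the opponents in the definition above.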
Abstract models of coalitional effectivity
Definition 3
(Effectivity functions and models) A local effectivity function \(E: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})})\) associates a family of sets of states with each set of players. A global effectivity function \(E: St\times \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})})\) assigns a local effectivity function to every state \(q\in St\). We will use the notations \(E(q)(C)\) and \(E_q(C)\) interchangeably.
Finally, a coalitional effectivity model consists of a global effectivity function, plus a valuation of atomic propositions.
Intuitively, the elements of \(E(C)\) correspond to choices of collective actions available to the coalition \(C\): if \(X \in E(C)\) then by choosing
\(X\) the coalition \(C\) can force the outcome of the game to be in \(X\). Hereafter, the elements of \(E(C)\) will be called (collective) action choices of the coalition \(C\). The idea to represent a choice (of a collective action) of a coalition by the set of possible outcomes which can be effected by that choice was also captured by the notions of “coalition effectivity models” [27] and “alternating transition systems” [4].
Definition 4
(True playability [19, 27]) A local effectivity function \(E\) is truly playable iff the following hold:

Outcome Monotonicity: \(X \in E(C)\) and \(X \subseteq Y\) implies \(Y \in E(C)\);

Liveness: \(\emptyset \notin E(C)\);

Safety: \(St\in E(C)\);

Superadditivity: if \(C \cap D = \emptyset \), \(X \in E(C)\) and \(Y \in E(D)\), then \(X \cap Y \in E(C \cup D)\);

\({\mathbb {A}\mathrm {gt}}\) Maximality: \(\overline{X} \not \in E(\emptyset )\) implies \(X \in E({\mathbb {A}\mathrm {gt}})\);

Determinacy: if \(X \in E({\mathbb {A}\mathrm {gt}})\) then \(\{x\} \in E({\mathbb {A}\mathrm {gt}})\) for some \(x \in X\).
A global effectivity function is truly playable iff it consists only of local functions that are truly playable.
\(\alpha\)-Effectivity Each strategic game \(G\) can be canonically associated with an effectivity function, called the \(\alpha\)-effectivity function of \(G\) and denoted by \(E^{\alpha }_{G}\) [27].
Definition 5
(\(\alpha\)-effectivity in strategic games) For a strategic game \(G\), the (coalitional) \(\alpha\)-effectivity function \(E^{\alpha }_{G}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})})\) is defined as follows: \(X \in E^{\alpha }_{G}(C)\) if and only if there exists \(\sigma _C\) such that for all \(\sigma _{\overline{C}}\) we have \(o(\sigma _C,\sigma _{\overline{C}}) \in X\), where \(\overline{C} = {\mathbb {A}\mathrm {gt}}\setminus C\).
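On finite games, Definition 5 can be computed by brute force. The Python sketch below (our own encoding; the example game and its outcome names are illustrative assumptions) enumerates, for each joint action \(\sigma_C\), the outcomes that the opponents can still reach, and closes the result under supersets, in line with outcome monotonicity.

```python
from itertools import product, combinations

def alpha_effectivity(players, actions, outcome, coalition):
    """Compute E^alpha_G(C): all sets X of outcome states for which some
    joint action of C guarantees the outcome lands in X, whatever the
    opponents do. `outcome` maps full action profiles (tuples ordered as
    `players`) to outcome states."""
    opponents = [i for i in players if i not in coalition]
    states = sorted(set(outcome.values()))
    all_subsets = [frozenset(c) for r in range(len(states) + 1)
                   for c in combinations(states, r)]
    effectivity = set()
    for sigma_C in product(*(actions[i] for i in coalition)):
        fixed = dict(zip(coalition, sigma_C))
        reachable = set()
        for sigma_opp in product(*(actions[i] for i in opponents)):
            profile = {**dict(zip(opponents, sigma_opp)), **fixed}
            reachable.add(outcome[tuple(profile[i] for i in players)])
        # C is alpha-effective for every superset of `reachable`
        effectivity.update(X for X in all_subsets if reachable <= X)
    return effectivity

# A matching-pennies-like game: equal choices favour player 1.
players = (1, 2)
actions = {1: ["a", "b"], 2: ["a", "b"]}
outcome = {("a", "a"): "w1", ("a", "b"): "w2",
           ("b", "a"): "w2", ("b", "b"): "w1"}
print(alpha_effectivity(players, actions, outcome, (1,)))
```

In this game each player alone is \(\alpha\)-effective only for the full outcome set, while the grand coalition is effective for every nonempty set of outcomes.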
Example 2
The \(\alpha\)-effectivity function for \(M_1,q_0\) is:
\(E(\{1,2\})\ =\ \{\{q_0\}, \{q_1\}, \{q_2\}, \{q_0,q_1\}, \{q_0,q_2\}, \{q_1,q_2\}, \{q_0,q_1,q_2\}\}\);
\(E(\{1\})\ =\ E(\{2\})\ =\ \{\{q_0,q_1\}, \{q_0,q_2\}, \{q_0,q_1,q_2\}\}\);
\(E(\emptyset )\ =\ \{\{q_0,q_1,q_2\}\}\).
Clearly, \(E\) is truly playable.
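The claim can also be verified mechanically. Below is a small Python check of the six true playability conditions of Definition 4 against the \(\alpha\)-effectivity function listed in Example 2; the encoding is ours, while the state and player names follow the example.

```python
from itertools import combinations

def subsets(s):
    s = sorted(s, key=str)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def truly_playable(E, St, Agt):
    """Check Definition 4 for a local effectivity function E, given as a
    dict: coalition (frozenset) -> family of outcome sets (frozensets)."""
    grand = frozenset(Agt)
    for C in subsets(Agt):
        # Outcome monotonicity
        if any(X in E[C] and X <= Y and Y not in E[C]
               for X in subsets(St) for Y in subsets(St)):
            return False
        # Liveness and Safety
        if frozenset() in E[C] or frozenset(St) not in E[C]:
            return False
        # Superadditivity
        for D in subsets(Agt):
            if not (C & D) and any(X & Y not in E[C | D]
                                   for X in E[C] for Y in E[D]):
                return False
    # Agt-maximality: complement(X) not in E(∅) implies X in E(Agt)
    if any(frozenset(St) - X not in E[frozenset()] and X not in E[grand]
           for X in subsets(St)):
        return False
    # Determinacy: every X in E(Agt) contains a forceable singleton
    return all(any(frozenset({x}) in E[grand] for x in X) for X in E[grand])

# The alpha-effectivity function of Example 2, written out in full.
fs = lambda *xs: frozenset(xs)
St, Agt = {"q0", "q1", "q2"}, {1, 2}
E = {fs(1, 2): {fs("q0"), fs("q1"), fs("q2"), fs("q0", "q1"),
                fs("q0", "q2"), fs("q1", "q2"), fs("q0", "q1", "q2")},
     fs(1): {fs("q0", "q1"), fs("q0", "q2"), fs("q0", "q1", "q2")},
     fs(2): {fs("q0", "q1"), fs("q0", "q2"), fs("q0", "q1", "q2")},
     fs(): {fs("q0", "q1", "q2")}}
print(truly_playable(E, St, Agt))  # → True
```

Dropping, say, \(St\) from \(E(\emptyset)\) makes the Safety check fail, so the function also serves as a quick counterexample detector.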
Theorem 1
(Representation Theorem [19, 27, 30]) A local effectivity function E is truly playable if and only if there exists a strategic game \(G\) such that \(E^{\alpha }_G=E\).
Logical reasoning about multi-step games
The Alternating-time Temporal Logic ATL* [4, 6] is a multimodal logic with strategic modalities \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\) and temporal operators \(\mathrm {X}\) (“at the next state”), \(\mathrm {G}\) (“always from now on”), and \(\mathrm {U}\) (“until”).
There are two types of formulae of ATL*, state formulae \(\varphi\) and path formulae \(\gamma\), respectively defined by the following grammar:
$$\varphi {:}{:}= \mathsf {p} \mid \lnot \varphi \mid \varphi \wedge \varphi \mid \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\gamma , \qquad \gamma {:}{:}= \varphi \mid \lnot \gamma \mid \gamma \wedge \gamma \mid \mathrm {X}\gamma \mid \mathrm {G}\gamma \mid \gamma \,\mathrm {U}\,\gamma ,$$
for \(C\subseteq {\mathbb {A}\mathrm {gt}},\mathsf {{p}}\in Prop\). The temporal operator \(\mathrm {F}\) (“sometime in the future”) can be defined as \(\mathrm {F}\varphi \equiv \top \mathrm {U}\varphi \).
Let \(M\) be a CGM, \(q\) a state in \(M\), and \(\lambda = q_{0}, q_{1}, \ldots \) a path in \(M\). For every \(i \in {\mathbb {N}}\) we denote \(\lambda [i] = q_{i}\); \(\lambda [0..i]\) is the prefix \(q_{0}, q_{1}, \ldots , q_{i}\), and \(\lambda [i..\infty ]\) is the respective suffix of \(\lambda \).
The semantics of ATL* is given by the following clauses [6]:

\(M,q\models \mathsf {p}\) iff \(q\in V(\mathsf {{p}})\), for \(\mathsf {{p}}\in Prop\);

\(M,q\models \lnot \varphi \) iff \(M,q\not \models \varphi \);

\(M,q\models \varphi _1\wedge \varphi _2\) iff \(M,q\models \varphi _1\) and \(M,q\models \varphi _2\);

\(M,q\models \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\gamma \) iff there is a strategy \(s_C\) for the players in \(C\) such that for each path \(\lambda \in out(q,s_C)\) we have \(M,\lambda \models \gamma \).

\(M,\lambda \models \varphi \) iff \(M,\lambda [0]\models \varphi \), for state formulae \(\varphi \);

\(M,\lambda \models \lnot \gamma \) iff \(M,\lambda \not \models \gamma \);

\(M,\lambda \models \gamma _1\wedge \gamma _2\) iff \(M,\lambda \models \gamma _1\) and \(M,\lambda \models \gamma _2\);

\(M,\lambda \models \mathrm {X}\gamma \) iff \(M,\lambda [1..\infty ]\models \gamma \);

\(M,\lambda \models \mathrm {G}\gamma \) iff \(M,\lambda [i..\infty ]\models \gamma \) for every \(i\ge 0\); and

\(M,\lambda \models \gamma _1\,\mathrm {U}\,\gamma _2\) iff there is \(i\ge 0\) such that \(M,\lambda [i..\infty ]\models \gamma _2\) and \(M,\lambda [j..\infty ]\models \gamma _1\) for all \(0\le j< i\).
Example 3
Consider again the model of aggressive vs. conservative play from Fig. 1. No player has a sure strategy to reach a good position in the game if they start from a bad position. That is, and . Also, no player can ensure that the other player will eventually be at disadvantage: and for all states \(q\). On the other hand, if the player’s initial position is good, she can keep being well off forever (e.g., ); the right strategy is to always play conservatively. Moreover, when both players are in a good position, each of them can maintain the good position of the other one in the next moment (by playing aggressively): and . Finally, if the players cooperate then they control the game completely: we have for all states \(q\).
ATL and CL as fragments of ATL* The most important fragment of ATL* is ATL, where each strategic modality is directly followed by a single temporal operator. Thus, the semantics of ATL can be given entirely in terms of states, cf. [6] for details. Consequently, for ATL the two notions of strategy (memoryless vs. perfect recall) yield the same semantics.
Furthermore, the Coalition Logic (CL) from [27] can be seen as the fragment of ATL involving only booleans and operators \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {X}\), and thus it inherits the semantics of ATL on CGMs [16].
State effectivity in multi-step games
An alternative semantics of CL was given in [27] in terms of the effectivity models defined in Sect. 2.2, via the following clause:

\(M,q\models \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {X}\varphi \) iff \(\varphi ^M\in E_q(C)\), where \(\varphi ^M = \{q'\in St\mid M,q'\models \varphi \}\) is the extension of \(\varphi \) in \(M\).
It is easy to see that the CGM-based and the effectivity-based semantics of CL coincide on truly playable models.
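For the next-time modality this coincidence is easy to test directly. The sketch below evaluates \(\langle\!\langle C\rangle\!\rangle\mathrm{X}\,p\) both by the CGM clause and by the effectivity clause, and checks that they agree; the three-state CGM is a hypothetical example of ours, not the model of Fig. 1.

```python
from itertools import product, combinations

Agt = (1, 2)
St = ["q0", "q1", "q2"]
acts = {1: ["a", "b"], 2: ["a", "b"]}
# Hypothetical transitions: the same local game is played at every state.
succ = {("a", "a"): "q0", ("a", "b"): "q1", ("b", "a"): "q1", ("b", "b"): "q2"}
p_states = frozenset({"q1", "q2"})  # extension of the atom p

def forceable_sets(q, C):
    """One successor set per joint action of C: the outcomes of all
    possible completions by the opponents."""
    opp = [i for i in Agt if i not in C]
    sets = []
    for sC in product(*(acts[i] for i in C)):
        fixed = dict(zip(C, sC))
        sets.append({succ[tuple({**dict(zip(opp, sO)), **fixed}[i] for i in Agt)]
                     for sO in product(*(acts[i] for i in opp))})
    return sets

def sat_cgm(q, C, goal):
    # CGM clause for <<C>>X p: some joint action of C such that every
    # completion by the opponents leads into `goal`
    return any(R <= goal for R in forceable_sets(q, C))

def sat_eff(q, C, goal):
    # Effectivity clause: goal ∈ E_q(C), with E_q built explicitly as the
    # superset closure of the forceable sets (alpha-effectivity at q)
    all_subsets = [frozenset(c) for r in range(len(St) + 1)
                   for c in combinations(St, r)]
    E_qC = {X for X in all_subsets if any(R <= X for R in forceable_sets(q, C))}
    return goal in E_qC

for q in St:
    for C in [(), (1,), (2,), (1, 2)]:
        assert sat_cgm(q, C, p_states) == sat_eff(q, C, p_states)
```

Here player 1 alone can enforce \(\mathrm{X}\,p\) (by playing `b`), while the empty coalition cannot, and both semantics report this identically.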
The semantics of ATL has never been explicitly defined in terms of abstract effectivity models. An informal outline of such semantics was suggested in [17], essentially by representation of the modalities \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {G}\) and \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {U}\) as appropriate fixpoints of \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {X}\), cf. also [6, 16]. In this section, we properly extend state-based effectivity models to provide semantics for ATL. For that, as pointed out earlier, a different effectivity function will be needed for each temporal pattern.
We note that an effectivity function for the “always” modality \(\mathrm {G}\) was already constructed in [27]. Moreover, an effectivity function for reachability, i.e. for the \(\mathrm {F}\) modality, has recently been presented in [3]. Our construction here is algebraic and differs significantly from both these approaches. Moreover, it allows us to cover all kinds of effectivity that can be addressed in ATL (though not in ATL*!).
Operations on state effectivity functions
First, we define basic operations and relations on effectivity functions, reflecting their meaning as operations on games.
Definition 6
(Operations and relations on effectivity functions) Let \({E},{F}:St\times \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})})\) be effectivity functions for the set of agents \({\mathbb {A}\mathrm {gt}}\) on a state space St. Then:

Composition of the effectivity functions \({E},{F}\) is the effectivity function \({E}\circ {F}\) where, for all \(q\in St\), \(Y\subseteq St\) and \(C \in \mathcal {P}({{\mathbb {A}\mathrm {gt}}})\), it holds that \(Y\in ({E}\circ {F})_{q}(C)\) iff there exists a subset \(Z\) of \(St\), such that \(Z\in {E}_{q}(C)\) and \(Y\in {F}_{z}(C)\) for every \(z\in Z\).

Union of the effectivity functions \({E},{F}\) is the effectivity function \({E}\cup {F}\) where, for all \(q\in St\), \(Y\subseteq St\) and \(C \in \mathcal {P}({{\mathbb {A}\mathrm {gt}}})\), it holds that \(Y\in ({E}\cup {F})_{q}(C)\) iff \(Y\in {E}_{q}(C)\) or \(Y\in {F}_{q}(C)\).

Intersection of effectivity functions is defined analogously. Likewise, we define union and intersection of any family of effectivity functions. For instance, given a family of effectivity functions \(\{E^{j}\}_{j\in J}\), its union is the effectivity function
$$\begin{aligned} E = \bigcup _{j\in J} E^{j} \end{aligned}$$such that \(Y\in E_{q}(C)\) iff there exists a \(j\in J\) such that \(Y\in E^{j}_{q}(C)\), for all \(q\in St\), \(Y\subseteq St\) and \(C \in \mathcal {P}({{\mathbb {A}\mathrm {gt}}})\).

Inclusion of effectivity functions:
\({E}\subseteq {F}\) iff \({E}_{q}(C)\subseteq {F}_{q}(C)\) for every \(q\in St\) and \(C\subseteq {\mathbb {A}\mathrm {gt}}.\)

Lastly, the idle effectivity function \(I\) is defined as follows:
\(I_{q}(C)=\{Y\subseteq St\mid q\in Y \}\) for every \(q\in St\) and \(C\subseteq {\mathbb {A}\mathrm {gt}}.\)
Hereafter, we assume that \(\circ \) has a stronger binding power than \(\cup \) and \(\cap \).
Proposition 1
The following hold for any outcome monotone effectivity functions \(E, F, G\):

1.
\(E\circ I = I\circ E = E\).

2.
If \({F}_{1}\subseteq {F}_{2}\) then \({E}\circ {F}_{1}\subseteq {E}\circ {F}_{2}\).

3.
\((E\cup F)\circ G = (E\circ G)\cup (F\circ G)\).

4.
\((E\cap F)\circ G = (E\circ G)\cap (F\circ G)\).
Proof
Routine. \(\square \)
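The identities of Proposition 1 are easy to confirm mechanically on a finite state space. In the Python sketch below (our own encoding; the fixed coalition argument is suppressed, since the identities hold pointwise in \(C\)), effectivity functions are maps from states to families of outcome sets.

```python
from itertools import combinations

St = ["q0", "q1"]
SUBSETS = [frozenset(c) for r in range(len(St) + 1) for c in combinations(St, r)]

def compose(E, F):
    # Y ∈ (E∘F)_q  iff  there is Z ∈ E_q with Y ∈ F_z for every z ∈ Z
    return {q: {Y for Y in SUBSETS
                if any(all(Y in F[z] for z in Z) for Z in E[q])}
            for q in St}

def union(E, F):
    return {q: E[q] | F[q] for q in St}

# the idle effectivity function I
I = {q: {Y for Y in SUBSETS if q in Y} for q in St}

# sample outcome monotone functions: upward closures of one forced set per state
up = lambda base: {Y for Y in SUBSETS if base <= Y}
E = {"q0": up(frozenset({"q1"})), "q1": up(frozenset({"q0"}))}
F = {"q0": up(frozenset({"q0"})), "q1": up(frozenset({"q0", "q1"}))}
G = {"q0": up(frozenset({"q1"})), "q1": up(frozenset({"q1"}))}

assert compose(E, I) == E and compose(I, E) == E                       # item 1
assert compose(union(E, F), G) == union(compose(E, G), compose(F, G))  # item 3
```

The check of item 1 relies on \(E\) being outcome monotone, exactly as the proposition requires; item 3 holds for arbitrary effectivity functions.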
Remark 1

1.
We note that, e.g., item 2 in Proposition 1 does not require the effectivity functions to be outcome monotone. However, we will only apply this proposition to outcome monotone effectivity functions, so the monotonicity assumption is unproblematic.

2.
The identities \(E\circ (F\cup G) = (E\circ F)\cup (E\circ G)\) and \(E\circ (F\cap G) = (E\circ F)\cap (E\circ G)\) are not valid. However, by Proposition 1.2, the inclusions \(E\circ (F\cup G) \supseteq (E\circ F)\cup (E\circ G)\) and \(E\circ (F\cap G) \subseteq (E\circ F)\cap (E\circ G)\) hold.
Definition 7
For any effectivity function \({E}\) we define inductively the effectivity functions \({E}^{(n)}\) and \({E}^{[n]}\) as follows:
\({E}^{(0)}=I\), \({E}^{(n+1)}=I\cup {E}\circ E^{(n)}\),
\({E}^{[0]}=I\), \({E}^{[n+1]}=I\cap {E}\circ E^{[n]}\).
Proposition 2
For every \(n\ge 0:\) \({E}^{(n)}\subseteq {E} ^{(n+1)} \) and \({E}^{[n+1]}\subseteq {E}^{[n]}\).
Proof
Routine, by induction on \(n\). \(\square \)
Definition 8
Given an effectivity function \({E}: St\times \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})})\), the weak iteration of \({E}\) is the function \({E}^{(*)}=\bigcup \limits _{k=0}^{\infty } {E}^{(k)}\), i.e., \(Y\in {E}_{q}^{(*)}(C)\) iff \(\exists n.\ Y\in {E}_{q}^{(n)}(C)\).
The strong iteration of \({E}\) is the function \({E}^{{[*]}}=\bigcap \limits _{k=0}^{\infty } {E}^{[k]}\),
i.e., \(Y\in {E}_{q}^{{[*]}}(C)\) iff \(\forall n.\ Y\in {E}_{q}^{[n]}(C)\).
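On a finite state space the weak iteration can be computed by fixpoint iteration, since the increasing chain \(E^{(0)} \subseteq E^{(1)} \subseteq \dots\) of Proposition 2 must stabilise. A Python sketch (our own encoding, one fixed coalition, a toy two-state effectivity function):

```python
from itertools import combinations

St = ["q0", "q1"]
SUBSETS = [frozenset(c) for r in range(len(St) + 1) for c in combinations(St, r)]

I = {q: {Y for Y in SUBSETS if q in Y} for q in St}   # idle function

def compose(E, F):   # (E∘F) as in Definition 6, for a single fixed coalition
    return {q: {Y for Y in SUBSETS
                if any(all(Y in F[z] for z in Z) for Z in E[q])}
            for q in St}

def union(E, F):
    return {q: E[q] | F[q] for q in St}

def weak_iteration(E):
    """E^(*) as the limit of E^(0) = I, E^(n+1) = I ∪ E∘E^(n); on a finite
    state space the increasing chain stabilises after finitely many steps."""
    cur = I
    while True:
        nxt = union(I, compose(E, cur))
        if nxt == cur:
            return cur
        cur = nxt

# toy function: at either state, the coalition can force the outcome into {q1}
up = lambda base: {Y for Y in SUBSETS if base <= Y}
E = {"q0": up(frozenset({"q1"})), "q1": up(frozenset({"q1"}))}

Estar = weak_iteration(E)
# Proposition 4.1: E^(*) is a fixed point of F_w(F) = I ∪ E∘F
assert union(I, compose(E, Estar)) == Estar
```

For this toy function the iteration yields, at \(q_0\), effectivity for every nonempty outcome set (reach \(q_0\) now, or \(q_1\) later), illustrating how weak iteration accumulates reachability-style powers.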
Proposition 3
Unions, intersections, compositions, and weak and strong iterations preserve outcome-monotonicity of effectivity functions.
Proof
Routine. \(\square \)
Proposition 4
For any finite state space \(St\) and effectivity function \({E}\) in it:

1.
\({E}^{(*)}\) is the least fixed point of the monotone operator \({\mathfrak {F}}_{w}\) defined by \({\mathfrak {F}}_{w}({F})=I\cup E\circ F.\)

2.
\({E}^{{[*]}}\) is the greatest fixed point of the monotone operator \({\mathfrak {F}}_{q}\) defined by \({\mathfrak {F}}_{q}({F})= I\cap E \circ F\).
Proof
(1) First, we show by induction on \(k\) that for every \(k,\) \({E} ^{(k)}\subseteq I\cup E\circ {E}^{(*)}.\) Indeed, \({E}^{(0)}=I\subseteq I\cup E\circ {E}^{(*)};\) \({E}^{(k+1)}=I\cup E\circ {E}^{(k)}\subseteq I\cup E\circ {E}^{(*)}\) by the inductive hypothesis and Proposition 1. Thus, \({E}^{(*)}\subseteq I\cup E\circ {E}^{(*)}.\)
For the converse inclusion, let \(Y\in (I\cup E\circ E^{(*)})_{q}(C)\). If \(Y\in I_{q}(C)\), then \(Y\in E_{q}^{(*)}(C)\) by definition. Suppose \(Y\in (E\circ E^{(*)})_{q}(C)\). Then there is \(Z\in E_{q}(C)\) such that for every \(z\in Z\), \(Y\in E^{(*)}_{z}(C)\), hence \(Y\in E_{z}^{(k_{z})}(C)\) for some \(k_{z}\ge 0\). Let \(m=\max \limits _{z\in Z}k_{z}\), which is well defined since \(St\), and hence \(Z\), is finite. Then, by Proposition 2, \(Y\in E_{z}^{(m)}(C)\) for every \(z\in Z\). Therefore, \(Y\in (E\circ E^{(m)})_{q}(C)\subseteq E_{q}^{(m+1)}(C)\subseteq E_{q}^{(*)}(C)\).
Thus, \({E}^{(*)}\) is a fixed point of the operator \( {\mathfrak {F}}_{w}.\)
Now, suppose that \({F}\) is a fixed point of \({\mathfrak {F}}_{w}\), i.e., \(I\cup E\circ F=F.\) Then, we show by induction on \(k\) that for every \(k,\) \({E}^{(k)}\subseteq {F}.\) Indeed, \({E}^{(0)}=I\subseteq I\cup E\circ F=F.\) Suppose \({E}^{(k)}\subseteq {F}.\) Then \({E}^{(k+1)}= I\cup E\circ {E}^{(k)}\subseteq I\cup E\circ F=F\) by the inductive hypothesis and Proposition 1. Thus, \({E}^{(*)}\subseteq {F}.\) Therefore, \({{E}^{(*)}}\) is the least fixed point of \({\mathfrak {F}}_{w}.\)
(2) The argument is dually analogous. \(\square \)
The proof above only works when the state space \(St\) is finite. However, the operators \({\mathfrak {F}}_{w}\) and \({\mathfrak {F}}_{q}\) are monotone in the general case and the result above suggests that \({E}^{(*)}\) and \({E}^{{[*]}}\) can be defined in general as the respective fixed points.
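For a finite state space the iterations of Definition 8 can be computed directly by saturating the operator \({\mathfrak {F}}_{w}\). The following Python sketch is an illustration only, not part of the formal development: the encoding of effectivity functions as dictionaries from states to families of frozen sets, and the three-state one-step function \(E\) for a fixed coalition, are our own hypothetical choices.

```python
from itertools import combinations

St = ("q0", "q1", "q2")

def subsets(xs):
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

ALL = subsets(St)  # all candidate outcome sets Y subseteq St

def up_close(family):
    # outcome-monotone closure: every superset of a member is a member
    return {Y for Y in ALL if any(X <= Y for X in family)}

# identity ("idle") effectivity I: Y in I_q iff q in Y
I = {q: {Y for Y in ALL if q in Y} for q in St}

# hypothetical one-step effectivity of a fixed coalition:
# from q0 it can force the next state into {q1} or into {q2};
# q1 and q2 only loop to themselves
E = {
    "q0": up_close([frozenset({"q1"}), frozenset({"q2"})]),
    "q1": up_close([frozenset({"q1"})]),
    "q2": up_close([frozenset({"q2"})]),
}

def compose(E1, E2):
    # Y in (E1 o E2)_q  iff  some Z in E1_q has Y in E2_z for every z in Z
    return {q: {Y for Y in ALL
                if any(all(Y in E2[z] for z in Z) for Z in E1[q])}
            for q in St}

def Fw(F):
    # the operator F_w(F) = I union (E o F)
    comp = compose(E, F)
    return {q: I[q] | comp[q] for q in St}

# iterate E^(0) = I, E^(k+1) = F_w(E^(k)) until stabilisation (St is finite)
weak = I
while Fw(weak) != weak:
    weak = Fw(weak)
```

The loop terminates because \(St\) is finite, and the stabilised value is the least fixed point of \({\mathfrak {F}}_{w}\) from Proposition 4(1).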
Binary effectivity functions
Binary effectivity functions will be used to provide a fixed-point characterization of, and semantics for, the binary temporal connective Until.
Definition 9
Given a set of players \({\mathbb {A}\mathrm {gt}}\) and a set of states \(St\), a local binary effectivity function for \({\mathbb {A}\mathrm {gt}}\) on \(St\) is a mapping \({U}:\mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})\times \mathcal {P}({St})})\) associating with each set of players a family of pairs of outcome sets.
A global binary effectivity function associates a local binary effectivity function with each state from \(St\).
Now we define some global binary effectivity functions and operations and relations on them.
Definition 10

Left-idle binary effectivity function \(\mathbf {L}:St{\times }\mathcal {P}({{\mathbb {A}\mathrm {gt}}}) {\rightarrow } \mathcal {P}({\mathcal {P}({St}){\times }\mathcal {P}({St})})\), where \(\mathbf {L}_{q}(C)=\{(X,Y)\mid q\in X\}\) for any \(q\in St\) and \(C\subseteq {\mathbb {A}\mathrm {gt}}\). Respectively, the right-idle binary effectivity function \(\mathbf {R}\) is defined by \(\mathbf {R}_{q}(C)=\{(X,Y)\mid q\in Y\}\) for any \(q\in St\) and \(C\subseteq {\mathbb {A}\mathrm {gt}}.\)

Union of binary effectivity functions \({U},{W}:St\times \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})\times \mathcal {P}({St})})\) is the binary effectivity function \({U}\cup {W}\) where \((X,Y)\in ({U}\cup {W})_{q}(C)\) iff \((X,Y)\in {U}_{q}(C)\) or \((X,Y)\in {W}_{q}(C)\).

Intersection of binary effectivity functions is defined analogously.

Right projection of \(U\) is the unary effectivity function \(E\) such that \(E_q(C) = \{ Y \mid (X,Y)\in U_q(C) \text{ for } \text{ some } X \in \mathcal {P}({St}) \}\) for all \(q,C\).

Likewise, we define union, intersection, and right projection of any family of binary effectivity functions.

Composition of a unary effectivity function \({E}\) with a binary effectivity function \({U}\) is the binary effectivity function \({E}\circ {U}\) such that \((X,Y)\in ({E}\circ {U})_{q}(C)\) iff there exists a subset \(Z\) of \(St\), such that \(Z\in {E}_{q}(C)\) and \((X,Y)\in {U}_{z}(C)\) for every \(z\in Z.\)

Inclusion of binary effectivity functions: \({U}\subseteq {W}\) iff \({U}_{q}(C)\subseteq {W}_{q}(C)\) for every \(q\in St\) and \(C\subseteq {\mathbb {A}\mathrm {gt}}.\)

Binary iteration. For any unary effectivity function \({E}\) we define the binary effectivity functions \({E}^{\left\{ n\right\} }\), \(n\ge 0,\) inductively as follows: \({E}^{\left\{ 0\right\} }=\mathbf {R};\ {E}^{\left\{ n+1\right\} } =\mathbf {R}\cup (\mathbf {L}\cap {E}\circ {E}^{\left\{ n\right\} } )\).
Then, the binary iteration of \({E}\) is defined as the binary effectivity function \({E}^{\left\{ *\right\} } =\bigcup \limits _{k=0}^{\infty } {E} ^{\left\{ k\right\} } ,\) i.e. \((X,Y)\in {E}_{q}^{\left\{ *\right\} } (C)\) iff \( (X,Y)\in {E}_{q}^{\left\{ n\right\} } (C)\) for some \( n.\)
Definition 11
A binary effectivity function \({U}\) is outcome-monotone if every \({U}_{q}(C)\) is upwards closed, i.e., \((X,Y){\in U}_{q}(C)\) and \(X\subseteq X^{\prime } ,Y\subseteq Y^{\prime } \) imply \((X^{\prime } ,Y^{\prime } ){\in U}_{q}(C).\)
Proposition 5
For any finite state space \(St\) and unary effectivity function \({E}\) in it, \({E}^{\left\{ *\right\} }\) is the least fixed point of the monotone operator \({\mathfrak {F}}_{b}\) defined by \({\mathfrak {F}}_{b}({U})=\mathbf {R}\cup (\mathbf {L}\cap E\circ U).\)
Proof
Analogous to the proof of Proposition 4. \(\square \)
Again, the operator \({\mathfrak {F}}_{b}\) is monotone for any (finite or infinite) state space \(St\) and the result above suggests how \({E}^{\left\{ *\right\} }\) can be defined in general.
The next result follows immediately from Propositions 3, 4 and 5.
Proposition 6
\({E}^{(*)}\), \({E}^{[*]}\) and \({E}^{\left\{ *\right\} }\) are outcome-monotone. Moreover, \({E}^{(*)}\) is the right projection of \({E}^{\left\{ *\right\} }\).
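The claim that \({E}^{(*)}\) is the right projection of \({E}^{\left\{ *\right\} }\) can be checked mechanically on small instances. The Python sketch below is an illustration only: the two-state space and the one-step effectivity function are invented for the purpose, and both iterations are computed by saturating the respective monotone operators.

```python
from itertools import combinations

St = ("q0", "q1")

def subsets(xs):
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

ALL = subsets(St)
PAIRS = [(X, Y) for X in ALL for Y in ALL]

def up_close(family):
    return {Y for Y in ALL if any(X <= Y for X in family)}

# hypothetical one-step effectivity of a fixed coalition:
# from either state it can force the next state into {q1}
E = {"q0": up_close([frozenset({"q1"})]),
     "q1": up_close([frozenset({"q1"})])}

I = {q: {Y for Y in ALL if q in Y} for q in St}            # idle (unary)
L = {q: {(X, Y) for (X, Y) in PAIRS if q in X} for q in St}  # left-idle
R = {q: {(X, Y) for (X, Y) in PAIRS if q in Y} for q in St}  # right-idle

def comp_u(E1, F):   # unary composed with unary
    return {q: {Y for Y in ALL
                if any(all(Y in F[z] for z in Z) for Z in E1[q])}
            for q in St}

def comp_b(E1, U):   # unary composed with binary
    return {q: {p for p in PAIRS
                if any(all(p in U[z] for z in Z) for Z in E1[q])}
            for q in St}

def fix(step, start):
    cur = start
    while step(cur) != cur:
        cur = step(cur)
    return cur

# E^(*): least fixed point of F_w(F) = I union (E o F)
weak = fix(lambda F: {q: I[q] | comp_u(E, F)[q] for q in St}, I)

# E^{*}: least fixed point of F_b(U) = R union (L intersect (E o U))
def Fb(U):
    c = comp_b(E, U)
    return {q: R[q] | (L[q] & c[q]) for q in St}

binary = fix(Fb, R)

# right projection of the binary iteration
proj = {q: {Y for (X, Y) in binary[q]} for q in St}
```

On this instance `proj` coincides with `weak`, as Proposition 6 predicts.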
State-based effectivity models for ATL
The semantics of ATL can now be given in terms of models that are more abstract and technically simpler than CGMs.
Definition 12
A state-based effectivity frame (SEF) for ATL is a tuple
\({\mathcal {F}} = \left\langle {\mathbb {A}\mathrm {gt}}, St, \mathbf {E}, \mathbf {G}, \mathbf {U}\right\rangle \)
where \({\mathbb {A}\mathrm {gt}}\) is a set of players, \(St\) is a set of states, \(\mathbf {E}\) and \(\mathbf {G}\) are outcome-monotone effectivity functions, and \(\mathbf {U}\) is an outcome-monotone binary effectivity function.
A state-based effectivity model (SEM) for ATL is a SEF plus a valuation of atomic propositions.
That is, an effectivity frame/model for ATL includes not one but three effectivity functions: one for each temporal modality in the language.
Definition 13
A SEF \({\mathcal {F}}\) is standard iff

1.
\(\mathbf {E}\) is truly playable,

2.
\(\mathbf {G}=\mathbf {E}^{\mathbf {[*]}}\),

3.
\(\mathbf {U}=\mathbf {E}^{\left\{ *\right\} }\).
A SEM \({\mathcal {M}}= \left\langle {\mathcal {F}},V\right\rangle \) is standard if \({\mathcal {F}}\) is standard.
State-based effectivity semantics for ATL
Now, we define truth of an ATL formula at a state of a state-based effectivity model uniformly as follows:

\({\mathcal {M}},q\models \langle \!\langle C\rangle \!\rangle {\mathrm {X}}\varphi \) iff \(\llbracket \varphi \rrbracket _{{\mathcal {M}}}\in \mathbf {E}_{q}(C)\),

\({\mathcal {M}},q\models \langle \!\langle C\rangle \!\rangle {\mathrm {G}}\varphi \) iff \(\llbracket \varphi \rrbracket _{{\mathcal {M}}}\in \mathbf {G}_{q}(C)\),

\({\mathcal {M}},q\models \langle \!\langle C\rangle \!\rangle \varphi \,{\mathrm {U}}\,\psi \) iff \((\llbracket \varphi \rrbracket _{{\mathcal {M}}},\llbracket \psi \rrbracket _{{\mathcal {M}}})\in \mathbf {U}_{q}(C)\),

where \(\llbracket \varphi \rrbracket _{{\mathcal {M}}} = \{q\in St\mid {\mathcal {M}},q\models \varphi \}\).
Extending \(\alpha \)-effectivity to SEM Given a CGM \(M=({\mathbb {A}\mathrm {gt}}, St, Act, d, o, V)\), we construct its corresponding SEM as follows: \(\mathrm {SEM}(M) = ({\mathbb {A}\mathrm {gt}},St,\mathbf {E},\mathbf {G},\mathbf {U},V)\) where \(\mathbf {E}_q = E_{M,q}^\alpha \) for all \(q\in St\), \(\mathbf {G}=\mathbf {E}^{\mathbf {[*]}}\), and \(\mathbf {U}=\mathbf {E}^{\left\{ *\right\} }\).
Example 4
The “always” effectivity in state \(q_0\) of the model of aggressive vs. conservative play from Example 1 can be written as follows:
\(\mathbf {G}_{q_0}(\emptyset ) = \{\{q_0,q_1,q_2\}\}\), \(\mathbf {G}_{q_0}(\{1\}) = \mathbf {G}_{q_0}(\{2\}) = \{\{q_0,q_1\}, \{q_0,q_2\}, \{q_0,q_1,q_2\}\}\),
\(\mathbf {G}_{q_0}(\{1,2\}) = \{\{q_0\}, \{q_0,q_1\}, \{q_0,q_2\}, \{q_0,q_1,q_2\}\}\).
The next result easily follows from Theorem 1:
Theorem 2
(Representation Theorem) A state-based effectivity model \({\mathcal {M}}\) for ATL is standard iff there exists a CGM \(M\) such that \({\mathcal {M}}=\mathrm {SEM}(M)\).
Moreover, we note that the ATL semantics in CGMs and in their associated standard SEMs coincide.
Theorem 3
For every CGM \(M\), state \(q\) in \(M\), and ATL formula \(\varphi \), we have that \(M,q\models \varphi \) iff \(\mathrm {SEM}(M),q\models \varphi \).
Proof
Routine, by structural induction on formulae. \(\square \)
Corollary 1
Any ATL formula \(\varphi \) is valid (resp., satisfiable) in concurrent game models iff \(\varphi \) is valid (resp., satisfiable) in standard state-based effectivity models.
Coalitional path effectivity
State-based effectivity models for ATL partly characterize coalitional powers for achieving long-term objectives. However, the applicability of such models is limited by the fact that they characterize effectivity with respect to outcome states, while effectivity for outcome paths (i.e., plays) is only captured when such paths are described by the specific temporal patterns definable in ATL. Thus, in particular, state-based effectivity models are not suitable for providing semantics for the whole ATL*.
In this section we aim at getting to the core of the notion of effectivity in multi-step games, regardless of the temporal pattern that defines the winning condition, by redefining it in terms of outcome paths, rather than states. The idea is natural: every collective strategy of the grand coalition in a multi-step game determines a unique path (play) through the state space of the game. Consequently, the outcome of following an individual or coalitional strategy in such a game is a set of paths (plays) that can result from execution of the strategy, depending on the moves of the remaining players. Hence, powers of players and coalitions in multi-step games can be characterized by sets of sets of paths. Our main conceptual motivation is precisely that a strategy of a player, or a collective strategy of a coalition, determines a set of paths (plays), not states, that can be effected by that strategy. Viewing outcomes of a strategy as infinite paths seems appropriate for reasoning about repeated (or extensive) games that run in infinitely many steps.
We also claim that the notion of path effectivity adequately captures the meaning of the strategic operators in ATL(*). Moreover, it provides correct semantics for the whole ATL*, and not only for its limited fragment ATL.
Path effectivity functions, frames and models
Definition 14
(Path effectivity function) Let \({\mathbb {A}\mathrm {gt}}\) be a set of players, and \(St\) a set of states. A path in \(St\) is an infinite sequence of states, i.e., an element of \(St^\omega \). A path effectivity function is a mapping \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\) that assigns to each coalition a nonempty family of sets of paths.
The intuition is analogous to that for state effectivity: the inclusion of a set of paths \({\mathcal {X}}\) in \(\mathcal {E}(C)\) means that the coalition \(C\) can choose a strategy that ensures that the game will develop along one of the paths in \({\mathcal {X}}\).
Note that the definition above refers to global effectivity, in the sense that \({\mathcal {X}}\in \mathcal {E}(C)\) can (in fact, must) include paths starting from different states. Local path effectivity (for each initial state separately) is easily extractable from the global one. This is in line with the concept of a strategy as a complete conditional plan: in particular, the strategy must prescribe collective actions of the coalition from all possible initial states of the game.
By analogy with identifying action choices as sets of outcome states in state effectivity models, we refer to the elements of \(\mathcal {E}(C)\) for a path effectivity function \(\mathcal {E}\) as (global) strategic choices of the coalition \(C\). The intuition is that every strategic choice \({\mathcal {F}} \in \mathcal {E}(C)\) is the set of paths in \(St\) that \(C\) can enforce when playing the chosen collective strategy represented by \({\mathcal {F}}\).
Note that not every sequence of states is a feasible path in a given concrete model (i.e., a CGM), but only those that follow the transitions in the model. Likewise, for an abstract path effectivity function \(\mathcal {E}\), it is not required that all the sequences of states appear in \(\mathcal {E}\). We define the feasible paths in \(\mathcal {E}\) as
\(\mathsf {Paths}_{\mathcal {E}} \,{:}{=}\, \bigcup \{{\mathcal {X}}\mid {\mathcal {X}}\in \mathcal {E}(C) \ \text{ for } \text{ some } \ C\subseteq {\mathbb {A}\mathrm {gt}}\},\)
that is, \(\mathsf {Paths}_{\mathcal {E}}\) is the set of paths appearing in any choice from \(\mathcal {E}\). For the set \(\mathsf {Paths}_{\mathcal {E}}\) defined this way, we will sometimes say that \(\mathcal {E}\) is an effectivity function over the set of feasible paths \(\mathsf {Paths}_{\mathcal {E}}\).
Hereafter, we will assume that \(\mathcal {E}\) captures outcome-monotone effectivity, i.e., it collects the actual outcome paths of the choices available to \(C\) and then closes this family under supersets (upward monotonicity).
Definition 15
(Path effectivity frames/models) A path effectivity frame (PEF) is a structure \({\mathcal {F}} = ({\mathbb {A}\mathrm {gt}},St,\mathcal {E})\) consisting of a set of players \({\mathbb {A}\mathrm {gt}}\), a set of states \(St\) and a path effectivity function \(\mathcal {E}\) on these. A path effectivity model (PEM) \({\mathcal {M}}\) expands a PEF with a valuation of the propositions \(V: Prop\rightarrow \mathcal {P}({St})\).
Notation Clearly, not every path effectivity frame corresponds to a concrete game structure. To capture “playability” conditions for path effectivity functions and frames, we will need some additional notation. Let \(q\in St\), \(h,h'\in St^{+}\), \({\mathcal {X}}\in \mathcal {P}({St^\omega })\), and \(\mathcal {E}\) be a path effectivity function. We define the following:

\(h \preceq h'\) if \(h'\) is an extension of \(h\);

\({\mathcal {X}}[i] \,{:}{=}\, \{\lambda [i] \mid \lambda \in {\mathcal {X}}\}\) collects states that appear on the \(i\)th position of paths in \({\mathcal {X}}\);

\({\mathcal {X}}(q) \,{:}{=}\,\{\lambda \in {\mathcal {X}}\mid \lambda [0]= q\}\) selects the paths in \({\mathcal {X}}\) starting from \(q\);

\({\mathcal {X}}(h) \,{:}{=}\,\{\lambda \mid \lambda \!\in \! {\mathcal {X}}, \ \text{ and } \ \lambda [0..k]= h \ \text{ for } \text{ some } k \}\) is the set of paths in \({\mathcal {X}}\) starting with \(h\);

\({\mathcal {X}} h \,{:}{=}\,\{\lambda [k..\infty ] \mid \lambda \!\in \! {\mathcal {X}}\ \text{ and } \ \lambda [0..k]= h\}\) is the set of suffixes of the paths in \({\mathcal {X}}\) that start with \(h\);

Consequently, for sets of sets of paths:
\(\mathcal {E}(C)(q) = \{{\mathcal {X}}(q) \mid {\mathcal {X}}\in \mathcal {E}(C)\}\),
\(\mathcal {E}(C)(h) = \{{\mathcal {X}}(h) \mid {\mathcal {X}}\in \mathcal {E}(C)\}\),
\(\mathcal {E}(C) h = \{{\mathcal {X}} h \mid {\mathcal {X}}\in \mathcal {E}(C)\}\).
To make the text easier to read, we will typically use \(X,Y,\dots \) for state choices, and \({\mathcal {X}},\mathcal {Y},\dots \) for path choices. Moreover, we will use \(E\) to denote state effectivity functions, and \(\mathcal {E}\) for path effectivity functions.
The initial segments \(\lambda [0..k]\) of feasible paths of a path effectivity function \(\mathcal {E}\) will be called (initial) feasible histories of \(\mathcal {E}\).
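Single paths are infinite objects, so any executable illustration must restrict attention to finitely representable ones. The Python sketch below, our own illustration, encodes eventually periodic paths as "lassos" \((pre, cyc)\), denoting \(pre\cdot cyc^\omega \), and implements the operations \({\mathcal {X}}[i]\), \({\mathcal {X}}(q)\) and \({\mathcal {X}}(h)\) for a finite set of such paths.

```python
# a lasso (pre, cyc) denotes the infinite path pre . cyc . cyc . ...
def at(lasso, i):
    # the state at position i of the denoted path
    pre, cyc = lasso
    return pre[i] if i < len(pre) else cyc[(i - len(pre)) % len(cyc)]

def pos(X, i):
    # X[i]: states appearing at position i of paths in X
    return {l for l in ()} or {at(l, i) for l in X}

def starting(X, q):
    # X(q): paths in X starting from q
    return {l for l in X if at(l, 0) == q}

def with_history(X, h):
    # X(h): paths in X whose initial segment is h
    return {l for l in X if all(at(l, i) == s for i, s in enumerate(h))}

# a hypothetical finite set of lasso paths over states q0, q1, q2
X = {
    (("q0",), ("q1",)),      # q0 q1 q1 q1 ...
    ((), ("q0", "q2")),      # q0 q2 q0 q2 ...
    (("q1",), ("q2",)),      # q1 q2 q2 q2 ...
}
```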
Generating state effectivity from path effectivity functions and vice versa
We will now define two natural mappings between path and state effectivity functions. First, a path effectivity function can be transformed into a state effectivity function by extracting from paths their initial segments (the “opening moves”). Secondly, a state effectivity function can be transformed into a path effectivity function by “unfolding” all possible paths that arise from a given subset of state transitions.
Definition 16
(State projection) The (successor) state projection of a global strategic choice \({\mathcal {X}}\subseteq St^\omega \) is the mapping \({\mathcal {X}}^{S}: St\rightarrow \mathcal {P}({St})\), called the (global) action choice corresponding to \({\mathcal {X}}\), defined as follows:
\({\mathcal {X}}^{S}(q) \,{:}{=}\, \{\lambda [1] \mid \lambda \in {\mathcal {X}}\ \text{ and } \ \lambda [0]=q\}.\)
Similarly, the state projection of a path effectivity function \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({Paths})})\) is the global state effectivity function \(\mathcal {E}^{S}: St\times \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})})\) that assigns to every \(C\subseteq {\mathbb {A}\mathrm {gt}}\) and \(q \in St\) the family
\(\mathcal {E}^{S}_{q}(C) \,{:}{=}\, \{{\mathcal {X}}^{S}(q) \mid {\mathcal {X}}\in \mathcal {E}(C)\}\)
of sets of successor states, one for each set of paths in \(\mathcal {E}(C)(q)\).
\({\mathcal {X}}^{S}(q)\) includes all the states that occur as immediate successors of \(q\) at the beginning of some path in \({\mathcal {X}}\). Thus, \({\mathcal {X}}^{S}\) assigns possible successors to each state, so it can be seen as a representation of a possible transition relation between states in \(St\). Moreover, \(\mathcal {E}^{S}\) collects all such transition relations that “approximate” the choices available in \(\mathcal {E}\).
We note that if a global strategic choice \({\mathcal {X}}\) is suffix closed, i.e., contains the path \(\lambda [i..\infty ]\) for every path \(\lambda \in {\mathcal {X}}\) and every \(i\ge 0\), then the definition of state projection of \({\mathcal {X}}\) is equivalent to
\({\mathcal {X}}^{S}(q) = \{\lambda [i+1] \mid \lambda \in {\mathcal {X}}\ \text{ and } \ \lambda [i]=q \ \text{ for } \text{ some } \ i\ge 0\}.\)
That is, we can as well see the state choices in \({\mathcal {X}}^{S}(q)\) as collecting the successors of \(q\) on any path passing through \(q\). A global action choice can also be defined abstractly, rather than derived from a global strategic choice, as a mapping \(X : St\rightarrow \mathcal {P}({St})\). It may, but need not, correspond to a family of collective actions, one at each state, for a given coalition. The next definition describes how a global action choice generates a subset of paths.
Definition 17
(Path closure) Given a global action choice \(X : St\rightarrow \mathcal {P}({St})\), we define its path closure \(X^{P}\subseteq St^\omega \) as follows:
\(X^{P} \,{:}{=}\, \{\lambda \in St^\omega \mid \lambda [i+1]\in X(\lambda [i]) \ \text{ for } \text{ every } \ i\ge 0\}.\)
Likewise, the path closure of a global state effectivity function \(E: St\times \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})})\) is defined as the path effectivity function \(E^{P}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\) constructed as follows:
\(E^{P}(C) \,{:}{=}\, \{\mathcal {Y}\subseteq St^\omega \mid X^{P}\subseteq \mathcal {Y}\ \text{ for } \text{ some } \ X\in E(C)\},\)
where by \(X\in E(C)\) we mean \(X(q) \in E_{q}(C)\) for every \(q\in St\).
That is, \(X^{P}\) collects the paths generated by the transition function represented by \(X\). Moreover, \(E^{P}\) is the outcome-monotone closure of the family of strategic choices generated this way from the state effectivity function \(E\).
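Membership of an eventually periodic path in the path closure of a global action choice is decidable, since such a path has only finitely many distinct transitions. A short sketch, where the action choice \(X\) is invented for illustration and paths are given as lassos (finite prefix plus repeated cycle):

```python
def in_path_closure(X, pre, cyc):
    # the lasso pre . cyc^omega lies in X^P iff every consecutive
    # transition, including the wrap-around inside the cycle, follows X
    seq = list(pre) + list(cyc) + [cyc[0]]
    return all(b in X[a] for a, b in zip(seq, seq[1:]))

# hypothetical global action choice X : St -> P(St)
X = {"q0": {"q0", "q1"}, "q1": {"q1"}, "q2": {"q0"}}
```

The helper inspects only `len(pre) + len(cyc)` transitions, which is all the information the infinite path contains.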
Path effectivity in concurrent game structures
In this section, we propose an analogue of \(\alpha \)-effectivity from Sect. 2.2 for distilling abstract path effectivity from CGMs. Not every set of feasible paths in a CGM is a feasible choice for a coalition, and the powers of players and coalitions in a game crucially depend on their available strategies. There are different notions of strategy, e.g., depending on the amount of memory that the players can use. We will parameterize our concept of effectivity in multi-step games with a type (class) of strategies. Two types of strategies were already introduced in Sect. 2.1, namely deterministic memoryless and deterministic perfect recall strategies, and we will focus on these classes henceforth. However, one can easily imagine other types of strategies, such as bounded memory strategies, finite memory strategies, nondeterministic strategies, and so on. Our concept of effectivity in multi-step games is well defined for all these classes, under the mild conditions set out below.
Definition 18
(Normal class of strategies) A class \(\varSigma \) of individual and coalitional strategies is normal iff:

1.
Every player has at least one strategy in \(\varSigma \),

2.
Coalitional strategies are obtained by freely combining the individual strategies of the participating players,^{Footnote 3} and

3.
No strategy in \(\varSigma \) (individual or coalitional) ever yields an empty set of successor states.
It is easy to see that the classes of perfect recall and memoryless strategies from Sect. 2.1 are normal. We will refer to them with \(\mathfrak {FulMem}\) and \(\mathfrak {NoMem}\), respectively.
For a CGM \(M\), by \(\mathsf {Paths}_{M}\) we denote the set of all paths feasible in \(M\), that is, the set of infinite sequences of states that can be obtained by subsequent transitions in \(M\). We leave out the details of the formal definition.
Definition 19
(\(\varSigma \)-effectivity) Let \(M\) be a CGM and \(\varSigma = \bigcup _{C\subseteq {\mathbb {A}\mathrm {gt}}}\varSigma _C\) be a normal class of coalitional strategies in \(M\). The path \(\varSigma \)-effectivity function of \(M\) is defined as
\(\mathcal {E}^{\varSigma }_{M}(C) \,{:}{=}\, \{{\mathcal {X}}\subseteq \mathsf {Paths}_{M} \mid \text{ there } \text{ is } \ \sigma _{C}\in \varSigma _{C}\ \text{ such } \text{ that } \ \mathit{out}_{M}(q,\sigma _{C})\subseteq {\mathcal {X}}\ \text{ for } \text{ every } \ q\in St\},\)
where \(\mathit{out}_{M}(q,\sigma _{C})\) is the set of paths starting from \(q\) that can result from executing \(\sigma _{C}\) in \(M\).
Specifically, we denote by \(\mathcal {E}^\mathfrak {FulMem} _M\) and \(\mathcal {E}^\mathfrak {NoMem} _M\) the effectivity of coalitions respectively for perfect recall strategies and for memoryless strategies in \(M\).
Example 5
The difference between perfect recall and memoryless effectivity is most easily seen in the case of the grand coalition. For instance, in the model of aggressive vs. conservative play from Example 1, \(\mathcal {E}^\mathfrak {FulMem} _M(\{{1,2}\})\) is the outcome-monotone closure of the family \(\{ \{\lambda _0,\lambda _1,\lambda _2\} \mid \lambda _i\in \{q_0,q_1,q_2\}^\omega , \lambda _i[0]=q_i \}\), i.e.:
\(\mathcal {E}^\mathfrak {FulMem} _M(\{{1,2}\}) = \{{\mathcal {X}}\subseteq \mathsf {Paths}_{M} \mid {\mathcal {X}}(q_i)\ne \emptyset \ \text{ for } \text{ every } \ i=0,1,2\}.\)
In contrast, \(\mathcal {E}^\mathfrak {NoMem} _M(\{{1,2}\})\) is the outcome-monotone closure of the family containing sets \(\{\lambda _0,\lambda _1,\lambda _2\}\) such that: each \(\lambda _i\in \{q_0,q_1,q_2\}^\omega \), \(\lambda _i[0]=q_i\), and moreover each of \(\lambda _0,\lambda _1,\lambda _2\) is of the form \((q_i)^\omega , i\in \{0,1,2\}\), or \(q_i(q_j)^\omega , i,j\in \{0,1,2\}\), or \((q_iq_j)^\omega , i,j\in \{0,1,2\}\), or \(q_iq_j(q_k)^\omega , i,j,k\in \{0,1,2\}, i\ne j\), or \(q_i(q_jq_k)^\omega , i,j,k\in \{0,1,2\}, i\ne j\), or \((q_iq_jq_k)^\omega , i,j,k\in \{0,1,2\}, i\ne j\).
That is, the players can enforce any sequence of states when they have perfect memory, but in the memoryless case they can only enforce the “periodic” paths that fall into a loop as soon as they revisit the same state. It is interesting to note that \(\mathcal {E}^\mathfrak {FulMem} _M(\{{1,2}\})\) contains uncountably many \(\subseteq \)-minimal elements (choice sets), whereas \(\mathcal {E}^\mathfrak {NoMem} _M(\{{1,2}\})\) contains only countably many.
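The memoryless case of this example can be reproduced mechanically. Assuming, as the example suggests, that in \(M\) every state can be reached from every state in one step, a memoryless strategy of the grand coalition amounts to a map \(f: St\rightarrow St\), and the play it induces from a state falls into a loop as soon as some state is revisited. A sketch (the encoding is our own):

```python
from itertools import product

St = ("q0", "q1", "q2")

def play(f, q):
    # run the memoryless strategy f from q; return the lasso (pre, cyc)
    seen, run = {}, []
    while q not in seen:
        seen[q] = len(run)
        run.append(q)
        q = f[q]
    k = seen[q]                       # position of the first repeated state
    return tuple(run[:k]), tuple(run[k:])

# all 27 memoryless strategies of the grand coalition
strategies = [dict(zip(St, img)) for img in product(St, repeat=3)]

# the outcome plays from q0, one per strategy, as lassos
lassos_from_q0 = {play(f, "q0") for f in strategies}
```

The final assertion below confirms the observation above: each memoryless play visits every state at most once before entering its loop.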
Below we collect some observations that will be used further.
Proposition 7
For every CGM \(M\) and a normal class \(\varSigma \) of coalitional strategies in \(M\):

1.
Every coalition has a collective strategy, and therefore for every state \(q\) in \(M\) it can enforce at least one set of outcome paths starting from \(q\). (Safety)

2.
For any coalition \(C\) and state \(q\) in \(M\), every coalitional strategy produces a nonempty set of outcome paths starting from \(q\). (Liveness)

3.
All the supersets of a choice in \(\mathcal {E}_M^\varSigma (C)\) belong to \(\mathcal {E}_M^\varSigma (C)\), too. (Outcome-Monotonicity)

4.
\(\mathcal {E}^\varSigma _M(\emptyset )\) is a singleton. More precisely, \(\mathcal {E}^\varSigma _M(\emptyset ) = \{\mathsf {Paths}_{M}\}\).

5.
Every two disjoint coalitions can join their chosen coalitional strategies to enforce the intersection of the outcome paths enforced by each of the coalitions following its respective strategy. Together with outcome-monotonicity, this implies that, if \(C\cap D=\emptyset \), \({\mathcal {X}}\in \mathcal {E}^\varSigma _M(C)\), and \(\mathcal {Y}\in \mathcal {E}^\varSigma _M(D)\), then \({\mathcal {X}}\cap \mathcal {Y}\in \mathcal {E}^\varSigma _M(C\cup D)\). (Superadditivity)
Moreover, for \(\varSigma = \mathfrak {FulMem} \) and \(\varSigma = \mathfrak {NoMem} \), we have the following:

6.
\(\mathcal {E}^\varSigma _M({\mathbb {A}\mathrm {gt}})\) is the outcome-monotone closure of the family of all the sets of paths that contain a path from \(\mathsf {Paths}_{M}\) starting from each initial state. Consequently, \(\mathcal {E}^\varSigma _M({\mathbb {A}\mathrm {gt}}) = \{ {\mathcal {X}}\subseteq \mathsf {Paths}_{M} \mid {\mathcal {X}}(q)\ne \emptyset \text { for every }q\in St\}\). (Determinacy)
Proof
Straightforward. \(\square \)
Path effectivity semantics of ATL*
Given an ATL* path formula \(\gamma \) and a path effectivity model \({\mathcal {M}}\), let \(\llbracket \gamma \rrbracket _{{\mathcal {M}}}\)
denote the set of paths in \({\mathcal {M}}\) that satisfy \(\gamma \). Note that the relation \(\lambda \models \gamma \) is already well defined by the relevant semantic clauses in Sect. 2.3 (it is essentially the semantics of the linear time temporal logic LTL). Then, the path effectivity semantics of ATL* in strategies \(\varSigma \) is given by the clause below:

\({\mathcal {M}},q\models _{\varSigma }\langle \!\langle C\rangle \!\rangle \gamma \) iff \(\llbracket \gamma \rrbracket _{{\mathcal {M}}}\in \mathcal {E}(C)(q)\).
We observe that the above clause interprets ATL* modalities as CL modalities over outcome paths. Moreover, using path effectivity functions brings technical simplicity: only one effectivity function is needed to completely describe the power of coalitions. Last but not least, only one semantic clause is needed to define strategic ability in ATL*. The temporal patterns (that, in a sense, serve as winning conditions) are appropriately handled by LTL semantics.
Example 6
Let us apply the path effectivity semantics of ATL* to our model of aggressive vs. conservative play \(M_1\) from Fig. 1. Analogously to the standard semantics, we have and for every \(i=0,1,2\). This can be demonstrated e.g. by the choice \(\{q_0(q_i)^\omega \}\) that belongs to \(\mathcal {E}_{M_1}^\mathfrak {FulMem} (\{1,2\})\) as well as \(\mathcal {E}_{M_1}^\mathfrak {NoMem} (\{1,2\})\).
Characterizing path effectivity functions
The path effectivity semantics for ATL* defined above is very general, and allows for reasoning about quite abstract—one may even say contrived—patterns of effectivity. Here we identify the characteristic properties of path effectivity functions arising in CGSs, and define an analogue of the notion of (truly) playable state effectivity functions. We begin with generic conditions that must apply to any pattern of effectivity, regardless of the type of strategies being used. Then, we proceed to characterize additional conditions that are necessary (and sufficient) in the special cases of memoryless and perfect recall strategies.
General playability conditions
Definition 20
(Playability in path effectivity) Let \(\mathsf {Paths}\subseteq St^\omega \). A path effectivity function \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\) is truly playable over the set of feasible paths \(\mathsf {Paths}\) if it satisfies the following conditions:

P-Safety: \(\mathcal {E}(C)(q)\) is nonempty for every \(C\subseteq {\mathbb {A}\mathrm {gt}},q\in St\).

P-Liveness: \(\emptyset \notin \mathcal {E}(C)(q)\) for every \(C\subseteq {\mathbb {A}\mathrm {gt}},q\in St\).

P-Outcome Monotonicity: For every \(C\subseteq {\mathbb {A}\mathrm {gt}}\) the set \(\mathcal {E}(C)\) is upwards closed: if \({\mathcal {X}}\in \mathcal {E}(C)\) and \({\mathcal {X}}\subseteq \mathcal {Y}\subseteq \mathsf {Paths}\) then \(\mathcal {Y}\in \mathcal {E}(C)\).

P-Superadditivity: For every \(C,D\subseteq {\mathbb {A}\mathrm {gt}}\), if \(C \cap D = \emptyset \), \({\mathcal {X}}\in \mathcal {E}(C)\) and \(\mathcal {Y}\in \mathcal {E}(D)\), then \({\mathcal {X}}\cap \mathcal {Y}\in \mathcal {E}(C \cup D)\).

P-\(\emptyset \)-Minimality: \(\mathcal {E}(\emptyset )\) is the singleton \(\{\mathsf {Paths}\}\).

P-Determinacy: For every \(q\in St\), if \({\mathcal {X}}\in \mathcal {E}({\mathbb {A}\mathrm {gt}})\) then \(\{\lambda \} \in \mathcal {E}({\mathbb {A}\mathrm {gt}})(q)\) for some \(\lambda \in {\mathcal {X}}(q)\).^{Footnote 4}
We note that the playability conditions above are variants of true playability for path-based effectivity.
Proposition 8
True playability carries over between corresponding path and state effectivity functions. More precisely, let \(\mathsf {Paths}\subseteq St^\omega \) be such that \((\mathsf {Paths}^{S})^{P}= \mathsf {Paths}\).^{Footnote 5} Then:

1.
If the path effectivity function \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\) is truly playable over \(\mathsf {Paths}\) then its state projection \(\mathcal {E}^{S}\) is truly playable, too.

2.
If the state effectivity function \(E: St\times \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})})\) is truly playable then its path closure \(E^{P}\) is truly playable over \(\mathsf {Paths}\).
Proof
Checking the respective playability conditions is straightforward, and we leave it to the interested reader. \(\square \)
Besides the general conditions in Definition 20, we need additional conditions which are specific to the underlying class of strategies, and relate local choices with global strategies in path effectivity frames.
Path effectivity with memoryless strategies
Here we will obtain an abstract characterization of the path effectivity functions in concurrent game structures corresponding to memoryless strategies.
Preparation
Definition 21
(State-transition closed choices and effectivity functions) A global choice \({\mathcal {X}}\) of a path effectivity function \(\mathcal {E}\) is state-transition closed iff \(({\mathcal {X}}^{S})^{P}= {\mathcal {X}}\). That is, \({\mathcal {X}}\) coincides with the set of paths that follow the state-based transition relation projected from \({\mathcal {X}}\).
Respectively, \(\mathcal {E}\) is state-transition closed iff \((\mathcal {E}^{S})^{P}= \mathcal {E}\).
Clearly, every path effectivity function generated by memoryless strategies of any coalition \(C\) in any CGS is state-transition closed. Moreover, the set of paths in any global choice \({\mathcal {X}}\) determined by memoryless strategies of a coalition in a CGS \(M\) corresponds to the set of all paths along a transition relation in \(M\), suitably restricted by these memoryless strategies. By a result of Emerson [15], every such set of paths is precisely characterized by three simple closure conditions, defined below.
Definition 22
(Closure conditions for sets of paths) A set of paths \({\mathcal {X}}\) in a state space \(St\) is:

1.
suffix closed if every suffix path \(\lambda [i..\infty ]\) of a path in \({\mathcal {X}}\) belongs to \({\mathcal {X}}\);

2.
fusion closed if for all \(\lambda ,\lambda ' \in {\mathcal {X}}\) and \(i\) such that \(\lambda [i] = \lambda '[0]\), the “fusion path” \(\lambda ''\) such that \(\lambda ''[0..i] = \lambda [0..i]\) and \(\lambda ''[i..\infty ] = \lambda '\) belongs to \({\mathcal {X}}\);

3.
limit closed if for every path \(\lambda \), if there is a sequence of paths \(\{\lambda _{i}\}_{i\in {\mathbb {N}}}\) in \({\mathcal {X}}\) such that \(\lambda [0..i] = \lambda _{i}[0..i]\) for every \(i\in {\mathbb {N}}\), then \(\lambda \) belongs to \({\mathcal {X}}\), too.
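For eventually periodic paths in lasso form, the suffix and fusion operations underlying the first two closure conditions are directly computable; limit closure, in contrast, genuinely concerns infinite families and has no finite counterpart here. A sketch using our own lasso encoding \((pre, cyc)\) for the path \(pre\cdot cyc^\omega \):

```python
def at(lasso, i):
    # the state at position i of the denoted path
    pre, cyc = lasso
    return pre[i] if i < len(pre) else cyc[(i - len(pre)) % len(cyc)]

def suffix(lasso, i):
    # the suffix path lambda[i..oo], again in lasso form
    pre, cyc = lasso
    if i < len(pre):
        return pre[i:], cyc
    j = (i - len(pre)) % len(cyc)
    return (), cyc[j:] + cyc[:j]      # rotate the cycle

def fusion(lam, i, lam2):
    # fusion path lambda'' with lambda''[0..i] = lambda[0..i]
    # and lambda''[i..oo] = lambda'; requires lambda[i] == lambda'[0]
    assert at(lam, i) == at(lam2, 0)
    pre2, cyc2 = lam2
    return tuple(at(lam, k) for k in range(i)) + pre2, cyc2

# example lassos: lam = a b c b c ...,  lam2 = b d e e e ...
lam = (("a",), ("b", "c"))
lam2 = (("b", "d"), ("e",))
```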
We obtain the following characterization of state-transition closed global choices of a path effectivity function \(\mathcal {E}\).
Proposition 9
A global choice \({\mathcal {X}}\) of a path effectivity function \(\mathcal {E}\) is state-transition closed iff it is suffix, fusion, and limit closed.
Proof
As proved in [15], a set of paths in a state space \(St\) is suffix, fusion, and limit closed iff it is the set of all paths along some transition relation in \(St\). Thus, every state-transition closed global choice satisfies these closure conditions. Conversely, if a global choice \({\mathcal {X}}\) satisfies these closure conditions then it is the set of paths generated by some transition relation \(R\) in \(St\). Because of the suffix closure, \(R\) is precisely the state projection of \({\mathcal {X}}\), hence \({\mathcal {X}}\) is state-transition closed. \(\square \)
Definition 23
(State-transition closed core of \(\mathcal {E}\)) The state-transition closed core of a path effectivity function \(\mathcal {E}\) is the path effectivity function \(\mathcal {E}^{core}\) that selects only the state-transition closed choices from \(\mathcal {E}\), i.e.:
\(\mathcal {E}^{core}(C) \,{:}{=}\, \{{\mathcal {X}}\in \mathcal {E}(C)\mid ({\mathcal {X}}^{S})^{P}= {\mathcal {X}}\}.\)
Intuitively, if all players in \(C\) are following a collective memoryless strategy, while the others are free to execute any available actions, then the same set of possible successor states should be available whenever the system is in state \(q\), regardless of the path that leads to that state. Ideally, these should be exactly the feasible successors, i.e., ones that can be effected by a transition consistent with the strategy. Every such “feasible” global choice is by definition state-transition closed. However, Definition 23 also allows for state-transition closed choices that properly extend feasible choices by adding superfluous successor states in a uniform way (that is, the same superfluous successors are added whenever \(q\) occurs).
We note in passing that \(\mathcal {E}^{core}\) is never outcomemonotone except in trivial cases, even if \(\mathcal {E}\) is.
Lemma 1
Every statetransition closed path effectivity function \(\mathcal {E}\) is equal to the outcomemonotone closure of its statetransition closed core. Formally, for every coalition \(C\):
\[ \mathcal {E}(C) = \{{\mathcal {X}}\subseteq St^\omega \mid \mathcal {Y}\subseteq {\mathcal {X}}\ \text {for some}\ \mathcal {Y}\in \mathcal {E}^{core}(C)\}. \]
Proof
First, note that the path closure \(X^{P}\) of any global action choice \(X : St\rightarrow \mathcal {P}({St})\) is statetransition closed.
Now, let \({\mathcal {X}}\in \mathcal {E}(C)\). Then, also \({\mathcal {X}}\in (\mathcal {E}^{S})^{P}(C)\), and hence there must exist \(\mathcal {Y}\subseteq {\mathcal {X}}\) such that \(\mathcal {Y}\) is the path closure of some global action choice in \(\mathcal {E}^{S}\).
Then, \(\mathcal {Y}= (\mathcal {Y}^{S})^{P}\), hence \(\mathcal {Y}\in \mathcal {E}^{core}(C)\). Consequently, \({\mathcal {X}}\) is in the outcomemonotone closure of \(\mathcal {E}^{core}(C)\). The converse direction is analogous. \(\square \)
Characterization
Now we can proceed with our characterization of path effectivity functions that correspond to concurrent game structures. We begin with a proposition that structurally characterizes statetransition closed effectivity functions. Intuitively, \(\mathfrak {NoMem}\)grounding specifies that every strategic choice is an outcomemonotone extension of some “internally consistent” (that is, statetransition closed) choice. Moreover, \(\mathfrak {NoMem}\)convexity requires that any consistent collection of “locally applied” strategies for a given coalition \(C\) can be pieced together into a global memoryless strategy for \(C\).
Proposition 10
A path effectivity function \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\) over a set of feasible paths \(\mathsf {Paths}\) is statetransition closed iff for every \(C\subseteq {\mathbb {A}\mathrm {gt}}\) the following two conditions hold:

(\(\mathfrak {NoMem}\)Grounding) \(\mathcal {E}(C)\) is the outcomemonotone closure of \(\mathcal {E}^{core}(C)\), i.e., for every \(\mathcal {Y}\in \mathcal {E}(C)\) there is \({\mathcal {X}}\in \mathcal {E}^{core}(C)\) such that \({\mathcal {X}}\subseteq \mathcal {Y}\).

(\(\mathfrak {NoMem}\)Convexity) For every family \(\{{\mathcal {X}}^{q} \in \mathcal {E}^{core}(C) \mid q \in St\}\) of statetransition closed global choices, if \(\mathcal {Y}\in \mathcal {E}(C)\) is such that \(\mathcal {Y}(q) = {\mathcal {X}}^{q}(q)\) for every \(q \in St\), then \((\mathcal {Y}^{S})^{P}\in \mathcal {E}^{core}(C)\).
Equivalently, the \(\mathfrak {NoMem}\)Convexity condition can be formulated as follows: For every family \(\{{\mathcal {X}}^{q} \in \mathcal {E}^{core}(C) \mid q \in St\}\) of statetransition closed global choices, we have that \(\big (\big (\bigcup _{q\in St}{\mathcal {X}}^{q}(q)\big )^{S}\big )^{P}\in \mathcal {E}^{core}(C)\).
Proof
“ \(\Rightarrow \) ”: Let \(\mathcal {E}(C)\) be statetransition closed. Then, \(\mathfrak {NoMem}\)Grounding holds by Lemma 1. Moreover, take any family \(\{{\mathcal {X}}^{q} \in \mathcal {E}^{core}(C) \mid q \in St\}\) of statetransition closed global choices. For every \(q\in St\), the set of immediate successors of the initial state \(q\) on the paths in \({\mathcal {X}}^q(q)\) is precisely \(({\mathcal {X}}^q)^{S}(q)\). Consider the global action choice \(Y\) such that \(Y(q)=({\mathcal {X}}^q)^{S}(q)\) for every \(q\in St\). Clearly, \(Y\in \mathcal {E}^{S}(C)\), hence \(Y^{P}\in (\mathcal {E}^{S})^{P}(C)\). Finally, we observe that \(Y = \big (\bigcup _{q\in St}{\mathcal {X}}^{q}(q)\big )^{S}\) and that \((\mathcal {E}^{S})^{P}(C)= \mathcal {E}(C)\) by assumption. Thus, \(\big (\big (\bigcup _{q\in St}{\mathcal {X}}^{q}(q)\big )^{S}\big )^{P}\in \mathcal {E}(C)\). Since that choice is statetransition closed by construction, it must also be in \(\mathcal {E}^{core}(C)\), which concludes this part of the proof.
“ \(\Leftarrow \) ”: Let \(\mathcal {E}\) be \(\mathfrak {NoMem}\)grounded and \(\mathfrak {NoMem}\)convex, and let \({\mathcal {X}}\in \mathcal {E}(C)\). Then, by \(\mathfrak {NoMem}\)Grounding, there is a statetransition closed \(\mathcal {Y}\subseteq {\mathcal {X}}\) in \(\mathcal {E}^{core}(C)\). But then also \(\mathcal {Y}\in (\mathcal {E}^{S})^{P}(C)\), hence \({\mathcal {X}}\in (\mathcal {E}^{S})^{P}(C)\), because \((\mathcal {E}^{S})^{P}(C)\) is closed under supersets by construction.
Conversely, let \({\mathcal {X}}\in (\mathcal {E}^{S})^{P}(C)\). Then, it is a superset of the path closure of a global action choice generated from a combination of state projections of strategic choices \(\{{\mathcal {X}}^{q} \mid q \in St\}\) in \(\mathcal {E}(C)\). More precisely: \({\mathcal {X}}\supseteq \big (\big (\bigcup _{q\in St}{\mathcal {X}}^{q}(q)\big )^{S}\big )^{P}\). By \(\mathfrak {NoMem}\)grounding, for each \({\mathcal {X}}^{q}\), there must be a statetransition closed strategic choice \(\mathcal {Y}^{q}\in \mathcal {E}^{core}(C)\) such that \(\mathcal {Y}^{q}\subseteq {\mathcal {X}}^{q}\). Now, take the family \(\{\mathcal {Y}^{q} \mid q \in St\}\). By \(\mathfrak {NoMem}\)convexity, we get that \(\big (\big (\bigcup _{q\in St}\mathcal {Y}^{q}(q)\big )^{S}\big )^{P}\in \mathcal {E}(C)\). Since (i) \(\bigcup _{q\in St}\mathcal {Y}^{q}(q) \subseteq \bigcup _{q\in St}{\mathcal {X}}^{q}(q)\), (ii) the operations of state projection and path closure are monotonic wrt sets of outcomes from the effectivity functions, and (iii) \(\mathcal {E}(C)\) is closed under supersets, we finally obtain that \({\mathcal {X}}\in \mathcal {E}(C)\). \(\square \)
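The gluing step behind \(\mathfrak {NoMem}\)Convexity can be sketched in the same finite-horizon style (a Python toy of ours, with illustrative names): from each member \({\mathcal {X}}^{q}\) of the family we keep only the opening moves of the paths starting at \(q\), merge them into a single transition relation, and regenerate the paths along it.

```python
# Finite-horizon sketch of the gluing used for NoMem-convexity.
# family maps each state q to a choice X^q (a set of fixed-length paths).
def glue(family, states, length):
    """Glued memoryless choice: at each state q follow X^q's opening move."""
    R = {(q, lam[1]) for q in states for lam in family[q] if lam[0] == q}
    paths = {(q,) for q in states}
    for _ in range(length - 1):
        paths = {p + (q2,) for p in paths for (q, q2) in R if q == p[-1]}
    return paths

# X^a: from a, always stay at a (its behaviour elsewhere is ignored);
# X^b: from b, always stay at b.
Xa = {("a", "a", "a"), ("b", "a", "a"), ("b", "b", "b")}
Xb = {("b", "b", "b"), ("a", "b", "b"), ("a", "a", "a")}
glued = glue({"a": Xa, "b": Xb}, {"a", "b"}, 3)
assert glued == {("a", "a", "a"), ("b", "b", "b")}
```

Note how the glued choice uses only \({\mathcal {X}}^{q}(q)\) from each member of the family, exactly as in the union formulation of the condition.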
Theorem 4
(\(\mathfrak {NoMem}\)Representation theorem) A path effectivity function \(\mathcal {E}\) over a set of feasible paths \(\mathsf {Paths}\) equals the path effectivity function with memoryless strategies \(\mathcal {E}^\mathfrak {NoMem} _M\) for some concurrent game structure \(M\) if and only if \(\mathsf {Paths}\) is statetransition closed and \(\mathcal {E}\) is truly playable and statetransition closed.
Proof
By Proposition 10 it suffices to prove that \(\mathcal {E}\) is representable in concurrent game structures with memoryless strategies iff \(\mathsf {Paths}\) is statetransition closed and \(\mathcal {E}\) is truly playable, \(\mathfrak {NoMem}\)grounded, and \(\mathfrak {NoMem}\)convex.
“\(\Rightarrow \)”: Take any CGS \(M\) and its path effectivity function \(\mathcal {E}^\mathfrak {NoMem} _M\). Statetransition closedness of \(\mathsf {Paths}_M\) is obvious. Further, we observe that \(\mathcal {E}^\mathfrak {NoMem} _M\) is the path closure of the state \(\alpha \)effectivity function of \(M\), i.e., \(\mathcal {E}^\mathfrak {NoMem} _M = (E_M^\alpha )^{P}\). Thus, by Proposition 8(2), \(\mathcal {E}^\mathfrak {NoMem} _M\) must be truly playable. Further, every choice \(\mathcal {Y}\in \mathcal {E}^\mathfrak {NoMem} _M(C)\) is a superset of some statetransition closed choice \({\mathcal {X}}\) generated by some collective memoryless strategy of \(C\), hence \(\mathcal {E}^\mathfrak {NoMem} _M\) is \(\mathfrak {NoMem}\)grounded. Finally, for a family of statetransition closed choices \(\{{\mathcal {X}}^{q} \mid q\in St\}\) in \(\mathcal {E}^\mathfrak {NoMem} _M(C)\), let us take \(\widehat{{\mathcal {X}}}^q \subseteq {\mathcal {X}}^q\) to be a choice generated by an actual collective memoryless strategy of \(C\) (such a choice must exist by the construction of \(\mathcal {E}^\mathfrak {NoMem} _M\)). Let \(\mathcal {Y}= \bigcup _{q\in St}{\mathcal {X}}^{q}(q)\) and \(\widehat{\mathcal {Y}} = \bigcup _{q\in St}\widehat{{\mathcal {X}}}^{q}(q)\). Clearly, \(\widehat{\mathcal {Y}}\) is the set of paths generated by a collective strategy of \(C\) that combines the opening moves from \(\{\widehat{{\mathcal {X}}}^{q}\}\). In consequence, \(\widehat{\mathcal {Y}}\in \mathcal {E}^\mathfrak {NoMem} _M(C)\). Moreover, \(\widehat{\mathcal {Y}}\subseteq \mathcal {Y}\), so \(\mathcal {Y}\in \mathcal {E}^\mathfrak {NoMem} _M(C)\) by the outcomemonotonicity of \(\mathcal {E}^\mathfrak {NoMem} _M\). That proves \(\mathfrak {NoMem}\)convexity.
“\(\Leftarrow \)”: Let \(\mathcal {E}\) be truly playable, \(\mathfrak {NoMem}\)grounded, and \(\mathfrak {NoMem}\)convex over a statetransition closed set of feasible paths \(\mathsf {Paths}\). Then:

1.
We construct the global state effectivity function \(\mathcal {E}^{S}\) as the state projection of \(\mathcal {E}\) (Definition 16). By Proposition 8(1), \(\mathcal {E}^{S}\) is truly playable.

2.
Using the representation theorem in [19] we construct a CGS \(M\) for the same set of agents \({\mathbb {A}\mathrm {gt}}\) and state space \(St\), such that the state effectivity function \(E_M^\alpha \) of \(M\) coincides with \(\mathcal {E}^{S}\).

3.
Using \(E_M^\alpha \) we construct the respective path effectivity function \(\mathcal {E}^\mathfrak {NoMem} _M\) as the path closure of \(E_M^\alpha \), according to Definition 17.

4.
Finally, we show that \(\mathcal {E}^\mathfrak {NoMem} _M\) coincides with \(\mathcal {E}\) by using the \(\mathfrak {NoMem}\)grounding and \(\mathfrak {NoMem}\)convexity of each of \(\mathcal {E}\) and \(\mathcal {E}^\mathfrak {NoMem} _M\). For that we fix any coalition \(C\) and prove both inclusions:

\( \mathcal {E}(C) \subseteq \mathcal {E}^\mathfrak {NoMem} _M(C)\): Take any global choice \(\mathcal {Y}\in \mathcal {E}(C)\). Then there is \({\mathcal {X}}\in \mathcal {E}^{core}(C)\) such that \({\mathcal {X}}\subseteq \mathcal {Y}\) (by \(\mathfrak {NoMem}\)Grounding). The state projection \({\mathcal {X}}^{S}\) is a global action choice in \(\mathcal {E}^{S}(C) = E_M^\alpha (C)\). Thus, there is a collective memoryless strategy \(\sigma _{C}\) for \(C\) in \(M\) that generates an actual global action choice \(\widehat{X} \in E_M^\alpha (C)\) of which \({\mathcal {X}}^{S}\) is an extension (i.e., \(\widehat{X}(q)\subseteq {\mathcal {X}}^{S}(q)\) for every \(q\in St\)). Clearly, the path closure of \(\widehat{X}\) corresponds to the set of actual outcome paths of \(\sigma _C\), therefore \(\widehat{X}^{P}\in \mathcal {E}^\mathfrak {NoMem} _M(C)\). By the monotonicity of path closure, we also have that \(\widehat{X}^{P}\subseteq ({\mathcal {X}}^{S})^{P}\). Moreover, \(({\mathcal {X}}^{S})^{P}={\mathcal {X}}\) because \({\mathcal {X}}\) is statetransition closed. Thus, \(\mathcal {Y}\supseteq {\mathcal {X}}\supseteq \widehat{X}^{P}\in \mathcal {E}^\mathfrak {NoMem} _M(C)\), and hence \(\mathcal {Y}\in \mathcal {E}^\mathfrak {NoMem} _M(C)\) by the outcomemonotonicity of \(\mathcal {E}^\mathfrak {NoMem} _M\).

\( \mathcal {E}^\mathfrak {NoMem} _M(C) \subseteq \mathcal {E}(C)\): Take any global choice \(\mathcal {Y}\in \mathcal {E}^\mathfrak {NoMem} _M(C)\). By the construction of \(\mathcal {E}^\mathfrak {NoMem} _M\), there must be a global state choice \(X\in E_M^\alpha (C)\), hence \(X\in \mathcal {E}^{S}(C)\) (by step 2 above), that corresponds to an actual collective strategy of \(C\) in \(M\) and such that \(\mathcal {Y}\) extends the set of paths generated by \(X\) (that is, \(X^{P}\subseteq \mathcal {Y}\)). By the definition of state projection, there must be a strategic choice \(\widehat{{\mathcal {X}}}\in \mathcal {E}(C)\) such that \(X(q) = \{q' \mid \lambda [0]=q\text { and }\lambda [1]=q'\text { for some }\lambda \in \widehat{{\mathcal {X}}}\}\) for every \(q\in St\). Moreover, by the \(\mathfrak {NoMem}\)groundedness of \(\mathcal {E}\), for every \(q\) there is a statetransition closed \({\mathcal {X}}^q\) such that \({\mathcal {X}}^q(q)\subseteq \widehat{{\mathcal {X}}}(q)\). Take \(\widehat{\mathcal {Y}} = \big (\big (\bigcup _{q\in St}{\mathcal {X}}^{q}(q)\big )^{S}\big )^{P}\). By the \(\mathfrak {NoMem}\)convexity of \(\mathcal {E}\), we have that \(\widehat{\mathcal {Y}}\in \mathcal {E}(C)\). Summarizing, we have \(\mathcal {Y}\supseteq X^{P}= \big (\big (\bigcup _{q\in St}{\mathcal {X}}^{q}(q)\big )^{S}\big )^{P}= \widehat{\mathcal {Y}}\in \mathcal {E}(C)\). Thus, by the outcomemonotonicity of \(\mathcal {E}\), we obtain that \(\mathcal {Y}\in \mathcal {E}(C)\).

Path effectivity with perfect recall strategies
Our characterization of representability for perfect recall strategies is analogous, but now the requirements on a valid strategy are more relaxed. As a consequence, more sets of paths (strategic choices) in a path effectivity function correspond to actual strategies in the CGS. In fact, every sequence of collective actions taken at the states of an infinite play by a group of agents can be regarded as determined by a perfect recall strategy of that group. The difference from the case of memoryless strategies is that every pass through the same state allows a different choice, and hence determines a possibly different set of successor states. That difference can be captured in two different but equivalent ways: by using historybased rather than statebased effectivity functions, or by considering state effectivity functions and memoryless strategies in the tree unfolding of the CGS. We will present both approaches.
Preparation
We begin by updating the mappings between path and state effectivity functions, which were defined in Sect. 4.2 with memoryless strategies in mind. First, recall some notation introduced in Sect. 4.1. For \({\mathcal {X}}\subseteq St^\omega , h\in St^+\), we have:
\({\mathcal {X}}(h) \,{:}{=}\, \{\lambda \mid \lambda \in {\mathcal {X}}\ \text{ and } \ \lambda [0..k]= h\}\), where \(k\) is the last position index of \(h\);
\(\mathcal {E}(C)(h) = \{{\mathcal {X}}(h) \mid {\mathcal {X}}\in \mathcal {E}(C)\}\).
Definition 24
(Historybased state effectivity functions) A historybased state effectivity function on a state space \(St\) is a mapping
that assigns to every coalition \(C\subseteq {\mathbb {A}\mathrm {gt}}\) and every finite history \(h \in St^+\) a family of sets of successor states. The elements of \(E^{H}(C)\) are called the historybased global strategic choices of the coalition \(C\) in \(E^{H}\).
Every CGS \(M\) for a set of agents \({\mathbb {A}\mathrm {gt}}\) over a state space \(St\) defines the historybased state effectivity function \(E^{H}_{M}\). The function assigns to every coalition \(C\) and history \(h\) the family of possible sets of successors of the last state of \(h\), corresponding to the possible perfect recall strategies of \(C\) that produce \(h\) following a suitable collective behavior of the remaining agents. Thus, every perfect recall strategy \(\sigma _{C}\) determines a historybased global strategic choice of \(C\) that assigns to every history \(h\) the set of possible continuations of \(h\) resulting from the agents in \(C\) following the strategy \(\sigma _{C}\).
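For a concrete (toy) illustration of why historybased choices are needed: under a perfect recall strategy, two histories ending in the same state may be mapped to different successor sets. The sketch below is ours; the state space, actions, and strategy are hypothetical.

```python
# A perfect recall strategy induces a history-based choice: the successor
# set depends on the whole history, not just on its last state.
def choice_of(strategy, o, h):
    """Successors of history h when the agent follows the strategy."""
    return {o[h[-1], strategy(h)]}

# Deterministic single-agent outcome table (toy example).
o = {("a", "stay"): "a", ("a", "go"): "b", ("b", "stay"): "b"}

def strategy(h):
    # Stay at "a" on the first visit, leave on the second.
    return "go" if h.count("a") >= 2 else "stay"

assert choice_of(strategy, o, ("a",)) == {"a"}
assert choice_of(strategy, o, ("a", "a")) == {"b"}   # same last state, new move
```

A memoryless strategy, by contrast, would be forced to assign the same successor set to both histories, since they end in the same state.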
Definition 25
(Historybased state projection) For a global strategic choice \({\mathcal {X}}\subseteq St^\omega \), we define its historybased state projection as the historybased global action choice \({\mathcal {X}}^{HS}: St^+ \rightarrow \mathcal {P}({St})\) constructed as follows:
Similarly, the historybased state projection of a path effectivity function \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({Paths})})\) is the historybased state effectivity function \(\mathcal {E}^{HS}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow (St^+ \rightarrow \mathcal {P}({\mathcal {P}({St})}))\) that assigns to every coalition \(C\subseteq {\mathbb {A}\mathrm {gt}}\) and every finite history \(h \in St^+\) the family
of sets of successor states, one for each set of paths in \(\mathcal {E}(C)(h)\).
\({\mathcal {X}}^{HS}(h)\) includes all the states that can appear right after the prefix \(h\) in the set of paths \({\mathcal {X}}\). Thus, \({\mathcal {X}}^{HS}\) assigns possible successors to each finite sequence of states that can occur in the system. This can be seen as a representation of a tree of possible finite histories admitted by a fixed perfect recall collective strategy of the agents in \(C\). Moreover, \(\mathcal {E}^{HS}\) collects all such trees that can be “extracted” from the strategic choices of \(C\) in \(\mathcal {E}\).
Definition 26
(Historybased path closure) Given a historybased action choice \(X : St^+ \rightarrow \mathcal {P}({St})\), we define its historybased path closure \(X^{HP}\subseteq St^\omega \) as follows:
Likewise, the historybased path closure of a historybased state effectivity function \(E: St^+ \times \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St})})\) is defined as the path effectivity function \(E^{HP}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\) constructed as follows:
That is, \(X^{HP}\) collects the paths generated by the transition tree represented by \(X\). Moreover, \(E^{HP}\) is the outcomemonotone closure of the family of strategic choices generated this way from the extended state effectivity function \(E\).
Definition 27
(Historytransition closed choices and effectivity functions) A strategic choice \({\mathcal {X}}\) is historytransition closed iff \(({\mathcal {X}}^{HS})^{HP}= {\mathcal {X}}\).
A path effectivity function \(\mathcal {E}\) is historytransition closed iff \((\mathcal {E}^{HS})^{HP}= \mathcal {E}\).
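At any finite depth, the round trip \({\mathcal {X}}\mapsto ({\mathcal {X}}^{HS})^{HP}\) is the identity, because a set of finite histories is fully determined by its prefix tree; the closure condition above only becomes non-trivial on genuinely infinite paths. The following Python toy (ours, with paths truncated to a fixed length) sketches the two mappings.

```python
# Finite-depth sketch of the round trip X -> X^HS -> (X^HS)^HP
# (cf. Definitions 25-27).
def history_projection(choice):
    """X^HS: map each proper prefix h to the states that may follow it."""
    hs = {}
    for lam in choice:
        for i in range(1, len(lam)):
            hs.setdefault(lam[:i], set()).add(lam[i])
    return hs

def history_path_closure(hs, length):
    """(X^HS)^HP: regrow all paths that follow the prefix tree hs."""
    paths = {h for h in hs if len(h) == 1}
    for _ in range(length - 1):
        paths = {p + (q,) for p in paths for q in hs.get(p, set())}
    return paths

X = {("a", "b", "a"), ("a", "b", "b"), ("b", "a", "a")}
assert history_path_closure(history_projection(X), 3) == X
```

Unlike a memoryless state projection, the historybased projection never mixes information across different occurrences of the same state, which is why no fusion-style spurious paths appear here.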
As it turns out, the analogues of statetransition closed choices and of the statetransition closed core for perfect recall strategies require only Emerson’s limit closure condition [15], which we already presented in Sect. 5.2 and recall again below.
Definition 28
(Limit closure, limitclosed core) A strategic choice \({\mathcal {X}}\subseteq St^\omega \) is limitclosed iff, for every path \(\lambda \), whenever \({\mathcal {X}}\) contains an infinite sequence of paths \(\{\lambda _{i}\}_{i\in {\mathbb {N}}}\) such that \(\lambda _{i}[0..i] = \lambda [0..i]\) for every \(i\in {\mathbb {N}}\), then \(\lambda \) belongs to \({\mathcal {X}}\), too.
The limitclosed core of \(\mathcal {E}\) is defined as the effectivity function \(\mathcal {E}^{lcore}\) that selects only the limitclosed choices from \(\mathcal {E}\):
\[ \mathcal {E}^{lcore}(C) \,{:}{=}\, \{{\mathcal {X}}\in \mathcal {E}(C) \mid {\mathcal {X}}\ \text {is limitclosed}\}. \]
Proposition 11
For any global strategic choice \({\mathcal {X}}\subseteq St^\omega \), \({\mathcal {X}}\) is historytransition closed iff \({\mathcal {X}}\) is limitclosed.
Proof
We prove that \(({\mathcal {X}}^{HS})^{HP}= {\mathcal {X}}\) iff \({\mathcal {X}}\) is limitclosed. Essentially by definition, \(X^{HP}\) is limitclosed for every historybased action choice \(X\), hence the implication from left to right. Conversely, let \({\mathcal {X}}\) be limitclosed. First, note that \({\mathcal {X}}\subseteq ({\mathcal {X}}^{HS})^{HP}\), immediately from the definition of \( ({\mathcal {X}}^{HS})^{HP}\). For the other inclusion, let \(\lambda \in ({\mathcal {X}}^{HS})^{HP}\). Then, for every \(i \ge 0\), we have \(\lambda [i+1] \in {\mathcal {X}}^{HS}(\lambda [0..i])\), hence \(\lambda [i+1] = \lambda '[i+1]\) for some \(\lambda ' \in {\mathcal {X}}\) such that \(\lambda '[0..i] = \lambda [0..i]\). Put \(\lambda _{i+1} = \lambda '\). Thus, we have defined an infinite sequence \(\{\lambda _{j}\}_{j>0}\) of paths in \({\mathcal {X}}\) such that \(\lambda _{j}[0..j] = \lambda [0..j]\) for each \(j\). By the limit closure of \({\mathcal {X}}\) it follows that \(\lambda \in {\mathcal {X}}\). Thus, we have also proved that \(({\mathcal {X}}^{HS})^{HP}\subseteq {\mathcal {X}}\).\(\square \)
Lemma 2
Every path effectivity function \(\mathcal {E}\) which is historytransition closed is equal to the outcomemonotone closure of its limitclosed core. Formally, for every coalition \(C\):
\[ \mathcal {E}(C) = \{{\mathcal {X}}\subseteq St^\omega \mid \mathcal {Y}\subseteq {\mathcal {X}}\ \text {for some}\ \mathcal {Y}\in \mathcal {E}^{lcore}(C)\}. \]
Proof
Let \((\mathcal {E}^{HS})^{HP}= \mathcal {E}\) and let \({\mathcal {X}}\in \mathcal {E}(C)\). Then, \({\mathcal {X}}\in (\mathcal {E}^{HS})^{HP}(C)\), hence there exists an \(X \in \mathcal {E}^{HS}(C)\) such that \(X^{{HP}}\subseteq {\mathcal {X}}\). Since \(X^{{HP}}\) is limit closed, we have that \(X^{{HP}} \in \mathcal {E}^{lcore}(C)\), and hence \({\mathcal {X}}\) belongs to the outcomemonotone closure of \(\mathcal {E}^{lcore}(C)\). Conversely, let \({\mathcal {X}}\supseteq \mathcal {Y}\) for some limitclosed \(\mathcal {Y}\in \mathcal {E}(C)\). Then \({\mathcal {X}}\in \mathcal {E}(C)\) because \(\mathcal {E}(C)\) is outcomemonotone. \(\square \)
Path effectivity functions in treelike structures
Here we do some technical preparation for reducing the characterization of path effectivity functions with perfect recall strategies to the case of memoryless strategies in treelike concurrent game structures.
Definition 29
(Treelike concurrent game structures) A CGS is:

injective, if for every state, any two different action profiles applied at that state result in different successor states.

treelike, if it is injective and all states have pairwise disjoint sets of successor states.
Equivalently, a CGS is treelike if every state has a unique maximal (i.e., not properly extendable) history, that is, a unique maximal path along the transition relation ending at that state. Note that in the definition above we do not assume the existence of a root, so the history of a state may lack an initial state and hence be infinite.
Remark 2
Any state in a treelike CGS can be visited at most once during a play, and therefore memoryless and perfect recall strategies in treelike CGSs coincide.
Definition 30
(Tree unfolding of concurrent game structures [2, 12]) The tree unfolding of a CGS \(F = ({\mathbb {A}\mathrm {gt}}, St, Act, d, o)\) is the CGS
where:

\(\widehat{St}\) is the set of all initial feasible histories \(\lambda [0..i]\) of the feasible paths \(\lambda \) in \(F\);

\(\widehat{d} : {\mathbb {A}\mathrm {gt}}\times \widehat{St} \rightarrow \mathcal {P}({Act})\) assigns to each agent \(\mathsf a \) and history \(\lambda [0..i]\) the set of actions available to \(\mathsf a \) at the last state of that history: \(\widehat{d}(\mathsf a ,\lambda [0..i]) \,{:}{=}\, d(\mathsf a ,\lambda [i])\).

\(\widehat{o}\) is the transition function defined on every history and action profile as \(o\) applied to the last state of the history and the same action profile: \(\widehat{o}(\lambda [0..i],\alpha _1,\dots ,\alpha _k) {:}{=} o(\lambda [i],\alpha _1,\) \(\dots ,\alpha _k)\).
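The construction can be sketched as follows, assuming a finite CGS given by explicit tables for \(d\) and \(o\) (a Python toy of ours, unfolded only to a fixed depth; all names are illustrative):

```python
# Sketch of a tree unfolding (cf. Definition 30): states of the unfolding
# are histories (tuples of states); available actions and outcomes are
# copied from the last state of the history.
def unfold(o, d, agents, root, depth):
    """Return the transition map of the unfolding, up to the given depth."""
    trans = {}
    frontier = [(root,)]
    for _ in range(depth):
        nxt = []
        for h in frontier:
            q = h[-1]
            profiles = [()]
            for a in agents:
                profiles = [p + (act,) for p in profiles for act in d[a, q]]
            for p in profiles:
                h2 = h + (o[q, p],)
                trans[h, p] = h2
                nxt.append(h2)
        frontier = nxt
    return trans

# One agent, two states, actions "stay"/"go".
agents = ["1"]
d = {("1", "a"): ["stay", "go"], ("1", "b"): ["stay"]}
o = {("a", ("stay",)): "a", ("a", ("go",)): "b", ("b", ("stay",)): "b"}
t = unfold(o, d, agents, "a", 2)
# The two reachable occurrences of state "b" become distinct unfolding states:
assert t[("a", "b"), ("stay",)] == ("a", "b", "b")
assert t[("a", "a"), ("go",)] == ("a", "a", "b")
```

Since each history occurs at most once, the unfolding is treelike, which is exactly what makes memoryless strategies on it as powerful as perfect recall strategies on the original structure.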
Now, we define the liftings of strategies, paths, choices and effectivity functions from concurrent game structures to their tree unfoldings.
Definition 31
(Liftings of strategies, paths and choices) Consider the tree unfolding \(\widehat{F} = ({\mathbb {A}\mathrm {gt}}, \widehat{St}, Act, \widehat{d}, \widehat{o})\) of a CGS \(F = ({\mathbb {A}\mathrm {gt}}, St, Act, d, o)\).

Every perfect recall strategy \(\sigma _\mathsf{a }\) of an agent \(\mathsf a \) in \(F\) defines a strategy \(\widehat{\sigma }_\mathsf{a }\) in \(\widehat{F}\) that prescribes at every state in \(\widehat{F}\) (i.e., history \(h\) in \(F\)) the action \(\sigma _\mathsf{a }(h)\). Likewise for coalitional strategies.

For every path \(\lambda \) in \(F\) we define its lifting as the path of its initial histories \(\widehat{\lambda } = \lambda [0..0], \lambda [0..1], \ldots \lambda [0..n], \ldots \) in \(\widehat{F}\). Note that \(\widehat{\lambda }\) is a feasible path (play) in \(\widehat{F}\) iff \(\lambda \) is a feasible path (play) in \(F\).

Likewise, for every set of paths \({\mathcal {X}}\) in \(F\) we define its lifting in \(\widehat{F}\) as \(\widehat{{\mathcal {X}}} = \{\widehat{\lambda } \mid \lambda \in {\mathcal {X}}\}\).

Every (abstract) path effectivity function \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\) is lifted accordingly to a path effectivity function \(\widehat{\mathcal {E}} : \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({\widehat{St}^\omega })})\).
Note that:

1.
Every tree unfolding of a CGS is treelike.

2.
The tree unfolding of a treelike CGS \(F\) is isomorphic to \(F\).

3.
The mapping \(\widehat{\cdot }\) defined above is a bijection between the feasible paths (plays) in the CGS \(F\) and those in its tree unfolding \(\widehat{F}\).
Proposition 12
Let \(\widehat{F} = ({\mathbb {A}\mathrm {gt}}, \widehat{St}, Act, \widehat{d}, \widehat{o})\) be the tree unfolding of the CGS \(F = ({\mathbb {A}\mathrm {gt}}, St, Act, d, o)\). Then the lifting of the path effectivity function with perfect recall strategies \(\mathcal {E}^\mathfrak {FulMem} _F\) is precisely the path effectivity function with memoryless strategies \(\widehat{\mathcal {E}}^\mathfrak {NoMem} _{\widehat{F}}\) in \(\widehat{F}\).
Proof
First, every perfect recall strategy \(\sigma \) of an agent or coalition in \(F\) is lifted to the memoryless strategy \(\widehat{\sigma }\) in \(\widehat{F}\) as defined above. Conversely, every memoryless strategy in \(\widehat{F}\) is a lifting of a respective perfect recall strategy in \(F\). Furthermore, a play \(\lambda \) in \(F\) is consistent with a perfect recall strategy \(\sigma \) in \(F\) iff its lifting \(\widehat{\lambda }\) is consistent with the corresponding memoryless strategy in \(\widehat{F}\). Consequently, the global strategic choices in \(\widehat{\mathcal {E}}^\mathfrak {NoMem} _{\widehat{F}}\) in \(\widehat{F}\) are precisely the liftings of the global strategic choices in \(\mathcal {E}^\mathfrak {FulMem} _F\). \(\square \)
Characterization
Now we obtain characterizations of path effectivity functions that correspond to concurrent game structures with perfect recall strategies. Instead of repeating the work done for the case of memoryless strategies, we can reduce that characterization to the one with memoryless strategies in treelike CGSs using the definitions and results from Sect. 5.3.2.
Proposition 13
Let \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\) be a path effectivity function over a set of feasible paths \(\mathsf {Paths}\). Then \(\mathcal {E}\) is historytransition closed and truly playable iff its lifting \(\widehat{\mathcal {E}} : \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({\widehat{St}^\omega })})\) is statetransition closed and truly playable.
Proof
First, suppose \(\mathcal {E}\) is historytransition closed and truly playable. Then \(\widehat{\mathcal {E}}\) is statetransition closed, immediately from the definitions, as the lifting transforms histories into states. Furthermore, the playability conditions from Definition 20 for the path effectivity function \(\mathcal {E}\) are directly lifted to the playability conditions for the global state effectivity function \(\widehat{\mathcal {E}}\) from Definition 4. Conversely, assume that \(\widehat{\mathcal {E}}\) is statetransition closed and the playability conditions from Definition 4 hold globally for \(\widehat{\mathcal {E}}\). Then, again immediately from the definitions, \(\mathcal {E}\) is historytransition closed. Furthermore, PSafety and PLiveness for \(\mathcal {E}\) follow immediately. Likewise for POutcome Monotonicity, PSuperadditivity, P\(\emptyset \)Minimality and PDeterminacy, using the fact that \(\mathcal {E}\) is historytransition closed and Lemma 2. We omit the routine details. \(\square \)
Theorem 5
(\(\mathfrak {FulMem}\) Representation theorem) A path effectivity function \(\mathcal {E}\) over a state space \(St\) and a set of feasible paths \(\mathsf {Paths}\) equals \(\mathcal {E}^\mathfrak {FulMem} _F\) for some concurrent game structure \(F\) if and only if \(\mathsf {Paths}\) is statetransition closed and \(\mathcal {E}\) is truly playable and historytransition closed.
Proof
First, if \(\mathcal {E}\) equals \(\mathcal {E}^\mathfrak {FulMem} _F\) for some CGS \(F\), then its lifting \(\widehat{\mathcal {E}}\) equals the path effectivity function with memoryless strategies \(\widehat{\mathcal {E}}^\mathfrak {NoMem} _{\widehat{F}}\), hence it satisfies the characterization of Theorem 4 (possibly simplified for treelike structures). Note that the suffix, fusion and limit closure conditions are preserved both ways by liftings of sets of paths, and hence by liftings of path effectivity functions. Thus, \(\mathcal {E}\) is truly playable and historytransition closed, by Proposition 13.
Conversely, if the conditions are satisfied by \(\mathcal {E}\), then its lifting \(\widehat{\mathcal {E}}\) satisfies the characterization conditions of Theorem 4, hence it is equal to the path effectivity function with memoryless strategies for some treelike CGS over \(\widehat{St}\). The latter can be regarded as the lifting of the path effectivity function with perfect recall strategies of a respective (treelike) CGS over \(St\), which is equal to \(\mathcal {E}\). \(\square \)
We now provide an alternative, internal characterization of historytransition closed effectivity functions, in terms of the properties \(\mathfrak {FulMem}\)grounding and \(\mathfrak {FulMem}\)convexity stated in Proposition 14 below. Intuitively, \(\mathfrak {FulMem}\)grounding specifies that every strategic choice can be “grounded” onto one that satisfies limit closure. Moreover, \(\mathfrak {FulMem}\)convexity requires that any collection of substrategies for a given coalition \(C\) can be pieced together into a global perfect recall strategy for \(C\).
Proposition 14
A path effectivity function \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\) over feasible paths \(\mathsf {Paths}\) is historytransition closed iff for every \(C\subseteq {\mathbb {A}\mathrm {gt}}\) the following two conditions hold:

(\(\mathfrak {FulMem}\)Grounding) For every \(\mathcal {Y}\in \mathcal {E}(C)\) there is \({\mathcal {X}}\in \mathcal {E}^{lcore}(C)\) such that \({\mathcal {X}}\subseteq \mathcal {Y}\).

(\(\mathfrak {FulMem}\)Convexity) For every family \(\{{\mathcal {X}}^{h} \in \mathcal {E}^{lcore}(C) \mid h \in St^+ \}\) of strategic choices, if \(\mathcal {Y}\in \mathcal {E}(C)\) is such that \(\mathcal {Y}(h) = {\mathcal {X}}^{h}(h)\) for every \(h \in St^+\), then \((\mathcal {Y}^{HS})^{HP}\in \mathcal {E}^{lcore}(C)\).
Equivalently, the condition can be formulated as follows: For every family \(\{{\mathcal {X}}^{h} \in \mathcal {E}^{lcore}(C) \mid h \in St^+ \}\), we have that \(\big (\big (\bigcup _{h\in St^+}{\mathcal {X}}^{h}(h)\big )^{HS}\big )^{HP}\in \mathcal {E}^{lcore}(C)\).
Proof
Follows from Propositions 10 and 13.
First, Proposition 13 and its proof can be simplified to only state that \(\mathcal {E}\) is historytransition closed iff its lifting \(\widehat{\mathcal {E}} : \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({\widehat{St}^\omega })})\) is statetransition closed.
Now, if \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\) is historytransition closed then (\(\mathfrak {FulMem}\)Grounding) follows immediately from Lemma 2 and (\(\mathfrak {FulMem}\)Convexity) follows from Proposition 10 and the simplified Proposition 13.
The converse direction follows the proof of Proposition 10 using the simplified Proposition 13. We omit the routine details. \(\square \)
Further remarks on path effectivity
In Sect. 4, we argued that path effectivity is conceptually the best match for representing effectivity in multistep games, and to provide semantics to logics of longterm ability, such as ATL and ATL*. Here, we briefly show that a single path effectivity function can be used to derive state effectivity functions for any given temporal pattern (Sect. 6.1). Moreover, we show how our technical results from Sect. 5 can be applied to provide insight into existing theories of agency—in this case, the stit theory of “seeing to it that” (Sect. 6.2). Finally, we offer some speculation on how path effectivity functions can be used to model multistep games with imperfect information (Sect. 6.3).
From path effectivity back to state effectivity
In Sect. 3, we showed how effectivity of agents and coalitions can be presented entirely in terms of states (positions) in the game. Essentially, one has to dedicate a separate effectivity function to each temporal pattern of interest. Thus, we need one function to describe what properties the agents are effective for in the next moment, another one to describe which properties can be maintained by whom forever from now on, etc. If the structures are used to give semantics to ATL, we need three effectivity functions (\(\mathbf {E}\) for “next”, \(\mathbf {G}\) for “always”, and \(\mathbf {U}\) for “until”, as in Sect. 3). However, in the richer language of ATL*, there are infinitely many possible temporal patterns. For instance, we can be interested in the properties that coalition \(C\) can enforce infinitely often (i.e., \(\varphi \) such that \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {G}\mathrm {F}\varphi \)), those that can be maintained from some moment on (\(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {F}\mathrm {G}\varphi \)), those that can be achieved at two consecutive time points (\(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {F}(\varphi \wedge \mathrm {X}\varphi )\)), and so forth. Thus, using the framework of state effectivity leads to a fairly complicated picture if one is interested in coalitional effectivity with respect to anything beyond the three standard temporal operators. On the other hand, a single path effectivity function can be used to derive state effectivity functions for all temporal patterns specifiable in ATL*. In this sense, a path effectivity function is not only an intuitive, but also a much more complete description of what agents and coalitions can effect in the system. We begin by showing how to “distill” the state effectivity functions for “next” (\(\mathrm {X}\)), “eventually” (\(\mathrm {F}\)), “always” (\(\mathrm {G}\)), and “until” (\(\mathrm {U}\)).
Then, we extend the treatment to some more sophisticated temporal patterns.
Deriving state effectivity for standard temporal operators
Definition 32
(From path to state effectivity) Let \({\mathcal {X}}\in \mathcal {P}({St^\omega })\) be a set of paths. The following sets of states can be derived from \({\mathcal {X}}\): \({\mathcal {X}}^\mathrm {X}= \{X\subseteq St\mid \forall \lambda \in {\mathcal {X}}\,.\,\lambda [1]\in X\}\); \({\mathcal {X}}^\mathrm {F}= \{X\subseteq St\mid \forall \lambda \in {\mathcal {X}}\,\exists i\,.\,\lambda [i]\in X\}\); \({\mathcal {X}}^\mathrm {G}= \{X\subseteq St\mid \forall \lambda \in {\mathcal {X}}\,\forall i\,.\,\lambda [i]\in X\}\); \({\mathcal {X}}^\mathrm {U}= \{(X,Y)\mid \forall \lambda \in {\mathcal {X}}\,\exists i\,.\,\lambda [i]\in Y \wedge \forall j<i\,.\,\lambda [j]\in X\}\).
Let \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\) be a path effectivity function, and let \(T\in \{\mathrm {X},\mathrm {F},\mathrm {G},\mathrm {U}\}\) be a temporal operator. We “distill” the state effectivity function for \(T\) as follows (for every \(C\subseteq {\mathbb {A}\mathrm {gt}}\) and \(q\in St\)): \(\mathcal {E}^{T}_q(C)\ =\ \{({\mathcal {X}}(q))^{T}\mid {\mathcal {X}}\in \mathcal {E}(C)\}\), where \({\mathcal {X}}(q)\) denotes the paths in \({\mathcal {X}}\) that start in \(q\).
Note that \(\mathcal {E}^\mathrm {X}\) is exactly the state projection of \(\mathcal {E}\) (\(\mathcal {E}^\mathrm {X}=\mathcal {E}^{S}\)), cf. Definition 16. We also observe that the definitions of \({\mathcal {X}}^\mathrm {X}\), \({\mathcal {X}}^\mathrm {F}\), \({\mathcal {X}}^\mathrm {G}\), and \({\mathcal {X}}^\mathrm {U}\) are straightforward, and closely follow the semantic definitions of the corresponding temporal operators (\(\mathrm {X}\), \(\mathrm {F}\), \(\mathrm {G}\), and \(\mathrm {U}\)).
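The four distilled patterns can be prototyped on finitely represented paths. Below is a sketch in Python under a hypothetical lasso encoding: a pair `(prefix, loop)` stands for the ultimately periodic path prefix·loop·loop·…. The paper's paths are arbitrary ω-sequences, so this is only a finite approximation for experimentation, not part of the formalism.

```python
def states_on(lasso):
    """All states that occur on the path."""
    prefix, loop = lasso
    return set(prefix) | set(loop)

def holds_X(lasso, X):
    """lambda[1] in X: the path's second state lies in X."""
    prefix, loop = lasso
    path = list(prefix) + list(loop) * 2  # long enough to expose index 1
    return path[1] in X

def holds_F(lasso, X):
    """Some state on the path lies in X (eventually X)."""
    return bool(states_on(lasso) & X)

def holds_G(lasso, X):
    """Every state on the path lies in X (always X)."""
    return states_on(lasso) <= X

def holds_U(lasso, X, Y):
    """X until Y: some state lies in Y, and X holds at every earlier point."""
    prefix, loop = lasso
    path = list(prefix) + list(loop) * 2  # a Y-witness, if any, shows up here
    for s in path:
        if s in Y:
            return True
        if s not in X:
            return False
    return False

def distilled(paths, holds, *args):
    """A target belongs to the distilled family iff every path satisfies it."""
    return all(holds(lam, *args) for lam in paths)
```

A set of paths `paths` then yields, e.g., the distilled "always" family as all `X` with `distilled(paths, holds_G, X)`.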
Example 7
Consider path effectivity of the grand coalition in the model of aggressive vs. conservative play, cf. Example 5. For effectivity with perfect recall (\(\mathcal {E}^\mathfrak {FulMem} _M\)), we get the following (for \(q_i=q_0,q_1,q_2\)):
Moreover, \((\mathcal {E}^\mathfrak {NoMem} _M)^T\) is the same as \((\mathcal {E}^\mathfrak {FulMem} _M)^T\) for every \(T\in \{\mathrm {X},\mathrm {F},\mathrm {G},\mathrm {U}\}\).
The following proposition shows that Definition 32 provides an alternative characterization of standard state effectivity functions from Sect. 3.
Proposition 15
Let \(M\) be a concurrent game model with its underlying state effectivity model \(SEM(M) = ({\mathbb {A}\mathrm {gt}},St,\mathbf {E},\mathbf {G},\mathbf {U})\). Then, for every state \(q\in St\):

1.
\(\mathbf {E}_q\ =\ (\mathcal {E}_M^\mathfrak {NoMem})^\mathrm {X}(q)\ =\ (\mathcal {E}_M^\mathfrak {FulMem})^\mathrm {X}(q)\),

2.
\(\mathbf {G}_q\ =\ (\mathcal {E}_M^\mathfrak {NoMem})^\mathrm {G}(q)\ =\ (\mathcal {E}_M^\mathfrak {FulMem})^\mathrm {G}(q)\),

3.
\(\mathbf {U}_q\ =\ (\mathcal {E}_M^\mathfrak {NoMem})^\mathrm {U}(q)\ =\ (\mathcal {E}_M^\mathfrak {FulMem})^\mathrm {U}(q)\).
Proof

1.
Straightforward.

2.
First, we prove that \(X\in \mathbf {G}_q\) iff \(X\in (\mathcal {E}_M^\mathfrak {FulMem})^\mathrm {G}(q)\). Observe that \(\mathbf {G}_q\ =\ \mathbf {E}_q^{\mathbf {[*]}}\ =\ \bigcap \limits _{k=0}^{\infty } \mathbf {E}_q^{[k]}\ =\ \{X\subseteq St\mid \forall k\;.\;X\in \mathbf {E}_q^{[k]} \}\). Thus, \(X\in \mathbf {G}_q\) iff for all \(k\) there exists a mapping \(f(h) = Y_h\) such that: (i) \(f\) maps sequences of states \(h\) such that \(h[0]=q\) and \(|h|\le k\), to subsets of states \(Y_h\in \mathbf {E}_{last(h)}\); (ii) for every \(h\) with \(h[0]=q\) and \(|h|\le k\), if \(h[i]\in f(h[0..i-1])\) for all \(i=1,\dots ,k\) then \(f(h)\subseteq X\). But then, \(f\) specifies a perfect recall strategy in \(M\) such that the paths in \(out(q,f)\) contain only states in \(X\), which is equivalent to \(X\in (\mathcal {E}_M^\mathfrak {FulMem})^\mathrm {G}(q)\).
Secondly, \((\mathcal {E}_M^\mathfrak {NoMem})^\mathrm {G}(q)\ =\ (\mathcal {E}_M^\mathfrak {FulMem})^\mathrm {G}(q)\) follows from the fact that the perfect recall and memoryless semantics of \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {G}\varphi \) coincide [6, 33]. Take any \(X\subseteq St\). Let \(M_X\) be model \(M\) with the valuation of propositions extended to an additional atomic proposition \(\mathsf {{p}}\) such that \(\mathsf {{p}}^{M_X} = X\) (i.e., \(\mathsf {{p}}\) holds exactly in the states from \(X\)). By [6, 33], we have that \(M_X,q\) satisfies \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {G}\mathsf {{p}}\) under the memoryless semantics iff it satisfies \(\langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {G}\mathsf {{p}}\) under the perfect recall semantics. Thus, \(\mathsf {{p}}^{M_X} \in (\mathcal {E}_{M_X}^\mathfrak {NoMem})^\mathrm {G}(q)\) iff \(\mathsf {{p}}^{M_X} \in (\mathcal {E}_{M_X}^\mathfrak {FulMem})^\mathrm {G}(q)\). Note that \(M\) and \(M_X\) differ only in their valuations of propositions; hence, they must induce the same effectivity functions. In consequence, we get that \(X \in (\mathcal {E}_M^\mathfrak {NoMem})^\mathrm {G}(q)\) iff \(X \in (\mathcal {E}_M^\mathfrak {FulMem})^\mathrm {G}(q)\).

3.
Analogous.
The following is an immediate consequence:
Corollary 2
For every \(\mathfrak {NoMem}\)- or \(\mathfrak {FulMem}\)-realizable path effectivity function \(\mathcal {E}\), valuation of propositions \(V\), state \(q\), and ATL formula \(\varphi \), we have:

1.
\((\mathcal {E},V),q\models \mathsf {{p}}\) iff \(q\in V(\mathsf {{p}})\), for any atomic proposition \(\mathsf {{p}}\).

2.
\((\mathcal {E},V),q\models \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {X}\varphi \) iff \(\varphi ^{(\mathcal {E},V)} \in \mathcal {E}^\mathrm {X}_q(C)\), where \(\varphi ^{(\mathcal {E},V)} = \{q'\in St\mid (\mathcal {E},V),q'\models \varphi \}\).

3.
\((\mathcal {E},V),q\models \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {F}\varphi \) iff \(\varphi ^{(\mathcal {E},V)} \in \mathcal {E}^\mathrm {F}_q(C)\).

4.
\((\mathcal {E},V),q\models \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {G}\varphi \) iff \(\varphi ^{(\mathcal {E},V)} \in \mathcal {E}^\mathrm {G}_q(C)\).

5.
\((\mathcal {E},V),q\models \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\varphi \,\mathrm {U}\,\psi \) iff \((\varphi ^{(\mathcal {E},V)},\psi ^{(\mathcal {E},V)}) \in \mathcal {E}^\mathrm {U}_q(C)\).
Obtaining state effectivity for other temporal patterns
Proposition 15 and Corollary 2 show that path effectivity functions for concurrent game models are at least as informative as state effectivity functions. Below, we show that the template from Definition 32 can be applied to obtain state effectivity functions that correspond to many other temporal patterns.
Definition 33
(From path to state effectivity II) For \({\mathcal {X}}\in \mathcal {P}({St^\omega })\), we define: \({\mathcal {X}}^{\mathrm {F}\mathrm {G}} = \{X\subseteq St\mid \forall \lambda \in {\mathcal {X}}\,\exists i\,\forall j\ge i\,.\,\lambda [j]\in X\}\); \({\mathcal {X}}^{\mathrm {G}\mathrm {F}} = \{X\subseteq St\mid \forall \lambda \in {\mathcal {X}}\,\forall i\,\exists j\ge i\,.\,\lambda [j]\in X\}\); \({\mathcal {X}}^{\mathrm {F}+} = \{X\subseteq St\mid \forall \lambda \in {\mathcal {X}}\,\exists i\,.\,\lambda [i]\in X\wedge \lambda [i+1]\in X\}\);
and, exactly like in Definition 32: \(\mathcal {E}^{T}_q(C)\ =\ \{({\mathcal {X}}(q))^{T}\mid {\mathcal {X}}\in \mathcal {E}(C)\}\)
for every \(T\in \{\mathrm {F}\mathrm {G},\mathrm {G}\mathrm {F},\mathrm {F}+\}\), \(C\subseteq {\mathbb {A}\mathrm {gt}}\), and \(q\in St\).
\({\mathcal {X}}^{\mathrm {F}\mathrm {G}}\) collects sets of states \(X\) such that every path from \({\mathcal {X}}\) stays in \(X\) from some moment on. \({\mathcal {X}}^{\mathrm {G}\mathrm {F}}\) contains sets \(X\) such that every path from \({\mathcal {X}}\) visits \(X\) infinitely often. \({\mathcal {X}}^{\mathrm {F}+}\) collects sets \(X\) such that every path from \({\mathcal {X}}\) stays in \(X\) for at least two moments in a row. The following is straightforward:
Proposition 16
For every \(\mathfrak {NoMem}\)- or \(\mathfrak {FulMem}\)-realizable path effectivity function \(\mathcal {E}\), valuation of propositions \(V\), state \(q\), and ATL formula \(\varphi \), we have:

1.
\((\mathcal {E},V),q\models \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {F}\mathrm {G}\varphi \) iff \(\varphi ^{(\mathcal {E},V)} \in \mathcal {E}^{\mathrm {F}\mathrm {G}}_q(C)\).

2.
\((\mathcal {E},V),q\models \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {G}\mathrm {F}\varphi \) iff \(\varphi ^{(\mathcal {E},V)} \in \mathcal {E}^{\mathrm {G}\mathrm {F}}_q(C)\).

3.
\((\mathcal {E},V),q\models \langle \!\langle {C}\rangle \!\rangle _{_{\! }}\mathrm {F}(\varphi \wedge \mathrm {X}\varphi )\) iff \(\varphi ^{(\mathcal {E},V)} \in \mathcal {E}^{\mathrm {F}+}_q(C)\).
Note that none of the formulae is expressible in ATL [14]. Thus, \(\mathcal {E}^{\mathrm {F}\mathrm {G}}\), \(\mathcal {E}^{\mathrm {G}\mathrm {F}}\), and \(\mathcal {E}^{\mathrm {F}+}\) cannot be obtained by a simple combination of \(\mathcal {E}^\mathrm {X}\), \(\mathcal {E}^\mathrm {G}\), and \(\mathcal {E}^\mathrm {U}\).
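The three additional patterns can be checked mechanically under the same hypothetical lasso encoding `(prefix, loop)` used for the sketches above; for ultimately periodic paths, FG and GF depend only on the loop states.

```python
def holds_FG(lasso, X):
    """From some moment on, the path stays in X: all loop states lie in X."""
    _, loop = lasso
    return set(loop) <= X

def holds_GF(lasso, X):
    """The path visits X infinitely often: some loop state lies in X."""
    _, loop = lasso
    return bool(set(loop) & X)

def holds_Fplus(lasso, X):
    """The path stays in X for at least two moments in a row."""
    prefix, loop = lasso
    # prefix + loop repeated twice covers every adjacent pair of positions,
    # including the wrap-around from the end of the loop back to its start
    path = list(prefix) + list(loop) * 2
    return any(a in X and b in X for a, b in zip(path, path[1:]))
```

This is a sketch only; it presupposes the finite lasso abstraction, which the paper's ω-path framework does not.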
Stit models vs. path effectivity
In this paper, we take coalitional effectivity models as the starting point, and show how they can be used to model longterm interaction. So, the inspiration comes from models that have been used in social choice theory for over 30 years. A major part of the paper is based on the observation that, in multistep scenarios, the outcome of the game can be seen as the complete sequence of states (or worlds) that can possibly happen. The mathematical structure that we obtain is surprisingly similar to models of “seeing to it that”, which have been studied in philosophy since the late 1980s. In the subsequent paragraphs, we show that stit frames can be seen as a subclass of path effectivity functions. However, the subclass is too general and too restricted at the same time. On the one hand, it allows for effectivity patterns that cannot be implemented in simple multistep games based on concurrent game structures (cf. Sect. 6.2.2). On the other hand, it does not allow for modeling some natural patterns of coalitional effectivity (Sect. 6.2.3).
Alternatively, stit frames can be seen as a more complicated way of defining state effectivity functions. We look closer at this interpretation in Sect. 6.2.4.
We point out that the results presented in Sects. 6.2.2 and 6.2.3 are straightforward applications of the characterizations proposed in Sect. 4. In other words, our results on path effectivity directly expose some hitherto unknown (and important!) limitations of models that have been studied for 25 years. We believe that this makes a good case for the explanatory and analytical value of the structures and characterizations that we propose.
Remark 3
Our analysis in this section focuses on one of the existing semantics of stit, namely the “classical” semantics based on full trees [8, 11, 20, 22, 23]. Other approaches include the semantics based on the concept of bundled tree [13], a Kripkestyle semantics based on the concept of Ockhamist frame [24], as well as the semantics based on the concept of Kamp frame [10]. Applying our results to the other semantics of stit is an interesting issue, but we leave it for another study.
Models of “seeing to it that”
Models of “seeing to it that” have been defined in [7], taking branching time structures as the starting point, and enhancing them to give account of how agents can influence the dynamics of the system. For a broader discussion and extensions of stit, we refer the reader to [8, 11, 20, 22, 23].
Formally, a stit frame is a tuple \((St,<,{\mathbb {A}\mathrm {gt}},Choice)\) where:

\((St,<)\) is a branchingtime structure, i.e., a transition structure that forms a tree;

\({\mathbb {A}\mathrm {gt}}\) is a finite set of agents;

\(Choice : {\mathbb {A}\mathrm {gt}}\times St\rightarrow \mathcal {P}({\mathcal {P}({\mathsf {Paths}})})\), where \(\mathsf {Paths}\) is the set of all maximal linearly ordered sequences of points in \((St,<)\),^{Footnote 6} such that for every \(q \in St\) and \(\mathsf{a } \in {\mathbb {A}\mathrm {gt}}\), \(Choice(\mathsf a ,q)\) is a partition of the set \(\mathsf {Paths}(q)\) of all paths passing through \(q\) into a family of nonempty sets. That partition represents the available choices for \(\mathsf a \) at \(q\) (as in alternating transition systems [5]).
A stit model extends a stit frame with a valuation of atomic propositions into sets of paths.
Note that, since \((St,<)\) is a tree, we can see the elements of \(St\) as both states and (finite) histories of interaction. To avoid confusion, they will be referred to in the remainder of this section as positions. Moreover, for stit models, the concepts of memoryless and perfect recall play coincide.
Collective choices—when considered—are usually assumed to independently influence the resulting evolution of the system. Thus, the outcome of a collective choice can be seen as the intersection of the individual choices that it combines. This can be formally modeled by extending the function \(Choice\) to type \(\mathcal {P}({{\mathbb {A}\mathrm {gt}}})\times St\rightarrow \mathcal {P}({\mathcal {P}({\mathsf {Paths}})})\) as follows. First, given the function \(Choice\), for each \(q \in St\) a choice selection function at \(q\) is a function \(s_q: {\mathbb {A}\mathrm {gt}}\rightarrow \mathcal {P}({\mathsf {Paths}(q)})\), such that \(s_q(\mathsf{a }) \in Choice(\mathsf a ,q)\) for each \(\mathsf{a } \in {\mathbb {A}\mathrm {gt}}\). The set of all selection functions \(s_q\), for a given \(q\), is denoted by \(Select_q\). Now, for any \(C\subseteq {\mathbb {A}\mathrm {gt}}, q\in St\) we define \(Choice(C,q)\ =\ \{\bigcap _{\mathsf{a }\in C} s_q(\mathsf{a })\mid s_q\in Select_q\}\).
It is easy to see that \(Choice(C,q)\) forms a partition of \(\mathsf {Paths}(q)\) refining each of the individual partitions \(Choice(\mathsf a ,q)\), for \(\mathsf a \in C\) and representing the possible collective choices of \(C\).
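The construction of \(Choice(C,q)\) from the individual partitions can be sketched as follows; the cells here are frozensets of abstract path identifiers, and the data layout is illustrative rather than the paper's.

```python
from itertools import product

def coalition_choice(choice_at_q, coalition, paths_q):
    """One cell per choice selection function: intersect one choice per member.
    choice_at_q[a] is agent a's partition of the paths through q."""
    if not coalition:
        return {frozenset(paths_q)}  # the empty coalition cannot constrain anything
    cells = set()
    for selection in product(*(choice_at_q[a] for a in coalition)):
        cell = frozenset.intersection(*selection)
        if cell:  # under independence of agents' choices these are nonempty
            cells.add(cell)
    return cells
```

On a two-agent example where the individual partitions cross-cut each other, the coalition's cells refine both partitions, as Proposition-style sanity checks confirm.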
The following condition of Independence of agents’ choices must hold for \(Choice\): for every \(q\in St\) and every selection function \(s_q\in Select_q\), \(\bigcap _{\mathsf{a }\in {\mathbb {A}\mathrm {gt}}} s_q(\mathsf{a })\ne \emptyset \).
An additional assumption that is often adopted, called no choice between undivided histories, will be discussed further, too.
We observe that stit models come very close to coalitional path effectivity models. In fact, function \(Choice\) looks pretty much like a path effectivity function. Whether it does represent path effectivity, however, depends on how it is interpreted. The informal explanation in most stit literature is that a choice \(X\in Choice(a,q)\) constrains the set of possible paths to the ones consistent with \(X\). In that case, function \(Choice\) clearly represents path effectivity, and the differences from our approach are minor. We look closer at this interpretation in Sects. 6.2.2 and 6.2.3.
On the other hand, some texts in the existing literature suggest that \(Choice\) is but a more involved representation of state effectivity (cf. e.g. [11, 22]). We discuss the latter interpretation in Sect. 6.2.4.
Stit models are too general
Assuming that \(X\in Choice(C,q)\) simply collects the paths that may result from agents \(C\) choosing \(X\) at position \(q\), we get that \(Choice(C,q)\) describes the effectivity of \(C\) in \(q\) in the following manner.
Definition 34
Let \(\mathcal {S}= (St,<,{\mathbb {A}\mathrm {gt}},Choice)\) be a stit frame. The path effectivity function of \(\mathcal {S}\), denoted \(\mathcal {E}(\mathcal {S})\), is defined for every \(C\ne \emptyset \) as \(\mathcal {E}(\mathcal {S})(C)\ =\ \{\mathcal {Y}\subseteq {\mathsf {Paths}}\mid \bigcup _{q\in St} f(q)\subseteq \mathcal {Y}\ \text {for some } f \text { such that } f(q)\in Choice(C,q) \text { for every } q\in St\}\).
Additionally, we define \(\mathcal {E}(\mathcal {S})(\emptyset ) = \{{\mathsf {Paths}}\}\).
That is, \(\mathcal {E}(\mathcal {S})(C)\) is the outcome-monotone closure of the set of all global combinations of choices from \(Choice(C,\cdot )\).
Proposition 17
For every stit frame \(\mathcal {S}\), we have that \(\mathcal {E}(\mathcal {S})\) satisfies P-Safety, P-Liveness, P-Outcome Monotonicity, P-Superadditivity, and P-\(\emptyset \)-Minimality. It does not have to satisfy P-Determinacy.
Proof
Straightforward. \(\square \)
Corollary 3
Path effectivity in stit frames is playable, but not necessarily truly playable.
Thus, path effectivity in stit frames satisfies most, though not all, general playability conditions. More importantly, it does not have to satisfy the structural conditions that make effectivity patterns implementable in natural multistep games. We focus on realizability under perfect recall, since realizability in memoryless strategies can be seen as its special case.
Proposition 18
\(\mathcal {E}(\mathcal {S})\) is generally not history-transition closed.
Proof (sketch)
Let us construct a stit frame \(\mathcal {S}\) as follows. Take an arbitrary nontrivial stit frame \((St,<,{\mathbb {A}\mathrm {gt}},Choice)\) and replace its \(Choice\) function with \(Choice'\) such that \(Choice'(a,q) = \{X\in Choice(a,q) \mid X\text { is not limit-closed}\}\) for every \(a\in {\mathbb {A}\mathrm {gt}}, q\in St\).
Suppose now that \(\mathcal {E}(\mathcal {S})\) is history-transition closed. By Proposition 11 and Lemma 2, it must include choices that are limit-closed, which is not the case. \(\square \)
Corollary 4
There are stit frames whose path effectivity cannot be realized in concurrent game structures.
Stit models are too restricted
On the one hand, stit frames describe effectivity patterns that can be not truly playable, and realizable in neither \(\mathfrak {NoMem} \) nor \(\mathfrak {FulMem} \) sets of strategies. On the other hand, the way they construct coalitional effectivity allows only for strictly additive aggregation of abilities. In other words, no synergy between members of a coalition can be modeled in a stit frame.
Proposition 19
(P-Additivity) For every stit frame \(\mathcal {S}\) and all \(C,D\subseteq {\mathbb {A}\mathrm {gt}}\) with \(C \cap D = \emptyset \), we have:

1.
if \({\mathcal {X}}\in \mathcal {E}(\mathcal {S})(C)\) and \(\mathcal {Y}\in \mathcal {E}(\mathcal {S})(D)\) then \({\mathcal {X}}\cap \mathcal {Y}\in \mathcal {E}(\mathcal {S})(C \cup D)\);

2.
if \(\mathcal {Z}\in \mathcal {E}(\mathcal {S})(C \cup D)\) then there exist \({\mathcal {X}}\in \mathcal {E}(\mathcal {S})(C)\) and \(\mathcal {Y}\in \mathcal {E}(\mathcal {S})(D)\) such that \(\mathcal {Z}= {\mathcal {X}}\cap \mathcal {Y}\).
Proof
Follows by construction of \(Choice(C,q)\). \(\square \)
Since P-Additivity is strictly stronger than P-Superadditivity, by Theorems 4 and 5 we get the following:
Corollary 5
There are concurrent game structures generating path effectivity functions that cannot be obtained in stit frames.
Stit models as representations of state effectivity
In some works (cf., e.g., [11, 22]), it is assumed that if two paths \(\lambda _1,\lambda _2\) passing through position \(q\) share the same successor of \(q\) then every choice at \(q\) must either include both paths or none of them. The assumption is sometimes referred to as “no choice between undivided histories” (NCBUH). Paraphrasing Horty’s explanation, since the branching between \(\lambda _1\) and \(\lambda _2\) is not happening yet, the uncertainty which path will occur can only be resolved in the future. Formally, this amounts to the following requirement:

For every \(q,q'\in St\) such that \(q < q'\), all paths in \(\mathsf {Paths}^{q'}\) belong to the same choices of all agents at \(q\), i.e., for every \(\lambda , \lambda ' \in \mathsf {Paths}^{q'}\), for each \(X \in Choice({\mathbb {A}\mathrm {gt}},q)\), either \(\lambda , \lambda ' \in X\) or \(\lambda , \lambda ' \notin X\).

[11] assumes additionally that the choice function is deterministic in the following sense: for each \(q\in St\) and each \(X\in Choice({\mathbb {A}\mathrm {gt}},q)\) there exists \(q'\) such that \(X = \mathsf {Paths}^{q'}\).
Under the NCBUH assumption (with or without determinism), \(Choice(a,q)\) can be seen as collecting sets of successor states that agent \(a\) can enforce in \(q\). In fact, under this assumption, stit frames become just a more complicated way of representing tree-like alternating transition systems [5, 17]. Agents’ abilities in NCBUH stit frames arise from available strategies, which are called selection functions in the stit literature.^{Footnote 7}
Formally, function Choice can be transformed into a state effectivity function in the following way:
Definition 35
Let \(\mathcal {S}= (St,<,{\mathbb {A}\mathrm {gt}},Choice)\) be a stit frame satisfying the above conditions. The state effectivity function of \(\mathcal {S}\) is defined as \(E(\mathcal {S})(C,q)\ =\ \{\,\{succ_q(\lambda )\mid \lambda \in X\}\mid X\in Choice(C,q)\,\}\), where \(succ_q(\lambda )\) denotes the immediate successor of \(q\) on \(\lambda \).
That is, we take all the choices \(X\in Choice(C,q)\), and for each \(X\) collect the immediate successors of \(q\) on paths in \(X\).
Under this interpretation, stit models are just a more complicated way of representing onestep effectivity. The role of strategies (a.k.a. selection functions) is to unfold state effectivity into path effectivity, similarly to Definitions 17, 19, and 26.
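The extraction of state effectivity from an NCBUH choice function can be sketched as follows. The encoding is an assumption made for illustration: a path through position \(q\) is a tuple of positions starting at \(q\), so its immediate successor is the tuple's second component.

```python
def state_effectivity(choice_cells):
    """For each choice cell X, collect the immediate successors of q
    on the paths in X (cf. the construction described above)."""
    return {frozenset(path[1] for path in cell) for cell in choice_cells}

def respects_ncbuh(choice_cells):
    """'No choice between undivided histories': two paths that share the
    next position never fall into different cells."""
    cell_of = {p: i for i, cell in enumerate(choice_cells) for p in cell}
    succ_cell = {}
    for p, i in cell_of.items():
        if succ_cell.setdefault(p[1], i) != i:
            return False
    return True
```

Splitting paths with a common successor over two cells violates NCBUH, which the second helper detects.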
Proposition 20
For every NCBUH stit frame \(\mathcal {S}\), we have that \(E(\mathcal {S})\) satisfies Safety, Liveness, Outcome Monotonicity, Superadditivity, and \({\mathbb {A}\mathrm {gt}}\)-Maximality. Determinacy is satisfied for deterministic frames, but not in general.
Proof
Straightforward. \(\square \)
Moreover, stit frames do not enable modeling synergy within coalitions.
Proposition 21
(Additivity) For every \(q\in St\) and all \(C,D\subseteq {\mathbb {A}\mathrm {gt}}\) with \(C \cap D = \emptyset \), we have additionally that:

if \(Z \in E(\mathcal {S})(C \cup D,q)\) then there exist \(X\in E(\mathcal {S})(C,q)\) and \(Y\in E(\mathcal {S})(D,q)\) such that \(Z= X\cap Y\).
Proof
Follows by construction of \(Choice(C,q)\). \(\square \)
In consequence, not every stit frame represents state effectivity that can be implemented with a strategic game (because effectivity in strategic games must satisfy Determinacy). Moreover, not every state effectivity function implementable in strategic games can be represented by a stit frame (because stit frames do not allow for nonadditive coalitional effectivity patterns).
Beyond perfect information
So far, we have only been concerned with games where every player knows the global state of the system at any moment. Modeling and reasoning about imperfect information scenarios is more sophisticated. First, not all strategies are executable, even in the perfect recall case. This is because an agent cannot specify that she will execute two different actions in situations that look the same to her. Therefore, only uniform strategies are admissible here (for the definition of uniformity, see below). Moreover, it is often important to find a uniform strategy that succeeds in all indistinguishable states, rather than to settle for the existence of a successful strategy for the current global state of the system only.
In this section, we briefly sketch how path effectivity models can be used to give an account of the powers of coalitions under imperfect information. This is by no means intended as an exhaustive analysis. Rather, we point out that the modeling power of path effectivity can be applied to more sophisticated scenarios than those assuming complete knowledge.
Reasoning about imperfect information games
We take Schobbens’ \(\hbox {ATL}_{ir}\) and \(\hbox {ATL}_{iR}\) [33] as the “core”, minimal ATL-based logics for strategic ability under imperfect information. The logics include the same formulae as ATL, only the cooperation modalities are presented with subscripts. The operator \(\langle \!\langle {C}\rangle \!\rangle _{_{\! ir }}\) indicates that we reason about agents with imperfect information and imperfect recall, while \(\langle \!\langle {C}\rangle \!\rangle _{_{\! iR }}\) indicates that agents have imperfect information and perfect Recall. Models of \(\hbox {ATL}_{ir}\) and \(\hbox {ATL}_{iR}\) are imperfect information concurrent game models (iCGM), which can be seen as concurrent game models augmented with a family of indistinguishability relations \(\sim _a \subseteq St\times St\), one per agent \(a\in {\mathbb {A}\mathrm {gt}}\). The relations describe agents’ uncertainty: \(q\sim _a q'\) means that, while the system is in state \(q\), agent \(a\) considers it possible that it is in \(q'\). Each \(\sim _a\) is an equivalence relation. It is also required that agents have the same choices in indistinguishable states: if \(q\sim _a q'\) then \(d(a,q)=d(a,q')\). Additionally, for two histories \(h,h'\), we define \(h\approx _a h'\) iff \(|h|=|h'|\) and for every \(i\) it holds that \(h[i]\sim _a h'[i]\).
A uniform memoryless strategy for agent \(a\) is a function \(s_a : St\rightarrow Act\), such that: (1) \(s_a(q)\in d(a,q)\); (2) if \(q\sim _a q'\) then \(s_a(q)=s_a(q')\). A uniform perfect recall strategy for agent \(a\) is a function \(s_a : St^+\rightarrow Act\), such that: (1) \(s_a(h)\in d(a,last(h))\); (2) if \(h\approx _a h'\) then \(s_a(h)=s_a(h')\). Again a collective strategy is uniform if it contains only uniform individual strategies. Function \(out(q,s_C)\) returns the set of all paths that may result from agents \(C\) executing strategy \(s_C\) from state \(q\) onward. The semantics of cooperation modalities in \(\hbox {ATL}_{ir}^*\) and \(\hbox {ATL}_{iR}^*\) is defined as follows:

\(M,q\models \langle \!\langle {C}\rangle \!\rangle _{_{\! ir }}\gamma \) iff there exists a uniform memoryless strategy \(s_C\) such that, for each \(a\in C\), each \(q'\) such that \(q\sim _a q'\), and each path \(\lambda \in out(q',s_C)\), we have \(M,\lambda \models \gamma \).

\(M,q\models \langle \!\langle {C}\rangle \!\rangle _{_{\! iR }}\gamma \) iff there exists a uniform perfect recall strategy \(s_C\) such that, for each \(a\in C\), each \(q'\) such that \(q\sim _a q'\), and each path \(\lambda \in out(q',s_C)\), we have \(M,\lambda \models \gamma \).
The semantics of path formulae \(\gamma \) is defined exactly like in standard ATL*, see Sect. 2.3. The same applies to Boolean combinations of state formulae.
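The two uniformity conditions can be sketched directly. The data layout is an assumption for illustration: `sim[a]` is agent `a`'s indistinguishability relation given as a set of unordered state pairs, with reflexivity left implicit.

```python
def same(a, q, p, sim):
    """q ~_a p, treating the stored pairs as symmetric and ~_a as reflexive."""
    return q == p or (q, p) in sim[a] or (p, q) in sim[a]

def is_uniform_memoryless(strategy, a, sim):
    """s_a(q) = s_a(q') whenever q ~_a q'."""
    return all(strategy[q] == strategy[p]
               for q in strategy for p in strategy if same(a, q, p, sim))

def hist_indist(a, h1, h2, sim):
    """h ~~_a h': equal length and pointwise ~_a-related states."""
    return len(h1) == len(h2) and all(same(a, x, y, sim) for x, y in zip(h1, h2))
```

A uniform perfect recall strategy would then be required to agree on any two histories related by `hist_indist`.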
Example 8
Consider the model of aggressive vs. conservative play from Example 1 with the following twist: now, each player can only perceive his own situation in the game, and not the position of the other player. Thus, player \(1\) cannot distinguish between states \(q_0,q_2\) while player \(2\) cannot discern states \(q_0,q_1\). The resulting iCGM is presented in Fig. 2.
Now, no agent can make sure anymore that the other one remains in a good position: \(M_2,q_0\not \models \langle \!\langle {1}\rangle \!\rangle _{_{\! ir }}\mathrm {G}\,\mathsf {{good_2}}\) and \(M_2,q_0\not \models \langle \!\langle {2}\rangle \!\rangle _{_{\! ir }}\mathrm {G}\,\mathsf {{good_1}}\). This is because player \(2\) in state \(q_0\) must take into account the possibility of being in state \(q_1\), for which he has no sure strategy of getting to \(\{{q_0,q_2}\}\). The situation of player \(1\) is analogous. It is not even the case that the respective players can achieve the property in a finite number of steps: \(M_2,q_0\not \models \langle \!\langle {1}\rangle \!\rangle _{_{\! ir }}\mathrm {F}\mathrm {G}\,\mathsf {{good_2}}\) and \(M_2,q_0\not \models \langle \!\langle {2}\rangle \!\rangle _{_{\! ir }}\mathrm {F}\mathrm {G}\,\mathsf {{good_1}}\).
On the other hand, if the players cooperate then they can still control the next state of the game, from every state \(q\). We leave checking this to an interested reader, and only remark that such a tight control of the successor state is rather incidental to the scenario, and does not hold in general for imperfect information models.
Path effectivity under imperfect information
First, we observe that the same type of effectivity functions can be used to model powers in imperfect information games: \(\mathcal {E}: \mathcal {P}({{\mathbb {A}\mathrm {gt}}}) \rightarrow \mathcal {P}({\mathcal {P}({St^\omega })})\). Moreover, the notion of \(\varSigma \)-effectivity does not change much. Given an iCGM \(M\) and a set \(\varSigma = \bigcup _{C\subseteq {\mathbb {A}\mathrm {gt}}}\varSigma _C\) of (uniform) coalitional strategies in \(M\), the \(\varSigma \)-effectivity function of \(M\) is still defined as \(\mathcal {E}^\varSigma _M(C) = \{\bigcup _{q\in St}out(q,s_C) \mid s_C\in \varSigma _C \}\). We refer to uniform strategies as \(\mathfrak {uFulMem} \) (for perfect recall) and \(\mathfrak {uNoMem} \) (for memoryless strategies).
Example 9
Let us “distill” the path effectivity of agent \(1\) alone in model \(M_2\) from Example 8. We get that \(\mathcal {E}_{M_2}^\mathfrak {uNoMem} (\{{1}\})\) is the outcome-monotone closure of \(\{{\mathcal {X}}_1,{\mathcal {X}}_2,{\mathcal {X}}_3,{\mathcal {X}}_4\}\), where:

\({\mathcal {X}}_1 = (q_0\cup q_1)^\omega \ \cup \ q_2^\omega \ \cup \ q_2^+q_1(q_0\cup q_1)^\omega \) corresponds to player \(1\)’s strategy of playing conservatively in every state,

\({\mathcal {X}}_2 = (q_0\cup q_1)^\omega \ \cup \ q_2^\omega \ \cup \ q_2^+q_0(q_0\cup q_1)^\omega \) corresponds to the strategy of playing conservatively in \(\{{q_0,q_1}\}\) and aggressively in \(q_2\),

\({\mathcal {X}}_3 = q_0^\omega \ \cup q_1^\omega \ \cup \ (q_0^+\cup q_1^+\cup \epsilon )q_2^+(q_0\cup q_2)^\omega \) corresponds to the strategy of playing aggressively in every state,

\({\mathcal {X}}_4 = q_0^\omega \ \cup q_1^\omega \ \cup \ (q_0^+\cup q_1^+\cup \epsilon )q_2^+(q_1\cup q_2)^\omega \) corresponds to the strategy of playing aggressively in \(\{{q_0,q_1}\}\) and conservatively in \(q_2\).
Semantics of \(\hbox {ATL}_{ir/R}^*\) based on path effectivity
The semantics of \(\hbox {ATL}_{ir/R}^*\), based on path effectivity functions, can be defined in a very similar way to the perfect information case (cf. Sect. 4.4). Let \([q]_C = \bigcup _{a\in C} \{q' \mid q\sim _a q'\}\) be the indistinguishability set of state \(q\) for coalition \(C\). We extend notation so that \({\mathcal {X}}(Q) = \bigcup _{q\in Q} {\mathcal {X}}(q)\) denotes all the paths in \({\mathcal {X}}\) starting from any state in \(Q\subseteq St\). Moreover, let \(\gamma ^M\) denote the set of paths in \(M\) satisfying \(\gamma \). Then:

\(M,q\models \langle \!\langle {C}\rangle \!\rangle _{_{\! ir }}\gamma \) iff there is \({\mathcal {X}}\in \mathcal {E}^\mathfrak {uNoMem} _M(C)\) such that \({\mathcal {X}}([q]_C) \subseteq \gamma ^M\).

\(M,q\models \langle \!\langle {C}\rangle \!\rangle _{_{\! iR }}\gamma \) iff there is \({\mathcal {X}}\in \mathcal {E}^\mathfrak {uFulMem} _M(C)\) such that \({\mathcal {X}}([q]_C) \subseteq \gamma ^M\).
That is, \(\langle \!\langle {C}\rangle \!\rangle _{_{\! ir/R }}\gamma \) holds iff \(C\) have a single choice satisfying \(\gamma \) on all outcome paths starting from states that look the same as \(q\).
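The clause above can be prototyped on finite data. In the sketch below, paths are abstract tuples whose first element is their start state, `gamma_paths` plays the role of \(\gamma ^M\), and `sim` uses the pair-based layout assumed earlier; all names are illustrative.

```python
def indist_set(q, coalition, sim):
    """[q]_C: states that some member of C cannot tell apart from q."""
    out = {q}
    for a in coalition:
        out |= {y for (x, y) in sim[a] if x == q}
        out |= {x for (x, y) in sim[a] if y == q}
    return out

def can_enforce(effectivity, coalition, q, sim, gamma_paths):
    """<<C>> gamma at q: some choice whose paths from [q]_C all satisfy gamma."""
    Q = indist_set(q, coalition, sim)
    return any({lam for lam in X if lam[0] in Q} <= gamma_paths
               for X in effectivity[coalition])
```

The check quantifies over a single choice that must work from every state the coalition confuses with \(q\), mirroring the text.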
Example 10
Choice \({\mathcal {X}}_1\) from Example 9 can be used to demonstrate that \(M_2,q_1\models \langle \!\langle {1}\rangle \!\rangle _{_{\! ir }}\mathrm {G}\,\mathsf {{good_2}}\), because \({\mathcal {X}}_1(\{{q_0,q_1}\}) = (q_0\cup q_1)^\omega \). On the other hand, \(M_2,q_2\not \models \langle \!\langle {1}\rangle \!\rangle _{_{\! ir }}\mathrm {G}\,\mathsf {{good_2}}\), because \({\mathcal {X}}_1(\{{q_2}\}) = q_2^\omega \ \cup \ q_2^+q_1(q_0\cup q_1)^\omega \) does not guarantee \(\mathrm {G}\,\mathsf {{good_2}}\) (and similarly for \({\mathcal {X}}_2\), \({\mathcal {X}}_3\), and \({\mathcal {X}}_4\)). Still, a more sophisticated ATL* property holds: \(M_2,q_2\models \langle \!\langle {1}\rangle \!\rangle _{_{\! ir }}(\mathrm {F}\mathrm {G}\,\mathsf {{good_1}}\vee \mathrm {F}\mathrm {G}\,\mathsf {{good_2}})\): the strategy behind \({\mathcal {X}}_1\) guarantees that, from some moment on, either player 1 or player 2 remains in a good position forever.
Properties of path effectivity under uncertainty: general playability
Section 6.3.2 demonstrated that path effectivity of agents and coalitions under imperfect information can be represented by functions of the same type as for perfect information games. Moreover, distilling the effectivity function from an iCGM proceeds in the same way as before. What possibly changes is the structural properties of effectivity functions that are induced by iCGMs. We recall the properties below, starting with the general playability conditions from Sect. 5.1.

P-Safety: \(\mathcal {E}(C)(q)\) is nonempty for every \(C\subseteq {\mathbb {A}\mathrm {gt}},q\in St\).

P-Liveness: \(\emptyset \notin \mathcal {E}(C)(q)\) for every \(C\subseteq {\mathbb {A}\mathrm {gt}},q\in St\).

P-Outcome Monotonicity: For every \(C\subseteq {\mathbb {A}\mathrm {gt}}\) the set \(\mathcal {E}(C)\) is upwards closed: if \({\mathcal {X}}\in \mathcal {E}(C)\) and \({\mathcal {X}}\subseteq \mathcal {Y}\subseteq \mathsf {Paths}\) then \(\mathcal {Y}\in \mathcal {E}(C)\).

P-Superadditivity: For every \(C,D\subseteq {\mathbb {A}\mathrm {gt}}\), if \(C \cap D = \emptyset \), \({\mathcal {X}}\in \mathcal {E}(C)\) and \(\mathcal {Y}\in \mathcal {E}(D)\), then \({\mathcal {X}}\cap \mathcal {Y}\in \mathcal {E}(C \cup D)\).

P-\(\emptyset \)-Minimality: \(\mathcal {E}(\emptyset )\) is the singleton \(\{\mathsf {Paths}\}\).

P-Determinacy: For every \(q\in St\), if \({\mathcal {X}}\in \mathcal {E}({\mathbb {A}\mathrm {gt}})\) then \(\{\lambda \} \in \mathcal {E}({\mathbb {A}\mathrm {gt}})(q)\) for some \(\lambda \in {\mathcal {X}}(q)\).
The following is straightforward, and we leave it for the interested reader to check:
Proposition 22
For every iCGM \(M\), the induced path effectivity functions for uniform memoryless strategies (\(\mathcal {E}^\mathfrak {uNoMem} _M\)) and for uniform perfect recall strategies (\(\mathcal {E}^\mathfrak {uFulMem} _M\)) satisfy P-Safety, P-Liveness, P-Outcome Monotonicity, P-Superadditivity, P-\(\emptyset \)-Minimality, and P-Determinacy.
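Several of the general playability conditions lend themselves to a mechanical sanity check on finite toy data. In the sketch below, `eff` maps frozenset coalitions to families of frozensets of abstract paths; outcome monotonicity is not checked, and the layout is an assumption for illustration only.

```python
def is_playable(eff, all_paths):
    """Partial check of the general playability conditions on a toy
    (global, finite) effectivity function."""
    if any(not family for family in eff.values()):             # P-Safety
        return False
    if any(frozenset() in family for family in eff.values()):  # P-Liveness
        return False
    if eff[frozenset()] != {frozenset(all_paths)}:             # P-emptyset-Minimality
        return False
    for C, fam_c in eff.items():                               # P-Superadditivity
        for D, fam_d in eff.items():
            if C & D:
                continue
            if any(x & y not in eff[C | D] for x in fam_c for y in fam_d):
                return False
    return True
```

Such a checker is useful for confirming that a hand-built example satisfies the conditions before reasoning about it.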
Properties of path effectivity under uncertainty: realizability in memoryless strategies
We observe first that the grounding condition holds for memoryless strategies.
Proposition 23
For every iCGM \(M\) we have that effectivity in memoryless strategies satisfies \(\mathfrak {NoMem}\)-Grounding.
Formally, for every \({\mathcal {X}}\in \mathcal {E}^\mathfrak {uNoMem} _M(C)\) there exists \(\mathcal {Y}\in \mathcal {E}^\mathfrak {uNoMem} _M(C)\) such that \(\mathcal {Y}\subseteq {\mathcal {X}}\) and \(\mathcal {Y}\) is state-transition closed (i.e., \((\mathcal {Y}^{S})^{P}= \mathcal {Y}\)).
Proof
Let \({\mathcal {X}}\in \mathcal {E}^\mathfrak {uNoMem} _M(C)\). Then, there must exist a memoryless uniform strategy \(s_C\) such that \(\bigcup _{q\in St}out(q,s_C) \subseteq {\mathcal {X}}\). By construction, \(\bigcup _{q\in St}out(q,s_C)\) is state-transition closed. \(\square \)
On the other hand, the convexity condition is no longer valid:
Proposition 24
There exists an iCGM \(M\) for which effectivity in memoryless strategies does not satisfy \(\mathfrak {NoMem}\)-Convexity. Formally, \(\mathcal {E}^\mathfrak {uNoMem} _M\) includes a family of state-transition closed global choices \(\{{\mathcal {X}}^{q} \in \mathcal {E}^\mathfrak {uNoMem} _M(C) \mid q \in St\}\) such that \(\big (\big (\bigcup _{q\in St}{\mathcal {X}}^{q}(q)\big )^{S}\big )^{P}\notin \mathcal {E}^\mathfrak {uNoMem} _M(C)\).
Proof
Consider \(\mathcal {E}^\mathfrak {uNoMem} _{M_2}(\{{1}\})\) from Example 9, and observe that \({\mathcal {X}}_1,{\mathcal {X}}_2,{\mathcal {X}}_3,{\mathcal {X}}_4\) are state-transition closed by construction. Take \(\mathcal {Y}= {\mathcal {X}}_1(\{{q_0,q_2}\}) \cup {\mathcal {X}}_3(q_1)\). That is, player \(1\) plays conservatively on paths starting from \(q_0\) or \(q_2\), and aggressively on paths starting from \(q_1\). The state projection of \(\mathcal {Y}\) is:

\(\mathcal {Y}^{S}(q_0) = \{q_0,q_1\}\),
\(\mathcal {Y}^{S}(q_1) = \{q_1,q_2\}\),
\(\mathcal {Y}^{S}(q_2) = \{q_1,q_2\}\).
Thus, \((\mathcal {Y}^{S})^{P}= (q_1\cup q_2)^\omega \ \cup \ q_0^\omega \ \cup \ q_0^+q_1(q_1\cup q_2)^\omega \). Now we show that \((\mathcal {Y}^{S})^{P}\) is an outcome-monotone extension of none of \({\mathcal {X}}_1,{\mathcal {X}}_2,{\mathcal {X}}_3,{\mathcal {X}}_4\). First, \((\mathcal {Y}^{S})^{P}\) subsumes neither \({\mathcal {X}}_1\) nor \({\mathcal {X}}_2\), because it does not subsume \((q_0\cup q_1)^\omega \). Secondly, it subsumes neither \({\mathcal {X}}_3\) nor \({\mathcal {X}}_4\), because it does not subsume \(q_2(q_0\cup q_2)^\omega \). Thus, \((\mathcal {Y}^{S})^{P}\notin \mathcal {E}^\mathfrak {uNoMem} _{M_2}(\{{1}\})\). \(\square \)
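The path language \((\mathcal {Y}^{S})^{P}\) computed above can be checked mechanically on finite prefixes. The sketch below (an illustrative verification, writing \(q_0,q_1,q_2\) simply as 0, 1, 2) takes the state projection from the proof, generates all length-\(n\) path prefixes it admits, and confirms that they coincide exactly with the prefixes of \((q_1\cup q_2)^\omega \cup q_0^\omega \cup q_0^+q_1(q_1\cup q_2)^\omega \):

```python
import re
from itertools import product

# The state projection Y^S from the proof, with q0, q1, q2 written as 0, 1, 2.
succ = {'0': {'0', '1'}, '1': {'1', '2'}, '2': {'1', '2'}}

def prefixes(n):
    """Length-n prefixes of the paths in (Y^S)^P."""
    return {''.join(w) for w in product('012', repeat=n)
            if all(w[i + 1] in succ[w[i]] for i in range(n - 1))}

# Prefix language of (q1|q2)^w  U  q0^w  U  q0^+ q1 (q1|q2)^w.
pattern = re.compile(r'0+|0+1[12]*|[12]+')

for n in range(1, 8):
    regular = {''.join(w) for w in product('012', repeat=n)
               if pattern.fullmatch(''.join(w))}
    assert prefixes(n) == regular
```

The check passes for every horizon tested: once the play leaves the \(q_0\)-loop it moves through \(q_1\) into \((q_1\cup q_2)^\omega \) and can never return, which is precisely the shape of the omega-regular expression in the proof.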
As a consequence, the \(\mathfrak {NoMem}\)-Representation Theorem from Sect. 5.2 no longer holds for imperfect information:
Corollary 6
There are iCGMs whose path effectivity functions in uniform memoryless strategies are not state-transition closed.
Properties of path effectivity under uncertainty: realizability in perfect recall strategies
Again, the grounding condition holds:
Proposition 25
For every iCGM \(M\), effectivity in perfect recall strategies satisfies \(\mathfrak {FulMem}\)-Grounding.
Formally, for every \({\mathcal {X}}\in \mathcal {E}^\mathfrak {uFulMem} _M(C)\) there exists \(\mathcal {Y}\in \mathcal {E}^\mathfrak {uFulMem} _M(C)\) such that \(\mathcal {Y}\subseteq {\mathcal {X}}\) and \(\mathcal {Y}\) is history-transition closed.
Proof
Analogous to Proposition 23. \(\square \)
On the other hand, the convexity condition is no longer valid:
Proposition 26
There exists an iCGM \(M\) for which effectivity in perfect recall strategies does not satisfy \(\mathfrak {FulMem}\)-Convexity. Formally, \(\mathcal {E}^\mathfrak {uFulMem} _M\) includes a family of history-transition closed choices \(\{{\mathcal {X}}^{h} \in \mathcal {E}^\mathfrak {uFulMem} _M(C) \mid h \in St^+ \}\) such that \(\big (\big (\bigcup _{h\in St^+}{\mathcal {X}}^{h}(h)\big )^{HS}\big )^{P}\notin \mathcal {E}^\mathfrak {uFulMem} _M(C)\).
Proof
Consider \(\mathcal {E}^\mathfrak {uFulMem} _{M_2}(\{{1}\})\) and observe that \(\mathcal {E}^\mathfrak {uNoMem} _{M_2}(\{{1}\}) \subseteq \mathcal {E}^\mathfrak {uFulMem} _{M_2}(\{{1}\})\), since all memoryless strategies are also perfect recall strategies. Moreover, \({\mathcal {X}}_1,{\mathcal {X}}_2,{\mathcal {X}}_3,{\mathcal {X}}_4\) are history-transition closed by construction. Again, take \(\mathcal {Y}= {\mathcal {X}}_1(\{{q_0,q_2}\}) \cup {\mathcal {X}}_3(q_1)\). The history-based state projection of \(\mathcal {Y}\) is:

\(\mathcal {Y}^{HS}(\dots q_0) = \{q_0,q_1\}\),
\(\mathcal {Y}^{HS}(\dots q_1) = \{q_1,q_2\}\),
\(\mathcal {Y}^{HS}(\dots q_2) = \{q_1,q_2\}\).
Thus, \((\mathcal {Y}^{HS})^{P}= (q_1\cup q_2)^\omega \cup q_0^\omega \cup q_0^+q_1(q_1\cup q_2)^\omega \). However, there is no \(\mathcal {Y}'\in \mathcal {E}^\mathfrak {uFulMem} _{M_2}(\{{1}\})\) that would subsume both \((q_1\cup q_2)^\omega \) and \(q_0q_1^\omega \). \(\square \)
As a consequence, the \(\mathfrak {FulMem}\)-Representation Theorem from Sect. 5.3 no longer holds for imperfect information:
Corollary 7
There are iCGMs whose path effectivity functions in uniform perfect recall strategies are not history-transition closed.
Summary We have obtained a partial characterization of path effectivity in multi-step games of imperfect information. The general playability conditions hold, as do the grounding conditions in both the memoryless and the perfect recall case. On the other hand, convexity fails for both types of uniform strategies. A complete characterization is outside the scope of this paper, and we leave a detailed study of sufficient realizability conditions under imperfect information for future research.
Conclusions
In this paper we have developed the idea of characterizing multi-player multi-step games in terms of which sets of outcomes (states or paths) coalitions can enforce by executing one or another collective strategy. These characterizations lead to the respective notions of state-based and path-based coalition effectivity models. We believe the characterizations to be both conceptually important and technically interesting, as they extract the core game-theoretic “essence” from game models. They also provide alternative semantics for logics of such games, most notably for the game logics ATL and ATL*.
We show how the new characterizations can be applied to gain insight into properties of the well-known stit models of agency. We also use path effectivity functions to highlight (and partially resolve) some technical issues arising in the semantics of ATL* for scenarios of incomplete and imperfect information. We would also like to point out that a better understanding of abstract realizability can lead to satisfiability-checking procedures and complete axiomatic characterizations for the variants of ATL for which such results have not yet been established, e.g., for ATL* as well as for all the variants of ATL/ATL* with imperfect information. We leave this final item for future work.
Notes
 1.
Such actions are also called ‘strategies’ in normal form games, but we reserve the term ‘strategy’ for a global conditional plan in a multi-step scenario.
 2.
Here we use the terms ‘agent’ and ‘player’ as synonyms, and we use the term ‘coalition’ to refer to a set of agents that may be pursuing a common objective, but without assuming any explicit contract or even coordination between them.
 3.
Here we adhere to the assumption that the available strategies of each member of a coalition are independent of the actual choices of the other members.
 4.
Note that, unlike in the case of state effectivity functions, where the determinacy constraint is only needed for infinite state games (cf. [19]), it becomes essential here, because even very simple 2-state structures can generate uncountably many paths.
 5.
Later we will call such sets of paths state-transition closed, cf. Definition 21.
 6.
In stit literature, such sequences are called histories, and their set is denoted by \(H\). We use the term paths here to be consistent with the terminology used throughout the paper.
 7.
Recall that on treelike structures memoryless and perfect recall strategies coincide.
References
 1.
Abdou, J., & Keiding, H. (1991). Effectivity functions in social choice. Heidelberg: Springer.
 2.
Ågotnes, T., Goranko, V., & Jamroga, W. (2007). Alternating-time temporal logics with irrevocable strategies. In D. Samet (Ed.) Proceedings of TARK XI (pp. 15–24).
 3.
Alechina, N., Logan, B., Nga, N., & Rakib, A. (2009). A logic for coalitions with bounded resources. In Proceedings of IJCAI (pp. 659–664).
 4.
Alur, R., Henzinger, T. A., & Kupferman, O. (1997). Alternating-time temporal logic. In Proceedings of the 38th Annual Symposium on Foundations of Computer Science (FOCS) (pp. 100–109). IEEE Computer Society Press, Los Alamitos.
 5.
Alur, R., Henzinger, T. A., & Kupferman, O. (1998). Alternating-time temporal logic. Lecture Notes in Computer Science, 1536, 23–60.
 6.
Alur, R., Henzinger, T. A., & Kupferman, O. (2002). Alternating-time temporal logic. Journal of the ACM, 49, 672–713. doi:10.1145/585265.585270.
 7.
Belnap, N., & Perloff, M. (1988). Seeing to it that: A canonical form for agentives. Theoria, 54(3), 175–199.
 8.
Belnap, N., Perloff, M., & Xu, M. (2001). Facing the future: Agents and choices in our indeterminist world. Oxford: Oxford University Press.
 9.
Boros, E., Elbassioni, K., Gurvich, V., & Makino, K. (2010). On effectivity functions of game forms. Games and Economic Behavior, 68(2), 512–531.
 10.
Broersen, J. (2011). Deontic epistemic stit logic distinguishing modes of mens rea. Journal of Applied Logic, 9(2), 127–152.
 11.
Broersen, J., Herzig, A., & Troquard, N. (2006). Embedding alternating-time temporal logic in strategic STIT logic of agency. Journal of Logic and Computation, 16(5), 559–578.
 12.
Bulling, N., & Jamroga, W. (2014). Comparing variants of strategic ability. Journal of Autonomous Agents and Multi-Agent Systems, 28(3), 474–518.
 13.
Ciuni, R., & Zanardo, A. (2010). Completeness of a branching-time logic with possible choices. Studia Logica, 96(3), 393–420.
 14.
Emerson, E., & Halpern, J. (1986). “Sometimes” and “not never” revisited: On branching versus linear time temporal logic. Journal of the ACM, 33(1), 151–178.
 15.
Emerson, E. A. (1983). Alternative semantics for temporal logics. Theoretical Computer Science, 26, 121–130.
 16.
Goranko, V. (2001). Coalition games and alternating temporal logics. In J. van Benthem (Ed.) Proceedings of TARK VIII (pp. 259–272). Morgan Kaufmann, Siena.
 17.
Goranko, V., & Jamroga, W. (2004). Comparing semantics of logics for multi-agent systems. Synthese, 139(2), 241–280.
 18.
Goranko, V., & Jamroga, W. (2012). State and path effectivity models for logics of multi-player games. In Proceedings of AAMAS (pp. 1123–1130).
 19.
Goranko, V., Jamroga, W., & Turrini, P. (2013). Strategic games and truly playable effectivity functions. Journal of Autonomous Agents and Multi-Agent Systems, 26(2), 288–314.
 20.
Herzig, A., & Troquard, N. (2006). Knowing how to play: Uniform choices in logics of agency. In Proceedings of AAMAS’06 (pp. 209–216).
 21.
van der Hoek, W., & Wooldridge, M. (2002). Tractable multiagent planning for epistemic goals. In C. Castelfranchi & W. Johnson (Eds.), Proceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-02) (pp. 1167–1174). New York: ACM Press.
 22.
Horty, J., & Belnap, N. (1995). The deliberative stit: A study of action, omission, ability, and obligation. Journal of Philosophical Logic, 24, 583–644.
 23.
Horty, J. F. (2001). Agency and Deontic Logic. Oxford: Oxford University Press.
 24.
Lorini, E. (2013). Temporal stit logic and its application to normative reasoning. Journal of Applied Non-Classical Logics, 23(4), 372–399.
 25.
Moulin, H., & Peleg, B. (1982). Cores of effectivity functions and implementation theory. Journal of Mathematical Economics, 10(1), 115–145.
 26.
Osborne, M., & Rubinstein, A. (1994). A course in game theory. Cambridge: MIT Press.
 27.
Pauly, M. (2001). Logic for social software. Ph.D. thesis, University of Amsterdam, Amsterdam.
 28.
Pauly, M. (2001). A logical framework for coalitional effectivity in dynamic procedures. Bulletin of Economic Research, 53(4), 305–324.
 29.
Pauly, M. (2002). A modal logic for coalitional power in games. Journal of Logic and Computation, 12(1), 149–166.
 30.
Peleg, B. (1997). Effectivity functions, game forms, games, and rights. Social Choice and Welfare, 15(1), 67–80.
 31.
Peleg, B. (1998). Effectivity functions, game forms, games, and rights. Social Choice and Welfare, 15, 67–80.
 32.
Rosenthal, R. (1972). Cooperative games in effectiveness form. Journal of Economic Theory, 5, 88–101.
 33.
Schobbens, P. Y. (2004). Alternating-time logic with imperfect recall. Electronic Notes in Theoretical Computer Science, 85(2), 82–93.
 34.
Storcken, T. (1997). Effectivity functions and simple games. International Journal of Game Theory, 26, 235–248.
 35.
Wooldridge, M. (2002). An Introduction to MultiAgent Systems. Chichester: John Wiley & Sons.
Acknowledgments
Wojciech Jamroga acknowledges the support of the National Research Fund (FNR) Luxembourg under the Project GALOT (INTER/DFG/12/06), as well as the support of the 7th Framework Programme of the European Union under the Marie Curie IEF Project ReVINK (PIEF-GA-2012-626398). Valentin Goranko partly worked on this paper during his visit to the Centre International de Mathématiques et Informatique de Toulouse. The authors also thank the anonymous reviewers of JAAMAS for their useful comments.
Additional information
A very preliminary version of this article appeared in [18].
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
About this article
Cite this article
Goranko, V., Jamroga, W. State and path coalition effectivity models of concurrent multi-player games. Auton Agent Multi-Agent Syst 30, 446–485 (2016). https://doi.org/10.1007/s10458-015-9294-4
Keywords
 Multi-step games
 Coalitional effectivity models
 Alternating-time temporal logic