A Logic for Conditional Local Strategic Reasoning

We consider systems of rational agents who act and interact in pursuit of their individual and collective objectives. We study and formalise the reasoning of an agent, or of an external observer, about the expected choices of action of the other agents, based on their objectives, in order to assess the reasoner's ability, or expectation, to achieve their own objective. To formalise such reasoning we extend Pauly's Coalition Logic with three new modal operators of conditional strategic reasoning, thus introducing the Logic for Local Conditional Strategic Reasoning ConStR. We provide formal semantics for the new conditional strategic operators in concurrent game models, introduce the matching notion of bisimulation for each of them, prove bisimulation invariance and the Hennessy-Milner property for each, and briefly discuss and compare their expressiveness. Finally, we also propose systems of axioms for each of the basic operators of ConStR and for the full logic.


Introduction
Consider the following scenario. Alice and Bob are students at DownTown University. Alice is coming to campus today and has some agenda to complete. Bob wants to meet Alice somewhere on campus today. She does not know that (maybe she does not even know Bob), and they have no communication. Bob may or may not know what Alice is going to do on campus, or where and at what time she will go during the day. Using his knowledge of what Alice intends to do today, and where and when, Bob wants to come up with a plan of how (where and when) to meet her.
From a more general perspective, we consider a scenario of agents acting independently, and possibly concurrently, in pursuit of their individual and collective goals and we analyse the reasoning of an agent (or, observer) about the possible local actions (at the current state only) of the other agents and their effect for realising or enabling the outcome of interest for the reasoner.
Related work and motivation. The kind of strategic reasoning discussed here is within the conceptual thrust motivating the research on logic-based strategic reasoning over the past two decades, starting with Coalition Logic CL ([19], [20]), its temporal extension, the alternating-time temporal logic ATL ([5]), its epistemic extension ATEL ([14]), and gradually evolving towards increasingly expressive formalisms, such as Strategy Logic SL [16] (cf. [17]). See [7] and [4] for overviews of the area. Most of these logical systems (except SL, where the agents' strategies are explicitly named in the language) assume arbitrary or adversarial behaviour of the agents outside of the proponent coalitions in CL, ATL, and ATEL. Also, the knowledge of the agents involved in ATEL refers to truths (of formulae in the language) at the current state, rather than to their knowledge about each other's objectives and available actions. Thus, these logics formalise absolute, unconstrained strategic reasoning, usually by an external observer, about the unconditional strategic abilities of agents and coalitions to achieve their goals. However, such unconstrained strategic reasoning is seldom applicable in practice, except in purely adversarial zero-sum games. Usually, all agents acting in the system (except for the environment, or an absolute adversary) have their own goals and act in pursuit of their fulfilment, rather than just to prevent the proponents from achieving their goals. This calls for a more refined strategic reasoning, conditional on the agents' knowledge of the opponents' goals and of their possible actions to achieve them, which is the proposal of the present paper. It should be noted that there is a recent line of research on rational synthesis [9], [15] and rational verification [21], [13], which does take into account all agents' goals, but aims at designing stable strategy profiles (Nash equilibria) that only guarantee the satisfaction of the goal of one special agent (the proponent, representing the system), whereas all others are supposed to act rationally and accept the proposed solution, whether it satisfies their own goals or not. Thus, our work takes an essentially different perspective and has quite different objectives. We are aware of few other works that deal more directly and explicitly with conditional strategic reasoning in a sense akin to the present paper. Besides the earlier conference version [12] of this work, perhaps the closest to it in spirit is the recent [11], to which the present work relates both conceptually and technically, as well as the conceptually related [18], which presents a logic that can express statements of the type: "The coalition B has a strategy to achieve their goal once they know the strategy of the coalition A, no matter what that strategy is". If the epistemic ingredient in it is considered implicit, this statement is expressible by our operator for 'reactive strategic ability' O_β. We also note that an axiomatic system is proposed and proved complete in [18], which shares some basic axioms with our axiom system for O_β presented in Section 6.3, but differs from it on others.

Our contributions. In this work we identify several patterns of conditional strategic reasoning of an observer or an active agent, depending on his/her knowledge about the objectives and possible actions of the other agents. To formalize such reasoning we extend Coalition Logic ([19], [20]) with three new modal operators of conditional strategic reasoning, thus introducing the Logic for Local Conditional Strategic Reasoning ConStR. We provide formal semantics for the new conditional strategic operators, introduce the matching notion of bisimulation for each of them, and briefly discuss and compare their expressiveness. We then also propose systems of axioms for each of the basic operators of ConStR and for the whole logic, without yet stating completeness claims (for lack of space, these are left to future work).
Structure of the paper. Section 2 provides some preliminaries on concurrent game models and the coalition logic CL. Then, Section 3 presents an informal discussion on conditional strategic reasoning, motivating the further technical work. Section 4 introduces three modal operators formalising patterns of conditional strategic reasoning and the new logic ConStR as an extension of Coalition Logic with these operators. Section 5 introduces the matching notion of bisimulation for that logic and briefly discusses its expressiveness. In Section 6 we propose systems of axioms for each of the basic operators of ConStR and for the full logic. We end with brief concluding remarks in Section 7.

Preliminaries
Multi-agent game models. We fix a finite set of agents Agt = {a_1, ..., a_n} and a set of atomic propositions Π. Subsets of Agt will also be called coalitions.

Definition 2.1. A game model for Agt and Π (essentially equivalent to the concurrent game models used in [5]) is a tuple M = (S, {Σ_a}_{a∈Agt}, g, V), where S is a non-empty set of states; each Σ_a is a non-empty set of possible actions of agent a; V : Π → P(S) is a valuation of the atomic propositions from Π in S; and g is a game map that assigns to each s ∈ S a strategic game form g(s) with an outcome function out_s, mapping every action profile σ ∈ Σ_{a_1} × ... × Σ_{a_n} to a successor state out_s(σ) ∈ S. A joint action of a coalition C is a tuple σ_C of actions, one for each agent in C, and the set of outcome states it enables at s is Out[s, σ_C] = {out_s(σ) | σ an action profile with σ|_C = σ_C}, where σ|_C is the restriction of σ to C. Note that the empty tuple σ_∅ is the only available joint action for the empty coalition ∅ at any state.
The basic logic for coalitional strategic reasoning CL. The Coalition Logic CL was introduced in [19], cf. also [20]. CL extends classical propositional logic with coalitional strategic modal operators [C], for any coalition of agents C. The formulae of CL are defined as follows:

φ ::= p | ¬φ | (φ ∧ φ) | [C]φ

where p ∈ Π and C ⊆ Agt. We will write [i] instead of [{i}]. The intuitive reading of [C]φ is:
"The coalition C has a joint action that ensures an outcome (state) satisfying φ, regardless of how all other agents act."

The semantics of CL is defined in terms of the notion of truth of a CL-formula ψ at a state s of a game model M, denoted M, s ⊨ ψ, by induction on formulae, via the key clause:

M, s ⊨ [C]φ iff there is a joint action σ_C of C at s such that M, u ⊨ φ for every u ∈ Out[s, σ_C].

Thus, [C]φ formalises a claim of the ability of the agent/coalition C to choose a suitable (joint) action to ensure achieving the goal φ regardless of how all other agents choose to act, and therefore without assuming that the agents in C know the goal(s) of the remaining agents or their available actions to achieve these goals.
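To make the clause concrete, here is a minimal model-checking sketch of [C]φ in Python. The model encoding (explicit action tables and an outcome map) and all names (`AGENTS`, `can_force`, the states `s0`, `s1`, `s2`) are our own illustrative assumptions, not part of the paper's formalism.

```python
from itertools import product

# A toy two-agent game model (all names are our own illustrative choices):
# available actions per agent, an outcome map, and a valuation V.
AGENTS = ("a", "b")
ACTIONS = {"a": ("a1", "a2"), "b": ("b1", "b2")}
# out maps (state, full action profile) to the successor state
out = {
    ("s0", ("a1", "b1")): "s1",
    ("s0", ("a1", "b2")): "s2",
    ("s0", ("a2", "b1")): "s2",
    ("s0", ("a2", "b2")): "s1",
}
V = {"p": {"s1"}, "q": {"s1", "s2"}}

def outcomes(s, joint):
    """Out[s, sigma_C]: successors under all full profiles extending the
    joint action `joint` (a dict agent -> action for the coalition C)."""
    free = [ag for ag in AGENTS if ag not in joint]
    res = set()
    for combo in product(*(ACTIONS[ag] for ag in free)):
        full = dict(joint, **dict(zip(free, combo)))
        res.add(out[(s, tuple(full[ag] for ag in AGENTS))])
    return res

def can_force(s, coalition, goal_states):
    """M, s |= [C]phi: some joint action of C has all its outcomes in goal_states."""
    return any(
        outcomes(s, dict(zip(coalition, combo))) <= goal_states
        for combo in product(*(ACTIONS[ag] for ag in coalition))
    )
```

For instance, here the grand coalition can force p, agent a alone cannot, and the empty coalition's unique empty joint action yields all successors, matching the reading of [∅].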
The notion of bisimulation that guarantees truth invariance of all CL-formulae was first defined in [6] for the so-called 'alternating transition systems' (equivalent to a special case of concurrent game models), then independently in [19] for the abstract game models defined there, and later in [2] for concurrent game models; it is the latter definition that we give here.

Definition 2.2 (CL-bisimulation). Let M = (S, {Σ_a}_{a∈Agt}, g, V) be a concurrent game model. A binary relation R ⊆ S² is a CL-bisimulation in M if it satisfies the following conditions for every pair of states (s_1, s_2) such that s_1 R s_2 and for every coalition C:

Atom equivalence: for every p ∈ Π: s_1 ∈ V(p) iff s_2 ∈ V(p).

Forth: for every joint action σ¹_C of C at s_1 there is a joint action σ²_C of C at s_2 such that for every u_2 ∈ Out[s_2, σ²_C] there is u_1 ∈ Out[s_1, σ¹_C] with u_1 R u_2.

Back: like Forth, but with the indices 1 and 2 swapped.

CL-bisimulation is defined here within a model. It readily extends to CL-bisimulation between models, by treating both as parts of a single model.
The Alternating-time Temporal Logic ATL*. The Alternating-time Temporal Logic ATL*, proposed by [5], is an extension of CL with temporal operators. Its featured operator is ⟨⟨C⟩⟩φ, denoting the claim that C has a joint strategy that guarantees the satisfaction of φ, where φ is a 'temporal objective', i.e. a (path) formula beginning with a temporal operator 'nexttime' X, 'always' G, or 'until' U. The logic CL embeds into ATL* as the fragment extending propositional logic only with combinations of strategic and temporal operators of the type ⟨⟨C⟩⟩X, cf. [10]. We only mention ATL* here for the sake of some further references; the present paper will not make any essential use of that logic, and no familiarity with it, nor even with its fragment ATL, is required.
Conditional strategic reasoning: an informal discussion

Recall the scenario with the students Alice and Bob. Suppose that Alice has an objective γ_A to achieve: say, to meet with her supervisor Carl on campus today. Suppose also that Alice has several possible choices of an action (or a 'strategy') that would possibly, or certainly, guarantee the achievement of her objective. In our example, suppose these choices are: meeting in the supervisor's office, or in the library, or at the campus café.

Observer's conditional reasoning about an outcome from agent's actions
Let us first consider the case where Bob is just an observer who is not acting, but only reasoning about the consequences of Alice's possible actions with respect to the occurrence of another (intended or not) outcome event γ_B. For instance, suppose that Bob is interested in meeting with Alice on campus today (let us call that event γ_B) and is sitting in the campus café, reasoning about whether Alice will happen to come to the café, thus enabling the event γ_B (recall that Alice may not know about Bob expecting her there, or at all). More generally, we can also assume that there are other agents besides Alice, also acting in pursuit of their own goals, and Bob is reasoning about their individual and collective choices of action and the consequences of these choices. This leads to an observer's conditional strategic reasoning about claims of the type: "Some/every action of Alice that guarantees achievement of γ_A also guarantees/enables occurrence of the outcome γ_B".
Depending on Bob's knowledge about Alice's objective and of her expected choices of action there can be several possible cases for Bob's reasoning about the expected occurrence of the outcome γ B .

Observer Bob's reasoning, case 1: Bob knows nothing about Alice
Suppose that Bob does not know Alice's objective γ_A, and therefore has no a priori expectations about her choice of action. In our example, suppose that Bob only knows that Alice is coming to campus today, but not why or where on campus she is going. Then, Bob can only claim for sure that the outcome γ_B will occur if γ_B is inevitable, regardless of how Alice (and all others) will act. For instance, if Bob knows that Alice is coming to campus and he is standing by the only entrance of the campus, then he will know for sure that he is going to meet Alice (γ_B will occur), no matter what she will do there. This claim can be expressed in Coalition Logic CL simply as [∅]γ_B.

Observer Bob's reasoning, case 2: Bob only knows Alice's goal
Suppose now that Bob does know Alice's objective and knows that Alice can guarantee the achievement of that objective and will act towards that, but Bob does not know how exactly Alice might act. E.g., Bob knows that Alice is coming to campus to meet with her supervisor Carl, but does not know where and when. Then, Bob can only claim that the outcome γ_B will occur for sure if γ_B is true on every possible course of events ("play") on which γ_A is true. For instance, if Bob knows that Alice's supervisor will be working in his office for the whole day, and he is sitting in the corridor next to Carl's office, then he knows that he will meet with Alice (γ_B will occur) no matter when Alice comes to meet with Carl (i.e. no matter how γ_A occurs).
This can be expressed in CL simply as [∅](γ A → γ B ) and reflects the case when γ A can occur in various, possibly unintended ways, but its occurrence always implies the occurrence of γ B (e.g. if Bob is with Carl throughout the day, then even if Alice bumps into Carl accidentally, Bob will still meet her).

Observer Bob's reasoning, case 3: Bob knows Alice's goal and actions
Suppose now that Bob not only knows Alice's objective, but also knows all possible actions (or strategies) of Alice that can ensure the satisfaction of her objective γ_A, and knows that Alice will perform one of them, but does not know which one. (E.g., Bob knows that Alice, who is coming to campus to meet with her supervisor, can meet with him either in his office, or in the library, or in the café.) Now, for Bob to claim that the outcome γ_B will occur for sure, it suffices to know that each action of Alice that guarantees γ_A will also guarantee γ_B. (E.g., suppose that all possible meeting places for Alice and her supervisor are in the main building and Bob is waiting at the only entrance of the main building.) Here the conditional "If γ_A then γ_B" has a suitably constrained context, specifying that γ_A can occur only because the agent (Alice) takes a deliberate action to bring about γ_A. This can no longer be expressed in CL and requires introducing a new strategic operator.

Lastly, suppose that Bob also knows the specific action which Alice is taking in order to guarantee the achievement of her goal. Then Bob can claim that the outcome γ_B will occur for sure, as long as that specific action of Alice guarantees the satisfaction of γ_B. To formalise that, one needs explicit names for actions; but in our logic we will be able to state something stronger, viz. that every specific action of Alice that guarantees γ_A will also bring about γ_B.

Conditional reasoning of an agent about another agent's actions
Suppose now that Bob is not just a passive observer, but an acting agent who has the outcome γ_B as his own goal. There may be other agents, besides Alice and Bob, also acting in pursuit of their own goals, and Bob is reasoning about their expected choices of action and the consequences of these choices. Now, Bob is to decide, based on his reasoning about Alice's (and the other agents') possible choices of action, on his own action in pursuit of γ_B. This calls for an agent's conditional strategic reasoning about statements of the type: "For some/every action of Alice that guarantees achievement of γ_A, Bob has an action of his own to guarantee achievement of his objective γ_B".
We call this local conditional strategic reasoning, as it refers only to the immediate actions of the agents, not to their global strategies. Respectively, the outcomes of the local action profiles are just successor states, while in the general case they are (finite or possibly infinite) plays. Global conditional strategic reasoning will be treated in a follow-up work.
Each of the cases considered in Section 3.1 accordingly applies here, too, with the only difference being that now Bob is to choose a suitable action of his own.Besides, there are several additional cases to consider regarding the possible choice of action of Bob.
Agent Bob's reasoning, case 1: assuming Alice's cooperation

First, suppose, in addition, that Alice also knows Bob's objective and can choose to cooperate with him by selecting a suitable action σ_A that would not only guarantee the achievement of her own objective, but would also enable Bob to supplement σ_A with an action σ_B which would then guarantee the achievement of his objective, too. (So, we also assume that Alice knows enough about Bob's possible actions.) We refer the reader to Example 4.1 for a formal model illustrating the agent's ability assuming cooperation from the other agent.

Agent Bob's reasoning not assuming Alice's cooperation

Now, suppose Bob cannot count on Alice's cooperation. Still, the statement "Whichever way Alice acts towards achieving the objective γ_A, Bob can act so as to bring about achievement of his objective γ_B" admits two different readings, which we call 'proactive ability' and 'reactive ability' and discuss below.

Agent Bob's reasoning, case 2: reactive ability
In this case, for every action of Alice that ensures γ_A, Bob is to choose, reactively, an action of his own, generally dependent on Alice's action, that would also ensure the occurrence of γ_B (possibly in different ways for the different actions of Alice). This essentially corresponds to the case where Bob knows Alice's action at the time of choosing his own action, and for every choice of action of Alice that guarantees γ_A, Bob's respective choice will also bring about γ_B. For instance, in our running example, if Bob knows where Alice is going to meet with her supervisor, he can choose respectively where to go and wait for her.
More formally, each of Alice's actions that would guarantee γ A generates a set of possible outcome states (plays), and for each such set Bob is looking for a respective action that will bring about γ B on that set of outcome states.

Agent Bob's reasoning, case 3: proactive ability
The case of proactive ability is when Bob only knows that Alice has committed to act so as to achieve her goal γ_A (to meet with Carl at one of the three possible meeting places), but does not know which action Alice has chosen, and her choice will remain unknown to Bob at the time when he is to choose his own action aiming at satisfying γ_B (meeting Alice).

In this case Bob must consider all possible courses of events (plays) that can occur as a result of Alice acting towards achieving γ_A, and reason about whether he can choose, proactively and uniformly, one action that would bring about γ_B regardless of which action Alice may choose to apply in order to achieve her goal γ_A. For instance, in our running story, assuming that all meeting places are in the main building, Bob can choose to wait for Alice at the only entrance of that building. Formally speaking, in this case Bob considers, based on his knowledge, the set of states in the model which is the union of all sets of outcome states enabled by the specific actions of Alice that would guarantee γ_A, and looks for an action that will bring about γ_B on each of these outcome states.
We refer the reader to Example 4.2 for a formal model illustrating the concepts of proactive and reactive abilities of an agent and their difference.
To sum up, the distinction between proactive and reactive ability applies to Bob depending on whether or not he knows Alice's choice at the time when he is to make his own choice of action. If he knows Alice's choice at that time, his reasoning is about reactive ability; otherwise it is proactive ability reasoning. We note that the notions of proactive and reactive ability respectively correspond to the notions of α-effectivity and β-effectivity in game theory (cf., e.g., [1]).
Lastly, an important point: even though the knowledge of the agent about the others' goals and possible actions is essential, it will not feature in our formal logical language, nor in the formal semantics, but only in the external reasoner's analysis of which case of conditional strategic ability applies.

The logic of conditional strategic reasoning ConStR

Modal operators for conditional strategic reasoning
Given coalitions A and B and joint actions σ A for A and σ B for B, we say that σ B is consistent with σ A if σ B coincides with σ A on A ∩ B.
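In symbols (our rendering of the definition just given, not notation from the object language):

```latex
\sigma_B \ \text{is consistent with}\ \sigma_A
\quad\Longleftrightarrow\quad
\sigma_B(a) = \sigma_A(a) \ \text{ for every } a \in A \cap B .
```

In particular, if A and B are disjoint, every joint action of B is consistent with every joint action of A.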
We now introduce new operators for conditional strategic reasoning, for any coalitions A and B, with intuitive semantics corresponding to the three reasoning cases in Section 3.2, as follows.
(O_c) ⟨A⟩_c(φ; B↦ψ) says that A has a joint action σ_A which, when applied, guarantees the truth of φ and enables B to apply a joint action σ_B that is consistent with σ_A and guarantees ψ when additionally applied by B, in the sense that all agents in A act according to σ_A and those in B \ A act according to σ_B. This operator formalises Case 1 of the agent's reasoning discussed in Section 3.2, where A knows the objective of B and can choose to cooperate with B by selecting a suitable action.
(O_α) [A]^α(φ; B↦ψ) says that the coalition B \ A has a joint action σ_{B\A} such that if A applies any action that guarantees the truth of φ, then B \ A can guarantee the truth of ψ by additionally applying σ_{B\A}.

This operator formalises a claim of the ability of the agent/coalition B to choose a suitable (joint) action so as to achieve the goal ψ, assuming that A acts so as to achieve the goal φ, if B is to choose their (joint) action before A chooses theirs, or before B learns the action of A. This corresponds to the notion of agent's proactive ability discussed in Section 3.2.4, respectively to the game-theoretic notion of α-effectivity, hence the notation.
(O_β) [A]^β(φ; B↦ψ) says that for any joint action σ_A of A that, when applied, guarantees the truth of φ, there is a joint action σ_B that is consistent with σ_A and guarantees ψ when additionally applied by B. This operator formalises a claim of the ability of the agent/coalition B to choose a suitable (joint) action so as to achieve the goal ψ, assuming that A acts so as to achieve the goal φ, if B is to choose their (joint) action after B learns the (joint) action of A. This corresponds to the notion of agent's reactive ability discussed in Section 3.2.3, respectively to the game-theoretic notion of β-effectivity, hence the notation.
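Abstracting from the particular outcome sets, the three operators differ precisely in the pattern and order of quantification over the joint actions involved. Writing σ ⊩ χ as informal shorthand for "the joint action σ, applied at the current state, guarantees χ" (this shorthand and the layout below are ours, for illustration only), the readings above can be summarised as:

```latex
\begin{align*}
\langle A\rangle_c(\varphi;\, B{\mapsto}\psi) &:\quad
  \exists \sigma_A\,\bigl(\sigma_A \Vdash \varphi \;\wedge\;
  \exists \sigma_B\, (\sigma_A \uplus \sigma_B \Vdash \psi)\bigr)\\[2pt]
[A]^{\alpha}(\varphi;\, B{\mapsto}\psi) &:\quad
  \exists \sigma_B\,\forall \sigma_A\,\bigl(\sigma_A \Vdash \varphi
  \;\rightarrow\; \sigma_A \uplus \sigma_B \Vdash \psi\bigr)\\[2pt]
[A]^{\beta}(\varphi;\, B{\mapsto}\psi) &:\quad
  \forall \sigma_A\,\bigl(\sigma_A \Vdash \varphi \;\rightarrow\;
  \exists \sigma_B\, (\sigma_A \uplus \sigma_B \Vdash \psi)\bigr)
\end{align*}
```

Thus O_α and O_β differ only in the order of ∃σ_B and ∀σ_A, which is exactly the proactive/reactive (α-/β-effectivity) distinction of Section 3.2; ⊎ denotes the merge of joint actions defined with the formal semantics below.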

The language of ConStR
We fix a finite nonempty set of agents Agt and a countable set of atomic propositions Π. The formulae of ConStR, where p ∈ Π and A, B ⊆ Agt, are defined as follows:

φ ::= p | ¬φ | (φ ∧ φ) | ⟨A⟩_c(φ; B↦φ) | [A]^α(φ; B↦φ) | [A]^β(φ; B↦φ)

Some definable operators and expressions in ConStR
The following can be easily seen from the informal semantics above, and can also be easily verified with the formal semantics introduced further.
• The coalitional operator from CL is definable as a special case of each of O c , O α , O β as follows: This expresses the case when B is not informed about the goal of A and has to choose proactively a joint action, before A has chosen their action.Thus, it indeed claims an unconditional ability of B to choose an action that guarantees φ. • The dual to O c operator ¬ A c (φ; B ¬ψ) says that every joint action of A that, when applied, guarantees the truth of φ, would prevent B from acting additionally so as to guarantee ψ.This formalises the conditional reasoning scenario where the goals of A and B are conflicting and where Bob can establish that whichever way A acts towards their goal, that would block B from acting to guarantee achievement of its goal.
• [A]^β(⊤; B↦ψ) essentially formalises the case when B is not informed about the goal of A, but has to choose their action after learning the action of A.
• On the other hand, [A]_c(φ|ψ) := [A]^β(φ; ∅↦ψ), also equivalent to [A]^α(φ; ∅↦ψ), says that for any joint strategy of A, if it guarantees φ to be true, then it guarantees ψ to be true, too. That formalises the case in Section 3.1.3 of the reasoning of an observer, who knows both the goal φ and the possible actions of A, about the occurrence of the outcome ψ.
• ⟨A⟩_c(φ|ψ) := ¬[A]_c(φ|¬ψ) says that there is a joint strategy of A that guarantees φ to be true and enables ψ to be true, too. Note that it is equivalent to a special case of the "socially friendly coalitional operator" SF from [11]. Moreover, ⟨A⟩_c(φ|ψ) is also definable as ⟨A⟩_c(φ; Ā↦ψ), where Ā = Agt \ A.
• The coalitional operator [A] from CL is a special case of the above: [A]φ := ⟨A⟩_c(φ|⊤), meaning "A has a joint action to ensure the truth of φ". (We have preserved the box-like notation for [A] from CL, even though it is not consistent with ours.)
• ⟨A⟩_c(φ; B↦ψ) is definable in terms of the "group protecting coalitional operator" GIP, introduced in [11]. Nevertheless, it now has a different motivation and intuitive interpretation.

Formal semantics of ConStR
Given coalitions A, B ⊆ Agt and joint actions σ_A for A and σ_B for B, we define σ_A ⊎ σ_B to be the joint action for A ∪ B which equals σ_A when restricted to A and equals σ_B|_{B\A} when restricted to B \ A. Thus, in particular, σ_A ⊎ σ_B = σ_A for any B ⊆ A ⊆ Agt.

Now, let M = (S, {Σ_a}_{a∈Agt}, g, V) be a game model. The formal semantics of ConStR extends the one of CL to the new operators as follows:

M, s ⊨ ⟨A⟩_c(φ; B↦ψ) iff A has a joint action σ_A such that M, u ⊨ φ for every u ∈ Out[s, σ_A], and B has a joint action σ_B, consistent with σ_A, such that M, u ⊨ ψ for every u ∈ Out[s, σ_A ⊎ σ_B].

M, s ⊨ [A]^α(φ; B↦ψ) iff B has a joint action σ_B such that for every joint action σ_A of A with M, u ⊨ φ for every u ∈ Out[s, σ_A], it holds that M, u ⊨ ψ for every u ∈ Out[s, σ_A ⊎ σ_B].

M, s ⊨ [A]^β(φ; B↦ψ) iff for every joint action σ_A of A such that M, u ⊨ φ for every u ∈ Out[s, σ_A], B has a joint action σ_B (generally, dependent on σ_A) such that M, u ⊨ ψ for every u ∈ Out[s, σ_A ⊎ σ_B].
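These semantics can be prototyped directly by nesting the corresponding quantifiers over joint actions. The sketch below is a self-contained Python illustration on a hypothetical one-state model chosen to mirror the reactive/proactive contrast of Example 4.2: both actions of a guarantee p, but which action of b additionally secures q depends on a's choice. The encoding and all names are our own assumptions, not the paper's.

```python
from itertools import product

# A hypothetical one-state, two-agent model (all names are our own choices).
AGENTS = ("a", "b")
ACTIONS = {"a": ("a1", "a2"), "b": ("b1", "b2")}
out = {("a1", "b1"): "t11", ("a1", "b2"): "t12",
       ("a2", "b1"): "t21", ("a2", "b2"): "t22"}
V = {"p": {"t11", "t12", "t21", "t22"}, "q": {"t12", "t21"}}

def joint_actions(coalition):
    """All joint actions of a coalition, as dicts agent -> action."""
    return [dict(zip(coalition, combo))
            for combo in product(*(ACTIONS[ag] for ag in coalition))]

def outcomes(joint):
    """Out[s0, sigma_C]: successors under all full profiles extending joint."""
    free = [ag for ag in AGENTS if ag not in joint]
    res = set()
    for combo in product(*(ACTIONS[ag] for ag in free)):
        full = dict(joint, **dict(zip(free, combo)))
        res.add(out[tuple(full[ag] for ag in AGENTS)])
    return res

def guarantees(joint, goal):
    return outcomes(joint) <= goal

def merge(sa, sb):
    """sigma_A (+) sigma_B: sigma_A takes priority on the overlap."""
    return dict(sb, **sa)

def O_c(A, phi, B, psi):
    """Some phi-guaranteeing action of A enables some action of B to secure psi."""
    return any(guarantees(sa, phi) and
               any(guarantees(merge(sa, sb), psi) for sb in joint_actions(B))
               for sa in joint_actions(A))

def O_alpha(A, phi, B, psi):
    """Proactive: one action of B works against every phi-guaranteeing action of A."""
    return any(all(guarantees(merge(sa, sb), psi)
                   for sa in joint_actions(A) if guarantees(sa, phi))
               for sb in joint_actions(B))

def O_beta(A, phi, B, psi):
    """Reactive: each phi-guaranteeing action of A has some answering action of B."""
    return all(any(guarantees(merge(sa, sb), psi) for sb in joint_actions(B))
               for sa in joint_actions(A) if guarantees(sa, phi))

p, q = V["p"], V["q"]
```

On this model, `O_c(("a",), p, ("b",), q)` and `O_beta(("a",), p, ("b",), q)` hold, while `O_alpha(("a",), p, ("b",), q)` fails: b has a good reply to each p-guaranteeing choice of a, but no uniform one.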

Remark:
The semantics of each of the operators above can be re-stated to consider joint actions for B \ A rather than for the whole of B. For instance, it can be easily verified for the operator O_α that M, s ⊨ [A]^α(φ; B↦ψ) iff B \ A has a joint action σ_{B\A} such that for every joint action σ_A of A with M, u ⊨ φ for every u ∈ Out[s, σ_A], it holds that M, u ⊨ ψ for every u ∈ Out[s, σ_A ⊎ σ_{B\A}].

Some examples
Here we provide a few simple examples illustrating the semantics of ConStR. It is easy to see that M, s_0 ⊨ ⟨a⟩_c(p; b↦q), while M, s_0 ⊭ [b]q. Thus, an agent may have only conditional ability to achieve its goal.
Note that:

• M, s_0 ⊨ [a]^β(p; b↦q). Indeed, agent a has two actions at state s_0 that ensure p: a_1 and a_2. For each of them, b has an action to ensure q, viz.: choose b_2 if a chooses a_1, and choose b_1 if a chooses a_2.

• M, s_0 ⊭ [a]^α(p; b↦q). Indeed, neither b_1 nor b_2 ensures q against both choices a_1 and a_2 of a. Thus, b does not have a uniform action to ensure q against every action of a that ensures p.

• However, if the outcomes of (a_2, b_1) and (a_2, b_2) are swapped, then [a]^α(p; b↦q) becomes true at s_0 in the resulting model.
• (M, s_0) does not satisfy ...

Bisimulations and expressiveness of ConStR

Bisimulations for ConStR
The definition of ConStR-bisimulation involves, besides atom equivalence, nested Forth and Back conditions for each of the respective new operators O_c, O_α, and O_β. As with the CL-bisimulation given in Section 2, we only define ConStR-bisimulation within a game model; it generalises easily to ConStR-bisimulation between game models. Note that the nested back-and-forth conditions are needed because of the patterns of quantification in the semantic definitions of the new strategic operators: first quantification over the actions of A and B, and then over the outcomes generated by these actions.

Definition 5.1 (ConStR-bisimulation). Let M = (S, {Σ_a}_{a∈Agt}, g, V) be a game model. A binary relation R ⊆ S² is a ConStR-bisimulation in M if it satisfies the following conditions for every pair of states (s_1, s_2) such that s_1 R s_2 and for all coalitions A and B:

Atom equivalence: for every p ∈ Π: s_1 ∈ V(p) iff s_2 ∈ V(p).

O_c-bisimulation (for illustration, see Figure 1):
A-Forth_c: for every joint action σ¹_A of A at s_1 there is a joint action σ²_A of A at s_2 such that: (i) for every u_2 ∈ Out[s_2, σ²_A] there is u_1 ∈ Out[s_1, σ¹_A] such that u_1 R u_2; and (ii) B-Forth_c: for every joint action σ¹_B of B at s_1 there is a joint action σ²_B of B at s_2 such that for every u_2 ∈ Out[s_2, σ²_A ⊎ σ²_B] there is u_1 ∈ Out[s_1, σ¹_A ⊎ σ¹_B] such that u_1 R u_2.
A-Back_c and B-Back_c: like the respective Forth conditions, but with the indices 1 and 2 swapped.

O_α-bisimulation:
B-Forth_α: for every joint action σ¹_B of B at s_1 there is a joint action σ²_B of B at s_2 such that A-Back_α: for every joint action σ²_A of A at s_2 there is a joint action σ¹_A of A at s_1 such that: (i) for every u_1 ∈ Out[s_1, σ¹_A] there is u_2 ∈ Out[s_2, σ²_A] such that u_1 R u_2; and (ii) for every u_2 ∈ Out[s_2, σ²_A ⊎ σ²_B] there is u_1 ∈ Out[s_1, σ¹_A ⊎ σ¹_B] such that u_1 R u_2.
B-Back_α: like B-Forth_α, but with 1 and 2 swapped.

O_β-bisimulation:
A-Back_β: for every joint action σ²_A of A at s_2 there is a joint action σ¹_A of A at s_1 such that: (i) for every u_1 ∈ Out[s_1, σ¹_A] there is u_2 ∈ Out[s_2, σ²_A] such that u_1 R u_2; and (ii) B-Forth_β: for every joint action σ¹_B of B at s_1 there is a joint action σ²_B of B at s_2 such that for every u_2 ∈ Out[s_2, σ²_A ⊎ σ²_B] there is u_1 ∈ Out[s_1, σ¹_A ⊎ σ¹_B] such that u_1 R u_2.
A-Forth_β and B-Back_β: like the respective conditions above, but with 1 and 2 swapped.
Example 5.6. The game models M_1 and M_2 below involve two players: a and b.

Example 5.7. The game models M_1, M_2 below involve three players: a, b, c.

Axiomatic system for ConStR
Here we propose systems of axiom schemes for each of the basic operators of ConStR and for the whole logic, without stating completeness claims; these are left for a follow-up work. Some of these axiom schemes are adapted from the axiomatic systems for SFCL and GPCL presented in [11].

Common axiom schemes and rules for ConStR
The axiomatic system Ax_ConStR builds on the axiom schemes and rules of the complete axiomatic system Ax_CL for Coalition Logic, given in [19] and [20]. Analogues of these axiom schemes and rules are added for each of the conditional strategic operators occurring in the fragment of ConStR that is to be axiomatized. (Recall that the coalitional operator of CL is a special case of each of O_c, O_α, O_β.) Some of these common axiom schemes and rules will turn out to be derivable from the special ones added below, but we are not concerned here with the minimality of our system.

Concluding remarks

2. We also claim that, for general reasons, ConStR has the finite tree-model property, and is therefore decidable (to be proved in a follow-up paper). We further conjecture that its satisfiability problem is PSPACE-complete, a major argument in favour of the modal approach to formalising conditional strategic reasoning advocated here, as opposed to one based on a version of Strategy Logic. We also leave the question of the precise complexity of model checking to further investigation, but conjecture that it will still be tractable in the size of the model, as in CL and ATL.
3. Extension of the framework to a full-fledged, long term conditional strategic reasoning, by extending the language with standard temporal operators, to produce an ATL-like extension of ConStR.
4. Long term conditional strategic reasoning naturally requires considerations about strategic commitments and model updates (cf.[2] and [3]) and, more generally, requires involving strategy contexts in the semantics ( [8]).
5. Adding knowledge in the semantics, and explicitly in the language, by assuming that the agents reason and act under imperfect information.
6. Last, but most important, the long-term objective of this project is to model and capture, by a semantically richer logic-based formalism, the mutually conditional strategic reasoning, where all agents reason about their strategic choices, conditional on the others' strategic choices, conditional on the reasoners' choices, etc., recursively.

Example 4.1.

The game model M below has two players, a and b. Each has two actions at state s_0: a_1, a_2, resp. b_1, b_2.

Example 4.2.

The game model M below has two players, a and b. Agent a has 3 actions at state s_0: a_1, a_2, a_3, and b has 2 actions: b_1, b_2.

Figure 1: The A-Forth_c half of the O_c-bisimulation

States s_1, s_2 ∈ M are ConStR-bisimulation equivalent, or just ConStR-bisimilar, if there is a ConStR-bisimulation R in M such that s_1 R s_2.

Proposition 5.2 (ConStR-bisimulation invariance). Let R be a ConStR-bisimulation in a game model M. Then for every ConStR-formula θ and every pair s_1, s_2 ∈ M such that s_1 R s_2: M, s_1 ⊨ θ iff M, s_2 ⊨ θ.