Computing Adequately Permissive Assumptions for Synthesis

We solve the problem of automatically computing a new class of environment assumptions in two-player turn-based finite graph games which characterize an ``adequate cooperation'' needed from the environment to allow the system player to win. Given an $\omega$-regular winning condition $\Phi$ for the system player, we compute an $\omega$-regular assumption $\Psi$ for the environment player, such that (i) every environment strategy compliant with $\Psi$ allows the system to fulfill $\Phi$ (sufficiency), (ii) $\Psi$ can be fulfilled by the environment for every strategy of the system (implementability), and (iii) $\Psi$ does not prevent any cooperative strategy choice (permissiveness). For parity games, which are canonical representations of $\omega$-regular games, we present a polynomial-time algorithm for the symbolic computation of adequately permissive assumptions and show that our algorithm runs faster and produces better assumptions than existing approaches -- both theoretically and empirically. To the best of our knowledge, for $\omega$-regular games, we provide the first algorithm to compute sufficient and implementable environment assumptions that are also permissive.


Introduction
Two-player ω-regular games on finite graphs are the core algorithmic components in many important problems of computer science and cyber-physical system design. Examples include the synthesis of programs which react to environment inputs, modal µ-calculus model checking, correct-by-design controller synthesis for cyber-physical systems, and supervisory control of autonomous systems.
These problems can be ultimately reduced to an abstract two-player game between an environment player and a system player, respectively capturing the external unpredictable influences and the system under design, while the game captures the non-trivial interplay between these two parts. A solution of the game is a set of decisions the system player needs to make to satisfy a given ω-regular temporal property over the states of the game, which is then used to design the sought system or its controller.
Traditionally, two-player games over graphs are solved in a zero-sum fashion, i.e., assuming that the environment will behave arbitrarily and possibly adversarially. Although this approach results in robust system designs, it usually makes the environment too powerful to allow an implementation for the system to exist. However, in reality, many of the outlined application areas actually account for some cooperation of system components, especially if they are co-designed. In this scenario it is useful to understand how the environment (i.e., other processes) needs to cooperate to allow for an implementation to exist. This can be formalized by environment assumptions, which are ω-regular temporal properties that restrict the moves of the environment player in a synthesis game. Such assumptions can then be used as additional specifications in other components' synthesis problems to enforce the necessary cooperation (possibly in addition to other local requirements) or can be used to verify existing implementations.
For the reasons outlined above, the automatic computation of assumptions has received significant attention in the reactive synthesis community. It has been used in two-player games [8,6], both in the context of monolithic system design [11,20] as well as distributed system design [19,13].
All these works emphasize two desired properties of assumptions. They should be (i) sufficient, i.e., enable the system player to win if the environment obeys its assumption, and (ii) implementable, i.e., prevent the system player from falsifying the assumption and thereby vacuously winning the game without even respecting the original specification. In this paper, we claim that there is an important third property, termed permissiveness, which is needed when computed assumptions are used for distributed synthesis. An assumption is permissive if it retains all cooperatively winning plays in the game. This notion is crucial in the setting of distributed synthesis, as here assumptions are generated before the implementation of every component is fixed. Therefore, assumptions need to retain all feasible ways of cooperation to allow for a distributed implementation to be discovered in a decentralized manner.
While the class of assumptions considered in this paper is motivated by their use for distributed synthesis, this paper focuses only on their formalization and computation, i.e., given a two-player game over a finite graph and an ω-regular winning condition Φ for the system player, we automatically compute an adequately permissive ω-regular assumption Ψ for the environment player that formalizes the above intuition by being (i) sufficient, (ii) implementable, and (iii) permissive. The main observation that we exploit is that such adequately permissive assumptions (APA for short) can be constructed from three simple templates which can be directly extracted from a cooperative synthesis game, leading to a polynomial-time algorithm for their computation. Owing to page constraints, we postpone the very interesting but largely orthogonal problem of contract-based distributed synthesis using APAs to future work.

Fig. 1: Game graphs with environment (squares) and system (circles) vertices.
To appreciate the simplicity of the assumption templates we use, consider the game graphs depicted in Fig. 1, where the system and the environment player control the circle and square vertices, respectively. Given the specification Φ = ♦□{p} (which requires the play to eventually only see vertex p), the system player can win the game in Fig. 1 (a) by requiring the environment to fully disable edge e_1. This introduces the first template type, a safety template, on e_1. On the other hand, the game in Fig. 1 (b) only requires that e_1 is taken finitely often. This is captured by our second template type, a co-liveness template, on e_1. Finally, consider the game in Fig. 1 (c) with the specification Φ = □♦{p}, i.e., vertex p should be seen infinitely often. Here, the system player wins if, whenever the source vertices of edges e_1 and e_2 are seen infinitely often, also one of these edges is taken infinitely often. This is captured by our third template type, a live group template, on the edge group {e_1, e_2}.
Contribution. The main contribution of this paper is to show that APAs can always be composed from the three outlined assumption templates and can be computed in polynomial time.
Using a set of benchmark examples taken from SYNTCOMP [2] and a prototype implementation of our algorithm in our new tool SImPA, we empirically show that our algorithm is both faster and produces more desirable solutions than existing approaches. In addition, we apply SImPA to the well-known 2-client arbiter synthesis benchmark from [22], which is known to only allow for an implementation of the arbiter if the clients' moves are suitably restricted. We show that applying SImPA to the unconstrained arbiter synthesis problem yields assumptions on the clients which are less restrictive but conceptually similar to the ones typically used in the literature.
Related Work. The problem of automatically computing environment assumptions for synthesis was already addressed by Chatterjee et al. [8]. However, their class of assumptions does, in general, not allow constructing permissive assumptions. Further, computing their assumptions is an NP-hard problem, while our algorithm computes APAs in O(n^4) time for a parity game with n vertices. The difference in complexity arises because Chatterjee et al. require minimality of the assumptions. On the other hand, we trade minimality for permissiveness, which allows us to utilize cooperative games, which are easier to solve.
When considering cooperative solutions of non-zero-sum games, related works either fix strategies for both players [7,14], assume a particularly rational behavior of the environment [4], or restrict themselves to safety assumptions [19]. In contrast, we do not make any assumption on how the environment chooses its strategy. Finally, in the context of specification repair in zero-sum games, multiple automated methods for repairing environment models exist, e.g., [23,15,16,21,8]. Unfortunately, all of these methods fail to provide permissive repairs. A recent work by Cavezza et al. [6] computes a minimally restrictive set of assumptions but only for GR(1) specifications, which are a strict subclass of the problem considered in our work. To the best of our knowledge, we propose the first fully automated algorithm for computing permissive assumptions for general ω-regular games.

Preliminaries
Notation. We use N to denote the set of natural numbers including zero. Given two natural numbers a, b ∈ N with a < b, we use [a; b] to denote the set {a, a + 1, . . ., b}.

Languages. Let Σ be a finite alphabet. The notations Σ* and Σ^ω denote the sets of finite and infinite words over Σ, respectively, and Σ^∞ is equal to Σ* ∪ Σ^ω. For any word w ∈ Σ^∞, w_i denotes the i-th symbol in w. Given two words u ∈ Σ* and v ∈ Σ^∞, the concatenation of u and v is written as the word uv.

Game graphs. A game graph is a tuple G = (V, V_0, V_1, E), where V is a finite set of vertices partitioned into the Player 0 vertices V_0 and the Player 1 vertices V_1, and E ⊆ V × V is the edge relation. We write E(v) := {v' ∈ V | (v, v') ∈ E} for the set of successors of v. Without loss of generality, we assume that for every v ∈ V there exists v' ∈ V s.t. (v, v') ∈ E. For the purpose of this paper, the system and the environment players will be denoted by Player 0 and Player 1, respectively. A play originating at a vertex v_0 is a finite or infinite sequence of vertices ρ = v_0 v_1 . . . with (v_k, v_{k+1}) ∈ E for all consecutive vertices.

Winning conditions. Given a game graph G, we consider winning conditions specified using a formula Φ in linear temporal logic (LTL) over the vertex set V, that is, we consider LTL formulas whose atomic propositions are sets of vertices V. In this case the set of desired infinite plays is given by the semantics of Φ over G, which is an ω-regular language L(Φ) ⊆ V^ω. Every game graph with an arbitrary ω-regular set of desired infinite plays can be reduced to a game graph (possibly with an extended set of vertices) with an LTL winning condition, as above. The standard definitions of ω-regular languages and LTL are omitted for brevity and can be found in standard textbooks [3].
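To make the definitions concrete, the following sketch (ours, not from the paper) encodes a game graph as a vertex partition plus a successor map, and checks the standing assumption that every vertex has at least one successor. The vertex names are illustrative.

```python
# A toy game graph: Player 0 (system, circles) owns V0, Player 1
# (environment, squares) owns V1, and E maps each vertex to its successors.
V0 = {"p"}
V1 = {"q"}
E = {"p": {"p", "q"}, "q": {"p", "q"}}

def is_valid_game_graph(V0, V1, E):
    """Check the standing assumptions: V0 and V1 partition the vertex set
    and every vertex has at least one successor inside the graph."""
    V = V0 | V1
    return (V0.isdisjoint(V1)
            and set(E) == V
            and all(E[v] and E[v] <= V for v in V))

print(is_valid_game_graph(V0, V1, E))  # True
```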

Games and strategies
A game is a tuple G = (G, Φ), where G is a game graph and Φ is a winning condition over G. A strategy of Player i, i ∈ {0, 1}, is a partial function π_i : V*V_i → V such that for every pv ∈ V*V_i for which π_i is defined, it holds that π_i(pv) ∈ E(v). Given a strategy π_i, we say that a play ρ = v_0 v_1 . . . is compliant with π_i if v_{k+1} = π_i(v_0 . . . v_k) whenever v_k ∈ V_i, and a finite play p is compliant with π_i only if π_i(p) is undefined. We refer to a play compliant with π_i and a play compliant with both π_0 and π_1 as a π_i-play and a π_0π_1-play, respectively. We collect all plays compliant with π_i, and compliant with both π_0 and π_1, in the sets L(π_i) and L(π_0π_1), respectively.

Winning. Given a game G = (G, Φ), a strategy π_i is (surely) winning for Player i if L(π_i) ⊆ L(Φ), i.e., a Player 0 strategy π_0 is winning if for every Player 1 strategy π_1 it holds that L(π_0π_1) ⊆ L(Φ). Similarly, a fixed strategy profile (π_0, π_1) is cooperatively winning if L(π_0π_1) ⊆ L(Φ). We say that a vertex v ∈ V is winning for Player i (resp. cooperatively winning) if there exists a winning strategy π_i (resp. a cooperatively winning strategy profile (π_0, π_1)) s.t. π_i(v) is defined. We collect all winning vertices of Player i in the Player i winning region ⟨⟨i⟩⟩Φ ⊆ V and all cooperatively winning vertices in the cooperative winning region ⟨⟨0,1⟩⟩Φ. We note that ⟨⟨i⟩⟩Φ ⊆ ⟨⟨0,1⟩⟩Φ for both i ∈ {0, 1}.

Adequately Permissive Assumptions for Synthesis
Given a two-player game G, the goal of this paper is to compute assumptions on Player 1 (i.e., the environment), such that both players cooperate just enough to fulfill Φ while retaining all possible cooperative strategy choices. Towards a formalization of this intuition, we define winning under assumptions.
Definition 1. Let G = (G, Φ) be a game and Ψ be an LTL formula over V. Then a Player 0 strategy π_0 is winning in G under assumption Ψ if for every Player 1 strategy π_1 s.t. L(π_1) ⊆ L(Ψ) it holds that L(π_0π_1) ⊆ L(Φ). We denote by ⟨⟨0⟩⟩_Ψ Φ the set of vertices from which such a Player 0 strategy exists.
We see that the assumption Ψ introduced in Def. 1 weakens the strategy choices of the environment player (Player 1). We call assumptions sufficient if this weakening is strong enough to allow Player 0 to win from every vertex in the cooperative winning region.

Definition 2. An assumption Ψ is sufficient for (G, Φ) if ⟨⟨0,1⟩⟩Φ ⊆ ⟨⟨0⟩⟩_Ψ Φ.
Unfortunately, sufficient assumptions can be abused to change the given synthesis problem in an unintended way. Consider for instance the game in Fig. 2 (left) with Φ = ♦{v_0} and Ψ = □♦e_1. Here, there is no strategy π_1 for Player 1 such that L(π_1) ⊆ L(Ψ), as the system can always falsify the assumption by simply not choosing e_1 infinitely often in v_1. Therefore, any Player 0 strategy is winning under assumption even if Φ is violated. The assumption Ψ, however, is trivially sufficient, as ⟨⟨0⟩⟩_Ψ Φ = V. In order to prevent sufficient assumptions from being falsifiable and thereby enabling vacuous winning, we define the notion of implementability, which ensures that Ψ solely restricts Player 1 moves.
An assumption which is sufficient and implementable ensures that the cooperative winning region of the original game coincides with the winning region under that assumption, i.e., ⟨⟨0⟩⟩_Ψ Φ = ⟨⟨0,1⟩⟩Φ. However, it does not yet ensure that all cooperative strategy choices of both players are retained, which is ensured by the notion of permissiveness.
This notion of permissiveness is motivated by the intended use of assumptions for compositional synthesis. In the simplest scenario of two interacting processes, two synthesis tasks, one for each process, are considered in parallel. Here, assumptions generated in one synthesis task are used as additional specifications in the other synthesis task. Therefore, permissiveness is crucial to not "skip" over possible cooperative solutions: each synthesis task needs to keep all allowed strategy choices for both players intact to allow for compositional reasoning. This scenario is illustrated in the following example to motivate the considered class of assumptions. Formalizing assumption-based compositional synthesis in general is, however, out of the scope of this paper.
Example 1. Consider the (non-zero-sum) two-player game in Fig. 2 (middle) with two different specifications Φ_0 and Φ_1 for the two players, and the two assumptions Ψ_0 and Ψ'_0 on Player 1. Notice that both assumptions are sufficient and implementable for (G, Φ_0). However, Ψ_0 does not allow the play {v_1}^ω and hence is not permissive, whereas Ψ'_0 is permissive for (G, Φ_0). As a consequence, there is no way Player 1 can satisfy both her objective Φ_1 and the assumption Ψ_0 even if Player 0 cooperates, since L(Φ_1) ∩ L(Ψ_0) = ∅. However, under the assumption Ψ'_0 on Player 1 and the assumption Ψ_1 = ♦□¬e_3 on Player 0 (which is sufficient and implementable for (G, Φ_1) if we interchange the vertices of the players), they can satisfy both their own objectives and the assumptions on themselves. Therefore, they can collectively satisfy both their objectives.

Remark 1. We remark that for Ex. 1, the algorithm in [9] outputs Ψ_0 as the desired assumption for the game (G, Φ_0), and their assumption formalism is not rich enough to capture the assumption Ψ'_0. This shows that the assumption type we are interested in is not computable by the algorithm from [9].

Definition 5. An assumption Ψ is called adequately permissive (an APA for short) for (G, Φ) if it is sufficient, implementable and permissive.

Discussion on Definition 1
We first note some simple but interesting consequences of Def. 1. First, we have anti-monotonicity, i.e., if assumption Ψ_1 is stronger than assumption Ψ_2 (in terms of play inclusion), and π_0 is winning under Ψ_2, then it is also winning under Ψ_1. As a direct consequence of this observation, we also have conjunctivity, i.e., if π_0 is winning under Ψ_1 and π_0 is winning under Ψ_2, then π_0 is winning under Ψ_1 ∧ Ψ_2. Interestingly, however, Def. 1 does not allow for disjunctivity, i.e., if π_0 is winning under Ψ_1 and π_0 is winning under Ψ_2, then it need not be winning under Ψ_1 ∨ Ψ_2. This last observation is illustrated by the following example.
Example 2. Consider the game graph in Fig. 3 with the specification Φ = ♦□{a} (which requires the play to eventually only see vertex a). Then consider the assumptions Ψ_1 = ¬e_0 U X e_1 (when edge e_0 is taken for the first time, the next edge should be e_1) and Ψ_2 = ¬e_0 U X e_2 (when edge e_0 is taken for the first time, the next edge should be e_2). Notice that there is only one Player 1 strategy π_1 satisfying either assumption, i.e., the one that never uses edge e_0. So, any play compliant with π_1 eventually only visits vertex a, and hence, is winning. Therefore, any Player 0 strategy is winning under either assumption. In particular, consider the strategy π_0 that only uses edge e_1. Then π_0 is winning under assumption Ψ_i for each i. However, π_0 is not winning under Ψ := Ψ_1 ∨ Ψ_2 ≡ true. To see this, note that assumption Ψ can be satisfied by any Player 1 strategy, in particular, the strategy π'_1 that always uses e_0 from state a. It is easy to see that the combination of π'_1 with π_0 yields the play (abc)^ω that satisfies Ψ but not Φ. Hence, π_0 is not winning under assumption Ψ.

In addition, we want to remark that Def. 1 slightly differs from the typical linear-time synthesis setting, where winning under assumption would be naturally defined in terms of plays instead of strategies. We therefore want to briefly discuss this setting and give some intuition why it coincides with our definition of winning for the special type of assumptions we compute.
We start by giving an alternative formulation of Def. 1 in terms of plays.
Definition 6. Let G = ((V, V_0, V_1, E), Φ) be a game and Ψ be an LTL formula over V. Then a Player 0 strategy π_0 is winning in G under assumption Ψ if every play ρ ∈ L(π_0) either fails to satisfy the assumption Ψ or satisfies the specification Φ.
It is easy to observe that a strategy π_0 that is winning under assumption by Def. 6 is also winning under assumption by Def. 1. However, the other direction is not true in general, as shown by the following example.
Example 3. Consider the same game as in Example 2, i.e., the game in Fig. 3 with specification Φ = ♦□{a}. Also, consider the assumption Ψ_1 = ¬e_0 U X e_1 as in Example 2. Then by the same arguments as before, the strategy π_0, which only uses edge e_1, is winning under Ψ_1 by Def. 1. However, note that the play (abc)^ω is compliant with π_0 and satisfies Ψ_1 but does not satisfy Φ. Hence, π_0 is not winning under Ψ_1 by Def. 6.
Interestingly, the class of assumptions we compute in this paper does not allow for examples of the sort presented above. Intuitively, this is due to the fact that these assumptions are implementable by Player 1 and realized by a combination of very local templates. These assumptions can therefore be enforced by only restricting the moves of Player 1. This implies that for any play ρ that complies with the assumption and π_0, there does exist a strategy π_1 satisfying the assumption which results in ρ. Hence, any strategy π_0 which is winning under assumption by Def. 1 is also winning under assumption by Def. 6. The complete proof of the equivalence between the two definitions for our class of assumptions can be found in Appendix F, as it requires the results of the next sections.
We conclude this subsection by noting that the choice of our formulation of 'winning under assumption' is inspired by distributed synthesis. Here, the environment agents might be unknown to the system. Our definition allows us to naturally argue that the strategy π_0 of Player 0 (System) is winning for any strategy π_1 that Player 1 (Environment) may choose to satisfy the assumption.

Computing Adequately Permissive Assumptions (APA)
In this section, we present our algorithm to compute adequately permissive assumptions (APA for short) for parity games, which are canonical representations of ω-regular games. For a gradual exposition of the topic, we first present algorithms for simpler winning conditions, namely safety (Sec. 4.2), Büchi (Sec. 4.3), and co-Büchi (Sec. 4.4), which are used as building blocks while presenting the algorithm for parity games (Sec. 4.5). We first introduce some preliminaries.

Preliminaries
We use symbolic fixpoint algorithms expressed in the µ-calculus [18] to compute the winning regions and to generate assumptions in simple post-processing steps.

Set Transformers. Let G = (V, V_0, V_1, E) be a game graph, U ⊆ V be a subset of vertices, and a ∈ {0, 1} be the player index. Then we define two types of predecessor operators. The predecessor operator

    pre_G(U) := {v ∈ V | E(v) ∩ U ≠ ∅}

computes the set of vertices with at least one successor in U. The controllable predecessor operators cpre^a_G(U) and cpre^{a,i}_G(U) compute the set of vertices from which Player a can force visiting U in at most one and i steps, respectively, where

    cpre^a_G(U) := {v ∈ V_a | E(v) ∩ U ≠ ∅} ∪ {v ∈ V_{1−a} | E(v) ⊆ U}.

In the following, we introduce the attractor operator attr^a_G(U) := µX. cpre^a_G(U ∪ X) that computes the set of vertices from which Player a can force at least a single visit to U in finitely many but nonzero steps. When clear from the context, we drop the subscript G from these operators.
Fixpoint Algorithms in the µ-calculus. The µ-calculus [18] offers a succinct representation of symbolic algorithms (i.e., algorithms manipulating sets of vertices instead of individual vertices) over a game graph G. The formulas of the µ-calculus, interpreted over a 2-player game graph G, are given by the grammar

    φ := p | X | φ ∪ φ | φ ∩ φ | pre(φ) | µX. φ | νX. φ,

where p ranges over subsets of V, X ranges over a set of formal variables, pre ranges over monotone set transformers in {pre, cpre^a, attr^a}, and µ and ν denote, respectively, the least and the greatest fixed point of the functional defined as X → φ(X). Since the operations ∪, ∩, and the set transformers pre are all monotonic, the fixed points are guaranteed to exist, due to the Knaster-Tarski Theorem [5]. We omit the (standard) semantics of formulas (see [18]).
A µ-calculus formula evaluates to a set of vertices over G, and the set can be computed by induction over the structure of the formula, where the fixed points are evaluated by iteration. The reader may note that pre and cpre can be computed in time polynomial in the number of vertices, and since the game graph is finite, attr is also computable in polynomial time.
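As an illustration, the set transformers admit a direct set-based implementation. The following sketch is our own (using an ad-hoc successor-map encoding of the graph, not the paper's tooling); it evaluates attr^a as the least fixpoint of X ↦ cpre^a(U ∪ X) by plain iteration.

```python
def pre(E, U):
    """pre(U): vertices with at least one successor in U."""
    return {v for v, succ in E.items() if succ & U}

def cpre(E, Va, U):
    """cpre_a(U): Player a forces a visit to U in one step -- Player a's own
    vertices need SOME successor in U, the opponent's need ALL successors in U."""
    return ({v for v in Va if E[v] & U}
            | {v for v in E if v not in Va and E[v] <= U})

def attr(E, Va, U):
    """attr_a(U): least fixpoint of X -> cpre_a(U ∪ X); vertices from which
    Player a forces a visit to U in finitely many but nonzero steps."""
    X = set()
    while True:
        Xn = cpre(E, Va, U | X)
        if Xn == X:
            return X
        X = Xn

# A chain a -> b -> c with a self-loop on c; all vertices owned by one player.
E = {"a": {"b"}, "b": {"c"}, "c": {"c"}}
print(sorted(attr(E, {"a", "b", "c"}, {"c"})))  # ['a', 'b', 'c']
```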

Safety Games
A safety game is a game G = (G, Φ) with Φ := □U for some U ⊆ V, and a play fulfills Φ if it never leaves U. APAs for safety games disallow every Player 1 move that leaves the cooperative winning region in G w.r.t. □U. This is formalized in the following theorem.
Theorem 1. Let G = (G, □U) be a safety game and let S := {(u, v) ∈ E | u ∈ V_1 ∩ ⟨⟨0,1⟩⟩□U and v ∉ ⟨⟨0,1⟩⟩□U} be the set of Player 1 edges leaving the cooperative winning region. Then

    Ψ_unsafe(S) := ⋀_{e∈S} □¬e     (6)

is an APA for the game G. We denote by UnsafeA(G, U) the algorithm computing S as above, which runs in time O(n^2), where n = |V|.
We call the LTL formula in (6) a safety template and assumptions that solely use this template safety assumptions.
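For intuition, the safety construction can be prototyped directly: compute the cooperative winning region for □U as the greatest fixpoint of Y ↦ U ∩ pre(Y) and collect every Player 1 edge leaving it. This is our own sketch of what UnsafeA computes, not the paper's implementation.

```python
def coop_safe(E, U):
    """⟨⟨0,1⟩⟩□U: greatest fixpoint of Y -> U ∩ pre(Y) -- vertices from which
    some play (both players cooperating) stays in U forever."""
    Y = set(U)
    while True:
        Yn = {v for v in Y if E[v] & Y}
        if Yn == Y:
            return Y
        Y = Yn

def unsafe_assumption(E, V1, U):
    """S as in Theorem 1 (our reading): Player 1 edges leaving the cooperative
    winning region.  The APA conjoins □¬e over all e ∈ S."""
    W = coop_safe(E, U)
    return {(u, v) for u in W & V1 for v in E[u] if v not in W}

# Player 1 owns v0 and can either stay safe (v0 -> v0) or leave U (v0 -> v1).
E = {"v0": {"v0", "v1"}, "v1": {"v1"}}
print(unsafe_assumption(E, {"v0"}, {"v0"}))  # {('v0', 'v1')}
```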

Live Group Assumptions for Büchi Games
Büchi games. A Büchi game is a game G = (G, Φ) where Φ = □♦U for some U ⊆ V. Intuitively, a play is winning for a Büchi game if it visits the vertex set U infinitely often. We first recall that the cooperative winning region ⟨⟨0,1⟩⟩□♦U can be computed by a two-nested symbolic fixpoint algorithm [10]:

    Büchi(G, U) := νY. µX. (U ∩ pre(Y)) ∪ pre(X).     (7)
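The two-nested fixpoint can be evaluated by straightforward iteration. The sketch below (our own encoding, successor-map graphs as before) computes the cooperative winning region of a Büchi game; since both players cooperate, the plain pre operator is used throughout.

```python
def pre(E, U):
    """Vertices with at least one successor in U."""
    return {v for v, succ in E.items() if succ & U}

def coop_buchi(E, U):
    """⟨⟨0,1⟩⟩□♦U via νY. µX. (U ∩ pre(Y)) ∪ pre(X)."""
    Y = set(E)                      # ν: start from all vertices
    while True:
        X = set()                   # µ: start from the empty set
        while True:
            Xn = (U & pre(E, Y)) | pre(E, X)
            if Xn == X:
                break
            X = Xn
        if X == Y:
            return Y
        Y = X

# A 2-cycle a <-> b can visit {a} infinitely often; the sink c cannot.
E = {"a": {"b"}, "b": {"a"}, "c": {"c"}}
print(sorted(coop_buchi(E, {"a"})))  # ['a', 'b']
```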
Live group templates. Given the standard algorithm in (7), the set X^i computed in the i-th iteration of the fixpoint variable X in the last iteration of Y actually carries a lot of information to construct a very useful assumption for the Büchi game G. To see this, recall that X^i contains all vertices which have an edge to vertices which can reach U in at most i − 1 steps [10, Sec. 3.2]. Hence, for all Player 1 vertices in X^i \ X^{i−1} we need to assume that Player 1 always eventually makes progress towards U by moving into X^{i−1}. This can be formalized by a so-called live group template.
Definition 7. Let G = (V, E) be a game graph. Then a live group H = {e_j}_{j≥0} is a set of edges e_j = (s_j, t_j) with source vertices src(H) := {s_j}_{j≥0}. Given a set of live groups H = {H_i}_{i≥0}, we define a live group template as

    Ψ_live(H) := ⋀_{H∈H} (□♦src(H) → □♦H).     (8)

The live group template says that if some vertex from the source of a live group is visited infinitely often, then some edge from this group should be taken infinitely often. We will use this template to give the assumptions for Büchi games.
Remark 2. We note that Chatterjee et al. [8] used live edges in their environment assumptions. Live edges are singleton live groups and are thereby less expressive. In particular, there are instances of Büchi games where there is no permissive live edge assumption but there is a permissive live group assumption. E.g., in Fig. 1 (c) the live edge assumption □♦e_1 ∧ □♦e_2 is sufficient but not permissive, whereas the live group assumption □♦src(H) → □♦H with H = {e_1, e_2} is an APA.
In the context of the fixpoint computation of (7), we can construct live groups H = {H_i}_{i≥0} where each H_i contains all edges of Player 1 which originate in X^i \ X^{i−1} and end in X^{i−1}. Then the live group assumption in (8) precisely captures the intuition that, in order to visit U infinitely often, Player 1 should take edges in H_i infinitely often if vertices in src(H_i) are seen infinitely often. Unfortunately, it turns out that this live group assumption is not permissive. The reason is that it restricts Player 1 also on those vertices from which he will anyway go towards U. For example, consider the game in Fig. 2 (right). Here, defining live groups through computations of (10) will mark e_1 as a live group, but then (v_2 v_1 v_0)^ω will be in L(Φ) but not in the language of the assumption.
Here the permissive assumption would be Ψ = true.

Accelerated fixpoint computation.
In order to compute a permissive live group assumption, we use a slightly modified fixpoint algorithm which computes the same set Z* but allows us to extract permissive assumptions directly from the fixpoint computations. Towards this goal, we introduce the together predecessor operator

    tpre_G(U) := U ∪ attr^0_G(U) ∪ (V_1 ∩ pre_G(U ∪ attr^0_G(U))).

Intuitively, tpre adds all vertices from which Player 0 does not need any cooperation to reach U in every iteration of the fixpoint computation. The interesting observation we make is that substituting the inner pre operator in (7) by tpre does not change the computed set but only accelerates the computation. This is formalized in the next proposition and visualized in Fig. 4.

Proposition 1. Let TBüchi(G, U) := νY. µX. (U ∩ pre(Y)) ∪ tpre(X). Then TBüchi(G, U) = Büchi(G, U) = ⟨⟨0,1⟩⟩□♦U.
Prop. 1 follows from the correctness proof of (7) by using the observation that for all U ⊆ V we have µX. U ∪ pre(X) = µX. U ∪ tpre(X), which is proven in the Appendix, Lem. 1.
Computing live group assumptions. Intuitively, the operator tpre_G computes the union of (i) the set of vertices from which Player 0 can reach U in a finite number of steps with no cooperation from Player 1, and (ii) the set of Player 1 vertices from which Player 0 can reach U with at most one-time cooperation from Player 1. Looking at Fig. 4, case (i) is indicated by the dotted line, while case (ii) corresponds to the last added Player 1 vertex (e.g., v_5). Hence, we need to capture the cooperation needed by Player 1 only from the vertices added last, which we call the frontier of U in G, formalized as

    front_G(U) := tpre_G(U) \ (U ∪ attr^0_G(U)).

It is easy to see that, indeed, front(U) ⊆ V_1, as whenever v ∈ front(U) ∩ V_0, then it would have been the case that v ∈ attr^0_G(U) via (10). Defining live groups based on frontiers instead of all elements in X^i indeed yields the desired permissive assumption for Büchi games. By observing that we additionally need to ensure that Player 1 never leaves the cooperative winning region via a simple safety assumption, we get the following result, which is the main contribution of this section and is proved in the appendix.
Theorem 2. Let G = (G, □♦U) be a Büchi game and H = {H_i}_{i>0} with

    H_i := {(u, v) ∈ E | u ∈ front(X^{i−1}) and v ∈ X^{i−1} ∪ attr^0(X^{i−1})},

where X^i is the set computed in the i-th iteration of the computation over X and in the last iteration of the computation over Y in TBüchi. Then Ψ = Ψ_unsafe(S) ∧ Ψ_live(H) is an APA for G, where S = UnsafeA(G, U). We write LiveA(G, U) to denote the algorithm to construct live groups H as above, which runs in time O(n^3), where n = |V|.
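The construction behind LiveA can be prototyped as follows. This is our own reading of the algorithm: tpre and the frontier are implemented from the prose above, and the exact shape of the live groups (edges from each frontier back into the already-computed region) is an assumption, not the paper's code.

```python
def pre(E, U):
    return {v for v, succ in E.items() if succ & U}

def attr0(E, V0, U):
    """attr^0(U): vertices from which Player 0 alone forces a visit to U."""
    X = set()
    while True:
        T = U | X
        Xn = ({v for v in V0 if E[v] & T}
              | {v for v in E if v not in V0 and E[v] <= T})
        if Xn == X:
            return X
        X = Xn

def tpre(E, V0, U):
    """tpre(U), reconstructed from the prose: Player 0's attractor of U plus
    the Player 1 vertices needing one-time cooperation to enter it."""
    A = U | attr0(E, V0, U)
    return A | {v for v in pre(E, A) if v not in V0}

def live_groups(E, V0, U):
    """Sketch of LiveA: evaluate νY. µX. (U ∩ pre(Y)) ∪ tpre(X) and emit one
    live group per frontier, containing its edges back into the current region."""
    Y = set(E)
    while True:
        groups, X = [], set()
        while True:
            seed = U & pre(E, Y)
            A = X | attr0(E, V0, X)
            front = tpre(E, V0, X) - A          # Player 1 frontier of X
            H = {(u, v) for u in front for v in E[u] if v in A}
            if H:
                groups.append(H)
            Xn = seed | A | front
            if Xn == X:
                break
            X = Xn
        if X == Y:
            return groups
        Y = X

# Fig. 1 (c)-style game: Player 1's q must infinitely often move to p ∈ U.
E = {"q": {"q", "p"}, "p": {"q"}}
print(live_groups(E, {"p"}, {"p"}))  # [{('q', 'p')}]
```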
In fact, there is a faster algorithm, running in time linear in the size of the graph, for the computation of APAs for Büchi games, which we present in Appendix C.1. We chose to present the µ-calculus-based algorithm here, because it provides more insights into the nature of live groups.

Co-Liveness Assumptions in Co-Büchi Games
A co-Büchi game is the dual of a Büchi game, where a winning play should visit a designated set of vertices only finitely many times. Formally, a co-Büchi game is a tuple G = (G, Φ) where Φ = ♦□U for some U ⊆ V. The standard symbolic algorithm to compute the cooperative winning region is:

    CoBüchi(G, U) := µX. νY. (U ∩ pre(Y)) ∪ pre(X).     (13)

As before, the sets X^i obtained in the i-th computation of X during the evaluation of (13) carry essential information for constructing assumptions. Intuitively, X^1 gives precisely the set of vertices from which the play can stay in U with Player 1's cooperation, and we would like an assumption to capture the fact that we do not want Player 1 to move away from X^1 infinitely often. This observation is naturally described by so-called co-liveness templates.
Definition 8. Let G = (V, E) be a game graph and D ⊆ V × V a set of edges. Then a co-liveness template over G w.r.t. D is defined by the LTL formula

    Ψ_colive(D) := ⋀_{e∈D} ♦□¬e.

The assumptions employing co-liveness templates will be called co-liveness assumptions. With this, we can state the main result of this section.
Theorem 3. Let G = (G, ♦□U) be a co-Büchi game and let

    D := {(u, v) ∈ E | u ∈ V_1 ∩ X^1 and v ∉ X^1} ∪ {(u, v) ∈ E | u ∈ V_1 ∩ (X^i \ X^{i−1}) and v ∉ X^{i−1}, for some i > 1},     (15)

where X^i is the set computed in the i-th iteration of the fixpoint variable X in CoBüchi. Then Ψ = Ψ_unsafe(S) ∧ Ψ_colive(D) is an APA for G, where S = UnsafeA(G, U). We write CoLiveA(G, U) to denote the algorithm constructing co-live edges D as above, which runs in time O(n^3), where n = |V|.
We observe that X^1 is a subset of U such that if a play reaches X^1, Player 0 and Player 1 can cooperatively keep the play in X^1. To do so, we ensure via the definition of D in (15) that Player 1 can leave X^1 only finitely often. Moreover, with the other co-live edges in D, we ensure that Player 1 can move away from X^1 only finitely often, and hence, if Player 0 plays their strategy to reach X^1 and then stay there, the play will be winning. The permissiveness of the assumption comes from the observation that if co-liveness is violated, then Player 1 takes a co-live edge infinitely often, and hence leaves X^1 infinitely often, implying leaving U infinitely often. We refer the reader to Appendix D for a formal proof of Theorem 3.
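The co-Büchi construction can be prototyped analogously. The sketch below is our own reading of CoLiveA on top of the standard µν fixpoint (13): X^1 is the innermost layer, and every Player 1 edge that leaves an already-computed layer is marked co-live; the exact layer bookkeeping is an assumption.

```python
def pre(E, U):
    return {v for v, succ in E.items() if succ & U}

def cobuchi_layers(E, U):
    """µX. νY. (U ∩ pre(Y)) ∪ pre(X): cooperative ♦□U region together with
    the intermediate X-layers used for assumption extraction."""
    layers, X = [], set()
    while True:
        Y = set(E)                  # inner ν over Y
        while True:
            Yn = (U & pre(E, Y)) | pre(E, X)
            if Yn == Y:
                break
            Y = Yn
        if Y == X:
            return X, layers
        X = Y
        layers.append(set(X))

def colive_edges(E, V1, U):
    """Sketch of CoLiveA: Player 1 edges leaving X^1, plus edges of later-added
    Player 1 vertices that move away from the previous layer."""
    _, layers = cobuchi_layers(E, U)
    if not layers:
        return set()
    X1, D = layers[0], set()
    D |= {(u, v) for u in X1 & V1 for v in E[u] if v not in X1}
    prev = X1
    for X in layers[1:]:
        D |= {(u, v) for u in (X - prev) & V1 for v in E[u] if v not in prev}
        prev = X
    return D

# Fig. 1 (b)-style game: q may take its self-loop only finitely often
# before settling in p.
E = {"p": {"p"}, "q": {"p", "q"}}
print(colive_edges(E, {"q"}, {"p"}))  # {('q', 'q')}
```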
In the context of (13), the set D as above results in the desired co-live assumptions. We argued that this defines an adequately permissive assumption for co-Büchi games. However, utilizing the observation from Section 4.3, we can equivalently use the accelerated fixpoint algorithm resulting from replacing the pre operator over X in (13) by the tpre operator. This again only accelerates the computation, as formalized in the following proposition and visualized in Fig. 5.
When using the accelerated fixpoint algorithm, we can again restrict attention to Player 1 vertices in the frontier of X^i for the construction of assumptions. In Section 4.3, for the Büchi objective □♦U, we introduced live groups to take the play towards U whenever Player 1 can. But for co-Büchi games, we need to restrict Player 1 from moving away from the region U_good where the play stays in U. This requires co-liveness assumptions only over the frontiers of the X^i's, since any other vertex of Player 1 added to X^{i+1} is added in the attr^0 part of tpre, and hence cannot move away from U_good anyway. With this, we have the following main result of this section, for which we provide the proof in Appendix D.1.
Theorem 4. Let G = (G, ♦□U) be a co-Büchi game and let D be constructed as in (15), but with co-live edges taken only from the frontiers front(X^{i−1}), where X^i is the set computed in the i-th iteration of the fixpoint variable X in the accelerated algorithm. Then Ψ = Ψ_unsafe(S) ∧ Ψ_colive(D) is an APA for G, where S = UnsafeA(G, U). Moreover, D can be constructed in time O(n^3), where n is the number of vertices.
In fact, there is again a faster algorithm, running in time linear in the size of the graph, for the computation of APAs for co-Büchi games, which we present in Appendix D.2. We chose to present this version for the same reasons as for the Büchi games.

Fig. 5: Each colored region describes how X grows after every iteration; the dotted region on the right is added by the attr part of tpre. The edges in red describe the co-live edges in both cases. Again, the computation with pre would give assumptions on Player 0 vertices, while that with tpre gives assumptions only on Player 1 vertices.

APA Assumptions for Parity Games
Parity games. Let G = ⟨V, V_0, V_1, E⟩ be a game graph, and let C = {C_0, ..., C_k} be a set of subsets of vertices forming a partition of V. Then the game (G, Parity(C)) is called a parity game. The set C is called the priority set, and a vertex v in the set C_i, for i ∈ [0; k], is said to have priority i. An infinite play ρ is winning for Φ = Parity(C) if the highest priority appearing infinitely often along ρ is even.
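The parity condition just stated is easy to evaluate on an ultimately periodic play: such a play sees exactly the priorities on its cycle infinitely often. The following small sketch (our own illustration, not from the paper) checks the winner of a lasso-shaped play.

```python
def lasso_winner_even(cycle, priority):
    """A play that eventually loops through `cycle` forever sees exactly
    the cycle's priorities infinitely often, so Player 0 wins iff the
    maximum priority on the cycle is even. `priority` maps vertices to
    their priority."""
    return max(priority[v] for v in cycle) % 2 == 0
```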
Conditional live group templates. As seen in the previous sections, for games with simple winning conditions, which require visiting a fixed set of edges infinitely often or only finitely often, a single assumption (conjoined with a simple safety assumption) suffices to characterize APAs, as there is just one way to win. In general parity games, however, there are usually multiple ways of winning: for example, in a parity game with priorities {0, 1, 2}, a play is winning if either (i) it sees only vertices of priority 0 infinitely often, or (ii) it sees priority 1 infinitely often but also sees priority 2 infinitely often. Intuitively, winning option (i) requires the use of co-liveness assumptions as in Sec. 4.4. Winning option (ii), however, requires the live group assumptions discussed in Sec. 4.3 to be conditional on whether certain states with priority 1 are actually visited infinitely often. This is formalized by generalizing live group templates to conditional live group templates.
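The semantics of a conditional live group can be sketched as follows (names and the pair representation are our assumptions): a template (R, live_groups) is satisfied by a play iff, whenever vertices of R are visited infinitely often, some edge of some live group is taken infinitely often. For an ultimately periodic play it suffices to inspect the sets of vertices and edges seen infinitely often.

```python
def cond_live_group_holds(inf_vertices, inf_edges, R, live_groups):
    """Sketch of the conditional live group semantics described above.
    `inf_vertices`/`inf_edges` are the vertices/edges a play visits
    infinitely often; `live_groups` is a list of edge sets."""
    if not (inf_vertices & R):
        return True  # condition never triggered: vacuously satisfied
    # triggered: some edge of some live group must recur
    return any(group & inf_edges for group in live_groups)
```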
Again, assumptions employing conditional live group templates will be called conditional live group assumptions. With this generalization of live group assumptions, we have all the ingredients to define an APA for parity games as a conjunction of a safety, a co-liveness, and a conditional live group assumption. Intuitively, we use (i) a safety assumption to prevent Player 1 from leaving the cooperative winning region, (ii) a co-live assumption for each winning option that requires seeing a particular odd priority only finitely often, and (iii) a conditional live group assumption for each winning option that requires seeing an even priority infinitely often if certain odd priorities are seen infinitely often. The remainder of this section gives an algorithm (Alg. 1) to compute the actual safety, co-live, and conditional live group sets S, D, and H, respectively, and proves that the resulting assumption Ψ (as in (21)) is indeed an APA for the parity game G.
Computing APAs. The computation of the unsafe, co-live, and conditional live group sets S, D, and H that make Ψ in (21) an APA is formalized in Alg. 1.
Alg. 1 utilizes the standard fixpoint algorithm Parity(G, C) [12] to compute the cooperative winning region of a parity game G, defined as a nested fixpoint over variables X_0, ..., X_d in which the operator τ binding X_d is ν if d is even, and µ otherwise. In addition, Alg. 1 invokes the algorithms UnsafeA (Thm. 1), LiveA (Thm. 2), and CoLiveA (Thm. 3) to compute safety, live group, and co-liveness assumptions in an iterative manner.

Fig. 6: A parity game, where a vertex with priority i has label c_i. The dotted edges are the unsafe edges, the dashed edges are the co-live edges, and every similarly colored vertex-edge pair forms a conditional live group.

In addition, G|_U denotes the restriction of the game graph G to a subset of its vertices U ⊆ V. Further, C|_U denotes the restriction of the priority set C from V to U ⊆ V.
We illustrate the steps of Alg. 1 with the example depicted in Fig. 6. In line 1, we begin by computing the cooperative winning region Z* of the entire game, finding that from vertex v_7 there is no way to satisfy the parity condition even with Player 1's cooperation, i.e., Z* = {v_1, ..., v_6}. So we mark the edge from v_6 to v_7 as a safety-assumption edge, restrict the game to G = G|_{Z*}, and run ComputeSets on the new game.
In the new restricted game G, the highest priority is d = 5, which is odd; hence we execute lines 9-10. Now a play is winning only if it eventually stops visiting v_5. Hence, in line 9, we find the region W_¬5 = {v_1, ..., v_4, v_6} of the restricted graph G|_{V\C_5} (containing only vertices v_i with priority C(v_i) < 5) from where we can satisfy the parity condition without seeing v_5. We then make sure that we do not leave W_¬5 to visit v_5 in the game G infinitely often, by executing CoLiveA(G, W_¬5) in line 10. This puts a co-liveness assumption on the edges (v_5, v_5) and (v_6, v_5).
Once we restrict a play from visiting v_5 infinitely often, we only need to focus on satisfying parity without visiting v_5 within W_¬5. This observation allows us to restrict the computation further to the game G = G|_{W_¬5} in line 16, where we also update the priorities to range only from 0 to 4. In our example this step does not change anything. We then re-execute ComputeSets on this game.
In the restricted graph, the highest priority is 4, which is even; hence we execute lines 12-14. One way of winning in this game is to visit C_4 infinitely often, so we compute the respective cooperative winning region W_4 in line 12.
In our example we have W_4 = W_¬5 = {v_1, ..., v_4, v_6}. Now, to ensure that we actually win from the vertices from which we can cooperatively see priority 4, we have to make sure that whenever a lower odd priority is visited infinitely often, a higher even priority is also visited infinitely often. This can be ensured by conditional live group fairness as computed in line 14. For every odd priority i < 4 (i.e., for i = 1 and i = 3), we have to make sure that either 2 or 4 (if i = 1), or 4 (if i = 3), is visited infinitely often. The resulting conditional live groups H_i = (R_i, H_i) collect all vertices in W_4 with priority i in R_i, and all live groups allowing to see even priorities j with i < j ≤ 4 in H_i, where the latter are computed using the fixpoint algorithm LiveA. The resulting live groups for i = 1 (blue) and i = 3 (red) are depicted in Fig. 6.

At this point we have W_¬4 = ∅. With this, the game graph computed in line 16 becomes empty, and the algorithm eventually terminates after iteratively removing all priorities from C, with ComputeSets being run (without any computations, as G is empty) for priorities 3, 2, and 1. In a different game graph, the reasoning done for priorities 5 and 4 above could also repeat for lower priorities if there are other parts of the game graph not contained in W_4 from where the game can be won by seeing priority 2 infinitely often. The main insight into the correctness of the outlined algorithm is that all computed assumptions can be conjoined to obtain an APA for the original parity game.

Main result. With Alg. 1 in place, we can now state the main result of this section, and in particular of the entire paper, proven in Appendix E.

Experimental Evaluation
We have developed a C++-based prototype tool SImPA computing Sufficient, Implementable and Permissive Assumptions for Büchi, co-Büchi, and parity games. We first compare SImPA against the closest related tool GIST [9] in Sec. 5.1. We then show that SImPA gives small and meaningful assumptions for the well-known 2-client arbiter synthesis problem from [22] in Sec. 5.2.

Performance Evaluation
We compare the effectiveness of our tool against a re-implementation of the closest related tool, GIST [9], which is no longer available from the authors. GIST originally computes assumptions that only enable a particular initial vertex to become winning for Player 0. However, for the experiments, we run GIST until one of the cooperatively winning vertices is no longer winning.
Since GIST starts with a maximal assumption and keeps shrinking it until a fixed initial vertex is no longer winning, our modification makes GIST faster, as the modified termination condition is satisfied earlier. As our tool does not depend on any fixed initial vertex, and the dependency on the initial vertex makes GIST slower, this modification allows a fair comparison. We compared the performance and the quality of the assumptions computed by SImPA and GIST on a set of parity games collected from the SYNTCOMP benchmark suite [2]. For computing assumptions using both SImPA and GIST, we set a timeout of one hour. All experiments were performed on a computer equipped with an Intel(R) Core(TM) i5-10600T CPU @ 2.40GHz and 32 GiB RAM.
We provide all details of the experimental results in Table G in the appendix and summarize them in Table 1. In addition, Fig. 7 shows a scatter plot, where every benchmark instance is depicted as a point whose X and Y coordinates represent the running times (in seconds) of SImPA and GIST, respectively. We see that SImPA is computationally much faster than GIST on every instance (all dots lie above the lower red line), most times by one (above the middle green line) and many times even two (above the upper orange line) orders of magnitude.
Moreover, in some experiments GIST fails to compute a sufficient assumption (in the sense of Def. 2), whereas our algorithm successfully computes an APA (reported in the row labeled 'no assumption generated' in Table 1 and marked with '*' next to the computation times in Table G). This is not surprising, as the class of assumptions used by GIST consists only of unsafe edges and live edges (i.e., singleton live groups), which are not expressive enough to provide sufficient assumptions for all parity games (see Fig. 1(b) for a simple example where no sufficient assumption can be expressed using live edges). Furthermore, we note that in all cases where the assumptions computed by GIST are actually APAs, SImPA computes the same assumptions orders of magnitude faster.

2-Client Arbiter Example
We consider the 2-client arbiter example from [22]. In this example, clients i ∈ {1, 2} (Player 1) can request or free a shared resource by setting the input variables r_i to true or false, and the arbiter (Player 0) can set the output variables g_i to true or false to grant or withdraw the shared resource to/from client i. The game graph for this example is implicitly given as part of the specification (as this is a GR(1) synthesis problem, see [22] for details). We depict a relevant part of this game graph schematically in Fig. 8. Here, rectangles and circles represent Player 1 and Player 0 vertices, respectively, and the double-lined vertices have priority 2 (are Büchi vertices), while all other vertices have priority 1. The labels of the Player 0 states indicate the current status of the request and grant bits and, in addition, remember whether a request is currently pending, i.e., F_i = g_i S (r_i ∧ g_i), where S denotes the LTL operator "since". Labels of Player 1 vertices additionally remember the last move chosen by Player 0. We see that all vertices with no pending requests have priority 2.

It is known that no winning strategy for Player 0 exists in this game if the moves of the clients (Player 1) are unconstrained. Running SImPA on this example yields only live group assumptions (as this is a Büchi game and all vertices are cooperatively winning), computed in 0.01 seconds. The edges of one live group are indicated schematically by thick red arrows in Fig. 8. We see that this live group ensures that the play eventually moves to vertices where Player 0 can force a visit to a Büchi vertex. In [22], the assumption Ψ used to restrict the clients' behavior in order to render the synthesis problem realizable is a conjunction of three conditions. We see that our live group assumptions are similar but more permissive. For example, when we persistently see states with labels r_1 g_1 and r_2 g_2 (e.g., cycling through states 1 and 2 in Fig. 8), we enforce that eventually the edge labeled r_1 r_2 needs to be taken. On the other hand, the second and third conditions in Ψ (which are triggered by r_1 g_1 and r_2 g_2, respectively) enforce that no outgoing transition is allowed from state 2 other than the one labeled r_1 r_2, which is strictly more restrictive.
We have also run GIST on this example. It took 6.44 seconds to compute live edge assumptions for unrestricted initial conditions, which is two orders of magnitude slower than SImPA. Further, in order to see state 6 infinitely often, GIST returns the live edges 2 → 3 and 7 → 1. This assumption is not permissive, as there exist winning plays that do not use either of these edges infinitely often. It turns out that an APA for this example unavoidably requires live groups; singleton live edges, as computed by GIST, do not suffice.
Similarly, the least fixpoint µX. U ∪ tpre(X) gives a sequence of sets X, which we refer to as the tpre computation. We first observe that the tpre of a set contains the pre of the set.
Proof. We know cpre_0(U) ⊆ attr_0(U) by the definition of attr, and cpre_1(U) ⊆ cpre_1(attr_0(U) ∪ U) by the monotonicity of cpre_1. Then pre(U) ⊆ tpre(U).

Now we show that every vertex that appears in the i-th iteration of the pre computation also appears in the i-th iteration of the tpre computation.

Proof. We prove the claim by induction on i. For the base case, i = 0, the statement is trivially true. For the induction hypothesis (IH), assume that the statement holds for some i ∈ N.
X^t_{i+1} = tpre(X^t_i) ∪ X^t_i ⊇ tpre(X_i) ∪ X_i, by IH and monotonicity of tpre (24), ⊇ pre(X_i) ∪ X_i = X_{i+1}, by the claim above (25). Hence, by induction, the statement holds for every i ∈ N.
This claim shows one direction of the lemma, namely µX. U ∪ pre(X) ⊆ µX. U ∪ tpre(X). The claims above also give l ≤ k, where l and k are the terminating steps of the tpre and pre computations, respectively. For the other direction, we show that every vertex that appears in the i-th iteration of the tpre computation also eventually appears in the pre computation.
Proof. We again prove the claim by induction on i. For the base case, the statement holds trivially with j = 0. For the induction hypothesis (IH), assume that the statement holds for some i ∈ N with some j ≥ i. Let v ∈ X^t_{i+1} \ X^t_i be an arbitrary vertex. Then v ∈ attr_0(X^t_i) or v ∈ cpre_1(attr_0(X^t_i) ∪ X^t_i). In the former case, when v ∈ attr_0(X^t_i), the IH gives attr_0(X^t_i) ⊆ attr_0(X_j); since attr_0(X^t_i) terminates in at most |V| = n iterations, v appears in the pre computation after at most n further iterations, i.e., v ∈ X_{n+j}. In the latter case, v ∈ cpre_1(X_{n+j} ∪ X_j), by the IH and the discussion above, so v again eventually appears in the pre computation. By induction, the claim holds.
This claim shows that X_k = X^t_k, since l ≤ k and X^t_k = X^t_l. Hence the lemma is proved.
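The Player 0 attractor attr_0 used throughout the tpre operator is a textbook construction; the following sketch (our own, standard formulation) iteratively adds Player 0 vertices with SOME successor already attracted and Player 1 vertices whose EVERY successor is attracted.

```python
def attr0(target, vertices, edges, player1):
    """Standard Player 0 attractor: the set of vertices from which
    Player 0 can force the play into `target`. `edges` maps a vertex
    to its set of successors; `player1` is the set of Player 1
    vertices; all other vertices belong to Player 0."""
    A = set(target)
    changed = True
    while changed:
        changed = False
        for u in vertices:
            if u in A:
                continue
            succ = edges[u]
            # Player 0 needs one edge into A; Player 1 must have all
            # edges into A to be unable to escape
            if (u not in player1 and any(v in A for v in succ)) or \
               (u in player1 and succ and all(v in A for v in succ)):
                A.add(u)
                changed = True
    return A
```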

C APA assumptions for Büchi games
For the convenience of the reader, we restate Thm. 2 here.
where X_i is the set computed in the i-th iteration of the computation over X and in the last iteration of the computation over Y in TBüchi. Then Ψ = Ψ_unsafe(S) ∧ Ψ_live(H) is an APA for G, where S = UnsafeA(G, U). We write LiveA(G, U) to denote the algorithm constructing the live groups H as above, which runs in time O(n³), where n = |V|.
Proof. Since front(X_i) ⊆ V_1, we observe that every vertex in Z* ∩ V_0 is added to the least fixpoint computation of X in the attr_0 part of tpre.
With this observation, we prove sufficiency, implementability and permissiveness below and finally comment on the complexity of LiveA. Implementability: Since the sources of live groups are a subset of Player 1's vertices, the assumption is easily implementable: Player 1 can play one of the live group edges infinitely often whenever the sources are visited infinitely often, and Player 0 cannot falsify it.
Sufficiency: Consider the following strategy π_0 for Player 0: at a vertex v ∈ X_i ∩ V_0, she plays the attr_0 strategy to reach X_{i−1}, and at all other vertices she plays arbitrarily. We show that π_0 is winning under assumption Ψ from all vertices in the cooperative winning region.
Case 2: If ρ ∉ L(Ψ_live(H)), then there exists H_i ∈ H such that ρ visits src(H_i) = front(X_i) infinitely often, but no edge in H_i is taken infinitely often. Since the edges in H_i lead to X_{i+1} \ front(X_i), the play must either stay in front(X_i) or go to X_j \ X_i for some j > i + 1. In the first case, since U ∩ Z* ⊆ X_1, we get ρ ∉ L(Φ), which would be a contradiction. In the second case, after going to X_j, ρ must contain an edge from some v ∈ Z* \ X_{j−1} to some v′ ∈ X_i (else U ∩ Z* ⊆ X_1 ⊆ X_i could not be reached). But then v would have been added to X_{i+1}, contradicting j > i + 1. In either case we get a contradiction, so ρ ∈ L(Ψ).
Complexity analysis. The computation of the inner least fixpoint variable X takes O(n²) time, and there are at most n such computations. While the inner fixpoint is being computed, in the last iteration of Y, the live groups can be computed with an additive overhead of O(n²). Hence, the total computation time is O(n³).

C.1 Faster algorithm for Büchi games

Algorithm 2 LiveA
Input: Büchi game (G, I) in which all vertices are cooperatively winning
4: H ← ComputeLiveGroups((G, I), ∅)
5: return (S, H)
6: procedure ComputeLiveGroups((G, I), H)
7:   U ← I
8:   while U ≠ V do

Proof. We first show that the algorithm terminates, i.e., that the procedure ComputeLiveGroups terminates. Since in step 3 the game graph is restricted to the cooperative Büchi winning region Z*, we need to show that eventually U = V = Z* in the procedure. Let U_l be the value of U after the l-th iteration of ComputeLiveGroups((G, I), ∅), with U_0 = I. Since vertices are only added to U (and never removed) and there are only finitely many vertices, U stabilizes at some U_m. Since the other direction is trivial, we show that Z* ⊆ U_m. Suppose this is not the case, i.e., there is some v ∈ Z* \ U_m. Since v ∈ Z*, both players can cooperatively visit I from v; consider such a play and let v_l be its last vertex outside U_m, with successor v_{l+1} ∈ U_m. If v_l ∈ V_0, it would be added to U in step 10 of the (m+1)-th iteration, i.e., U_m ≠ U_{m+1}. Else, if v_l ∈ V_1, it would be added to U in step 13 of the (m+1)-th iteration since v_{l+1} ∈ U_m, i.e., U_m ≠ U_{m+1}. In either case we get a contradiction. Hence v ∈ U_m, implying Z* = U_m. Hence the procedure ComputeLiveGroups, and thus Algorithm 2, terminates.
We now show that the assumption obtained is adequately permissive.
Implementability: We note that in step 11, C ⊆ V_1: if v ∈ V_0 ∩ C, then there is an edge from v to U, and hence v would already be in U by steps 9 and 10. Since the sources of live groups (which are only added in step 12) are a subset of Player 1's vertices, the assumption is easily implementable: Player 1 can play one of the live group edges infinitely often whenever the sources are visited infinitely often, and Player 0 cannot falsify it.
Sufficiency: Again, let U_l and m be as defined earlier. Define X_l := U_l \ U_{l−1} for 1 ≤ l ≤ m, and X_0 := U_0 = I. Consider the following strategy π_0 for Player 0: at a vertex v ∈ V_0 ∩ X_l, she plays the attr_0 strategy to reach U_{l−1}, and at all other vertices she plays arbitrarily. We show that π_0 is winning under assumption Ψ for Player 0 from all vertices in the cooperative winning region Z*.
Suppose ρ ∉ L(Φ), i.e., inf(ρ) ∩ I = ∅. Note that ρ never leaves Z* due to the safety assumption template. Consider the set R of vertices occurring infinitely often in ρ.
then, by the definition of H, reaching v infinitely often implies reaching attr_0(U_{k−1}) infinitely often. But then the play visits U_{k−1}, by the arguments above, giving the contradiction.
In any case, we get a contradiction, implying that the supposition was wrong. Hence, ρ ∈ L(Φ), and v_0 is winning for Player 0 under assumption Ψ.
Case 1: If ρ ∉ L(Ψ_unsafe(S)), then some edge (v, v′) ∈ S is taken in ρ. After reaching v′, ρ still satisfies the Büchi condition. Hence, v′ is in the cooperative winning region Z*, but then (v, v′) ∉ S, which is a contradiction. Case 2: If ρ ∉ L(Ψ_live(H)), then there exists H_i ∈ H such that ρ visits src(H_i) = C_l (the value of C after the l-th iteration) infinitely often, but no edge in H_i is taken infinitely often. Since the edges in H_i lead to attr_0(U_{l−1}), the play must either stay in C_l or go to U_k \ U_l for some k > l + 1. In the first case, since I ∩ Z* ⊆ U_0, we get ρ ∉ L(Φ), which would be a contradiction. In the second case, after going to U_k \ U_l, ρ must contain an edge from some v ∈ Z* \ U_{k−1} to some v′ ∈ U_l (else I ∩ Z* ⊆ U_0 ⊆ U_l could not be reached). But then v would have been added to U_{l+1}, contradicting k > l + 1. In either case we get a contradiction, so ρ ∈ L(Ψ).
Complexity analysis. The cooperative winning region can be computed in time linear in the size of the game graph, i.e., O(m + n). The procedure ComputeLiveGroups also takes O(m + n) time. Hence, the overall running time is linear in the number of edges of the game graph.
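The backward-growing computation of ComputeLiveGroups can be sketched as follows. This is a simplified rendering under our own naming (it merges steps 9-13 into two passes per round): U grows from the Büchi set I; Player 0 vertices with an edge into U join for free, and whenever a Player 1 vertex first gets an edge into U, those edges form one live group.

```python
def live_groups(vertices, edges, player1, buchi):
    """Hedged sketch of a layered live-group computation: returns the
    grown set U and a list of live groups (edge sets). Assumes all
    vertices are cooperatively Buchi-winning, as in Algorithm 2."""
    U = set(buchi)
    groups = []
    changed = True
    while changed:
        changed = False
        # Player 0 vertices with some edge into U join U for free
        for u in vertices:
            if u not in U and u not in player1 and any(v in U for v in edges[u]):
                U.add(u)
                changed = True
        # Player 1 frontier: its edges into U form one live group
        frontier = [u for u in vertices
                    if u not in U and u in player1
                    and any(v in U for v in edges[u])]
        if frontier:
            groups.append({(u, v) for u in frontier
                           for v in edges[u] if v in U})
            U.update(frontier)
            changed = True
    return U, groups
```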
where X_i is the set computed in the i-th iteration of fixpoint variable X. Then Ψ = Ψ_unsafe(S) ∧ Ψ_colive(D) is an APA for G, where S = UnsafeA(G, U). Moreover, D can be constructed in time O(n³), where n is the number of vertices.
Proof. Analogously to the Büchi case, every vertex in Z* ∩ V_0 is added to the least fixpoint computation of X in the attr_0 part of tpre. Consider the following strategy π_0 for Player 0: at a vertex in X_i ∩ V_0, she plays the attr_0 strategy to reach X_{i−1}, and at all other vertices she plays arbitrarily.
We again observe that the sources of the co-live edges are Player 1's vertices, and that, by construction, each source has at least one alternative edge that is neither co-live nor unsafe. Hence, Player 1 can easily implement the assumption by taking the co-live edges only finitely often.
The complexity analysis is similar to that for live group assumptions.

Now we show that Ψ is an adequately permissive assumption. Again, let U_l and m be as defined earlier. Define X_l := U_l \ U_{l−1} for 1 ≤ l ≤ m, and X_0 = U_0 = I. Then every vertex v ∈ Z* is in X_l for some l ∈ [0; m].
We again prove sufficiency, implementability and permissiveness separately and finally comment on the complexity of CoLiveA.
Implementability: We again observe that the sources of the co-live edges in D are Player 1's vertices, and that, by construction, each source has at least one alternative edge that is neither co-live nor unsafe. Hence, Player 1 can easily implement the assumption by taking the co-live edges only finitely often.
Sufficiency: Consider the following strategy π_0 for Player 0: at a vertex v ∈ X_0 ∩ V_0, she takes an edge (v, v′) ∈ E with v′ ∈ X_0; at a vertex v ∈ X_l ∩ V_0, for l ∈ [2; m], she plays the attr_0 strategy to reach U_{l−1}; and at all other vertices she plays arbitrarily.
Since ρ ∈ L(Ψ_unsafe(S)), and by the definition of π_0, we have v_i ∈ Z* for all i. Now suppose ρ ∉ L(Φ), i.e., inf(ρ) ∩ (Z* \ I) ≠ ∅. Let u ∈ inf(ρ) ∩ (Z* \ I). Then, to reach u infinitely often, some edge from D must be taken infinitely often in ρ, since π_0 makes the play go towards I. But this contradicts ρ ∈ L(Ψ). Hence, ρ ∈ L(Φ).

E APA assumptions for parity games
For the convenience of the reader, we restate Thm. 5 here.

Proof. We prove sufficiency, implementability and permissiveness below and then analyze the complexity of Alg. 1.
Implementability: We note that the assumption is implementable by the implementability of the safety, liveness and co-liveness assumptions: if, for a conditional live group, the corresponding vertex set is reached infinitely often and the sources of its live groups are visited infinitely often, Player 1 can choose the live group edges, since they are controlled by Player 1. Moreover, there is no conflict due to the conditional live groups, as no unsafe or co-live edge is included in a conditional live group by construction.
Sufficiency: We give a strategy for Player 0 depending on the parity of the highest priority d occurring in the game, and show that it is winning under assumption Ψ from all vertices in the cooperative winning region Z* = Parity(G, C). The strategy uses finite memory and the winning strategies for Player 0 in subgames with Büchi (Thm. 2) and co-Büchi (Thm. 3) objectives.
By Büchi(G, U), we denote the game (G, □◇U), and by co-Büchi(G, U), we denote the game (G, ◇□U). We also use the definitions of d, W_d and W_¬d as in Alg. 1. Consider the following strategy π_0 of Player 0: d is odd: If the play is in V \ W_¬d, then Player 0 plays the co-Büchi(G, W_¬d) winning strategy to eventually end up in W_¬d. If the play is in W_¬d ∩ Z*, Player 0 plays the recursive winning strategy for (G|_{W_¬d}, Parity(C)). Otherwise, she plays arbitrarily.
d is even: If the play is in W_d, Player 0 switches among the winning strategies: for each vertex, she first uses the first strategy in the above sequence; when that vertex is repeated, she uses the second strategy for the next move, and she keeps switching to the next strategy for every further move from the same vertex. If the play is in (V \ W_d) ∩ Z*, she plays the recursive winning strategy for (G|_{W_¬d}, Parity(C)), where C is modified again as in line 16. Otherwise, she plays arbitrarily.
We prove by induction on the highest occurring priority d that the strategy π_0 constructed above ensures satisfaction of the parity objective on the original game graph if the assumption Ψ is satisfied. For the base case, d = 0, the constructed strategy is trivially winning, because the only existing priority is even. Now assume the strategy is winning for d − 1 ≥ 0.
Case 1: If d is odd, then since at vertices in V \ W_¬d Player 0 plays to eventually stay in W_¬d, the play cannot stay outside W_¬d forever without violating Ψ_colive(D). And if ρ eventually stays in W_¬d, then by the induction hypothesis it is winning, since W_¬d ∩ C_d = ∅.
Case 2: If d is even, then suppose the play eventually stays in W_d and visits vertices of an odd priority i infinitely often. By the switching construction, π_0 plays each winning strategy for infinitely many moves from every vertex occurring in ρ. Since π_1 satisfies LiveA(G, C_{i+1} ∪ C_{i+2} ∪ ⋯ ∪ C_d), after these moves as well, the play visits vertices of an even priority greater than i infinitely often, implying that ρ is winning. Else, if ρ eventually stays in V \ W_d, it is winning by the induction hypothesis. This gives the sufficiency of the assumptions computed by the algorithm.
Permissiveness: Now, for the permissiveness of the assumption, let ρ ∈ L(Φ). We prove the claim by contradiction: suppose ρ ∉ L(Ψ).
Case 1: If ρ ∉ L(Ψ_unsafe(S)), then some edge (v, v′) ∈ S is taken in ρ. After reaching v′, ρ still satisfies the parity objective. Hence, v′ ∈ Z*, but then (v, v′) ∉ S, which is a contradiction.
Case 2: If ρ ∉ L(Ψ_cond(H)), then for some even j and odd i < j, ρ visits W_j ∩ C_i infinitely often but does not satisfy the corresponding live group assumption in the game G′ = G|_{W_j}. Due to the construction of the set W_j, it is easy to see that once ρ visits W_j, it can never visit V \ W_j again. Hence, eventually ρ stays in the game G′ and visits C_i infinitely often. Since ρ ∈ L(Φ), it also visits vertices of some even priority greater than i infinitely often, and hence it satisfies the live group assumption, which contradicts the supposition.
Case 3: If ρ ∉ L(Ψ_colive(D)), then for some odd i, an edge (u, v) ∈ CoLiveA(G, W_¬i) is taken infinitely often. Then the vertex v ∈ V \ W_¬i is visited infinitely often. Note that ρ cannot be winning by visiting an even j > i infinitely often, since otherwise v would have been in Büchi(G, C_j), as from v we can see j infinitely often, and hence v would have been removed from G for the next recursive step. Hence, ρ visits some even j < i infinitely often, i.e., i is not visited infinitely often. But then v would be in W_¬i, which is a contradiction.
Complexity analysis. We note that the cooperative parity game can be solved in time O((n + m) log d), where n, m and d are the numbers of vertices, edges and priorities, respectively: consider the graph in which a single player owns all the vertices, find the strongly connected components in time O(n + m), and check which of these components contain a cycle with even highest priority by reduction to the even-cycle problem [17]. Then ComputeSets takes time O(n²) for the even case, but is dominated by O(n³) for the odd case. For every priority, ComputeSets is called once, i.e., at most 2n calls in total. Hence, the total running time of the algorithm is O((n + m) log d + 2n · n³) = O(n⁴).
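The SCC-based cooperative solving step described above can be sketched as follows. This is our own plain O(d·(n+m)) illustration, not the cited O((n+m) log d) procedure: treating both players as one, a vertex is cooperatively winning iff it can reach a cycle whose maximum priority is even. For each even d we restrict to vertices of priority ≤ d and look for a non-trivial SCC (or self-loop) containing a priority-d vertex; backward reachability to such components gives Z*.

```python
def cooperative_parity(vertices, edges, priority):
    """Sketch: cooperative (one-player) parity winning region Z*."""
    def sccs(vs, es):
        # Kosaraju: finish-time order on the subgraph, then sweep the
        # reverse graph
        order, seen = [], set()
        for s in vs:
            if s in seen:
                continue
            stack = [(s, iter([w for w in es[s] if w in vs]))]
            seen.add(s)
            while stack:
                v, it = stack[-1]
                nxt = next(it, None)
                if nxt is None:
                    order.append(v)
                    stack.pop()
                elif nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, iter([w for w in es[nxt] if w in vs])))
        rev = {v: set() for v in vs}
        for v in vs:
            for w in es[v]:
                if w in vs:
                    rev[w].add(v)
        comps, assigned = [], set()
        for s in reversed(order):
            if s in assigned:
                continue
            comp, work = set(), [s]
            while work:
                v = work.pop()
                if v not in assigned:
                    assigned.add(v)
                    comp.add(v)
                    work.extend(rev[v] - assigned)
            comps.append(comp)
        return comps

    good = set()
    for d in {priority[v] for v in vertices if priority[v] % 2 == 0}:
        sub = {v for v in vertices if priority[v] <= d}
        for comp in sccs(sub, edges):
            nontrivial = len(comp) > 1 or any(v in edges[v] for v in comp)
            if nontrivial and any(priority[v] == d for v in comp):
                good |= comp  # comp lies on a cycle with even max priority
    # Z* = backward reachability to `good` in the full graph
    Z, changed = set(good), True
    while changed:
        changed = False
        for u in vertices:
            if u not in Z and any(v in Z for v in edges[u]):
                Z.add(u)
                changed = True
    return Z
```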
F Equivalence of Def. 1 and Def. 6

We prove the following result stating the equivalence between Def. 1 and Def. 6 for the class of assumptions we consider.

Proposition 3. Given a parity game G, let Ψ be an APA computed by Thm. 5. Then a Player 0 strategy is winning under Ψ by Def. 1 if and only if it is winning under Ψ by Def. 6.
Proof. Suppose a Player 0 strategy π_0 is winning under assumption Ψ by Def. 6. Then every play ρ ∈ L(π_0) either fails to satisfy the assumption Ψ or satisfies the specification Φ. Hence, L(π_0) ⊆ L(Φ) ∪ L(¬Ψ). Now, let π_1 be a Player 1 strategy such that L(π_1) ⊆ L(Ψ). Then L(π_0, π_1) ⊆ L(Φ), and hence π_0 is also winning under assumption Ψ by Def. 1.

For the other direction, suppose π_0 is winning under assumption Ψ by Def. 1, and let ρ ∈ L(π_0). If ρ ∉ L(Ψ), we are done. Suppose ρ ∈ L(Ψ); then we have to show that ρ ∈ L(Φ). We claim that there exists a Player 1 strategy π_1 such that L(π_1) ⊆ L(Ψ) and ρ is compliant with it. Then, by Def. 1, L(π_0, π_1) ⊆ L(Φ). As ρ is compliant with both π_0 and π_1, we get ρ ∈ L(Φ), and we are done.

It remains to prove the claim. As Ψ is an implementable assumption, there exists a Player 1 strategy π_1* such that L(π_1*) ⊆ L(Ψ); by definition, this strategy is defined on all vertices. Let ρ = v_0 v_1 ⋯, and let π_1 be the Player 1 strategy that follows ρ as long as the current play prefix is a prefix of ρ, and follows π_1* otherwise. Then, clearly, ρ is compliant with π_1. Now let ρ′ ∈ L(π_1); it is enough to show that ρ′ ∈ L(Ψ). If ρ′ = ρ ∈ L(Ψ), we are done. Suppose not, and let p be the maximal prefix of ρ′ that is also a prefix of ρ (which may be empty). By construction, the moves taken after the prefix p in ρ′ are compliant with π_1*. As the conditional live group templates and co-liveness templates are tail properties and are independent of prefixes, ρ′ satisfies those templates of the assumption Ψ. Furthermore, as p is a prefix of ρ ∈ L(Ψ) and the moves taken after p in ρ′ are compliant with π_1*, the play ρ′ cannot contain any unsafe edges marked by the assumption Ψ. Therefore, ρ′ ∈ L(Ψ).

We write [a; b] to denote the set {n ∈ N | a ≤ n ≤ b}. For any given set [a; b], we write i ∈ even[a; b] and i ∈ odd[a; b] as shorthand for i ∈ [a; b] ∩ {0, 2, 4, ...} and i ∈ [a; b] ∩ {1, 3, 5, ...}, respectively. Given two sets A and B, a relation R ⊆ A × B, and an element a ∈ A, we write R(a) to denote the set {b ∈ B | (a, b) ∈ R}.
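The interval notation above is straightforward; for concreteness, a minimal sketch of the three shorthands (function names are ours):

```python
def interval(a, b):
    """[a; b] = {n in N | a <= n <= b}, as defined above."""
    return list(range(a, b + 1))

def even_interval(a, b):
    """i in even [a; b]: the even members of [a; b]."""
    return [i for i in interval(a, b) if i % 2 == 0]

def odd_interval(a, b):
    """i in odd [a; b]: the odd members of [a; b]."""
    return [i for i in interval(a, b) if i % 2 == 1]
```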

Fig. 4: Computation of µX. U ∪ pre(X) (left) and µX. U ∪ tpre(X) (right). Each colored region describes one iteration over X. The dotted region on the right is added by the attr part of tpre, and this allows only the vertex v_5 to be in front({v_1}). Each set of same-colored edges defines a live transition group.

Fig. 5: The left picture shows the co-Büchi computation with pre, and the right one with tpre. Each colored region describes how X grows after every iteration, and the dotted region on the right is added by the attr part of tpre. The edges in red are the co-live edges in both cases. Again, the pre computation would give assumptions on Player 0 vertices, while the one with tpre gives assumptions only on Player 1's vertices.

Fig. 8: Illustration of a relevant part of the game graph for the 2-client arbiter.

Table 1: Summary of the experimental results