Adapting Behaviors via Reactive Synthesis

In the \emph{Adapter Design Pattern}, a programmer implements a \emph{Target} interface by constructing an \emph{Adapter} that accesses existing \emph{Adaptee} code. In this work, we present a reactive synthesis interpretation of the adapter design pattern, wherein an algorithm takes an \emph{Adaptee} transducer and a \emph{Target} transducer, and the aim is to synthesize an \emph{Adapter} transducer that, when composed with the {\em Adaptee}, generates a behavior that is equivalent to the behavior of the {\em Target}. One use of such an algorithm is to synthesize controllers that achieve similar goals on different hardware platforms. While this problem can be solved with existing synthesis algorithms, current state-of-the-art tools fail to scale. To cope with the computational complexity of the problem, we introduce a special specification format, called {\em Separated GR($k$)}, which can be solved with a scalable synthesis algorithm but still allows for a large set of realistic specifications. We solve the realizability and the synthesis problems for Separated GR($k$), and show how to exploit the separated nature of our specification to construct algorithms with better time complexity than known algorithms for GR($k$) synthesis. We then describe a tool, called SGR($k$), that we have implemented based on the above approach and show, by experimental evaluation, how it outperforms current state-of-the-art tools on various benchmarks and test cases.


Introduction
Inspired by the well-known adapter design pattern [23], we study the use of reactive synthesis for generating adapters that translate inputs meant for a target transducer to inputs of an adaptee transducer. Consider, as one motivating example, the practice of adding code to an operating system that mitigates the risk posed by newly discovered hardware vulnerabilities like Spectre and Meltdown [29,34]. While the discovery of such vulnerabilities puts constraints on how the hardware can be used, the patch to the operating system (called an adapter in this paper) takes it upon itself to run all applications without change [31]. It does so by offering applications the existing interface, while adapting their operation in a way that ensures that the system is not exposed to the new threat.
Formally, we propose the following synthesis problem: given two finite-state transducers called Target and Adaptee, synthesize a finite-state transducer called Adapter such that Adaptee • Adapter ≡ Target.
The symbol • stands for standard transducer composition [36] and the symbol ≡ stands for an equivalence relation, a generalization of sequential equality, which we explain below. In words, we want an Adapter that stands between an Adaptee and its inputs and guarantees that the composition Adaptee • Adapter is equivalent to Target. In the vulnerability-patching example, Adaptee is a model of the constrained hardware and Target is a model of the hardware as used before the discovery of the vulnerability, without the new constraints. The Adapter that we generate models the patch that mediates between the vulnerable hardware and applications that are not aware of the vulnerability. In our setting, an input to the synthesis algorithm is the equivalence relation along with the specifications of the adaptee and of the target. While the problem of synthesizing an adapter such that Adaptee • Adapter is sequentially equal to Target may be useful in some cases [45], we study here a more general problem. This is called for by applications such as the vulnerability-covering patches described above. Specifically, we allow our users to specify an equivalence relation between Adaptee • Adapter and Target that is not necessarily sequential equality. In this paper, we propose to use ω-regular properties [3] for specifying this equivalence relation, as follows. We assume, without loss of generality, that the outputs of both the Target and the Adaptee are assignments to disjoint sets of atomic propositions. We then consider sequences of pairs of such assignments that correspond to zipped runs of Adaptee • Adapter and of Target over the same input. Having this set of sequences in mind, the user specifies a set of temporal properties using an ω-regular formalism such as LTL or Büchi automata. The transducer Adaptee • Adapter is considered equivalent to Target if all the properties that the user specified are satisfied by each sequence in the set [24].
Note that the equivalence relation can be very different from sequential equality. It can, for example, say that Adaptee • Adapter must be, in a way, a "mirror image" of Target, as demonstrated by the cleaning-robots example in Section 4.1, where Target is a robot that cleans some rooms and Adaptee • Adapter is a robot that cleans all the rooms that Target did not clean.
The solution that we propose in this paper consists of two phases: we first transform the transducers to transition systems and arrive at a game structure that is more amenable to game-based techniques. Then we make use of the specific form of the resulting game, and of some simplifying assumptions about the form of the equivalence properties, to solve the game efficiently. The game structures that we analyze consist of pairs of transition systems called Input and Output, accompanied by a set of ω-regular properties that specify the equivalence relation between the two, as described above. The game that we solve is, then, to find a controller that reads the assignments to the variables of the Input and produces a valid sequence of assignments to the variables of the Output such that all the properties are satisfied. The translation of the transducers to this game structure is rather direct, as elaborated in Section 4. The Input transition system is generated from the Target transducer and the Output transition system is generated from the Adaptee transducer. This is because we want the Adapter, which we generate from the controller as described below, to consider the behavior of the Target and to translate it to a command that generates an equivalent behaviour of the Adaptee. Once we find a controller that solves the game, we can transform it into an adapter, as we detail in Section 4.
The synthesis problem that we have defined so far is computationally as hard as general LTL synthesis and is thus double exponential in the worst case [51]. To cope with this difficulty, we propose to use a well-known fragment of LTL called GR(k). GR(k) generalizes the GR(1) subset of LTL [11], a practical fragment of LTL for which a feasible reactive synthesis algorithm exists (see, e.g., [10,32,39,46,47,53]). Furthermore, GR(k) formulas are known to be highly expressive, as they can encode most commonly appearing industrial LTL patterns [19,40,42] and DBA properties (see related work for details). In addition to using GR(k), since the Input and Output in our model are separated transition systems with separated sets of atomic propositions, we focus on properties that separate input and output variables. That is, our specification has the form ⋀_{l=1}^{k}(φ_l → ψ_l), where the φ_l and ψ_l are conjunctions of LTL GF (Globally in the Future) formulas over Input variables only and over Output variables only, respectively. We call this model Separated GR(k). We show through several case studies that this fragment of LTL suffices to specify a range of useful equivalence relations.
We study the problems of realizability and synthesis for Separated GR(k) games. For that, we first consider the sub-problem of solving a weak Büchi game. Then we identify and exploit a property of separated games that we call the delay property: the system can delay its response to the environment indefinitely as long as it remains in the same connected component of the game graph. This allows us to decide the realizability of Separated GR(k) in O(|ϕ| + N) symbolic operations, and to synthesize a controller for a realizable specification in O(|ϕ|N) symbolic operations, where ϕ is the Separated GR(k) specification and N is the size of the state space. Thus, Separated GR(k) games are easier to solve than GR(k) games, which require O(N^{k+1} k!) operations [49]. This demonstrates the efficiency of our framework, since |ϕ| tends to be smaller than N, and in most practical cases |ϕ| ∈ O(log(N)).
The benefits of the complexity-theoretic improvement are reflected in empirical evaluations on our case studies of separated GR(k) formulas. We demonstrate that while separated GR(k) formulas are challenging for state-of-the-art synthesis tools, a symbolic BDD-based implementation of our algorithm solves them scalably and efficiently.
The rest of the paper is organized as follows: Section 2 introduces the necessary preliminaries. Separated GR(k) games are introduced and formulated in Section 3. In Section 4 we describe how to use Separated GR(k) game synthesis to generate the adapter transducer, and introduce several use-cases. Next, we turn to solving separated GR(k) games. An overview of our solution approach and a property necessary for the correctness of the algorithm, called the delay property, are given in Section 5. A complete symbolic algorithm is presented in Section 6. An empirical evaluation on case-studies is presented in Section 7. Finally, in Section 8 and Section 9 respectively, we discuss related work and conclude.

Preliminaries
General Definitions. Given a set of Boolean variables V, a state over V is an assignment s to the variables in V. We describe s as the subset of V that is assigned True in s. The set of primed variables of V is V' = {v' | v ∈ V}. Then s' = {v' | v ∈ s} is the primed state of s over V'. An assertion over V is a Boolean formula over the variables V. A state s satisfies an assertion ρ over the same variables, denoted s |= ρ, if ρ evaluates to True when the elements of s are assigned True. We define the projection of a state s on a subset U ⊆ V as s|_U = s ∩ U. We extend the notion of projection to a set of states S ⊆ 2^V by defining S|_U = {s|_U | s ∈ S}. Our specification is a special form of Linear Temporal Logic (LTL). LTL [50] extends propositional logic with infinite-horizon temporal operators. The syntax of an LTL formula over a finite set of Boolean variables V is defined as follows: φ ::= v | ¬φ | φ ∨ φ | Xφ | φ U φ | Fφ | Gφ, where v ∈ V. Here X (Next), U (Until), F (Eventually), and G (Always) are temporal operators. The semantics of LTL can be found in [6, Chapter 5].
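Since states are identified with the subsets of variables assigned True, priming and projection reduce to simple set operations. The following sketch (plain Python sets standing in for states; all names are ours, not from the paper) illustrates the definitions above:

```python
# States are subsets of the variables assigned True; priming and projection
# are then plain set operations (frozensets make states hashable).

def prime(state):
    """s' = {v' | v in s}: rename every variable to its primed copy."""
    return frozenset(v + "'" for v in state)

def project(state, U):
    """s|_U = s intersect U: restrict a state to the variables in U."""
    return frozenset(state) & frozenset(U)

def project_states(S, U):
    """S|_U = {s|_U | s in S}: lift projection to a set of states."""
    return {project(s, U) for s in S}
```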
We model the adapters as transducers. A transducer is a deterministic finite-state machine with no accepting states, but with an additional output alphabet and an additional function from the set of states to the output alphabet. A formal definition of a transducer can be found in [21], but is not required for this paper.
The algorithms developed in this paper are symbolic, i.e., they manipulate implicit representations of sets of states. To this end, we use Binary Decision Diagrams (BDDs) [12,28] to represent assertions. For a BDD B and sets of variables V_1, . . . , V_n, we write B(V_1, . . . , V_n) to denote that B represents an assertion over V_1 ∪ · · · ∪ V_n. For a state s over V, we write s |= B(V) to denote that the assertion that B represents is satisfied by the state s. BDDs support several symbolic operations: conjunction (∧), disjunction (∨), negation (¬), and quantification of variables using the ∃ and ∀ operators. We measure the time complexity of a symbolic algorithm by the worst-case number of symbolic operations it performs. A discussion on a rigorous treatment of BDD operations can be found in Appendix A.
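To make the symbolic interface concrete, the sketch below uses explicit sets of states as a stand-in for BDDs: each "BDD" is simply the set of states satisfying its assertion, so conjunction, disjunction, negation, and existential quantification become set algebra. This only illustrates the operation signatures, not a real BDD implementation; all names are ours:

```python
# Each "BDD" below is the explicit set of states satisfying its assertion,
# so the symbolic operations reduce to set algebra. A real implementation
# would delegate these to a BDD library.

def conj(a, b):
    """Conjunction: states satisfying both assertions."""
    return a & b

def disj(a, b):
    """Disjunction: states satisfying at least one assertion."""
    return a | b

def neg(a, universe):
    """Negation, relative to the full state space."""
    return universe - a

def exists(a, var):
    """Existential quantification: drop `var` from every satisfying state."""
    return {frozenset(s - {var}) for s in a}
```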
Game Structures and Games. We follow the notations of [11]. A game structure GS = (I, O, θ_I, θ_O, ρ_I, ρ_O) defines a turn-based interaction between an environment player and a system player. The input variables I and output variables O are two disjoint sets of Boolean variables that are controlled by the environment and the system, respectively. The environment's initial assumption θ_I is an assertion over I, and the system's initial guarantee θ_O is an assertion over I ∪ O. The environment's safety assumption ρ_I is an assertion over I ∪ O ∪ I', where the interpretation of (i_0, o_0, i_1) |= ρ_I is that from state (i_0, o_0) the environment can assign i_1 to the input variables. W.l.o.g., we assume that ρ_I is deadlock free, i.e., for all (i_0, o_0) there exists an i_1 s.t. (i_0, o_0, i_1) |= ρ_I. Similarly, the system's safety guarantee ρ_O is an assertion over I ∪ O ∪ I' ∪ O', where the interpretation of (i_0, o_0, i_1, o_1) |= ρ_O is that from state (i_0, o_0), when the environment assigns i_1 to the input variables, the system can assign o_1 to the output variables. Again, w.l.o.g., we assume that ρ_O is deadlock free, i.e., for all (i_0, o_0, i_1) there exists an o_1 s.t. (i_0, o_0, i_1, o_1) |= ρ_O. A play over GS progresses by the players taking turns to assign values to their own variables ad infinitum, where the players must satisfy the initial conditions at the start and the safety conditions thereafter. Formally, a play π = s_0, s_1, . . . is an infinite sequence of states over I ∪ O such that s_0 |= θ_I ∧ θ_O and (s_j, s_{j+1}) |= ρ_I ∧ ρ_O for all j ≥ 0. A play prefix is either a play or a finite sequence of states that can be extended to a play. Then a strategy is a function f : (2^{I∪O})^* × 2^I → 2^O. Intuitively, a strategy directs the system on what to assign to the output variables, depending on the history of a play and the most recent assignment by the environment to the input variables.
A play prefix is said to be consistent with a strategy f if for every state s_j = (i_j, o_j) in that prefix, f(s_0, . . . , s_{j−1}, i_j) = o_j. A strategy is memoryless if it only depends on the last state and the most recent assignment to the input variables. Formally, a memoryless strategy is a function f : 2^{I∪O} × 2^I → 2^O. A game is a tuple (GS, ϕ) where GS is a game structure over inputs I and outputs O and ϕ is an LTL formula over I ∪ O called a winning condition. A play π is winning for the system if π |= ϕ. A strategy f wins from state s if every play π from s that is consistent with f is winning for the system. A strategy f wins from S, where S is an assertion over I ∪ O, if it wins from every state s |= S. The winning region of the system is the set of states from which it has a winning strategy. A strategy f is winning if for every state i |= θ_I there exists a state o ∈ 2^O such that (i, o) |= θ_O and f wins from (i, o). In this paper, we consider games whose winning conditions are Büchi conditions of the form GF ϕ and GR(k) conditions, defined in Section 3. Given a game (GS, ϕ), realizability is the problem of deciding whether a winning strategy for the system exists, and synthesis is the problem of constructing a winning strategy if one exists. We note that a realizability check can be reduced to the identification of the winning region W: a winning strategy exists iff for every state i |= θ_I there exists a state o ∈ 2^O such that (i, o) |= θ_O and (i, o) ∈ W. Hence, the synthesis problem can be solved by constructing a strategy that wins from W.
Game Graphs and Weak Büchi Games. The game graph for a game structure GS is the directed graph (V, E) with vertices V = 2^{I∪O} and edges E = {(s, t) ∈ V × V | (s, t) |= ρ_I ∧ ρ_O}. Intuitively, vertices are states over I and O, and edges represent valid transitions between states according to the safety conditions. The game graph is useful for analyzing the structural properties of a game structure via graph-theoretic properties.
A finite path in a directed graph (V, E) is a sequence v 0 , . . . , v n ∈ V + such that (v j , v j+1 ) ∈ E for all 0 ≤ j < n. An infinite path v 0 , v 1 , . . . ∈ V ω is similarly defined. A vertex u is said to be reachable from another vertex v if there is a finite path from v to u. A strongly connected component (SCC) of a directed graph (V, E) is a maximal set of vertices within which every vertex is reachable from every other vertex. It is well known that SCCs partition the set of vertices of a directed graph, and that the set of SCCs is partially ordered with respect to reachability. Also note that every infinite path ultimately stays in an SCC.
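The algorithms below rely on computing the SCCs of a game graph together with a topological order on them. As an illustration, the following sketch implements Kosaraju's algorithm (a standard choice on our part, not taken from the paper), whose second pass emits the SCCs in topological order of the condensation:

```python
from collections import defaultdict

def sccs_topo(vertices, edges):
    """Kosaraju's algorithm: return the SCCs of (vertices, edges) in
    topological order of the condensation (sources first)."""
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)
    # Pass 1: iterative DFS, recording vertices in order of completion.
    finished, seen = [], set()
    for root in vertices:
        if root in seen:
            continue
        seen.add(root)
        stack = [(root, iter(adj[root]))]
        while stack:
            node, it = stack[-1]
            for nxt in it:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, iter(adj[nxt])))
                    break
            else:  # all successors explored
                finished.append(node)
                stack.pop()
    # Pass 2: DFS on the reversed graph in reverse finishing order;
    # each tree found is one SCC, discovered in topological order.
    comps, assigned = [], set()
    for root in reversed(finished):
        if root in assigned:
            continue
        comp, stack = set(), [root]
        assigned.add(root)
        while stack:
            node = stack.pop()
            comp.add(node)
            for nxt in radj[node]:
                if nxt not in assigned:
                    assigned.add(nxt)
                    stack.append(nxt)
        comps.append(comp)
    return comps
```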
Let (GS, GF ϕ) be a game with a Büchi winning condition, and let S_0, . . . , S_m be the SCCs that partition the game graph of GS. We say that (GS, GF ϕ) is a weak Büchi game if, given the set F of states that satisfy the assertion ϕ, for every SCC S_i, either S_i ⊆ F or S_i ∩ F = ∅. Thus, the SCCs of a weak Büchi game are either accepting components, meaning all of their states are contained in F, or non-accepting components, meaning none of their states is in F. As a consequence, a play in a weak Büchi game is winning for the system if the play ultimately never exits an accepting component. Similarly, a strategy is winning for the system if it can guarantee that every play ultimately remains inside an accepting component.
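Given the SCC decomposition and the set F, checking weakness and labeling the components is direct. A small sketch (function name is ours):

```python
def classify_sccs(sccs, F):
    """Label each SCC of a weak Buchi game: entirely inside F (accepting)
    or disjoint from F (non-accepting); a mixed SCC means the game is not
    a weak Buchi game."""
    labels = []
    for S in sccs:
        if S <= F:
            labels.append("accepting")
        elif not (S & F):
            labels.append("non-accepting")
        else:
            raise ValueError("mixed SCC: not a weak Buchi game")
    return labels
```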

Separated GR(k) Games
Our framework relies on the core idea of reducing the problem of adapter generation to synthesizing a Separated GR(k) game, which we define in this section. At a high level, a separated GR(k) game differs from a regular GR(k) game in that it separates input and output variables in both the game structure and the winning condition. We show in later sections that this separation of variables yields algorithmic benefits for the synthesis problem. Formally, a game structure separates variables if:
- The environment's initial assumption θ_I is an assertion over I only.
- The system's initial guarantee θ_O is an assertion over O only.
- The environment's safety assumption ρ_I is an assertion over I ∪ I' only.
- The system's safety guarantee ρ_O is an assertion over O ∪ O' only.
The interpretation of a game structure that separates variables is that the underlying game graph (V, E) is the product of two distinct directed graphs over disjoint sets of variables: G_I over the variables I ∪ I', and G_O over the variables O ∪ O'. For J ∈ {I, O}, the vertices of G_J correspond to states over J, and there is an edge between states s and t if (s, t') |= ρ_J.
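The product structure can be illustrated directly: given explicit vertex and edge sets for G_I and G_O, the game graph of a separated game structure is their product, with both sides moving simultaneously. A minimal sketch (assuming explicit rather than symbolic representations; names are ours):

```python
from itertools import product

def product_graph(VI, EI, VO, EO):
    """Game graph of a separated game structure: vertices pair an input
    vertex with an output vertex, and (i, o) -> (i2, o2) is an edge iff
    i -> i2 is an edge of G_I and o -> o2 is an edge of G_O."""
    V = set(product(VI, VO))
    E = {((i, o), (i2, o2)) for (i, i2) in EI for (o, o2) in EO}
    return V, E
```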
Next, the notion of separation of variables extends to games with GR(k) winning conditions as follows: a GR(k) winning condition ϕ = ⋀_{l=1}^{k} ((⋀_{i=1}^{m_l} GF ϕ_{l,i}) → (⋀_{j=1}^{n_l} GF ψ_{l,j})) separates variables if each ϕ_{l,i} is an assertion over I and each ψ_{l,j} is an assertion over O.
A Separated GR(k) game is a GR(k) game (GS , ϕ) over I ∪ O in which both GS and ϕ separate variables w.r.t. I and O.
A major observation is that in a game played over a separated game structure, the actions of the two players are independent: the environment's actions do not limit the system's actions, and vice versa. In later sections we see how this observation leads to algorithmic improvements in solving separated GR(k) games compared to regular GR(k) games. Specifically, in Section 4 we see how to use Separated GR(k) games to generate the adapter transducer. In Sections 5 and 6 we discuss algorithms for realizability and synthesis of Separated GR(k) games.

From Transducers to Separated GR(k)
We describe, using an end-to-end example, how adapter transducer generation can be reduced to synthesis of Separated GR(k) games.
We begin with user-provided Target and Adaptee transducers. These transducers model the behavior of a system that we want to use (Adaptee) and the behavior of a system that we want to emulate (Target). For example, the transducers in Figure 1 formulate the following scenario. (1) Target is a hardware platform with three modes, such that the U (up) and the D (down) commands send the hardware from mode s_0 to modes s_1 and s_2, respectively, from which the S (stay) command keeps the system looping in the chosen mode.
(2) Adaptee is the hardware platform that we actually want to use; it also has three modes, but it does not allow the command S after U. Instead, it allows a D command that switches the mode back to s_0. The second step is a formulation of the equivalence relation, where we define the type of emulation that we require. In our example we want to maintain the following property: if Target visits a mode s_i infinitely often for a certain input sequence, then so does Adaptee • Adapter. This can be expressed in LTL as ⋀_{i=0}^{2} (GF bin_t(s_i) → GF bin_a(s_i)), where bin_t(s_i) denotes the binary representation of mode s_i using variables t_1, t_0, and similarly for bin_a(s_i) using variables a_1, a_0. Note that in this example we cannot just synthesize an adapter that cycles through all modes of Adaptee • Adapter infinitely often, since the Adaptee transducer does not allow that.
As a third step, to generate a separated GR(k) game, we translate the Target and Adaptee transducers to Input and Output transition systems, as depicted in Figure 2. Since Adaptee and Target are two separate transducers, each with its own structure, it is natural to model them as two separate transition systems over distinct variables. The transition systems are produced by the well-known projection construction that turns an FST into an FSA that accepts the output language of the transducer [45]. Note that in our setting Target is translated to Input and Adaptee is translated to Output. This may appear to readers as a role inversion. We propose it because the role of the controller in our setting is to translate the behavior of Target to an equivalent behavior of the Adaptee.

Fig. 2: A direct translation of the Target transducer to an Input transition system and of the Adaptee transducer to an Output transition system.

These separate transition systems, together with the specification described above, form a Separated GR(k) game that, as a fourth step, we feed to the Separated GR(k) synthesis algorithm. The output of the algorithm is a transducer called Controller that maps runs of Input to runs of Output, as shown, for our example, in Figure 3. This, in effect, connects the output of the Target to the output of the Adaptee.
As a final step, from the controller we construct the Adapter using the formula Adapter = Adaptee^{-1} • Controller • Target. This means that Adapter contains an internal model of the Target and of the Adaptee. These internal models are used to translate inputs to expected outputs of the adapter, feed them to the controller, and then feed the output of the controller to the reverse of Adaptee, generating an input to Adaptee that emulates the behaviour of Target. Note that it is possible to invert transducers symbolically [26].

Fig. 3: A controller that reads runs of the Input transition system and generates runs of the Output transition system such that the specified Separated GR(2) formula is guaranteed to be true.
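To illustrate the composition step, the toy sketch below chains transducers so that each machine's output word drives the next, mirroring the pipeline Adapter = Adaptee^{-1} • Controller • Target in spirit only. The Transducer class, its fields, and the example machines are our own illustrative assumptions, and symbolic inversion [26] is not shown:

```python
# Toy Moore-style transducers: delta maps (state, letter) to the next
# state and lam maps each state to an output letter. Chaining run()
# calls mimics transducer composition: the output word of one machine
# becomes the input word of the next.

class Transducer:
    def __init__(self, init, delta, lam):
        self.init, self.delta, self.lam = init, delta, lam

    def run(self, word):
        s, out = self.init, []
        for a in word:
            s = self.delta[(s, a)]
            out.append(self.lam[s])
        return out

def compose(first, second):
    """Feed the output word of `first` into `second`."""
    return lambda word: second.run(first.run(word))
```

For instance, a machine resembling the Target of the running example (U/D select a mode, S stays) can be composed with a second machine that relabels its output word.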

Additional Uses of our Technique
We give two more examples to demonstrate uses of Separated GR(k).
Cleaning Robots. This example demonstrates how one can use our technique to fulfill tasks that have not been covered by an execution of an existing transducer. Consider a cleaning robot (the Target transducer) that moves along a corridor-shaped house, from room 1 to room n. The robot follows some plan and accordingly cleans some of the rooms. Our goal is to synthesize a controller that activates a second cleaning robot (the Adaptee transducer) that follows the first robot and cleans exactly those rooms left uncleaned. Each robot controls a set of variables indicating which room it is in and which rooms it has cleaned, and in addition the original robot controls a variable indicating whether it is done with its cleaning. Our controller is required to fulfill requirements of the form: GF(done) ∧ GF(!in:clean_i) → GF(out:clean_i), and GF(done) ∧ GF(in:clean_i) → GF(!out:clean_i).
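For a concrete corridor of n rooms, the 2n requirements above can be generated mechanically. A small sketch (the textual spec syntax mimics the formulas above and is purely illustrative):

```python
def cleaning_spec(n):
    """Generate the 2n liveness requirements of the cleaning-robots
    example for an n-room corridor: the second robot must eventually
    clean exactly the rooms the first robot leaves uncleaned."""
    reqs = []
    for i in range(1, n + 1):
        reqs.append(f"GF(done) & GF(!in:clean{i}) -> GF(out:clean{i})")
        reqs.append(f"GF(done) & GF(in:clean{i}) -> GF(!out:clean{i})")
    return reqs
```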
Railway Signalling. This example demonstrates how one can use our technique to improve the quality of an existing transducer. We consider a junction of n railways, each equipped with a signal that can be turned on (green light) or off (red light). Some railways overlap and thus their signals cannot be turned on simultaneously. We consider an overlapping pattern where railways 1-4 overlap, and similarly 3-6, 5-8, and so on.
An existing system (the Target transducer) was programmed to be strictly safe in order to avoid accidents, so it never raises two signals simultaneously. We want to improve the system's performance by synthesizing a controller that reads the assignments that the existing transducer produces and accordingly assigns values to the signals so as to produce both safe and maximal valuations: the ith signal is turned on if and only if the signal of every rail that overlaps with the ith rail is off. Furthermore, we want to maintain liveness properties of the Target system: (1) every signal that is turned on infinitely often by the existing system must be turned on infinitely often by the new system as well, and (2) if a signal is turned on at least once every m steps (where m is a parameter of the specification) by the existing system, then the same holds for the new system.
Note that, in terms of the GR(k) formula, this example is similar to the "hardware" example that we gave: we want to emulate the Target's execution. The crux of the example lies in its Adaptee. Here, unlike in the explanatory example, the Adaptee is not a given hardware platform, but rather a virtual component that the user introduced to improve the Target's performance. In this case the Adaptee produces safe and maximal signals.

Overview for Solving Separated GR(k) Games
The adapter generation framework described in Section 4 relies on synthesizing a controller from a separated GR(k) game. In this section and the next, we describe how to solve separated GR(k) games. This section gives an overview of the algorithm in Section 5.1 and describes the delay property in Section 5.2, which is needed to prove the correctness of our synthesis algorithm. Section 6 then gives the complete algorithm and proves its correctness.

Algorithm Overview and Intuition
Following Section 3, we are given a Separated GR(k) game that consists of a game structure GS and a winning condition in GR(k) form ϕ = ⋀_{l=1}^{k} ϕ_l, where ϕ_l = (⋀_i GF f_{l,i}) → (⋀_j GF g_{l,j}). Let G be the game graph of GS. Consider an infinite play π in GS. Like every infinite path on a finite graph, π eventually stabilizes in an SCC S. Due to the separation of variables, the game graph G can be decomposed into an input graph G_I and an output graph G_O. Then the projection of S on the inputs is an SCC S_I in G_I, and the projection of S on the outputs is an SCC S_O in G_O. The input side of π converges to S_I whereas the output side of π converges to S_O. Now, let S be an SCC with projections S_I on G_I and S_O on G_O. We call S accepting if for every constraint ϕ_l, where l ∈ {1, . . . , k}, one of the following holds: either some assumption f_{l,i} is satisfied by no state of S_I (so the environment cannot satisfy all assumptions of ϕ_l while the play remains in S), or every guarantee g_{l,j} is satisfied by some state of S_O. From the definition of an accepting SCC we have the following: a strategy that makes sure that every play converges to an accepting SCC, in which all the relevant guarantee states are visited, is a winning strategy for the system in (GS, ϕ). To synthesize such a strategy, we do the following: (i) synthesize a strategy f_B for which every play converges to an accepting SCC; (ii) synthesize a strategy f_travel that travels within every accepting SCC, satisfying as many of the g_{l,j} guarantees as possible; (iii) construct an overall winning strategy f that works as follows: the system plays f_B until reaching an accepting SCC S, then switches to f_travel to satisfy as many of the g_{l,j} guarantees in S as possible; if the environment moves the play to a non-accepting SCC, the system starts playing f_B again to reach a different accepting SCC.
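Step (iii) can be sketched as a simple wrapper that switches between the two sub-strategies; f_B, f_travel, the set of accepting SCCs, and the scc_of lookup are assumed to be given (e.g., produced by the constructions described later):

```python
def combined_strategy(f_B, f_travel, accepting, scc_of):
    """Overall strategy f: inside an accepting SCC, follow f_travel to
    visit the guarantee states; otherwise follow f_B to steer the play
    back towards an accepting SCC."""
    def f(state, inp):
        if scc_of(state, inp) in accepting:
            return f_travel(state, inp)
        return f_B(state, inp)
    return f
```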
The strategy f_B can be found by solving the weak Büchi game (GS, GF(acc)), where acc is the assertion that holds exactly in those states that belong to accepting SCCs (note that (GS, GF(acc)) is a well-defined weak Büchi game). The strategy f_travel can be constructed by simply finding a path in S_O that satisfies the maximum number of guarantees.
A complication arises, however, when switching between f_travel and f_B, since it is conceivable that, while the system is following f_travel, the environment could move to a different SCC that is outside the winning region of f_B. Thus, it is not clear that we can combine these strategies into an overall winning strategy for the system. To show that we can indeed combine both strategies, we need the following property, which we call the delay property: if (i_1, o_1) is a state in the winning region of f_B, and (i_2, o_0) is a state for which there is a path in G_I from i_1 to i_2 and a path in G_O from o_0 to o_1, then (i_2, o_0) is also in the winning region of f_B. We formally state and prove the delay property in Section 5.2. In Section 6 we give details of the construction of f_B and f_travel, and the use of the delay property to prove correctness of the overall winning strategy f.

The Delay Property
The delay property essentially says that if an SCC S is contained in the winning region, and the environment unilaterally moves from S to a different SCC S', then S' is also in the winning region of the system. In this section, we prove that the Büchi game (GS, GF(acc)), where GS = (I, O, θ_I, θ_O, ρ_I, ρ_O) as defined in Section 5.1, satisfies the delay property. Throughout this section, we write G_I and G_O to denote the graphs over 2^I and 2^O, respectively, as in Section 5.1. We start with the following lemma, which states that the system can still win in spite of a single-step delay. Lemma 1. Let i_0, i_1 ∈ 2^I be such that (i_0, i_1) |= ρ_I, and assume that the system can win from (i_0, o_0). Then the system can also win from (i_1, o_0).
Proof. Let f be a winning strategy for the system from (i 0 , o 0 ). We construct a winning strategy f d from (i 1 , o 0 ). Intuitively, f d acts from state (i 1 , o 0 ) as if it were following f from state (i 0 , o 0 ), with a delay of a single step: the input in the current step is used to choose the output in the next step.
We use f to define f_d inductively over play prefixes of length m ≥ 1, by setting the output of f_d after the inputs i_1, . . . , i_m to be the output of f after the inputs i_0, . . . , i_{m−1}. In this way, a play consistent with f_d converges to an SCC Ŝ whose projections on G_I and G_O coincide with those of the SCC S to which the corresponding play of f converges. Since the conditions for an SCC D to be accepting depend only on the relation between D|_I and D|_O, we have that Ŝ is accepting since S is accepting as well.
We can now prove the delay property, which follows by straightforward induction from Lemma 1.

Theorem 1 (Delay Property). Let (i_0, o_0) be a state from which the system can win, let i_0, . . . , i_n be a path in G_I, and let o_{−m}, . . . , o_0 be a path in G_O. Then the system can also win from (i_n, o_{−m}).

Proof. From (i_n, o_{−m}), the system can simply ignore the inputs and follow the path in G_O to o_0. Let (i_{n+m}, o_0) be the state at that point in some play. Note that there is a path between i_n and i_{n+m}, and therefore there is a path between i_0 and i_{n+m}. Since the system can win from (i_0, o_0), by using Lemma 1 in the induction step, the system can win from (i', o_0) for every i' such that there is a path in G_I from i_0 to i'. Therefore, the system can win from (i_{n+m}, o_0), and consequently from (i_n, o_{−m}).
A corollary of Theorem 1 is the following statement about the structure of the winning region of the weak Büchi game B = (GS , GF(acc)) as defined in Section 5.1. We use Theorem 1 and Corollary 1 in the proof of correctness of the overall winning strategy f , as described in Section 6.2.

Algorithms for Solving Separated GR(k) Games
In this section we provide the exact details of our synthesis algorithm for Separated GR(k) games, as described in Section 5.1. Since constructing f_B involves defining and solving a weak Büchi game, we first describe this in Section 6.1. We remark that our weak Büchi game synthesis algorithm works for all weak Büchi games, and not just for the special weak Büchi game defined in Section 5.1. Specifically, it works even when the underlying game structure does not separate variables. Next, in Section 6.2, we complete the construction and prove the correctness of our overall synthesis algorithm. Full proofs of the theorems in this section appear in Appendix B.

Realizability and Synthesis for Weak Büchi Games
We present a symbolic algorithm for solving synthesis of weak Büchi games. In explicit state representation, weak Büchi games are known to be solvable in linear time in the size of the game [15,35]. In this section, we adapt the algorithm from [15,35] to a symbolic state-space representation. For the sake of exposition, we first give an overview of the algorithm and then present our symbolic modification.
Overview. Given a weak Büchi game, recall that each SCC in its game graph $G$ is either an accepting SCC or a non-accepting SCC. The goal is to find the winning regions in the weak Büchi game. This can be done by backward induction on the topological ordering of the SCCs as follows. Let $(S_0, \dots, S_m)$ be a topological sort of the SCCs in $G$.
Base Case: Consider all terminal SCCs, say $S_j, \dots, S_m$; that is, every SCC from which no other SCC is reachable. Plays beginning in a terminal SCC never leave it. Therefore, all states of accepting terminal SCCs are in the winning region of the system, and all states of non-accepting terminal SCCs are in the winning region of the environment.

Induction Step: Let $\mathcal{S} = (S_{i+1}, \dots, S_m)$, and suppose that $\mathcal{S}$ has been classified into winning regions $W^s_{i+1}$ and $W^e_{i+1}$ for the system and the environment, respectively. Let $\mathcal{S}_{new} = (S_j, S_{j+1}, \dots, S_i)$ be the SCCs from which all edges leaving the SCC lead to an SCC in $\mathcal{S}$. Further, let $A$ and $N$ be the unions of all accepting SCCs and all non-accepting SCCs in $\mathcal{S}_{new}$, respectively. The basic idea is then as follows: the system can win from $s \in N$ if and only if it can force $\mathsf{F}(W^s_{i+1})$ from $s$. Analogously, the system can win from $s \in A$ if and only if it can force $\mathsf{G}(A \cup W^s_{i+1})$ from $s$. Hence, by solving these reachability and safety games, we can update $W^s_{i+1}$ and $W^e_{i+1}$ into $W^s_j$ and $W^e_j$ that partition the larger set $(S_j, \dots, S_m)$ into winning regions for the system and the environment. The winning strategy can be constructed in a standard way as a side-product of the reachability and safety games in each step; see for example [58,59].
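The backward induction above can be sketched in explicit-state form. The following is a minimal illustration, not our symbolic algorithm: it assumes a turn-based game whose SCC decomposition and topological sort are given as input, and all names (`cpre`, `solve_weak_buchi`, the `owner` map) are invented for the sketch.

```python
def cpre(edges, owner, target):
    """Controllable predecessor: states from which the system can force the
    play into `target` in one step (system states need SOME successor in the
    target, environment states need ALL successors in it)."""
    preds = set()
    for s, succs in edges.items():
        if owner[s] == "sys":
            if any(t in target for t in succs):
                preds.add(s)
        else:
            if all(t in target for t in succs):
                preds.add(s)
    return preds

def solve_weak_buchi(sccs, edges, owner, accepting):
    """Backward induction over a topological sort `sccs` of the SCCs.
    Accepting SCCs: greatest fixed point (stay in SCC ∪ win forever);
    non-accepting SCCs: least fixed point (force reaching win)."""
    win = set()
    for scc in reversed(sccs):
        scc = set(scc)
        if scc <= accepting:
            z = scc | win
            while True:
                z2 = {s for s in z if s in win or s in cpre(edges, owner, z)}
                if z2 == z:
                    break
                z = z2
            win |= z & scc
        else:
            while True:
                add = scc & cpre(edges, owner, win)
                if add <= win:
                    break
                win |= add
    return win
```

The per-SCC fixed points mirror the $\mathsf{G}(A \cup W^s)$ and $\mathsf{F}(W^s)$ games of the induction step; the symbolic algorithm of Section 6.1 replaces the explicit sets with BDDs.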
Symbolic Algorithm for Weak Büchi Games. Given a weak Büchi game $B = ((I, O, \theta_I, \theta_O, \rho_I, \rho_O), \mathsf{GF}(acc))$ with BDDs representing $\theta_I$, $\theta_O$, $\rho_I$, $\rho_O$ and $acc$, our goal is to compute a BDD for the winning region and to synthesize a memoryless winning strategy for the system. The construction follows a fixed-point computation that adapts the inductive procedure described in the overview: in the basis of the fixed-point computation, the winning region is the set of accepting terminal SCCs; in the inductive step, the winning region is extended with winning states obtained by examining SCCs that are higher in the topological ordering of the SCCs. In what follows we describe a sequence of BDDs that we construct towards the overall BDD for the winning region. We use $X$ to denote the set of variables $I \cup O$. For the sake of the current construction, memoryless strategies are given in the form of BDDs over $X, X'$; for further details on the BDD constructions, refer to Appendix A.
BDD constructions. We start by constructing a BDD $SCC(X, X')$ for a predicate that indicates whether two states in a game structure belong to the same SCC.

Computing the Winning Region. We now describe the fixed-point computation that constructs a BDD for the winning region in a weak Büchi game. Let $Reachability_{(M,N)}(X)$ denote a BDD generated by solving a reachability game that takes as input a set of source states $M$ and target states $N$ and outputs those states in $M$ from which the system can guarantee to move into $N$. Similarly, let $Safety_{(M,N)}(X)$ denote a BDD generated by solving a safety game that takes as input a set of source states $M$ and target states $N$ and outputs those states in $M$ from which the system can guarantee that all plays remain inside the set $N$. These constructions are standard; details can be found in [25, Chapter 2]. Now, let $Win(s)$ denote that state $s$ over $I \cup O$ is in the winning region. Then, $Win(X)$ is the fixed point of the BDD $Win\_Aux$ defined below, where the construction essentially follows the high-level algorithm description. The BDD $Acc(X)$ represents the formula $acc$ encoding the set of accepting states. In addition, $DC_i(X)$ is the union $\mathcal{S}$ of the downward-closed set of SCCs, i.e., the SCCs that have already been classified into winning or not winning, and $DC_i^{new}(X)$ is the union $\mathcal{S}_{new}$ of the SCCs in $DC_i(X)$ that were not in $DC_{i-1}(X)$. Finally, $N_i(X)$ is the subset $N$ of non-accepting states in $DC_i^{new}(X)$, and $A_i(X)$ is the subset $A$ of accepting states in $DC_i^{new}(X)$. We then define $Win\_Aux$ as follows.
To explain the construction of $Win$, note that a state $s$ in $DC_{i+1}(X)$ is winning in one of these cases: (i) $s$ is a winning state in $DC_i(X)$; (ii) $s$ is a non-accepting state in $DC_{i+1}(X)$ from which the system can force the play into a winning state in $DC_i(X)$; this set of states can be obtained from $Reachability_{(N_{i+1}(X),\, Win\_Aux_i(X'))}(X)$; (iii) $s$ is an accepting state in $DC_{i+1}(X)$ from which the system can guarantee that every play that leaves the accepting SCC moves into a winning state in $DC_i(X)$; this set of states can be obtained from $Safety_{(A_{i+1}(X),\, A_{i+1}(X) \vee Win\_Aux_i(X'))}(X)$.
The fixed-point computation can be extended in a standard way to also compute a BDD representation $Fb(X, X')$ of the winning strategy $f_B$, such that $(s, (i', o')) \models Fb(X, X')$ iff $f_B(s, i') = o'$. See Appendix B.1 for details. We then have the following theorem, which follows from our construction.

Realizability and Synthesis for Separated GR(k) Games
We finally make use of the elements obtained so far towards solving synthesis for Separated GR(k) games. Our construction follows the overview from Section 5.1. To recall, we describe and construct two auxiliary strategies f B and f travel and combine them to generate the final strategy f . We use the delay property theorem from Section 5.2 to prove the correctness of our algorithm.
We are given a Separated GR(k) game structure $GS = (I, O, \theta_I, \theta_O, \rho_I, \rho_O)$ and a winning condition $\varphi = \bigwedge_{l=1}^{k} \varphi_l$, where $\varphi_l = (\bigwedge_{i=1}^{n_l} \mathsf{GF}(a_{l,i}) \rightarrow \bigwedge_{j=1}^{m_l} \mathsf{GF}(g_{l,j}))$. We first represent $GS$ and $\varphi$ as BDDs by standard means. We then define and construct the following.
Constructing $f_B$. The auxiliary strategy $f_B$ is the winning strategy of the system player in a weak Büchi game constructed from the Separated GR(k) game. To construct the weak Büchi game, we first construct, in $O(|\varphi| + N)$ symbolic operations, a BDD $Acc(I \cup O)$ that describes the set of accepting states. The construction is standard; see Lemma 5 in Appendix B.2 for details. Next, let $acc$ be the assertion represented by $Acc$ (the assertion defined in Section 5.1). Then the weak Büchi game is $B = (GS, \mathsf{GF}(acc))$. Finally, we construct $f_B$ as the winning strategy of $B$, following Section 6.1 and Appendix B.1.
Constructing $f_{travel}$. For the construction of $f_{travel}$, we arbitrarily order all guarantees that appear in our GR(k) formula: $gar_0, \dots, gar_{m-1}$. For each guarantee $gar_j$, we construct a reachability strategy $f_{r(j)}$ that, when applied inside an SCC $S_O$ in the output game graph $G_O$, moves towards a state that satisfies $gar_j$ without ever leaving $S_O$. In case no such state exists in $S_O$, $f_{r(j)}$ returns a distinguished value $\bot$. Note that this strategy can entirely ignore the inputs. We equip $f_{travel}$ with a memory variable $mem$ that stores values from $\{0, \dots, m-1\}$. Then $f_{travel}(s, i)$ operates as follows: starting from $mem$, we find the first index $mem + j \pmod m$ such that the SCC of $s$ includes a state of the corresponding guarantee, and activate $f_{r(mem+j)}$ to reach such a state. If no guarantee can be satisfied in $S_O$, we just return an arbitrary output that stays in $S_O$. The construction of $f_{travel}$ requires $O(|\varphi| N)$ symbolic BDD operations, as we need to construct $m$ reachability strategies (clearly, $m \le |\varphi|$).
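The round-robin behavior of $f_{travel}$ can be illustrated with an explicit-state sketch over the output graph $G_O$ (all names are hypothetical; the paper's construction is symbolic, over BDDs):

```python
from collections import deque

def step_towards(scc, edges, src, targets):
    """One BFS step from `src` toward the nearest state in `targets`,
    never leaving `scc`. Returns None if no target is reachable."""
    if src in targets:
        return src
    parent = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in edges.get(u, []):
            if v in scc and v not in parent:
                parent[v] = u
                if v in targets:
                    while parent[v] != src:   # walk back to the first move
                        v = parent[v]
                    return v
                queue.append(v)
    return None

def f_travel(scc, edges, guarantees, state, mem):
    """Pick the first guarantee (cyclically, starting at `mem`) satisfiable
    in `scc` and move one step toward it; advance the memory once a
    guarantee state is reached."""
    m = len(guarantees)
    for j in range(m):
        idx = (mem + j) % m
        targets = {o for o in scc if guarantees[idx](o)}
        if targets:
            nxt = step_towards(scc, edges, state, targets)
            if nxt is not None:
                new_mem = (idx + 1) % m if guarantees[idx](nxt) else idx
                return nxt, new_mem
    # no guarantee satisfiable: return an arbitrary output staying in scc
    for v in edges.get(state, []):
        if v in scc:
            return v, mem
    return state, mem
```

On a three-state output cycle, the sketch visits the guarantee states one after the other, cycling the memory variable exactly as described above.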
Constructing the overall strategy f . Finally, we interleave the strategies f B and f travel into a single strategy f as follows: given a state s and an input i, if s |= Acc(X) (that is, if s is an accepting state), then set f (s, i) = f travel (s, i); otherwise set f (s, i) = f B (s, i). Whenever f switches from f B to f travel , the memory variable mem is reset to 0. The next lemma proves that if f B is winning then so is f .
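A minimal sketch of this interleaving, with the acceptance test, $f_B$ and $f_{travel}$ passed in as assumed callables (all names hypothetical; here $f_{travel}$ takes and returns the memory value explicitly):

```python
def make_f(acc, f_B, f_travel):
    """Combine f_B and f_travel: use f_travel inside accepting SCCs,
    f_B elsewhere, resetting mem whenever we switch into f_travel."""
    mem = {"val": 0, "in_travel": False}
    def f(state, inp):
        if acc(state):                       # accepting SCC: chase guarantees
            if not mem["in_travel"]:
                mem["val"] = 0               # reset on switching f_B -> f_travel
            mem["in_travel"] = True
            out, mem["val"] = f_travel(state, inp, mem["val"])
        else:                                # otherwise: follow the Büchi strategy
            mem["in_travel"] = False
            out = f_B(state, inp)
        return out
    return f
```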

Lemma 2.
If f B is a winning strategy for the weak Büchi game B = (GS , GF(acc)), then f is a winning strategy for the Separated GR(k) game (GS , ϕ).
Proof. Since $f_B$ is a winning strategy, for every initial input $i \models \theta_I$ there is an initial output $o \models \theta_O$ such that $(i, o)$ is in the winning region of $GS$.
We show that playing $f$ always keeps the play in the winning region of $B$, and therefore the play eventually converges to an accepting SCC. Once this happens, following $f_{travel}$ guarantees that $\varphi$ is satisfied. We know that as long as the play is in the winning region of $B$, following $f_B$ keeps it inside the winning region. Therefore, when we switch from $f_B$ to $f_{travel}$, we must be inside the winning region and, by definition of $f$, in some accepting SCC $S$. Then $f_{travel}$ makes sure that as long as the environment remains in $S|_I$, the projection of $S$ over the inputs, the system remains in $S|_O$, the projection of $S$ over the outputs. Thus, all in all, the play remains in the winning region.
Therefore, the only way the play can leave the winning region is if, when the system is in a state $(i_0, o_0)$ and chooses some output $o_{-m}$ according to $f_{travel}$, the environment chooses an input $i_n$ such that the play leaves $S$ and moves to a state $(i_n, o_{-m})$ in a different SCC of $G$. Note, however, that in this case there is a path from $i_0$ to $i_n$ and a path from $o_{-m}$ to $o_0$ (since, by construction, $f_{travel}$ remains in the same SCC of $G_O$). Since $(i_0, o_0)$ is in the winning region, by Theorem 1 we have that $(i_n, o_{-m})$ is in the winning region as well.
Final Results. Given Lemma 2, we can obtain our final results on synthesis and realizability of Separated GR(k) games as follows. Given a Separated GR(k) game $(GS, \varphi)$, construct $acc$ and solve the weak Büchi game $(GS, \mathsf{GF}(acc))$. Then construct $f_B$, $f_{travel}$ and $f$ as described above. If $(GS, \mathsf{GF}(acc))$ is realizable, then $f_B$ is a winning strategy, and from Lemma 2 we have that $f$ is a winning strategy for $(GS, \varphi)$. If $(GS, \mathsf{GF}(acc))$ is unrealizable, then the environment can force every play to converge to a non-accepting SCC. Since the GR(k) winning condition cannot be satisfied from a non-accepting SCC, $(GS, \varphi)$ is also unrealizable. Thus we have the following theorem; see Appendix B.2 for details.

Theorem 3. Realizability for Separated GR(k) games can be reduced to realizability of weak Büchi games.
The final result, on solving Separated GR(k) games, is stated as Theorem 4 in Appendix B.2. It is an improvement over the complexity of GR(k) games in general [49].

Implementation and Evaluation
We have implemented our Separated GR(k) framework for realizability and synthesis in a prototype tool SGR(k). The tool implements our symbolic algorithm using the CUDD package.

The multi-mode hardware example is a generalization of the example presented at the beginning of Section 4. It is parameterized by the number of bits $n$ and has $2^n$ modes. The Target can move from mode 0 to any mode and stay there, while the Adaptee can only move from mode 0 to odd-numbered modes, and up and down between modes $2i$ and $2i+1$. The specification consists of $2n$ variables. We generate 10 such benchmarks with $n \in \{1, \dots, 10\}$.
The cleaning robots example is parameterized by the number of rooms. For a scenario with $n$ rooms, the specification is written over $4n+1$ variables. We generate 10 such benchmarks with $n \in \{1, \dots, 10\}$.
The railway signalling example has two parameters: the number of railways $n$ in a junction and the frequency parameter $m$. With parameters $n$ and $m$, the specification consists of $(2 + 2\lceil \log m \rceil)n$ variables. We generate 18 benchmarks with $n \in \{2, \dots, 10\}$ and $m \in \{2, 3\}$.
Experimental Setup and Methodology. We evaluate our tool against Strix [1,38,43], the current state-of-the-art tool for LTL synthesis and the SYNTCOMP 2020 winner of 3 out of 4 tracks [2]. In order to run our benchmarks on Strix, we transform each benchmark (a game structure and a winning condition) into an LTL formula that characterizes the same winning plays, using the strict semantics from [27]. To the best of our knowledge, there is no other synthesis/realizability tool that operates on GR(k) specifications.
We compare the running time for checking realizability: we measure the runtime of the realizability check for each benchmark on both tools. Every benchmark is tested 10 times on each tool, to account for the randomness introduced during BDD construction by CUDD's automatic variable ordering. For each benchmark we evaluate (a) the number of executions on which the tools terminate, and (b) the mean running time over the 10 executions.
All experiments were executed on a single node of a high-performance computer cluster, consisting of an Intel Xeon processor running at 2.6 GHz with 32GB of memory, with a timeout of 10 minutes.

Observations and Inferences. Our experiments clearly demonstrate the scalability and efficiency of our tool in solving Separated GR(k) formulas. Figure 4 plots the mean running times for the three benchmark families, and Table 1 reports the mean values; the table rows refer to the benchmarks. The results show that our tool solves a significantly larger number of benchmarks than Strix. On the few benchmarks that Strix solves, our tool outperforms it by several orders of magnitude. Although the running time may vary depending on the automatic variable ordering chosen by CUDD, we do not believe it would vary enough to significantly change the results. Specifically, we calculated the 99% confidence interval for our results, and validated that for every data point our tool's entire interval lies below the entire interval for Strix.
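The interval check can be reproduced along the following lines; this is a sketch using a normal approximation from the Python standard library, not necessarily the exact procedure used in the evaluation (with only 10 runs, a Student-t quantile would give slightly wider intervals):

```python
from statistics import NormalDist, mean, stdev

def ci99(samples):
    """99% confidence interval for the mean of `samples` (normal approx.)."""
    z = NormalDist().inv_cdf(0.995)          # two-sided 99% -> 0.995 quantile
    m = mean(samples)
    half = z * stdev(samples) / len(samples) ** 0.5
    return (m - half, m + half)

def intervals_disjoint(faster, slower):
    """True iff the entire CI of `faster` lies below the entire CI of `slower`."""
    return ci99(faster)[1] < ci99(slower)[0]
```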
Only three benchmarks were unsolvable by our tool (in the sense that the majority of the 10 executions timed out). These are the railway signalling examples with $(n = 10, m = 2)$, $(n = 9, m = 3)$, and $(n = 10, m = 3)$. These benchmarks consist of a large number of variables (40, 54, and 60, respectively), making them particularly challenging. All executions of the remaining benchmarks were solved in less than 4 minutes by our tool.
We also examined the number of solved executions per benchmark. Our tool solved all 10 executions for 35 out of 38 benchmarks; these are the 35 benchmarks that appear as solved in Figure 4. For the railway signalling benchmark with $(n = 10, m = 2)$, our tool solved 2 out of 10 executions. In contrast, Strix was not able to solve even one execution for 31 out of 38 benchmarks. Even increasing the timeout to 8 hours allowed Strix to solve only a single additional benchmark. In total, Strix and our tool verified realizability of 7 and 36 out of the 38 benchmarks, respectively.
In summary, our experiments demonstrate that our tool is able to solve specifications which are challenging for existing tools.

Related Work
The Adapter design pattern was introduced in [23] and has since been used in many software contexts. Our interpretation of the pattern is inspired by the automata-based description of the pattern proposed by Pedrazzini [48]. We reformulate the problem as synthesis of reactive controllers that compose with existing systems to achieve a temporal specification. This is an active area of research [4,8,9,16,18,22]. Our work differs from existing frameworks in its variable-separation feature. Shield synthesis is a notably similar problem, in which a synthesized controller corrects safety violations of an existing controller [30]. In contrast, our problem is mostly concerned with liveness adaptation.
Reactive synthesis of LTL winning conditions is 2EXPTIME-complete in the size of the formula [51], making it difficult to scale for applications. One approach to overcoming this computational barrier has been to investigate fragments and variants of LTL with lower synthesis complexity [5,17,20,37,60]. GR(k) is one such fragment [11]; it offers a balance between efficiency and expressiveness. GR(k) games can be solved efficiently, in time exponential in the number of conjunctions $k$ rather than in the state space [49]. Several studies have also shown that GR(k) is highly expressive. As evidence, all properties expressible by deterministic Büchi automata (DBA) can be expressed in GR(k) [20]. A study of commonly appearing LTL patterns has shown that 52 of 55 patterns are DBA properties [19,40]. DBA properties have also been identified as common patterns in robotics applications [42].
Finally, Separated GR(k) games exhibit the delay property. Intuitively, this property means that the system can win even after delaying its action for a finite amount of time while ignoring the environment before "catching up" with the environment. While this is reminiscent of asynchrony in reactive systems [7,52,55], a further exploration of relations between asynchrony and the delay property is required.

Conclusion
This paper presents a reactive-systems-based model of the adapter design pattern. We model adapters as transducers and reduce the problem of finding an Adapter transducer for given Adaptee and Target systems to the problem of synthesizing strategies for Separated GR(k) games. Through an analysis of theoretical complexity and algorithmic performance, we show that realizability and synthesis of Separated GR(k) games are efficient and scalable. Furthermore, by outperforming Strix, an existing state-of-the-art synthesis tool, we show that algorithms for the Separated GR(k) class of specifications add value to the portfolio of reactive synthesis tools.
The benefits of separating input and output variables were previously shown in the context of Boolean functional synthesis [14]. Through this work, we showed that the separation leads to practically viable solutions also in the context of temporal reactive synthesis, more specifically when encoding the types of equivalence relations that appear in reactive adaptation, where properties of runs of one system are compared to properties of runs of the other. Since the systems may be loosely coupled, i.e., they may not run on the same clock, specifications that impose joint temporal constraints on the two systems may not be realizable. Thus, our proposal to use the type of equivalence that Separated GR(k) formulas allow gives users the power needed to compare the overall behaviors of the systems while allowing realizability and efficient synthesis.
The results presented in this paper encourage future studies on the separation of variables in a broader context. For instance, one could reason about variants of the adapter design pattern that do not separate variables all the way through, that is, variants that translate to more general GR(k) specifications in which the separation appears in the input and output systems but not in the specification itself. One could further study the notion of separation of variables in the more general setting of LTL specifications. Another direction is to consider systems that get two types of input: from the existing system (i.e., the Target) as well as from an environment. We believe that these future directions will enable the development of tools for synthesis from temporal specifications with a focus on expressing realistic applications as well as ensuring scalability and efficiency for practical needs.

A Binary Decision Diagrams
A Binary Decision Diagram (BDD) [12,28] is a DAG representation of an assertion. The non-leaf BDD nodes are labeled with variable names, the BDD edges with variable valuations, and the BDD leaves with true/false. BDDs are a useful representation as they support Boolean operations and satisfaction queries that can be computed efficiently [13]. Various mature tools that implement BDDs exist, e.g., [33,44,54,56,57].
We say that a BDD $B$ is over $V_1, \dots, V_n$ if it represents an assertion over $V_1 \cup \dots \cup V_n$, and write $B(V_1, \dots, V_n)$. For a BDD $B(V_1, \dots, V_n)$ and states $s_1, \dots, s_n$ over $V_1, \dots, V_n$, respectively, we write $(s_1, \dots, s_n) \models B(V_1, \dots, V_n)$ if the assertion that $B(V_1, \dots, V_n)$ represents evaluates to true under the states $s_1, \dots, s_n$. For a set of variables $V$, we write $V = V'$ to denote the BDD that represents the assertion $\bigwedge_{v \in V}(v = v')$. We also employ the following symbolic operations:
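As a toy illustration of this satisfaction semantics (not an actual BDD implementation), one can model an assertion explicitly as its set of satisfying assignments; conjunction over the same variables is then set intersection, and existential quantification and the $V = V'$ predicate look as follows (all function names are invented for the sketch):

```python
from itertools import product

def assertion(variables, pred):
    """The set of assignments over `variables` satisfying `pred`; each
    assignment is stored as a frozenset of (variable, value) pairs."""
    sats = set()
    for values in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, values))
        if pred(a):
            sats.add(frozenset(a.items()))
    return sats

def exists(sats, var):
    """Existential quantification over `var`: drop it from every assignment."""
    return {frozenset((v, b) for v, b in a if v != var) for a in sats}

def identity(V):
    """The V = V' predicate: every v agrees with its primed copy v'."""
    both = V + [v + "'" for v in V]
    return assertion(both, lambda a: all(a[v] == a[v + "'"] for v in V))
```

A real BDD package performs the same operations on the DAG representation, without ever enumerating assignments.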

B Proofs for Section 6
We turn to prove Theorems 2, 3 and 4, given in Section 6. Throughout the appendix, we use the fact that ω-regular games are determined [25,41]. That is, for each game with an ω-regular winning condition and a state $s$, either the system has a winning strategy from $s$, or the environment has a spoiling strategy from $s$: a strategy that forces the negation of the winning condition regardless of the system's strategy.

B.1 Proof for Theorem 2
In this section we prove Theorem 2.
Recall that we denote the union $I \cup O$ by $X$, and that we presented an $O(N)$-time construction of the following BDDs (see Section 6.1, BDD Constructions): $SCC(X, X')$, where $(s, t') \models SCC(X, X')$ iff $s$ and $t$ belong to the same SCC of $G$, the game graph of the game structure $GS$; and $Terminal(X)$, where $s \models Terminal(X)$ iff $s$ belongs to a terminal SCC $S$, i.e., no other SCC is reachable from $S$.
To provide a rigorous proof of Theorem 2, in Algorithm 1 we modify the weak Büchi winning-states construction from Section 6 so that it also constructs a corresponding winning strategy. We recall some notation. Given sets of states $A$ and $B$: $Reachability_{(A,B)}(X)$ is a BDD for the states in $A$ from which the system can force reaching $B$; $Safety_{(A,B)}(X)$ is a BDD for the states in $A$ from which the system can force the play to stay in $B$.
In addition to those BDDs, to reason about the weak Büchi strategy construction, we also consider their corresponding (memoryless) strategies: $FReachability_{(A,B)}(X, X')$ and $FSafety_{(A,B)}(X, X')$. Note that these two strategies are given as BDDs over $I \cup O \cup I' \cup O'$, in compliance with the discussion in Appendix A. We argue that Algorithm 1 is correct.
Lemma 3. The fixed points of $Win\_Aux(X)$ and $Fb\_Aux(X, X')$ in Algorithm 1 are BDDs for the winning states and a winning strategy in the weak Büchi game $(GS, \mathsf{GF}(Acc))$, respectively.
Proof. Let $Win(X)$ and $Fb(X, X')$ be the fixed points of $Win\_Aux(X)$ and $Fb\_Aux(X, X')$. We divide the proof into soundness and completeness. That is, we show that (a) $Fb(X, X')$ wins from $Win(X)$, and (b) no other state is winning for the system. Soundness. We show, by induction, that for each $i$, $Fb\_Aux_i(X, X')$ is a winning strategy from $Win\_Aux_i(X)$. The claim clearly holds for $Fb\_Aux_0(X, X')$, because every strategy is winning from the accepting terminal SCCs.
For the induction step, take $s \in Win\_Aux_{i+1}(X)$.

Proof. We start with $Reachability_{(N_{i+1}(X),\, Win\_Aux_i(X'))}(X)$. To compute this set of states, we perform the following fixed-point computation in the sub-game over $DC_{i+1}(X)$: $Z_0 = Win\_Aux_i(X)$, and $Z_{j+1} = pre_s(Z_j) \cup Z_j$, where $pre_s(Z)$ is the set of states from which the system can force reaching $Z$ in a single step. Thus, $Z_0 \subseteq Z_1 \subseteq \cdots$, and the winning region is the $Z_j$ that satisfies $Z_j = Z_{j+1}$. We refer the reader to [25, Chapter 20] for a comprehensive discussion. Now, at each step $j_0 < j$, at least one state is added to $Z_{j_0}$, so there are at most $|DC_{i+1}(X) \setminus Win\_Aux_i(X)|$ steps, each performed in $O(1)$ operations. Furthermore, if $s \in DC_i(X) \setminus Win\_Aux_i(X)$, then we never add $s$ to a set $Z_{j_0}$. This claim is argued as follows: if we added $s$ to some $Z_{j_0}$, then the system could force reaching $Win\_Aux_i(X)$ from $s$, in contradiction to what we proved in Lemma 3 (specifically, the completeness). Hence, an upper bound of $O(|DC_{i+1}(X) \setminus DC_i(X)|)$ operations is established.
A similar argument proves the same for $Safety_{(A_{i+1}(X),\, A_{i+1}(X) \vee Win\_Aux_i(X'))}(X)$. This BDD is computed by $Z_0 = A_{i+1}(X) \vee Win\_Aux_i(X')$, and $Z_{j+1} = Z_j \cap pre_s(Z_j)$. Thus, $Z_0 \supseteq Z_1 \supseteq \cdots$, and the winning region is the $Z_j$ that satisfies $Z_j = Z_{j+1}$. As before, at each step $j_0 < j$ at least a single state is removed from $Z_{j_0}$, and hence an upper bound of $O(|A_{i+1}(X) \vee Win\_Aux_i(X')|)$ steps holds. Furthermore, a state $s \in Win\_Aux_i(X)$ is never removed from any $Z_j$. This claim is argued as follows: take $s \in Win\_Aux_i(X)$, and assume that $s \in Z_{j_0} \setminus Z_{j_0+1}$. Consequently, the system cannot win the safety game from $s$. Hence, the environment has a strategy to reach $DC_i(X) \setminus Win\_Aux_i(X)$ from $s$, in contradiction to what we proved in Lemma 3. Therefore, the computation terminates after at most $O(|A_{i+1}(X)|) \le O(|DC_{i+1}(X) \setminus DC_i(X)|)$ steps, as required.
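Both fixed-point computations can be sketched in explicit-state form (illustration only; `pre_s` is the one-step controllable predecessor, and the `owner` map is an invented stand-in for the turn structure):

```python
def pre_s(edges, owner, target):
    """One-step controllable predecessor of `target`."""
    return {s for s, succs in edges.items()
            if (any(t in target for t in succs) if owner[s] == "sys"
                else all(t in target for t in succs))}

def reachability(edges, owner, goal):
    """Least fixed point: Z_0 = goal, Z_{j+1} = Z_j ∪ pre_s(Z_j)."""
    z = set(goal)
    while True:
        z2 = z | pre_s(edges, owner, z)
        if z2 == z:
            return z
        z = z2

def safety(edges, owner, safe):
    """Greatest fixed point: Z_0 = safe, Z_{j+1} = Z_j ∩ pre_s(Z_j)."""
    z = set(safe)
    while True:
        z2 = z & pre_s(edges, owner, z)
        if z2 == z:
            return z
        z = z2
```

The monotone inclusion chains $Z_0 \subseteq Z_1 \subseteq \cdots$ and $Z_0 \supseteq Z_1 \supseteq \cdots$ in the proof correspond exactly to the two loops.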
We turn to prove Theorem 2.
Proof (of Theorem 2). By Lemma 3, Algorithm 1 computes both the system's winning region and a corresponding memoryless winning strategy. Lemma 4 implies that the algorithm terminates in time $O(N)$.

B.2 Proofs for Theorem 3 and Theorem 4
In this section we prove:

Theorem 3. Realizability for Separated GR(k) games can be reduced to realizability of weak Büchi games.

First, we identify all states whose SCC includes a $g_{l,j}$-state. For $j \in \{1, \dots, m_l\}$, let $GAR_{l,j}(X, X')$ be a BDD such that $(i_0, o_0, i_1', o_1') \models GAR_{l,j}(X, X')$ iff $o_1' \models g_{l,j}$. Write $SGAR_{l,j}(X) := \exists X' (SCC \wedge GAR_{l,j})$. Therefore, $s \models SGAR_{l,j}(X)$ iff the SCC of $s$ includes a $g_{l,j}$-state. Now, we focus on the assumptions. Similarly to the former case, let $ASM_{l,i}(X, X')$ be a BDD such that $(i_0, o_0, i_1', o_1') \models ASM_{l,i}$ iff $i_1' \models a_{l,i}$. Write $SASM_{l,i}(X) := \forall X' (SCC(X, X') \rightarrow \neg ASM_{l,i}(X, X'))$. Therefore, $s \models SASM_{l,i}(X)$ iff $s$'s SCC does not include an $a_{l,i}$-state.
As a result, the BDD $Acc := \bigwedge_{l=1}^{k} ((\bigvee_{j=1}^{m_l} SGAR_{l,j}) \vee (\bigvee_{i=1}^{n_l} SASM_{l,i}))$ satisfies the requirement, and its construction is performed in time $O(|\varphi| + N)$.
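An explicit-state reading of this construction (illustrative only; `scc_of` and the predicate lists are hypothetical stand-ins for the BDDs): a state is accepting iff, for every conjunct $l$, its SCC either contains some $g_{l,j}$-state or contains no $a_{l,i}$-state.

```python
def accepting_states(states, scc_of, assumptions, guarantees):
    """`scc_of` maps a state to its SCC (a frozenset of states);
    `assumptions[l]` / `guarantees[l]` are lists of predicates on states."""
    acc = set()
    for s in states:
        scc = scc_of[s]
        ok = True
        for asm_l, gar_l in zip(assumptions, guarantees):
            some_gar = any(g(t) for g in gar_l for t in scc)  # SGAR case
            no_asm = not any(a(t) for a in asm_l for t in scc)  # SASM case
            if not (some_gar or no_asm):
                ok = False
                break
        if ok:
            acc.add(s)
    return acc
```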
After constructing the BDD Acc(X ) for acc, we can obtain a winning strategy f B for the weak Büchi game (GS , GF(acc)) via Algorithm 1. Algorithm 2 then describes how to construct a winning strategy f for the separated GR(k) game (GS , ϕ) using acc, f B and the strategy f travel for visiting all guarantees in an SCC.
The strategy $f_{travel}$ is described in Section 6.2 and detailed in the second part of Algorithm 2. Note that, for the sake of presentation, we assume that if $s \in S$, an SCC that includes no $gar_j$-state, then the reachability strategy $f_{r(j)}(s, i)$ for guarantee $gar_j$ returns a distinguished value $\bot$. If all $f_{r(j)}$ return $\bot$, then no guarantee is satisfiable in $S$, and therefore $f_{travel}$ simply returns an arbitrary output that remains inside $S$ (line 17). Also note that $f_{travel}$ updates its memory variable $mem$ whenever it reaches a state that satisfies a guarantee (line 12), and $f$ resets $mem$ whenever it switches from $f_{travel}$ to $f_B$ (line 4).
Lemma 6. Let W be the system's winning region for the weak Büchi game B = (GS , GF(acc)). Then, Algorithm 2 constructs a winning strategy from W .
Proof. We will prove that if s ∈ W , then the strategy f described in Algorithm 2 wins (GS , ϕ) from s. We reason by cases.
First, consider the case where s |= acc, i.e., s is in an accepting SCC S. Then, f (s, i) returns the output of f travel (s, i). There are two possibilities: