Finite-Trace and Generalized-Reactivity Specifications in Temporal Synthesis

Linear Temporal Logic (LTL) synthesis aims at automatically synthesizing a program that complies with desired properties expressed in LTL. Unfortunately, full LTL synthesis has proven to be computationally too difficult to perform in practice. There have been two success stories with LTL synthesis, both having to do with the form of the specification. The first is the GR(1) approach: use safety conditions to determine the possible transitions in a game between the environment and the agent, plus one powerful notion of fairness, Generalized Reactivity(1), or GR(1). The second, inspired by AI planning, focuses on finite-trace temporal synthesis, with LTLf (LTL on finite traces) as the specification language. In this paper we take these two lines of work and bring them together. We first study the case in which we have an LTLf agent goal and a GR(1) assumption. We then add to the framework safety conditions for both the environment and the agent, obtaining a highly expressive yet still scalable form of LTL synthesis.


Introduction
Program synthesis is considered the culmination of the ideal of declarative programming [Finkbeiner, 2016; Ehlers et al., 2017]. By describing a system in terms of what it should do, instead of how it should do it, we are able, on the one hand, to simplify the program design process while avoiding human mistakes and, on the other hand, to allow an autonomous agent to program itself just from high-level specifications. Linear Temporal Logic (LTL) synthesis [Pnueli and Rosner, 1989] is possibly the most popular variant of program synthesis: the problem of automatically designing a reactive system with the guarantee that all its behaviors comply with desired dynamic properties expressed in LTL, the most used system/process specification language in Formal Methods. Unfortunately, this dream of LTL synthesis has proven too difficult to realize: in spite of a full-fledged theory, we still do not have good scalable algorithms after more than 30 years [Kupferman, 2012].
There have been two successful responses to these difficulties, both having to do with limiting the expressive power of the formalism used for the specification. The first approach, developed in Formal Methods, has been what we may call the GR(1) response [Bloem et al., 2012]: essentially, you focus on safety conditions, determining the possible transitions in a game between the environment and the agent, plus one powerful notion of fairness called Generalized Reactivity(1), or GR(1). This approach has found numerous applications, for example in robotic motion-and-mission planning [Kress-Gazit et al., 2009]. The second approach, developed in AI and inspired by classical AI planning, focuses on finite-horizon temporal synthesis, with LTLf (LTL on finite traces) [De Giacomo and Vardi, 2013] as the specification language. In this approach [De Giacomo and Vardi, 2015], we specify the agent's goal in LTLf, possibly together with some assumptions on the environment, such as safety conditions, possibly specified as nondeterministic planning domains [Camacho et al., 2017; De Giacomo and Rubin, 2018; Aminof et al., 2018; Camacho et al., 2018; He et al., 2019], or simple fairness and stability conditions (both special cases of GR(1) fairness) [Zhu et al., 2020]. There are also studies in which general LTL assumptions are used for LTLf goals, but in this case the difficulties of handling LTL can indeed manifest [Camacho et al., 2018; Aminof et al., 2019; De Giacomo et al., 2020b]. Since LTLf is a fragment of LTL, as shown in [De Giacomo and Vardi, 2013], the problem of LTLf synthesis under LTL assumptions can be reduced to LTL synthesis, as, e.g., explicitly pointed out by [Camacho et al., 2018]. However, LTL synthesis algorithms do not scale well, due to the difficulty of Büchi automata determinization; see, e.g., [Finkbeiner, 2016].
In this work we propose to take these two lines of work, which are really the only success stories in LTL synthesis, and bring them together. We first study the case in which we have an LTLf agent goal and a GR(1) assumption. We propose an approach based on using the automaton corresponding to the LTLf goal as the game arena on which the environment has to satisfy its GR(1) assumption. This means that we are able to reduce the problem to that of GR(1) synthesis over the new arena. We prove the correctness of the approach.
We then add to the framework safety conditions for both the environment and the agent, obtaining a highly expressive yet still scalable form of LTL synthesis. These two kinds of safety conditions differ, since the environment needs to maintain its safety indefinitely (as usual for safety), while the agent has to maintain its safety conditions only until it fulfills its LTLf goal, i.e., within a finite horizon, something that makes them similar to "maintenance goals" in Planning [Ghallab et al., 2004]. We show that we can specify these safety conditions in a very general way by using LTLf. In particular, our safety conditions require that all prefixes of a trace satisfy an LTLf formula. For the environment safety conditions we consider all finite prefixes of infinite traces, while for the agent safety conditions we consider all prefixes of the finite trace satisfying the agent's LTLf goal. Again we prove the correctness of our approach and demonstrate its scalability through an experimental analysis.

Preliminaries
LTL and LTLf. LTL is one of the most popular logics for temporal properties [Pnueli, 1977]. Given a set of propositions Prop, the formulas of LTL are generated as follows:

ϕ ::= a | ¬ϕ | ϕ₁ ∧ ϕ₂ | ○ϕ | ϕ₁ U ϕ₂

where a ∈ Prop, ○ is the next operator, and U is the until operator. We use common abbreviations, so we have eventually as ◊ϕ ≡ true U ϕ and always as □ϕ ≡ ¬◊¬ϕ.
LTL formulas are interpreted over infinite traces π ∈ (2^Prop)^ω. A trace π = π₀, π₁, ... is a sequence of propositional interpretations (sets), where for every i ≥ 0, πᵢ ∈ 2^Prop is the i-th interpretation of π. Intuitively, πᵢ is interpreted as the set of propositions that are true at instant i. Given π, we define when an LTL formula ϕ holds at position i, written as π, i |= ϕ, inductively on the structure of ϕ, as:
• π, i |= a iff a ∈ πᵢ, for a ∈ Prop;
• π, i |= ¬ϕ iff π, i |= ϕ does not hold;
• π, i |= ϕ₁ ∧ ϕ₂ iff π, i |= ϕ₁ and π, i |= ϕ₂;
• π, i |= ○ϕ iff π, i+1 |= ϕ;
• π, i |= ϕ₁ U ϕ₂ iff there exists j ≥ i such that π, j |= ϕ₂, and for all k, i ≤ k < j, we have that π, k |= ϕ₁.
We say π satisfies ϕ, written as π |= ϕ, if π, 0 |= ϕ. LTLf is a variant of LTL interpreted over finite traces instead of infinite traces [De Giacomo and Vardi, 2013]. The syntax of LTLf is exactly the same as the syntax of LTL. We define π, i |= ϕ, stating that ϕ holds at position i, as for LTL, except that for the temporal operators we have:
• π, i |= ○ϕ iff i < last(π) and π, i+1 |= ϕ;
• π, i |= ϕ₁ U ϕ₂ iff there exists j such that i ≤ j ≤ last(π) and π, j |= ϕ₂, and for all k, i ≤ k < j, we have that π, k |= ϕ₁,
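To make the finite-trace semantics concrete, here is a minimal LTLf evaluator (an illustrative sketch of ours, not the machinery used in this paper); a trace is a list of sets of atoms and a formula is a nested tuple:

```python
# Minimal LTLf evaluator over finite traces (illustrative sketch only).
# ("U", ("true",), ("atom", "b")) encodes true U b, i.e. eventually b.

def holds(phi, trace, i=0):
    """Check trace, i |= phi under LTLf (finite-trace) semantics."""
    op = phi[0]
    if op == "true":
        return True
    if op == "atom":
        return phi[1] in trace[i]
    if op == "not":
        return not holds(phi[1], trace, i)
    if op == "and":
        return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == "X":   # (strong) next: false at the last position
        return i + 1 < len(trace) and holds(phi[1], trace, i + 1)
    if op == "WX":  # weak next: vacuously true at the last position
        return i + 1 >= len(trace) or holds(phi[1], trace, i + 1)
    if op == "U":   # until: phi2 eventually holds, phi1 holds before then
        return any(holds(phi[2], trace, j) and
                   all(holds(phi[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator {op!r}")

trace = [{"a"}, set(), {"b"}]
assert holds(("U", ("true",), ("atom", "b")), trace)   # eventually b
# at the last position, strong next fails while weak next holds
assert holds(("not", ("X", ("atom", "a"))), trace, 2)
assert holds(("WX", ("not", ("atom", "a"))), trace, 2)
```

The last two assertions exercise the next/weak-next duality over finite traces discussed in the text.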
where we denote the last position (i.e., index) in the finite trace π by last(π). In addition we define the weak next operator • as the abbreviation •ϕ ≡ ¬○¬ϕ. Note that, over finite traces, ¬○ϕ ≡ •¬ϕ, while ¬•ϕ ≡ ○¬ϕ. We say that a trace satisfies an LTLf formula ϕ, written π |= ϕ, if π, 0 |= ϕ.

Generalized Reactivity(1) formulas. Generalized Reactivity(1) [Piterman et al., 2006], or GR(1), is a fragment of LTL that generalizes fairness (□◊ϕ) and stability (◊□ϕ) formulas (cf. [Zhu et al., 2020]). Given a set of propositions Prop, a GR(1) formula ϕ is required to be of the form

(□◊J₁ ∧ ... ∧ □◊Jₘ) → (□◊K₁ ∧ ... ∧ □◊Kₙ)

where the Jᵢ and Kⱼ are Boolean formulas over Prop.

Deterministic Automata. A deterministic automaton (DA, for short) is a tuple A = (Σ, S, s₀, δ, α), where Σ is a finite alphabet, S is a finite set of states, s₀ ∈ S is the initial state, δ : S × Σ → S is the transition function, and α ⊆ S^ω is an acceptance condition. Given an infinite word w = a₀a₁a₂... ∈ Σ^ω, the run of A on w, denoted by A(w), is the sequence r = s₀s₁s₂... ∈ S^ω starting at the initial state s₀, where sᵢ₊₁ = δ(sᵢ, aᵢ). The automaton A accepts the word w if A(w) ∈ α. The language of A, denoted by L(A), is the set of words accepted by A. In this work we specifically consider reachability, safety, and reachability-safety acceptance conditions:

Reachability conditions. Given a set T ⊆ S of target states, Reach(T) = {s₀s₁s₂... ∈ S^ω | ∃k ≥ 0 : s_k ∈ T} requires that a state in T is visited at least once.

Safety conditions. Given a set T ⊆ S of target states, Safe(T) = {s₀s₁s₂... ∈ S^ω | ∀k ≥ 0 : s_k ∈ T} requires that only states in T are visited. This is the dual of reachability conditions.

Reachability-Safety conditions. Given two sets T₁, T₂ ⊆ S of target states corresponding to reachability and safety conditions, respectively, Reach-Safe(T₁, T₂) = {s₀s₁s₂... ∈ S^ω | ∃k ≥ 0 : s_k ∈ T₁ and ∀j ≤ k : s_j ∈ T₂} requires that a state in T₁ is visited at least once, and until then only states in T₂ are visited.
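The three acceptance conditions can be sketched in code by evaluating them on finite run prefixes (our simplified explicit-state rendering; the tool described later works symbolically):

```python
# Explicit-state DA sketch: delta maps (state, letter) to a state, and the
# acceptance conditions are checked on finite run prefixes for illustration.

def run(delta, s0, word):
    """The run s0 s1 ... of a deterministic automaton on a finite word."""
    states = [s0]
    for a in word:
        states.append(delta[(states[-1], a)])
    return states

def reach(states, T):   # Reach(T): some visited state lies in T
    return any(s in T for s in states)

def safe(states, T):    # Safe(T): only states in T are visited
    return all(s in T for s in states)

def reach_safe(states, T1, T2):
    # Reach-Safe(T1, T2): a state in T1 is visited, and only states in T2
    # are visited up to (and including) that point
    return any(s in T1 and all(p in T2 for p in states[:k + 1])
               for k, s in enumerate(states))

delta = {(0, "a"): 1, (0, "b"): 2, (1, "a"): 1, (1, "b"): 1,
         (2, "a"): 2, (2, "b"): 2}
assert reach(run(delta, 0, "a"), {1})
assert safe(run(delta, 0, "aa"), {0, 1})
assert reach_safe(run(delta, 0, "b"), {2}, {0, 2})
assert not reach_safe(run(delta, 0, "a"), {1}, {2})
```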
We define the complement of a DA A = (Σ, S, s₀, δ, α) as Ā = (Σ, S, s₀, δ, S^ω \ α). Note that L(Ā) = Σ^ω \ L(A). Note also that S^ω \ Reach(T) = Safe(S \ T) and S^ω \ Safe(T) = Reach(S \ T). Therefore, the complement of a DA with a reachability acceptance condition is a DA with a safety acceptance condition, and vice versa. We also define the intersection of two DAs via the usual product construction on their state spaces.

GR(1) Games. Following [Piterman et al., 2006], we define a GR(1) game structure as a tuple G = ⟨V, I, O, θ_a, θ_p, ρ_a, ρ_p, ϕ⟩ where:
• V = {v₁, ..., v_k} is a set of Boolean state variables. A state of the game is given by an assignment s ∈ 2^V of these variables.
• I ⊆ V is the set of input variables, controlled by the antagonist.
• O = V \ I is the set of output variables, controlled by the protagonist.
• θ_a is a Boolean formula over I representing the initial states of the antagonist.
• θ_p is a Boolean formula over V representing the initial states of the protagonist.
• ρ_a is a Boolean formula over V ∪ I′, where I′ is the set of primed copies of I. This formula represents the transition relation of the antagonist, between a state s ∈ 2^V and a possible input s_I ∈ 2^I for the next state.
• ρ_p is a Boolean formula over V ∪ I′ ∪ O′, where O′ is the set of primed copies of O. This formula represents the transition relation of the protagonist, relating a pair (s, s_I) ∈ 2^V × 2^I of state s and input s_I to an output s_O.
• ϕ is the winning condition for the protagonist, given by a GR(1) formula.
We use the terms antagonist and protagonist instead of environment and agent to avoid confusion when we switch roles.
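The Reach/Safe complementation duality can be checked exhaustively on short state sequences (a toy verification of ours, standing in for the infinite-word argument):

```python
# Exhaustive check of S^ω \ Reach(T) = Safe(S \ T) on short sequences:
# a sequence visits T iff it is not confined to S \ T, and dually.
import itertools

S = {0, 1, 2}
T = {2}

def reach(states, T):
    return any(s in T for s in states)

def safe(states, T):
    return all(s in T for s in states)

for n in range(1, 5):
    for states in itertools.product(sorted(S), repeat=n):
        assert reach(states, T) == (not safe(states, S - T))
        assert safe(states, T) == (not reach(states, S - T))
```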

LTL f Synthesis under GR(1) Assumptions
In this section, we first study LTLf synthesis under assumptions, i.e., assuming that the behaviour of the environment is forced to satisfy certain restrictions, specified as GR(1) formulas. Formally, we are interested in solving synthesis for ϕ^e_GR(1) → ϕ^a_task, where ϕ^a_task is an LTLf formula specifying the agent task, and ϕ^e_GR(1) is a GR(1) formula that expresses restrictions on the environment behaviour.

Definition 1 (LTLf Synthesis under GR(1) Assumptions).
1. The problem is described as a tuple P = ⟨X, Y, ϕ^e_GR(1), ϕ^a_task⟩, where X and Y are two disjoint sets of Boolean variables, controlled respectively by the environment and the agent, ϕ^e_GR(1) is a GR(1) formula, and ϕ^a_task is an LTLf formula.
2. An agent strategy σ_ag : (2^X)* → 2^Y realizes ϕ^a_task under assumption ϕ^e_GR(1) if for every π = π₀, π₁, ... ∈ (2^{X∪Y})^ω consistent with σ_ag such that π |= ϕ^e_GR(1), there exists k ≥ 0 such that the prefix π^k = π₀, ..., π_k satisfies π^k |= ϕ^a_task.
3. Solving P consists in finding an agent strategy that realizes ϕ^a_task under assumption ϕ^e_GR(1).

To solve the problem P, we first observe that the agent's goal is to satisfy ¬ϕ^e_GR(1) ∨ ϕ^a_task, while the environment's goal is to satisfy ϕ^e_GR(1) ∧ ¬ϕ^a_task. Moreover, we know that ϕ^a_task can be represented by a DA with a reachability acceptance condition [De Giacomo and Vardi, 2015]. Then, focusing on the environment point of view, we show that P can be reduced to a GR(1) game in which the game arena is the complement of the DA for ϕ^a_task, i.e., a DA with a safety condition, and ϕ^e_GR(1) is the GR(1) winning condition. Since we want a winning strategy for the agent, we need to deal with the complement of the GR(1) game to obtain a winning strategy for the antagonist. More specifically, we can solve the problem by taking the following steps:
1. Construct a DA A_task = (Σ, S, s₀, δ, Reach(T)), with Σ = 2^{X∪Y}, that accepts a trace π iff some prefix of π satisfies ϕ^a_task.
2. Complement A_task to obtain A_ag = (Σ, S, s₀, δ, Safe(S \ T)). Note that A_ag accepts a trace π iff π has no prefix satisfying ϕ^a_task.
3. Define a GR(1) game G_P with the environment as the protagonist, where the arena is given by A_ag and the winning condition is given by ϕ^e_GR(1).
4. Solve this game for the antagonist, i.e., the agent.
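Solving the resulting game requires the GR(1) fixpoint algorithm of [Piterman et al., 2006]; as a rough flavor of step 4, the following sketch (ours, deliberately simplified) solves the simpler problem of a two-player turn-based reachability game by the classic attractor computation:

```python
# Classic attractor for a turn-based reachability game (illustration only;
# GR(1) games need nested fixpoints on top of such attractors).
# Player 0 wins from a state if it can force the play into `target`.

def attractor(states, edges, owner, target):
    """edges: state -> list of successors; owner: state -> 0 or 1."""
    win = set(target)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in win:
                continue
            succ = edges[s]
            if owner[s] == 0 and any(t in win for t in succ):
                win.add(s); changed = True   # player 0 has a winning move
            elif owner[s] == 1 and succ and all(t in win for t in succ):
                win.add(s); changed = True   # player 1 cannot avoid `win`
    return win

states = [1, 2, 3, 4]
edges = {1: [2, 3], 2: [4], 3: [3], 4: [4]}
owner = {1: 0, 2: 1, 3: 1, 4: 0}
assert attractor(states, edges, owner, {4}) == {1, 2, 4}
```

State 3 is losing: it is a player-1 self-loop from which the target is never reached.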
Building the GR(1) Game. We now detail how to build the GR(1) game G_P (cf. step 3 above). Given A_ag = (2^{X∪Y}, S, s₀, δ, Safe(S \ T)), we start by encoding the state space S into a logarithmic set of variables Z (similarly to [Zhu et al., 2017b]). In what follows we identify assignments to Z with states in S. Given a subset Y ⊆ V and a state s ∈ 2^V, we denote by s|_Y the projection of s to Y. We then construct the GR(1) game structure G_P = ⟨V, I, O, θ_a, θ_p, η_a, η_p, ϕ⟩ as follows:
• V = X ∪ Y ∪ Z;
• I = Y, the variables controlled by the antagonist (here, the agent);
• O = X ∪ Z, the variables controlled by the protagonist (here, the environment);
• θ_a = η_a = true, since there is no restriction on the agent's choices;
• θ_p and η_p enforce that the assignment to the Z variables is consistent with A_ag, with η_p also enforcing that the safety condition Safe(S \ T) is not violated;
• ϕ = ϕ^e_GR(1).
In the game G_P, the environment takes the role of protagonist, and the agent that of antagonist. States in the game are given by assignments of X ∪ Y ∪ Z, where the X and Y components represent respectively the last assignments of the environment and agent variables chosen by the players, and the Z component represents the current state of A_ag. The agent first chooses the Y component of the next state; then the environment chooses the X component and, based on the chosen assignments, assigns the Z variables as well. Note that a play of G_P, given by ρ = ρ₀ρ₁..., induces a trace over 2^{X∪Y} obtained by projecting each ρᵢ to X ∪ Y.
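The logarithmic encoding of the automaton state space into the variables Z can be sketched as follows (our illustration of the standard trick; the tool realizes it with BDD variables):

```python
# Encode n automaton states into ceil(log2(n)) Boolean variables z_0..z_{m-1}.
from math import ceil, log2

def encode_states(states):
    """Return (m, enc) where enc maps each state to an m-bit assignment."""
    m = max(1, ceil(log2(len(states))))
    enc = {s: tuple(bool((i >> b) & 1) for b in range(m))
           for i, s in enumerate(states)}
    return m, enc

m, enc = encode_states(["s0", "s1", "s2", "s3", "s4"])
assert m == 3                       # 5 states fit in 3 Boolean variables
assert len(set(enc.values())) == 5  # the encoding is injective
```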
Given a play ρ of G_P, there are two ways the environment can lose in ρ. The first is by being unable to pick an assignment that satisfies η_p. Since the transition relation δ of A_ag is total, this can only happen if s|_Z ∈ T, meaning that A_ag rejects a run visiting s|_Z. The other is by failing to satisfy ϕ^e_GR(1). These correspond to the two ways that the specification can be satisfied: by satisfying ϕ^a_task or by violating the GR(1) assumption. Therefore, a play satisfies the specification iff it is losing for the protagonist of G_P (i.e., the environment) and thus winning for the antagonist (i.e., the agent).
Theorem 1. P = ⟨X, Y, ϕ^e_GR(1), ϕ^a_task⟩ is realizable iff the antagonist has a winning strategy in the GR(1) game G_P.
We observe that an alternative approach to the problem of LTLf synthesis under GR(1) assumptions can be obtained by a reduction to standard LTL synthesis, as ϕ^e_GR(1) is a GR(1) formula (and therefore already in LTL), and ϕ^a_task is an LTLf formula, which can be linearly translated into LTL [De Giacomo and Vardi, 2013; Zhu et al., 2020].
Next we introduce safety conditions into the framework. Safety conditions are properties that assert that the behavior of the environment or the agent always remains within some allowed boundaries. A notable example of safety conditions for the environment are effect specifications in planning domains, which describe how the environment can react to agent actions in a given situation. A notable example of safety conditions for the agent are action preconditions, i.e., the agent cannot violate the preconditions of its actions. Another notable example of safety conditions for the agent coming from planning are maintenance goals (cf. [Ghallab et al., 2004]). Observe, though, that there is a difference between the safety conditions on the environment and those on the agent: the former must hold forever, while the latter must hold only until the agent task is terminated, i.e., the goal is fulfilled.
Typically we capture general safety conditions as LTL formulas that, if violated at all, are violated within a finite number of steps. Alternatively, we can think of them as properties that need to hold for all prefixes of an infinite trace. Under this second view we can also describe the finite variant of safety by simply requiring that the safety condition holds for all prefixes of the finite trace determined by the LTLf agent task requirement. This view of safety conditions as properties that must hold for all prefixes also allows us to specify them in LTLf, since all prefixes are themselves finite traces. Formally, in order to use LTLf formulas to specify safety conditions, we need to define an alternative notion of satisfaction that interprets a formula over all prefixes of a trace:

Definition 2. A (finite or infinite) trace π satisfies an LTLf formula ϕ on all prefixes, denoted π |=_∀ ϕ, if every non-empty finite prefix of π satisfies ϕ. That is, π^k = π₀, ..., π_k |= ϕ for every 0 ≤ k < |π|.
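Definition 2 is straightforward to operationalize; in the sketch below (ours, for illustration) an LTLf formula is modeled simply as a Boolean predicate on finite traces:

```python
# pi |=_∀ phi iff every non-empty finite prefix of pi satisfies phi.

def holds_on_all_prefixes(phi, trace):
    return all(phi(trace[:k]) for k in range(1, len(trace) + 1))

# example safety condition: "b" may appear only after "a" has appeared
# at a strictly earlier instant
def b_only_after_a(trace):
    seen_a = False
    for step in trace:
        if "b" in step and not seen_a:
            return False
        seen_a = seen_a or "a" in step
    return True

assert holds_on_all_prefixes(b_only_after_a, [{"a"}, set(), {"b"}])
assert not holds_on_all_prefixes(b_only_after_a, [{"b"}, {"a"}])
```

Because a safety violation already shows up in some finite prefix, checking all prefixes of a finite trace suffices to detect it.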
Next we show that we can specify all safety conditions expressible in LTL, i.e., all first-order (logic) safety properties [Lichtenstein et al., 1985], using LTLf on prefixes.

Theorem 2. Every first-order safety property can be expressed as an LTLf formula on all prefixes.
Turning to safety conditions for the agent, we observe that the fact that an LTLf formula holds for every prefix of a finite trace (in our case, the trace satisfying the task of the agent) is expressible in first-order logic on finite traces, and hence directly as an LTLf formula [De Giacomo and Vardi, 2013].

Adding Safety into LTL f Synthesis under GR(1) Assumptions
We now enrich our synthesis framework by adding safety conditions, expressed in LTLf, both on the environment and on the agent, following the considerations made previously.
In this setting, we are interested in solving the synthesis for

(ϕ^e_GR(1) ∧ ϕ^e_safe) → (ϕ^a_task ∧ ϕ^a_safe)

where ϕ^e_GR(1) and ϕ^e_safe are, respectively, a GR(1) formula and an LTLf formula expressing safety conditions, and ϕ^a_task and ϕ^a_safe are LTLf formulas that express the agent task and the agent safety conditions, respectively.
The problem that we aim to solve is defined as follows.

Definition 3 (LTLf Synthesis under GR(1) Assumptions with Safety Conditions).
1. The problem is described as a tuple P = ⟨X, Y, Env, Goal⟩, where X and Y are two disjoint sets of Boolean variables, controlled respectively by the environment and the agent, Env = ⟨ϕ^e_GR(1), ϕ^e_safe⟩, and Goal = ⟨ϕ^a_task, ϕ^a_safe⟩, where ϕ^e_GR(1) is a GR(1) formula and ϕ^a_task, ϕ^e_safe and ϕ^a_safe are LTLf formulas.
2. An agent strategy σ_ag : (2^X)* → 2^Y realizes Goal under assumption Env if for every π = π₀, π₁, ... ∈ (2^{X∪Y})^ω consistent with σ_ag such that π |= ϕ^e_GR(1) and π |=_∀ ϕ^e_safe, there exists k ≥ 0 such that π^k |= ϕ^a_task and π^k |=_∀ ϕ^a_safe.
3. Solving P consists in finding an agent strategy that realizes Goal under assumption Env.

This class of synthesis problems is able to naturally reflect the structure of many reactive systems in practice. We illustrate this with a relatively simple example representing the three-way handshake used to establish a TCP connection.

Example 1. In this example, the server and client involved in a TCP connection are considered as environment and agent, respectively. Let X = {SynAck} and Y = {Syn, Ack}.
• The server can only send a SYN-ACK message after the client has sent a SYN message: ϕ^e_safe = □¬Syn → □¬SynAck.
• If the client keeps sending SYN messages, the server eventually responds with a SYN-ACK message: ϕ^e_GR(1) = □◊Syn → □◊SynAck.
• The client should eventually send an ACK message, establishing the TCP connection: ϕ^a_task = ◊Ack.
• The client can only send an ACK message after the server has sent a SYN-ACK message: ϕ^a_safe = □¬SynAck → □¬Ack.

We now show that the synthesis problem P can be reduced to a GR(1) game G_P, analogously to the construction of G_P in Section 3. To solve this problem, the first thing to note is that ϕ^a_task ∧ ϕ^a_safe can be represented by a DA with a reachability-safety condition. As we will show later in this section, this DA can then be reduced to one with a pure reachability condition. Now, since the environment's goal is to satisfy ϕ^e_GR(1) ∧ ϕ^e_safe ∧ ¬(ϕ^a_task ∧ ϕ^a_safe), we can reduce P to solving a GR(1) game whose game arena is the product of the DA for ϕ^e_safe with a safety condition and the complement of the DA for ϕ^a_task ∧ ϕ^a_safe with a reachability condition, i.e., a DA with a safety condition. Note that in what follows we consider Σ = 2^{X∪Y}.
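As a sanity check of Example 1, the formulas can be evaluated on a concrete handshake trace (an illustrative script of ours, not part of any tool):

```python
# The three-way handshake as a finite trace; each step is the set of
# messages sent at that instant.
handshake = [{"Syn"}, {"SynAck"}, {"Ack"}]

def eventually(p, trace):   # LTLf: eventually p
    return any(p in step for step in trace)

def never(p, trace):        # LTLf: always not p
    return all(p not in step for step in trace)

def env_safe(trace):        # (always not Syn) -> (always not SynAck)
    return (not never("Syn", trace)) or never("SynAck", trace)

def agent_safe(trace):      # (always not SynAck) -> (always not Ack)
    return (not never("SynAck", trace)) or never("Ack", trace)

def on_all_prefixes(phi, trace):
    return all(phi(trace[:k]) for k in range(1, len(trace) + 1))

assert eventually("Ack", handshake)            # the task is achieved
assert on_all_prefixes(env_safe, handshake)    # server never jumps the gun
assert on_all_prefixes(agent_safe, handshake)  # client never jumps the gun
# an unprompted SYN-ACK violates the environment safety condition
assert not on_all_prefixes(env_safe, [{"SynAck"}])
```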
To solve the synthesis problem P we proceed as follows:
1. Construct a DA A^a_t = (Σ, S₁, s^0_1, δ₁, Reach(T₁)) that accepts a trace π iff some prefix of π satisfies ϕ^a_task.
2. Construct a DA A^a_s = (Σ, S₂, s^0_2, δ₂, Safe(T₂)) that accepts a trace π iff π |=_∀ ϕ^a_safe.
3. Take the product of A^a_t and A^a_s to obtain a DA A^a_{t∧s} with a reachability-safety acceptance condition. Note that A^a_{t∧s} accepts a trace π iff there exists k ≥ 0 such that π^k |= ϕ^a_task and π^k |=_∀ ϕ^a_safe.
4. Reduce A^a_{t∧s} to A_ag = (Σ, S₁ × S₂, (s^0_1, s^0_2), δ′, Reach(T)), as described later in this section. We have that L(A^a_{t∧s}) = L(A_ag).
5. Complement A_ag to obtain a DA with a safety acceptance condition.
6. Construct a DA A^e_s with a safety acceptance condition that accepts a trace π iff π |=_∀ ϕ^e_safe.
7. Take the product of the complement of A_ag and A^e_s to obtain a DA B with a safety acceptance condition. Note that B accepts exactly the traces all of whose prefixes are safe for the environment and none of whose prefixes fulfills the agent's goal.
8. Define a GR(1) game G_P with the environment as the protagonist, where the arena is given by B and the winning condition is given by ϕ^e_GR(1) (see Section 3).
9. Solve this game for the antagonist, i.e., the agent.
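The product constructions used above can be sketched explicitly as follows (our illustration; the tool performs the product symbolically on BDDs):

```python
# Synchronous product of two deterministic automata: run both in lockstep
# on paired states. Pairing Safe(T1) with Safe(T2) yields Safe(T1 x T2).

def product_delta(delta1, delta2):
    def delta(pair, a):
        s1, s2 = pair
        return (delta1[(s1, a)], delta2[(s2, a)])
    return delta

def run(delta, s0, word):
    states = [s0]
    for a in word:
        states.append(delta(states[-1], a))
    return states

d1 = {(0, "a"): 0, (0, "b"): 1, (1, "a"): 1, (1, "b"): 1}
d2 = {(0, "a"): 1, (0, "b"): 0, (1, "a"): 1, (1, "b"): 1}
d = product_delta(d1, d2)
# both components advance in lockstep on the same input word
assert run(d, (0, 0), "ab") == [(0, 0), (0, 1), (1, 1)]
```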
We now detail the construction at Step 4 above. Let A = (Σ, S, s₀, δ, α) be a DA with a reachability-safety condition α = Reach-Safe(T₁, T₂). We describe a reduction to a DA A′ = (Σ, S, s₀, δ′, α′) with a reachability condition α′ = Reach(T) such that L(A′) = L(A). We define the transition function of A′ as follows:

δ′(s, a) = δ(s, a) if s ∈ T₂, and δ′(s, a) = s otherwise.

Intuitively, the only change we make is to turn all non-safe states (states not in T₂) into sink states. We then define the reachability condition as α′ = Reach(T₁ ∩ T₂). Intuitively, we want to reach a goal state (a state in T₁) that is also safe (i.e., in T₂). The two automata are indeed equivalent:

Lemma 1. Let A and A′ be as above; then L(A′) = L(A).
Proof. (Sketch) The idea is that since all unsafe states (states not in T₂) are converted to sinks, a run that reaches an unsafe state always gets stuck there, and therefore never reaches the accepting states T₁ ∩ T₂.
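The Step 4 reduction can be sketched explicitly as follows (ours; the implementation performs it on BDDs):

```python
# Reach-Safe(T1, T2) -> Reach(T1 & T2): make every unsafe state a sink.

def reduce_reach_safe(states, alphabet, delta, T1, T2):
    new_delta = {}
    for s in states:
        for a in alphabet:
            # safe states keep their transitions; unsafe states loop forever
            new_delta[(s, a)] = delta[(s, a)] if s in T2 else s
    return new_delta, T1 & T2

def run(delta, s0, word):
    states = [s0]
    for a in word:
        states.append(delta[(states[-1], a)])
    return states

states, alphabet = {0, 1, 2}, {"a", "b"}
delta = {(0, "a"): 1, (0, "b"): 2, (1, "a"): 2, (1, "b"): 1,
         (2, "a"): 2, (2, "b"): 2}
T1, T2 = {2}, {0, 2}            # goal: reach 2; the safe set excludes 1
nd, target = reduce_reach_safe(states, alphabet, delta, T1, T2)
assert target == {2}
# "b" reaches the goal safely; "aa" passes through the unsafe state 1
assert any(s in target for s in run(nd, 0, "b"))
assert not any(s in target for s in run(nd, 0, "aa"))
```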
Hence, we are able to reduce the synthesis problem P = ⟨X, Y, Env, Goal⟩ to a GR(1) game as well.
Theorem 3. P = ⟨X, Y, Env, Goal⟩, with Env = ⟨ϕ^e_GR(1), ϕ^e_safe⟩ and Goal = ⟨ϕ^a_task, ϕ^a_safe⟩, is realizable iff the antagonist has a winning strategy in the GR(1) game G_P.

Experimental Analysis
We implemented the approach described in Section 5, which subsumes the method described in Section 3, in a tool called GFSYNTH. In this section, we first describe the implementation of GFSYNTH, then introduce two representative benchmarks that capture commonly used sensor-based robotic tasks, and finally present an empirical evaluation of the performance of our approach.

Implementation
GFSYNTH runs in three steps: automaton construction, reduction to a GR(1) game, and GR(1) game solving. In the first step, we use code from the LTLf-synthesis tool SYFT [Zhu et al., 2017b] to read and parse the input and construct the corresponding DAs. We then perform the reduction to a GR(1) game following the steps in Section 5. Since all DAs are symbolically represented by Binary Decision Diagrams (BDDs), as in [Zhu et al., 2017b], we make use of the BDD library CUDD-3.0.0 [Somenzi, 2016] to implement operations such as bounded intersection, the reduction from reachability-safety to reachability, and the final reduction to a GR(1) game. Finally, we save the GR(1) game in the input format of the GR(1)-synthesis tool SLUGS [Ehlers and Raman, 2016]. To solve the game and compute a strategy for the antagonist, we call SLUGS using the --CounterStrategy option.

Benchmarks
For the experimental evaluation we use two sets of benchmarks based on examples of reactive synthesis from the literature, slightly modified to adapt them to our framework. Both examples involve an agent navigating around an environment in order to perform a task. In both cases we can use a parameter n to scale the number of regions, and thus measure how our tool performs as the size of the problem grows.

Finding Nemo. Based on the running example from [Kress-Gazit et al., 2009]. The agent is a robot that moves in a workspace consisting of a circular hallway with n sections, each leading into two different rooms. The agent is looking for "Nemo", who can appear in any of the odd-numbered rooms. The agent has a camera and its task is to record three timesteps' worth of footage of Nemo.

Workstation Resupply. Based on the scenario presented in [DeCastro et al., 2014] of a robot responsible for resupplying workstations in a factory with parts from a stockroom. The robot's task is to resupply n separate stations, something it can only do after picking up a part from the stockroom. A workstation may be occupied, in which case the robot has to wait until it is vacated before going inside.
The agent safety conditions ensure movement constraints and other domain requirements, for example that the agent has picked up a part in the stockroom before resupplying a station. The environment safety conditions guarantee, for example, that the sensors are well-behaved and that stations do not become occupied while the agent is inside. The GR(1) condition in Finding Nemo guarantees that if the robot visits the odd-numbered rooms infinitely often, it will find Nemo infinitely often, while in Workstation Resupply it guarantees that each workstation will be vacated infinitely often.

Empirical Evaluation
Comparing to LTL Synthesis. We want to compare GFSYNTH to a state-of-the-art LTL synthesis tool. In Section 3 we pointed out how ϕ^e_GR(1) and ϕ^a_task can be translated to LTL. We can handle the environment safety condition ϕ^e_safe by observing that (ϕ^e_GR(1) ∧ ϕ^e_safe) → ϕ^a_task (with ϕ^e_safe interpreted on all prefixes) is equivalent to ϕ^e_GR(1) → ϕ′^a_task with ϕ′^a_task = (¬ϕ^e_safe ∨ ϕ^a_task) interpreted using standard LTLf semantics. Indeed, the agent can violate the environment safety condition by producing a single prefix that violates ϕ^e_safe. Then, (¬ϕ^e_safe ∨ ϕ^a_task) can be interpreted as a single LTLf formula and reduced to LTL. Handling the agent safety condition ϕ^a_safe, however, cannot be done as easily, since, as discussed in Section 4, there is no known way to translate an LTLf formula on all prefixes to LTLf or LTL without an exponential blowup. Hence, in order to compare GFSYNTH with tools for LTL synthesis, we manually translated the specific ϕ^a_safe of our benchmarks to an equivalent LTLf formula to be included directly as a new conjunct in ϕ^a_task. We then converted the entire specification to LTL and synthesized it with STRIX [Meyer et al., 2018], the winner of the LTL-synthesis track of the synthesis competition SYNTCOMP 2020 [Jacobs and Perez, 2020], using it as the baseline of comparison for our tool. Note that since our benchmarks assume that the agent moves first, while STRIX assumes the environment moves first, we had to slightly modify the specifications by adding a next operator (○) before all variables controlled by the environment, a transformation that essentially corresponds to ignoring the first move by the environment.
Baseline and Experiment Setup. All tests were run on a computer cluster. Each test had exclusive access to a node with Intel(R) Xeon(R) CPU E5-2650 v2 processors running at 2.60GHz. The timeout was set to two hours (7200 seconds).
Correctness. Our implementation was verified by comparing the results returned by GFSYNTH with those from STRIX. No inconsistencies were encountered on the solved cases.
Results. We compared GFSYNTH against STRIX by performing an end-to-end (from specification to winning strategy, if realizable) comparison over the benchmarks described in Section 6.2. The comparison on both classes of benchmarks shows that GFSYNTH outperforms STRIX.
Figure 1 and Figure 2 show the running times of GFSYNTH and STRIX on the two benchmarks, respectively. The x-axis indicates the value of the scaling parameter n for each benchmark. The y-axis is in log scale. Cases on which both tools failed are not shown. For benchmark Finding Nemo, in small cases where n ≤ 2, there is no large gap in time cost. However, as n grows, the time cost of GFSYNTH increases linearly, while the time cost of STRIX increases exponentially. Regarding benchmark Workstation Resupply, the exponential gap is not as obvious. Nevertheless, as the benchmark grows, STRIX almost always takes around 10 times longer than GFSYNTH. STRIX also failed for n = 5.

Discussion. Looking deeper into GFSYNTH, we observed that on the cases where GFSYNTH fails, the automata cannot be constructed by the MONA library employed by SYFT for automata construction from LTLf. There have been various studies on LTLf-to-automata translation; possibly the most successful is the decompositional approach presented in [Bansal et al., 2020]. For future work, we will consider this approach to improve GFSYNTH.
Figure 1: Benchmark Finding Nemo