Specification decomposition for reactive synthesis

Reactive synthesis is the task of automatically deriving a correct implementation from a specification. It is a promising technique for the development of verified programs and hardware. Despite recent advances in terms of algorithms and tools, however, reactive synthesis is still not practical when the specified systems reach a certain bound in size and complexity. In this paper, we present a sound and complete modular synthesis algorithm that automatically decomposes the specification into smaller subspecifications. For them, independent synthesis tasks are performed, significantly reducing the complexity of the individual tasks. Our decomposition algorithm guarantees that the subspecifications are independent in the sense that completely separate synthesis tasks can be performed for them. Moreover, the composition of the resulting implementations is guaranteed to satisfy the original specification. Our algorithm is a preprocessing technique that can be applied to a wide range of synthesis tools. We evaluate our approach with state-of-the-art synthesis tools on established benchmarks: the runtime decreases significantly when synthesizing implementations modularly.


Introduction
Reactive synthesis automatically derives an implementation that satisfies a given specification. It is a push-button method producing implementations that are correct by construction. Therefore, reactive synthesis is a promising technique for the development of provably correct systems, since it allows for concentrating on what a system should do instead of how it should be done.
Despite recent advances in terms of efficient algorithms and tools, however, reactive synthesis is still not practical when the specified systems reach a certain bound in size and complexity. It has long been known that the scalability of model checking algorithms can be improved significantly by using compositional approaches, i.e., by breaking down the analysis of a system into several smaller subtasks [4,31]. In this paper, we apply compositional concepts to reactive synthesis: We present and extend a modular synthesis algorithm [12] that decomposes a specification into several subspecifications. Then, independent synthesis tasks are performed for them. The implementations obtained from the subtasks are combined into an implementation for the initial specification. The algorithm uses synthesis as a black box and can thus be applied to a wide range of synthesis algorithms. In particular, it can be seen as a preprocessing step for reactive synthesis that enables compositionality for existing algorithms and tools.
Soundness and completeness of modular synthesis strongly depend on the decomposition of the specification into subspecifications. We introduce a criterion, non-contradictory independent sublanguages, for subspecifications that ensures soundness and completeness: The original specification is equirealizable to the subspecifications, and the parallel composition of the implementations for the subspecifications is guaranteed to satisfy the original specification. The key question is now how to decompose a specification such that the resulting subspecifications satisfy the criterion.
Lifting the language-based criterion to the automaton level, we present a decomposition algorithm for nondeterministic Büchi automata that directly implements the independent-sublanguages paradigm. Thus, using subspecifications obtained with this decomposition algorithm ensures soundness and completeness of modular synthesis. A specification given in the standard temporal logic LTL can be translated into an equivalent nondeterministic Büchi automaton, and hence the decomposition algorithm can be applied to LTL specifications as well.
However, while the decomposition algorithm is semantically precise, it utilizes several expensive automaton operations. For large specifications, the decomposition thus becomes infeasible. Therefore, we present an approximate decomposition algorithm for LTL specifications that still ensures soundness and completeness of modular synthesis but is more scalable. It is approximate in the sense that, in contrast to the automaton decomposition algorithm, it does not necessarily find all possible decompositions. Moreover, we present an optimization of the LTL decomposition algorithm for formulas in a common assume-guarantee format. It analyzes the assumptions and drops those that do not influence the realizability of the rest of the formula, yielding more fine-grained decompositions. We extend the optimization from specifications in a strict assume-guarantee format to specifications consisting of several conjuncts in assume-guarantee format. This allows for applying the optimization to even more of the common LTL synthesis benchmarks.
We have implemented both decomposition procedures as well as the modular synthesis algorithm and used them with the two state-of-the-art synthesis tools BoSy [9] and Strix [25]. We evaluate our algorithms on the established benchmarks from the synthesis competition SYNTCOMP [18]. As expected, the decomposition algorithm for nondeterministic Büchi automata becomes infeasible when the specifications grow. For the LTL decomposition algorithm, however, the experimental results are excellent: Decomposition terminates in less than 26 milliseconds on all benchmarks. Hence, the overhead of LTL decomposition is negligible, even for non-decomposable specifications. Out of 39 decomposable specifications, BoSy and Strix increase their number of synthesized benchmarks by nine and five, respectively. For instance, on the generalized buffer benchmark [17,20] with three receivers, BoSy is able to synthesize a solution within 28 seconds using modular synthesis, while neither the non-compositional version of BoSy nor the non-compositional version of Strix terminates within one hour. For twelve and nine further benchmarks, respectively, BoSy and Strix reduce their synthesis times significantly, often by an order of magnitude or more, when using modular synthesis instead of their classical algorithms. The remaining benchmarks are too small and too simple for compositional methods to pay off. Thus, decomposing the specification into smaller subspecifications indeed increases the scalability of synthesis for larger systems.
Related Work: Compositional approaches have long been known to improve the scalability of model checking algorithms significantly [4,31]. The approach that is most closely related to our contribution is a preprocessing algorithm for compositional model checking [6]. It analyzes dependencies between the properties that need to be checked in order to reduce the number of model checking tasks. We lift this idea from model checking to reactive synthesis. The dependency analysis in our algorithm, however, differs inherently from the one for model checking.
There exist several compositional approaches for reactive synthesis. The algorithm by Filiot et al. relies, like our LTL decomposition approach, heavily on dropping assumptions [10]. They use a heuristic that, in contrast to our criterion, is incomplete. While their approach is more scalable than a non-compositional one, the differences are not as significant as for our algorithm. The algorithm by Kupferman et al. is designed for incrementally adding requirements to a specification during system design [21]. Thus, it does not perform independent synthesis tasks but only reuses parts of the already existing solutions. In contrast to our algorithm, neither [21] nor [10] considers dependencies between the components to obtain prior knowledge about the presence or absence of conflicts in the implementations.
Assume-guarantee synthesis algorithms [2,3,14,23] take dependencies between components into account. In this setting, specifications are not always satisfiable by one component alone. Thus, a negotiation between the components is needed. While this yields more fine-grained decompositions, it produces a significant overhead that, as our experiments show, is often not necessary for common benchmarks. Avoiding negotiation, dependency-based compositional synthesis [13] decomposes the system based on a dependency analysis of the specification. The analysis is more fine-grained than the one presented in this paper. Moreover, a weaker winning condition for synthesis, remorse-free dominance [5], is used. While this allows for smaller synthesis tasks, since the specification can be decomposed further, both the dependency analysis and the use of a different winning condition produce a larger overhead than our approach.
The reactive synthesis tools Strix [25], Unbeast [8], and Safety-First [32] decompose the given specification. Strix uses decomposition to find suitable automaton types for internal representation and to identify isomorphic parts of the specification. Unbeast and Safety-First, in contrast, decompose the specification to identify safety parts. None of the three tools performs independent synthesis tasks for the subspecifications. In fact, our experiments show that the scalability of Strix still improves notably with our algorithm.
Independently of [12], Mavridou et al. introduce a compositional realizability analysis of formulas given in FRET [16] that is based on similar ideas as our LTL decomposition algorithm [24]. They only study the realizability of formulas but do not synthesize solutions.
Optimized assumption handling cannot easily be integrated into their approach. For a detailed comparison of both approaches, we refer to [24]. The first version [12] of our modular synthesis approach is already well-accepted in the synthesis community: Our LTL decomposition algorithm has been integrated into the new version [30] of the synthesis tool ltlsynt [26].

Preliminaries
LTL. Linear-time temporal logic (LTL) [28] is a specification language for linear-time properties. For a finite set Σ of atomic propositions, the syntax of LTL is given by ϕ, ψ ::= a | ¬ϕ | ϕ ∨ ψ | ϕ ∧ ψ | ◯ϕ | ϕ U ψ, where a ∈ Σ. We define the derived operators ◇ϕ := true U ϕ and □ϕ := ¬◇¬ϕ and use standard semantics. The atomic propositions occurring in ϕ are denoted by prop(ϕ), where occurrences of true or false in ϕ do not add any atomic propositions to prop(ϕ). The language L(ϕ) of ϕ is the set of infinite words that satisfy ϕ.
Nondeterministic Büchi Automata. A nondeterministic Büchi automaton (NBA) A = (Q, Q_0, δ, F) over the alphabet 2^Σ consists of a finite set of states Q, a set of initial states Q_0 ⊆ Q, a transition relation δ ⊆ Q × 2^Σ × Q, and a set of accepting states F ⊆ Q. A run of A on an infinite word σ = σ_1 σ_2 ··· ∈ (2^Σ)^ω is an infinite sequence q_1 q_2 ··· of states where q_1 ∈ Q_0 and (q_i, σ_i, q_i+1) ∈ δ holds for all i ≥ 1. A run is accepting if it contains infinitely many accepting states. A accepts a word σ if there is an accepting run of A on σ. The language L(A) of A is the set of all accepted words. Two NBAs are equivalent if their languages are equal. An LTL specification ϕ can be translated into an equivalent NBA A_ϕ with a single-exponential blow-up [22].

Implementations and Counterstrategies. An implementation of a system with inputs I and outputs O is a function f : (2^V)^* × 2^I → 2^O with V = I ∪ O, mapping a history of variable valuations and the current input to outputs. An infinite word σ = σ_1 σ_2 ··· ∈ (2^V)^ω is compatible with f if f(σ_1 ··· σ_n−1, σ_n ∩ I) = σ_n ∩ O holds for all n ≥ 1. The set of all compatible words of f is denoted by C(f). An implementation f realizes a specification s if σ ∈ L(s) holds for all σ ∈ C(f). A specification is called realizable if there exists an implementation realizing it. If a specification is unrealizable, there is a counterstrategy, a function f^c : (2^V)^* → 2^I mapping histories of variable valuations to inputs such that all words compatible with f^c violate the specification.

Reactive Synthesis. Given a specification, reactive synthesis derives an implementation realizing it. For LTL specifications, synthesis is 2EXPTIME-complete [29]. In this paper, we use reactive synthesis as a black-box procedure and thus do not go into detail here. Instead, we refer the interested reader to [11].

Modular Synthesis
In this section, we introduce a modular synthesis algorithm that divides the synthesis task into independent subtasks by splitting the specification into several subspecifications. The decomposition algorithm has to ensure that the synthesis tasks for the subspecifications can be solved independently and that their results are non-contradictory, i.e., that they can be combined into an implementation satisfying the initial specification. Note that when splitting the specification, we assign a set of relevant input and output variables to every subspecification. The corresponding synthesis subtask is then performed on these variables.
Algorithm 1 describes this modular synthesis approach. First, the specification is decomposed into a list of subspecifications using an adequate decomposition algorithm. Then, the synthesis tasks for all subspecifications are solved. If a subspecification is unrealizable, its counterstrategy is extended to a counterstrategy for the whole specification. This construction is given in Definition 3.1. Otherwise, the implementations of the subspecifications are composed. Intuitively, the behavior of the counterstrategy of an unrealizable subspecification s_i violates the full specification s as well. A counterstrategy for the full specification, however, needs to be defined on all variables of s, i.e., also on the variables that do not occur in s_i. Thus, we extend the counterstrategy for s_i such that it ignores outputs outside of s_i and produces an arbitrary valuation of the input variables outside of s_i:

Definition 3.1 (Counterstrategy Extension) Let s, s_1, and s_2 be specifications over V, V_1, and V_2, respectively, with L(s_i) ⊆ (2^{V_i})^ω such that L(s_1) || L(s_2) = L(s). Let s_1 be unrealizable and let f^c_1 : (2^{V_1})^* → 2^{I∩V_1} be a counterstrategy for s_1. We construct a counterstrategy f^c : (2^V)^* → 2^I for s by f^c(η) = f^c_1(η ∩ V_1) ∪ μ, where η ∩ V_1 denotes the restriction of the history η to the variables in V_1 and μ ∈ 2^{I∖V_1} is an arbitrary valuation of the input variables outside of V_1.
The counterstrategy for the full specification constructed as in Definition 3.1 then indeed fulfills the condition of a counterstrategy for the full specification, i.e., all of its compatible words violate the full specification:

Lemma 3.1 Let s, s_1, and s_2 be specifications as in Definition 3.1 and let f^c_1 : (2^{V_1})^* → 2^{I∩V_1} be a counterstrategy for s_1. The function f^c constructed as in Definition 3.1 from f^c_1 is a counterstrategy for s.
Proof Let σ ∈ C(f^c). By construction of f^c, we have f^c(σ_1 ··· σ_n) ∩ V_1 = f^c_1((σ_1 ··· σ_n) ∩ V_1) for all n ∈ N and hence σ ∩ V_1 ∈ C(f^c_1). Since f^c_1 is a counterstrategy for s_1, it follows that σ ∩ V_1 ∉ L(s_1). Since L(s_1) || L(s_2) = L(s) holds, every word in L(s) restricted to V_1 is contained in L(s_1) and thus σ ∉ L(s). Hence, for all σ ∈ C(f^c), σ ∉ L(s) and thus C(f^c) ∩ L(s) = ∅. Therefore, f^c is a counterstrategy for s.

⊓ ⊔
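To make the control flow of Algorithm 1 concrete, the following Python sketch mirrors the modular loop. Here, `decompose` and `synthesize` are hypothetical stand-ins for a decomposition procedure and an external synthesis tool, and implementations are modeled as functions from a history and the current inputs to a dictionary of output values; this is a simplification of the formal definitions, not the paper's actual tool interface.

```python
def modular_synthesis(spec, decompose, synthesize):
    """Modular synthesis loop: solve independent subtasks and combine them.

    decompose(spec) yields subspecifications that share only inputs;
    synthesize(sub) returns ("realizable", implementation) or
    ("unrealizable", counterstrategy).
    """
    implementations = []
    for sub in decompose(spec):
        status, result = synthesize(sub)
        if status == "unrealizable":
            # In the paper, the counterstrategy is extended to the full
            # specification (Definition 3.1); here we simply return it.
            return "unrealizable", result
        implementations.append(result)

    def composed(history, inputs):
        # Subresults define disjoint outputs, so their union is well-defined.
        outputs = {}
        for f in implementations:
            outputs.update(f(history, inputs))
        return outputs

    return "realizable", composed
```

A toy run with one subtask per output variable then composes the per-output implementations into a single implementation for all outputs.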
Soundness and completeness of modular synthesis depend on three requirements: equirealizability of the initial specification and the subspecifications, non-contradictory composability of the subresults, and satisfaction of the initial specification by the parallel composition of the subresults. Intuitively, these requirements are met if the decomposition algorithm neither introduces nor drops parts of the system specification and if it does not produce subspecifications that allow for contradictory implementations. To obtain composability of the subresults, the implementations need to agree on shared variables. We ensure this by assigning disjoint sets of output variables to the synthesis subtasks: Since every subresult only defines the behavior of the assigned output variables, the implementations are non-contradictory. Since the alphabets of the subspecifications thus only differ in input variables, the composition of their languages is non-contradictory:

Definition 3.2 (Language Composition) Let L_1, L_2 be languages over 2^{Σ_1} and 2^{Σ_2}, respectively. The non-contradictory composition of L_1 and L_2 is given by L_1 || L_2 = {σ_1 ∪ σ_2 | σ_1 ∈ L_1 ∧ σ_2 ∈ L_2 ∧ σ_1 ∩ Σ_2 = σ_2 ∩ Σ_1}, where union and intersection are applied pointwise.

The satisfaction of the initial specification by the composed subresults can be guaranteed by requiring the subspecifications to be independent sublanguages: Languages L_1 and L_2 over 2^{Σ_1} and 2^{Σ_2} are independent sublanguages of a language L over 2^Σ with Σ_1 ∪ Σ_2 = Σ if L_1 || L_2 = L holds. From these two requirements, i.e., that the subspecifications form non-contradictory and independent sublanguages, equirealizability of the initial specification and the subspecifications follows:

Theorem 3.1 Let s, s_1, and s_2 be specifications over V, V_1, and V_2, respectively, with V_1 ∪ V_2 = V. If V_1 ∩ V_2 ⊆ I holds and L(s_1) and L(s_2) are independent sublanguages of L(s), then s is realizable if, and only if, both s_1 and s_2 are realizable.
Proof First, suppose that s_1 and s_2 are realizable. Let f_1 and f_2 be implementations realizing s_1 and s_2, respectively. Since V_1 ∩ V_2 ⊆ I, we can construct an implementation f : (2^V)^* × 2^I → 2^O that combines the outputs of f_1 and f_2. Let σ ∈ C(f) and let σ' = σ ∩ V_1 and σ'' = σ ∩ V_2. Then σ' ∈ C(f_1) and σ'' ∈ C(f_2) follow by construction of f, and σ = σ' ∪ σ'' holds. Since s_1 and s_2 are realized by f_1 and f_2, we have σ' ∈ L(s_1) and σ'' ∈ L(s_2). Since L(s_1) and L(s_2) are independent sublanguages by assumption, L(s_1) || L(s_2) = L(s) holds. Hence, by definition of language composition, σ' ∪ σ'' ∈ L(s) follows and thus σ ∈ L(s). Hence, for all σ ∈ C(f), σ ∈ L(s) and therefore f realizes s. Second, let s_i be unrealizable for some i ∈ {1, 2} and let f^c_i : (2^{V_i})^* → 2^{I∩V_i} be a counterstrategy for s_i. We construct a counterstrategy f^c for s as in Definition 3.1; by Lemma 3.1, f^c is a counterstrategy for s and hence s is unrealizable. ⊓ ⊔

The soundness and completeness of Algorithm 1 for adequate decomposition algorithms now follows directly with Theorem 3.1 and the properties of such algorithms described above: They produce subspecifications that (1) do not share output variables and that (2) form independent sublanguages of the initial specification.

Theorem 3.2 Let s be a specification over V. Moreover, let S = {s_1, . . ., s_k} be a set of subspecifications of s over V_1, . . ., V_k with ⋃_{1≤i≤k} V_i = V and V_i ∩ V_j ⊆ I for i ≠ j, and such that L(s_1), . . ., L(s_k) are independent sublanguages of L(s). If s is realizable, Algorithm 1 yields an implementation realizing s. Otherwise, Algorithm 1 yields a counterstrategy for s.
Proof First, let s be realizable. Then, by applying Theorem 3.1 recursively, it follows that s_i is realizable for all s_i ∈ S. Since V_i ∩ V_j ⊆ I holds for any s_i, s_j ∈ S with i ≠ j, the implementations realizing s_1, . . ., s_k are non-contradictory. Hence, Algorithm 1 returns their composition, an implementation f. Since ⋃_{1≤i≤k} V_i = V holds, f defines the behavior of all outputs. By construction, f realizes all s_i ∈ S. Since the L(s_i) are non-contradictory, independent sublanguages of L(s), f thus realizes s.
Next, let s be unrealizable. Then, by applying Theorem 3.1 recursively, s_i is unrealizable for some s_i ∈ S. Thus, Algorithm 1 returns the extension of s_i's counterstrategy to a counterstrategy for the full specification. Its correctness follows with Lemma 3.1.
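The non-contradictory composition of Definition 3.2 can be illustrated on finite traces. The sketch below represents a word as a tuple of letters, each letter being the set of atomic propositions that hold at that position; restricting to finite words is of course a simplification, since the definition concerns infinite languages.

```python
def compose(lang1, sigma1, lang2, sigma2):
    """Non-contradictory composition of two finite-trace languages over
    2^Sigma1 and 2^Sigma2: pointwise unions of word pairs that agree on
    the shared variables."""
    shared = sigma1 & sigma2
    result = set()
    for w1 in lang1:
        for w2 in lang2:
            if len(w1) != len(w2):
                continue
            # The pair contributes only if both words agree on shared variables.
            if all((a & shared) == (b & shared) for a, b in zip(w1, w2)):
                result.add(tuple(a | b for a, b in zip(w1, w2)))
    return result
```

A pair of one-letter words that disagrees on a shared input is excluded from the composition, while an agreeing pair yields the pointwise union.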

Decomposition of Büchi Automata
To ensure soundness and completeness of modular synthesis, a specification decomposition algorithm needs to meet the language-based adequacy conditions of Theorem 3.1. In this section, we lift these conditions from the language level to nondeterministic Büchi automata and present a decomposition algorithm for specifications given as NBAs on this basis. Since the algorithm works directly on NBAs and not on their languages, we consider the parallel composition A_1 || A_2 of NBAs, i.e., the standard product automaton, instead of the composition of their languages. The parallel composition of NBAs reflects the composition of their languages:

Lemma 4.1 Let A_1 and A_2 be NBAs over 2^{Σ_1} and 2^{Σ_2}, respectively. Then, L(A_1 || A_2) = L(A_1) || L(A_2).

Using the above lemma, we can formalize the independent sublanguage criterion on NBAs directly: Two NBAs A_1 and A_2 are independent subautomata of an NBA A if A_1 || A_2 is equivalent to A. To apply Theorem 3.1, the alphabets of the subautomata may not share output variables. Our decomposition algorithm achieves this by constructing the subautomata from the initial automaton by projecting to disjoint sets of outputs. Intuitively, the projection to a set X abstracts from the variables outside of X. Hence, it only captures the parts of the initial specification concerning the variables in X.

The decomposition algorithm for NBAs is described in Algorithm 2. It is a recursive algorithm that, starting with the initial automaton A, guesses a subset X of the output variables out. It abstracts from the output variables outside of X by building the projection A_X of A to X ∪ inp, where inp is the set of input variables. Similarly, it builds the projection A_Y of A to Y ∪ inp for Y = out \ X. If the parallel composition A_X || A_Y is equivalent to A, then, since X and Y
(Figure: example NBAs; accepting states are marked with double circles.)
are disjoint and therefore A_X and A_Y do not share output variables, A_X and A_Y are a valid decomposition of A. The subautomata are then decomposed recursively. If no further decomposition is possible, the algorithm returns the subautomata. By only considering unexplored subsets of output variables, no subset combination X, Y is checked twice.
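The recursive structure of Algorithm 2 can be sketched as follows. Here, `project`, `compose`, and `equivalent` are hypothetical stand-ins for NBA projection, parallel composition, and language-equivalence checking, which carry the actual cost of the algorithm; the sketch only shows the subset-guessing skeleton.

```python
from itertools import combinations

def decompose_nba(aut, outputs, project, compose, equivalent):
    """Recursively split `aut`: guess an output subset X such that the
    projections to X and to outputs \\ X compose to an equivalent NBA."""
    for r in range(1, len(outputs)):
        for x in map(set, combinations(sorted(outputs), r)):
            y = set(outputs) - x
            a_x, a_y = project(aut, x), project(aut, y)
            if equivalent(compose(a_x, a_y), aut):
                # A valid decomposition was found; decompose both parts.
                return (decompose_nba(a_x, x, project, compose, equivalent)
                        + decompose_nba(a_y, y, project, compose, equivalent))
    return [aut]  # not further decomposable
```

With a trivial toy model of automata (an "automaton" is just its output set, composition is union, equivalence is equality), every automaton splits down to singletons, which illustrates the recursion bottoming out.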
As an example for the specification decomposition algorithm based on NBAs, consider the specification ϕ = o_1 ∧ (i → o_2) with I = {i} and O = {o_1, o_2}: The projections of its NBA to the individual outputs compose to an automaton equivalent to the original one, yielding a valid decomposition. In contrast, if the composition of the projections A'_X and A'_Y of an automaton A' accepts a word in (2^V)^ω that A' rejects, then A'_X and A'_Y are no valid decomposition. Algorithm 2 ensures soundness and completeness of modular synthesis: The subspecifications do not share output variables and they are equirealizable to the initial specification. This follows from the construction of the subautomata, Lemma 4.1, and Theorem 3.1:

Theorem 4.1 Let A be an NBA over 2^V and let S be the list of NBAs that Algorithm 2 returns on input A. Then, Algorithm 2 terminates and A is realizable if, and only if, all A_i ∈ S are realizable.

Proof Clearly, there are NBAs that cannot be decomposed further, e.g., automata whose alphabet contains only one output variable. Thus, since there are only finitely many subsets of O, Algorithm 2 terminates.
We show that the algorithm returns subspecifications that only share input variables, define all output variables of the system, and are independent sublanguages of the initial specification by structural induction on the initial automaton: For any automaton A' that is not further decomposable, Algorithm 2 returns a list S' solely containing A'. Clearly, the parallel composition of all automata in S' is equivalent to A' and the alphabets of the languages of the subautomata do not share output variables.
Next, let A' be an NBA such that there exists a set X ⊆ out for which A'_{π(X∪inp)} || A'_{π(Y∪inp)} is equivalent to A', where Y = out \ X. By induction hypothesis, the calls to the algorithm with A'_{π(X∪inp)} and A'_{π(Y∪inp)} return lists S'_X and S'_Y, respectively, where the parallel composition of all automata in S'_Z is equivalent to A'_{π(Z∪inp)} for Z ∈ {X, Y}. Thus, the parallel composition of all automata in the concatenation of S'_X and S'_Y is equivalent to A'_{π(X∪inp)} || A'_{π(Y∪inp)} and thus, by construction of X, to A'. Hence, their languages are independent sublanguages of L(A'). Furthermore, by induction hypothesis, the alphabets of the automata in S'_Z do not share output variables for Z ∈ {X, Y} and, by construction, they are subsets of the alphabet of A'_{π(Z∪inp)}. Hence, since clearly (X ∪ inp) ∩ ((out \ X) ∪ inp) ⊆ inp holds, the alphabets of the automata in the concatenation of S'_X and S'_Y do not share output variables. Moreover, the union of the alphabets of the automata in S'_Z equals the alphabet of A_{π(Z∪inp)} for Z ∈ {X, Y} by induction hypothesis. Since clearly X ∪ Y = out, it follows that the union of the alphabets of the automata in the concatenation of S'_X and S'_Y equals inp ∪ out. Thus, ⋃_{1≤i≤k} V_i = V and V_i ∩ V_j ⊆ I for 1 ≤ i, j ≤ k with i ≠ j. Moreover, L(A_1), . . ., L(A_k) are independent sublanguages of L(A). Thus, by Theorem 3.1, A is realizable if, and only if, all A_i ∈ S are realizable. ⊓ ⊔

Since Algorithm 2 is called recursively on every subautomaton obtained by projection, it directly follows that the nondeterministic Büchi automata contained in the returned list are not further decomposable:

Theorem 4.2 Let A be an NBA and let S be the set of NBAs that Algorithm 2 returns on input A. 
Then, for each A_i ∈ S over alphabet 2^{V_i}, there are no NBAs A', A'' over alphabets 2^{V'} and 2^{V''} with V' ∩ V'' ⊆ I and V' ∪ V'' = V_i such that A' || A'' is equivalent to A_i.

Hence, Algorithm 2 yields perfect decompositions and is semantically precise. Yet, it performs several expensive automaton operations such as projection, composition, and language containment checks. For large automata, this is infeasible. For specifications given as LTL formulas, we thus present an approximate decomposition algorithm in the next section that does not necessarily yield non-decomposable subspecifications, but that is free of the expensive automaton operations.

Decomposition of LTL Formulas
An LTL specification can be decomposed by translating it into an equivalent NBA and then applying Algorithm 2. To circumvent expensive automaton operations, though, we introduce an approximate decomposition algorithm that, in contrast to Algorithm 2, does not necessarily find all possible decompositions. In the following, we assume that V = prop(ϕ) holds for the initial specification ϕ. Note that any implementation for the variables in prop(ϕ) can easily be extended to one for the variables in V if prop(ϕ) ⊂ V holds: by ignoring the inputs in I \ prop(ϕ) and by choosing arbitrary valuations for the outputs in O \ prop(ϕ).
The main idea of the decomposition algorithm is to rewrite the initial LTL formula ϕ into a conjunctive form ϕ = ϕ_1 ∧ ··· ∧ ϕ_k with as many top-level conjuncts as possible by applying distributivity and pushing temporal operators inwards whenever possible. Then, we build subspecifications ϕ_i consisting of subsets of the conjuncts. Each conjunct occurs in exactly one subspecification. We say that conjuncts are independent if they do not share output variables. Given an LTL formula with two independent conjuncts, the languages of the conjuncts are independent sublanguages of the language of the whole formula:

Lemma 5.1 Let ϕ = ϕ_1 ∧ ϕ_2 be an LTL formula over atomic propositions V with conjuncts ϕ_1 and ϕ_2 over V_1 and V_2, respectively, with V_1 ∪ V_2 ⊆ V. Then, L(ϕ_1) and L(ϕ_2) are independent sublanguages of L(ϕ).
Proof First, let σ ∈ L(ϕ). Then, σ ∈ L(ϕ_i) holds for i ∈ {1, 2}. Since prop(ϕ_i) ⊆ V_i holds and since the satisfaction of ϕ_i only depends on the valuations of the variables in prop(ϕ_i), we have σ ∩ V_i ∈ L(ϕ_i) and hence σ ∈ L(ϕ_1) || L(ϕ_2). Next, let σ ∈ L(ϕ_1) || L(ϕ_2). Then, there are words σ_1 ∈ L(ϕ_1) and σ_2 ∈ L(ϕ_2) with σ = σ_1 ∪ σ_2. Since σ_1 and σ_2 agree on shared variables, σ ∈ L(ϕ_1) and σ ∈ L(ϕ_2). Hence, σ ∈ L(ϕ_1 ∧ ϕ_2). ⊓ ⊔

Our decomposition algorithm then ensures that different subspecifications share only input variables by merging conjuncts that share output variables into the same subspecification. Then, equirealizability of the initial formula and the subformulas follows directly from Theorem 3.1 and Lemma 5.1:

Corollary 5.1 Let ϕ = ϕ_1 ∧ ϕ_2 be an LTL formula over V with conjuncts ϕ_1 and ϕ_2 over V_1 and V_2, respectively, with V_1 ∪ V_2 = V and V_1 ∩ V_2 ⊆ I. Then, ϕ is realizable if, and only if, both ϕ_1 and ϕ_2 are realizable.

To determine which conjuncts of an LTL formula ϕ = ϕ_1 ∧ ··· ∧ ϕ_n share variables, we build the dependency graph D_ϕ = (V, E) of the output variables, where V = O and (a, b) ∈ E if, and only if, a ∈ prop(ϕ_i) and b ∈ prop(ϕ_i) for some 1 ≤ i ≤ n. Intuitively, outputs a and b that are contained in the same connected component of D_ϕ depend on each other in the sense that they either occur in the same conjunct or that they occur in conjuncts that are connected by other output variables. Hence, to ensure that subspecifications do not share output variables, conjuncts containing a or b need to be assigned to the same subspecification. Output variables that are contained in different connected components, however, are not linked and therefore implementations for their requirements can be synthesized independently, i.e., with independent subspecifications.

Algorithm 3 describes how an LTL formula is decomposed into subspecifications. First, the formula is rewritten into conjunctive form. Then, the dependency graph is built and the connected components are computed. For each connected component, as well as for the input variables, a subspecification is built by adding the conjuncts containing variables of the respective connected component or an input variable, respectively. To also consider the input variables is necessary to
assign every conjunct, including input-only ones, to at least one subspecification. By construction, no conjunct is added to the subspecifications of two different connected components. Yet, a conjunct could be added to both the subspecification of a connected component and the subspecification for the input-only conjuncts. This is circumvented by the break in Line 11. Hence, every conjunct is added to exactly one subspecification. To define the input and output variables for the synthesis subtasks, the algorithm assigns the inputs and outputs occurring in ϕ_i to the subspecification ϕ_i. While restricting the inputs is not necessary for correctness, it may improve the runtime of the synthesis task.
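A runnable approximation of Algorithm 3, assuming each conjunct is represented simply as the set of its atomic propositions: conjuncts whose output variables fall into the same connected component of the dependency graph (computed here with a union-find structure) end up in the same subspecification, and input-only conjuncts form one additional subspecification. The rewriting into conjunctive form is omitted.

```python
def decompose_ltl(conjuncts, outputs):
    """Group conjuncts (given as sets of propositions) so that no two
    groups share an output variable; input-only conjuncts form one group."""
    parent = {o: o for o in outputs}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Outputs occurring in the same conjunct depend on each other.
    for c in conjuncts:
        outs = [v for v in c if v in outputs]
        for o in outs[1:]:
            union(outs[0], o)

    groups, input_only = {}, []
    for c in conjuncts:
        outs = [v for v in c if v in outputs]
        if outs:
            groups.setdefault(find(outs[0]), []).append(c)
        else:
            input_only.append(c)
    result = list(groups.values())
    if input_only:
        result.append(input_only)
    return result
```

On the example below, the two independent conjuncts land in separate subspecifications, while conjuncts sharing an output are merged.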
As an example for the decomposition of LTL formulas, consider the specification ϕ = o_1 ∧ (i → o_2) with I = {i} and O = {o_1, o_2} again. Since ϕ is already in conjunctive form, no rewriting has to be performed. The two conjuncts of ϕ do not share any variables and therefore the dependency graph D_ϕ does not contain any edges. Therefore, we obtain two subspecifications, ϕ_1 = o_1 and ϕ_2 = i → o_2, which can be synthesized independently.

Soundness and completeness of modular synthesis with Algorithm 3 as a decomposition algorithm for LTL formulas follows directly from Corollary 5.1 if the subspecifications do not share any output variables:

Theorem 5.1 Let ϕ be an LTL formula over V. Then, Algorithm 3 terminates on input ϕ with a set S = {ϕ_1, . . ., ϕ_k} of LTL formulas with L(ϕ_i) ⊆ (2^{V_i})^ω such that ⋃_{1≤i≤k} V_i = V and V_i ∩ V_j ⊆ I for i ≠ j, and such that ϕ is realizable if, and only if, for all subspecifications ϕ_i ∈ S, ϕ_i is realizable.
Proof Since an output variable is part of exactly one connected component and since all conjuncts containing an output are contained in the same subspecification, every output is part of exactly one subspecification. Therefore, V_i ∩ V_j ⊆ I holds for 1 ≤ i, j ≤ k with i ≠ j. Moreover, the last component added in Line 8 contains all inputs. Hence, all variables that occur in a conjunct of ϕ are featured in at least one subspecification. Thus, ⋃_{1≤i≤k} V_i = prop(ϕ) holds and hence, since V = prop(ϕ) by assumption, ⋃_{1≤i≤k} V_i = V follows. Therefore, equirealizability of ϕ and the formulas in S directly follows with Corollary 5.1.

⊓ ⊔
While Algorithm 3 is simple and ensures soundness and completeness of modular synthesis, it strongly depends on the structure of the formula: When rewriting formulas in assume-guarantee format, i.e., formulas of the form ϕ = ⋀_{i=1}^m ϕ_i → ⋀_{j=1}^n ψ_j, to a conjunctive form, the conjuncts contain both assumptions ϕ_i and guarantees ψ_j. Hence, if a, b ∈ O occur in assumption ϕ_i and guarantee ψ_j, respectively, they are dependent. Thus, all conjuncts featuring a or b are contained in the same subspecification according to Algorithm 3. Yet, ψ_j might be realizable even without ϕ_i. An algorithm accounting for this might yield further decompositions and thus smaller synthesis subtasks.
In the following, we present a criterion for dropping assumptions while maintaining equirealizability. Intuitively, we can drop an assumption ϕ for a guarantee ψ if they do not share any variable. However, if ϕ can be violated by the system, i.e., if ¬ϕ is realizable, equirealizability is not guaranteed when dropping ϕ. For instance, consider the formula ϕ = (i_1 ∧ o_1) → (i_2 ∧ o_2), where I = {i_1, i_2} and O = {o_1, o_2}. Although assumption and guarantee do not share any variables, the assumption cannot be dropped: An implementation that never sets o_1 to true satisfies ϕ, but (i_2 ∧ o_2) alone is not realizable. Furthermore, dependencies between input variables may yield unrealizability if an assumption is dropped, as information about the remaining inputs might get lost. For instance, in a formula ϕ → ψ whose assumption ϕ couples the input i_1 with the remaining inputs, no assumption can be dropped: Otherwise the information about the global behavior of i_1, which is crucial for the existence of an implementation, is incomplete. These observations lead to the following criterion for safely dropping assumptions: An assumption ϕ_2 can be dropped from (ϕ_1 ∧ ϕ_2) → ψ if ϕ_2 neither shares variables with ϕ_1 nor with ψ and ¬ϕ_2 is unrealizable.
Proof First, let f_1 be an implementation realizing ϕ_1 → ψ. Since ϕ_2 neither shares variables with ϕ_1 nor with ψ, the valuations of the variables in prop(ϕ_2) do not affect the satisfaction of ϕ_1 → ψ. Hence, extending f_1 with arbitrary outputs outside of V_1 yields an implementation f with σ ∩ V_1 ∈ C(f_1) for every σ ∈ C(f), and thus f realizes (ϕ_1 ∧ ϕ_2) → ψ. Second, let f be an implementation realizing ϕ = (ϕ_1 ∧ ϕ_2) → ψ. Since ¬ϕ_2 is unrealizable, there is a counterstrategy f^c_2 for ¬ϕ_2 and all words compatible with f^c_2 satisfy ϕ_2. Given a finite sequence η ∈ (2^{V_1})^*, let η̄ ∈ (2^V)^* be the sequence obtained by lifting η to V using the output of f^c_2. Formally, η̄ = h(ε, η), where h : (2^V)^* × (2^{V_1})^* → (2^V)^* is a function defined by h(τ, ε) = τ for the empty word ε and by extending τ position by position with the valuations determined by f^c_2 otherwise. We construct an implementation g : (2^{V_1})^* × 2^{I_1} → 2^{O_1} based on f and η̄ that mimics f on the lifted histories. Let σ_f be the corresponding infinite sequence obtained from g when not restricting the output of f to O_1. Clearly, by construction of g, we have σ_f ∈ C(f) and hence, since f realizes ϕ by assumption, σ_f ∈ L(ϕ). Furthermore, we have σ_f ∈ L(ϕ_2) by construction of g since η̄ forces f to satisfy ϕ_2. Hence, σ_f ∈ L(ϕ_1 → ψ). Since ϕ_2 neither shares variables with ϕ_1 nor with ψ by assumption, the satisfaction of ϕ_1 → ψ is not influenced by the variables outside of V_1. Thus, g realizes ϕ_1 → ψ. ⊓ ⊔

By dropping assumptions, we are able to decompose LTL formulas of the form ϕ = ⋀_{i=1}^m ϕ_i → ⋀_{j=1}^n ψ_j in further cases: We rewrite ϕ to ⋀_{j=1}^n (⋀_{i=1}^m ϕ_i → ψ_j) and then drop assumptions for the individual guarantees. If the resulting subspecifications only share input variables, they are equirealizable to ϕ.
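The rewriting step used before dropping assumptions can be sketched on a string representation of formulas; the string syntax here is purely illustrative and not the input format of any particular tool.

```python
def rewrite_assume_guarantee(assumptions, guarantees):
    """Distribute the implication over the guarantees:
    A -> (G1 & ... & Gn)  is equivalent to  (A -> G1) & ... & (A -> Gn).
    Returns one implication per guarantee, ready for assumption dropping."""
    a = " & ".join(f"({x})" for x in assumptions)
    return [f"({a}) -> ({g})" for g in guarantees]
```

Each returned conjunct can then be analyzed separately, dropping the assumptions that share no variables with its guarantee and whose negation is unrealizable.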
Next, let both (ϕ_1 ∧ ϕ_3) → ψ_1 and (ϕ_2 ∧ ϕ_3) → ψ_2 be realizable and let f_1 and f_2 be implementations realizing them. Since the subspecifications only share input variables, f_1 and f_2 can be combined into an implementation f, and for every σ ∈ C(f), both σ ∩ V_1 ∈ C(f_1) and σ ∩ V_2 ∈ C(f_2) follow from the construction of f. Moreover, σ ∩ V_1 and σ ∩ V_2 agree on shared variables and thus (σ ∩ V_1) ∪ (σ ∩ V_2) ∈ L(ϕ' ∧ ϕ'') holds for ϕ' = (ϕ_1 ∧ ϕ_3) → ψ_1 and ϕ'' = (ϕ_2 ∧ ϕ_3) → ψ_2. Therefore, we have (σ ∩ V_1) ∪ (σ ∩ V_2) ∈ L(ϕ) as well by the semantics of conjunction and implication. Since σ = (σ ∩ V_1) ∪ (σ ∩ V_2), f realizes ϕ.

Analyzing assumptions thus allows for decomposing LTL formulas in further cases and still ensures soundness and completeness of modular synthesis. In the following, we present an optimized LTL decomposition algorithm that incorporates assumption dropping into the search for independent conjuncts. Intuitively, the algorithm needs to identify variables that cannot be shared safely among subspecifications. If an assumption contains such non-sharable variables, we say that it is bound to guarantees, since it can influence the possible decompositions. Otherwise, it is called free.
To determine which assumptions are relevant for decomposition, i.e., which assumptions are bound, we build a slightly modified version of the dependency graph that is based only on the assumptions, not on all conjuncts of the formula. Moreover, all variables serve as nodes of the graph, not only the output variables. An undirected edge between two variables in the modified dependency graph denotes that the variables occur in the same assumption. Variables that are contained in the same connected component as an output variable o ∈ O are thus connected to o over a path of one or more assumptions. Therefore, they may not be shared among subspecifications, as they might influence o and thus the decomposability of the specification. These variables are called decomposition-critical. Given the modified dependency graph, we can compute the decomposition-critical propositions with a simple depth-first search.
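The computation of the decomposition-critical propositions can be sketched as follows. Assumptions are abstracted to the sets of propositions they contain (sufficient for the graph construction), and the function name is ours:

```python
from collections import defaultdict

def decomposition_critical(assumptions, outputs):
    """Compute the decomposition-critical propositions.

    `assumptions` is a list of proposition sets (one per assumption);
    `outputs` is the set of output variables.  Two variables are adjacent
    iff they occur in the same assumption; a variable is decomposition-
    critical iff its connected component contains an output variable."""
    adj = defaultdict(set)
    for props in assumptions:
        for v in props:
            adj[v] |= props - {v}
    # Iterative depth-first search started from all output variables.
    critical, stack = set(), list(outputs)
    while stack:
        v = stack.pop()
        if v in critical:
            continue
        critical.add(v)
        stack.extend(adj[v] - critical)
    return critical

# i1 is critical because it reaches o1 via the chain i1 - i2 - o1;
# i3 occurs only in an assumption without outputs and stays free.
crit = decomposition_critical([{"i1", "i2"}, {"i2", "o1"}, {"i3"}], {"o1", "o2"})
```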
After computing the decomposition-critical propositions, we create the dependency graph and extract connected components in the same way as in Algorithm 3 to decompose the LTL specification. Instead of using only output variables as nodes of the graph, though, we use all decomposition-critical variables. We then exclude the free assumptions and add all bound assumptions to their respective subspecifications, similar to Algorithm 3. We assign the guarantees to their subspecifications in the same manner. Lastly, we add the remaining free assumptions. Since these assumptions are free, they could safely be added to all subspecifications. Yet, to obtain small subspecifications, we only add them to the subspecifications for which they are needed: we have to add all free assumptions featuring an input variable that occurs in the respective subspecification. We analyze the assumptions and add them in a single pass, as a naive approach could have an unfavorable running time. The whole LTL decomposition algorithm with optimized assumption handling is shown in Algorithm 4.
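The decomposition itself can be sketched as follows. Again, conjuncts are abstracted to their proposition sets; the union-find structure plays the role of the connected-component extraction, and the final loop is a simplified single pass over the free assumptions (the full algorithm computes the closure of this rule in one analysis step). All names are ours.

```python
from collections import defaultdict

def decompose(assumptions, guarantees, critical):
    """Sketch of LTL decomposition with optimized assumption handling.
    `critical` is the set of decomposition-critical variables."""
    # Union-find over the decomposition-critical variables.
    parent = {v: v for v in critical}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    # Bound assumptions contain a decomposition-critical variable; like the
    # guarantees, they act as edges of the dependency graph.
    bound = [a for a in assumptions if a & critical]
    free = [a for a in assumptions if not a & critical]
    for props in bound + guarantees:
        vs = list(props & critical)
        for v in vs[1:]:
            parent[find(v)] = find(vs[0])

    # One subspecification per connected component; every guarantee is
    # assumed to contain at least one (critical) output variable.
    subspecs = defaultdict(lambda: {"assumptions": [], "guarantees": []})
    for g in guarantees:
        subspecs[find(next(iter(g & critical)))]["guarantees"].append(g)
    for a in bound:
        subspecs[find(next(iter(a & critical)))]["assumptions"].append(a)

    # Free assumptions are added only to subspecifications that share one
    # of their variables.
    for a in free:
        for sub in subspecs.values():
            used = set().union(*sub["assumptions"], *sub["guarantees"])
            if a & used:
                sub["assumptions"].append(a)
    return list(subspecs.values())
```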
The decomposition algorithm does not check for assumption violations. The unrealizability of the negation of the dropped assumptions, however, is an essential part of the criterion for assumption dropping (cf. Theorem 5.2). Therefore, we incorporate the check for assumption violations into the modular synthesis algorithm: Before decomposing the specification, we perform synthesis on the negated assumptions. If synthesis returns that the negated assumptions are realizable, the system is able to violate an assumption. The implementation satisfying the negated assumptions is then extended to an implementation for the whole specification, which realizes the specification by violating the assumptions. Otherwise, if the negated assumptions are unrealizable, the conditions of Theorem 5.2 are satisfied. Hence, we can use the decomposition algorithm and proceed as in Algorithm 1. The modified modular synthesis algorithm that incorporates the check for assumption violations is shown in Algorithm 5.
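The control flow of the modified modular synthesis algorithm can be sketched as follows. Synthesis, decomposition, and composition are black boxes here, passed in as hypothetical callbacks; the string manipulation is only a stand-in for real formula handling:

```python
def modular_synthesis(assumptions, guarantees, synthesize, decompose, compose):
    """Sketch of modular synthesis with the assumption-violation check.
    `synthesize(phi)` returns an implementation or None if phi is
    unrealizable; `decompose` and `compose` are the black-box
    decomposition and composition steps.  Assumptions are assumed
    to be non-empty."""
    # Step 1: check whether the system itself can violate the assumptions
    # by synthesizing an implementation for their negation.
    negated = "!(" + " & ".join(assumptions) + ")"
    violation = synthesize(negated)
    if violation is not None:
        # The assumption-violating implementation would be extended to an
        # implementation for the whole specification (extension omitted).
        return violation
    # Step 2: the negated assumptions are unrealizable, so the dropping
    # criterion applies; decompose and synthesize independently.
    implementations = []
    for sub in decompose(assumptions, guarantees):
        impl = synthesize(sub)
        if impl is None:
            return None  # a subspecification, and hence the spec, is unrealizable
        implementations.append(impl)
    return compose(implementations)
```

Because synthesis is used only through the `synthesize` callback, any synthesis tool can be plugged in unchanged, which is exactly what makes the approach a preprocessing technique.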
Note that Algorithm 4 is only applicable to specifications in a strict assume-guarantee format since Theorem 5.2 assumes a top-level implication in the formula. In the next section, we thus present an extension of the LTL decomposition algorithm with optimized assumption handling to specifications consisting of several assume-guarantee conjuncts, i.e., specifications of the form ϕ = (ϕ1 → ψ1) ∧ ⋯ ∧ (ϕk → ψk).

Optimized LTL Decomposition for Formulas with Several Assume-Guarantee Conjuncts

Since Corollary 5.1 can be applied recursively, classical LTL decomposition, i.e., as described in Algorithm 3, is applicable to specifications with several conjuncts. In particular, it is applicable to specifications with several assume-guarantee conjuncts, i.e., specifications of the form ϕ = (ϕ1 → ψ1) ∧ ⋯ ∧ (ϕk → ψk). Algorithm 4, in contrast, is restricted to LTL specifications consisting of a single assume-guarantee pair since Theorem 5.2, on which Algorithm 4 relies, assumes a top-level implication in the specification. Hence, we cannot apply the optimized assumption handling to specifications with several assume-guarantee conjuncts directly.
A naive approach to extend assumption dropping to formulas with several assume-guarantee conjuncts is to first drop assumptions for all conjuncts separately and then to decompose the resulting specification using Algorithm 3. In general, however, this is not sound: The other conjuncts may introduce dependencies between assumptions and guarantees that prevent the dropping of an assumption. When considering the conjuncts separately during the assumption dropping phase, such dependencies are not detected. For instance, consider a system with I = {i}, O = {o1, o2}, and a specification ϕ consisting of three conjuncts. Clearly, ϕ is realizable by an implementation that sets o1 to ¬i and o2 to i in every time step. Since the first conjunct contains both o1 and o2, Corollary 5.1 is not applicable and thus Algorithm 3 does not decompose ϕ. The naive approach for incorporating assumption dropping described above considers the third conjunct of ϕ separately and checks whether the assumption i can be dropped. Since the assumption and the guarantee do not share any variables, Lemma 5.2 is applicable and thus the naive algorithm drops i, yielding a formula ϕ′. Yet, ϕ′ is not realizable: If i is constantly set to false, the second conjunct of ϕ′ enforces o1 to be always set to true. The third conjunct enforces that o2 is constantly set to true irrespective of the input i. The first conjunct requires, in every time step, one of the output variables to be false. Thus, although Lemma 5.2 is applicable to i → o2, dropping the assumption safely is not possible in the context of the other two conjuncts. In particular, the first conjunct of ϕ introduces a dependency between o1 and o2, while the second conjunct introduces one between i and o1. Hence, there is a transitive dependency between i and o2 due to which the assumption i cannot be dropped. This dependency is not detected when considering the conjuncts separately during the assumption dropping phase.
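The transitive dependency in this example can be made explicit with a simple reachability check over a variable co-occurrence graph. Conjuncts are abstracted to their proposition sets, and the function name is ours:

```python
from collections import defaultdict

def connected(conjuncts, a, b):
    """Is variable `a` linked to variable `b` through co-occurrence in the
    given conjuncts (abstracted to their sets of propositions)?"""
    adj = defaultdict(set)
    for props in conjuncts:
        for v in props:
            adj[v] |= props - {v}
    seen, stack = set(), [a]
    while stack:  # iterative depth-first search from `a`
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(adj[v] - seen)
    return b in seen

# In isolation, the assumption i and the guarantee o2 of the third conjunct
# share no variables.  Through the remaining conjuncts ({o1, o2} and
# {i, o1}), however, i is transitively linked to o2, so dropping i is unsafe.
```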
In this section, we introduce an optimization of the LTL decomposition algorithm that is able to decompose specifications with several conjuncts, possibly in assume-guarantee format, and that is, in contrast to the naive approach described before, sound. Similar to the naive approach, the main idea is to first check for assumptions that can be dropped in the different conjuncts and to then perform the classical LTL decomposition algorithm. Yet, the assumption dropping phase is not performed completely separately for the individual conjuncts but takes the other conjuncts, and thus possible transitive dependencies between the assumptions and guarantees, into account.
Next, let ϕ be realizable and let f be an implementation realizing it. Then, an implementation f1 that behaves as f restricted to the variables in V′ realizes ψ′ ∧ ((ϕ1 ∧ ϕ2 ∧ ϕ3) → ψ1). An implementation f2 that behaves as f restricted to the variables in V′′ realizes ψ′′ ∧ ((ϕ1 ∧ ϕ2 ∧ ϕ3) → ψ2). By assumption, for i ∈ {1, 2}, ϕi does not share variables with the respective other subspecification. Hence, the former formulas are equirealizable to ϕ′ and ϕ′′, respectively. Thus, since f1 and f2 realize the former formulas, ϕ′ and ϕ′′ are both realizable. ⊓⊔

Utilizing Theorem 6.1, we extend Algorithm 4 to LTL specifications that do not follow a strict assume-guarantee form but consist of multiple conjuncts. The extended algorithm is depicted in Algorithm 6. We assume that the specification is not decomposable by Algorithm 3, i.e., we assume that no plain decompositions are possible. In practice, we thus first rewrite the specification and apply Algorithm 3, before then applying Algorithm 6 to the resulting subspecifications.
Hence, we assume that the dependency graph built from the output propositions of all given conjuncts consists of a single connected component. Theorem 6.1 hands us the tools to "break a link" in this chain of dependencies. This link has to be induced by a suitable implication. Algorithm 6 assumes that at least one of the conjuncts is an implication. In the case of more than one implication, the choice of the implication determines whether or not a decomposition is found. Therefore, it is crucial to reapply the algorithm on the subspecifications after a decomposition has been found and to try all implications if no decomposition is found. Since iterating through all conjuncts does not pose a large overhead in computation time, the choice of the implication is not further specified in the algorithm.
The extended algorithm is similar to Algorithm 4. Note that the dependency graph used for finding the decomposition-critical propositions is built only from the assumptions of the chosen implication, as we are only searching for droppable assumptions of this implication. In contrast to Algorithm 4, the dependency graph in line 5 of Algorithm 6 also includes the dependencies induced by the other conjuncts, similar to the dependency graph in Algorithm 3. Here, we consider all decomposition-critical variables in the conjuncts, not only output variables, as an assumption can only be dropped if it shares no variables with the remaining conjuncts. Therefore, the additional conjuncts are treated in the same way as the guarantees. This carries over to when the conjuncts are added to the subspecifications. Lastly, Algorithm 6 differs slightly from Algorithm 4 when the free assumptions are added to the subspecifications. Here, the remaining conjuncts have to be considered too, since we may not drop assumptions that share variables with the remaining conjuncts. Consequently, all free assumptions that share an input with one of the remaining conjuncts need to be added.
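The try-all-implications-and-recurse discipline described above can be sketched as a small control loop. The single decomposition attempt is a hypothetical black box here (the paper's Algorithm 6); only the iteration and recursion are shown, and all names are ours:

```python
def decompose_all(conjuncts, decompose_one):
    """Sketch of the control loop around the extended decomposition: try
    every implication conjunct as the candidate for assumption dropping and
    reapply the algorithm to any subspecification found.
    `decompose_one(i, conjuncts)` is a hypothetical black box returning a
    list of subspecifications (each a list of conjuncts) or None if no
    decomposition is found for candidate implication i."""
    for i, conjunct in enumerate(conjuncts):
        if not conjunct.get("is_implication"):
            continue  # only implications allow assumption dropping
        subs = decompose_one(i, conjuncts)
        if subs is not None:
            # Recurse: a different implication may split a subspecification
            # further, so the algorithm is reapplied to every part.
            return [part for sub in subs
                    for part in decompose_all(sub, decompose_one)]
    return [conjuncts]  # not decomposable with any implication
```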
One detail that has to be taken into account when integrating this LTL decomposition algorithm with extended optimized assumption handling into a synthesis tool is that, like Algorithm 4, Algorithm 6 assumes that all negated assumptions are unrealizable. For formulas in a strict assume-guarantee format, realizable negated assumptions imply that we have found a strategy realizing the specification by violating the assumptions. This changes when considering formulas with additional conjuncts, since these conjuncts might forbid such a strategy. To detect this, we can verify the synthesized strategy against the remaining conjuncts and only extend it to an implementation for the whole specification in the positive case.

Experimental Evaluation
We implemented the modular synthesis algorithm as well as the decomposition approaches and evaluated them on the 346 publicly available SYNTCOMP [18] 2020 benchmarks. Note that only 207 of the benchmarks have more than one output variable and are therefore realistic candidates for decomposition. The automaton decomposition algorithm utilizes Spot's [7] automaton library (version 2.9.6). The LTL decomposition relies on SyFCo [19] for formula transformations (version 1.2.1.1). We first decompose the specification with our algorithms and then run synthesis on the resulting subspecifications. We compare the CPU time of the synthesis task as well as the number of gates and latches of the synthesized AIGER circuit for the original specification to the sum of the corresponding attributes of all subspecifications. Thus, we calculate the runtime for sequential modular synthesis; parallelization of the synthesis tasks may reduce the runtime even further.

LTL Decomposition
LTL decomposition with optimized assumption handling (cf. Section 6) terminates on all benchmarks in less than 26 ms. Thus, even for non-decomposable specifications, the overhead of attempting a decomposition is negligible. The algorithm decomposes 39 formulas into several subspecifications, most of them yielding two or three subspecifications. Only a handful of formulas are decomposed into more than six subspecifications. The full distribution of the number of subspecifications over all specifications is shown in Table 7.1. We evaluate our modular synthesis approach with two state-of-the-art synthesis tools: BoSy [9], a bounded synthesis tool, and Strix [25], a game-based synthesis tool, both in their 2019 release. We used a machine with a 3.6 GHz quad-core Intel Xeon processor and 32 GB RAM as well as a timeout of 60 minutes.
In Figure 7.1, the comparison of the accumulated runtimes of the synthesis tasks of the subspecifications with those of the original formula is shown for the decomposable SYNTCOMP benchmarks. For both BoSy and Strix, decomposition generates a slight overhead for small specifications. For larger and more complex specifications, however, modular synthesis decreases the execution time significantly, often by an order of magnitude or more. Note that due to the negligible runtime of specification decomposition, the plot looks similar when considering all SYNTCOMP benchmarks. Table 7.2 shows the running times of BoSy and Strix for modular and non-compositional synthesis on exemplary benchmarks. For modular synthesis, the accumulated running time of all synthesis tasks is depicted. On almost all of these benchmarks, both tools decrease their synthesis times notably with modular synthesis compared to the original non-compositional approaches. Particularly noteworthy is the benchmark generalized buffer 3: In the last synthesis competition, SYNTCOMP 2021, no tool was able to synthesize a solution for it within one hour. With modular synthesis, however, BoSy yields a result in less than 28 seconds. In Tables 7.3 and 7.4, the number of gates and latches, respectively, of the AIGER circuits [1] corresponding to the implementations computed by BoSy and Strix for modular and non-compositional synthesis are depicted for exemplary benchmarks. For most specifications, the solutions of modular synthesis are of the same size or smaller in terms of gates than the solutions for the original specification. The size of the solutions in terms of latches, however, varies. Note that BoSy does not generate solutions with less than one latch in general. Hence, the modular solution will always have at least as many latches as subspecifications.

Table 7.5: Distribution of the number of subspecifications over all specifications for NBA decomposition. For 79 specifications, the timeout (60 min) was reached. For 39 specifications, the memory limit (16 GB) was reached.

Automaton Decomposition
Besides LTL specifications, Strix also accepts specifications given as deterministic parity automata (DPAs) in extended HOA format [27], an automaton format well-suited for synthesis. Thus, our implementation for decomposing specifications given as NBAs performs Algorithm 2, converts the resulting automata to DPAs, and then synthesizes solutions with Strix.
For 235 out of the 346 benchmarks, NBA decomposition terminates within ten minutes, yielding several subspecifications or proving that the specification is not decomposable. In 79 of the other cases, the tool timed out after 60 minutes, and in the remaining 32 cases it reached the memory limit of 16 GB or the internal limits of Spot. Note, however, that for 81 specifications even plain DPA generation failed. The distribution of the number of subspecifications over all specifications is shown in Table 7.5. Thus, while automaton decomposition yields more fine-grained decompositions than the approximate LTL approach, it becomes infeasible when the specifications grow, and the advantage of smaller synthesis subtasks cannot pay off. The coarser LTL decomposition, in contrast, suffices to reduce the synthesis time on common benchmarks significantly. Thus, LTL decomposition strikes the right balance between small subtasks and a scalable decomposition.
For 43 specifications, the automaton approach yields decompositions, many of which consist of four or more subspecifications. For 22 of these specifications, the LTL approach yields a decomposition as well. Yet, the decompositions differ in most cases, as the automaton approach yields more fine-grained ones.
Recall that only 207 SYNTCOMP benchmarks are realistic candidates for decomposition. The automaton approach proves that 90 of those specifications (43.6%) are not decomposable. Thus, our implementations yield decompositions for 33.33% (LTL) and 36.75% (NBA) of the potentially decomposable specifications. We observed that decomposition works exceptionally well for specifications that stem from real system designs, for instance the Syntroids [15] case study, indicating that modular synthesis is particularly beneficial in practice.

Conclusion
We have presented a modular synthesis algorithm that applies compositional techniques to reactive synthesis. It reduces the complexity of synthesis by decomposing the specification in a preprocessing step and then performing independent synthesis tasks for the subspecifications. We have introduced a criterion for decomposition algorithms that ensures soundness and completeness of modular synthesis, as well as two decomposition algorithms satisfying the criterion: a semantically precise one for specifications given as nondeterministic Büchi automata, and an approximate algorithm for LTL specifications. We presented optimizations of the LTL decomposition algorithm for formulas in a strict assume-guarantee format and for formulas consisting of several assume-guarantee conjuncts. Both optimizations are based on dropping assumptions that do not influence the realizability of the rest of the formula. We have implemented the modular synthesis algorithm as well as both decomposition algorithms, and we compared our approach, instantiated with the state-of-the-art synthesis tools BoSy and Strix, to their non-compositional forms. Our experiments clearly demonstrate the significant advantage of modular synthesis with LTL decomposition over traditional synthesis algorithms. While the overhead is negligible, both BoSy and Strix are able to synthesize solutions for more benchmarks with modular synthesis than in their non-compositional form. Moreover, on large and complex specifications, BoSy and Strix improve their synthesis times notably, demonstrating that specification decomposition is a game-changer for practical LTL synthesis.
Building on the presented approach, we can additionally analyze whether the subspecifications fall into fragments for which efficient synthesis algorithms exist, for instance safety specifications. Since modular synthesis performs independent synthesis tasks for the subspecifications, we can choose, for each synthesis task, an algorithm that is tailored to the fragment in which the respective subspecification lies. Moreover, parallelizing the individual synthesis tasks may increase the advantage of modular synthesis over classical algorithms. Since the number of subspecifications computed by the LTL decomposition algorithm highly depends on the rewriting of the initial formula, a further promising next step is to develop more sophisticated rewriting algorithms.

Fig. 4.2: Minimized NBAs for the projections A_π(V1) and A_π(V2) of the NBA A from Figure 4.1 to the sets of variables V1 = {i, o1} and V2 = {i, o2}, respectively. Accepting states are marked with double circles.
and 4.2b. Clearly, V1 ∩ V2 ⊆ I holds. Moreover, their parallel composition is exactly the NBA A depicted in Figure 4.1, and therefore their parallel composition accepts exactly those words that satisfy ϕ. For a slightly modified specification ϕ

Fig. 7.1: Comparison of the performance of modular and non-compositional synthesis with BoSy and Strix on the decomposable SYNTCOMP benchmarks. For the modular approach, the accumulated time for all synthesis tasks is depicted.

Table 7.1: Distribution of the number of subspecifications over all specifications for LTL decomposition.

Table 7.2: Synthesis time in seconds of BoSy and Strix for non-compositional and modular synthesis on exemplary SYNTCOMP benchmarks with a timeout of 60 minutes.

Table 7.3: Gates of the synthesized solutions of BoSy and Strix for non-compositional and modular synthesis on exemplary SYNTCOMP benchmarks. Entry '-' denotes that no solution was found within 60 minutes.

Table 7.4: Latches of the synthesized solutions of BoSy and Strix for non-compositional and modular synthesis on exemplary SYNTCOMP benchmarks. Entry '-' denotes that no solution was found within 60 minutes.