
1 Introduction

Nowadays, hardware and software systems are everywhere around us. One way to ensure their correct functioning is to automatically synthesize them from a formal specification. This has two advantages over alternatives such as testing and model checking: the design part of the program-development process can be completely bypassed and the synthesized program is correct by construction.

In this work we are interested in synthesizing reactive systems [17]. These maintain a continuous interaction with their environment. Examples of reactive systems include communication, network, and multimedia protocols as well as operating systems. For the specification, we consider linear temporal logic (LTL) [27]. LTL makes it possible to specify, in a natural way, temporal dependencies among the events that make up the formal specification of a system. The popularity of LTL as a formal specification language extends to, amongst others, AI [8, 15, 16], hybrid systems and control [6], software engineering [21], and bio-informatics [1].

The classical doubly-exponential-time synthesis algorithm can be decomposed into three steps: 1. compile the LTL formula into an automaton of exponential size [32], 2. determinize the automaton [26, 29], incurring a second exponential blowup, and 3. determine the winner of a two-player zero-sum game played on the latter automaton [28]. Most alternative approaches focus on avoiding the determinization step of the algorithm. This has motivated the development of so-called Safra-less approaches, e.g., [10, 11, 20, 31]. Worth mentioning are the on-the-fly game construction implemented in the Strix tool [24] and the downset-based (or “antichain-based”) on-the-fly bounded determinization described in [13] and implemented in Acacia+ [5]. Both avoid constructing the doubly-exponential deterministic automaton. Acacia+ was not ranked in recent editions of SYNTCOMP [18] since it is no longer maintained, despite remaining one of the main references for new advancements in the field (see, e.g., [2, 12, 22, 30, 33]).

Contribution. We present the Acacia approach to solving the problem at hand and propose a new implementation that allows for a variety of optimization steps. For now, we have focused on (Büchi automata) realizability, i.e., the decision problem which takes as input an automaton compiled from the LTL formula and asks whether a controller satisfying it exists. In our tool, we compile the input LTL formula into an automaton using Spot [9]. We focus our presentation entirely on the technical problem at hand and strive to distill the algorithmic essence of the Acacia approach in that context. The main algorithm is presented in Section 3.4 and the different implementation options are listed in Section 4. Benchmarks are included in Section 6.

All benchmarks were executed on the revision of the software that can be found at:

2 Preliminaries

Throughout this paper, we assume the existence of two alphabets, \(I\) and \(O\); although these stand for input and output, the actual definitions of these two terms are slightly more complex: An input (resp. output) is a boolean combination of symbols of \(I\) (resp. \(O\)) and it is pure if it is a conjunction in which all the symbols in \(I\) (resp. \(O\)) appear exactly once; e.g., with \(I = \{i_1, i_2\}\), the expressions \(\top \) (true), \(\bot \) (false), and \((i_1 \vee i_2)\) are inputs, and \((i_1 \wedge \lnot i_2)\) is a pure input. Similarly, an IO is a boolean combination of symbols of \(I \cup O\), and it is pure if it is a conjunction in which all the symbols in \(I \cup O\) appear exactly once. We use \(i, j\) to denote inputs and \(x, y\) for IOs. Two IOs \(x\) and \(y\) are compatible if \(x \wedge y \ne \bot \).

A Büchi automaton \(\mathcal {A}\) is a tuple \((Q, q_0, \delta , B)\) with \(Q\) a set of states, \(q_0\) the initial state, \(\delta \) the transition relation that uses IOs as labels, and \(B \subseteq Q\) the set of Büchi states. The actual semantics of this automaton will not be relevant to our exposition; we simply note that these automata are usually defined to recognize infinite sequences of pure IOs. We assume, throughout this paper, the existence of some automaton \(\mathcal {A} \).

We will be interested in valuations of the states of \(\mathcal {A} \) that encode the number of visits to Büchi states—again, we do not go into details here. We will simply speak of vectors over \(\mathcal {A}\) for elements in \(\mathbb {Z} ^Q\), mapping states to integers. We will write \(\vec {v}\) for such vectors, and \(v_q\) for the value of \(\vec {v}\) at state \(q\). In practice, these vectors will range over a finite subset of \(\mathbb {Z} \), with \(-1\) as an implicit minimum value (meaning that \((-1) - 1\) is still \(-1\)) and an upper bound provided by the problem.

For a vector \(\vec {v}\) over \(\mathcal {A} \) and an IO \(x\), we define a function that takes one step back in the automaton, decreasing components that have seen Büchi states. Write \(\chi _B(q)\) for the function mapping a state \(q\) to \(1\) if \(q \in B\), and \(0\) otherwise. We then define \(\textrm{bwd} (\vec {v}, x)\) as the vector over \(\mathcal {A}\) that maps each state \(p \in Q\) to:

$$\min _{\begin{array}{c} (p, y, q) \in \delta \\ x \text { compatible with } y \end{array}} \left( v_q - \chi _B(q)\right) ,$$

and we generalize this to sets: \(\textrm{bwd} (S, x) = \{\textrm{bwd} (\vec {v}, x) \mid \vec {v} \in S\}\). For a set \(S\) of vectors over \(\mathcal {A}\) and a (possibly nonpure) input \(i\), define:

$$\begin{aligned} \textrm{CPre} _i(S) = S \cap \bigcup _{\begin{array}{c} x \text { pure IO}\\ x \text { compatible with } i \end{array}} \textrm{bwd} (S, x). \end{aligned}$$

It can be proved that iterating \(\textrm{CPre}\) with any possible pure input stabilizes to a fixed point that is independent of the order in which the inputs are selected. We define \(\textrm{CPre} ^*(S)\) to be that set.

All the sets that we manipulate will be downsets: we say that a vector \(\vec {u}\) dominates another vector \(\vec {v}\) if for all \(q \in Q\), \(u_q \ge v_q\), and we say that a set \(S\) is a downset if \(\vec {u} \in S\) and \(\vec {u}\) dominating \(\vec {v}\) imply that \(\vec {v} \in S\). This makes it possible to implement these sets by keeping only the dominating elements, which form, as they are pairwise nondominating, an antichain. In practice, it may be interesting to keep more elements than just the dominating ones, or even to keep all of the elements, to avoid the cost of computing domination.

Finally, we define \(\textrm{Safe} _k\) as the downset \(\{i \mid i \le k\}^Q\), i.e., all vectors with values bounded by \(k\). We are now equipped to define the computational problem we focus on, which we call BackwardRealizability:


  • Given: A Büchi automaton \(\mathcal {A}\) and an integer \(k > 0\),

  • Question: Is there a \(\vec {v} \in \textrm{CPre} ^*(\textrm{Safe} _k)\) with \(v_{q_0} \ge 0\)?

We note, for completeness, that (for sufficiently large values of \(k\)) this problem is equivalent to deciding the realizability problem associated with \(\mathcal {A}\): the question has a positive answer if and only if the output player wins the Gale-Stewart game with payoff set the complement of the language of \(\mathcal {A} \).

3 Realizability algorithm

The problem admits a natural algorithmic solution: start with the initial set, pick an input \(i\), apply \(\textrm{CPre} _i\) to the set, and iterate until all inputs induce no change to the set; then check whether this set contains a vector that maps \(q_0\) to a nonnegative value. We first introduce some degrees of freedom in this approach, then present a slight twist on that solution that will serve as a canvas for the different optimizations.

3.1 Boolean states

This opportunity for optimization was identified in [4] and implemented in Acacia+; we simply introduce it in a more general setting and succinctly present the original idea when we mention how it can be implemented in Section 4.2. We start with an example. Consider the Büchi automaton from Figure 1 with \(q_0,q_1 \not \in B\).

Fig. 1. Small automaton with \(q_0,q_1 \not \in B\).

Recall that we are interested in whether the initial state can carry a nonnegative value, after \(\textrm{CPre}\) has stabilized. In that sense, the crucial information associated with \(q_0\) is boolean in nature: is its value nonnegative, or is it \(-1\)? Even further, this same remark can be applied to \(q_1\), since \(q_1\) being valued \(6\) or \(7\) is not important to the valuation of \(q_0\). Hence the set of states may be partitioned into integer-valued states and boolean-valued ones. Naturally, detecting which states can be made boolean comes at a cost, and not doing it is a valid option.

3.2 Actions

For each IO \(x\), we will have to compute \(\textrm{bwd} (\vec {v}, x)\) oftentimes. This requires referring to the underlying Büchi automaton and checking, for each transition therein, whether \(x\) is compatible with its label. It may be preferable to precompute, for each \(x\), the relevant pairs \((p, q)\) for which \(x\) can go from \(p\) to \(q\). We call the set of such pairs the io-action of \(x\) and denote it \(\text {io-act} (x)\); in symbols:

$$\text {io-act} (x) = \{(p, q) \mid (\exists (p, y, q) \in \delta )[x \text { is compatible with } y]\}.$$

Further, as we will be computing \(\textrm{CPre} _i(S)\) for inputs \(i\), we abstract in a similar way the information required for this computation. We use the term input-action for the set of io-actions of IOs compatible with \(i\) and denote it \(\text {i-act} (i)\); in symbols:

$$\begin{aligned} \text {i-act} (i) = \bigcup _{\begin{array}{c} x \text { an IO}\\ \text {compatible with }i \end{array}} \text {io-act} (x) . \end{aligned}$$

In other words, actions contain exactly the information necessary to compute \(\textrm{CPre} \). Note that from an implementation point of view, we do not require that the actions be precomputed. Indeed, when iterating through pairs \((p, q) \in \text {io-act} (x)\), the underlying implementation can choose to go back to the automaton.

3.3 Sufficient inputs

As we consider the transitions of the Büchi automaton as being labeled by boolean expressions, it becomes more apparent that some pure IOs can be redundant. For instance, consider a Büchi automaton with \(I = \{i\}, O = \{o_1, o_2\}\), but the only transitions compatible with \(i\) are labeled \((i \wedge o_1)\) and \((i \wedge \lnot o_1)\). Pure IOs compatible with the first label will be \((i \wedge o_1 \wedge o_2)\) and \((i \wedge o_1 \wedge \lnot o_2)\), but certainly, these two IOs have the same io-actions, and optimally, we would only consider \((i \wedge o_1)\). However, we should not consider \((i \wedge o_2)\), as it induces an io-action that is not induced by a pure IO. We will thus allow our main algorithm to select certain inputs and IOs and introduce the following notion:

Definition 1

An IO (resp. input) is valid if there exists any pure IO (resp. input) with the same io-action (resp. input-action). A set \(X\) of valid IOs is sufficient if it represents all the possible io-actions of pure IOs: \(\{\text {io-act} (x) \mid x \in X\} = \{\text {io-act} (x) \mid x \text { is a pure IO}\}.\) A sufficient set of inputs is defined similarly with input-actions.

3.4 Algorithm

We solve BackwardRealizability by computing \(\textrm{CPre} ^*\) explicitly:

[Algorithm 1: the main loop computing \(\textrm{CPre} ^*\); its line numbers are referenced in Section 4.]

Our algorithm requires that the “input-action picker” used in line 8 decides whether we have reached a fixed point. As the picker could check whether \(S\) has changed, this is without loss of generality.

The computation of \(\textrm{CPre} _a\) is the intuitive one, optimizations therein coming from the internal representation of actions. That is, it is implemented by iterating through all io-actions compatible with \(a\), applying \(\textrm{bwd} \) on \(S\) for each of them, taking the union over all these applications, and finally intersecting the result with \(S\).

4 The many options at every line

The main computational costs of the algorithm are in finding input-actions and computing \(\textrm{CPre} _a\). For the former, reducing the number of candidates is crucial (by considering a good set of sufficient inputs). For the latter, reducing the size of the automaton (hence the dimension of the vectors) and providing efficient data types for downsets is key. Additionally, for the “input-action picker” to return an input that will make progress, it has to explore \(S\) in some way — this can again be a costly operation that would be sped up by better data structures for downsets. Let us now review these potential optimizations line by line.

4.1 Preprocessing of the automaton (line 1)

In this step, one can provide a heuristic that removes certain states that do not contribute to the computation. We provide an optional step that detects surely losing states, as presented in [14].

4.2 Boolean states (line 2)

We provide an implementation of the detection of boolean states, in addition to an option to not detect them. Our implementation is based on the concept of bounded state, as presented in [4]. A state is bounded if it cannot be reached from a Büchi state that lies in a nontrivial strongly connected component. This can be detected in several ways, although it is not an intrinsically costly operation.

4.3 Vectors and downsets (line 3)

The most basic data structure in the main algorithm is that of a vector used to give a value to the states. We provide a handful of different vector classes:

  • Standard C++ vector and array types (std::vector, std::array). Note that arrays are of fixed size; our implementation precompiles arrays of different sizes (up to \(300\) by default), and defaults to vectors if more entries are needed.

  • Vectors and arrays backed by SIMD (single instruction, multiple data) registers. This makes use of the type std::experimental::simd and leverages modern CPU optimizations.

Additionally, all these implementations can be glued to an array of booleans (std::bitset) to provide a type that combines boolean and integer values. These types can optionally expose an integer that is compatible with the partial order (here, the sum of all the elements in the vector: if \(\vec {u}\) dominates \(\vec {v}\), then the sum of the elements in \(\vec {u}\) is at least that of \(\vec {v}\)). This value can help the downset implementations in sorting the vectors.

Downset types are built on top of a vector type. We provide:

  • Implementations using sets or vectors of vectors, either containing only the dominating vectors, or containing explicitly all the vectors;

  • An implementation that relies on \(k\)-d trees, a space-partitioning data structure for organizing points in a \(k\)-dimensional space [3];

  • Implementations that store the vectors in specific bins depending on the information exposed by the vector type.

4.4 Selecting sufficient inputs (line 5)

Recall our discussion on sufficient inputs of Section 3.3. We introduce the notion of terminal IO following the intuition that there is no restriction of the IO that would lead to a more specific action:

Definition 2

An IO \(x\) is said to be terminal if for every compatible IO \(y\), we have \(\text {io-act} (x) \subseteq \text {io-act} (y)\). An input \(i\) is said to be terminal if for every compatible input \(j\) we have \(\text {i-act} (i) \subseteq \text {i-act} (j)\).

Our approaches to input selection focus on efficiently searching for a sufficient set of terminal IOs and inputs. The key property of terminal inputs is that they are automatically valid, while still being more general than pure inputs.

Proposition 1

Any pure IO and any pure input is terminal. Any terminal IO and any terminal input is valid.


Proof. Any pure IO is terminal.   Consider a pure IO \(x\) and a compatible IO \(y\). If \((p, q) \in \text {io-act} (x)\), then there is a transition \((p, z, q) \in \delta \) such that \(x\) is compatible with \(z\), and thus \(x \wedge z = x\). Consequently, \(x \wedge z \wedge y = x \wedge y \ne \bot \), hence \(y\) and \(z\) are compatible and \((p, q) \in \text {io-act} (y)\). This shows that \(\text {io-act} (x) \subseteq \text {io-act} (y)\) and that \(x\) is terminal.

Any pure input is terminal.   Consider now a pure input \(i\) and a compatible input \(j\). Let \(\text {io-act} (x) \in \text {i-act} (i)\). It holds that \(x\) is compatible with \(i\), hence \(i \wedge x \ne \bot \). Since \(i\) is pure, \(i \wedge j = i\), thus \(i \wedge j \wedge x \ne \bot \), and \(x\) is also compatible with \(j\), implying that \(\text {io-act} (x) \in \text {i-act} (j)\). This shows that \(\text {i-act} (i) \subseteq \text {i-act} (j)\) and that \(i\) is terminal.

Any terminal IO and input is valid.   We prove the case for inputs, the IO case being similar. Let \(i\) be a terminal input and \(j\) a compatible pure input (at least one exists); then \(\text {i-act} (i) \subseteq \text {i-act} (j)\). Since \(j\) is pure, it is also terminal, hence \(\text {i-act} (j) \subseteq \text {i-act} (i)\). Hence \(\text {i-act} (i) = \text {i-act} (j)\) and \(i\) is valid.   \(\square \)

We present a simple algorithm for computing a sufficient set of terminal IOs. This is done by iteratively refining a set \(P\) of terminal IOs, starting by assuming that \(\{\top \}\) is such a set and using any counterexample to split the IOs:

[Algorithm 2: computing a sufficient set of terminal IOs by iterative refinement.]

We provide three implementations of input selection:

  • No precomputation, i.e., return pure inputs/IOs;

  • Applying Algorithm 2 twice: once for IOs and once for inputs;

  • Using a pure BDD approach for the previous algorithm; this relies on extra variables to have the loop “for every element \(y\) in \(P\)” iterate only over elements \(y\) that satisfy \(x \wedge y \ne \bot \).

4.5 Precomputing actions (line 6)

Since computing \(\textrm{CPre} _i\) for an input \(i\) requires going through \(\text {i-act} (i)\), possibly going back to the automaton and iterating through all transitions, it may be beneficial to precompute this set. We provide this step as an optional optimization that is intertwined with the computation of a sufficient set of IOs; for instance, rather than iterating through labels in Algorithm 2, one could iterate through all transitions, and store the set of transitions that are compatible with each terminal IO on the fly.

4.6 Main loop: Picking input-actions (line 8)

We provide several implementations of the input-action picker:

  • Return each input-action in turn, until no change has occurred to \(S\) while going through all possible input-actions;

  • Search for an input-action that is certain to change \(S\). This is based on the concept of critical input as presented in [4]. It is reliant on how the input-actions themselves are ordered, so we provide multiple options (using a priority queue to prefer inputs that were recently returned, randomizing part of the array of input-actions, or randomizing the whole array).

4.7 When are we done?

The main algorithm answers either “yes, the formula is realizable” or “don’t know.” Indeed, for the value of \(k\) to provide an exact answer, it has to be so large that reaching a fixed point in the computation becomes impossible in practice. However, it is not necessary to restart the whole algorithm with larger values of \(k\) in order to converge towards the correct answer: one can simply increase all the components of all the vectors in \(S\) (our main set) and go back to the main loop. There are thus two parameters that can be adjusted: the starting value of \(k\) and the increment applied to \(S\) each time the loop is restarted.

5 Checking unrealizability of LTL specifications

As mentioned in the preliminaries, for large values of \(k\) the BackwardRealizability problem is equivalent to deciding the winner of a zero-sum game whose payoff set is the complement of the language of the given automaton. More precisely, for small values of \(k\), a negative answer to the BackwardRealizability problem does not imply that the output player does not win the game. Instead, if one is interested in whether the output player wins, a property known as determinacy [23] can be leveraged to instead ask whether a complementary property holds: does the input player win the game?

We thus need to build an automaton \(\mathcal {B}\) for which a positive answer to BackwardRealizability translates to the previous property. To do so, we can consider the negation of the input formula, \(\lnot \phi \), and invert the roles of the players, that is, swap the inputs and outputs. However, to make sure the semantics of the game is preserved, we also need to have the input player play first, and the output player react to the input player’s move. To do so, we simply need to have the outputs moved one step forward (in the future, in the LTL sense). This can be done directly on the input formula, by putting an \(X\) (neXt) operator on each output. This can, however, make the formula much more complex.

We propose an alternative to this: obtain the automaton for \(\lnot \phi \), then push the outputs one state forward. This means that a transition \((p, \langle i, o \rangle , q)\) is translated to a transition \((p, i, q)\), and the output \(o\) should be fired from \(q\). In practice, we would need to remember that output, and this would require the construction to consider every state \((q, o)\), increasing the number of states tremendously. Our algorithm for this task (Algorithm 3), however, tries to minimize the number of states \((q, o)\) necessary by considering nonpure outputs that maximally correspond to a pure input compatible with the original transition label.

[Algorithm 3: pushing the outputs of the automaton for \(\lnot \phi \) one state forward.]

6 Benchmarks

6.1 Protocol

For the past few years, the yardstick of performance for synthesis tools has been the SYNTCOMP competition [19]. The organizers provide a bank of nearly a thousand LTL formulas, and candidate tools are run with a time limit of one hour on each of them. The tool that solves the most instances in this timeframe wins the competition.

To benchmark our tool, we relied on the 930 LTL formulas that were used in the 2021 SYNTCOMP competition, of which about 60% are realizable. Notably, 864 of these tests were solved in less than 20 seconds by some tool during the competition, and among the 66 remaining tests, 50 were not solved by any tool. This showcases a usual trend of synthesis tools: either they solve an instance fast, or they are unlikely to solve it at all. To better focus on the fine performance differences between the tools, we set a timeout of 60 seconds for all tests.

We compared Acacia-Bonsai against itself using different choices of options, and against Acacia+ [5], Strix [24], and ltlsynt [9, 25]. The benchmarks were completed on a Linux computer with the following specifications:

  • CPU: Intel® Core™ i7-8700 CPU @ 3.20GHz. This CPU has 6 hyperthreaded cores, meaning that 12 threads can run concurrently. It supports Intel® AVX2, meaning that it has SIMD registers of up to 256 bits.

  • Memory: The CPU has 12 MiB of cache, the computer has 16 GiB of DDR4-2666 RAM.

We present some of these results in the form of survival plots (also called cactus plots). These indicate how many instances can be solved within a set time budget, where the budget applies to each instance separately. As a rule of thumb, the lower the curve, the better. Since the tools tend to solve a lot of instances under one second, we elected to present these graphics with a logarithmic y-axis.

6.2 Results

The options of Acacia-Bonsai. We compared 25 different configurations of Acacia-Bonsai, in order to single out the best combination of options. We elected to start with some sensible defaults and test each parameter by diverging from the defaults by a single option each time.

  • Preprocessing of the automaton (Section 4.1). This has little impact, although a handful of tests saw an important boost. Overall, the performance was slightly worse with automaton preprocessing, owing to the cost of computing the surely losing states. We elected to deactivate this option in our best configuration, as this allowed four more tests to pass.

  • Boolean states (Section 4.2). When activated, this step allowed solving about 5% more tests globally.

  • Vectors and downsets (Section 4.3). Despite a wealth of different implementations, only the \(k\)-d tree implementation really stands out, in that it solves 5% fewer tests than the rest. The impact of using SIMD vectors and tailoring downset algorithms to leverage SIMD operations appears to be minimal. This is likely caused by two factors: 1. the increasing ability of modern compilers to automatically identify where SIMD instructions can benefit performance; 2. the relative uselessness of pointwise vector operations in the task at hand.

  • Precomputing a sufficient set of inputs and IO (Section 4.4). Computing that set using Algorithm 2 turned out to offer the best performance, solving 23 more tests than using the pure inputs/IOs. The pure BDD approach for this step was slightly more costly.

  • Picking input-actions (Section 4.6). The approaches performed equivalently, with a slight edge for the choice of critical inputs without randomization or a priority queue.

  • Initial value and increments of \(k\) (Section 4.7). We compared several combinations, which had little impact on overall performance, with the best one solving 3 more tests than the worst.

  • Unrealizability (Section 5). The following figure shows how the formula-based and the automaton-based approaches to unrealizability compare. We only show the unrealizable tests and add the configuration we use in practice: start two threads, one for each option, and stop as soon as one returns.

    Despite the automaton-based approach showing better overall results, we note that this approach produces a larger automaton than the formula-based approach in about 99.5% of the tests. Additionally, the automaton-based approach offers better performance even when looking at the running time without the formula-to-automaton part of the process. This seems to indicate that the automaton that is produced is somewhat simpler for the main algorithm.

Fig. 2. Reducing unrealizability to realizability. Timeout set at 20 seconds.

Acacia-Bonsai and foes. The following plot shows the performance of the tools together. Within our parameters, Acacia-Bonsai solves 699 tests, while Acacia+ solves 560, ltlsynt 703, and Strix 770.

Fig. 3. Survival plot for SYNTCOMP tools and Acacia-Bonsai.

Instances solved by one tool but not the other. To better understand the intrinsic algorithmic competitiveness of the different tools, we study which instances were solved by our tool but not the others, and conversely:

  • ltlsynt.   This tool solves 4 more instances than Acacia-Bonsai overall. It solves 61 instances on which Acacia-Bonsai times out, with less than a third of them being unrealizable instances. It would be interesting to implement, within ltlsynt, the unrealizability techniques we describe in Section 5.

  • Strix.   This tool solves 71 more instances than Acacia-Bonsai overall. It solves 124 instances on which Acacia-Bonsai times out, 58% of which are unrealizable. For 90% of these 124 instances, Strix answers in less than 2 seconds. Conversely, of the instances on which Acacia-Bonsai answers while Strix times out, three quarters are solved within two seconds. This naturally hints at the possibility of combining the approaches of the two tools, using parallelization.

7 Conclusion

We provided multiple degrees of freedom in the main algorithm for downset-based LTL realizability and implemented options for each of these degrees. In this paper, we presented the main ideas behind these. Experiments show that this careful reimplementation surpasses the performance of the original Acacia+, making Acacia-Bonsai competitive against modern LTL realizability tools. Along with implementing some optimizations present in previous implementations, we introduced several new ones: reduction of the input-output alphabet, alternative antichain data structures, different strategies for input-picking, and constructing a “shifted automaton” to test unrealizability.

A somewhat disappointing conclusion of our experiments concerns code that makes explicit use of SIMD registers, i.e., large CPU registers that support pointwise vector operations. Our experiments indicate that downset-based algorithms and downset data structures are not able to take full advantage of SIMD. In the future, we plan on investigating data structures for downsets that delay some of their computations in order to better leverage vectorized operations. Such a data structure would not provide better theoretical performance, but could potentially outperform our other data structures.

One surprise that prompts for further investigation is brought by our approach to unrealizability (Section 5): we provided two options for processing the input LTL formula into an automaton that expresses a realizable game iff the original formula was unrealizable. Although one option consistently produces larger automata than the other, it appears that the downset-based realizability algorithm performs better on the larger automata. A close study of the resulting automata may help in identifying salient features of automata that are easier for the Acacia algorithm.

Lastly, we should note that this reimplementation of Acacia+ is not complete, as a few options of Acacia+ have not yet been included in Acacia-Bonsai. One such option consists in decomposing LTL formulas that are conjunctions of subformulas into smaller instances of the realizability problem. We plan on implementing this before the next edition of SYNTCOMP.