
1 Introduction

Nondeterministic Büchi automata (BAs) [8] are an elegant and conceptually simple framework to model infinite behaviors of systems and the properties they are expected to satisfy. BAs are widely used in many important verification tasks, such as termination analysis of programs [30], model checking [54], or as the underlying formal model of decision procedures for some logics (such as S1S [8] or a fragment of the first-order logic over Sturmian words [31]). Many of these applications require performing complementation of BAs: for instance, in termination analysis of programs within Ultimate Automizer [30], complementation is used to keep track of the set of paths whose termination still needs to be proved. On the other hand, in model checking and decision procedures of logics, complement is usually used to implement negation and quantifier alternation. Complementation is often the most difficult automata operation performed here; its worst-case state complexity is \(\mathcal {O}((0.76n)^n)\) [2, 48] (which is tight [55]).

In these applications, the efficiency of complementation often determines the overall efficiency (or even feasibility) of the top-level application. For instance, the success of Ultimate Automizer in the Termination category of the International Competition on Software Verification (SV-COMP) [51] is to a large degree due to an efficient BA complementation algorithm [6, 11] tailored for BAs with a special structure that it often encounters (as of the time of writing, it has won 6 gold medals in the years 2017–2022 and two silver medals in 2015 and 2016). The special structure in this case is that of the so-called semi-deterministic BAs (SDBAs): BAs consisting of two parts: (i) an initial part without accepting states/transitions and (ii) a deterministic part containing accepting states/transitions that cannot transition into the first part.

Complementation of SDBAs using one from the family of the so-called NCSB algorithms [5, 6, 11, 28] has the worst-case complexity \(\mathcal {O}(4^n)\) (and usually also works much better in practice than general BA complementation procedures). Similarly, there are efficient complementation procedures for other subclasses of BAs, e.g., (i) deterministic BAs (DBAs) can be complemented into BAs with 2n states [35] (or into co-Büchi automata with \(n+1\) states) or (ii) inherently weak BAs (BAs where in each strongly connected component (SCC), either all cycles are accepting or all cycles are rejecting) can be complemented into DBAs with \(\mathcal {O}(3^n)\) states using the Miyano-Hayashi algorithm [42].

For a long time, there was no efficient algorithm for complementation of BAs that are highly structured but do not fall into one of the categories above, e.g., BAs containing inherently weak, deterministic, and some nondeterministic SCCs. For such BAs, one needed to use a general complementation algorithm with the \(\mathcal {O}((0.76n)^n)\) (or worse) complexity. To the best of our knowledge, only recently have works appeared that exploit the structure of BAs to obtain a more efficient complementation algorithm: (i) The work of Havlena et al. [29], who introduce the class of elevator automata (BAs with an arbitrary mixture of inherently weak and deterministic SCCs) and give an \(\mathcal {O}(16^n)\) algorithm for them. (ii) The work of Li et al. [37], who propose a BA determinization procedure (into a deterministic Emerson-Lei automaton) that is based on decomposing the input BA into SCCs and using a different determinization procedure for each type of SCC (inherently weak, deterministic, general) in a synchronous construction.

In this paper, we propose a new BA complementation algorithm inspired by [37], which exploits the fact that complementation is, in a sense, more relaxed than determinization. In particular, we present a framework where one can plug in different partial complementation procedures fine-tuned for SCCs with a specific structure. The procedures work only with the given SCCs, to some degree independently of the rest of the BA (thus reducing the potential state space explosion). Our top-level algorithm then orchestrates runs of the different procedures in a synchronous manner (or completely independently in the so-called postponed strategy), obtaining a resulting automaton with a potentially more general acceptance condition (in general an Emerson-Lei condition), which can help keep the result small. If the procedures satisfy given correctness requirements, our framework guarantees that its instantiation will also be correct. We also propose optimizations of the framework, e.g., using round-robin to decrease the amount of nondeterminism, using a shared breakpoint to reduce the size and the number of colours for a certain class of partial algorithms, and generalizing simulation-based pruning of macrostates.

We provide a detailed description of partial complementation procedures for inherently weak, deterministic, and initial deterministic SCCs, which we use to obtain a new, exponentially better upper bound of \(\mathcal {O}(4^n)\) for the class of elevator automata (i.e., the same upper bound as for its strict subclass of SDBAs). Furthermore, we also provide two partial procedures for general SCCs based on determinization (from [37]) and the rank-based construction. Using a prototype implementation, we then show that our algorithm complements existing approaches well and significantly improves the state of the art.

2 Preliminaries

We fix a finite non-empty alphabet \(\Sigma \) and the first infinite ordinal \(\omega \). An (infinite) word \(w\) is a function \(w:\omega \rightarrow \Sigma \) where the i-th symbol is denoted as \(w_{i}\). Sometimes, we represent \(w\) as an infinite sequence \(w= w_{0} w_{1} \dots \) We denote the set of all infinite words over \(\Sigma \) as \(\Sigma ^\omega \); an \(\omega \)-language is a subset of \(\Sigma ^\omega \).

Emerson-Lei Acceptance Conditions. Given a set \(\Gamma = \{0, \ldots , k -1\}\) of k colours (often depicted as coloured marks), we define the set of Emerson-Lei acceptance conditions \(\mathbb{E}\mathbb{L}(\Gamma )\) as the set of formulae constructed according to the following grammar:

$$\begin{aligned} \alpha \,\,{:}{:}\!= \textsf{Inf}(c) \mid \textsf{Fin}(c) \mid (\alpha \wedge \alpha ) \mid (\alpha \vee \alpha ) \end{aligned}$$

for \(c \in \Gamma \). The satisfaction relation \(\models \) for a set of colours \(M \subseteq \Gamma \) and condition \(\alpha \) is defined inductively as follows (for \(c \in \Gamma \)):

$$\begin{aligned} M \models \textsf{Fin}(c)&\text { iff } c \notin M,&M \models \alpha _1 \vee \alpha _2&\text {~ iff ~} M \models \alpha _1 \text { or } M \models \alpha _2, \\ M \models \textsf{Inf}(c)&\text { iff } c \in M,&M \models \alpha _1 \wedge \alpha _2&\text {~ iff ~} M \models \alpha _1 \text { and } M \models \alpha _2. \end{aligned}$$
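The satisfaction relation can be implemented directly from the inductive definition. The following is a small Python sketch of ours (not part of the paper), representing conditions as nested tuples:

```python
# Constructors for EL conditions over integer colours from Gamma.
def Inf(c): return ("Inf", c)
def Fin(c): return ("Fin", c)
def And(a, b): return ("And", a, b)
def Or(a, b): return ("Or", a, b)

def models(M, alpha):
    """Return True iff the set of colours M satisfies the EL condition alpha."""
    tag = alpha[0]
    if tag == "Inf":                 # M |= Inf(c) iff c in M
        return alpha[1] in M
    if tag == "Fin":                 # M |= Fin(c) iff c not in M
        return alpha[1] not in M
    if tag == "And":
        return models(M, alpha[1]) and models(M, alpha[2])
    if tag == "Or":
        return models(M, alpha[1]) or models(M, alpha[2])
    raise ValueError(f"unknown condition: {alpha!r}")

# Example: a Rabin-like pair Fin(0) AND Inf(1).
acc = And(Fin(0), Inf(1))
```

For instance, `models({1}, acc)` holds, while `models({0, 1}, acc)` does not, since colour 0 must not occur infinitely often.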

Emerson-Lei Automata. A (nondeterministic transition-based) Emerson-Lei automaton (TELA) over \(\Sigma \) is a tuple \(\mathcal {A}= (Q, \delta , I, \Gamma , \textsf{p}, \textsf{Acc})\), where \(Q\) is a finite set of states, \(\delta \subseteq Q\times \Sigma \times Q\) is a set of transitions, \(I\subseteq Q\) is the set of initial states, \(\Gamma \) is the set of colours, \(\textsf{p}:\delta \rightarrow 2^{\Gamma }\) is a colouring function of transitions, and \(\textsf{Acc}\in \mathbb{E}\mathbb{L}(\Gamma )\). We use \(p \overset{a}{\rightarrow } q\) to denote that \((p,a,q) \in \delta \) and sometimes also treat \(\delta \) as a function \(\delta :Q\times \Sigma \rightarrow 2^{Q}\). Moreover, we extend \(\delta \) to sets of states \(P \subseteq Q\) as \(\delta (P, a) = \bigcup _{p \in P} \delta (p,a)\). We use \(\mathcal {A}{}[q]\) for \(q \in Q\) to denote the automaton \(\mathcal {A}{}[q] = (Q, \delta , \{q\}, \Gamma , \textsf{p}, \textsf{Acc})\), i.e., the TELA obtained from \(\mathcal {A}\) by setting q as the only initial state. \(\mathcal {A}\) is called deterministic if \(|I|\le 1\) and \(|\delta (q,a)|\le 1\) for each \(q\in Q\) and \(a \in \Sigma \). If \(\Gamma = \{0\}\) and \(\textsf{Acc}= \textsf{Inf}(0)\), we call \(\mathcal {A}\) a Büchi automaton (BA) and denote it as \(\mathcal {A}= (Q, \delta , I, F)\) where \(F\) is the set of all transitions coloured by 0, i.e., \(F = \{t \in \delta \mid \textsf{p}(t) = \{0\}\}\). For a BA, we use \(\delta _{F}(p, a) = \{q \in \delta (p,a) \mid (p \overset{a}{\rightarrow } q) \in F\}\) (and extend the notation to sets of states as for \(\delta \)). A BA \(\mathcal {A}= (Q, \delta , I, F)\) is called semi-deterministic (SDBA) if for every accepting transition \((p \overset{a}{\rightarrow } q) \in F\), the reachable part of \(\mathcal {A}{}[q]\) is deterministic.

A run of \(\mathcal {A}\) from \(q \in Q\) on an input word \(w\) is an infinite sequence \(\rho :\omega \rightarrow Q\) that starts in q and respects \(\delta \), i.e., \(\rho _0 = q\) and \(\forall i \ge 0:(\rho _i \overset{w_{i}}{\rightarrow }\rho _{i+1}) \in \delta \). Let \(\textrm{inf}_{\delta }(\rho )\subseteq \delta \) denote the set of transitions occurring in \(\rho \) infinitely often and \(\textrm{inf}_{\Gamma }(\rho )= \bigcup \{\textsf{p}(x) \mid x \in \textrm{inf}_{\delta }(\rho )\}\) be the set of infinitely often occurring colours. A run \(\rho \) is accepting in \(\mathcal {A}\) iff \(\textrm{inf}_{\Gamma }(\rho )\models \textsf{Acc}\) and the language of \(\mathcal {A}\), denoted as \(\mathcal {L}(\mathcal {A})\), is defined as the set of words \(w \in \Sigma ^\omega \) for which there exists an accepting run in \(\mathcal {A}\) starting with some state in \(I\).

Consider a BA \(\mathcal {A}= (Q, \delta , I, F)\). For a set of states \(S\subseteq Q\), we use \(\mathcal {A}_S\) to denote the copy of \(\mathcal {A}\) where accepting transitions only occur between states from S, i.e., the BA \((Q, \delta , I, F\cap (S\times \Sigma \times S))\). We say that a non-empty set of states \(C \subseteq Q\) is a strongly connected component (SCC) if every pair of states of C can reach each other and C is a maximal such set. An SCC of \(\mathcal {A}\) is trivial if it consists of a single state without a self-loop, and non-trivial otherwise. An SCC C is accepting if it contains at least one accepting transition and inherently weak iff either (i) every cycle in C contains a transition from \(F\) or (ii) no cycle in C contains any transition from \(F\). An SCC C is deterministic iff for every \(q \in C\), the restriction of \(\mathcal {A}\) to C with the single initial state q is deterministic. We denote inherently weak components as IWCs, accepting deterministic components that are not inherently weak as DACs (deterministic accepting), and the remaining accepting components as NACs (nondeterministic accepting). A BA \(\mathcal {A}\) is called an elevator automaton if it contains no NAC.
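To make the SCC-based notions concrete, here is a self-contained Python sketch (our illustration, not from the paper) that computes SCCs and labels the accepting ones as IWCs, DACs, or NACs. Transitions are encoded as triples (p, a, q), and F is the set of accepting transitions; the inherent-weakness test uses the observation that every cycle of an SCC visits F iff the SCC becomes acyclic after removing its F-transitions.

```python
from collections import defaultdict

def sccs(states, delta):
    """Kosaraju's algorithm: return the SCCs of (states, delta) as frozensets."""
    succ, pred = defaultdict(set), defaultdict(set)
    for p, _, q in delta:
        succ[p].add(q)
        pred[q].add(p)
    order, seen = [], set()
    for s in states:                       # 1st pass: record finishing order
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(succ[s]))]
        while stack:
            v, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(succ[w])))
                    break
            else:
                order.append(v)
                stack.pop()
    comps, seen = [], set()
    for s in reversed(order):              # 2nd pass: DFS on reversed edges
        if s in seen:
            continue
        comp, stack = set(), [s]
        seen.add(s)
        while stack:
            v = stack.pop()
            comp.add(v)
            for w in pred[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        comps.append(frozenset(comp))
    return comps

def acyclic(C, edges):
    """Kahn's algorithm: is the subgraph on C with the given edges acyclic?"""
    succ, indeg = defaultdict(set), {q: 0 for q in C}
    for p, _, q in edges:
        if q not in succ[p]:
            succ[p].add(q)
            indeg[q] += 1
    queue, removed = [q for q in C if indeg[q] == 0], 0
    while queue:
        v = queue.pop()
        removed += 1
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return removed == len(C)

def classify(states, delta, F):
    """Label each accepting SCC as 'IWC', 'DAC', or 'NAC'."""
    labels = {}
    for C in sccs(states, delta):
        inner = {t for t in delta if t[0] in C and t[2] in C}
        if not inner or not (inner & F):   # trivial or non-accepting SCC
            continue
        out = defaultdict(set)
        for p, a, q in inner:
            out[(p, a)].add(q)
        if acyclic(C, inner - F):          # every cycle visits F
            labels[C] = "IWC"
        elif all(len(v) <= 1 for v in out.values()):
            labels[C] = "DAC"
        else:
            labels[C] = "NAC"
    return labels
```

On a small elevator-like BA (an accepting self-loop, a deterministic two-state component with both accepting and rejecting cycles, and a nondeterministic component), this yields one IWC, one DAC, and one NAC, respectively.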

We assume that \(\mathcal {A}\) contains no accepting transition outside its SCCs (no run can cycle over such transitions). We use \(\delta _{\textrm{SCC}}\) to denote the restriction of \(\delta \) to transitions that do not leave their SCCs, formally, \(\delta _{\textrm{SCC}}= \{p \overset{a}{\rightarrow } q \in \delta \mid p \text { and } q \text { are in the same SCC}\}\). A partition block \(P \subseteq Q\) of \(\mathcal {A}\) is a nonempty union of its accepting SCCs, and a partitioning of \(\mathcal {A}\) is a sequence \(P_1, \ldots , P_n\) of pairwise disjoint partition blocks of \(\mathcal {A}\) that contains all accepting SCCs of \(\mathcal {A}\). Given a \(P_i\), let \(\mathcal {A}_{P_i}\) be the BA obtained from \(\mathcal {A}\) by removing colours from transitions outside \(P_i\). The following fact serves as the basis of our decomposition-based complementation procedure.

Fact 1

\(\mathcal {L}(\mathcal {A})= \mathcal {L}(\mathcal {A}_{P_1}) \cup \ldots \cup \mathcal {L}(\mathcal {A}_{P_n})\)

The complement (automaton) of a BA \(\mathcal {A}\) is a TELA that accepts the complement language \(\Sigma ^\omega \setminus \mathcal {L}(\mathcal {A})\) of \(\mathcal {L}(\mathcal {A})\). In this paper, we call a state and a run of a complement automaton a macrostate and a macrorun, respectively.

3 A Modular Complementation Algorithm

In a nutshell, the main idea of our BA complementation algorithm is to first decompose a BA \(\mathcal {A}\) into several partition blocks according to their properties, and then perform complementation for each of the partition blocks (potentially using a different algorithm) independently, using either a synchronous construction, which synchronizes the complementation algorithms for all partition blocks in each step, or a postponed construction, which complements the partition blocks independently and combines the partial results using an automata product construction. The decomposition of \(\mathcal {A}\) into partition blocks can either be trivial, i.e., with one block for each accepting SCC, or more elaborate, e.g., a partitioning where one partition block contains all accepting IWCs, another contains all DACs, and each NAC is given its own partition block. In this way, one can avoid running a general complementation algorithm for unrestricted BAs with the state complexity upper bound \(\mathcal {O}((0.76n)^n)\) and, instead, apply the most suitable complementation procedure for each of the partition blocks. This comes with three main advantages:

  1.

    The complementation algorithm for each partition block can be selected differently in order to exploit the properties of the block. For instance, for partition blocks with IWCs, one can use complementation based on the breakpoint (the so-called Miyano-Hayashi) construction [42] with \(\mathcal {O}(3^n)\) macrostates (cf. Sec. 4.1), while for partition blocks with only DACs, one can use an algorithm with the state complexity \(\mathcal {O}(4^n)\) based on an adaptation of the NCSB construction [5, 6, 11, 28] for SDBAs (cf. Sec. 4.2). For NACs, one can choose between, e.g., rank- [10, 21, 24, 29, 34, 48] or determinization-based [43, 45, 46] algorithms, depending on the properties of the NACs (cf. Sec. 6).

  2.

    The different complementation algorithms can focus only on the respective blocks and do not need to consider other parts of the BA. This is advantageous, e.g., for rank-based algorithms, which can use this restriction to obtain tighter bounds on the considered ranks (even tighter than using the refinement in [29]).

  3.

    The obtained automaton can be more compact due to the use of a more general acceptance condition than Büchi [47]: in general, it can be a conjunction of any \(\mathbb{E}\mathbb{L}\) conditions (one condition for each partition block), depending on the output of the complementation procedures; using such a mixture of conditions can allow a more compact encoding of the produced automaton. E.g., a deterministic BA can be complemented with a constant number of extra states when using a co-Büchi condition, as opposed to a linear number of extra states for a Büchi condition (see Sec. 5.1).

Those partial complementation algorithms then need to be orchestrated by a top-level algorithm to produce the complement of \(\mathcal {A}\).

One might regard our algorithm as an optimization of an approach that would for each partition block P obtain a BA \(\mathcal {A}_P\), complement \(\mathcal {A}_P\) using the selected algorithm, and perform the intersection of all obtained \(\mathcal {A}_P\)’s (which would, however, not be able to get the upper bound for elevator automata that we give in Sec. 4.3). Indeed, we also implemented the mentioned procedure (called the postponed approach, described in Sec. 5.2) and compared it to our main procedure (called the synchronous approach).

3.1 Basic Synchronous Algorithm

In this section, we describe the basic synchronous top-level algorithm. Then, in Sec. 4, we provide its instantiation for elevator automata and give a new upper bound for their complementation; in Sec. 5, we discuss several optimizations of the algorithm; and in Sec. 6, we give a generalization for unrestricted BAs. Let us fix a BA \(\mathcal {A}= (Q, \delta , I, F)\) and, w.l.o.g., assume that \(\mathcal {A}\) is complete, i.e., \(|I| > 0\) and all states \(q \in Q\) have an outgoing transition over all symbols \(a \in \Sigma \).
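The completeness assumption is without loss of generality: one can always add a fresh non-accepting sink state and direct all missing transitions to it, which does not change the language. A minimal sketch of this standard transformation (the name sink is ours):

```python
def complete(states, delta, alphabet, sink="sink"):
    """Make (states, delta) complete by adding a non-accepting sink state.
    Transitions are triples (p, a, q); the language is unchanged because no
    accepting transition is added."""
    delta = set(delta)
    defined = {(p, a) for (p, a, _) in delta}
    new_states = set(states) | {sink}
    for p in new_states:
        for a in alphabet:
            if (p, a) not in defined:     # missing transition: route to sink
                delta.add((p, a, sink))
    return new_states, delta
```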

The synchronous algorithm works with partial complementation algorithms for BA’s partition blocks. Each such algorithm \(\texttt{Alg}\) is provided with a structural condition \(\varphi _\texttt{Alg}\) characterizing partition blocks it can complement. For a BA \(\mathcal {B}\), we use the notation \(\mathcal {B}\models \varphi \) to denote that \(\mathcal {B}\) satisfies the condition \(\varphi \). We say that \(\texttt{Alg}\) is a partial complementation algorithm for a partition block P if \(\mathcal {A}_P \models \varphi _\texttt{Alg}\). We distinguish between \(\texttt{Alg}\), a general algorithm able to complement a partition block of a given type, and \(\texttt{Alg}_{P}\), its instantiation for the partition block P. Each instance \(\texttt{Alg}_P\) is required to provide the following:

  • \(\texttt{T}^{\texttt{Alg}_P}\) — the type of the macrostates produced by the algorithm;

  • \(\texttt{Colours}^{\texttt{Alg}_P} = \{0, \ldots , k^{\texttt{Alg}_P}-1\}\) — the set of used colours;

  • \(\texttt{Init}^{\texttt{Alg}_P} \in 2^{\texttt{T}^{\texttt{Alg}_P}}\) — the set of initial macrostates;

  • \(\texttt{Succ}^{\texttt{Alg}_P}:(2^Q\times \texttt{T}^{\texttt{Alg}_P} \times \Sigma ) \rightarrow 2^{\texttt{T}^{\texttt{Alg}_P} \times \texttt{Colours}^{\texttt{Alg}_P}}\) — a function returning the successors of a macrostate such that \(\texttt{Succ}^{\texttt{Alg}_P}(H, M, a) = \{(M_1, \alpha _1), \ldots , (M_k, \alpha _k)\}\), where H is the set of all states of \(\mathcal {A}\) reached over the same word, M is the \(\texttt{Alg}_P\)’s macrostate for the given partition block, a is the input symbol, and each \((M_i, \alpha _i)\) is a pair (macrostate, set of colours) such that \(M_i\) is a successor of M over a w.r.t. H and \(\alpha _i\) is a set of colours on the edge from M to \(M_i\) (H helps to keep track of new runs coming into the partition block); and

  • \(\texttt{Acc}^{\texttt{Alg}_P}\in \mathbb{E}\mathbb{L}(\texttt{Colours}^{\texttt{Alg}_P})\) — the acceptance condition.

Let \(P_1, \ldots , P_n\) be a partitioning of \(\mathcal {A}\) (w.l.o.g., we assume that \(n > 0\)), and \(\texttt{Alg}^1, \ldots , \texttt{Alg}^n\) be a sequence of algorithms such that \(\texttt{Alg}^i\) is a partial complementation algorithm for \(P_i\). Furthermore, let us define the following auxiliary renumbering function \(\lambda \) as \(\lambda (c, j) = c + \sum _{i=1}^{j-1} |\texttt{Colours}^{\texttt{Alg}^i_{P_i}}|\), which is used to make the colours and acceptance conditions from the partial complementation algorithms disjoint. We also lift \(\lambda \) to sets of colours in the natural way, and also to \(\mathbb{E}\mathbb{L}\) conditions such that \(\lambda (\varphi , j)\) has the same structure as \(\varphi \) but each atom \(\textsf{Inf}(c)\) is substituted with the atom \(\textsf{Inf}(\lambda (c, j))\) (and likewise for \(\textsf{Fin}\) atoms). The synchronous complementation algorithm then produces the TELA \(\textsc {ModCompl}(\texttt{Alg}^1_{P_1}, \dots , \texttt{Alg}^n_{P_n},\mathcal {A}) = (Q^{\mathcal {C}}, \delta ^{\mathcal {C}}, I^{\mathcal {C}}, \Gamma ^{\mathcal {C}}, \textsf{p}^{\mathcal {C}}, \textsf{Acc}^{\mathcal {C}})\) with components defined as follows (we use \([S_i]_{i=1}^n\) to abbreviate \(S_1 \times \cdots \times S_n\)):

[The formal definition of \(Q^{\mathcal {C}}, \delta ^{\mathcal {C}}, I^{\mathcal {C}}, \Gamma ^{\mathcal {C}}, \textsf{p}^{\mathcal {C}}, \textsf{Acc}^{\mathcal {C}}\) is given as a figure in the original.]

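As an illustration (ours, not the paper's), the renumbering function \(\lambda \) can be computed as follows, where the colour counts of the partial algorithms are hypothetical:

```python
def renumber(c, j, colour_counts):
    """lambda(c, j): shift colour c of the j-th algorithm (1-based) past all
    colours used by the algorithms 1..j-1; colour_counts[i-1] is the number
    of colours of the i-th algorithm."""
    return c + sum(colour_counts[: j - 1])

# Hypothetical example: three partial algorithms using 1, 2, and 1 colours.
counts = [1, 2, 1]
```

With these counts, the second algorithm's colours 0 and 1 become 1 and 2, and the third algorithm's colour 0 becomes 3, so the global colour sets are disjoint.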

In order for \(\textsc {ModCompl}\) to be correct, the partial complementation algorithms need to satisfy certain properties, which we discuss below.

For a structural condition \(\varphi \) and a BA \(\mathcal {B}= (Q, \delta , I, F)\), we define \(\mathcal {B}\models _{P}\varphi \) iff \(\mathcal {B}\models \varphi \), P is a partition block of \(\mathcal {B}\), and \(\mathcal {B}\) contains no accepting transitions outside P. We can now provide the correctness condition on \(\texttt{Alg}\).

Definition 1

We say that \(\texttt{Alg}\) is correct if for each BA \(\mathcal {B}\) and partition block P such that \(\mathcal {B}\models _{P}\varphi _\texttt{Alg}\) it holds that \(\mathcal {L}(\textsc {ModCompl}(\texttt{Alg}_{P}, \mathcal {B})) = \Sigma ^\omega \setminus \mathcal {L}(\mathcal {B})\).

The correctness of the synchronous algorithm (provided that each partial complementation algorithm is correct) is then established by Theorem 1.

Theorem 1

Let \(\mathcal {A}\) be a BA, \(P_1, \ldots , P_n\) be a partitioning of \(\mathcal {A}\), and \(\texttt{Alg}^1, \ldots , \texttt{Alg}^n\) be a sequence of partial complementation algorithms such that \(\texttt{Alg}^i\) is correct for \(P_i\). Then, we have \(\mathcal {L}(\textsc {ModCompl}(\texttt{Alg}^1_{P_1}, \dots , \texttt{Alg}^n_{P_n},\mathcal {A})) = \Sigma ^\omega \setminus \mathcal {L}(\mathcal {A})\).

4 Modular Complementation of Elevator Automata

In this section, we first give partial algorithms to complement partition blocks with only accepting IWCs (Sec. 4.1) and partition blocks with only DACs (Sec. 4.2). Then, in Sec. 4.3, we show that using our algorithm, the upper bound on the size of the complement of elevator BAs is in \(\mathcal {O}(4^n)\), which is exponentially better than the known upper bound \(\mathcal {O}(16^n)\) established in [29].

4.1 Complementation of Inherently Weak Accepting Components

First, we introduce a partial algorithm \(\texttt{MH}\) with the condition \(\varphi _\texttt{MH}\) specifying that all SCCs in the partition block P are accepting IWCs. Let P be a partition block of \(\mathcal {A}\) such that \(\mathcal {A}_P \models \varphi _\texttt{MH}\). Our proposed approach makes use of the Miyano-Hayashi construction [42]. Since every run that stays in an accepting IWC is accepting, the idea of the construction is to accept exactly the words all of whose runs eventually leave P.

Therefore, we use a pair (C, B) of sets of states as a macrostate for complementing P. Intuitively, we use C to denote the set of all runs of \(\mathcal {A}\) that are in P (C for “check”). The set \(B\subseteq C\) represents the runs being inspected for whether they leave P at some point (B for “breakpoint”). Initially, we let \(C = I\cap P\) and also sample all runs in P into the breakpoint, i.e., set \(B = C\). While reading an \(\omega \)-word w, if all runs that have entered P eventually leave P, i.e., B becomes empty infinitely often, the complement language of P should contain w (when B becomes empty, we sample B with all runs from the current C). We formalize \(\texttt{MH}_P\) as a partial procedure in the framework from Sec. 3.1 as follows:

[The formal definition of \(\texttt{MH}_P\) is given as a figure in the original.]

We can see that checking whether w is accepted by the complement of P reduces to checking whether B is cleared infinitely often. Since every time B becomes empty, we emit the colour 0, we have that w is not accepted by \(\mathcal {A}\) within P if and only if the colour 0 occurs infinitely often. Note that the transition function \(\texttt{Succ}^{\texttt{MH}_P}\) is deterministic, i.e., there is exactly one successor.
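The successor computation just described can be sketched in Python as follows. This is only our reading of the construction (the authoritative definition is the paper's figure), with delta encoded as a map from (state, symbol) pairs to sets of successor states:

```python
def mh_succ(H, M, a, delta, P):
    """Deterministic successor of a macrostate M = (C, B) over symbol a;
    H is the set of all states of A reached over the same word, and the
    single colour of MH_P is 0. Initially, C = B = the initial states in P."""
    C, B = M
    step = lambda X: frozenset(q for p in X for q in delta.get((p, a), ())
                               if q in P)
    C2 = step(H)       # all runs of A that are inside P after reading a
    B2 = step(B)       # inspected runs that have not left P yet
    colours = frozenset()
    if not B2:         # breakpoint cleared: emit colour 0, resample from C2
        colours = frozenset({0})
        B2 = C2
    return {((C2, B2), colours)}
```

For example, over a symbol on which every tracked run leaves P, the breakpoint is cleared and colour 0 is emitted; over a symbol keeping a run inside P, no colour is produced.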

Lemma 1

The partial algorithm \(\texttt{MH}\) is correct.

4.2 Complementation of Deterministic Accepting Components

In this section, we give a partial algorithm \(\texttt{CSB}\) with the condition \(\varphi _\texttt{CSB}\) specifying that a partition block P consists of DACs. Let P be a partition block of \(\mathcal {A}\) such that \(\mathcal {A}_P \models \varphi _\texttt{CSB}\). Our approach is based on the NCSB family of algorithms [5, 6, 11, 28] for complementing SDBAs, in particular the NCSB-MaxRank construction [28]. The algorithm utilizes the fact that runs in DACs are deterministic, i.e., they do not branch into new runs. Therefore, one can check that a run is non-accepting by checking that there is a time point from which the run does not see accepting transitions any more. We call a run that no longer sees accepting transitions safe. Then, an \(\omega \)-word w is not accepted in P iff all runs over w in P either (i) leave P or (ii) eventually become safe.

For checking point (i), we can use a technique similar to the one in algorithm \(\texttt{MH}\), i.e., a pair (C, B). Moreover, to be able to check point (ii), we also use a set S that contains the runs that are supposed to be safe, resulting in macrostates of the form (C, S, B). To make sure that all tracked runs are deterministic, we will use \(\delta _{\textrm{SCC}}\) instead of \(\delta \) when computing the successors of S and B, since there may be nondeterministic jumps between different DACs in P; we will not miss any run in P because if a run moves between DACs of P, it can be seen as the run leaving P and a new run entering P. Since a run eventually stays in one SCC, this guarantees that the run will not be missed.

We formalize \(\texttt{CSB}_P\) in the top-level framework as follows:

[The formal definition of \(\texttt{CSB}_P\) is given as a figure in the original.]

Intuitively, when \(\delta _{F}(B, a) \cap \delta _{\textrm{SCC}}(B, a) = \emptyset \), we make the following guess: (i) either the runs in B all become safe (we move them to S) or (ii) there might be some unsafe runs (we keep them in B). Since the runs in B are deterministic, the number of tracked runs in B will not increase. Moreover, if all runs in B are eventually safe, we are guaranteed to move all of them to S at the right time point, e.g., the latest time point from which all runs are safe (which exists since the number of runs is finite).
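The guessing scheme above can be sketched as follows. This is only our reconstruction of the described intuition (the authoritative definition is the paper's figure), so the exact bookkeeping may differ; delta, delta_scc, and delta_f map (state, symbol) pairs to sets of successors, restricted as in the text:

```python
def csb_succ(H, M, a, delta, delta_scc, delta_f, P):
    """Successors of a macrostate M = (C, S, B); at most two branches."""
    C, S, B = M
    img = lambda d, X: frozenset(q for p in X for q in d.get((p, a), ())
                                 if q in P)
    # A run guessed safe must never take an accepting transition again.
    if img(delta_f, S) & img(delta_scc, S):
        return set()                       # wrong guess: kill this branch
    S2 = img(delta_scc, S)
    C2 = img(delta, H) - S2                # runs in P not marked safe
    succs = set()
    # Guess (ii): keep the tracked runs in the breakpoint.
    B2 = img(delta_scc, B) & C2
    if B2:
        succs.add(((C2, S2, B2), frozenset()))
    else:                                  # breakpoint cleared: emit colour 0
        succs.add(((C2, S2, C2), frozenset({0})))
    # Guess (i): if no tracked run just took an accepting transition, move
    # them all to S and resample the breakpoint.
    if not (img(delta_f, B) & img(delta_scc, B)):
        S3 = S2 | (img(delta_scc, B) & C2)
        succs.add(((C2 - S3, S3, C2 - S3), frozenset({0})))
    return succs
```

Note how a branch dies (returns no successors) as soon as a supposedly safe run takes an accepting transition, which is exactly what makes wrong guesses harmless.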

Fig. 1.
Left: BA \(\mathcal {A}_{ ex }\) (dots represent accepting transitions). Right: the outcome of \(\textsc {ModCompl}(\texttt{CSB}_{P_0}, \texttt{MH}_{P_1},\mathcal {A}_{ ex })\) with \(\textsf{Acc}= \textsf{Inf}(0) \wedge \textsf{Inf}(1)\). States are given as \((H, (C_0, S_0, B_0), (C_1, B_1))\); to avoid too many braces, sets are given as sums.

As mentioned above, w is not accepted within P iff all runs over w either (i) leave P or (ii) become safe. In the context of the presented algorithm, this corresponds to (i) B becoming empty infinitely often and (ii) the runs in S never seeing an accepting transition again, i.e., \(\delta _{F}(S, a) \cap \delta _{\textrm{SCC}}(S, a)\) remaining empty. Then we only need to check whether there exists an infinite sequence of macrostates \(\hat{\rho } = (C_0, S_0, B_0) \ldots \) that emits the colour 0 infinitely often.

Lemma 2

The partial algorithm \(\texttt{CSB}\) is correct.

It is worth noting that when the given partition block P contains all DACs of \(\mathcal {A}\), we can still use the construction above, while the construction in [28] only works on SDBAs.

Example 1

In Fig. 1, we give an example run of our algorithm on the BA \(\mathcal {A}_{ ex }\). The BA contains three SCCs, one of them (the one containing p) non-accepting (therefore, it does not need to occur in any partition block). The partition block \(P_0\) contains a single DAC, so we can use algorithm \(\texttt{CSB}\), and the partition block \(P_1\) contains a single accepting IWC, so we can use \(\texttt{MH}\). The resulting \(\textsc {ModCompl}(\texttt{CSB}_{P_0}, \texttt{MH}_{P_1},\mathcal {A}_{ ex })\) uses two colours, 0 from \(\texttt{CSB}\) and 1 from \(\texttt{MH}\). The acceptance condition is \(\textsf{Inf}(0) \wedge \textsf{Inf}(1)\).    \(\square \)

4.3 Upper-bound for Elevator Automata Complementation

We now give an upper bound on the size of the complement generated by our algorithm for elevator automata, which significantly improves the best previously known upper bound of \(\mathcal {O}(16^n)\) [29] to \(\mathcal {O}(4^n)\), the same as for SDBAs, which are a strict subclass of elevator automata [6] (we note that this upper bound cannot be obtained by a determinization-based algorithm, since determinization of SDBAs is in \(\Omega (n!)\) [17, 40]).

Theorem 2

Let \(\mathcal {A}\) be an elevator automaton with n states. Then there exists a BA with \(\mathcal {O}(4^n)\) states accepting the complement of \(\mathcal {L}(\mathcal {A})\).


Proof (Sketch). Let \(Q_W\) be all states in accepting IWCs, \(Q_D\) be all states in DACs, and \(Q_N\) be the remaining states, i.e., \(Q = Q_W \uplus Q_D \uplus Q_N\). We form two partition blocks, \(P_0 = Q_W\) and \(P_1 = Q_D\), and use \(\texttt{MH}\) and \(\texttt{CSB}\), respectively, as the partial algorithms, with macrostates of the form \((H, (C_0, B_0), (C_1, S_1, B_1))\). For each state \(q_N \in Q_N\), there are two options: either \(q_N \notin H\) or \(q_N \in H\). For each state \(q_W \in Q_W\), there are three options: (i) \(q_W \notin C_0\), (ii) \(q_W \in C_0 \setminus B_0\), or (iii) \(q_W \in C_0 \cap B_0\). Finally, for each \(q_D \in Q_D\), there are four options: (i) \(q_D \notin C_1 \cup S_1\), (ii) \(q_D \in S_1\), (iii) \(q_D \in C_1 \setminus B_1\), or (iv) \(q_D \in C_1 \cap B_1\). Therefore, the total number of macrostates is \(2 \cdot 2^{|Q_N|} \cdot 3^{|Q_W|} \cdot 4^{|Q_D|} \in \mathcal {O}(4^n)\), where the initial factor 2 is due to degeneralization from two colours to one (the two colours can actually be avoided by using our shared breakpoint optimization from Sec. 5.4).    \(\square \)
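As a quick sanity check of the counting argument (ours, not part of the paper), the count \(2 \cdot 2^{|Q_N|} \cdot 3^{|Q_W|} \cdot 4^{|Q_D|}\) stays within \(2 \cdot 4^n\) for every split of the n states, since both 2 and 3 are at most 4:

```python
def macrostate_bound(n_N, n_W, n_D):
    """Macrostate count from the proof sketch: 2 * 2^|Q_N| * 3^|Q_W| * 4^|Q_D|."""
    return 2 * 2**n_N * 3**n_W * 4**n_D

# The count never exceeds 2 * 4^n, checked exhaustively for small n.
assert all(macrostate_bound(a, b, n - a - b) <= 2 * 4**n
           for n in range(25)
           for a in range(n + 1)
           for b in range(n + 1 - a))
```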

5 Optimizations of the Modular Construction

In this section, we propose optimizations of the basic modular algorithm. In Sec. 5.1, we give a partial algorithm to complement initial partition blocks with DACs. Further, in Sec. 5.2, we propose the postponed construction, which allows using automata reduction on intermediate results. In Sec. 5.3, we propose the round-robin algorithm, which alleviates the explosion of the size of the Cartesian product of partial successors. In Sec. 5.4, we provide an optimization for partial algorithms that are based on the breakpoint construction, and, finally, in Sec. 5.5, we show how to employ simulation to decrease the size of macrostates in the synchronous construction.

5.1 Complementation of Initial Deterministic Partition Blocks

Our first optimization is an algorithm \(\texttt{CoB}\) for a subclass of partition blocks containing DACs. In particular, the condition \(\varphi _{\texttt{CoB}}\) specifies that the partition block P is deterministic and can be reached only deterministically in \(\mathcal {A}\) (i.e., \(\mathcal {A}_P\) after removing redundant states is deterministic). Then, we say that P is an initial deterministic partition block. The algorithm is based on complementation of deterministic BAs into co-Büchi automata.

The algorithm \(\texttt{CoB}_P\) is formalized below:

[The formal definition of \(\texttt{CoB}_P\) is given as a figure in the original.]

Intuitively, all runs reach P deterministically, which means that over a word w, at most one run can reach P (so \(|\texttt{Init}^{\texttt{CoB}_P}| = 1\)). Thus, we have \(|\delta (H, w_{j}) \cap P| = 1 \) for some \(j \ge 0\) if there is a run over w to P, corresponding to \(\delta (H,a)\cap P = \{r\}\) in the construction. To check whether w is not accepted in P, we only need to check whether the run from \(r \in P\) over w visits accepting transitions only finitely often. We give an example of complementation of a BA containing an initial deterministic partition block in [27].
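The tracking of the single run can be sketched as follows. This is our hedged reconstruction of the intuition (the authoritative definition is the paper's figure): a macrostate is either None (no run inside P yet, a bottom value we introduce for the sketch) or the unique state r of P reached by the single run; colour 0 is emitted on accepting transitions and the condition is \(\textsf{Fin}(0)\).

```python
def cob_succ(H, M, a, delta, delta_f, P):
    """Successor of a macrostate M over symbol a; H is the set of all states
    of A reached over the same word. Returns a singleton set of (macrostate,
    colours) pairs; colour 0 marks an accepting transition taken inside P."""
    if M is None:
        entered = {q for p in H for q in delta.get((p, a), ()) if q in P}
        assert len(entered) <= 1           # P is reached deterministically
        return {(next(iter(entered), None), frozenset())}
    nxt = {q for q in delta.get((M, a), ()) if q in P}
    assert len(nxt) <= 1                   # P itself is deterministic
    if not nxt:
        return {(None, frozenset())}       # the run left P
    (r,) = nxt
    acc = r in delta_f.get((M, a), set())  # accepting transition taken?
    return {(r, frozenset({0} if acc else set()))}
```

A word is then accepted by the complement within P iff, along the unique macrorun, colour 0 is emitted only finitely often.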

Lemma 3

The partial algorithm \(\texttt{CoB}\) is correct.

5.2 Postponed Construction

The modular synchronous construction from Sec. 3.1 utilizes the fact that in the simultaneous construction of successors for each partition block over a, if one partial macrostate \(M_i\) does not have a successor over a, then the macrostate \((H, M_1, \ldots , M_n)\) has no successor in \(\delta ^{\mathcal {C}}\) either. This is useful, e.g., for inclusion testing, where it is not necessary to generate the whole complement. On the other hand, if we need to generate the whole automaton, a drawback of the proposed modular construction is that each partial complementation algorithm itself may generate a lot of useless states. In this section, we propose the postponed construction, which complements the partition blocks (with their surroundings) independently and later combines the intermediate results to obtain the complement automaton for \(\mathcal {A}\). The main advantage of the postponed construction is that one can apply automata reduction (e.g., based on removing useless states or using simulation [1, 9, 13, 18]) to decrease the size of the intermediate automata.

In the postponed construction, we use a product-based intersection operation (i.e., for two TELAs \(\mathcal {B}_1\) and \(\mathcal {B}_2\), a product automaton \(\mathcal {B}_1\cap \mathcal {B}_2\) satisfying \(\mathcal {L}(\mathcal {B}_1 \cap \mathcal {B}_2) = \mathcal {L}(\mathcal {B}_1) \cap \mathcal {L}(\mathcal {B}_2)\)). Further, we employ a function \(\texttt{Red}\) performing some language-preserving reduction of an input TELA. Then, the postponed construction for an elevator automaton \(\mathcal {A}\) with a partitioning \(P_1, \ldots , P_n\) and a sequence \(\texttt{Alg}^1, \ldots , \texttt{Alg}^n\), where \(\texttt{Alg}^i\) is a partial complementation algorithm for \(P_i\), is defined as follows:

$$\begin{aligned} \textsc {PostpCompl}(\texttt{Alg}^1_{P_1}, \dots , \texttt{Alg}^n_{P_n},\mathcal {A}) = \bigcap _{i=1}^n \texttt{Red}\left( \textsc {ModCompl}(\texttt{Alg}^i_{P_i}, \mathcal {A}_{P_i})\right) . \end{aligned}$$

The correctness of the construction is then summarized by the following theorem.

Theorem 3

Let \(\mathcal {A}\) be a BA, \(P_1, \ldots , P_n\) be a partitioning of \(\mathcal {A}\), and \(\texttt{Alg}^1, \ldots , \texttt{Alg}^n\) be a sequence of partial complementation algorithms such that \(\texttt{Alg}^i\) is correct for \(P_i\). Then, \(\mathcal {L}(\textsc {PostpCompl}(\texttt{Alg}^1_{P_1}, \dots , \texttt{Alg}^n_{P_n},\mathcal {A})) = \Sigma ^\omega \setminus \mathcal {L}(\mathcal {A})\).
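The postponed strategy is, at its core, a reduce-then-intersect fold over the partition blocks. The following sketch mirrors the \(\textsc {PostpCompl}\) definition; the functions `mod_compl`, `reduce_aut`, and `intersect` are placeholders for the surrounding framework's operations (they are illustrative names of our own, not Kofola's actual API):

```python
from functools import reduce

def postponed_complement(blocks, algs, aut, mod_compl, reduce_aut, intersect):
    """Complement each partition block independently via mod_compl,
    reduce each intermediate automaton via reduce_aut, and intersect
    all the intermediate results."""
    intermediates = [reduce_aut(mod_compl(alg, aut, block))
                     for alg, block in zip(algs, blocks)]
    return reduce(intersect, intermediates)
```

As a toy sanity check, one can model "automata" as plain sets of letters with set complement and set intersection; the fold then computes the complement of the union of the blocks' languages, matching the theorem above.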

5.3 Round-Robin Algorithm

The proposed basic synchronous approach from Sec. 3.1 may suffer from a combinatorial explosion because the successors of a macrostate are given by the Cartesian product of all successors of the partial macrostates. To alleviate this explosion, we propose a round-robin top-level algorithm. Intuitively, the round-robin algorithm actively tracks runs in only one partial complementation algorithm at a time (while the other algorithms stay passive). The algorithm periodically changes the active algorithm to avoid starvation (the decision to leave the active state is, however, fully directed by the partial complementation algorithm). This can alleviate an explosion in the number of successors for algorithms that generate more than one successor (e.g., for rank-based algorithms, where one needs to make a nondeterministic choice of decreasing ranks of states in order to be able to accept [10, 21, 24, 29, 34, 48]; such a choice needs to be made only in the active phase, while in the passive phase the construction just needs to make sure that the run is consistent with the given ranking, which can be done deterministically).

The round-robin algorithm works on the level of partial complementation round-robin algorithms. Each instance of a partial algorithm provides passive types to represent passive partial macrostates and active types to represent currently active partial macrostates. In contrast to the basic partial complementation algorithms from Sec. 3.1, which provide only a single successor function, the round-robin partial algorithms provide several successor functions. In particular, \(\texttt{SuccPass}^{}\) returns (passive) successors of a passive partial macrostate, \(\texttt{Lift}^{}\) gives all possible active counterparts of a passive macrostate, and \(\texttt{SuccAct}^{}\) returns successors of an active partial macrostate. If \(\texttt{SuccAct}^{}\) returns a partial macrostate of the passive type, the round-robin algorithm promotes the next partial algorithm to be the active one. For instance, in the round-robin version of \(\texttt{CSB}\), the passive type does not contain the breakpoint and only checks that safe runs stay safe, so it is deterministic. Due to space limitations, we give a formal definition and more details about the round-robin algorithm in [27].
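The top-level round-robin step can be sketched as follows. This is a simplified model of our own making: passive successors are assumed to be deterministic (as for the passive types of \(\texttt{CSB}\) and \(\texttt{MH}\) discussed above), and the active algorithm is modelled as returning pairs of a successor and a flag saying whether it stays active; the names `succ_pass`, `succ_act`, and `lift` mirror \(\texttt{SuccPass}\), \(\texttt{SuccAct}\), and \(\texttt{Lift}\) but are otherwise illustrative.

```python
def round_robin_succ(macrostate, symbol, algs):
    """macrostate = (tuple of partial macrostates, index of the active
    algorithm).  Passive successors are assumed deterministic; the
    active algorithm may branch and may signal that its round is over."""
    partials, act = macrostate
    n = len(partials)
    passive = [algs[i].succ_pass(partials[i], symbol) if i != act else None
               for i in range(n)]
    successors = []
    for succ, stays_active in algs[act].succ_act(partials[act], symbol):
        if stays_active:
            new = list(passive)
            new[act] = succ
            successors.append((tuple(new), act))
        else:
            nxt = (act + 1) % n   # promote the next algorithm (no starvation)
            for lifted in algs[nxt].lift(passive[nxt]):
                new = list(passive)
                new[act], new[nxt] = succ, lifted
                successors.append((tuple(new), nxt))
    return successors
```

Note that only the active algorithm may produce several successors; all passive algorithms contribute exactly one, which is what curbs the Cartesian-product explosion.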

5.4 Shared Breakpoint

The partial complementation algorithms \(\texttt{CSB}\) and \(\texttt{MH}\) (and later \(\texttt{RNK}\) defined in Sec. 6) use a breakpoint to check whether the runs under inspection are accepting or not. As an optimization, we consider merging the breakpoints of several algorithms and keeping only a single breakpoint for all supported algorithms. The top-level algorithm then needs to manage only one breakpoint and to emit a colour only if this sole breakpoint becomes empty. This may lead to a smaller number of generated macrostates since we synchronize the breakpoint sampling among several algorithms. The second benefit is that this allows us to generate fewer colours (in the case of elevator automata complemented using the algorithms \(\texttt{CSB}\) and \(\texttt{MH}\), we get only one colour).
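A rough sketch of the shared-breakpoint bookkeeping is below. This is entirely schematic (the real construction interleaves it with the macrostate update, and the names are ours): each partial algorithm advances the part of the single shared breakpoint it owns, a colour is emitted only when the shared set empties, and the breakpoint is then resampled from all algorithms at once.

```python
def step_shared_breakpoint(bp, symbol, partial_steps, samples):
    """bp: the single shared breakpoint (a set of states).
    partial_steps: per-algorithm successor functions for the part of bp
    owned by each algorithm.  samples: per-algorithm fresh breakpoint
    contents used once bp empties.  Returns (new bp, colour emitted?)."""
    new_bp = set()
    for step in partial_steps:
        new_bp |= step(bp, symbol)
    if new_bp:
        return new_bp, False
    # the sole breakpoint became empty: emit the single colour and resample
    resampled = set()
    for sample in samples:
        resampled |= sample(symbol)
    return resampled, True
```

Because the sampling is synchronized, all participating algorithms share one colour instead of one colour each.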

5.5 Simulation Pruning

Our construction can be further optimized by using a simulation (or another compatible) relation for pruning macrostates. A simulation is, broadly speaking, a relation \({\preccurlyeq } \subseteq Q\times Q\) implying language inclusion of states, i.e., \(\forall p, q \in Q:p\preccurlyeq q \Longrightarrow \mathcal {L}(\mathcal {A}{}[p]) \subseteq \mathcal {L}(\mathcal {A}{}[q])\). Intuitively, our optimization allows us to remove a state p from a macrostate M if there is also a state q in M such that (i) \(p \preccurlyeq q\), (ii) p is not reachable from q, and (iii) p is smaller than q in an arbitrary total order over Q (which serves as a tie-breaker for simulation-equivalent mutually unreachable states). The reason why p can be removed is that its behaviour can be completely mimicked by q. In our construction, we can then, roughly speaking, replace each call to the functions \(\delta (U,a)\) and \(\delta _F(U,a)\), for a set of states U, by \( pr (\delta (U,a))\) and \( pr (\delta _F(U,a))\), respectively, in each partial complementation algorithm, as well as in the top-level algorithm, where \( pr (S)\) is obtained from S by pruning all eligible states. The details are provided in [27].
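The pruning function \( pr \) follows directly from conditions (i)–(iii). In the sketch below (our own encoding, not the actual implementation), the simulation and the reachability relation are passed in as precomputed sets of pairs, and the total order is given by a map from states to distinct integers:

```python
def prune(S, sim, reach, order):
    """Remove p from S whenever some q in S satisfies (i) p simulated
    by q ((p, q) in sim), (ii) p not reachable from q ((q, p) not in
    reach), and (iii) p precedes q in the fixed total order (the
    tie-breaker for simulation-equivalent mutually unreachable states)."""
    return {p for p in S
            if not any((p, q) in sim
                       and (q, p) not in reach
                       and order[p] < order[q]
                       for q in S if q != p)}
```

Condition (iii) guarantees that of two simulation-equivalent, mutually unreachable states, exactly one is removed rather than both.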

6 Modular Complementation of Non-Elevator Automata

A non-elevator automaton \(\mathcal {A}\) contains at least one NAC, besides possibly other IWCs or DACs. To complement \(\mathcal {A}\) in a modular way, we apply the techniques seen in Sec. 4 to its DACs and IWCs, while for its NACs we resort to a general complementation algorithm \(\texttt{Alg}\). In theory, rank- [34], slice- [32], Ramsey- [50], subset-tuple- [2], and determinization- [46] based complementation algorithms adapted to work on a single partition block instead of the whole automaton are all valid instantiations of \(\texttt{Alg}\). Below, we give a high-level description of two such algorithms: rank- and determinization-based.

Rank-based partial complementation algorithm. Working on each NAC independently benefits the complementation algorithm even if the input BA contains only NACs. For instance, in rank-based algorithms [10, 21, 24, 29, 33, 34, 48], whether all runs of \(\mathcal {A}\) over a given \(\omega \)-word w are non-accepting is determined by the ranks of states, given by so-called ranking functions. A ranking function is a (partial) function from Q to \(\omega \). The main idea of rank-based algorithms is the following: (i) every run is initially nondeterministically assigned a rank, (ii) ranks can only decrease along a run, (iii) ranks need to be even every time a run visits an accepting transition, and (iv) the complement automaton accepts iff all runs eventually get trapped in odd ranks. In the standard rank-based procedure, the initial assignment of ranks to states in (i) is a function \(Q \mathrel {\rightharpoonup }\{0, \ldots , 2n-1\}\) for \(n = |Q|\). Using our framework, we can, however, significantly restrict the considered ranks in a partition block P to only \(P \mathrel {\rightharpoonup }\{0, \ldots , 2m-1\}\) for \(m = |P|\) (here, it makes sense to use partition blocks consisting of single SCCs). One can further reduce the considered ranks using the techniques introduced in, e.g., [24, 29].
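The gain from restricting ranks to a block is easy to quantify with a back-of-the-envelope count: a partial function from k states to b ranks can be chosen in \((b+1)^k\) ways (each state gets one of the b ranks or stays undefined). The sketch below compares this crude upper bound for a whole automaton against a single block; it deliberately ignores further restrictions such as tightness, so the absolute numbers are only illustrative.

```python
def num_partial_rankings(num_states, num_ranks):
    """Upper bound on the number of partial functions from a set of
    num_states states to {0, ..., num_ranks - 1}: each state is either
    undefined or mapped to one of the num_ranks ranks."""
    return (num_ranks + 1) ** num_states

# whole automaton with n = 4 states: ranks drawn from {0, ..., 2n - 1}
whole = num_partial_rankings(4, 8)   # 9**4 = 6561
# a single partition block with m = 2 states: ranks from {0, ..., 2m - 1}
block = num_partial_rankings(2, 4)   # 5**2 = 25
```

Even on this tiny example, working per block shrinks the space of candidate ranking functions by two orders of magnitude.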

In order to adapt the rank-based construction as a partial complementation algorithm \(\texttt{RNK}\) in our framework, we need to extend the ranking functions by a fresh “box” state \(\Box \) representing states outside the partition block. The ranking function then uses \(\Box \) to represent the ranks of runs newly coming into the partition block. The box-extension also requires changing the transition function so that \(\Box \) always represents the states reachable from the outside. We provide the details of the construction, which includes the MaxRank optimization from [24], in [27].

Determinization-based partial complementation algorithm. As witnessed by [29, 52], determinization-based complementation is also a good instantiation of \(\texttt{Alg}\) in practice; therefore, we also consider the standard Safra-Piterman determinization [43, 45, 46] as a choice of \(\texttt{Alg}\) for complementing NACs. Determinization-based algorithms use a layered subset construction to organize all runs over an \(\omega \)-word w. The idea is to identify a subset \(S \subseteq H\) of reachable states that occurs infinitely often along reading w such that, between every two occurrences of S, (i) every state in the second occurrence of S can be reached from a state in the first occurrence of S and (ii) every state in the second occurrence is reached from a state in the first occurrence while seeing an accepting transition. By König’s lemma, there must then be an accepting run of \(\mathcal {A}\) over w.

The construction initially maintains only one set H: the set of reachable states. Since S as defined above does not necessarily equal H, every time some runs visit accepting transitions, we create a new subset C for those runs and remember which subset C comes from. This way, we organize the current states of all runs into a tree structure and perform the subset construction in parallel for the sets in each tree node. If we find a tree node whose label, say \(S'\), is equal to the union of the states in its children, we know that \(S'\) satisfies the condition above, so we remove all its child nodes and emit a good event. If such a good event happens infinitely often, it means that \(S'\) also occurs infinitely often. In the complement, we therefore only need to make sure that good events happen only finitely many times. Working on each NAC separately also benefits the determinization-based approach, since the number of possible trees decreases with a smaller number of reachable states. Following the idea of [37], to adapt the construction as a partial complementation algorithm, we put all runs newly coming from other partition blocks into a newly created node without a parent; in this way, we actually maintain a forest of trees. We denote the determinization-based construction by \(\texttt{DET}\); cf. [37] for details.
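The good-event detection can be sketched on a simplified tree where each node carries a label and a list of children (a toy fragment of the Safra-Piterman update of our own making, not the full algorithm: node naming, ordering, and the deletion of deeper subtrees are all omitted, and children are assumed to be leaves):

```python
def detect_good_events(tree):
    """tree: dict mapping a node id to (label, list of child ids), with
    labels given as frozensets of automaton states.  A 'good event'
    fires at every node whose label equals the union of its children's
    labels; the children are then removed (assumed to be leaves here)."""
    good = []
    for node in list(tree):
        if node not in tree:           # already removed as someone's child
            continue
        label, children = tree[node]
        if children and label == frozenset().union(*(tree[c][0] for c in children)):
            good.append(node)
            for c in children:
                del tree[c]
            tree[node] = (label, [])
    return good
```

In the complement, a run of the construction is accepting precisely when, from some point on, no such event fires any more.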

Table 1. Statistics for our experiments. The column unsolved classifies unsolved instances by the form timeouts : out of memory : other failures. For the cases of VBS we provide just the number of unsolved cases. The columns states and runtime provide mean : median of the number of states and runtime, respectively.

7 Experimental Evaluation

To evaluate the proposed approach, we implemented it in a prototype tool Kofola  [25] (written in C++) built on top of Spot  [16] and compared it against COLA  [37], Ranker  [28] (v. 2), Seminator  [5] (v. 2.0), and Spot  [15, 16] (v. 2.10.6), which are the state of the art in BA complementation [28, 29, 37]. Due to space restrictions, we give results for only two instantiations of our framework: \(\textsc {Kofola}_{S}\) and \(\textsc {Kofola}_{P}\). Both instantiations use \(\texttt{MH}\) for IWCs, \(\texttt{CSB}\) for DACs, and \(\texttt{DET}\) for NACs. The partitioning selection algorithm merges all IWCs into one partition block, all DACs into one partition block, and keeps all NACs separate. Simulation-based pruning from Sec. 5.5 is turned on, and round-robin from Sec. 5.3 is turned off (since the selected algorithms are quite deterministic). \(\textsc {Kofola}_{S}\) employs the synchronous and \(\textsc {Kofola}_{P}\) employs the postponed strategy. We also consider the Virtual Best Solver (VBS), i.e., a virtual tool that would choose the best solver for each single benchmark among all tools (\(\textsc {VBS}_{+}\)) and among all tools except both versions of Kofola (\(\textsc {VBS}_{-}\)). We ran our experiments on an Ubuntu 20.04.4 LTS system running on a desktop machine with 16 GiB RAM and an Intel 3.6 GHz i7-4790 CPU. To constrain and collect statistics about the executions of the tools, we used BenchExec [3] and imposed a memory limit of 12 GiB and a timeout of 10 minutes; we used Spot to cross-validate the equivalence of the automata generated by the different tools. An artifact reproducing our experiments is available as [26].

As our data set, we used 39,837 BAs from the automata-benchmarks repository [36] (used before by, e.g., [28, 29, 37]), which contains BAs from the following sources: (i) randomly generated BAs used in [52] (21,876 BAs), (ii) BAs obtained from LTL formulae from the literature and randomly generated LTL formulae [5] (3,442 BAs), (iii) BAs obtained from Ultimate Automizer [11] (915 BAs), (iv) BAs obtained from the solver for first-order logic over Sturmian words Pecan [31] (13,216 BAs), (v) BAs obtained from an S1S solver [23] (370 BAs), and (vi) BAs from LTL to SDBA translation [49] (18 BAs). From these BAs, 23,850 are deterministic, 6,147 are SDBAs (but not deterministic), 4,105 are elevator (but not SDBAs), and 5,735 are the rest.

In Table 1 we present an overview of the outcomes. Despite being a prototype, Kofola can already complement a large portion of the input automata, with very few cases that can be complemented successfully only by Spot or COLA. Regarding the mean number of states, \(\textsc {Kofola}_{S}\) has the lowest mean value of all tools (except Ranker, which, however, had 1,000 unsolved cases). Moreover, Kofola significantly decreased the mean number of states when included into the VBS: from 96 to 78! We consider this to be a strong validation of the usefulness of our approach. Regarding the runtime, both versions of Kofola are rather similar; Kofola is just slightly slower than Spot and COLA but much faster than both Ranker and Seminator (cf. [27]).

Fig. 2.
figure 2

Scatter plots comparing the numbers of states generated by the tools.

In Fig. 2 we present a comparison of the number of states generated by \(\textsc {Kofola}_{S}\) and the other tools; we omit \(\textsc {VBS}_{+}\) since the corresponding plot can be derived from the one for \(\textsc {VBS}_{-}\) (since Ranker and Seminator only output BAs, we compare the sizes of outputs transformed into BAs for all tools to be fair). In the plots, the number of benchmarks represented by each mark is given by its colour; a mark above the diagonal means that \(\textsc {Kofola}_{S}\) generated a smaller BA than the other tool, while a mark on the top border means that the other tool failed while \(\textsc {Kofola}_{S}\) succeeded, and symmetrically for the bottom part and the right-hand border. Dashed lines represent the maximum number of states generated by one of the tools in the plot; axes are logarithmic.

figure ac

From the results, \(\textsc {Kofola}_{S}\) clearly dominates state-of-the-art tools that are not based on SCC decomposition (Ranker, Spot, Seminator). The outputs are quite comparable to COLA, which also uses SCC decomposition and can be seen as an instantiation of our framework. This supports our intuition that working on the single SCCs helps in reducing the size of the final automaton, confirming the validity of our modular mix-and-match Büchi complementation approach. Lastly, in the figure on the right we compare our algorithm for elevator automata with the one in Ranker (the only other tool with a dedicated algorithm for this subclass). Our new algorithm clearly dominates the one in Ranker.

8 Related Work

To the best of our knowledge, we provide the first general framework where one can plug in different BA complementation algorithms while taking advantage of the specific structure of SCCs. Below, we discuss the differences between our work and the literature.

The breakpoint construction [42] was designed to complement BAs with only IWCs, while our construction treats it as a partial complementation procedure for IWCs, differing in the need to handle incoming states from other partition blocks. The NCSB family of algorithms [5, 6, 11, 28] for SDBAs does not work when there are nondeterministic jumps between DACs; its members can, however, be adapted as partial procedures for complementing DACs in our framework, cf. Sec. 4.2. In [29], a deelevation-based procedure is applied to elevator automata to obtain BAs with a fixed maximum rank of 3, for which a rank-based construction produces a result of size in \(\mathcal {O}(16^n)\). In our work, we exploit the structure of the SCCs much more to obtain an exponentially better upper bound of \(\mathcal {O}(4^n)\) (the same as for SDBAs). The upper bound \(\mathcal {O}(4^n)\) for complementing unambiguous BAs was established in [39]; this work is orthogonal to ours, but it seems possible to incorporate it into our framework in the future.

There is a huge body of work on complementation of general BAs [2, 5, 7, 8, 10, 19–22, 24, 29, 32, 34, 43, 45, 46, 48, 50, 52, 53]; all of them work on the whole graph structure of the input BAs. Our framework is general enough to include all of them as partial complementation procedures for NACs. On the contrary, our framework does not directly allow (at least in the synchronous strategy) the use of algorithms that do not work on the structure of the input BA, such as the learning-based complementation algorithm from [38]. The recent determinization algorithm from [37], which serves as our inspiration, also handles SCCs separately (it can actually be seen as an instantiation of our framework). Our current algorithm is, however, more flexible: it allows one to mix-and-match various constructions and to keep SCCs separate or merge them into partition blocks, and it allows us to obtain the complexity \(\mathcal {O}(4^n)\), while [37] only allowed \(\mathcal {O}(n!)\) (which is tight since SDBA determinization is in \(\Omega (n!)\) [17, 40]).

Regarding the tool Spot [15, 16], it should not be perceived as a single complementation algorithm. Instead, Spot should be seen as a highly engineered platform that uses the breakpoint construction for inherently weak BAs, NCSB [6, 11] for SDBAs, and determinization-based complementation [43, 45, 46] for general BAs, applying many other heuristics along the way. Seminator uses semi-determinization [4, 5, 14] to make sure the input is an SDBA and then uses NCSB [6, 11] to compute the complement.

9 Conclusion and Future Work

We have proposed a general framework for BA complementation where one can plug-in different partial complementation procedures for SCCs by taking advantage of their specific structure. Our framework not only obtains an exponentially better upper bound for elevator automata, but also complements existing approaches well. As shown by the experimental results (especially for the VBS), our framework significantly improves the current portfolio of complementation algorithms.

We believe that our framework is an ideal testbed for experimenting with different BA complementation algorithms, e.g., for the following two reasons: (i) One can develop an efficient complementation algorithm that works only for a quite restricted sub-class of BAs (such as the algorithm for initial deterministic SCCs that we showed in Sec. 5.1), and the framework can leverage it for the complementation of all BAs that contain such a sub-structure. (ii) When one tries to improve a general complementation algorithm, they can focus on the complementation of the structurally hard SCCs (mainly the nondeterministic accepting SCCs) and do not need to look for heuristics that would improve the algorithm when an easier substructure is present in the input BA (as was done, e.g., in [29]). By its very definition, the framework immediately offers opportunities for on-the-fly BA language inclusion testing, leveraging the partial complementation procedures present. Finally, we believe that the framework also opens new directions for future research, such as developing smart ways, probably based on machine learning, of selecting which partial complementation procedure should be used for which SCC, based on its features. In the future, we want to incorporate other algorithms for complementation of NACs and to identify properties of SCCs that allow the use of more efficient algorithms (such as unambiguous NACs [39]). Moreover, it seems that generalizing the Delayed optimization from [24] to the top-level algorithm could also help reduce the state space.