1 Introduction

Modern software and hardware systems harness asynchronous interactions to improve speed, responsiveness, and power consumption: delay-insensitive circuits, networks of sensors, multi-threaded programs and interacting web services are all asynchronous in nature. Various factors contribute to asynchrony, such as unpredictable transmission delays, concurrency, distributed execution, and parallelism. The common result is that each component of a system operates with partial, out-of-date knowledge of the state of the others, which considerably complicates system design and programming. Yet, it is often easier to state the desired behavior of an asynchronous program. We therefore consider the question of automatically constructing (i.e., synthesizing) a correct reactive asynchronous program directly from its temporal specification.

The asynchronous synthesis problem was originally formulated by Pnueli and Rosner in 1989 on the heels of their work on synchronous synthesis [31, 32]. The task is that of constructing a (finite-state) program which interacts asynchronously with its environment while meeting a temporal specification on the actions at the interface between program and environment. Given a linear temporal specification \(\varphi \), Pnueli-Rosner show that asynchronous synthesis can be reduced to checking whether a derived specification \(\varphi '\), specifying the required behavior of the scheduler, is synchronously synthesizable. That is, an asynchronous program can implement \(\varphi \) iff a synchronous program can implement \(\varphi '\).

It may then appear straightforward to construct asynchronous programs using one of the many tools that exist for synchronous synthesis. However, the derived formula \(\varphi '\) embeds a nontrivial stutter quantification, which requires a complex intermediate automaton construction; it has not, to the authors’ knowledge, ever been implemented. This situation is in stark contrast to that of synchronous synthesis, for which multiple tools and algorithms have been created.

Alternative methods have been proposed for asynchronous synthesis: Finkbeiner and Schewe reduce a bounded form of the problem to a SAT/SMT query [35], and Klein, Piterman and Pnueli show that some GR(1) specifications can be transformed as above to an approximate synchronous GR(1) property [21, 22]. These alternatives, however, have drawbacks of their own. The SAT/SMT reduction is exponential in the number of interface (input and output) bits, an important parameter; the GR(1) specifications amenable to transformation are limited and are characterized by semantic conditions that are not easily checked.

This work presents two key simplifications. First, we define a new property, \(\mathsf {PR}(\varphi )\) (named in honor of Pnueli-Rosner’s pioneering work) which, like \(\varphi '\), is synchronously realizable if, and only if, \(\varphi \) is asynchronously realizable. We then present an automaton construction for \(\mathsf {PR}(\varphi )\) that is direct and simpler, and results in an exponentially smaller automaton than the one for \(\varphi '\). In particular, the automaton for \(\mathsf {PR}(\varphi )\) has at most twice as many states as the automaton for \(\varphi \), as opposed to the exponential blowup of the state space (in the number of interface bits) incurred in the construction of the automaton for \(\varphi '\). As almost all synchronous automaton-based synthesis tools use an explicit encoding for automaton states, this reduction is vital in practice.

We show how to implement the transformation \(\mathsf {PR}\) symbolically (with BDDs), so that interface bits are always represented in symbolic form. One can then apply the modular strategy of Pnueli-Rosner: a symbolic automaton for \(\varphi \) is transformed to a symbolic automaton for \(\mathsf {PR}(\varphi )\) (instead of \(\varphi '\)), which is analyzed with a synchronous synthesis tool. We establish that \(\mathsf {PR}\) is conjunctive and preserves safety. These are important properties, used by tools such as Acacia+ [8] and Unbeast [11] to optimize the synchronous synthesis task. The new construction has been implemented in a prototype tool, BAS, and experiments demonstrate feasibility in practice.

In addition, we establish that for several classes of temporal properties, which are easily characterized by syntax, the automaton-based method can be avoided entirely and replaced with Boolean constraint solving. The constraints are quantified Boolean formulae, with prefix \(\exists \forall \) and a kernel that is derived from the original specification. This surprising reduction, which resolves a temporal problem with Boolean reasoning, is a consequence of the highly adversarial role of the environment in the asynchronous setting.

These contributions turn a seemingly intractable synthesis task into one that is feasible in practice.

2 Preliminaries

Temporal Specifications. Linear Temporal Logic (LTL) [29] extends propositional logic with temporal operators. LTL formulae are defined by the grammar \(\varphi ::= \mathsf {True} \mid \mathsf {False} \mid p \mid \lnot \varphi \mid \varphi _1\wedge \varphi _2 \mid X\varphi \mid \varphi _1 \mathsf {\,U\,}\varphi _2 \mid \lozenge \,\varphi \mid \square \,\varphi \mid \boxminus \varphi \). Here p is a proposition, and \(\mathsf { X }\) (Next), \(\mathsf {\,U\,}\) (Until), \(\lozenge \,\) (Eventually), \(\square \,\) (Always), and \(\boxminus \) (Always in the past) are temporal operators. The LTL semantics is standard, and is given in the full version of the paper. For an LTL formula \(\varphi \), let \(\mathcal {L}(\varphi )\) denote the set of words (over subsets of propositions) that satisfy \(\varphi \).

GR(1) is a useful fragment of LTL, where formulae are of the form \((\square S_e \wedge \bigwedge _{i=0}^m \square \lozenge P_i )\,\Rightarrow \,(\square S_s \wedge \bigwedge _{i=0}^n \square \lozenge Q_i)\), for propositional formulae \(S_e\), \(S_s\), \(P_i\), \(Q_i\). Typically, the left-hand side of the implication is used to restrict the environment, by requiring safety and liveness assumptions to hold, while the right-hand side is used to define the safety and liveness guarantees required of the system.

LTL specifications can be turned into equivalent Büchi automata, using standard constructions. A Büchi automaton, A, is specified by the tuple \((Q,Q_0,\varSigma ,\delta ,G)\), where Q is a set of states, \(Q_0 \subseteq Q\) defines the initial states, \(\varSigma \) is the alphabet, \(\delta \subseteq Q \times \varSigma \times Q\) is the transition relation, and \(G \subseteq Q\) defines the “green” (also known as “accepting” or “final”) states. A run r of the automaton on an infinite word \(\sigma =a_0,a_1,\ldots \) over \(\varSigma \) is an infinite sequence \(r=q_0,a_0,q_1,a_1,\ldots \) such that \(q_0\) is an initial state, and for each k, \((q_k,a_k,q_{k+1})\) is in the transition relation. Run r is accepting if a green state appears on it infinitely often; the language of A, denoted \(\mathcal {L}(A)\), is the set of words that have an accepting run.
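As an illustration of the acceptance condition, Büchi acceptance can be checked directly on ultimately periodic (“lasso”) words \(u \cdot v^{\omega }\). The sketch below is our own explicit-state encoding (all names are ours, not from the paper): it represents \(\delta \) as a set of triples and searches the product of automaton states with loop positions for a reachable cycle through a green state.

```python
# A minimal explicit-state sketch of a Büchi automaton (Q, Q0, Sigma, delta, G)
# and an acceptance check for ultimately periodic "lasso" words u . v^omega.
# All function and variable names are our own illustration.

def step(delta, states, letter):
    """One step of the nondeterministic transition relation delta <= Q x Sigma x Q."""
    return {q2 for (q1, a, q2) in delta if q1 in states and a == letter}

def accepts_lasso(q0, delta, green, prefix, loop):
    """u . v^omega is accepted iff, in the product of the automaton with the
    loop positions, some node (q, i) with q green is reachable and on a cycle."""
    n = len(loop)
    cur = set(q0)
    for a in prefix:                          # read the finite prefix u
        cur = step(delta, cur, a)
    reach = {(q, 0) for q in cur}             # product nodes (state, loop position)
    frontier = set(reach)
    while frontier:
        nxt = set()
        for (q, i) in frontier:
            for q2 in step(delta, {q}, loop[i]):
                node = (q2, (i + 1) % n)
                if node not in reach:
                    reach.add(node)
                    nxt.add(node)
        frontier = nxt
    for (g, i) in reach:                      # is a green node on a cycle?
        if g not in green:
            continue
        seen, frontier = set(), {(g, i)}
        while frontier:
            nxt = set()
            for (q, j) in frontier:
                for q2 in step(delta, {q}, loop[j]):
                    node = (q2, (j + 1) % n)
                    if node == (g, i):
                        return True
                    if node not in seen:
                        seen.add(node)
                        nxt.add(node)
            frontier = nxt
    return False

# Automaton for "infinitely many a's": the green state 1 is visited on each 'a'.
delta = {(0, 'b', 0), (0, 'a', 1), (1, 'a', 1), (1, 'b', 0)}
```

For instance, `accepts_lasso({0}, delta, {1}, [], ['a', 'b'])` holds, while the word \(b^{\omega }\) is rejected, since no run visits the green state infinitely often.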

The Asynchronous Synthesis Model. The goal of synthesis is to construct an “open” program M meeting a specification at its interface. In the asynchronous setting, the program M interacts in a fair interleaved manner with its environment E. The fairness restriction requires that E and M are each scheduled infinitely often in all infinite executions. Let E // M denote this composition. The interface between E and M is formed by the variables x and y. Variable x is written by E and is read-only for M, while y is written by M and is read-only for E. One can consider x (resp., y) to represent a vector of variables, i.e., \(x=(x_1,\ldots ,x_n)\) (resp., \(y=(y_1,\ldots ,y_m)\)) which is read (resp., written) atomically. Many of our results also extend to non-atomic reads and writes, and are discussed in the full version of the paper.

The synthesis task is to construct a program M which satisfies a temporal property \(\varphi (x,y)\) over the interface variables in the composition E // M, for any environment E. The most adversarial environment is the one which sets x to an arbitrary value at each scheduled step; we denote it by \(\mathsf {CHAOS}(x)\). The behaviors of the composition \(\mathsf {CHAOS}(x)//M\) simulate those of E // M for all E. Hence, it suffices to produce an M which satisfies \(\varphi \) in the composition \(\mathsf {CHAOS}(x)//M\). One can limit the set of environments through an assumption in the specification.

This leads to the formal definition of an asynchronous schedule, given by a pair of functions, \(r,w : \mathbb {N}\rightarrow \mathbb {N}\), which represent read and write points, respectively. The initial write point is \(w(0)=0\); it represents the choice of initial value for the variable y. Without loss of generality, the read-write points alternate, i.e., for all \(i\ge 0\), \(w(i) \le r(i) < w(i+1)\) and \(r(i) < w(i+1) \le r(i+1)\). A strict asynchronous schedule does not allow read and write points to overlap, i.e., the constraints are strengthened to \(w(i)< r(i) < w(i+1)\) and \(r(i)< w(i+1) < r(i+1)\). A tight asynchronous schedule is the strict schedule without any non-read-write gaps, i.e., \(r(k)=2k+1\) and \(w(k)=2k\), for all k. A synchronous schedule is the special non-strict schedule where \(r(i)=i\) and \(w(i)=i\), for all i.
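These constraints are easy to state operationally. A small sketch (helper names are ours) checks the alternation and strictness conditions on finite prefixes of r and w, and confirms that the tight schedule is strict while the synchronous schedule is a valid non-strict schedule:

```python
# Finite-prefix checks of the schedule constraints from the text.
# Function names are our own illustration.

def is_async_schedule(r, w, k):
    """w(0) = 0, and for i < k: w(i) <= r(i) < w(i+1) and r(i) < w(i+1) <= r(i+1)."""
    return w(0) == 0 and all(
        w(i) <= r(i) < w(i + 1) and r(i) < w(i + 1) <= r(i + 1)
        for i in range(k))

def is_strict_schedule(r, w, k):
    """Strict version: read and write points never coincide."""
    return w(0) == 0 and all(
        w(i) < r(i) < w(i + 1) and r(i) < w(i + 1) < r(i + 1)
        for i in range(k))

# The tight schedule r(k) = 2k+1, w(k) = 2k, and the synchronous schedule.
tight_r, tight_w = (lambda k: 2 * k + 1), (lambda k: 2 * k)
sync_r, sync_w = (lambda i: i), (lambda i: i)
```

As expected, the tight schedule satisfies both checks, while the synchronous schedule is asynchronous but not strict.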

Let \(D^v\) denote the binary domain \(\{\mathsf {True},\mathsf {False}\}\) for a variable v. A program M can be represented semantically as a function \(f: (D^{x})^* \rightarrow D^{y}\). For an asynchronous schedule (r, w), a sequence \(\sigma \in (D^x \times D^y)^{\omega }\) is said to be an asynchronous execution of f over (r, w) if the value of y is changed only at writing points, in a manner that depends only on the values of x at prior reading points. Formally, for all \(i\ge 0\), \(y_{w(i+1)} = f(x_{r(0)}\dots x_{r(i)})\), and for all j such that \(w(i)\le j < w(i+1)\), \(y_j = y_{w(i)}\). The initial value of y is the value it has at point \(w(0)=0\). The set of such sequences is denoted as \(\mathsf {asynch}(f)\). Over synchronous schedules, the set of such sequences is denoted by \(\mathsf {synch}(f)\). Function f is an asynchronous implementation of \(\varphi \) if all asynchronous executions of f over all possible schedules satisfy \(\varphi \), i.e., if \(\mathsf {asynch}(f) \subseteq \mathcal {L}(\varphi )\).

This formulation agrees with that given by Pnueli and Rosner for strict schedules. For synchronous schedules (and other non-strict schedules), our formulation has a Moore-style semantics – the output depends on strictly earlier inputs – while Pnueli and Rosner formulate a Mealy semantics. A Moore semantics is more appropriate for modeling software programs, where the output variable is part of the state, and fits well with the theoretical constructions that follow.

Definition 1

(Asynchronous LTL Realizability). Given an LTL property \(\varphi (x,y)\) over the input variable x and output variable y, the asynchronous LTL realizability problem is to determine whether there is an asynchronous implementation for \(\varphi \).

Definition 2

(Asynchronous LTL Synthesis). Given a realizable LTL-formula \(\varphi \), the asynchronous LTL synthesis problem is to construct an asynchronous implementation of \(\varphi \).

Examples. Pnueli and Rosner give a number of interesting specifications. The specification \(\square \,(y \equiv Xx)\) (“the current output equals the next input”) is satisfiable but not realizable, as any implementation would have to be clairvoyant. On the other hand, the flipped specification \(\square \,(x \equiv Xy)\) (“the next output equals the current input”) is synchronously realizable by a Moore machine which replays the current input as the next output. The specification \(\lozenge \,\square \,x \equiv \lozenge \,\square \,y\) is synchronously realizable by the same machine, but is asynchronously unrealizable, as shown next. Consider two input (x) sequences, under a schedule where reads happen only at odd positions. In both, let \(x = \mathsf {true}\) at all reading points. Then any program must respond to both inputs with the same output sequence for y. Now suppose that in the first sequence x is \(\mathsf {false}\) at all non-read positions, while in the second, x is \(\mathsf {true}\) at all non-read positions. In the first case, the specification forces the output y-sequence to be \(\mathsf {false}\) infinitely often; in the second, y is forced to be \(\mathsf {true}\) from some point on, a contradiction.

The negated specification \(\lozenge \,\square \,x \not \equiv \lozenge \,\square \,y\) is also asynchronously unrealizable, for the same reason. This “gap” illustrates an intriguing difference from the synchronous case, where either a specification is realizable for the system, or its negation is realizable for the environment. The two halves of the equivalence, i.e., \(\lozenge \,\square \,x \,\Rightarrow \,\lozenge \,\square \,y\) and \(\lozenge \,\square \,y \,\Rightarrow \,\lozenge \,\square \,x\), are individually asynchronously realizable, by strategies that fix the output to \(y = \mathsf {true}\) and to \(y = \mathsf {false}\), respectively.

From Asynchronous to Synchronous Synthesis. Pnueli and Rosner reduced asynchronous LTL synthesis to synchronous synthesis of Büchi objectives. Their reduction applied to LTL formulas with a single input and output variable [32]; it was later extended to the non-atomic case [30]. The original Pnueli-Rosner reduction deals exclusively with strict schedules, since they showed that it is sufficient to consider only strict schedules.

Two infinite sequences are said to be stuttering equivalent if one can be obtained from the other by finitely duplicating (“stretching”) a given state, or by deleting (“compressing”) finitely many contiguous identical states while retaining at least one of them. The stuttering quantification \(\exists ^{\approx }\) is defined as follows: \(\exists ^{\approx }x.\varphi \) holds for a sequence \(\pi \) if \(\exists x. \varphi \) holds for some sequence \(\pi '\) that is stuttering equivalent to \(\pi \). Pnueli-Rosner showed that an \(\mathsf {LTL}\)-formula \(\varphi (x,y)\) over input x and output y is asynchronously realizable iff a “kernel” formula (this is the precise formula referred to as \(\varphi '\) in the Introduction) \(\mathcal {K}(r, w, x, y) = \alpha (r, w) \rightarrow \beta (r, w, x, y)\) over read sequence r, write sequence w, input sequence x and output sequence y is synchronously realizable:

$$\begin{aligned} \begin{array}{r@{\quad }c} \alpha (r, w) =&{} (\lnot r \wedge \lnot w \mathsf {\,U\,}r) \wedge \square \,\lnot (r \wedge w) \wedge \square \,(r \,\Rightarrow \,(r \mathsf {\,U\,}(\lnot r) \mathsf {\,U\,}w)) \\ &{} \wedge \square \,(w \,\Rightarrow \,(w \mathsf {\,U\,}(\lnot w) \mathsf {\,U\,}r)) \\ \beta (r,w,x,y) =&{} \varphi (x,y) \wedge \forall a. \square \,((y=a) \,\Rightarrow \,((y=a)\mathsf {\,U\,}(\lnot w \wedge (y=a) \mathsf {\,U\,}w))) \\ &{} \wedge \forall ^{\approx }x'. (\square \,(\lnot r \,\Rightarrow \,\lnot r \mathsf {\,U\,}(x=x'))\,\Rightarrow \,\varphi (x', y)) \end{array} \end{aligned}$$

Here, \(\alpha \) encodes the strict scheduling constraints on read and write points, while \(\beta \) encodes conditions which assure a correct asynchronous execution over (r, w). The \(\forall ^{\approx }\) quantification, intuitively, quantifies over all adversarial schedules similar to the current (r, w): it requires \(\varphi \) to hold over all sequences obtained from the current sequence \(\sigma \) by stretching or compressing the segments between read and write points, and choosing different values for x on those segments.

3 Symbolic Asynchronous Synthesis

Pnueli and Rosner’s procedure for asynchronous synthesis [32] is as follows: first, a Büchi automaton is built for the kernel formula \(\lnot \mathcal {K}\). This automaton is then determinized and complemented to form a deterministic word automaton for \(\mathcal {K}\), which is then re-interpreted as a tree automaton and tested for non-emptiness. The transformations use standard constructions, except for the interpretation of the \(\exists ^{\approx }\) operator in the formation of the Büchi automaton for \(\lnot \mathcal {K}\). For a Büchi automaton A, an automaton for \(\exists ^{\approx }\mathcal {L}(A)\) is constructed in two steps: first applying a “stretching” transformation on A, followed by a “compressing” transformation. Stretching introduces new automaton states of the form (q, a), for each state q of A and each letter a.

When this general construction is applied to the formula \(\lnot \mathcal {K}\), the alphabet of the automaton A is formed of all possible valuations of the pair of variables (x, y), which has size exponential in the number of interface bits. The stretching step introduces a copy of an automaton state for each letter, which results in an exponential blow-up of the state space of the constructed automaton. As all current tools for synchronous synthesis represent automaton states explicitly, the exponential blowup introduced by the stuttering quantification is a significant obstacle to implementation.

In Pnueli-Rosner’s construction, the determinization and complementation steps are also complex, utilizing Safra’s construction. These steps are simplified by the “Safraless” procedure adopted in current tools for synchronous synthesis.

The other major issue with the Pnueli-Rosner construction is that the kernel formula \(\mathcal {K}\) introduces the scheduling variables r and w as input variables. However, the actions of a synthesized program should not rely on the values of these variables. Pnueli-Rosner ensure this by checking satisfiability over “canonical” tree models; it is unclear, however, how to realize this effect using a synchronous synthesis tool as a black box.

We define a new property, \(\mathsf {PR}(\varphi )\), that differs from \(\mathcal {K}\) but, similarly, is synchronously realizable if, and only if, \(\varphi \) is asynchronously realizable. We then present an automaton construction for \(\mathsf {PR}(\varphi )\) that bypasses the general construction for \(\exists ^{\approx }\), avoiding the exponential blowup and resulting in an automaton with at most twice the states of the original. Moreover, this construction refers only to x and y, avoiding the second issue as well. We then show that this construction can be implemented fully symbolically.

3.1 Basic Formulations and Properties

As formulated in Sect. 2, an asynchronous execution of f is determined by the schedule (r, w). For a strict schedule, any infinite sequence representing an asynchronous behavior of f over (r, w) may be partitioned into a sequence of blocks, as follows. The start of the i’th block is at the i’th writing point, w(i), and it ends just before the \(i\,+\,1\)’st writing point, \(w(i+1)\). The schedule ensures the i’th block includes the i’th reading point, r(i), associated with the input-output value \((x_i,y_i)\). As the value of y changes only at writing points, \(y_i\) is constant in the i’th block. Thus, the i’th block follows the pattern \((\bot ,y_i)^{*}(x_i,y_i)(\bot ,y_i)^{*}\), where \(\bot \) denotes an arbitrary choice of x-value. Figure 1 illustrates a strict asynchronous computation and its decomposition into blocks.

Expansions. The set of expansions of a sequence \(\delta =(x_0,y_0) (x_1,y_1) \ldots \) consists of all sequences obtained by simultaneously replacing each \((x_i,y_i)\) in \(\delta \) by a block with the pattern \((\bot ,y_i)^{*} (x_i,y_i) (\bot ,y_i)^{*}\). Formally, given sequences \(\delta =(x_0,y_0) (x_1,y_1) \ldots \) and \(\sigma = (\bar{x}_0, \bar{y}_0) (\bar{x}_1,\bar{y}_1) \ldots \), \(\delta \) expands to \(\sigma \), denoted as \(\delta \,\mathsf {exp}\,\sigma \), if there exists an asynchronous schedule \((\hat{r}, \hat{w})\) under which \(\sigma \) follows the block pattern of \(\delta \), i.e., for all i, \(x_i = \bar{x}_{\hat{r}(i)}\) and \(y_i = \bar{y}_{\hat{w}(i)}\), and for all j such that \(\hat{w}(i) \le j < \hat{w}(i+1)\), it is the case that \(\bar{y}_j = \bar{y}_{\hat{w}(i)}\). The inverse relation (read as contracts to) is denoted by \(\,\mathsf {exp}\,^{-1}\). Figure 2 shows the synchronous computation that contracts the computation shown in Fig. 1.
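On finite prefixes, the expansion relation can be checked by a small nondeterministic scan over block boundaries. The sketch below is our own encoding (not from the paper): it tracks states (i, seen), meaning the scan is inside block i and seen records whether the block's designated read point has occurred.

```python
# A finite-prefix check of the relation "delta exp sigma"; the encoding and
# all names are our own illustration. delta must be nonempty.

def expands(delta, sigma):
    """sigma must split into consecutive blocks, block i matching
    (bot, y_i)* (x_i, y_i) (bot, y_i)*, where bot is an arbitrary x-value."""
    delta, sigma = list(delta), list(sigma)
    states = {(0, False)}                     # (block index, read point seen?)
    for (xb, yb) in sigma:
        nxt = set()
        for (i, seen) in states:
            # the letter extends the current block, its x treated as don't-care
            if yb == delta[i][1]:
                nxt.add((i, seen))
                if xb == delta[i][0]:
                    nxt.add((i, True))        # or designate it as the read point
            # the current block is complete; the letter opens block i+1
            if seen and i + 1 < len(delta) and yb == delta[i + 1][1]:
                nxt.add((i + 1, False))
                if xb == delta[i + 1][0]:
                    nxt.add((i + 1, True))
        states = nxt
    return (len(delta) - 1, True) in states
```

Note that a sequence expands to itself (each block is a single read-write position), reflecting the reflexivity of \(\,\mathsf {exp}\,\) used in Theorem 3.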

Relational Operators. For a relation R, the modal operators \(\langle {R}\rangle \) and \([{R}]\) are defined as follows. For any set S,

$$ u \in \langle {R}\rangle S \,\equiv \,(\exists v: uRv \,\wedge \,v\in S) \qquad \quad u \in [{R}]S \,\equiv \,(\forall v: uRv \,\Rightarrow \,v\in S) $$

By definition, the operators are negation duals, i.e., \(\lnot {\langle {R}\rangle (\lnot {S})} = [{R}](S)\) for any R and any S. For an LTL formula \(\varphi \) and a relation R over infinite sequences, we let \(\langle {R}\rangle \varphi \) abbreviate \(\langle {R}\rangle (\mathcal {L}(\varphi ))\), and similarly, let \([{R}]\varphi \) abbreviate \([{R}](\mathcal {L}(\varphi ))\).

Galois Connections. Given partial orders \((A,\preceq _{A})\) and \((B,\preceq _{B})\), a pair of functions \(g: A \rightarrow B\) and \(h: B \rightarrow A\) form a Galois connection if, for all \(a\in A,b\in B\): \(g(a) \preceq _{B} b\) is equivalent to \(a \preceq _{A} h(b)\). From the definitions, it is clear that the operators \((\langle {{R}^{-1}}\rangle , [{R}])\) form a Galois connection over the partial orders defined by the subset relation. I.e., for any sets S and T: \(\langle {{R}^{-1}}\rangle S \subseteq T\) iff, \(S \subseteq [{R}]T\).
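The Galois connection can be sanity-checked exhaustively on a small finite universe. The following sketch (names are ours) enumerates all pairs of sets S, T over a three-element universe and compares \(\langle {{R}^{-1}}\rangle S \subseteq T\) with \(S \subseteq [{R}]T\):

```python
# An exhaustive finite check that (<R^-1>, [R]) is a Galois connection.
# All names are our own illustration.
from itertools import chain, combinations

def diamond(R, S):      # u in <R>S  iff  exists v: uRv and v in S
    return {u for (u, v) in R if v in S}

def box(R, U, T):       # u in [R]T  iff  for all v: uRv implies v in T
    return {u for u in U if all(v in T for (u2, v) in R if u2 == u)}

def subsets(U):
    return (set(c) for c in
            chain.from_iterable(combinations(sorted(U), k)
                                for k in range(len(U) + 1)))

U = {1, 2, 3}
R = {(1, 2), (1, 3), (2, 3)}
Rinv = {(v, u) for (u, v) in R}
galois_holds = all((diamond(Rinv, S) <= T) == (S <= box(R, U, T))
                   for S in subsets(U) for T in subsets(U))
```

The check succeeds for every relation R, since \(\langle {{R}^{-1}}\rangle S\) is the image of S under R, and the image is contained in T exactly when every R-successor of an element of S lies in T.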

Fig. 1. A strict asynchronous computation for f. Values of x at non-reading points are shown as dotted. The y-value is constant between writing points, illustrated by a solid rectangle. Blocks are shown as dashed rectangles.

We first establish that the asynchronous executions of f are precisely the synchronous executions of f under an inverse expansion.

Theorem 1

For an implementation f, \(\mathsf {asynch}(f) = \langle {\,\mathsf {exp}^{-1}\,}\rangle \mathsf {synch}(f)\).

Proof

(ping) Let \(\sigma \) be an execution in \(\mathsf {asynch}(f)\), generated for some schedule (r, w). For any k, consider the k’th block of \(\sigma \). This is the set of positions from w(k) to \(w(k+1)-1\), which includes the k’th reading point r(k), say with the value \((x_k,y_k)\). Then the block follows the pattern \((\bot ,y_k)^*(x_k,y_k)(\bot ,y_k)^*\). So \(\sigma \) is an expansion of the sequence \(\delta =(x_0,y_0) (x_1,y_1) \ldots \). By the definition of an asynchronous execution, the value \(y_{k+1} = f(x_0,\ldots ,x_k)\). This is precisely the requirement for \(\delta \) to be a synchronous execution of f. Hence, there is a \(\delta \) such that \(\delta \,\mathsf {exp}\,\sigma \) and \(\delta \in \mathsf {synch}(f)\). Therefore, \(\sigma \in \langle {\,\mathsf {exp}^{-1}\,}\rangle \mathsf {synch}(f)\).

(pong) Let \(\sigma \) be in \(\langle {\,\mathsf {exp}^{-1}\,}\rangle \mathsf {synch}(f)\). By definition, there is a \(\mathsf {synch}(f)\) execution \(\delta =(x_0,y_0) (x_1,y_1) \ldots \) such that \(\delta \,\mathsf {exp}\,\sigma \). As \(\delta \) is a synchronous execution of f, the value \(y_{k+1}=f(x_0,x_1,\ldots ,x_k)\), for all k. Then \(\sigma \) is an asynchronous execution of f under the schedule where the k-th reading point is the point that the k’th entry, \((x_k,y_k)\), from \(\delta \) is mapped to in \(\sigma \), and the \((k+1)\)-th writing point is the first point of the \((k+1)\)’st block in the expansion.    \(\square \)

Fig. 2. The contracted synchronous (Moore) computation.

We now use the Galois connection to show how asynchronous synthesis can be reduced to an equivalent synchronous synthesis task. Consider a property \(\varphi \) that must hold asynchronously for an implementation f.

Theorem 2

Let f be an implementation function, and \(\varphi \) a property. Then \(\mathsf {asynch}(f) \subseteq \mathcal {L}(\varphi )\) if, and only if, \(\mathsf {synch}(f) \subseteq [{\,\mathsf {exp}\,}]\varphi \).

Proof

From Theorem 1, \(\mathsf {asynch}(f) \subseteq \mathcal {L}(\varphi )\) holds iff \(\langle {\,\mathsf {exp}^{-1}\,}\rangle \mathsf {synch}(f) \subseteq \mathcal {L}(\varphi )\) does. By the Galois connection, this is equivalent to \(\mathsf {synch}(f) \subseteq [{\,\mathsf {exp}\,}]\varphi \).    \(\square \)

3.2 The Pnueli-Rosner Closure

We refer to the property \([{\,\mathsf {exp}\,}]\varphi \) as the Pnueli-Rosner closure of \(\varphi \), in honor of their pioneering work on this problem, and denote it by \(\mathsf {PR}(\varphi )\). This has interesting mathematical properties, which are useful in practice.

Theorem 3

\(\mathsf {PR}(\varphi ) = [{\,\mathsf {exp}\,}]\varphi \) has the following properties.

  1. (Closure) \(\mathsf {PR}\) is monotonic and a downward closure, i.e., \(\mathsf {PR}(\varphi ) \subseteq \mathcal {L}(\varphi )\).

  2. (Conjunctivity) \(\mathsf {PR}\) is conjunctive, i.e., \(\mathsf {PR}(\bigwedge _i \varphi _i) = \bigcap _i \mathsf {PR}(\varphi _i)\).

  3. (Safety Preservation) If \(\varphi \) is a safety property, so is \(\mathsf {PR}(\varphi )\).

The closure property relies on the reflexivity and transitivity of \(\,\mathsf {exp}\,\), and that \([{R}]\) is monotonic for every R. Conjunctivity follows from the conjunctivity of \([{R}]\) for any R. Safety preservation is based on the Alpern-Schneider [4] formulation of safety over infinite words. Proofs are in the full version of the paper.

Conjunctivity is exploited by the tools Acacia+ [8] and Unbeast [11] to optimize the synchronous synthesis procedure. The Unbeast tool also separates out safety from non-safety sub-properties to optimize the synthesis procedure. Thus, if a specification \(\varphi \) has the form \(\varphi _1 \,\wedge \,\varphi _2\), where \(\varphi _1\) is a safety property, then \(\mathsf {PR}(\varphi ) = \mathsf {PR}(\varphi _1) \,\cap \,\mathsf {PR}(\varphi _2)\) is likewise the intersection of the safety property \(\mathsf {PR}(\varphi _1)\) with another property.

3.3 The Closure Automaton Construction

By negation duality, \(\mathsf {PR}(\varphi )\) equals \(\lnot {\langle {\,\mathsf {exp}\,}\rangle (\lnot \varphi )}\). We use this property to reduce asynchronous to synchronous synthesis, as follows.

  1. Construct a non-deterministic Büchi automaton A for \(\lnot \varphi \).

  2. Transform A to a non-deterministic Büchi automaton B for the negated Pnueli-Rosner closure of \(\varphi \), i.e., the language of B is \(\langle {\,\mathsf {exp}\,}\rangle \mathcal {L}(A) = \langle {\,\mathsf {exp}\,}\rangle (\lnot \varphi )\).

  3. Consider the structure of B as that of a universal co-Büchi automaton, which has language \(\lnot {\mathcal {L}(B)}\).

  4. Synthesize an implementation f in the synchronous model which satisfies \(\lnot {\mathcal {L}(B)} = \lnot {\langle {\,\mathsf {exp}\,}\rangle \mathcal {L}(A)} = \lnot {\langle {\,\mathsf {exp}\,}\rangle (\lnot \varphi )} = [{\,\mathsf {exp}\,}]\varphi = \mathsf {PR}(\varphi )\).

The new step is the second one, which constructs B from A; the others use standard constructions and tools. This construction is as follows.

  • The states and alphabet of B are the states and alphabet of A.

  • The transitions of B are determined by a saturation procedure. For every pair of states \(q,q'\), and letter (x, y), let \(\varPi (q,(x,y),q')\) be the set of paths in A from q to \(q'\) where the sequence of letters on the path matches the expansion pattern \((\bot ,y)^{*}(x,y)(\bot ,y)^*\). The transition \((q,(x,y),q')\) is in B if, and only if, this set is non-empty.

  • If some path in \(\varPi (q,(x,y),q')\) passes through a green (accepting) state of A, the transition \((q,(x,y),q')\) in B is colored “green” and that path is assigned as the witness to the transition in B. On the other hand, if none of the paths in \(\varPi (q,(x,y),q')\) pass through a green state, this transition is not colored in B, and one of the paths in the set is chosen as the witness for this transition.

  • The automaton B inherits the accepting (“green”) states of A and it may have, in addition, green transitions introduced as defined above.

  • A sequence is accepted by B if there is a run of B on the sequence such that either there are infinitely many green states, or infinitely many green transitions on that run.
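The saturation procedure can be prototyped with explicit states. The sketch below (function names are ours; the paper's implementation is symbolic) first computes fixed-y reachability, including the pairs witnessed by a path through a green state, then saturates the transitions of B and marks the green ones:

```python
# An explicit-state sketch of the saturation step; all names are ours.
# delta holds triples (q, (x, y), q') of automaton A.

def fixed_paths(states, delta, green, y):
    """Pairs (q, q') joined in A by a path of length >= 0 with y fixed,
    and the subset of pairs witnessed by a path through a green state."""
    E = {(q, q) for q in states}              # length-0 paths
    changed = True
    while changed:
        changed = False
        for (q1, (x, yy), q2) in delta:
            if yy != y:
                continue
            for (a, b) in list(E):
                if a == q2 and (q1, b) not in E:
                    E.add((q1, b))
                    changed = True
    Eg = {(a, b) for (a, r) in E if r in green for (r2, b) in E if r2 == r}
    return E, Eg

def saturate(states, delta, green):
    """B has (q, (x, y), q') iff some A-path q ->* q' matches the pattern
    (bot, y)* (x, y) (bot, y)*; it is green iff a witness visits green."""
    transB, greenB = set(), set()
    for y in {yy for (_, (_, yy), _) in delta}:
        E, Eg = fixed_paths(states, delta, green, y)
        for (r, (x, yy), r2) in delta:
            if yy != y:
                continue
            for (q, rr) in E:
                if rr != r:
                    continue
                for (rr2, q2) in E:
                    if rr2 != r2:
                        continue
                    transB.add((q, (x, y), q2))
                    if (q, r) in Eg or (r2, q2) in Eg:
                        greenB.add((q, (x, y), q2))
    return transB, greenB
```

For instance, with A having states {0, 1}, green state 1, and edges 0 -(a,0)-> 0, 0 -(b,0)-> 1, 1 -(a,0)-> 1, the transition (0, (a, 0), 1) appears in B and is green: the witness path uses the (b, 0)-edge as a don't-care step and passes through state 1.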

We establish that \(\mathcal {L}(B) = \langle {\,\mathsf {exp}\,}\rangle \mathcal {L}(A)\) through the following two lemmas.

Lemma 1

\(\langle {\,\mathsf {exp}\,}\rangle \mathcal {L}(A) \subseteq \mathcal {L}(B)\).

Proof

Let \(\delta = (x_0,y_0) (x_1,y_1) \ldots \) be a sequence in \(\langle {\,\mathsf {exp}\,}\rangle \mathcal {L}(A)\). By definition, there exists a sequence \(\sigma \) in \(\mathcal {L}(A)\) such that \(\delta \,\mathsf {exp}\,\sigma \). The expansion \(\sigma \) follows the pattern \([(\bot ,y_0)^{*} (x_0,y_0) (\bot ,y_0)^{*} ][(\bot ,y_1)^{*} (x_1,y_1) (\bot ,y_1)^{*} ]\ldots \), where \([\ldots ]\) are used merely to indicate the boundaries of a block. An accepting run of A on \(\sigma \) has the form \(q_0 [(\bot ,y_0)^{*} (x_0,y_0) (\bot ,y_0)^{*} ]q_1 [(\bot ,y_1)^{*} (x_1,y_1) (\bot ,y_1)^{*} ]q_2 \ldots \), where the states on the run inside a block have been elided. By the definition of B, the segment \(q_0 (\bot ,y_0)^{*} (x_0,y_0) (\bot ,y_0)^{*} q_1\) induces a transition from \(q_0\) to \(q_1\) in B on the letter \((x_0,y_0)\). Similarly, the following segment induces a transition from \(q_1\) to \(q_2\) on letter \((x_1,y_1)\), and so forth. These transitions together form a run \(q_0 (x_0,y_0) q_1 (x_1,y_1) q_2 \ldots \) of B on \(\delta \).

If one of the \(\{q_i\}\) is green and appears infinitely often on the run on \(\sigma \), the induced run on \(\delta \) is accepting. Otherwise, as the run on \(\sigma \) is accepting, some green state of A occurs in the interior of infinitely many segments of that run. The transitions of B induced by those segments must be green, so the corresponding run on \(\delta \) has infinitely many green edges, and is accepting for B.    \(\square \)

Lemma 2

\(\mathcal {L}(B) \subseteq \langle {\,\mathsf {exp}\,}\rangle \mathcal {L}(A)\).

Proof

Let \(\delta \) be accepted by B. We show that there is a \(\sigma \) such that \(\delta \,\mathsf {exp}\,\sigma \) and \(\sigma \) is accepted by A. Let \(\delta \) have the form \((x_0,y_0) (x_1,y_1) \ldots \). Denote the accepting run of B on \(\delta \) by \(r = q_0 (x_0,y_0) q_1 (x_1,y_1) \ldots \). From the construction of B, the transition from \(q_0\) to \(q_1\) on \((x_0,y_0)\) has an associated witness path through A from \(q_0\) to \(q_1\), which follows the expansion pattern \((\bot ,y_0)^{*} (x_0,y_0) (\bot ,y_0)^{*}\) on its edge labels. Stitching together the witness paths for each transition of r, we obtain both a sequence \(\sigma \) that is an expansion of \(\delta \) and a run \(r'\) of A on \(\sigma \).

As r is accepting for B, it must pass infinitely often through either a green state or a green edge. If it passes through a green state infinitely often, that state appears infinitely often on \(r'\). If r passes through a green edge infinitely often, the witness path for that edge contains a green state of A, say q; as this path is repeated infinitely often on \(\sigma \), q appears infinitely often on \(r'\). In either case, a green state of A appears infinitely often on \(r'\), which is therefore an accepting run of A on \(\sigma \).    \(\square \)

Automaton B can be placed in standard form by converting its green edges to green states as follows, forming a new automaton, \(\hat{B}\). Form a green copy of the state space, i.e., for each state q, form a green variant, G(q), which is marked as an accepting state. Set up transitions as follows. If \((q,a,q')\) is an original non-green transition, then \((q,a,q')\) and \((G(q),a,q')\) are new transitions. If \((q,a,q')\) is an original green transition, then \((q,a,G(q'))\) and \((G(q),a,G(q'))\) are new transitions. This at most doubles the size of the automaton. It is straightforward to establish that \(\mathcal {L}(B) = \mathcal {L}(\hat{B})\).
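The conversion is mechanical; here is a sketch (our own encoding, representing G(q) as the tagged pair ('G', q), and leaving initial states untouched):

```python
# A sketch of the standard-form conversion of B to B-hat; the encoding of
# the green copy G(q) as ('G', q) is our own choice.

def to_standard_form(states, transB, greenB, green_states):
    """transB: all transitions (q, a, q'); greenB <= transB are green edges;
    green_states are B's inherited accepting states."""
    G = lambda q: ('G', q)
    states2 = set(states) | {G(q) for q in states}      # at most doubles |Q|
    delta2 = set()
    for (q, a, q2) in transB:
        # a green edge redirects to the green copy of its target
        tgt = G(q2) if (q, a, q2) in greenB else q2
        delta2.add((q, a, tgt))       # from the plain copy of q
        delta2.add((G(q), a, tgt))    # and from the green copy of q
    green2 = {G(q) for q in states} | set(green_states)
    return states2, delta2, green2
```

A run of B-hat then visits a green copy exactly when the corresponding run of B crosses a green edge, so infinitely many green edges become infinitely many green states.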

3.4 Symbolic Construction

The symbolic construction of \(\hat{B}\) closely follows the definitions above. It is easily implemented with BDDs representing predicates on the input and output variables x and y. The crucial step is to use fixpoints to formulate the existence of paths in the set \(\varPi \) used in the definition of B. These definitions are similar to the fixpoint definition of the CTL modality \(\mathsf {EF}\). We use \(A(q,(x,y),q')\) to denote the predicate on (x, y) describing the transition from q to \(q'\) in automaton A.

Fixed Don’t-Care Path. Let \(\mathsf {EfixedY}(q,y,q')\) hold if there is a path of length 0 or more from q to \(q'\) in A where the value of y is fixed. This is the least fixpoint (in Z) of the following implications:

  • \((q'=q) \,\Rightarrow \,Z(q,y,q')\), and

  • \((\exists x,r: A(q,(x,y),r) \,\wedge \,Z(r,y,q')) \,\Rightarrow \,Z(q,y,q')\)

The predicate \(A^{\bot }(q,y,r) = (\exists x: A(q,(x,y),r))\) is pre-computed. Then, the least fixpoint is computed iteratively as follows.

$$\begin{aligned}&\mathsf {EfixedY}^0(q,y,q')=(q=q') \\&\mathsf {EfixedY}^{i+1}(q,y,q') = \mathsf {EfixedY}^{i}(q,y,q') \,\vee \,(\exists r: A^{\bot }(q,y,r) \,\wedge \,\mathsf {EfixedY}^{i}(r,y,q')) \end{aligned}$$
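The same least-fixpoint iteration can be mirrored on an explicit transition relation. The following sketch (illustrative names; the tool performs this computation on BDDs) represents \(\mathsf {EfixedY}\) as a set of triples:

```python
# Explicit-state sketch of the EfixedY fixpoint: (q, y, q') is in the result
# iff there is a path of length >= 0 from q to q' in A along which the output
# value y stays fixed. A is given as a set of transitions (q, (x, y), q').
def efixed_y(states, transitions, y_values):
    # Pre-compute A_bot(q, y, r) = exists x. A(q, (x, y), r).
    a_bot = {(q, yv, r) for (q, (xv, yv), r) in transitions}
    # EfixedY^0 is the diagonal (q = q'), for every y value.
    fix = {(q, yv, q) for q in states for yv in y_values}
    changed = True
    while changed:  # iterate EfixedY^{i+1} until the fixpoint is reached
        changed = False
        for (q, yv, r) in a_bot:
            for (r2, yv2, q2) in list(fix):
                if (r2, yv2) == (r, yv) and (q, yv, q2) not in fix:
                    fix.add((q, yv, q2))
                    changed = True
    return fix
```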

Let predicate \(\mathsf {green}_{A}(r)\) be true for an accepting state r of A. The predicate \(\mathsf {Efixedgreen}(q,y,q')\) holds if there is a fixed y-path from q to \(q'\) where one of the states on it is green:

$$\begin{aligned} \mathsf {Efixedgreen}(q,y,q') = (\exists r: \mathsf {EfixedY}(q,y,r) \,\wedge \,\mathsf {green}_A(r) \,\wedge \,\mathsf {EfixedY}(r,y,q')) \end{aligned}$$

Paths and Green Paths. Let \(\mathsf {Epath}(q,(x,y),q')\) hold if there is a path following the block pattern \((\bot ,y)^{*} (x,y) (\bot ,y)^{*}\) from q to \(q'\) in A. Then,

$$\begin{aligned} \mathsf {Epath}(q,(x,y),q') = (\exists r,r': \mathsf {EfixedY}(q,y,r) \,\wedge \,A(r,(x,y),r') \,\wedge \,\mathsf {EfixedY}(r',y,q')) \end{aligned}$$

Similarly, let \(\mathsf {Egreenpath}(q,(x,y),q')\) hold if there is a path following the block pattern \((\bot ,y)^{*} (x,y) (\bot ,y)^{*}\) from q to \(q'\) in A, with an intermediate green state.

$$\begin{aligned} \mathsf {Egreenpath}&(q,(x,y),q') = \\&(\exists r,r': \mathsf {Efixedgreen}(q,y,r) \,\wedge \,A(r,(x,y),r') \,\wedge \,\mathsf {EfixedY}(r',y,q')) \,\vee \,\\&(\exists r,r': \mathsf {EfixedY}(q,y,r) \,\wedge \,A(r,(x,y),r') \,\wedge \,\mathsf {Efixedgreen}(r',y,q')) \end{aligned}$$
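These path predicates, too, can be mirrored on an explicit-state automaton. The sketch below uses illustrative names and sets of triples in place of the BDD predicates of the actual tool:

```python
# Explicit-state sketch of Epath / Egreenpath. The automaton A is a set of
# transitions (q, (x, y), q'); green is its set of accepting states.
def fixed_y_reach(states, transitions, y_values):
    """EfixedY as a set of triples (q, y, q'): reachability with y fixed."""
    reach = {(q, yv, q) for q in states for yv in y_values}
    step = {(q, yv, r) for (q, (xv, yv), r) in transitions}
    changed = True
    while changed:
        changed = False
        for (q, yv, r) in step:
            for (r2, yv2, q2) in list(reach):
                if (r2, yv2) == (r, yv) and (q, yv, q2) not in reach:
                    reach.add((q, yv, q2))
                    changed = True
    return reach

def path_predicates(states, transitions, y_values, green):
    """Return (Epath, Egreenpath) as sets of triples (q, (x, y), q')."""
    reach = fixed_y_reach(states, transitions, y_values)
    # Efixedgreen: a fixed-y path that passes through a green state r.
    fixed_green = {(q, yv, q2)
                   for (q, yv, r) in reach if r in green
                   for (r2, yv2, q2) in reach if (r2, yv2) == (r, yv)}
    epath, egreenpath = set(), set()
    for (r, (xv, yv), r2) in transitions:        # the single (x, y) step
        for (q, y1, rr) in reach:
            if (y1, rr) != (yv, r):
                continue
            for (rr2, y2, q2) in reach:
                if (rr2, y2) != (r2, yv):
                    continue
                epath.add((q, (xv, yv), q2))
                # Green if either fixed-y segment passes through a green state.
                if (q, yv, r) in fixed_green or (r2, yv, q2) in fixed_green:
                    egreenpath.add((q, (xv, yv), q2))
    return epath, egreenpath
```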

State Space of \(\hat{B}\). The state space of \(\hat{B}\) is formed by pairs (q, g), where q is a state of A and g is a Boolean indicating whether it is a new green state. The accepting condition \(\mathsf {green}_{\hat{B}}(q,g)\) of \(\hat{B}\) is given by \(\mathsf {green}_{A}(q) \,\vee \,g\).

Initial States. The initial predicate \(I_{\hat{B}}(q,g)\) is \(I_A(q) \wedge \lnot g\), where \(I_A(q)\) is true for initial states of the input automaton A.

Transition Relation of \(\hat{B}\). The transition relation \(\hat{B}((q,g),(x,y),(q',g'))\) is

$$\begin{aligned} \hat{B}((q,g),(x,y),(q',g')) = \mathsf {Epath}(q,(x,y),q') \,\wedge \,(g' \,\equiv \,\mathsf {Egreenpath}(q,(x,y),q')) \end{aligned}$$
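Given the two path predicates, assembling \(\hat{B}\) is a direct product construction. A sketch with illustrative names, taking Epath and Egreenpath as precomputed sets of triples:

```python
# Sketch: form the transitions and acceptance of B-hat from precomputed
# Epath / Egreenpath sets of triples (q, (x, y), q'). A B-hat state is a
# pair (q, g); the initial predicate would additionally require not g.
def bhat(epath, egreenpath, green_a):
    transitions = set()
    for (q, a, q2) in epath:
        g2 = (q, a, q2) in egreenpath   # g' <=> Egreenpath(q, (x, y), q')
        for g in (False, True):
            transitions.add(((q, g), a, (q2, g2)))
    accepting = lambda q, g: q in green_a or g   # green_Bhat(q, g)
    return transitions, accepting
```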

4 Implementation and Experiments

The \(\mathsf {PR}\) algorithm has been implemented in a framework called BAS (Bounded Asynchronous Synthesis). It uses the LTL-to-automaton converter LTL3BA [3, 6] and follows the modular method, connecting to either of two solvers, BoSy [2, 12] or Acacia+ [1, 8], to decide the synchronous realizability of \(\mathsf {PR}(\varphi )\). The \(\mathsf {PR}\) construction is implemented in about 1200 lines of OCaml, using an external BDD library. (The core construction requires only about 400 lines of code.) For an \(\mathsf {LTL}\) specification \(\varphi \), the BAS workflow for asynchronous synthesis is as follows:

  1. Check whether \(\varphi \) is synchronously realizable; if not, return UNREALIZABLE.

  2. Construct Büchi automata A for \(\lnot \varphi \), and \(\hat{A}\) for \(\varphi \).

  3. Concurrently:

     (a) Construct \(\mathsf {PR}(\varphi )\) from A and check whether it is synchronously realizable; if so, return REALIZABLE and synthesize the implementation.

     (b) Construct \(\mathsf {PR}(\lnot \varphi )\) from \(\hat{A}\) and check whether it is synchronously realizable for the environment; if so, return UNREALIZABLE.

     Upon termination of either, terminate the other execution as well.

The synchronous synthesis tools successively increase a bound until a limit (computed based on automaton structure) is reached. Thus, in theory, only the check in step 3(a) is needed. However, the checks in steps 1 and 3(b) may allow the tool to terminate early (before reaching the limit bound), if a winning strategy for the environment can be discovered.

To evaluate BAS we consider the list of examples presented in Table 1. The reported experiments were performed on a VM configured to have 8 CPU cores at 2.4 GHz and 8 GB RAM, running 64-bit Linux. The running times are reported in milliseconds. For each specification (presented in the second column) we report whether it is asynchronously realizable (third column), the time for the \(\mathsf {PR}\) construction (our contribution), and the times for checking whether the specification is realizable using the BoSy and Acacia+ solvers (fifth and sixth columns, respectively).

Table 1. BAS asynchronous synthesis runtime evaluation (times in milliseconds). We let BoSy run up to 2 h, and Acacia+ up to 1000 iterations. “Na” denotes cases where the executions did not find a winning strategy within these bounds.

The first set of examples (Specifications 1–11) lists specifications discussed in this paper and in related work. As parameterized examples we consider two variants of arbiter specifications. The arbiter has n inputs on which clients request permissions, and n outputs on which the clients are granted permissions. In both variants of the arbiter example, no two grants may be set simultaneously. The first arbiter example (Specification 12) requires that whenever an input request \(r_i\) is set, the corresponding output grant \(g_i\) must eventually be set. The second variant (Specification 13) additionally requires that a grant \(g_i\) is set only if request \(r_i\) is set as well; that is, for a client to be granted a permission, its corresponding request must be constantly set. Since an asynchronous program cannot observe the request between read events, this variant of the arbiter is not realizable. The results are shown for \(n=2,4,6\). Note that the only comparable experimental evaluation is given in [18], which reports that asynchronous synthesis of the first arbiter example (Specification 12) takes over 8 h.

The second specification \(\varphi \) is the one discussed in Sect. 2. It is surprisingly difficult to solve. Both \(\varphi \) and its negation are asynchronously unrealizable. Moreover, \(\varphi \) is synchronously realizable. Thus, the early detection tests (steps 1 and 3(b)) failed to discover a winning strategy for the environment; the bounded synthesis tools increase the considered bound monotonically without converging to an answer in a reasonable amount of time. This example highlights the need for better tests for unrealizability. The results in the following section provide simple QBF tests of unrealizability for subclasses of LTL.

5 Efficiently Solvable Subclasses of LTL

The high complexity of direct LTL (synchronous) synthesis has encouraged the search for general procedures that work well in practice, such as Safraless and bounded synthesis [24, 35]. Another useful direction has been to identify fragments of LTL with efficient synthesis algorithms [5]. Among the most noteworthy is the GR(1) subclass, for which there is an efficient, symbolic synthesis procedure [28]. We explore this direction for asynchronous synthesis. Surprisingly, we show that synthesis for certain fragments of LTL can be reduced to Boolean reasoning over properties in QBF. The results cover several types of GR(1) formulae, although the question of a reduction for all of GR(1) remains open.

The QBF formulae that arise have the form \(\exists y \forall x. p(x,y)\), where x and y are disjoint sets of variables, and p is a propositional formula over x and y. An assignment \(y=b\) for which \(\forall x. p(x,b)\) holds is called a witness to the formula. The first such reduction is for the property \(\square \,\lozenge \,P\).
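For the small numbers of input and output bits that arise in such examples, formulas of this form can be checked by brute-force enumeration. The following sketch is purely illustrative (it is not the solver used in the paper) and returns a witness when one exists:

```python
# Brute-force evaluation of  exists y . forall x . p(x, y)  over Boolean
# variables. p is any Python predicate over two tuples of Booleans.
from itertools import product

def exists_forall(p, n_x, n_y):
    """Return a witness b with forall x. p(x, b), or None if there is none."""
    for b in product((False, True), repeat=n_y):
        if all(p(a, b) for a in product((False, True), repeat=n_x)):
            return b
    return None
```

For instance, `exists_forall(lambda x, y: x[0] == y[0], 1, 1)` returns `None`: no fixed output value agrees with every input, matching the unrealizability of \(\square \,\lozenge \,(x \,\equiv \,y)\) discussed after Theorem 4.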

Theorem 4

\(\varphi = \square \,\lozenge \,P\) is asynchronously realizable iff \(\exists y \forall x P\) is \(\mathsf {True}\).

Proof

(\(\Leftarrow \)) Let b be a witness to \(\exists y\forall x . P\). The function that constantly outputs \(y=b\) satisfies \(\varphi \) for any asynchronous schedule.

(\(\Rightarrow \)) Let f be a candidate implementation function and suppose that \(\forall y \exists x (\lnot P)\) holds. Fix any schedule. For every value \(y=b\) that function f outputs at a writing point, there exists an input value \(x=a\) such that \(\lnot P(a,b)\) holds. Thus, by issuing \(x=a\) in the interval from the current writing point (with \(y=b\)) up to the next one, the environment can ensure that \(\lnot {P}\) holds throughout the execution. Thus the specification \(\varphi =\square \,\lozenge \,P\) does not hold on this execution.    \(\square \)

The result in Theorem 4 applies to asynchronous synthesis, but does not apply to synchronous synthesis. For example, the property \(\square \,\lozenge \,(x \,\equiv \,y)\) is asynchronously unrealizable, as \(\exists y \forall x (x \,\equiv \,y)\) is \(\mathsf {False}\). On the other hand, it is synchronously realizable with a Mealy machine that sets y to x at each point.

Theorem 4 extends easily to conjunctions and disjunctions of \(\square \,\lozenge \,\) properties.

Theorem 5

Specification \(\varphi = \bigvee _{i=0}^m \square \,\lozenge \,P_i\) is asynchronously realizable iff   \(\exists y \forall x . (\bigvee _{i=0}^m P_i)\) holds. Additionally, specification \(\varphi = \bigwedge _{i=0}^m \square \,\lozenge \,P_i\) is asynchronously realizable iff for all \(i \in \{0, 1, \dots , m\}\), \(\exists y \forall x . P_i\) holds.

Proof

The first claim follows directly from the identity \(\bigvee _{i=0}^m \square \,\lozenge \,P_i \,\equiv \,\square \,\lozenge \,(\bigvee _{i=0}^m P_i)\) and Theorem 4.

For the second claim, for each i, let \(y=b_i\) be an assignment such that \(\forall x. P_i(x,b_i)\) holds. The function that generates the sequence \(b_0, b_1,\dots ,b_m\), ad infinitum, is an asynchronous implementation of \(\bigwedge _{i=0}^m \square \,\lozenge \,P_i\). Conversely, suppose that for some i, \(\forall y \exists x \lnot P_i\) holds. Then, following the construction from Theorem 4, one can define an execution on which \(P_i\) is always \(\mathsf {False}\).    \(\square \)
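The implementation constructed in the proof, which cycles through the witnesses, can be sketched as follows (with illustrative names):

```python
# Sketch of the strategy from Theorem 5: given one witness b_i per conjunct
# (forall x. P_i(x, b_i)), emit the witnesses round-robin. Every P_i then
# holds at infinitely many writing points, regardless of the inputs or of
# the asynchronous schedule.
from itertools import cycle, islice

def round_robin_strategy(witnesses):
    """An infinite output stream b_0, b_1, ..., b_m, b_0, b_1, ..."""
    return cycle(witnesses)

# First six outputs for three witnesses:
first_six = list(islice(round_robin_strategy(["b0", "b1", "b2"]), 6))
```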

Theorem 6

\(\varphi = \lozenge \,\square \,P\) is asynchronously realizable iff \(\exists {y} \forall {x}.P\) is \(\mathsf {True}\).

The proof is similar to that for Theorem 4. Theorem 6 also extends to conjunctions and disjunctions of \(\lozenge \,\square \,\) properties, by arguments similar to those for Theorem 5. Namely, \(\bigwedge _{i=0}^m \lozenge \,\square \,P_i\) is asynchronously realizable iff \(\exists y \forall x (\bigwedge _{i=0}^m P_i)\) is \(\mathsf {True}\), and \(\bigvee _{i=0}^m \lozenge \,\square \,P_i\) is asynchronously realizable iff for some \(i \in \{0, 1, \dots , m\}\), \(\exists y \forall x . P_i\) is \(\mathsf {True}\). Theorems 4–6 apply to non-atomic reads and writes of multiple input and output variables. Proofs are in the full version of the paper.

We now consider a more general type of GR(1) formula. The strict semantics of the GR(1) formula \(\square S_e \wedge \square \,\lozenge \,P \,\Rightarrow \,\square S_s \wedge \square \,\lozenge \,Q\) is defined to be \(\square (\boxminus S_e \,\Rightarrow \,S_s) \wedge (\square S_e \wedge \square \,\lozenge \,P \,\Rightarrow \,\square \,\lozenge \,Q)\); i.e., \(S_s\) is required to hold so long as \(S_e\) has always held in the past, and if \(S_e\) holds always and P holds infinitely often, then Q holds infinitely often. This is the interpretation supported by GR(1) synchronous synthesis tools.

Theorem 7

The strict semantics of GR(1) specification \(\square S_e \wedge \square \,\lozenge \,P \,\Rightarrow \,\square S_s \wedge \square \,\lozenge \,Q\) is asynchronously realizable iff \(\exists y \forall x . (S_e \,\Rightarrow \,( S_s \,\wedge \,(P \,\Rightarrow \,Q)))\) is \(\mathsf {True}\).

Proof

(\(\Leftarrow \)) If \(y=b\) is a witness to \(\exists y \forall x . (S_e \,\Rightarrow \,( S_s \,\wedge \,(P \,\Rightarrow \,Q)))\), let f be a function that always generates b. Suppose \(S_e\) holds up to point i; then, as \(y=b\), regardless of the x-value, \(S_s\) holds at point i. This shows that the first part of the specification holds. For the second, suppose that \(S_e\) holds always and P is true infinitely often. Then, by choice of \(y=b\), \((P \,\Rightarrow \,Q)\) holds always, thus Q holds infinitely often as well.

(\(\Rightarrow \)) To prove the other side of the implication, we proceed as in Theorem 4. Let f be a candidate implementation. Fix a schedule, and suppose that \(\forall y \exists x . (S_e \,\wedge \,( \lnot S_s \,\vee \,\lnot (P \,\Rightarrow \,Q)))\) holds. Then for every value \(y=b\) that function f outputs at a writing point, there exists a value \(x=a\), which the environment can maintain from that writing point to the next, such that \(S_e(a,b)\) is true, and one of \(S_s(a,b)\) or \((P \,\Rightarrow \,Q)(a,b)\) is false at every point in that interval.

On this execution, \(S_e\) holds throughout. If \(S_s\) is false at some point, this violates the first part of the specification. If not, then \((P \,\Rightarrow \,Q)\) must be false everywhere; i.e., at every point P is true but Q is false. Thus, \(S_e\) holds everywhere and P holds infinitely often but Q does not hold infinitely often, violating the second part of the specification.    \(\square \)
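The condition of Theorem 7 is again a single exists-forall query, which for few bits can be checked by enumeration. A sketch with illustrative stand-ins for the predicates (not the paper's tooling):

```python
# Brute-force check of the Theorem 7 condition
#     exists y . forall x . (S_e => (S_s and (P => Q)))
# over Boolean inputs and outputs. s_e, s_s, p, q are Python predicates
# over (x, y), given as functions on tuples of Booleans.
from itertools import product

def gr1_witness(s_e, s_s, p, q, n_x, n_y):
    """Return an output assignment b witnessing realizability, or None."""
    def cond(x, y):
        return (not s_e(x, y)) or (s_s(x, y) and ((not p(x, y)) or q(x, y)))
    for b in product((False, True), repeat=n_y):
        if all(cond(a, b) for a in product((False, True), repeat=n_x)):
            return b
    return None
```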

Theorem 7 applies to atomic reads and writes, showing that asynchronous synthesis of GR(1) specifications can be reduced to Boolean reasoning over properties in QBF. For non-atomic reads and writes, safety in asynchronous systems is more nuanced, since there is a delay between the write points of the first and last outputs in each round. This is discussed in the full version of the paper. This proof strategy does not generalize easily to the full GR(1) format, where more than one \(\square \,\lozenge \,\) property can appear on either side of the implication.

These results establish that the asynchronous synthesis problem for such specifications is easily solvable, indeed more easily than in the synchronous setting, surprisingly avoiding entirely the need for automaton constructions and bounded synthesis. From another, equally valuable, point of view, the results show that such types of specifications may be of limited interest for automated synthesis, as the solvable cases have very simple solutions.

6 Conclusions and Related Work

This work tackles the task of asynchronous synthesis from temporal specifications. The main results are a new symbolic automaton construction for general temporal properties, and the reduction of the synthesis question for several classes of specifications to QBF. These are mathematically interesting, being substantial simplifications of prior methods. Moreover, they make it feasible to implement an asynchronous synthesis tool following the modular process suggested by Pnueli and Rosner in 1989, by reducing asynchronous synthesis to a synchronous synthesis question. To the best of our knowledge, this is the first such tool. The prototype, which builds on tools for synchronous synthesis, is able to quickly synthesize asynchronous programs for several interesting properties. There are, undoubtedly, several challenges, one of which is the quick detection of unrealizable specifications.

Our work builds upon several earlier results, which we discuss here. The synthesis question for temporal properties originates from a question posed by Church in the 1950s (see [37]). The problem of synthesizing a synchronous reactive system from a linear temporal specification was formulated and studied by Pnueli and Rosner [31], who gave a solution based on non-emptiness of tree automata. There has been much progress on the synchronous synthesis question since. Key developments include the discovery of efficient symbolic (BDD-based) solutions for the GR(1) class [7, 28], the invention of “Safraless” procedures [24], the application of these ideas for bounded synthesis [15, 35], and their implementation in a number of tools, e.g. [8, 10, 11, 13, 20, 34]. These have been applied in many settings (cf. [9, 23, 25, 26, 27]).

The problem of synthesizing asynchronous programs was also formulated and studied by Pnueli and Rosner [32] but has proved to be much more challenging, with only limited progress. The original Pnueli-Rosner constructions are complex and were not implemented. Work by Klein, Piterman and Pnueli, nearly 20 years later [22], shows tractability for some GR(1) specifications. However, the class of specifications that can be so handled is characterized by semantic constraints such as stuttering-closure and memorylessness, which are difficult to recognize.

Finkbeiner and Schewe [18, 35] present an alternative method, based on bounded synthesis, that applies to all LTL properties: it encodes the existence of a deductive proof for a bounded program into SAT/SMT constraints. However, the encoding represents inputs and outputs explicitly and is, therefore, exponential in the number of input and output bits. The exponential blowup has practical consequences: an asynchronous arbiter specification requires over 8 h to synthesize [18]; the same specification can be synthesized by our method in seconds. (Note, however, that the method in [18] is not specialized to asynchronous synthesis, and this difference may not be solely due to the explicit state representation, as the specification has only 4 bits.) Recent work gives an alternative encoding of synchronous bounded synthesis into QBF constraints, retaining input and output bits in symbolic form [12]. We believe that a similar encoding applies to asynchronous bounded synthesis as well; this is a topic for future work.

Pnueli and Rosner’s model of interface communication is not the only choice. Other models for asynchrony could, for instance, be based on CCS/CSP-style rendezvous communication at the interface, or permit shared read-write variables with atomic lock/unlock actions. Petri net game models have also been suggested for distributed synthesis [16]. An orthogonal direction is to weaken the adversarial power of the environment through a probabilistic model which can be used to constrain unlikely, highly adversarial input patterns to have probability 0, thus turning the synthesis problem into one where programs satisfy their specifications with high probability. (The synthesis of multiple processes is known to be undecidable in most cases [17, 33].)

In the broader context of fully automatic program synthesis, there are various approaches to the synthesis of single-threaded, terminating programs from formal pre- and post-condition specifications and from examples, using type information and other techniques to prune the search space. (We will not attempt to survey this large field, some examples are [14, 19, 36].) An intriguing question is to investigate how the techniques developed in these distinct lines of work can be fruitfully combined to aid the development of asynchronous, reactive software.