
1 Introduction

Concurrent shared-memory programming is notoriously difficult and error-prone. Program synthesis for concurrency aims to mitigate this complexity by synthesizing synchronization code automatically [4, 5, 8, 11]. However, specifying the programmer’s intent may be a challenge in itself. Declarative mechanisms, such as assertions, suffer from the drawback that it is difficult to ensure that the specification is complete and fully captures the programmer’s intent.

We propose a solution where the specification is implicit. We observe that a core difficulty in concurrent programming originates from the fact that the scheduler can preempt the execution of a thread at any time. We therefore give the developer the option to program assuming a friendly, non-preemptive, scheduler. Our tool automatically synthesizes synchronization code to ensure that every behavior of the program under preemptive scheduling is included in the set of behaviors produced under non-preemptive scheduling. Thus, we use the non-preemptive semantics as an implicit correctness specification.

The non-preemptive scheduling model dramatically simplifies the development of concurrent software, including operating system (OS) kernels, network servers, database systems, etc. [13, 14]. In this model, a thread can only be descheduled by voluntarily yielding control, e.g., by invoking a blocking operation. Synchronization primitives may be used for communication between threads, e.g., a producer thread may use a semaphore to notify the consumer about availability of data. However, one does not need to worry about protecting accesses to shared state: a series of memory accesses executes atomically as long as the scheduled thread does not yield.

In defining behavioral equivalence between preemptive and non-preemptive executions, we focus on externally observable program behaviors: two program executions are observationally equivalent if they generate the same sequences of calls to interfaces of interest. This approach facilitates modular synthesis where a module’s behavior is characterized in terms of its interaction with other modules. Given a multi-threaded program \(\mathcal {C}\) and a synthesized program \(\mathcal {C}'\) obtained by adding synchronization to \(\mathcal {C}\), \(\mathcal {C}'\) is preemption-safe w.r.t. \(\mathcal {C}\) if for each execution of \(\mathcal {C}'\) under a preemptive scheduler, there is an observationally equivalent non-preemptive execution of \(\mathcal {C}\). Our synthesis goal is to automatically generate a preemption-safe version of the input program.

We rely on abstraction to achieve efficient synthesis of multi-threaded programs. We propose a simple, data-oblivious abstraction inspired by an analysis of synchronization patterns in OS code, which tend to be independent of data values. The abstraction tracks the type of each access (read or write) to each memory location while ignoring the values read or written. In addition, the abstraction tracks branching choices. Calls to an external interface are modeled as writes to a special memory location, with independent interfaces modeled as separate locations. To the best of our knowledge, this abstraction has not previously been used in the verification and synthesis literature.

Two abstract program executions are observationally equivalent if they are equal modulo the classical independence relation I on memory accesses: accesses to different locations are independent, and accesses to the same location are independent iff they are both read accesses. Using this notion of equivalence, the notion of preemption-safety is extended to abstract programs.

Under abstraction, we model each thread as a nondeterministic finite automaton (NFA) over a finite alphabet, with each symbol corresponding to a read or a write to a particular variable. This enables us to construct NFAs N, representing the abstraction of the original program \(\mathcal {C}\) under non-preemptive scheduling, and P, representing the abstraction of the synthesized program \(\mathcal {C}'\) under preemptive scheduling. We show that preemption-safety of \(\mathcal {C}'\) w.r.t. \(\mathcal {C}\) is implied by preemption-safety of the abstract synthesized program w.r.t. the abstract original program, which, in turn, is implied by language inclusion modulo I of NFAs P and N. While the problem of language inclusion modulo an independence relation is undecidable [2], we show that the antichain-based algorithm for standard language inclusion [9] can be adapted to decide a bounded version of language inclusion modulo an independence relation.

Our overall synthesis procedure works as follows: we run the algorithm for bounded language inclusion modulo I, iteratively increasing the bound, until it reports that the inclusion holds, or finds a counterexample, or reaches a timeout. In the first case, the synthesis procedure terminates successfully. In the second case, the counterexample is generalized to a set of counterexamples represented as a Boolean combination of ordering constraints over control-flow locations (as in [11]). These constraints are analyzed for patterns indicating the type of concurrency bug (atomicity, ordering violation) and the type of applicable fix (lock insertion, statement reordering). After applying the fix(es), the procedure is restarted from scratch; the process continues until we find a preemption-safe program, or reach a timeout.

We implemented our synthesis procedure in a new prototype tool called Liss (Language Inclusion-based Synchronization Synthesis) and evaluated it on a series of device driver benchmarks, including an Ethernet driver for Linux and the synchronization skeleton of a USB-to-serial controller driver. First, Liss was able to detect and eliminate all but two known race conditions in our examples; these included one race condition that we previously missed when synthesizing from explicit specifications [5], due to a missing assertion. Second, our abstraction proved highly efficient: Liss runs an order of magnitude faster on the more complicated examples than our previous synthesis tool based on the CBMC model checker. Third, our coarse abstraction proved surprisingly precise in practice: across all our benchmarks, we only encountered three program locations where manual abstraction refinement was needed to avoid the generation of unnecessary synchronization. Overall, our evaluation strongly supports the use of the implicit specification approach based on non-preemptive scheduling semantics as well as the use of the data-oblivious abstraction to achieve practical synthesis for real-world systems code.

Contributions. First, we propose a new specification-free approach to synchronization synthesis. Given a program written assuming a friendly, non-preemptive scheduler, we automatically generate a preemption-safe version of the program. Second, we introduce a novel abstraction scheme and use it to reduce preemption-safety to language inclusion modulo an independence relation. Third, we present the first language inclusion-based synchronization synthesis procedure and tool for concurrent programs. Our synthesis procedure includes a new algorithm for a bounded version of our inherently undecidable language inclusion problem. Finally, we evaluate our synthesis procedure on several examples. To the best of our knowledge, Liss is the first synthesis tool capable of handling realistic (albeit simplified) device driver code, while previous tools were evaluated on small fragments of driver code or on manually extracted synchronization skeletons.

Related Work. Synthesis of synchronization is an active research area [3–6, 10–12, 15, 16]. Closest to our work is a recent paper by Bloem et al. [3], which uses implicit specifications for synchronization synthesis. While their specification is given by sequential behaviors, ours is given by non-preemptive behaviors. This makes our approach applicable to scenarios where threads need to communicate explicitly. Further, correctness in [3] is determined by comparing values at the end of the execution. In contrast, we compare sequences of events, which serves as a more suitable specification for infinitely-looping reactive systems.

Many efforts in synthesis of synchronization focus on user-provided specifications, such as assertions (our previous work [4, 5, 11]). However, it is hard to determine if a given set of assertions represents a complete specification. In this paper, we are solving language inclusion, a computationally harder problem than reachability. However, due to our abstraction, our tool performs significantly better than tools from [4, 5], which are based on a mature model checker (CBMC [7]). Our abstraction is reminiscent of previously used abstractions that track reads and writes to individual locations (e.g., [1, 17]). However, our abstraction is novel as it additionally tracks some control-flow information (specifically, the branches taken) giving us higher precision with almost negligible computational cost. The synthesis part of our approach is based on [11].

In [16] the authors rely on assertions for synchronization synthesis and include iterative abstraction refinement in their framework. This is an interesting extension to pursue for our abstraction. In other related work, CFix [12] can detect and fix concurrency bugs by identifying simple bug patterns in the code.

2 Illustrative Example

Fig. 1. Running example and its abstraction

Fig. 1a contains our running example. Consider the case where the procedures open_dev() and close_dev() are invoked in parallel, possibly multiple times (modeled as a non-deterministic while loop). The functions power_up() and power_down() represent calls to a device. Under a non-preemptive scheduler, the sequence of calls to the device is always an alternating sequence of one call to power_up() followed by one call to power_down(). Without additional synchronization, however, there could be two calls to power_up() in a row when executing under a preemptive scheduler. Such a sequence is not observationally equivalent to any sequence that can be produced under a non-preemptive scheduler.

Fig. 1b contains the abstracted versions (we omit tracking of branching choices in the example) of the two procedures, open_dev_abs() and close_dev_abs(). For instance, the instruction open = open + 1 is abstracted to the two instructions labeled (C) and (D). The abstraction is coarse, but still captures the problem. Consider two threads T1 and T2 running the open_dev_abs() procedure. The following trace is possible under a preemptive scheduler, but not under a non-preemptive scheduler: T1.A; T2.A; T1.B; T1.C; T1.D; T2.B; T2.C; T2.D. Moreover, the trace cannot be transformed by swapping independent events into any trace possible under a non-preemptive scheduler. This is because instructions A and D are not independent. Hence, the abstract trace exhibits the problem of two successive calls to power_up() when executing with a preemptive scheduler. Our synthesis procedure finds this problem, and fixes it by introducing a lock in open_dev() (see Sect. 5).
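The argument in this paragraph can be replayed mechanically. The following sketch uses an assumed event encoding (our labels, matching the description above: A = the read of open in the branch test, B = the power_up() call modeled as a write to a location dev, and C, D = the read and write implementing open = open + 1). It computes all traces reachable from the problematic preemptive trace by swapping adjacent independent events, and confirms that no serialized, i.e., non-preemptive, execution is among them:

```python
from collections import deque

# Assumed event encoding for Fig. 1b (our labels, not the paper's figure):
# A = (tid, "read", "open")  - branch test
# B = (tid, "write", "dev")  - the power_up() call, modeled as a write to dev
# C = (tid, "read", "open"), D = (tid, "write", "open") - open = open + 1
def thread_events(tid):
    return ((tid, "read", "open"), (tid, "write", "dev"),
            (tid, "read", "open"), (tid, "write", "open"))

def independent(e1, e2):
    # Accesses of different threads are independent iff they touch
    # different locations, or both only read the same location.
    (t1, a1, v1), (t2, a2, v2) = e1, e2
    return t1 != t2 and (v1 != v2 or (a1 == "read" and a2 == "read"))

def closure(trace):
    """All traces reachable by swapping adjacent independent events."""
    seen, work = {trace}, deque([trace])
    while work:
        t = work.popleft()
        for i in range(len(t) - 1):
            if independent(t[i], t[i + 1]):
                s = t[:i] + (t[i + 1], t[i]) + t[i + 2:]
                if s not in seen:
                    seen.add(s)
                    work.append(s)
    return seen

T1, T2 = thread_events(1), thread_events(2)
# The preemptive trace from the text: T1.A T2.A T1.B T1.C T1.D T2.B T2.C T2.D
bad = (T1[0], T2[0], T1[1], T1[2], T1[3], T2[1], T2[2], T2[3])
serial = {T1 + T2, T2 + T1}          # non-preemptive executions
print(closure(bad) & serial)          # set(): no equivalent serial trace
```

The check is conclusive because swaps of adjacent independent events preserve the relative order of every dependent pair, and here T2.A (a read of open) precedes T1.D (a write of open) in the bad trace but follows it in every serialized trace.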

3 Preliminaries and Problem Statement

Syntax. We assume that programs are written in a concurrent while language \(\mathcal W\). A concurrent program \(\mathcal {C}\) in \(\mathcal W\) is a finite collection of threads \(\langle \mathtt {T}_1, \ldots , \mathtt {T}_n \rangle \) where each thread is a statement written in the syntax from Fig. 2. All \(\mathcal W\) variables (program variables std_var, lock variables lock_var, and condition variable cond_var) range over integers and each statement is labeled with a unique location identifier \(l\). The only non-standard syntactic constructs in \(\mathcal W\) relate to the tags. Intuitively, each tag is a communication channel between the program and an interface to an external system, and the \(\mathsf{input}\mathsf{(tag)}\) and \(\mathsf{output}\mathsf{(tag, expr)}\) statements read from and write to the channel. We assume that the program and the external system interface can only communicate through the channel. In practice, we use the tags to model device registers. In our presentation, we consider only a single external interface. Our implementation can handle communication with several interfaces.

Fig. 2. Syntax of \(\mathcal W\)

Semantics. We begin by defining the semantics of a single thread in \(\mathcal W\), and then extend the definition to concurrent non-preemptive and preemptive semantics. Note that in our work, reads and writes are assumed to execute atomically and further, we assume a sequentially consistent memory model.

Single-Thread Semantics. A program state is given by \(\langle \mathcal{V}, P \rangle \) where \(\mathcal V\) is a valuation of all program variables, and \( P \) is the statement that remains to be executed. Let us fix a thread identifier \( tid \).

The operational semantics of a thread executing in isolation is given in Fig. 3. A single execution step \(\langle \mathcal{V}, P \rangle \xrightarrow {\alpha } \langle \mathcal{V}', P' \rangle \) changes the program state from \(\langle \mathcal{V}, P \rangle \) to \(\langle \mathcal{V}', P' \rangle \) while optionally outputting an observable symbol \(\alpha \). The absence of a symbol is denoted by \(\epsilon \). Most rules in Fig. 3 are standard; the exceptions are the Havoc, Input, and Output rules.

  1. Havoc: Statement \(l: x:= \mathsf{havoc}\) assigns x a non-deterministic value (say k) and outputs the observable \(( tid , \mathsf {havoc}, k, x)\).

  2. Input, Output: \(l: x:=\mathsf{input} (t)\) and \(l: \mathsf{output}(t,e)\) read a value from and write a value to the channel t, and output \(( tid , \mathsf{input}, k, t)\) and \(( tid , \mathsf{output}, k, t)\), where k is the value read or written, respectively.

Intuitively, the observables record the sequence of non-deterministic guesses, as well as the input/output interaction with the tagged channels. In the following, \(e\) represents an expression and \(e[v / \mathcal{V} [v]]\) evaluates an expression by replacing all variables v with their values in \(\mathcal V\).

Fig. 3. Single thread semantics of \(\mathcal W\)

Non-Preemptive Semantics. The non-preemptive semantics of \(\mathcal W\) is presented in the full version [18]. The non-preemptive semantics ensures that a single thread from the program keeps executing as detailed above until one of the following occurs: (a) the thread finishes execution, or it encounters (b) a yield statement, or (c) a lock statement and the lock is taken, or (d) an await statement and the condition variable is not set. In these cases, a context-switch is possible.

Preemptive Semantics. The preemptive semantics of a program is obtained from the non-preemptive semantics by relaxing the condition on context-switches, and allowing context-switches at all program points (see full version [18]).

3.1 Problem Statement

A non-preemptive observation sequence of a program \(\mathcal {C}\) is a sequence \(\alpha _0\ldots \alpha _k\) such that there exist program states \(S_0^{pre}\), \(S_0^{post}\), ..., \(S_k^{pre}\), \(S_k^{post}\) for which, according to the non-preemptive semantics of \(\mathcal W\): (a) for each \(0 \le i \le k\), \(S_i^{pre} \xrightarrow {\alpha _i} S_i^{post}\), (b) for each \(0 \le i < k\), \(S_i^{post} = S_{i+1}^{pre}\), and (c) \(S_0^{pre} = S_\iota \) and \(S_k^{post} = S_f\), where \(S_\iota \) is the initial state and \(S_f\) is a final state (i.e., a state where all threads have finished execution). Similarly, a preemptive observation sequence of a program \(\mathcal {C}\) is a sequence \(\alpha _0\ldots \alpha _k\) as above, with the non-preemptive semantics replaced with preemptive semantics. We denote the sets of non-preemptive and preemptive observation sequences of a program \(\mathcal {C}\) by \([\![ \mathcal {C} ]\!]^{NP}\) and \([\![ \mathcal {C} ]\!]^P\), respectively.

We say that observation sequences \(\alpha _0\ldots \alpha _k\) and \(\beta _0\ldots \beta _k\) are equivalent if:

  • The subsequences of \(\alpha _0\ldots \alpha _k\) and \(\beta _0\ldots \beta _k\) containing only symbols of the form \(( tid , \mathsf {input}, k, t)\) and \(( tid , \mathsf {output}, k, t)\) are equal, and

  • For each thread identifier \( tid \), the subsequences of \(\alpha _0\ldots \alpha _k\) and \(\beta _0\ldots \beta _k\) containing only symbols of the form \(( tid , \mathsf {havoc}, k, x)\) are equal.

Intuitively, observable sequences are equivalent if they have the same interaction with the interface, and the same non-deterministic choices in each thread. For sets of observable sequences \(\mathcal {O}_1\) and \(\mathcal {O}_2\), we write \(\mathcal {O}_1 \subseteq \mathcal {O}_2\) to denote that each sequence in \(\mathcal {O}_1\) has an equivalent sequence in \(\mathcal {O}_2\). Given a concurrent program \(\mathcal {C}\) and a synthesized program \(\mathcal {C}'\) obtained by adding synchronization to \(\mathcal {C}\), the program \(\mathcal {C}'\) is preemption-safe w.r.t. \(\mathcal {C}\) if \([\![ \mathcal {C}' ]\!]^{P} \subseteq [\![ \mathcal {C} ]\!]^{NP}\).
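A minimal sketch of this equivalence check, assuming observable symbols are encoded as (tid, kind, value, tag-or-variable) tuples as in the rules above:

```python
# Sketch of the equivalence check on observation sequences; a symbol is
# (tid, kind, value, channel_or_var) with kind in {"havoc", "input", "output"}.
def equivalent(seq_a, seq_b):
    # Same global sequence of input/output interactions with the interface...
    io = lambda s: [x for x in s if x[1] in ("input", "output")]
    if io(seq_a) != io(seq_b):
        return False
    # ...and, per thread, the same sequence of nondeterministic (havoc) choices.
    tids = {x[0] for x in seq_a} | {x[0] for x in seq_b}
    hav = lambda s, t: [x for x in s if x[0] == t and x[1] == "havoc"]
    return all(hav(seq_a, t) == hav(seq_b, t) for t in tids)

a = [(1, "havoc", 7, "x"), (2, "havoc", 0, "y"), (1, "output", 7, "dev")]
b = [(2, "havoc", 0, "y"), (1, "havoc", 7, "x"), (1, "output", 7, "dev")]
print(equivalent(a, b))   # True: only the cross-thread order of havocs differs
```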

We are now ready to state our synthesis problem. Given a concurrent program \(\mathcal {C}\), the aim is to synthesize a program \(\mathcal {C}'\), by adding synchronization to \(\mathcal {C}\), such that \(\mathcal {C}'\) is preemption-safe w.r.t. \(\mathcal {C}\).

3.2 Language Inclusion Modulo an Independence Relation

We reduce the problem of checking if a synthesized solution is preemption-safe w.r.t. the original program to an automata-theoretic problem.

Abstract Semantics for \(\mathcal W\). We first define a single-thread abstract semantics for \(\mathcal W\) (Fig. 4), which tracks types of accesses (read or write) to each memory location while abstracting away their values. Inputs/outputs to an external interface are modeled as writes to a special memory location (dev). Even inputs are modeled as writes because in our applications we cannot assume that reads from the external interface are free of side-effects. Havocs become ordinary writes to the variable they are assigned to. Every branch is taken non-deterministically and tracked. The only constructs preserved are the lock and condition variables. The abstract program state consists of the valuations of the lock and condition variables and the statement that remains to be executed. In the abstraction, an observable is of the form \(( tid , \{\mathsf {read,write,exit,loop,then,else}\}, v, l)\) and observes the type of access (read/write) to variable v and records non-deterministic branching choices (exit/loop/then/else). The latter are not associated with any variable.

In Fig. 4, given expression \(e\), the function \( Reads ( tid , e, l) \) represents the sequence \(( tid , \mathsf {read}, v_1, l)\cdot \ldots \cdot ( tid , \mathsf {read}, v_n, l)\) where \(v_1,\ldots , v_n\) are the variables in \(e\), in the order they are read to evaluate \(e\).
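As an illustration, the following hypothetical sketch recovers such a read sequence from an expression written in Python syntax as a stand-in for a \(\mathcal W\) expression; source column offsets are used to approximate left-to-right evaluation order:

```python
import ast

# Hedged sketch of Reads(tid, e, l): collect variable names of the
# expression in (approximate) left-to-right evaluation order.
def reads(tid, expr, loc):
    tree = ast.parse(expr, mode="eval")
    names = sorted((n.col_offset, n.id) for n in ast.walk(tree)
                   if isinstance(n, ast.Name))
    return [(tid, "read", v, loc) for _, v in names]

print(reads(0, "(open + 1) * limit", 3))
# [(0, 'read', 'open', 3), (0, 'read', 'limit', 3)]
```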

Fig. 4. Single thread abstract semantics of \(\mathcal W\)

The abstract program semantics is obtained from the concrete program semantics by replacing the single-thread semantics with its abstract counterpart. Locks and condition variables, and the operations on them, are not abstracted.

As with the concrete semantics of \(\mathcal W\), we can define the non-preemptive and preemptive observable sequences for abstract semantics. For a concurrent program \(\mathcal {C}\), we denote the sets of abstract preemptive and non-preemptive observable sequences by \([\![ \mathcal {C} ]\!]^{P}_{abs}\) and \([\![ \mathcal {C} ]\!]^{NP}_{abs}\), respectively.

Abstract observation sequences \(\alpha _0\ldots \alpha _k\) and \(\beta _0\ldots \beta _k\) are equivalent if:

  • For each thread \( tid \), the subsequences of \(\alpha _0\ldots \alpha _k\) and \(\beta _0\ldots \beta _k\) containing only symbols of the form \(( tid ,a,v,l)\), with \(a\in \{\mathsf {read,write,exit,loop,then,else}\}\) are equal,

  • For each variable v, the subsequences of \(\alpha _0\ldots \alpha _k\) and \(\beta _0\ldots \beta _k\) containing only write symbols (of the form \(( tid , \mathsf {write}, v, l)\)) are equal, and

  • For each variable v, the multisets of symbols of the form \(( tid , \mathsf {read}, v, l)\) between any two write symbols, as well as before the first write symbol and after the last write symbol are identical.
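The three conditions can be checked directly by projection. A sketch, assuming abstract symbols are encoded as (tid, access, variable, location) tuples:

```python
from collections import Counter

# Sketch of the abstract-equivalence check; an abstract symbol is
# (tid, access, var, loc) with access in
# {"read", "write", "exit", "loop", "then", "else"}.
def read_blocks(seq, v):
    """Multisets of reads of v between consecutive writes to v."""
    blocks, cur = [], Counter()
    for s in seq:
        if s[2] != v:
            continue
        if s[1] == "write":
            blocks.append(cur)
            cur = Counter()
        elif s[1] == "read":
            cur[s] += 1
    blocks.append(cur)
    return blocks

def abs_equivalent(a, b):
    tids = {s[0] for s in a} | {s[0] for s in b}
    vs = {s[2] for s in a} | {s[2] for s in b}
    proj = lambda seq, t: [s for s in seq if s[0] == t]
    writes = lambda seq, v: [s for s in seq if s[1] == "write" and s[2] == v]
    return (all(proj(a, t) == proj(b, t) for t in tids)          # per thread
            and all(writes(a, v) == writes(b, v) for v in vs)    # write order
            and all(read_blocks(a, v) == read_blocks(b, v) for v in vs))

a = [(1, "read", "x", 0), (2, "read", "x", 0), (1, "write", "x", 1)]
b = [(2, "read", "x", 0), (1, "read", "x", 0), (1, "write", "x", 1)]
print(abs_equivalent(a, b))   # True: the two reads of x commute
```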

We first show that the abstract semantics is sound w.r.t. preemption-safety (see full version for the proof [18]).

Theorem 1

Given concurrent program \(\mathcal {C}\) and a synthesized program \(\mathcal {C}'\) obtained by adding synchronization to \(\mathcal {C}\), \([\![ \mathcal {C}' ]\!]^{P}_{abs} \subseteq [\![ \mathcal {C} ]\!]^{NP}_{abs}\Rightarrow [\![ \mathcal {C}' ]\!]^{P} \subseteq [\![ \mathcal {C} ]\!]^{NP}\).

Abstract Semantics to Automata. An NFA \(\mathcal A\) is a tuple \((Q, \varSigma , \varDelta , Q_\iota , F)\) where \(\varSigma \) is a finite alphabet, \(Q,Q_\iota ,F\) are finite sets of states, initial states and final states, respectively and \(\varDelta \) is a set of transitions. A word \(\sigma _0\ldots \sigma _k \in \varSigma ^*\) is accepted by \(\mathcal A\) if there exists a sequence of states \(q_0\ldots q_{k+1}\) such that \(q_0\in Q_\iota \) and \(q_{k+1}\in F\) and \(\forall i:(q_i, \sigma _i, q_{i+1}) \in \varDelta \). The set of all words accepted by \(\mathcal A\) is called the language of \(\mathcal A\) and is denoted \(\mathcal {L}(\mathcal A)\).
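For concreteness, a minimal NFA sketch matching this definition, with transitions given as a map from (state, symbol) to successor sets and \(\epsilon \)-transitions omitted:

```python
# Minimal NFA following the definition above; acceptance by subset simulation.
class NFA:
    def __init__(self, delta, initial, final):
        self.delta, self.initial, self.final = delta, set(initial), set(final)

    def accepts(self, word):
        states = set(self.initial)
        for sym in word:
            states = {q2 for q in states
                      for q2 in self.delta.get((q, sym), ())}
        return bool(states & self.final)

# The two-word language {ab, b} (written {αβ, β} in Example 1 below):
B = NFA({("q0", "a"): {"q1"}, ("q0", "b"): {"q2"}, ("q1", "b"): {"q2"}},
        initial={"q0"}, final={"q2"})
print(B.accepts("ab"), B.accepts("b"), B.accepts("ba"))  # True True False
```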

Given a program \(\mathcal {C}\), we can construct automata \(\mathcal A([\![ \mathcal {C} ]\!]^{NP}_{abs})\) and \(\mathcal A([\![ \mathcal {C} ]\!]^{P}_{abs})\) that accept exactly the observable sequences under the respective semantics. We describe their construction informally. Each automaton state is a program state of the abstract semantics and the alphabet is the set of abstract observable symbols. There is a transition from one state to another on an observable symbol (or an \(\epsilon \)) iff the program can execute one step under the corresponding semantics to reach the other state while outputting the observable symbol (on an \(\epsilon \)).

Language Inclusion Modulo an Independence Relation. Let I be a non-reflexive, symmetric binary relation over an alphabet \(\varSigma \). We refer to I as the independence relation and to elements of I as independent symbol pairs. We define a symmetric binary relation \(\approx \) over words in \(\varSigma ^*\): for all words \(\sigma , \sigma ' \in \varSigma ^*\) and \((\alpha , \beta ) \in I\), \((\sigma \cdot \alpha \beta \cdot \sigma ', \sigma \cdot \beta \alpha \cdot \sigma ') \in \, \approx \). Let \(\approx ^t\) denote the reflexive transitive closure of \(\approx \). Given a language \(\mathcal{L}\) over \(\varSigma \), the closure of \(\mathcal{L}\) w.r.t. I, denoted \(\mathrm {Clo}_I(\mathcal{L})\), is the set \(\{\sigma \in \varSigma ^* {:}\ \exists \sigma ' \in \mathcal L \text { with } (\sigma ,\sigma ') \in \, \approx ^t\}\). Thus, \(\mathrm {Clo}_I(\mathcal{L})\) consists of all words that can be obtained from some word in \(\mathcal{L}\) by repeatedly commuting adjacent independent symbol pairs from I.

Definition 1

(Language Inclusion Modulo an Independence Relation). Given NFAs AB over a common alphabet \(\varSigma \) and an independence relation I over \(\varSigma \), the language inclusion problem modulo I is: \(\mathcal L(\text{ A }) \subseteq \mathrm {Clo}_I(\mathcal L(\text{ B }))\)?

We reduce preemption-safety under the abstract semantics to language inclusion modulo an independence relation. The independence relation I we use is defined on the set of abstract observable symbols as follows: \((( tid , a,v, l), ( tid ', a',v', l')) \in I\) iff \( tid \ne tid '\), and one of the following holds: (a) \(v \ne v'\) or (b) \(a \ne \mathsf {write} \wedge a'\ne \mathsf {write}\).

Proposition 1

Given concurrent programs \(\mathcal {C}\) and \(\mathcal {C}'\), \([\![ \mathcal {C}' ]\!]^{P}_{abs} \subseteq [\![ \mathcal {C} ]\!]^{NP}_{abs}\) iff \(\mathcal L(\mathcal A([\![ \mathcal {C}' ]\!]^{P}_{abs})) \subseteq \mathrm {Clo}_I(\mathcal L(\mathcal A([\![ \mathcal {C} ]\!]^{NP}_{abs})))\).

4 Checking Language Inclusion

We first focus on the problem of language inclusion modulo an independence relation (Definition 1). This question corresponds to preemption-safety (Theorem 1, Proposition 1) and its solution drives our synchronization synthesis (Sect. 5).

Theorem 2

For NFAs AB over alphabet \(\varSigma \) and an independence relation \(I\subseteq \varSigma \times \varSigma \), \(\mathcal L(A)\subseteq \mathrm {Clo}_I(\mathcal L(B))\) is undecidable [2].

Fortunately, a bounded version of the problem is decidable. Recall the relation \(\approx \) over \(\varSigma ^*\) from Sect. 3.2. We define a symmetric binary relation \(\approx _i\) over \(\varSigma ^*\): \((\sigma , \sigma ') \in \, \approx _i\) iff \(\exists (\alpha ,\beta ) \in I\): \((\sigma , \sigma ') \in \, \approx \), \(\sigma [i] = \sigma '[i+1] = \alpha \) and \(\sigma [i+1] = \sigma '[i] = \beta \). Thus \(\approx _i\) relates words that can be obtained from each other by commuting the symbols at positions i and \(i+1\). We next define a symmetric binary relation \(\asymp \) over \(\varSigma ^*\): \((\sigma ,\sigma ') \in \, \asymp \) iff \(\exists \sigma _1,\ldots ,\sigma _t\): \((\sigma ,\sigma _1) \in \, \approx _{i_1},\ldots , (\sigma _{t},\sigma ') \in \, \approx _{i_{t+1}}\) and \(i_1 < \ldots < i_{t+1}\). The relation \(\asymp \) intuitively relates words obtained from each other by a single forward pass that commutes multiple pairs of adjacent symbols. Let \(\asymp ^k\) denote the k-fold composition of \(\asymp \) with itself. Given a language \(\mathcal{L}\) over \(\varSigma \), we use \(\mathrm {Clo}_{k,I}(\mathcal{L})\) to denote the set \(\{\sigma \in \varSigma ^*: \exists \sigma ' \in \mathcal L \text { with } (\sigma ,\sigma ') \in \, \asymp ^k \}\). In other words, \(\mathrm {Clo}_{k,I}(\mathcal{L})\) consists of all words that can be generated from \(\mathcal{L}\) using a finite-state transducer that remembers at most k symbols of its input word in its states.

Definition 2

(Bounded Language Inclusion Modulo an Independence Relation). Given NFAs \(A, B\) over \(\varSigma \), \(I\subseteq \varSigma \times \varSigma \) and a constant \(k>0\), the k-bounded language inclusion problem modulo I is: \(\mathcal L(\text{ A })\subseteq \mathrm {Clo}_{k,I}(\mathcal L(\text{ B }))\)?

Theorem 3

For NFAs \(A, B\) over \(\varSigma \), \(I\subseteq \varSigma \times \varSigma \) and a constant \(k>0\), \(\mathcal L(\text{ A }) \subseteq \mathrm {Clo}_{k,I}(\mathcal L(\text{ B }))\) is decidable.

We present an algorithm to check k-bounded language inclusion modulo I, based on the antichain algorithm for standard language inclusion [9].

Antichain Algorithm for Language Inclusion. Given a partial order \((X, \sqsubseteq )\), an antichain over X is a set of elements of X that are incomparable w.r.t. \(\sqsubseteq \). In order to check \(\mathcal L(A)\subseteq \mathrm {Clo}_I(\mathcal L(B))\) for NFAs \(A = (Q_A,\varSigma ,\varDelta _A,Q_{\iota ,A},F_A)\) and \(B = (Q_B,\varSigma ,\varDelta _B,Q_{\iota ,B},F_B)\), the antichain algorithm proceeds by exploring \(A\) and \(B\) in lockstep. While \(A\) is explored nondeterministically, \(B\) is determinized on the fly for exploration. The algorithm maintains an antichain, consisting of tuples of the form \((s_A, S_B)\), where \(s_A\in Q_A\) and \(S_B\subseteq Q_B\). The ordering relation \(\sqsubseteq \) is given by \((s_A, S_B) \sqsubseteq (s'_A, S'_B)\) iff \(s_A= s'_A\) and \(S_B\subseteq S'_B\). The algorithm also maintains a frontier set of tuples yet to be explored.

Given a state \(s_A\in Q_A\) and a symbol \(\alpha \in \varSigma \), let \(succ_\alpha (s_A)\) denote \(\{s_A' \in Q_A: (s_A,\alpha ,s_A') \in \varDelta _A\}\). Given a set of states \(S_B\subseteq Q_B\), let \(succ_\alpha (S_B)\) denote \(\{s_B'\in Q_B: \exists s_B\in S_B:\ (s_B,\alpha ,s_B')\in \varDelta _B\}\). Given a tuple \((s_A, S_B)\) in the frontier set, let \(succ_\alpha (s_A, S_B)\) denote \(\{(s'_A,S'_B): s'_A\in succ_\alpha (s_A), S'_B= succ_\alpha (S_B)\}\).

In each step, the antichain algorithm explores \(A\) and \(B\) by computing \(\alpha \)-successors of all tuples in its current frontier set for all possible symbols \(\alpha \in \varSigma \). Whenever a tuple \((s_A, S_B)\) is found with \(s_A\in F_A\) and \(S_B\cap F_B=\varnothing \), the algorithm reports a counterexample to language inclusion. Otherwise, the algorithm updates its frontier set and antichain to include the newly computed successors using the two rules enumerated below. Given a newly computed successor tuple \(p'\):

  • Rule 1: if there exists a tuple p in the antichain with \(p \sqsubseteq p'\), then \(p'\) is not added to the frontier set or antichain,

  • Rule 2: else, if there exist tuples \(p_1, \ldots , p_n\) in the antichain with \(p' \sqsubseteq p_1, \ldots , p_n\), then \(p_1, \ldots , p_n\) are removed from the antichain.

The algorithm terminates by either reporting a counterexample, or by declaring success when the frontier becomes empty.
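The two rules above can be rendered as a short worklist procedure. A sketch for plain language inclusion \(\mathcal L(A) \subseteq \mathcal L(B)\), with automata given as transition dictionaries (an assumed encoding, not the tool's actual data structures):

```python
# Sketch of the antichain algorithm for language inclusion L(A) ⊆ L(B);
# automata are transition dicts mapping (state, symbol) to successor sets.
def included(dA, iA, fA, dB, iB, fB, alphabet):
    succB = lambda S, a: frozenset(p2 for p in S
                                   for p2 in dB.get((p, a), ()))
    frontier = [(s, frozenset(iB)) for s in iA]
    antichain = list(frontier)      # incomparable tuples (s_A, S_B)
    while frontier:
        sA, SB = frontier.pop()
        if sA in fA and not (SB & fB):
            return False            # counterexample: A accepts, B cannot
        for a in alphabet:
            for sA2 in dA.get((sA, a), ()):
                t = (sA2, succB(SB, a))
                # Rule 1: skip t if an antichain tuple already covers it
                if any(s == t[0] and S <= t[1] for s, S in antichain):
                    continue
                # Rule 2: drop antichain tuples subsumed by t
                antichain = [(s, S) for s, S in antichain
                             if not (s == t[0] and t[1] <= S)]
                antichain.append(t)
                frontier.append(t)
    return True

dB = {("q0", "a"): {"q1"}, ("q0", "b"): {"q2"}, ("q1", "b"): {"q2"}}
dA = {("p0", "b"): {"p1"}, ("p1", "a"): {"p2"}}   # L(A) = {ba}
print(included(dB, {"q0"}, {"q2"}, dB, {"q0"}, {"q2"}, "ab"))  # True
print(included(dA, {"p0"}, {"p2"}, dB, {"q0"}, {"q2"}, "ab"))  # False
```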

Antichain Algorithm for k-Bounded Language Inclusion Modulo I. This algorithm is essentially the same as the standard antichain algorithm, with the automaton \(B\) above replaced by an automaton \(B_{k,I}\) accepting \(\mathrm {Clo}_{k,I}(\mathcal L(\text{ B }))\). The set \(Q_{B_{k,I}}\) of states of \(B_{k,I}\) consists of triples \((s_B, \eta _1, \eta _2)\), where \(s_B\in Q_B\) and \(\eta _1, \eta _2\) are words of length at most k over \(\varSigma \). Intuitively, the words \(\eta _1\) and \(\eta _2\) store symbols that are expected to be matched later along a run. The set of initial states of \(B_{k,I}\) is \(\{(s_B,\varnothing ,\varnothing ): s_B\in Q_{\iota ,B}\}\). The set of final states of \(B_{k,I}\) is \(\{(s_B,\varnothing ,\varnothing ): s_B\in F_B\}\). The transition relation \(\varDelta _{B_{k,I}}\) is constructed by repeatedly applying the following rules, in order, for each state \((s_B, \eta _1, \eta _2)\) and each symbol \(\alpha \). In what follows, \(\eta [\setminus i]\) denotes the word obtained from \(\eta \) by removing its \(i^{th}\) symbol.

  1. Pick new \(s'_B\) and \(\beta \in \varSigma \) such that \((s_B, \beta , s_B') \in \varDelta _B\).

  2. (a) If \(\forall i\): \(\eta _1[i] \ne \alpha \) and \(\alpha \) is independent of all symbols in \(\eta _1\), then \(\eta _2' \, \mathtt {:=}\, \eta _2\cdot \alpha \) and \(\eta _1' \, \mathtt {:=}\, \eta _1\); (b) else, if \(\exists i\): \(\eta _1[i] = \alpha \) and \(\alpha \) is independent of all symbols in \(\eta _1\) prior to i, then \(\eta _1' \, \mathtt {:=}\, \eta _1[\setminus i]\) and \(\eta _2'\, \mathtt {:=}\, \eta _2\); (c) else, go to 1.

  3. (a) If \(\forall i\): \(\eta _2'[i] \ne \beta \) and \(\beta \) is independent of all symbols in \(\eta _2'\), then \(\eta _1' \, \mathtt {:=}\, \eta _1' \cdot \beta \); (b) else, if \(\exists i\): \(\eta _2'[i] = \beta \) and \(\beta \) is independent of all symbols in \(\eta _2'\) prior to i, then \(\eta _2' \, \mathtt {:=}\, \eta _2'[\setminus i]\); (c) else, go to 1.

  4. Add \(((s_B, \eta _1, \eta _2),\alpha ,(s'_B, \eta _1', \eta _2'))\) to \(\varDelta _{B_{k,I}}\) and go to 1.
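A direct, unoptimized reading of these rules as a worklist construction of \(B_{k,I}\), assuming transitions of \(B\) are given as (state, symbol, state) triples; the length bound k is enforced whenever a symbol is deferred into \(\eta _1\) or \(\eta _2\):

```python
from itertools import product

# Sketch of the B_{k,I} construction: states are (s_B, eta1, eta2) with
# eta1, eta2 words of length at most k; transitions follow rules 1-4 above.
def build_closure_nfa(deltaB, initB, finalB, alphabet, I, k):
    indep = lambda a, b: (a, b) in I or (b, a) in I

    def consume(word, sym):
        """Rules 2(b)/3(b): drop the first occurrence of sym if it is
        independent of everything before it; None if sym cannot be matched."""
        if sym in word:
            i = word.index(sym)
            if all(indep(sym, x) for x in word[:i]):
                return word[:i] + word[i + 1:]
        return None

    init = {(s, (), ()) for s in initB}
    states, delta, work = set(init), {}, list(init)
    while work:
        st = work.pop()
        sB, e1, e2 = st
        for alpha, (s, beta, s2) in product(alphabet, deltaB):
            if s != sB:
                continue
            # Rule 2: match alpha against eta1, or defer it into eta2
            m = consume(e1, alpha)
            if m is not None:
                n1, n2 = m, e2
            elif (alpha not in e1 and all(indep(alpha, x) for x in e1)
                  and len(e2) < k):
                n1, n2 = e1, e2 + (alpha,)
            else:
                continue
            # Rule 3: match beta against eta2', or defer it into eta1'
            m = consume(n2, beta)
            if m is not None:
                n2 = m
            elif (beta not in n2 and all(indep(beta, x) for x in n2)
                  and len(n1) < k):
                n1 = n1 + (beta,)
            else:
                continue
            tgt = (s2, n1, n2)
            delta.setdefault((st, alpha), set()).add(tgt)
            if tgt not in states:
                states.add(tgt)
                work.append(tgt)
    return delta, init, {(s, (), ()) for s in finalB}

def accepts(delta, init, final, word):
    cur = set(init)
    for sym in word:
        cur = {t for s in cur for t in delta.get((s, sym), ())}
    return bool(cur & final)

# Example 1 below: L(B) = {ab, b}, I = {(a, b)}, k = 1; Clo = {ab, ba, b}
dB = {("q0", "a", "q1"), ("q0", "b", "q2"), ("q1", "b", "q2")}
d, i, f = build_closure_nfa(dB, {"q0"}, {"q2"}, "ab", {("a", "b")}, 1)
print([w for w in ("ab", "ba", "b", "a", "bb") if accepts(d, i, f, w)])
```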

Example 1

In Fig. 5, we have an NFA \(B\) with \(\mathcal L(\text{ B })= \{\alpha \beta , \beta \}\), \(I = \{(\alpha ,\beta )\}\) and \(k = 1\). The states of \(B_{k,I}\) are triples \((q, \eta _1, \eta _2)\), where \(q \in Q_B\) and \(\eta _1, \eta _2\in \{\varnothing ,\alpha ,\beta \}\). We explain the derivation of a couple of transitions of \(B_{k,I}\). The transition shown in bold from \((q_0, \varnothing ,\varnothing )\) on symbol \(\beta \) is obtained by applying the following rules once: 1. Pick \(q_1\) since \((q_0, \alpha , q_1) \in \varDelta _B\). 2(a). \(\eta _2'\ \mathtt {:=}\ \beta \), \(\eta _1'\ \mathtt {:=}\ \varnothing \). 3(a). \(\eta _1'\ \mathtt {:=}\ \alpha \). 4. Add \(((q_0, \varnothing , \varnothing ),\beta ,(q_1, \alpha , \beta ))\) to \(\varDelta _{B_{k,I}}\). The transition shown in bold from \((q_1, \alpha ,\beta )\) on symbol \(\alpha \) is obtained as follows: 1. Pick \(q_2\) since \((q_1, \beta , q_2) \in \varDelta _B\). 2(b). \(\eta _1'\ \mathtt {:=}\ \varnothing \), \(\eta _2'\ \mathtt {:=}\ \beta \). 3(b). \(\eta _2'\ \mathtt {:=}\ \varnothing \). 4. Add \(((q_1, \alpha , \beta ),\alpha ,(q_2, \varnothing , \varnothing ))\) to \(\varDelta _{B_{k,I}}\). It can be seen that \(B_{k,I}\) accepts the language \(\{\alpha \beta ,\beta \alpha ,\beta \} = \mathrm {Clo}_{k,I}(\mathcal L(\text{ B }))\).

Fig. 5.

Example illustrating the construction of \(B_{k,I}\) for \(k = 1\) and \(I = \{(\alpha , \beta )\}\).

Proposition 2

Given \(k>0\), the NFA \(B_{k,I}\) described above accepts \(\mathrm {Clo}_{k,I}(\mathcal L(B))\).

We develop a procedure to check language inclusion modulo I by iteratively increasing the bound k (see the full version [18] for the complete algorithm). The procedure is incremental: the check for \((k+1)\)-bounded language inclusion modulo I only explores paths along which the bound k was exceeded in the previous iteration.

5 Synchronization Synthesis

We now present our iterative synchronization synthesis procedure, which is based on the procedure in [11]. The reader is referred to [11] for further details. The synthesis procedure starts with the original program \(\mathcal {C}\) and in each iteration generates a candidate synthesized program \(\mathcal {C}'\). The candidate \(\mathcal {C}'\) is checked for preemption-safety w.r.t. \(\mathcal {C}\) under the abstract semantics, using our procedure for bounded language inclusion modulo I. If \(\mathcal {C}'\) is found preemption-safe w.r.t. \(\mathcal {C}\) under the abstract semantics, the synthesis procedure outputs \(\mathcal {C}'\). Otherwise, an abstract counterexample \(cex\) is obtained. The counterexample is analyzed to infer additional synchronization, which is added to \(\mathcal {C}'\) to generate the next candidate.

The counterexample trace \(cex\) is a sequence of event identifiers: \( tid _0.l_0 ; \ldots ; tid _n.l_n\), where each \(l_i\) is a location identifier. We first analyze the neighborhood of \(cex\), denoted \(nhood(cex)\), consisting of traces that are permutations of the events in \(cex\). Note that each trace corresponds to an abstract observation sequence. Furthermore, note that preemption-safety requires the abstract observation sequence of any trace in \(nhood(cex)\) to be equivalent to that of some trace in \(nhood(cex)\) feasible under non-preemptive semantics. Let bad traces refer to the traces in \(nhood(cex)\) that are feasible under preemptive semantics and do not meet the preemption-safety requirement. The goal of our counterexample analysis is to characterize all bad traces in \(nhood(cex)\) in order to enable inference of synchronization fixes.

In order to succinctly represent subsets of \(nhood(cex)\), we use ordering constraints. Intuitively, ordering constraints are of the following forms: (a) atomic constraints \(\varPhi = A < B\) where A and B are events from \(cex\). The constraint \(A < B\) represents the set of traces in \(nhood(cex)\) where event A is scheduled before event B; (b) Boolean combinations of atomic constraints \(\varPhi _1 \wedge \varPhi _2\), \(\varPhi _1 \vee \varPhi _2\) and \(\lnot \varPhi _1\). We have that \(\varPhi _1 \wedge \varPhi _2\) and \(\varPhi _1 \vee \varPhi _2\) respectively represent the intersection and union of the set of traces represented by \(\varPhi _1\) and \(\varPhi _2\), and that \(\lnot \varPhi _1\) represents the complement (with respect to \(nhood(cex)\)) of the traces represented by \(\varPhi _1\).

Non-preemptive Neighborhood. First, we generate all traces in \(nhood(cex)\) that are feasible under non-preemptive semantics. We represent a single trace \(\pi \) using an ordering constraint \(\varPhi _\pi \) that captures the ordering between non-independent accesses to variables in \(\pi \). We represent all traces in \(nhood(cex)\) that are feasible under non-preemptive semantics using the expression \(\varPhi = \bigvee _{\pi } \varPhi _\pi \). The expression \(\varPhi \) acts as the correctness specification for traces in \(nhood(cex)\).

Example. Recall the counterexample trace from the running example in Sect. 2: \(cex= \mathtt {T1.A; T2.A; T1.B; T1.C; T1.D; T2.B; T2.C; T2.D}\). There are two traces in \(nhood(cex)\) that are feasible under non-preemptive semantics: \(\pi _1=\mathtt {T1.A;T1.B;T1.C;T1.D;T2.A;T2.B;T2.C;T2.D}\) and \(\pi _2=\mathtt {T2.A;T2.B;T2.C;T2.D;T1.A;T1.B;T1.C;T1.D}\). We represent \(\pi _1\) as \(\varPhi (\pi _1)= \{\mathtt {T1.A,T1.C,T1.D}\} <\mathtt {T2.D} \wedge \mathtt {T1.D}<\{\mathtt {T2.A,T2.C,T2.D}\} \wedge \mathtt {T1.B}<\mathtt {T2.B}\) and \(\pi _2\) as \(\varPhi (\pi _2) = \mathtt {T2.D} < \{\mathtt {T1.A,T1.C,T1.D}\} \wedge \{\mathtt {T2.A,T2.C,T2.D}\} < \mathtt {T1.D} \wedge \mathtt {T2.B}<\mathtt {T1.B}\). The correctness specification is \(\varPhi = \varPhi (\pi _1) \vee \varPhi (\pi _2)\).

Counterexample Generalization. We next build a quantifier-free first order formula \(\varPsi \) over the event identifiers in \(cex\) such that any model of \(\varPsi \) corresponds to a bad trace in \(nhood(cex)\). We iteratively enumerate models \(\pi \) of \(\varPsi \), building a constraint \(\rho = \varPhi (\pi )\) for each model \(\pi \), and generalizing each \(\rho \) into \(\rho _g\) to represent a larger set of bad traces.

Example. Our trace \(cex\) from Sect. 2 would be generalized to \(\mathtt {T2.A}<\mathtt {T1.D} \wedge \mathtt {T1.D}<\mathtt {T2.D}\). Any trace that satisfies this constraint is bad.

Inferring Fixes. From each generalized formula \(\rho _g\) described above, we infer possible synchronization fixes to eliminate all bad traces satisfying \(\rho _g\). The key observation we exploit is that common concurrency bugs often show up in our formulas as simple patterns of ordering constraints between events. For example, the pattern \( tid _1.l_1 < tid _2.l_2 \; \wedge \; tid _2.l'_2 < tid _1.l'_1\) indicates an atomicity violation and can be rewritten into \(\mathtt {lock}( tid _1.[l_1:l'_1], tid _2.[l_2:l'_2])\). The complete list of such rewrite rules is in the full version [18]. This list includes inference of locks and reordering of notify statements. The set of patterns we use for synchronization inference is not complete, i.e., there might be generalized formulae \(\rho _g\) that are not matched by any pattern. In practice, we found our current set of patterns to be adequate for most common concurrency bugs, including all bugs from the benchmarks in this paper. Our technique and tool can be easily extended with new patterns.

Example. The generalized constraint \(\mathtt {T2.A < T1.D \; \wedge \; T1.D<T2.D}\) matches the lock rule and yields \(\mathtt {lock(T2.[A:D],T1.[D:D])}\). Since the lock involves events in the same function, the lock is merged into a single lock around instructions \(\mathtt {A}\) and \(\mathtt {D}\) in open_dev_abs. This lock is not sufficient to make the program preemption-safe. Another iteration of the synthesis procedure generates another counterexample for analysis and synchronization inference.

Proposition 3

If our synthesis procedure generates a program \(\mathcal {C}'\), then \(\mathcal {C}'\) is preemption-safe with respect to \(\mathcal {C}\).

Note that our procedure does not guarantee that the synthesized program \(\mathcal {C}'\) is deadlock-free. However, we avoid obvious deadlocks using heuristics such as merging overlapping locks. Further, our tool supports detection of any additional deadlocks introduced by synthesis, but relies on the user to fix them.

6 Implementation and Evaluation

We implemented our synthesis procedure in Liss. Liss consists of 5000 lines of C++ code and uses Clang/LLVM and Z3 as libraries. It is available as open-source software together with our benchmarks; the language inclusion algorithm is available separately as a library called Limi. Liss implements the synthesis method presented in this paper with several optimizations. For example, we take advantage of the fact that language inclusion violations can often be detected by exploring only a small fraction of the input automata by constructing \(\mathcal A([\![ \mathcal {C} ]\!]^{NP}_{abs})\) and \(\mathcal A([\![ \mathcal {C} ]\!]^{P}_{abs})\) on the fly.

Our prototype implementation has several limitations. First, Liss uses function inlining and therefore cannot handle recursive programs. Second, we do not implement any form of alias analysis, which can lead to unsound abstractions. For example, we abstract statements of the form “*x = 0” as writes to variable x, while in reality other variables can be affected due to pointer aliasing. We sidestep this issue by manually massaging input programs to eliminate aliasing.

Finally, Liss implements a simplistic lock insertion strategy. The inference rules (see Sect. 5) produce locks expressed as sets of instructions that should be inside a lock. Placing the actual lock and unlock instructions in the C code is challenging because the instructions in the trace may span several basic blocks or even functions. We follow a structural approach: we find the innermost common parent block of the first and last instructions of the lock and place the lock and unlock instructions there. This does not work if the code has gotos or returns that could cause control to jump over the unlock statement. At the moment, we simply report such situations to the user.

We evaluate our synthesis method against the following criteria: (1) Effectiveness of synthesis from implicit specifications; (2) Efficiency of the proposed synthesis procedure; (3) Precision of the proposed coarse abstraction scheme on real-world programs.

Implicit vs Explicit Synthesis. In order to evaluate the effectiveness of synthesis from implicit specifications, we apply Liss to the set of benchmarks used in our previous ConRepair tool for assertion-based synthesis [5]. In addition, we evaluate Liss and ConRepair on several new assertion-based benchmarks (Table 1). The set includes microbenchmarks modeling typical concurrency bug patterns in Linux drivers and the usb-serial macrobenchmark, which models a complete synchronization skeleton of the USB-to-serial adapter driver. We preprocess these benchmarks by eliminating assertions used as explicit specifications for synthesis. In addition, we replace statements of the form assume(v) with await(v), redeclaring all variables v used in such statements as condition variables. This is necessary as our program syntax does not include assume statements.

Table 1. Experiments

We use Liss to synthesize a preemption-safe version of each benchmark. This method is based on the assumption that the benchmark is correct under non-preemptive scheduling and bugs can only arise due to preemptive scheduling. We discovered two benchmarks (lc-rc.c and myri10ge.c) that violated this assumption, i.e., they contained race conditions that manifested under non-preemptive scheduling; Liss did not detect these race conditions. Liss was able to detect and fix all other known races without relying on assertions. Furthermore, Liss detected a new race in the usb-serial family of benchmarks, which was not detected by ConRepair due to a missing assertion. We compared the output of Liss with manually placed synchronization (taken from real bug fixes) and found that the two versions were similar in most of our examples.

Performance and Precision. ConRepair uses CBMC for verification and counterexample generation. Due to the coarse abstraction we use, both steps are much cheaper with Liss. For example, verification of usb-serial.c, which was the most complex in our set of benchmarks, took Liss 103 s, whereas it took ConRepair 20 min [5].

The loss of precision due to abstraction may cause the inclusion check to return a counterexample that is spurious in the concrete program, leading to unnecessary synchronization being synthesized. On our existing benchmarks, this only occurred once in the usb-serial driver, where abstracting away the return value of a function led to an infeasible trace. We refined the abstraction manually by introducing a condition variable to model the return value.

While this result is encouraging, synthetic benchmarks are not necessarily representative of real-world performance. We therefore implemented another set of benchmarks based on a complete Linux driver for the TI AR7 CPMAC Ethernet controller. The benchmark was constructed as follows. We manually preprocessed driver source code to eliminate pointer aliasing. We combined the driver with a model of the OS API and the software interface of the device written in C. We modeled most OS API functions as writes to a special memory location. Groups of unrelated functions were modeled using separate locations. Slightly more complex models were required for API functions that affect thread synchronization. For example, the free_irq function, which disables the driver’s interrupt handler, blocks waiting for any outstanding interrupts to finish. Drivers can rely on this behavior to avoid races. We introduced a condition variable to model this synchronization. Similarly, most device accesses were modeled as writes to a special ioval variable. Thus, the only part of the device that required a more accurate model was its interrupt enabling logic, which affects the behavior of the driver’s interrupt handler thread.

Our original model consisted of eight threads. Liss ran out of memory on this model, so we simplified it to five threads by eliminating parts of driver functionality. Nevertheless, we believe that the resulting model represents the most complex synchronization synthesis case study, based on real-world code, reported in the literature.

The CPMAC driver used in this case study did not contain any known concurrency bugs, so we artificially simulated five typical race conditions that commonly occur in drivers of this type [4]. Liss was able to detect and automatically fix each of these defects (bottom part of Table 1). We only encountered two program locations where manual abstraction refinement was necessary.

We conclude that (1) our coarse abstraction is highly precise in practice; (2) manual effort involved in synchronization synthesis can be further reduced via automatic abstraction refinement; (3) additional work is required to improve the performance of our method to be able to handle real-world systems without simplification. In particular, our analysis indicates that significant speed-up can be obtained by incorporating a partial order reduction scheme into the language inclusion algorithm.

7 Conclusion

We believe our approach and the encouraging experimental results open several directions for future research. Combining the abstraction refinement, verification (checking language inclusion modulo an independence relation), and synthesis (inserting synchronization) more tightly could bring improvements in efficiency. An additional direction we plan on exploring is automated handling of deadlocks, i.e., extending our technique to automatically synthesize deadlock-free programs. Finally, we plan to further develop our prototype tool and apply it to other domains of concurrent systems code.