From non-preemptive to preemptive scheduling using synchronization synthesis

We present a computer-aided programming approach to concurrency. The approach allows programmers to program assuming a friendly, non-preemptive scheduler, and our synthesis procedure inserts synchronization to ensure that the final program works even with a preemptive scheduler. The correctness specification is implicit, inferred from the non-preemptive behavior. Consider the sequences of calls that the program makes to an external interface. The specification requires that any such sequence produced under a preemptive scheduler be included in the set of sequences produced under a non-preemptive scheduler. We guarantee that our synthesis does not introduce deadlocks and that the synchronization inserted is optimal w.r.t. a given objective function. The solution is based on a finitary abstraction, an algorithm for bounded language inclusion modulo an independence relation, and generation of a set of global constraints over synchronization placements. Each model of the global constraint set corresponds to a correctness-ensuring synchronization placement. The placement that is optimal w.r.t. the given objective function is chosen as the synchronization solution. We apply the approach to device-driver programming, where the driver threads call the software interface of the device and the API provided by the operating system. Our experiments demonstrate that our synthesis method is precise and efficient. The implicit specification helped us find one concurrency bug previously missed when model checking using an explicit, user-provided specification. We implemented objective functions for coarse-grained and fine-grained locking and observed that different synchronization placements are produced for our experiments, favoring a minimal number of synchronization operations or maximum concurrency, respectively.


Introduction
Concurrent shared-memory programming is notoriously difficult and error-prone. Program synthesis for concurrency aims to mitigate this complexity by synthesizing synchronization code automatically [4,5,8,11]. However, specifying the programmer's intent may be a challenge in itself. Declarative mechanisms, such as assertions, suffer from the drawback that it is difficult to ensure that the specification is complete and fully captures the programmer's intent.
We propose a solution where the specification is implicit. We observe that a core difficulty in concurrent programming originates from the fact that the scheduler can preempt the execution of a thread at any time. We therefore give the developer the option to program assuming a friendly, non-preemptive, scheduler. Our tool automatically synthesizes synchronization code to ensure that every behavior of the program under preemptive scheduling is included in the set of behaviors produced under non-preemptive scheduling. Thus, we use the non-preemptive semantics as an implicit correctness specification.
The non-preemptive scheduling model dramatically simplifies the development of concurrent software, including operating system (OS) kernels, network servers, database systems, etc. [13,14]. In this model, a thread can only be descheduled by voluntarily yielding control, e.g., by invoking a blocking operation. Synchronization primitives may be used for communication between threads, e.g., a producer thread may use a semaphore to notify the consumer about availability of data. However, one does not need to worry about protecting accesses to shared state: a series of memory accesses executes atomically as long as the scheduled thread does not yield.
In defining behavioral equivalence between preemptive and non-preemptive executions, we focus on externally observable program behaviors: two program executions are observationally equivalent if they generate the same sequences of calls to interfaces of interest. This approach facilitates modular synthesis, where a module's behavior is characterized in terms of its interaction with other modules. Given a multi-threaded program C and a synthesized program C′ obtained by adding synchronization to C, C′ is preemption-safe w.r.t. C if for each execution of C′ under a preemptive scheduler, there is an observationally equivalent non-preemptive execution of C. Our synthesis goal is to automatically generate a preemption-safe version of the input program.
We rely on abstraction to achieve efficient synthesis of multi-threaded programs. We propose a simple, data-oblivious abstraction inspired by an analysis of synchronization patterns in OS code, which tend to be independent of data values. The abstraction tracks the type of each access (read or write) to each memory location while ignoring the values. In addition, the abstraction tracks branching choices. Calls to an external interface are modeled as writes to a special memory location, with independent interfaces modeled as separate locations. To the best of our knowledge, this abstraction has not previously been explored in the verification and synthesis literature.
Two abstract program executions are observationally equivalent if they are equal modulo the classical independence relation I on memory accesses: accesses to different locations are independent, and accesses to the same location are independent iff they are both read accesses. Using this notion of equivalence, the notion of preemption-safety is extended to abstract programs.
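For intuition, this equivalence can be checked with the classical projection lemma for Mazurkiewicz traces: two words are equal up to commuting adjacent independent symbols iff they contain the same multiset of symbols and agree on their projections onto every dependent pair. A minimal sketch (the function names and encoding are ours, not the paper's):

```python
from collections import Counter
from itertools import combinations

def equivalent_mod_I(u, v, independent):
    """Projection-lemma check of equivalence modulo an independence
    relation: u and v are related by repeatedly commuting adjacent
    independent symbols iff all projections onto dependent pairs agree."""
    if Counter(u) != Counter(v):        # must have the same multiset of symbols
        return False
    for a, b in combinations(set(u), 2):
        if (a, b) in independent or (b, a) in independent:
            continue                    # independent pair: relative order is free
        if [s for s in u if s in (a, b)] != [s for s in v if s in (a, b)]:
            return False                # dependent pair must keep its order
    return True
```

For example, with a read of x and a write of y independent, `equivalent_mod_I(["r_x", "w_y"], ["w_y", "r_x"], {("r_x", "w_y")})` holds, while swapping a read and a write of the same location is not equivalence-preserving.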
Under abstraction, we model each thread as a nondeterministic finite automaton (NFA) over a finite alphabet, with each symbol corresponding to a read or a write of a particular variable. This enables us to construct NFAs N, representing the abstraction of the original program C under non-preemptive scheduling, and P, representing the abstraction of the synthesized program C′ under preemptive scheduling. We show that preemption-safety of C′ w.r.t. C is implied by preemption-safety of the abstract synthesized program w.r.t. the abstract original program, which, in turn, is implied by language inclusion modulo I of NFAs P and N. While the problem of language inclusion modulo an independence relation is undecidable [2], we show that the antichain-based algorithm for standard language inclusion [9] can be adapted to decide a bounded version of language inclusion modulo an independence relation.
Our overall synthesis procedure works as follows: we run the algorithm for bounded language inclusion modulo I, iteratively increasing the bound, until it reports that the inclusion holds, finds a counterexample, or reaches a timeout. In the first case, the synthesis procedure terminates successfully. In the second case, the counterexample is generalized to a set of counterexamples represented as a Boolean combination of ordering constraints over control-flow locations (as in [11]). These constraints are analyzed for patterns indicating the type of concurrency bug (atomicity or ordering violation) and the type of applicable fix (lock insertion or statement reordering). After applying the fix(es), the procedure is restarted from scratch; the process continues until we find a preemption-safe program or reach a timeout.
We implemented our synthesis procedure in a new prototype tool called Liss (Language Inclusion-based Synchronization Synthesis) and evaluated it on a series of device-driver benchmarks, including an Ethernet driver for Linux and the synchronization skeleton of a USB-to-serial controller driver. First, Liss was able to detect and eliminate all but two known race conditions in our examples; these included one race condition that we previously missed when synthesizing from explicit specifications [5], due to a missing assertion. Second, our abstraction proved highly efficient: Liss runs an order of magnitude faster on the more complicated examples than our previous synthesis tool based on the CBMC model checker. Third, our coarse abstraction proved surprisingly precise in practice: across all our benchmarks, we only encountered three program locations where manual abstraction refinement was needed to avoid the generation of unnecessary synchronization. Overall, our evaluation strongly supports the use of the implicit specification approach based on non-preemptive scheduling semantics, as well as the use of the data-oblivious abstraction, to achieve practical synthesis for real-world systems code. Contributions. First, we propose a new specification-free approach to synchronization synthesis. Given a program written assuming a friendly, non-preemptive scheduler, we automatically generate a preemption-safe version of the program. Second, we introduce a novel abstraction scheme and use it to reduce preemption-safety to language inclusion modulo an independence relation. Third, we present the first language inclusion-based synchronization synthesis procedure and tool for concurrent programs. Our synthesis procedure includes a new algorithm for a bounded version of our inherently undecidable language inclusion problem. Finally, we evaluate our synthesis procedure on several examples. To the best of our knowledge, Liss is the first synthesis tool capable of handling realistic (albeit
simplified) device driver code, while previous tools were evaluated on small fragments of driver code or on manually extracted synchronization skeletons.
Related work. Synthesis of synchronization is an active research area [3-6,10-12,15,16]. Closest to our work is a recent paper by Bloem et al. [3], which uses implicit specifications for synchronization synthesis. While their specification is given by sequential behaviors, ours is given by non-preemptive behaviors. This makes our approach applicable to scenarios where threads need to communicate explicitly. Further, correctness in [3] is determined by comparing values at the end of the execution. In contrast, we compare sequences of events, which serves as a more suitable specification for infinitely-looping reactive systems.

Fig. 1: Running example and its abstraction
Many efforts in synthesis of synchronization focus on user-provided specifications, such as assertions (our previous work [4,5,11]). However, it is hard to determine if a given set of assertions represents a complete specification. In this paper, we solve language inclusion, a computationally harder problem than reachability. However, due to our abstraction, our tool performs significantly better than the tools from [4,5], which are based on a mature model checker (CBMC [7]). Our abstraction is reminiscent of previously used abstractions that track reads and writes to individual locations (e.g., [1,17]). However, our abstraction is novel in that it additionally tracks some control-flow information (specifically, the branches taken), giving us higher precision at almost negligible computational cost. The synthesis part of our approach is based on [11].
In [16], the authors rely on assertions for synchronization synthesis and include iterative abstraction refinement in their framework. This is an interesting extension to pursue for our abstraction. In other related work, CFix [12] can detect and fix concurrency bugs by identifying simple bug patterns in the code.

Illustrative Example
Fig. 1a contains our running example. Consider the case where the procedures open_dev() and close_dev() are invoked in parallel, possibly multiple times (modeled as a non-deterministic while loop). The functions power_up() and power_down() represent calls to a device. For the non-preemptive scheduler, the sequence of calls to the device will always be a repeating sequence of one call to power_up() followed by one call to power_down(). Without additional synchronization, however, there could be two calls to power_up() in a row when executing with a preemptive scheduler. Such a sequence is not observationally equivalent to any sequence that can be produced when executing with a non-preemptive scheduler.
Fig. 1b contains the abstracted versions of the two procedures, open_dev_abs() and close_dev_abs() (we omit tracking of branching choices in the example). For instance, the instruction open = open + 1 is abstracted to the two instructions labeled (C) and (D). The abstraction is coarse, but still captures the problem. Consider two threads T1 and T2 running the open_dev_abs() procedure. The following trace is possible under a preemptive scheduler, but not under a non-preemptive scheduler: T1.A; T2.A; T1.B; T1.C; T1.D; T2.B; T2.C; T2.D. Moreover, the trace cannot be transformed by swapping independent events into any trace possible under a non-preemptive scheduler. This is because instructions A and D are not independent. Hence, the abstract trace exhibits the problem of two successive calls to power_up() when executing with a preemptive scheduler. Our synthesis procedure finds this problem and fixes it by introducing a lock in open_dev() (see Sec. 5).
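The claim about the example can be checked mechanically by exhaustively commuting adjacent independent events. The encoding below is our guess at the labels in Fig. 1b (A reads open in the branch test, B writes dev via power_up(), and C/D are the read and write of open), so treat it as an illustration rather than the paper's exact abstraction:

```python
from collections import deque

# Events are (thread, label). Our guessed accesses for open_dev_abs()
# (not the paper's exact Fig. 1b): A reads `open` (the branch test),
# B writes `dev` (the power_up() call), C reads and D writes `open`.
ACCESS = {"A": ("open", "r"), "B": ("dev", "w"),
          "C": ("open", "r"), "D": ("open", "w")}

def independent(e1, e2):
    (t1, l1), (t2, l2) = e1, e2
    if t1 == t2:
        return False                     # same-thread events never commute
    (v1, a1), (v2, a2) = ACCESS[l1], ACCESS[l2]
    return v1 != v2 or (a1 == "r" and a2 == "r")

def reachable(trace):
    """All traces obtainable by swapping adjacent independent events."""
    seen, queue = {tuple(trace)}, deque([tuple(trace)])
    while queue:
        t = queue.popleft()
        for i in range(len(t) - 1):
            if independent(t[i], t[i + 1]):
                s = t[:i] + (t[i + 1], t[i]) + t[i + 2:]
                if s not in seen:
                    seen.add(s)
                    queue.append(s)
    return seen

T1 = [("T1", l) for l in "ABCD"]
T2 = [("T2", l) for l in "ABCD"]
bad = [T1[0], T2[0], T1[1], T1[2], T1[3], T2[1], T2[2], T2[3]]
nonpreemptive = {tuple(T1 + T2), tuple(T2 + T1)}
print(reachable(bad) & nonpreemptive)    # set(): no equivalent NP trace
```

Under this encoding, no sequence of swaps turns the preemptive trace into either serial execution, because the relative order of dependent event pairs (such as T2.A before T1.D) is invariant under swapping.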

Preliminaries and Problem Statement
Syntax. We assume that programs are written in a concurrent while language W. A concurrent program C in W is a finite collection of threads ⟨T_1, . . ., T_n⟩, where each thread is a statement written in the syntax from Fig. 2. All W variables (program variables std_var, lock variables lock_var, and condition variables cond_var) range over integers, and each statement is labeled with a unique location identifier l. The only non-standard syntactic constructs in W relate to the tags. Intuitively, each tag is a communication channel between the program and an interface to an external system, and the input(tag) and output(tag, expr) statements read from and write to the channel. We assume that the program and the external system interface can only communicate through the channel.
In practice, we use the tags to model device registers. In our presentation, we consider only a single external interface; our implementation can handle communication with several interfaces.
Semantics. We begin by defining the semantics of a single thread in W, and then extend the definition to concurrent non-preemptive and preemptive semantics. Note that in our work, reads and writes are assumed to execute atomically, and further, we assume a sequentially consistent memory model.
Single-thread semantics. A program state is given by ⟨V, P⟩, where V is a valuation of all program variables and P is the statement that remains to be executed. Let us fix a thread identifier tid.
The operational semantics of a thread executing in isolation is given in Fig. 3. A single execution step ⟨V, P⟩ →^α ⟨V′, P′⟩ changes the program state from ⟨V, P⟩ to ⟨V′, P′⟩ while optionally outputting an observable symbol α. The absence of a symbol is denoted using ǫ. Most rules from Fig. 3 are standard; the special rules are the Havoc, Input, and Output rules.
1. Havoc: The statement l : x := havoc assigns x a non-deterministic value (say k) and outputs the observable (tid, havoc, k, x).
2. Input, Output: l : x := input(t) and l : output(t, e) read and write values to the channel t, and output (tid, input, k, t) and (tid, output, k, t), respectively, where k is the value read or written.
Intuitively, the observables record the sequence of non-deterministic guesses, as well as the input/output interaction with the tagged channels. In the following, e represents an expression, and e[v ↦ V(v)] evaluates e by replacing each variable v with its value in V.

Fig. 3: Single thread semantics of W

Non-preemptive semantics. The non-preemptive semantics of W is presented in Appendix A. It ensures that a single thread from the program keeps executing as detailed above until one of the following occurs: (a) the thread finishes execution, or it encounters (b) a yield statement, (c) a lock statement where the lock is taken, or (d) an await statement where the condition variable is not set. In these cases, a context switch is possible.
Preemptive semantics. The preemptive semantics of a program is obtained from the non-preemptive semantics by relaxing the condition on context switches, allowing context switches at all program points (see Appendix A).

Problem statement
A non-preemptive observation sequence of a program C is a sequence α_0 . . . α_k for which there exist program states S^pre_0, S^post_0, . . ., S^pre_k, S^post_k such that, according to the non-preemptive semantics of W: (a) for each 0 ≤ i ≤ k, there is a step from S^pre_i to S^post_i emitting α_i, and (b) for each 0 ≤ i < k, a (non-preemptive) context switch leads from S^post_i to S^pre_{i+1}. Preemptive observation sequences are defined analogously using the preemptive semantics. We are now ready to state our synthesis problem. Given a concurrent program C, the aim is to synthesize a program C′, by adding synchronization to C, such that C′ is preemption-safe w.r.t. C.

Language Inclusion Modulo an Independence Relation
We reduce the problem of checking whether a synthesized solution is preemption-safe w.r.t. the original program to an automata-theoretic problem.
Abstract semantics for W. We first define a single-thread abstract semantics for W (Fig. 4), which tracks the type of each access (read or write) to each memory location while abstracting away the values. Inputs/outputs to an external interface are modeled as writes to a special memory location (dev). Even inputs are modeled as writes because, in our applications, we cannot assume that reads from the external interface are free of side effects. Havocs become ordinary writes to the variable they are assigned to. Every branch is taken non-deterministically and tracked. The only constructs preserved are the lock and condition variables. The abstract program state consists of the valuations of the lock and condition variables and the statement that remains to be executed. In the abstraction, an observable is of the form (tid, {read, write, exit, loop, then, else}, v, l): it records the type of access (read/write) to variable v, as well as non-deterministic branching choices (exit/loop/then/else), which are not associated with any variable.
In Fig. 4, given an expression e, the function Reads(tid, e, l) represents the sequence (tid, read, v_1, l) ⋅ . . . ⋅ (tid, read, v_n, l), where v_1, . . ., v_n are the variables in e, in the order they are read to evaluate e. The concurrent non-preemptive and preemptive abstract semantics (Figs. 6 and 7) are the same as the concrete program semantics, with the single-thread semantics replaced by the abstract single-thread semantics. Locks and condition variables, and the operations on them, are not abstracted.
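As an illustration, the abstraction of straight-line statements can be sketched as a mapping from statements to sequences of abstract observables. The tiny statement encoding and the order of emitted symbols below are our assumptions, not the paper's formal rules from Fig. 4:

```python
# A sketch of the data-oblivious abstraction on straight-line statements.
# The statement encoding and symbol order are our assumptions.
def abstract_stmt(tid, stmt, loc):
    """Map a statement to its sequence of abstract observables."""
    obs = []
    if stmt[0] == "assign":              # ("assign", x, [vars read by e])
        _, x, reads = stmt
        obs += [(tid, "read", v, loc) for v in reads]
        obs.append((tid, "write", x, loc))
    elif stmt[0] == "output":            # ("output", tag, [vars read by e])
        _, _, reads = stmt
        obs += [(tid, "read", v, loc) for v in reads]
        obs.append((tid, "write", "dev", loc))   # I/O becomes a write to dev
    elif stmt[0] == "input":             # ("input", x, tag)
        obs.append((tid, "write", "dev", loc))   # even input writes dev
        obs.append((tid, "write", stmt[1], loc)) # ...then writes x
    return obs
```

For instance, `abstract_stmt("T1", ("assign", "open", ["open"]), "l")` yields a read of open followed by a write of open, mirroring how open = open + 1 becomes two labeled instructions in the running example.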
As with the concrete semantics of W, we can define the non-preemptive and preemptive observable sequences for the abstract semantics. For a concurrent program C, we denote the sets of abstract preemptive and non-preemptive observable sequences by [[C]]^P_abs and [[C]]^NP_abs, respectively. Two abstract observable sequences α_0 . . . α_k and β_0 . . . β_k are equivalent iff:
- for each thread tid, the subsequences of α_0 . . . α_k and β_0 . . . β_k containing only symbols of the form (tid, a, v, l), with a ∈ {read, write, exit, loop, then, else}, are equal,
- for each variable v, the subsequences of α_0 . . . α_k and β_0 . . . β_k containing only write symbols (of the form (tid, write, v, l)) are equal, and
- for each variable v, the multisets of symbols of the form (tid, read, v, l) between any two write symbols, as well as before the first write symbol and after the last write symbol, are identical.
We first show that the abstract semantics is sound w.r.t. preemption-safety (see Appendix B for the proof).
An NFA is a tuple A = (Σ, Q, Q_ι, ∆, F), where Σ is a finite alphabet, Q, Q_ι, and F are finite sets of states, initial states, and final states, respectively, and ∆ ⊆ Q × Σ × Q is a set of transitions. A word σ_0 . . . σ_k ∈ Σ* is accepted by A if there exists a sequence of states q_0 . . . q_{k+1} such that q_0 ∈ Q_ι, q_{k+1} ∈ F, and ∀i: (q_i, σ_i, q_{i+1}) ∈ ∆. The set of all words accepted by A is called the language of A and is denoted L(A).
Given a program C, we can construct automata A([[C]]^NP_abs) and A([[C]]^P_abs) that accept exactly the observable sequences under the respective semantics. We describe their construction informally. Each automaton state is a program state of the abstract semantics, and the alphabet is the set of abstract observable symbols. There is a transition from one state to another on an observable symbol (or on ǫ) iff the program can execute one step under the corresponding semantics to reach the other state while outputting that symbol (or no symbol, respectively).
Language inclusion modulo an independence relation. Let I be an irreflexive, symmetric binary relation over an alphabet Σ. We refer to I as the independence relation and to elements of I as independent symbol pairs. We define a symmetric binary relation ≈ over words in Σ*: for all words σ, σ′ ∈ Σ* and (α, β) ∈ I, (σ ⋅ αβ ⋅ σ′, σ ⋅ βα ⋅ σ′) ∈ ≈. Let ≈^t denote the reflexive transitive closure of ≈. Given a language L over Σ, the closure of L w.r.t. I, denoted Clo_I(L), is the set {σ ∈ Σ* : ∃σ′ ∈ L with (σ, σ′) ∈ ≈^t}. Thus, Clo_I(L) consists of all words that can be obtained from some word in L by repeatedly commuting adjacent independent symbol pairs from I.

Definition 1 (Language inclusion modulo an independence relation).
Given NFAs A, B over a common alphabet Σ and an independence relation I over Σ, the language inclusion problem modulo I is: L(A) ⊆ Clo_I(L(B))?
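For finite languages, Definition 1 can be checked directly by saturating L(B) under adjacent swaps. This brute-force sketch (our own code, exponential in general) is only meant to make the semantics of Clo_I concrete:

```python
def closure(language, independent):
    """Clo_I(L): all words reachable from L by repeatedly commuting
    adjacent independent symbol pairs (finite languages only)."""
    clo = set(map(tuple, language))
    frontier = list(clo)
    while frontier:
        w = frontier.pop()
        for i in range(len(w) - 1):
            a, b = w[i], w[i + 1]
            if (a, b) in independent or (b, a) in independent:
                s = w[:i] + (b, a) + w[i + 2:]
                if s not in clo:
                    clo.add(s)
                    frontier.append(s)
    return clo

def inclusion_mod_I(A, B, independent):
    """L(A) ⊆ Clo_I(L(B)) for finite languages A and B."""
    return set(map(tuple, A)) <= closure(B, independent)
```

With I = {("a", "b")}, the word "ba" is included in the closure of {"ab"}, but not if I is empty.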
We reduce preemption-safety under the abstract semantics to language inclusion modulo an independence relation. The independence relation I we use is defined on the set of abstract observable symbols as follows: ((tid, a, v, l), (tid′, a′, v′, l′)) ∈ I iff tid ≠ tid′ and one of the following holds: (a) v ≠ v′, or (b) a = a′ = read.

Checking Language Inclusion
We first focus on the problem of language inclusion modulo an independence relation (Definition 1). This question corresponds to preemption-safety (Thm. 1, Prop. 1), and its solution drives our synchronization synthesis (Sec. 5).
Theorem 2. For NFAs A, B over an alphabet Σ and an independence relation I over Σ, the problem of deciding L(A) ⊆ Clo_I(L(B)) is undecidable [2].
Fortunately, a bounded version of the problem is decidable. Recall the relation ≈ over Σ* from Sec. 3.2. We define a symmetric binary relation ≈_i over Σ*: (σ, σ′) ∈ ≈_i iff (σ, σ′) ∈ ≈ and σ, σ′ differ exactly in the order of the symbols at positions i and i + 1. Thus ≈_i relates words obtained from each other by commuting the (independent) symbols at positions i and i + 1. We next define a symmetric binary relation ≍ over Σ*: (σ, σ′) ∈ ≍ iff ∃σ_1, . . ., σ_t: (σ, σ_1) ∈ ≈_{i_1}, . . ., (σ_t, σ′) ∈ ≈_{i_{t+1}} and i_1 < . . . < i_{t+1}. The relation ≍ intuitively relates words obtained from each other by making a single forward pass commuting multiple pairs of adjacent symbols. Let ≍^k denote the k-fold composition of ≍ with itself. Given a language L over Σ, we use Clo_{k,I}(L) to denote the set {σ ∈ Σ* : ∃σ′ ∈ L with (σ, σ′) ∈ ≍^k}. In other words, Clo_{k,I}(L) consists of all words which can be generated from L using a finite-state transducer that remembers at most k symbols of its input words in its states.
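The relation ≍ and the bounded closure can be prototyped directly on words. The sketch below (our own code) enumerates all outcomes of one forward pass; note that successive swap positions may overlap, which is what lets a single pass carry one symbol arbitrarily far to the right:

```python
def one_pass(word, independent):
    """Words obtained from `word` by one forward pass: a sequence of swaps
    of adjacent independent symbols at strictly increasing positions."""
    out = {word}
    stack = [(word, 0)]
    while stack:
        w, start = stack.pop()
        for i in range(start, len(w) - 1):
            a, b = w[i], w[i + 1]
            if (a, b) in independent or (b, a) in independent:
                s = w[:i] + (b, a) + w[i + 2:]
                out.add(s)
                stack.append((s, i + 1))   # next swap must be further right
    return out

def bounded_closure(language, independent, k):
    """Clo_{k,I}(L): words reachable from L by at most k forward passes."""
    clo = set(map(tuple, language))
    for _ in range(k):
        clo |= {w2 for w in clo for w2 in one_pass(w, independent)}
    return clo
```

For example, with "a" independent of both "b" and "c", one pass already turns "abc" into "bca" (the "a" is buffered past two symbols), whereas moving two occurrences of "a" past a "b" requires two passes, illustrating why the bound k matters.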
Definition 2 (Bounded language inclusion modulo an independence relation). Given NFAs A, B over a common alphabet Σ, an independence relation I ⊆ Σ × Σ, and a constant k > 0, the k-bounded language inclusion problem modulo I is: L(A) ⊆ Clo_{k,I}(L(B))?
We present an algorithm to check k-bounded language inclusion modulo I, based on the antichain algorithm for standard language inclusion [9].
Antichain algorithm for language inclusion. Given a partial order (X, ⊑), an antichain over X is a set of elements of X that are incomparable w.r.t. ⊑.

In order to check L(A) ⊆ L(B), the antichain algorithm proceeds by exploring A and B in lockstep. While A is explored nondeterministically, B is determinized on the fly for exploration. The algorithm maintains an antichain of tuples of the form (s_A, S_B), where s_A is a state of A and S_B is a set of states of B, ordered by (s_A, S_B) ⊑ (s′_A, S′_B) iff s_A = s′_A and S_B ⊆ S′_B. The algorithm also maintains a frontier set of tuples yet to be explored.
Given a tuple (s_A, S_B) and a symbol α ∈ Σ, its α-successors are the tuples (s′_A, S′_B) where (s_A, α, s′_A) ∈ ∆_A and S′_B = {s′_B : ∃s_B ∈ S_B with (s_B, α, s′_B) ∈ ∆_B}. In each step, the antichain algorithm explores A and B by computing α-successors of all tuples in its current frontier set for all possible symbols α ∈ Σ. Whenever a tuple (s_A, S_B) is found with s_A ∈ F_A and S_B ∩ F_B = ∅, the algorithm reports a counterexample to language inclusion. Otherwise, the algorithm updates its frontier set and antichain to include the newly computed successors using the two rules enumerated below. Given a newly computed successor tuple p′:
- Rule 1: if there exists a tuple p in the antichain with p ⊑ p′, then p′ is not added to the frontier set or antichain;
- Rule 2: else, if there exist tuples p_1, . . ., p_n in the antichain with p′ ⊑ p_1, . . ., p_n, then p_1, . . ., p_n are removed from the antichain.
The algorithm terminates by either reporting a counterexample, or by declaring success when the frontier becomes empty.
Antichain algorithm for k-bounded language inclusion modulo I. This algorithm is essentially the same as the standard antichain algorithm, with the automaton B above replaced by an automaton B_{k,I} accepting Clo_{k,I}(L(B)). The set Q_{B_{k,I}} of states of B_{k,I} consists of triples (s_B, η_1, η_2), where s_B ∈ Q_B and η_1, η_2 are k-length words over Σ. Intuitively, the words η_1 and η_2 store symbols that are expected to be matched later along a run. The set of initial states of B_{k,I} is {(s_B, ǫ, ǫ) : s_B is an initial state of B}. The transition relation ∆_{B_{k,I}} is constructed by repeatedly applying the following rules, in order, for each state (s_B, η_1, η_2) and each symbol α. In what follows, η[∖i] denotes the word obtained from η by removing its i-th symbol.
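The classical antichain algorithm that the bounded variant builds on can be sketched as follows; the NFA encoding and function names are ours, not Liss's implementation:

```python
from collections import namedtuple

# NFA with delta given as: state -> {symbol: iterable of successor states}.
NFA = namedtuple("NFA", "alphabet init delta final")

def antichain_inclusion(A, B):
    """Antichain check of L(A) ⊆ L(B); returns a counterexample word
    accepted by A but not by B, or None if the inclusion holds."""
    def post_B(S, a):
        return frozenset(q2 for q in S for q2 in B.delta.get(q, {}).get(a, ()))
    antichain = []                       # ⊑-minimal tuples (s_A, S_B) seen
    frontier = [(s, frozenset(B.init), ()) for s in A.init]
    antichain.extend((s, S) for s, S, _ in frontier)
    while frontier:
        s, S, word = frontier.pop()
        if s in A.final and not (S & B.final):
            return word                  # A accepts `word`, B does not
        for a in A.alphabet:
            for s2 in A.delta.get(s, {}).get(a, ()):
                S2 = post_B(S, a)
                # Rule 1: skip tuples subsumed by a smaller antichain element
                if any(x == s2 and X <= S2 for x, X in antichain):
                    continue
                # Rule 2: drop antichain elements the new tuple subsumes
                antichain[:] = [(x, X) for x, X in antichain
                                if not (x == s2 and S2 <= X)]
                antichain.append((s2, S2))
                frontier.append((s2, S2, word + (a,)))
    return None
```

For instance, with A accepting only "ab", inclusion holds against a B accepting "ab", while checking against an automaton accepting only "ba" yields the counterexample word ("a", "b").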

Synchronization Synthesis
We now present our iterative synchronization synthesis procedure, which is based on the procedure in [11]; the reader is referred to [11] for further details. The synthesis procedure starts with the original program C and in each iteration generates a candidate synthesized program C′. The candidate C′ is checked for preemption-safety w.r.t. C under the abstract semantics, using our procedure for bounded language inclusion modulo I. If C′ is found preemption-safe w.r.t. C under the abstract semantics, the synthesis procedure outputs C′. Otherwise, an abstract counterexample cex is obtained. The counterexample is analyzed to infer additional synchronization to be added to C′ for generating a new synthesized candidate.
The counterexample trace cex is a sequence of event identifiers tid_0.l_0; . . .; tid_n.l_n, where each l_i is a location identifier. We first analyze the neighborhood of cex, denoted nhood(cex), consisting of traces that are permutations of the events in cex. Note that each trace corresponds to an abstract observation sequence. Furthermore, note that preemption-safety requires the abstract observation sequence of any trace in nhood(cex) to be equivalent to that of some trace in nhood(cex) feasible under non-preemptive semantics. Let bad traces refer to the traces in nhood(cex) that are feasible under preemptive semantics and do not meet the preemption-safety requirement. The goal of our counterexample analysis is to characterize all bad traces in nhood(cex) in order to enable inference of synchronization fixes.
In order to succinctly represent subsets of nhood(cex), we use ordering constraints. Intuitively, ordering constraints are of the following forms: (a) atomic constraints Φ = A < B, where A and B are events from cex; the constraint A < B represents the set of traces in nhood(cex) where event A is scheduled before event B; (b) Boolean combinations of atomic constraints Φ_1 ∧ Φ_2, Φ_1 ∨ Φ_2, and ¬Φ_1. Here, Φ_1 ∧ Φ_2 and Φ_1 ∨ Φ_2 respectively represent the intersection and union of the sets of traces represented by Φ_1 and Φ_2, and ¬Φ_1 represents the complement (with respect to nhood(cex)) of the set of traces represented by Φ_1.
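This constraint language can be prototyped as a small evaluator over traces in nhood(cex); the tuple encoding below is ours, not the paper's syntax:

```python
# Ordering constraints over events of nhood(cex); the tuple encoding
# ("<" | "and" | "or" | "not", ...) is our own representation.
def holds(constraint, trace):
    """Does `trace` (a list of event identifiers) satisfy the constraint?"""
    op = constraint[0]
    if op == "<":
        _, a, b = constraint
        return trace.index(a) < trace.index(b)   # A scheduled before B
    if op == "and":
        return holds(constraint[1], trace) and holds(constraint[2], trace)
    if op == "or":
        return holds(constraint[1], trace) or holds(constraint[2], trace)
    if op == "not":
        return not holds(constraint[1], trace)
    raise ValueError(op)
```

For example, the constraint T2.A < T1.D ∧ T1.D < T2.D is satisfied by the preemptive trace from Sec. 2 but not by the serial T1-then-T2 trace.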
Non-preemptive neighborhood. First, we generate all traces in nhood(cex) that are feasible under non-preemptive semantics. We represent a single trace π using an ordering constraint Φ_π that captures the ordering between non-independent accesses to variables in π. We represent all traces in nhood(cex) that are feasible under non-preemptive semantics using the expression Φ = ⋁_π Φ_π. The expression Φ acts as the correctness specification for traces in nhood(cex).
Counterexample generalization. We next build a quantifier-free first-order formula Ψ over the event identifiers in cex such that any model of Ψ corresponds to a bad trace in nhood(cex). We iteratively enumerate models π of Ψ, building a constraint ρ for each model π and generalizing each ρ into ρ_g to represent a larger set of bad traces.
Example. Our trace cex from Sec. 2 would be generalized to T2.A < T1.D ∧ T1.D < T2.D. Any trace that fulfills this constraint is bad.
Inferring fixes. From each generalized formula ρ_g described above, we infer possible synchronization fixes to eliminate all bad traces satisfying ρ_g. The key observation we exploit is that common concurrency bugs often show up in our formulas as simple patterns of ordering constraints between events. For example, the pattern tid_1.l_1 < tid_2.l_2 ∧ tid_2.l′_2 < tid_1.l′_1 indicates an atomicity violation and is rewritten into a lock insertion. The complete list of such rewrite rules is in Appendix D.
This list includes inference of locks and reordering of notify statements. The set of patterns we use for synchronization inference is not complete, i.e., there might be generalized formulae ρ_g that are not matched by any pattern. In practice, we found our current set of patterns to be adequate for most common concurrency bugs, including all bugs from the benchmarks in this paper. Our technique and tool can easily be extended with new patterns. Note that our procedure does not guarantee that the synthesized program C′ is deadlock-free. However, we avoid obvious deadlocks using heuristics such as merging overlapping locks. Further, our tool supports detection of any additional deadlocks introduced by synthesis, but relies on the user to fix them.
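The atomicity-violation pattern above can be matched syntactically on a generalized formula. This sketch uses our own encoding of atomic orderings as pairs of (thread, location) events and is only meant to illustrate the rewrite step, not the full rule set of Appendix D:

```python
# A generalized formula encoded as a set of atomic orderings, each a pair
# of (thread, location) events; the encoding is ours, not the paper's.
def infer_atomicity_fix(orderings):
    """Look for the pattern  tid1.l1 < tid2.l2  and  tid2.l2' < tid1.l1'
    (an atomicity violation) and suggest lock scopes for it."""
    for (a1, b1) in orderings:
        for (a2, b2) in orderings:
            t1, l1 = a1
            t2, l2 = b1
            if a2[0] == t2 and b2[0] == t1 and t1 != t2:
                # thread t1's region [l1, b2[1]] is interleaved by thread t2
                return {"fix": "lock",
                        "scopes": {t1: (l1, b2[1]), t2: (l2, a2[1])}}
    return None
```

Applied to the generalized constraint from the running example, T2.A < T1.D ∧ T1.D < T2.D, this suggests locking a region covering A through D, consistent with the lock inserted in open_dev().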
We implemented our synthesis procedure in Liss. Liss consists of 5000 lines of C++ code and uses Clang/LLVM and Z3 as libraries. It is available as open-source software, along with benchmarks, at https://github.com/thorstent/Liss. The language inclusion algorithm is available separately as a library called Limi (https://github.com/thorstent/Limi). Liss implements the synthesis method presented in this paper with several optimizations. For example, we take advantage of the fact that language inclusion violations can often be detected by exploring only a small fraction of the input automata, by constructing A([[C]]^NP_abs) and A([[C]]^P_abs) on the fly. Our prototype implementation has several limitations. First, Liss uses function inlining and therefore cannot handle recursive programs. Second, we do not implement any form of alias analysis, which can lead to unsound abstractions. For example, we abstract statements of the form "*x = 0" as writes to the variable x, while in reality other variables can be affected due to pointer aliasing. We sidestep this issue by manually massaging input programs to eliminate aliasing.
Finally, Liss implements a simplistic lock insertion strategy. The inference rules in Fig. 8 produce locks expressed as sets of instructions that should be inside a lock. Placing the actual lock and unlock instructions in the C code is challenging because the instructions in the trace may span several basic blocks or even functions. We follow a structural approach: we find the innermost common parent block of the first and last instructions of the lock and place the lock and unlock instructions there. This does not work if the code has gotos or returns that could cause control to jump over the unlock statement. At the moment, we simply report such situations to the user.
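Finding the innermost common parent block can be sketched over a toy block tree; the representation below is our own simplification, not Liss's actual Clang/LLVM-based implementation:

```python
# A toy block tree: either a statement label (str) or ("block", [children]).
def path_to(node, target, path):
    """Path of (block, child-index) pairs from the root down to `target`."""
    if node == target:
        return path
    if isinstance(node, tuple):
        for i, child in enumerate(node[1]):
            p = path_to(child, target, path + [(node, i)])
            if p is not None:
                return p
    return None

def innermost_common_block(tree, first, last):
    """Innermost block containing both statements: the last block shared
    by the two root-to-statement paths (lock/unlock would go here)."""
    p1, p2 = path_to(tree, first, []), path_to(tree, last, [])
    common = None
    for (n1, _), (n2, _) in zip(p1, p2):
        if n1 is n2:
            common = n1
        else:
            break
    return common
```

For statements in the same nested block, the inner block is returned; if only an outer block contains both, the lock and unlock would be placed there instead.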
We evaluate our synthesis method against the following criteria: (1) effectiveness of synthesis from implicit specifications; (2) efficiency of the proposed synthesis procedure; (3) precision of the proposed coarse abstraction scheme on real-world programs.
Implicit vs. explicit synthesis. In order to evaluate the effectiveness of synthesis from implicit specifications, we apply Liss to the set of benchmarks used in our previous ConRepair tool for assertion-based synthesis [5]. In addition, we evaluate Liss and ConRepair on several new assertion-based benchmarks (Table 1). The set includes microbenchmarks modeling typical concurrency bug patterns in Linux drivers and the usb-serial macrobenchmark, which models a complete synchronization skeleton of the USB-to-serial adapter driver. We preprocess these benchmarks by eliminating assertions used as explicit specifications for synthesis. In addition, we replace statements of the form assume(v) with await(v), redeclaring all variables v used in such statements as condition variables. This is necessary as our program syntax does not include assume statements.
We use Liss to synthesize a preemption-safe version of each benchmark. This method is based on the assumption that the benchmark is correct under non-preemptive scheduling and that bugs can only arise due to preemptive scheduling. We discovered two benchmarks (lc-rc.c and myri10ge.c) that violated this assumption, i.e., they contained race conditions that manifested under non-preemptive scheduling; Liss did not detect these race conditions. Liss was able to detect and fix all other known races without relying on assertions. Furthermore, Liss detected a new race in the usb-serial family of benchmarks that was not detected by ConRepair due to a missing assertion. We compared the output of Liss with manually placed synchronization (taken from real bug fixes) and found that the two versions were similar in most of our examples.
Performance and precision. ConRepair uses CBMC for verification and counterexample generation. Due to the coarse abstraction we use, both steps are much cheaper in Liss. For example, verification of usb-serial.c, the most complex benchmark in our set, took Liss 82 seconds, whereas it took ConRepair 20 minutes [5].
The loss of precision due to abstraction may cause the inclusion check to return a counterexample that is spurious in the concrete program, leading to unnecessary synchronization being synthesized. On our existing benchmarks, this occurred only once, in the usb-serial driver, where abstracting away the return value of a function led to an infeasible trace. We refined the abstraction manually by introducing a condition variable to model the return value.
While this result is encouraging, synthetic benchmarks are not necessarily representative of real-world performance. We therefore implemented another set of benchmarks based on a complete Linux driver for the TI AR7 CPMAC Ethernet controller. The benchmark was constructed as follows. We manually preprocessed the driver source code to eliminate pointer aliasing. We then combined the driver with a model, written in C, of the OS API and of the software interface of the device. We modeled most OS API functions as writes to a special memory location; groups of unrelated functions were modeled using separate locations. Slightly more complex models were required for API functions that affect thread synchronization. For example, the free_irq function, which disables the driver's interrupt handler, blocks waiting for any outstanding interrupts to finish; drivers can rely on this behavior to avoid races. We introduced a condition variable to model this synchronization. Similarly, most device accesses were modeled as writes to a special ioval variable. Thus, the only part of the device that required a more accurate model was its interrupt-enabling logic, which affects the behavior of the driver's interrupt handler thread.
Our original model consisted of eight threads. Liss ran out of memory on this model, so we simplified it to five threads by eliminating parts of the driver's functionality. Nevertheless, we believe that the resulting model represents the most complex synchronization synthesis case study based on real-world code reported in the literature.
The CPMAC driver used in this case study did not contain any known concurrency bugs, so we artificially introduced five typical race conditions that commonly occur in drivers of this type [4]. Liss was able to detect and automatically fix each of these defects (bottom part of Table 1). We encountered only two program locations where manual abstraction refinement was necessary.
We conclude that (1) our coarse abstraction is highly precise in practice; (2) the manual effort involved in synchronization synthesis can be further reduced via automatic abstraction refinement; and (3) additional work is required to improve the performance of our method so that it can handle real-world systems without simplification. In particular, our analysis indicates that a significant speed-up can be obtained by incorporating a partial-order reduction scheme into the language inclusion algorithm.

Conclusion
We believe our approach and the encouraging experimental results open several directions for future research. Combining the abstraction refinement, verification (checking language inclusion modulo an independence relation), and synthesis (inserting synchronization) steps more tightly could improve efficiency. An additional direction we plan to explore is the automated handling of deadlocks, i.e., extending our technique to automatically synthesize deadlock-free programs. Finally, we plan to further develop our prototype tool and apply it to other domains of concurrent systems code.

Fig. 4: Single-thread abstract semantics of W. The abstract program semantics (Figures 6 and 7) is the same as the concrete program semantics, with the single-thread semantics replaced by its abstraction, yielding [[C]]^P_abs and [[C]]^NP_abs, respectively. Abstract observation sequences α0 . . . αk and β0 . . . βk are equivalent under the same conditions as concrete observation sequences.

Proposition 3. If our synthesis procedure generates a program C′, then C′ is preemption-safe with respect to C.
Example. The generalized constraint T2.A < T1.D ∧ T1.D < T2.D matches the lock rule and yields lock(T2.[A : D], T1.[D : D]). Since the lock involves events in the same function, it is merged into a single lock around instructions A and D in open_dev_abs. This lock is not sufficient to make the program preemption-safe; another iteration of the synthesis procedure generates a further counterexample for analysis and synchronization inference.
and (c) for the initial state S_ι and a final state S_f (i.e., one where all threads have finished execution), ⟨S_ι⟩ →* ⟨S_f⟩. Similarly, a preemptive observation sequence of a program C is a sequence α0 . . . αk as above, with the non-preemptive semantics replaced by the preemptive semantics. We denote the sets of non-preemptive and preemptive observation sequences of a program C by [[C]]^NP and [[C]]^P, respectively. We say that observation sequences α0 . . . αk and β0 . . . βk are equivalent if:
- the subsequences of α0 . . . αk and β0 . . . βk containing only symbols of the form (tid, Input, k, t) and (tid, Output, k, t) are equal, and
- for each thread identifier tid, the subsequences of α0 . . . αk and β0 . . . βk containing only symbols of the form (tid, Havoc, k, x) are equal.
Intuitively, observation sequences are equivalent if they exhibit the same interaction with the interface and make the same non-deterministic choices in each thread. For sets of observation sequences O1 and O2, we write O1 ⊆ O2 to denote that each sequence in O1 has an equivalent sequence in O2.
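In our own (non-authoritative) projection notation, writing π_IO(α) for the projection of α onto symbols of the form (tid, Input, k, t) or (tid, Output, k, t), and π_H^tid(α) for its projection onto symbols (tid, Havoc, k, x) of thread tid, the two conditions read:

```latex
\alpha \equiv \beta \;\iff\; \pi_{IO}(\alpha) = \pi_{IO}(\beta)
\;\wedge\; \forall\, tid:\; \pi_{H}^{tid}(\alpha) = \pi_{H}^{tid}(\beta)
```

and O_1 ⊆ O_2 holds iff every α ∈ O_1 has some β ∈ O_2 with α ≡ β.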