Symbolic Partial-Order Execution for Testing Multi-Threaded Programs

We describe a technique for systematic testing of multi-threaded programs. We combine Quasi-Optimal Partial-Order Reduction, a state-of-the-art technique that tackles path explosion due to interleaving non-determinism, with symbolic execution to handle data non-determinism. Our technique iteratively and exhaustively finds all executions of the program. It represents program executions using partial orders and finds the next execution using an underlying unfolding semantics. We avoid the exploration of redundant program traces using cutoff events. We implemented our technique as an extension of KLEE and evaluated it on a set of large multi-threaded C programs. Our experiments found several previously undiscovered bugs and undefined behaviors in memcached and GNU sort, showing that the new method is capable of finding bugs in industrial-size benchmarks.


Introduction
Advances in formal testing and the increased availability of affordable concurrency have spawned two opposing trends: While it has become possible to analyze increasingly complex sequential programs in new and powerful ways, many projects are now embracing parallel processing to fully exploit modern hardware, thus raising the bar for practically useful formal testing. In order to make formal testing accessible to software developers working on parallel programs, two main problems need to be solved. Firstly, a significant portion of the API in concurrency libraries such as libpthread must be supported. Secondly, the analysis must be accessible to non-experts in formal verification. Currently, this niche is mostly occupied by manual and fuzz testing, oftentimes combined with dynamic concurrency checkers such as ThreadSanitizer [47] or Helgrind [2].
Data non-determinism and scheduling non-determinism are two major sources of path explosion in program analysis. Symbolic execution [30,9,10,22,40] is a technique to reason about input data in sequential programs, and it is capable of dealing with real-world programs. Partial-Order Reductions (PORs) [4,43,20,19] are a large family of techniques that explore a reduced number of thread interleavings without missing any relevant behavior.
In this paper we propose a technique that combines symbolic execution and a Quasi-Optimal POR [36]. In essence, our approach (1) runs the program using a symbolic executor, (2) builds a partial order representing the occurrences of POSIX threading synchronization primitives (library functions pthread_*) seen during that execution, (3) adds the partial order to an underlying tree-like unfolding data structure [43,33], (4) computes the first events of the next partial orders to explore, and (5) selects a new partial order to explore and starts again. We use cutoff events [33] to prune the exploration of different traces that reach the same state, thus natively dealing with non-terminating executions.
We implemented our technique as an extension of KLEE. During the evaluation of this prototype we found nine bugs (that we attribute to four root causes) in the production version of memcached. All of these bugs have since been confirmed by the memcached maintainers and are fixed as of version 1.5.21. Our tool handles a significant portion of the POSIX threading API [25], including barriers, mutexes and condition variables without being significantly harder to use than common fuzz testing tools.
The main challenge that our approach needs to address is scalability in the face of an enormous state space. We tackle this challenge by detecting when any two Mazurkiewicz traces reach the same program state, so that only one of them is explored further. Additionally, we exploit the fact that data races on non-atomic variables cause undefined behavior in C [26, § 5.1.2.4/35], which means that any unsynchronized memory access is, strictly speaking, a bug. By adding a data race detection algorithm, we can thereby restrict thread scheduling decisions to synchronization primitives, such as operations on mutexes and condition variables, which significantly reduces the state space.
This work has three core contributions, the combination of which enables the analysis of real-world multi-threaded programs (see also Sec. 6 for related work):
1. A partial-order reduction algorithm capable of handling real-world POSIX programs that may use an arbitrary number of threads, barriers, mutexes and condition variables. Our algorithm is capable of continuing the analysis in the face of deadlocks.
2. A cutoff algorithm that recognizes whenever two Mazurkiewicz traces reach the same program state, as identified by its actual memory contents. This significantly prunes the search space and even enables the partial-order reduction to deal with non-terminating executions.
3. An implementation that finds real-world bugs.

Overview
The technique proposed in this paper can be described as a process of five conceptual steps, each of which we describe in a subsection below.

Sequential Executions
Consider the program shown in Fig. 1a. Assume that all variables are initially set to zero. The statement a = in() initializes variable a non-deterministically. A run of the program is a sequence of actions, i.e., pairs ⟨i, s⟩ where i ∈ ℕ identifies a thread that executes a statement s. For instance, the sequence σ₁ := ⟨1, a=in()⟩ ⟨1, c=3⟩ ⟨2, b=c⟩ ⟨2, a<0⟩ ⟨2, puts("n")⟩ is a run of Fig. 1a. This run represents all program paths where both statements of thread 1 run before the statements of thread 2, and where the statement a = in() initializes variable a to a negative number. In our notion of run, concurrency is represented explicitly (via thread identifiers) and data non-determinism is represented symbolically (via constraints on program variables). To keep things simple, the example only has atomic integers (implicitly guarded by locks) instead of POSIX synchronization primitives.
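To make the discussion concrete, the following is a plausible C++ reconstruction of the program of Fig. 1a, pieced together from the runs σ₁ and σ₂ described in this section (the figure itself is not reproduced here); in() stands in for a non-deterministic input, which a symbolic executor would treat as a symbolic value.

#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> a{0}, b{0}, c{0};  // all variables initially zero

// Stand-in for non-deterministic input; under KLEE this would be a
// symbolic value (e.g., obtained via klee_make_symbolic).
int in() {
  int x = 0;
  std::scanf("%d", &x);
  return x;
}

void thread1() {
  a = in();  // initializes a non-deterministically
  c = 3;
}

void thread2() {
  b = c.load();
  if (a < 0) std::puts("n");
  else       std::puts("y");
}

int main() {
  std::thread t1(thread1), t2(thread2);
  t1.join();
  t2.join();
}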

Independence between Actions and Partial-Order Runs
Many POR techniques use a notion called independence [20] to avoid exploring concurrent interleavings that lead to the same state. An independence relation associates pairs of actions that commute (running them in either order results in the same state). For illustration purposes, in Fig. 1 we consider two actions as dependent iff either both belong to the same thread or one of them writes to a variable which is read or written by the other; two actions are independent iff they are not dependent. A sequential run of the program can be viewed as a partial order when we take the independence of actions into account. These partial orders are known as dependency graphs in Mazurkiewicz trace theory [32] and as partial-order runs in this paper. Figures 1b to 1f show all the partial-order runs of Fig. 1a. The partial-order run associated with the run σ₁ above is Fig. 1c. For σ₂ := ⟨2, b=c⟩ ⟨2, a>=0⟩ ⟨1, a=in()⟩ ⟨2, puts("y")⟩ ⟨1, c=3⟩, we get the partial order shown in Fig. 1f.
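The illustrative dependence relation above is easy to state in code. The sketch below uses our own notation (it is not the paper's implementation): two actions are dependent iff they share a thread, or one writes a variable that the other reads or writes.

#include <set>
#include <string>

struct Action {
  int thread;                           // executing thread i
  std::set<std::string> reads, writes;  // variables touched by the statement
};

bool intersects(const std::set<std::string>& x,
                const std::set<std::string>& y) {
  for (const auto& v : x)
    if (y.count(v)) return true;
  return false;
}

// Dependent iff same thread, or one action writes a variable that the
// other reads or writes; independent iff not dependent.
bool dependent(const Action& a, const Action& b) {
  return a.thread == b.thread ||
         intersects(a.writes, b.writes) ||
         intersects(a.writes, b.reads) ||
         intersects(a.reads, b.writes);
}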

Unfolding: Merging the Partial-Orders
An unfolding [33,16,39] is a tree-like structure that uses partial orders to represent concurrent executions and conflict relations to represent thread interference and data non-determinism. We can define unfolding semantics for programs in two conceptual steps: (1) identify isomorphic events that occur in different partial-order runs; (2) bind the partial-orders together using a conflict relation.
Two events are isomorphic when they are structurally equivalent, i.e., they have the same label (run the same action) and their causal (i.e., happens-before) predecessors are (transitively) isomorphic. The number within every event in Figs. 1b to 1f identifies isomorphic events.
Isomorphic events from different partial orders can be merged together using a conflict relation for the un-merged parts of those partial orders. To understand why conflict is necessary, consider the set of events C := {1, 2}. It obviously represents part of a partial-order run (Fig. 1c, for instance). Similarly, the events C′ := {1, 8, 9} represent (part of) a run. However, their union C ∪ C′ does not represent any run, because (1) it does not describe what happens-before relation exists between the dependent actions of events 2 and 8, and (2) it executes the statement c=3 twice. Unfoldings fix this problem by introducing a conflict relation between events. Conflicts are to unfoldings what branches are to trees. If we declare that events 2 and 8 are in conflict, then any conflict-free (and causally-closed) subset of C ∪ C′ is exactly one of the original partial orders. This lets us merge the common parts of multiple partial orders without losing track of the original partial orders. Figure 1g represents the unfolding of the program (after merging all five partial-order executions). Conflicts between events are represented by dashed red lines. Each original partial order can be retrieved by taking a (⊆-maximal) set of events which is conflict-free (no two events in conflict are in the set) and causally closed (if an event is in the set, then so are all its causal predecessors).
For instance, the partial order in Fig. 1d can be retrieved by resolving the conflicts between events 1 vs 14, 2 vs 8, and 10 vs 12 in favor of, respectively, 1, 8, and 10. Resolving in favor of 1 means that events 14 to 17 cannot be selected, because 14 is discarded and events 15 to 17 causally succeed it. Similarly, resolving in favor of 8 and 10 means that only events 9 and 11 remain eligible, and these are not in conflict with each other; all other events are causal successors of either 2 or 12.

Exploring the Unfolding
Since the unfolding represents all runs of the program via a set of compactly merged, prefix-sharing partial orders, enumerating all behaviors of the program reduces to exploring all partial-order runs represented in its unfolding. Our algorithm iteratively enumerates all ⊆-maximal partial-order runs.
In simplified terms, it proceeds as follows. Initially we explore the black events shown in Fig. 1h, thereby exploring the run shown in Fig. 1b. We discover the next partial order by computing the so-called conflicting extensions of the current partial order. These are, intuitively, events that are in conflict with some event in our current partial order but whose causal predecessors are all contained in it. In Fig. 1h these are shown in circles, events 8 and 6.
We now find the next partial order by (1) selecting a conflicting extension, say event 6, (2) removing all events in conflict with the selected extension as well as their causal successors, in this case events 4 and 5, and (3) expanding the partial order until it becomes maximal, thus exploring the partial order of Fig. 1c, shown as the black events of Fig. 1i. Next we select event 8 (removing 2 and its causal successors) and explore the partial order of Fig. 1j. Note that this reveals two new conflicting extensions that were hidden until now, events 12 and 14 (hidden because 8 was a causal predecessor of theirs, but 8 was not in our partial order). Selecting either of the two extensions makes the algorithm explore the last two partial orders.

Cutoff Events: Pruning the Unfolding
When the program has non-terminating runs, its unfolding contains infinite partial orders, and the algorithm above will not finish. To analyze non-terminating programs we use cutoff events [33]. In short, certain events do not need to be explored because they reach the same state as another event that has already been explored via a shorter (partial-order) run. Our algorithm prunes the unfolding at these cutoff events, thus handling terminating and non-terminating programs that repeatedly reach the same state.

Main Algorithm
This section formally describes the approach presented in this paper.

Programs, Actions, and Runs
Let P := ⟨T, L, C⟩ represent a (possibly non-terminating) multi-threaded POSIX C program, where T is the set of statements, L is the set of POSIX mutexes used in the program, and C is the set of condition variables. This is a deliberately simplified presentation of our program syntax; see App. A for full details. We represent the behavior of each statement in P by an action, i.e., a pair ⟨i, b⟩ in A ⊆ ℕ × B, where i ≥ 1 identifies the thread executing the statement and b is the effect of the statement. We consider the effects described below; for each of them we informally explain its intent and how actions of that effect interleave with actions of other effects, and we also (informally) define an independence relation (see Sec. 2.2) between actions. In App. B we use actions and effects to define a labeled transition system semantics for P.
Local actions. An action ⟨i, loc, t⟩ represents the execution of a local statement t from thread i, i.e., a statement which manipulates local variables. For instance, the actions labeling events 1 and 3 in Fig. 2b are local actions. Local actions are only dependent on actions of the same thread.
Mutex lock/unlock. Actions ⟨i, acq, l⟩ and ⟨i, rel, l⟩ resp. represent that thread i locks or unlocks mutex l ∈ L. The semantics of these actions correspond to the so-called NORMAL mutexes in the POSIX standard. Actions of ⟨acq, l⟩ or ⟨rel, l⟩ effect are only dependent on actions whose effect is an operation on the same mutex l (acq, rel, w₁ or w₂; see below). For instance, the action of event 4 (rel) in Fig. 2b depends on the action of event 6 (acq).
Wait on condition variables. The occurrence of a pthread_cond_wait(c, l) statement is represented by two separate actions of effects ⟨w₁, c, l⟩ and ⟨w₂, c, l⟩. An action ⟨i, w₁, c, l⟩ represents that thread i has atomically released the lock l and started waiting on condition variable c. An action ⟨i, w₂, c, l⟩ indicates that thread i has been woken up by a signal or broadcast operation on c and that it successfully re-acquired mutex l. For instance, the action ⟨1, w₁, c, m⟩ of event 10 in Fig. 2c represents that thread 1 has released mutex m and is waiting for c to be signaled. After the signal happens (event 12), the action ⟨1, w₂, c, m⟩ of event 14 represents that thread 1 wakes up and re-acquires mutex m. An action ⟨i, w₁, c, l⟩ is dependent on any action whose effect operates on mutex l (acq, rel, w₁ or w₂), as well as on signals directed to thread i (⟨sig, c, i⟩; see below), lost signals (⟨sig, c, 0⟩; see below), and any broadcast (⟨bro, c, W⟩ for any W ⊆ ℕ). Similarly, an action ⟨i, w₂, c, l⟩ is dependent on any action whose effect operates on lock l, as well as on signals and broadcasts directed to thread i (that is, either ⟨sig, c, i⟩ or ⟨bro, c, W⟩ with i ∈ W).
Signal/broadcast on condition variables. An action ⟨i, sig, c, j⟩ with j ≥ 0 indicates that thread i executed a pthread_cond_signal(c) statement. If j = 0, then no thread was waiting on condition variable c and the signal had no effect, as per the POSIX semantics. We refer to these as lost signals. Example: events 7 and 17 in Figs. 2b and 2d are labeled by lost signals; in both cases thread 1 was not waiting on the condition variable when the signal happened. However, when j ≥ 1, the action represents that thread j wakes up by this signal. Whenever a signal wakes up a thread j ≥ 1, we can always find a (unique) w₁ action of thread j that happened before the signal and a unique w₂ action of thread j that happens after the signal. For instance, event 12 in Fig. 2c signals thread 1, which went to sleep in the w₁ event 10 and wakes up in the w₂ event 14. Similarly, an action ⟨i, bro, c, W⟩ with W ⊆ ℕ indicates that thread i executed a pthread_cond_broadcast(c) statement and that every thread j ∈ W was woken up. If W = ∅, then no thread was waiting on condition variable c (lost broadcast). Lost signals and broadcasts on c depend on any action of ⟨w₁, c, ·⟩ effect as well as on any non-lost signal/broadcast on c. Non-lost signals and broadcasts on c that wake up thread j depend on w₁ and w₂ actions of thread j as well as on any signal/broadcast (lost or not) on the same condition variable.
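The effect vocabulary and the dependence rules above can be summarized in code. The following C++ sketch is our own simplification: it encodes the rules for local and mutex actions exactly as stated, and only marks where the condition-variable cases (w₁, w₂, sig, bro) would go.

#include <set>

enum class Effect { Loc, Acq, Rel, W1, W2, Sig, Bro };

struct Action {
  int thread;           // i >= 1
  Effect effect;
  int mutex = -1;       // l, for Acq/Rel/W1/W2
  int cond = -1;        // c, for W1/W2/Sig/Bro
  int target = -1;      // j, for Sig (0 encodes a lost signal)
  std::set<int> woken;  // W, for Bro
};

bool operatesOnMutex(const Action& a) {
  return a.effect == Effect::Acq || a.effect == Effect::Rel ||
         a.effect == Effect::W1 || a.effect == Effect::W2;
}

bool dependent(const Action& a, const Action& b) {
  if (a.thread == b.thread) return true;  // same-thread actions
  if (a.effect == Effect::Loc || b.effect == Effect::Loc)
    return false;                         // local: depends on same thread only
  if (operatesOnMutex(a) && operatesOnMutex(b) && a.mutex == b.mutex)
    return true;                          // operations on the same mutex
  // The sig/bro vs. w1/w2 cases follow the rules stated above and are
  // elided here for brevity.
  return false;
}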
A run of P is a sequence of actions in A* which respects the constraints stated above for actions. For instance, a run for the program shown in Fig. 2a is the sequence of actions which labels any topological order of the events shown in any partial order in Figs. 2b to 2e. The sequence σ′ := ⟨1, loc, x=in()⟩ ⟨2, loc, y=1⟩ ⟨1, acq, m⟩ ⟨1, loc, x>=0⟩ ⟨1, rel, m⟩ ⟨2, acq, m⟩ is a run of Fig. 2a. Naturally, if σ ∈ A* is a run, any prefix of σ is also a run. Runs explicitly represent concurrency, using thread identifiers, and symbolically represent data non-determinism, using constraints, as illustrated by the 1st and 4th actions of the run above. We let runs(P) denote the set of all runs of P.
A concrete state of P is a tuple that represents, intuitively, the program counters of each thread, the values of all memory locations, the mutexes locked by each thread, and, for each condition variable, the set of threads waiting for it (see App. B for a formal definition). Since runs represent operations on symbolic data, they reach a symbolic state, which conceptually corresponds to a set of concrete states of P .
The state of a run σ, written state(σ), is the set of all concrete states of P that are reachable when the program executes the run σ. For instance, the run σ′ given above reaches a state consisting of all program states where y is 1, x is a non-negative number, thread 2 owns mutex m and its instruction pointer is at line 3, and thread 1 has finished. We let reach(P) := ⋃_{σ ∈ runs(P)} state(σ) denote the set of all reachable states of P.

Independence
In the previous section, given an action a ∈ A we informally defined the set of actions which are dependent on a, therefore indirectly defining an independence relation. We now show that this relation is a valid independence [19,43]. Intuitively, an independence relation is valid when every pair of actions it declares as independent can be executed in any order while still producing the same state.
Our independence relation is valid only for data-race-free programs. We say that P is data-race free iff any two local actions a := ⟨i, loc, t⟩ and a′ := ⟨i′, loc, t′⟩ from different threads (i ≠ i′) commute at every reachable state of P; see App. C for additional details. This ensures that local statements of different threads of P modify the memory without interfering with each other. Our technique does not use data races as a source of thread interference for the POR: it will not explore two execution orders for two statements that exhibit a data race. However, it can be used to detect and report data races found during the POR exploration, as we will see in Sec. 4.4.

Partial-Order Runs
A labeled partial order (LPO) is a tuple ⟨X, <, h⟩ where X is a set of events, < ⊆ X × X is a causality (a.k.a. happens-before) relation, and h : X → A labels each event with an action in A.
A partial-order run of P is an LPO that represents a run of P without enforcing an order of execution on actions that are independent. All partial-order runs of Fig. 2a are shown in Figs. 2b to 2e.
Given a run σ of P, we obtain the corresponding partial-order run E_σ := ⟨E, <, h⟩ by the following procedure: (1) initialize E_σ to be the only totally-ordered LPO that consists of |σ| events where the i-th event is labeled by the i-th action of σ; (2) for every two events e, e′ such that e < e′, remove the pair ⟨e, e′⟩ from < if h(e) is independent of h(e′); (3) restore transitivity in < (i.e., if e < e′ and e′ < e″, then add ⟨e, e″⟩ to <). The resulting LPO is a partial-order run of P.
Furthermore, the originating run σ is an interleaving of E_σ. Given an LPO E := ⟨E, <, h⟩, an interleaving of E is a sequence that labels a topological ordering of E. Formally, it is any sequence h(e₁), ..., h(eₙ) such that E = {e₁, ..., eₙ} and e_i < e_j ⟹ i < j. We let inter(E) denote the set of all interleavings of E. Given a partial-order run E of P, the interleavings inter(E) have two important properties: every interleaving in inter(E) is a run of P, and any two interleavings σ, σ′ ∈ inter(E) reach the same state, state(σ) = state(σ′).
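The construction of E_σ can be sketched directly from steps (1)-(3): order two events iff their actions are dependent, then close transitively. The sketch below is our own illustration, generic over the action type, and takes the dependence predicate from the previous sections as a parameter.

#include <functional>
#include <vector>

// before[i][j] == true iff the i-th event of the run happens-before the
// j-th event in the resulting partial-order run.
template <typename Action>
std::vector<std::vector<bool>> partialOrderOf(
    const std::vector<Action>& run,
    const std::function<bool(const Action&, const Action&)>& dependent) {
  const size_t n = run.size();
  std::vector<std::vector<bool>> before(n, std::vector<bool>(n, false));
  for (size_t i = 0; i < n; ++i)        // steps (1)+(2): of the total order,
    for (size_t j = i + 1; j < n; ++j)  // keep only the dependent pairs
      before[i][j] = dependent(run[i], run[j]);
  for (size_t k = 0; k < n; ++k)        // step (3): restore transitivity
    for (size_t i = 0; i < n; ++i)
      for (size_t j = 0; j < n; ++j)
        if (before[i][k] && before[k][j]) before[i][j] = true;
  return before;
}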

Prime-Event Structures
We use unfoldings to give semantics to multi-threaded programs. Unfoldings are Prime Event Structures (PESs) [39]. A PES is a tree-like representation of system behavior that uses partial orders to represent concurrent interaction. Fig. 3a depicts an unfolding of the program in Fig. 2a. The nodes are events, and solid arrows represent causal dependencies: events 1 and 4 must fire before 8 can fire. The dotted lines represent conflicts: 2 and 5 are not in conflict and may occur in any order, but 2 and 16 are in conflict and cannot occur in the same (partial-order) execution.
Formally, a Prime Event Structure (PES) [39] is a tuple E := ⟨E, <, #, h⟩ with a set of events E, a causality relation < ⊆ E × E which is a strict partial order, a conflict relation # ⊆ E × E that is symmetric and irreflexive, and a labeling function h : E → A.
The history of an event, ⌈e⌉ := {e′ ∈ E : e′ < e}, is the least set of events that must fire before e can fire. A configuration of E is a finite set C ⊆ E that is causally closed (⌈e⌉ ⊆ C for all e ∈ C) and conflict-free (¬(e # e′) for all e, e′ ∈ C). We let conf(E) denote the set of all configurations of E. For any e ∈ E, the local configuration of e is defined as [e] := ⌈e⌉ ∪ {e}. In Fig. 3a, the set {1, 2} is a configuration, and in fact a local configuration, i.e., [2] = {1, 2}. The local configuration of event 6 is {1, 2, 3, 4, 5, 6}. The set {2, 5, 16} is not a configuration, because it is neither causally closed (1 is missing) nor conflict-free (5 # 16).
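A minimal sketch of the configuration test follows directly from the two conditions; histories and the conflict relation are passed in explicitly here rather than taken from a PES data structure.

#include <map>
#include <set>
#include <utility>

using Event = int;

// C is a configuration iff it is causally closed and conflict-free.
bool isConfiguration(const std::set<Event>& C,
                     const std::map<Event, std::set<Event>>& history,   // ⌈e⌉
                     const std::set<std::pair<Event, Event>>& conflict) // e # e'
{
  for (Event e : C) {
    for (Event p : history.at(e))       // causally closed: ⌈e⌉ ⊆ C
      if (!C.count(p)) return false;
    for (Event f : C)                   // conflict-free: ¬(e # f)
      if (conflict.count({e, f})) return false;
  }
  return true;
}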

Unfolding Semantics for Programs
Given a program P, in this section we define a PES U_P such that every configuration of U_P is a partial-order run of P.
Let E₁ := ⟨E₁, <₁, h₁⟩, ..., Eₙ := ⟨Eₙ, <ₙ, hₙ⟩ be the collection of all the partial-order runs of P. The events of U_P are the equivalence classes of the structural equality relation that we intuitively described in Sec. 2.3. Two events are structurally equal iff their canonical names coincide. Given an event e ∈ E_i in some partial-order run E_i, the canonical name cn(e) of e is the pair ⟨a, H⟩ where a := h_i(e) is the executed action and H := {cn(e′) : e′ <_i e} is the set of canonical names of those events that causally precede e in E_i. Intuitively, canonical names indicate that action h(e) runs after the (transitively canonicalized) partially-ordered history preceding e. For instance, in Fig. 3a, cn(1) = ⟨⟨1, x=in()⟩, ∅⟩ and cn(6) = ⟨⟨2, rel, m⟩, {cn(4), cn(5)}⟩. In fact, the number within every event in Figs. 2b to 2e identifies (is in bijective correspondence with) its canonical name. Event 19 in Fig. 2d is the same event as event 19 in Fig. 2e because it fires the same action (⟨1, acq, m⟩) after the same causal history ({1, 5, 16, 17, 18}). Event 2 in Fig. 2c and event 19 in Fig. 2d are not the same event: although h(2) = h(19) = ⟨1, acq, m⟩, they have different causal histories ({1} vs. {1, 5, 16, 17, 18}). Obviously, events 4 and 6 in Fig. 2b are different because h(4) ≠ h(6). We can now define the unfolding of P as the only PES U_P := ⟨E, <, #, h⟩ such that
- E := {cn(e) : e ∈ E₁ ∪ ... ∪ Eₙ} is the set of canonical names of all events;
- the relation < ⊆ E × E is the union <₁ ∪ ... ∪ <ₙ of all happens-before relations;
- any two events e, e′ ∈ E of U_P are in conflict, e # e′, when e ≠ e′, ¬(e < e′), ¬(e′ < e), and h(e) is dependent on h(e′).
Fig. 3a is the unfolding produced by merging all four partial-order runs in Figs. 2b to 2e. Note that the configurations of U_P are partial-order runs of P. Furthermore, the ⊆-maximal configurations are exactly the four originating partial orders. It is possible to prove that U_P is a semantics of P. In App. E we show that (1) U_P is uniquely defined, (2) any interleaving of any local configuration of U_P is a run of P, and (3) for any run σ of P there is a configuration C of U_P such that σ ∈ inter(C).
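Canonical names suggest a direct way to merge isomorphic events across partial-order runs, essentially by hash-consing: an event is identified by its action together with the canonical names of its causal predecessors, and a map hands out one id per distinct name. The sketch below is our own illustration (with actions simplified to strings), not the paper's implementation.

#include <map>
#include <set>
#include <string>
#include <utility>

using EventId = int;
// cn(e) = (action, canonical names of the causal predecessors of e)
using CanonicalName = std::pair<std::string, std::set<EventId>>;

struct Unfolding {
  std::map<CanonicalName, EventId> known;

  // Returns the id for action `a` fired after the causal history `H`;
  // an isomorphic event inserted earlier (possibly from a different
  // partial-order run) yields the same id, merging the two.
  EventId insert(const std::string& a, const std::set<EventId>& H) {
    const EventId next = static_cast<EventId>(known.size());
    return known.emplace(CanonicalName{a, H}, next).first->second;
  }
};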

Conflicting Extensions
Our technique analyzes P by iteratively constructing (all) partial-order runs of P . In every iteration we need to find the next partial order to explore. We use the so-called conflicting extensions of a configuration to detect how to start a new partial-order run that has not been explored before.
Given a configuration C of U_P, an extension of C is any event e ∈ E \ C such that all the causal predecessors of e are in C. We denote the set of extensions of C by ex(C) := {e ∈ E : e ∉ C ∧ ⌈e⌉ ⊆ C}. The enabled events of C are extensions that can form a larger configuration: en(C) := {e ∈ ex(C) : C ∪ {e} ∈ conf(E)}. For instance, in Fig. 3a the (local) configuration [6] has three extensions, ex([6]) = {7, 9, 16}, of which, however, only event 7 is enabled: en([6]) = {7}. A conflicting extension of C is an extension e for which there is at least one e′ ∈ C such that e # e′. For instance, in Fig. 3a the (local) configuration [6] has two conflicting extensions, events 9 and 16. Event 19 is not a conflicting extension of [6] because 18 is a causal predecessor of 19, but 18 ∉ [6]. A conflicting extension is, intuitively, an incompatible addition to the configuration C: an event e that cannot be executed together with C (without removing e′ and its causal successors from C). Our technique discovers new conflicting extension events by trying to revert the causal order of certain events in C. Owing to space limitations we only explain how the algorithm handles events of acq and w₂ effect (App. F presents the remaining four procedures of the algorithm). Alg. 1 shows the procedure that handles this case. It receives an event e of acq or w₂ effect (line 2). We build and return a set of conflicting extensions, stored in the variable R. Events are added to R in lines 14 and 17. Note that we define events using their canonical names; for instance, in line 14 we add a new event whose action is h(e) and whose causal history is P. Note that we only create events that execute action h(e). Conceptually speaking, the algorithm simply finds different causal histories (variables P and e′) within the set K = ⌈e⌉ after which to execute action h(e).
Procedure last-of(C, i) returns the only <-maximal event of thread i in C; last-notify(e, c, i) returns the only immediate <-predecessor e′ of e such that the effect of h(e′) is either ⟨sig, c, i⟩ or ⟨bro, c, S⟩ with i ∈ S; finally, procedure last-lock(C, l) returns the only <-maximal event that manipulates lock l in C (an event of effect acq, rel, w₁ or w₂), or ⊥ if no such event exists. See App. F for additional details.
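Although Alg. 1 itself is not reproduced here, the idea it implements for acq events can be sketched loosely: alternative causal histories for h(e) are assembled inside K = ⌈e⌉ by pairing e's thread-local past with earlier events that released the same lock. Everything below (types and helper signatures) is assumed for illustration only; the real procedure also covers w₂ effects, last-notify, and several corner cases.

#include <set>
#include <vector>

struct Event;  // carries its action h(e), thread, lock, ... (assumed)

std::set<Event*> history(Event* e);                    // ⌈e⌉
std::set<Event*> localConfig(Event* e);                // [e]
Event* lastOf(const std::set<Event*>& C, int thread);  // last-of(C, i)
Event* lastLock(const std::set<Event*>& C, int lock);  // last-lock(C, l)
std::vector<Event*> relEventsOn(const std::set<Event*>& K, int lock);
Event* mkEvent(Event* sameActionAs, std::set<Event*> causes);  // canonical name

std::vector<Event*> conflictingExtensionsAcq(Event* e, int thread, int lock) {
  std::vector<Event*> R;
  const std::set<Event*> K = history(e);
  // e's thread-local past: everything its thread predecessor depends on.
  std::set<Event*> threadPast = localConfig(lastOf(K, thread));
  for (Event* r : relEventsOn(K, lock)) {  // candidate "last unlock" of l
    if (r == lastLock(K, lock)) continue;  // that history yields e itself
    std::set<Event*> P = threadPast;       // alternative causal history P
    for (Event* x : localConfig(r)) P.insert(x);
    R.push_back(mkEvent(e, P));            // same action h(e), history P
  }
  return R;
}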

Exploring the Unfolding
This section presents an algorithm that explores the state space of P by constructing all maximal configurations of U_P. In essence, our procedure is an improved Quasi-Optimal POR algorithm [36], where the unfolding is explored not in a DFS traversal but in a user-defined search order. This was necessary to combine our algorithm with the symbolic execution engine KLEE, see Sec. 4.
Our algorithm explores one configuration of U_P at a time and organizes the exploration into a binary tree. Fig. 3b shows the tree explored for the unfolding shown in Fig. 3a. A tree node is a tuple n := ⟨C, D, A, e⟩ that represents both the exploration of a configuration C of U_P and a choice to execute, or not, an event e ∈ en(C). Both D (for disabled) and A (for add) are sets of events.
The key insight of this tree is as follows. The subtree rooted at a given node n explores all configurations of U_P that include C and exclude D, with the following constraint: n's left subtree explores all configurations including event e, and n's right subtree explores all configurations excluding e. The set A is used to guide the algorithm when exploring the right subtree. For instance, in Fig. 3b the subtree rooted at node n := ⟨{1, 2}, ∅, ∅, 3⟩ explores all maximal configurations that contain events 1 and 2 (namely, those shown in Figs. 2b and 2c). The left subtree of n explores all configurations including {1, 2, 3} (Fig. 2b) and the right subtree all of those including {1, 2} but excluding 3 (Fig. 2c).
Alg. 2 shows a simplified version of our algorithm. The complete version, in App. G, specifies additional details, including how nodes are selected for exploration and how they are removed from the tree. The algorithm constructs and stores the exploration tree in the variable N, and the set of currently known events of U_P in the variable U. At the end of the exploration, U will store all events of U_P, and the leaves of the exploration tree in N will correspond to the maximal configurations of U_P.
The tree is constructed using a fixed-point loop (line 4) that repeats the following steps as long as they modify the tree: select a node ⟨C, D, A, e⟩ in the tree (line 5), extend U with the conflicting extensions of C (line 6), check whether the configuration is ⊆-maximal (line 7), in which case there is nothing left to do, and otherwise try to add a left (line 9) or right (line 12) child node.
The subtree rooted at the left child node will explore all configurations that include C ∪ {e} and exclude D (line 10); the right subtree will explore those including C and excluding D ∪ {e} (line 15), if any exists, which we detect by checking (line 14) whether we have found a so-called alternative [43].
An alternative is a set of events which witnesses the existence of some maximal configuration in U_P that extends C without including D ∪ {e}. Computing such a witness is an NP-complete problem, so we use an approximation called k-partial alternatives [36], which can be computed in polynomial time and works well in practice. Our procedure alt specifically computes 1-partial alternatives: it selects k = 1 event e from D ∩ en(C), searches for an event e′ in conflict with e (we have added all known candidates in line 6, using the algorithms of Sec. 3.6) that can extend C (i.e., such that C ∪ [e′] is a configuration), and returns it. When such an event e′ is found (line 33), some events in its local configuration [e′] become the A-component of the right child node (line 15), and the leftmost branch rooted at that node will re-execute those events (as they will be selected in line 20), guiding the search towards the witnessed maximal configuration.
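The recursive shape of this exploration can be sketched compactly. The sketch below is our own restructuring, a simplification of Alg. 2: the unfolding-specific operations are passed in as callbacks, and only the left/right-child recursion over nodes ⟨C, D, A, e⟩ is shown.

#include <functional>
#include <optional>
#include <set>

using Event = int;
using Config = std::set<Event>;

struct UnfoldingOps {
  // Extend the set U of known events with the conflicting extensions of C.
  std::function<void(const Config&)> addConflictingExtensions;
  // Pick some e in en(C) \ D, preferring events in A; nullopt if none.
  std::function<std::optional<Event>(const Config&, const Config&,
                                     const Config&)> pickEnabled;
  // Search for a (1-partial) alternative to D after C; nullopt if none.
  std::function<std::optional<Config>(const Config&, const Config&)> alt;
  std::function<bool(Event)> isCutoff;
};

void explore(Config C, Config D, Config A, const UnfoldingOps& ops) {
  ops.addConflictingExtensions(C);
  auto e = ops.pickEnabled(C, D, A);
  if (!e) return;                  // C is maximal: one run fully explored
  if (!ops.isCutoff(*e)) {         // left child: explore with e included
    Config C2 = C; C2.insert(*e);
    Config A2 = A; A2.erase(*e);
    explore(C2, D, A2, ops);
  }
  Config D2 = D; D2.insert(*e);    // right child: explore with e excluded,
  if (auto J = ops.alt(C, D2))     // if an alternative witnesses that a
    explore(C, D2, *J, ops);       // maximal configuration still exists
}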
For instance, in Fig. 3b, assume that the algorithm has selected node n =

Cutoffs and Completeness
All interleavings of a given configuration always reach the same state, but interleavings of different configurations can also reach the same state. It is possible to exclude certain such redundant configurations from the exploration without making the algorithm incomplete, by using cutoff events [33].
Intuitively, an event is a cutoff if we have already visited another event that reaches the same state with a shorter execution. Formally, in Alg. 2, line 27, we let cutoff(e) return true iff there is some e′ ∈ U such that state([e]) = state([e′]) and |[e′]| < |[e]|. While cutoffs prevent the exploration of redundant configurations, the analysis is still complete: it is possible to prove that every state reachable via a configuration with cutoffs is also reachable via a configuration without cutoffs.

Theorem 2 (Completeness). For any reachable state s ∈ reach(P), Alg. 2 explores a configuration C such that for some C′ ⊆ C it holds that state(C′) = s.
A proof sketch is available in App. G. Naturally, since Alg. 2 explores U_P, and U_P is an exact representation of all runs of P, Alg. 2 is also sound: any event constructed by the algorithm (added to the set U) is associated with a real run of P. Furthermore, cutoff events not only reduce the exploration of redundant configurations, but also force the algorithm to terminate for non-terminating programs that run on bounded memory (proof sketch available in App. G):

Theorem 3 (Termination). Algorithm 2 terminates for any program P such that reach(P) is finite.

Implementation
We implemented our algorithm on top of the symbolic execution engine KLEE [9], which was previously restricted to sequential programs. KLEE already provides a minimal POSIX support library, which we extended to translate calls to pthread functions into their respective events, enabling us to test real-world multi-threaded C programs. We also extended already available functionality to make it thread-safe, e.g., by implementing a global file system lock that ensures that parallel reads from the same file descriptor do not result in unsafe behavior. The source code of our prototype is available at https://github.com/por-se/por-se.

Standby States
When a new alternative is explored, a symbolic execution state needs to be computed to match the new node in the POR tree. However, creating it from scratch requires too much time and keeping a symbolic execution state around for each node consumes significant amounts of memory. Instead of committing to either extreme, we store standby states at regular intervals along the exploration tree and, when necessary, replay the closest standby state. This way, significantly fewer states are kept in memory without letting the replaying of previously computed operations dominate the analysis either.

Hash-Based Cutoff Events
Schemmel et al. [45] presented an incremental hashing scheme to identify infinite loops during symbolic execution. The approach detects when the program under test can transition from any one state back to that same state. Their scheme computes fragments for small portions of the program state, which are hashed individually and combined into a compound hash by bitwise xor operations. This compound hash, called a fingerprint, uniquely (modulo hash collisions) identifies the whole state of the program under test. We adapt this scheme to provide hashes that identify the concurrent state of parallel programs.
To this end, we associate each configuration with a fingerprint that describes the whole state of the program at that point. For example, if the program state consists of two variables, x = 3 and y = 5, the fingerprint would be fp = hash("x=3") ⊕ hash("y=5"). When one fragment changes, e.g., from x = 3 to x = 4, the old fragment hash needs to be replaced with the new one. This operation can be performed as fp′ = fp ⊕ hash("x=3") ⊕ hash("x=4"), as the duplicate fragments for x = 3 cancel out. To quickly compute the fingerprint of a configuration, we annotate each event with an xor of all of the update operations performed on its thread. Computing the fingerprint of a configuration then only requires xor-ing the values from its thread-maximal events, which ensures that all changes made to each variable are accounted for and cancel out one another, so that only the fragment for the last value remains.
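A minimal sketch of the xor scheme follows; std::hash merely stands in for the actual incremental hash function.

#include <cstdint>
#include <functional>
#include <string>

using Fingerprint = std::size_t;

Fingerprint hashFragment(const std::string& fragment) {
  return std::hash<std::string>{}(fragment);
}

int main() {
  // Program state x = 3, y = 5: xor of the individual fragment hashes.
  Fingerprint fp = hashFragment("x=3") ^ hashFragment("y=5");
  // Update x from 3 to 4: the duplicate "x=3" fragments cancel out.
  fp ^= hashFragment("x=3") ^ hashFragment("x=4");
  // fp now equals hashFragment("x=4") ^ hashFragment("y=5").
  return 0;
}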
Any two local configurations that have the same fingerprint represent the same program state: each variable, program counter, etc., has the same value. Thus, it is not necessary to continue exploring both; we have found a potential cutoff point, which the POR algorithm will treat accordingly (Sec. 3.8).

Deterministic and Repeatable Allocations
KLEE usually uses the system allocator to determine the addresses of objects allocated by the program under test, but it also provides a (more) deterministic mode, in which addresses are consumed in sequence from a large pre-allocated array. Since our hash-based cutoff computation uses memory addresses as part of the computation, using execution replays from standby states (Sec. 4.1) requires fully repeatable memory allocation.
We tackle this problem by decoupling the addresses returned by the emulated system allocator in the program under test from the system allocator of KLEE itself. A new allocator requires a large amount of virtual memory in which it will perform its allocations. This large virtual memory mapping is not actually used unless an external function call is performed, in which case the relevant objects are temporarily copied into the region from the symbolic execution state for which the external function call is to be performed. Afterwards, the pages are marked for reclamation by the OS. This way, allocations done by different symbolic execution states return the same address to the program under test.
While a deterministic allocator by itself would be enough to provide deterministic allocation for sequential programs, parallel programs also require an allocation pattern that is independent of which sequentialization of the same partial order is chosen. We achieve this property by providing independent allocators for each thread (based on the thread id, thus ensuring that the same virtual memory mapping is reused for each instance of the same semantic thread). When an object is deallocated on a different thread than it was allocated on, its address only becomes available for reuse once the allocating thread has reached a point in its execution where it is causally dependent on the deallocation. Additionally, the thread ids used by our implementation are hierarchically defined: a new thread t that is the i-th thread started by its parent thread p has the thread id t := (p, i), with the main thread being denoted as (1). This way, thread ids and the associated virtual memory mappings are independent of how the concurrent creation of multiple threads is sequentialized.
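A sketch of the hierarchical thread ids (our own representation, storing the chain of child indices starting from the main thread (1)):

#include <vector>

struct ThreadId {
  std::vector<int> path;  // main thread: {1}; its 2nd child: {1, 2}; ...
  int spawned = 0;        // number of threads this thread has started

  // The i-th thread started by parent p gets id (p, i), independently of
  // how concurrent thread creations are interleaved.
  ThreadId spawnChild() {
    ThreadId child;
    child.path = path;
    child.path.push_back(++spawned);
    return child;
  }
};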
We have also included various optimizations that promote controlled reuse of addresses to increase the chance that a cutoff event (Sec. 4.2) is found, such as binning allocations by size, which reduces the chance that temporary allocations impact which addresses are returned for other allocations.

Data Race Detection
Our data race detection algorithm simply follows the happens-before relationships established by the POR. However, its implementation is complicated by the possibility of addresses becoming symbolic. Generally speaking, a symbolic address can potentially point to any and every byte in the whole address space, thus requiring frequent and large SMT queries to be solved.
To alleviate the quadratic blowup of possibly aliasing accesses, we exploit how KLEE performs memory accesses with symbolic addresses: the symbolic state is forked for every possible memory object that the access may refer to (and one additional time if the memory access may point to unallocated memory). Therefore, a symbolic memory access is already resolved to memory-object granularity when it potentially participates in a data race. This drastically reduces the number of possible data races without querying the SMT solver.
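With accesses resolved to memory-object granularity, the remaining race check reduces to the usual happens-before condition. A sketch under that assumption (happensBefore is assumed to be provided by the partial-order runtime):

struct Access {
  int thread;          // executing thread
  const void* object;  // resolved memory object, not a raw symbolic address
  bool isWrite;
};

bool happensBefore(const Access& a, const Access& b);  // from the POR

// Two accesses race iff they touch the same object from different
// threads, at least one writes, and neither happens-before the other.
bool isDataRace(const Access& a, const Access& b) {
  return a.object == b.object &&
         a.thread != b.thread &&
         (a.isWrite || b.isWrite) &&
         !happensBefore(a, b) && !happensBefore(b, a);
}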

External Function Calls
When a program wants to call a function that is provided neither by the program itself nor by the chosen runtime, KLEE will attempt to perform an external function call by moving the function arguments from the symbolic state into its own address space and attempting to call the function itself. While this support for uninterpreted functions is helpful for getting some results for programs that are not fully supported by KLEE's POSIX runtime, it is inherently incomplete and incorrect in the general case. Our prototype includes this option as well; we make sure that cutoffs cannot be found across external function calls.

Experimental Evaluation
To explore the efficacy of the presented approach, we performed a series of experiments including both synthetic benchmarks from the SV-COMP [8] benchmark suite and real-world programs, namely memcached [3] and GNU sort [1]. We compare against Yogar-CBMC [52], the winner of the concurrency-safety category of SV-COMP 2019 [8], which stands in for the family of bounded model checkers. As such, Yogar-CBMC is predestined to fare well on the artificial SV-COMP benchmarks, while our approach may demonstrate its strength in dealing with more complicated programs.
We ran the experiments on a cluster of identical machines with dual Intel Xeon E5-2643 v4 CPUs and 256 GiB of RAM. The real-world programs were run with a timeout of 4 h and a maximum memory usage of 200 GB, while the individual SV-COMP benchmarks were run with a timeout of 15 min and a maximum memory usage of 15 GB.

SV-COMP
We ran our tool and Yogar-CBMC on the "pthread" and "pthread-driver-races" benchmark suites in their newest (2020) incarnation. As expected, Table 1 shows that Yogar-CBMC clearly outperforms our tool on this specific set of benchmarks. Not only does Yogar-CBMC not miscategorize even a single benchmark, it does so quickly and without using a lot of memory. Our tool, in contrast, takes more time and significantly more memory to analyze the target benchmarks. In fact, several benchmarks do not complete within the 15 min time frame, and our tool therefore cannot give a verdict for those.
The "pthread-driver-races" benchmark suite contains one benchmark that is marked as a failure for our tool in Table 1. For the relevant benchmark, a verdict of "target function unreachable" is expected, which we translate to mean "no data race occurs". However, the benchmark program constructs a pointer that may point to effectively any byte in memory, which, upon dereferencing it, leads to both, memory errors and data races (by virtue of the pointer also being able to touch another thread's stack). While we report this behavior for completeness sake, we attribute it to the adaptations we had to make to fit the SV-COMP model to ours.
Preparation of Benchmark Suites. The SV-COMP benchmark suite not only assumes various kinds of special casing (e.g., functions whose names begin with __VERIFIER_atomic must be executed atomically), but also routinely violates the C standard by, for example, employing data races as a control flow mechanism [26, §5.1.2.4/35]. This is partly because the analysis target is a question of reachability of a certain part of the benchmark program, not its correctness. We therefore attempted to guess the intention of each individual benchmark, making variables atomic or leaving the data race in when it is the aim of the benchmark.

Memcached
Memcached [3] is an in-memory network object cache written in C. As it is a somewhat large project with a fairly significant state space, we were unable to analyze it completely, although our prototype still found several bugs. Our attempts to run Yogar-CBMC on memcached did not succeed, as it reproducibly crashes.
Faults detected. Our prototype found nine bugs in memcached 1.5.19, attributable to four different root causes, all of which were previously unknown. The first bug is a misuse of the pthread API, causing six mutexes and condition variables to be initialized twice, leading to undefined behavior. We reported the issue, and a fix is included in version 1.5.20. The second bug occurs during the initialization of memcached, where fields that will later be accessed in a thread-safe manner are sometimes accessed in a non-thread-safe manner, under the assumption that competing accesses are not yet possible. We reported a mistake our tool found in the initialization order that invalidates the assumption that locking is not (yet) necessary on one field. A fix ships with memcached 1.5.21. For the third bug, memcached utilizes a maintenance thread to manage and resize its core hash table when necessary. Additionally, on another thread, a timer checks whether the maintenance thread should perform an expansion of the hash table. We found a data race between these two threads on a field that stores whether the maintenance thread has started expanding. This is fixed in version 1.5.20. The fourth and final issue is a data race on the stats_state field storing execution statistics. We reported this issue, and a fix is included in version 1.5.21.
Experiment. We ran our prototype on five different versions of memcached: the three releases 1.5.19, 1.5.20 and 1.5.21, plus variants of the earlier releases (1.5.19+ and 1.5.20+) which include patches for the two bugs we found during program initialization. Those variants are included to show performance when not restricted by inescapable errors very early in the program execution. Table 2 clearly shows how the two initialization bugs may lead to very quick analyses: versions 1.5.19 and 1.5.20 are completely analyzed in 7 seconds each, while versions 1.5.19+, 1.5.20+ and 1.5.21 exhaust the memory budget of 200 GB. We configured the experiment to stop the analysis once the memory limit is reached, although the analysis could continue in an incomplete manner by removing parts of the exploration frontier to free up memory. Even though the number of error paths in Table 2 differs between configurations, it is notable that each configuration can only reach exactly one of the bugs, as execution is arrested at that point. When not restricted to the program initialization, the analysis of memcached produces hundreds of thousands of events and retires hundreds of millions of instructions in less than 2 h. Our setup delivers a single symbolic packet to memcached, followed by a concrete shutdown packet. As this packet can obviously only be processed once the server is ready to process input, we observe symbolic choices only after program startup is complete. (Since our prototype builds on KLEE, it assumes a single symbolic choice during startup, without generating an additional path.)

GNU sort
GNU sort uses threads to speed up the sorting of very large workloads. We reduced the minimum input size required to trigger concurrent sorting to four lines, to enable the analysis tools to actually trigger concurrent behavior. Nevertheless, we were unable to avoid crashing Yogar-CBMC on this input.
During the analysis of GNU sort 8.31, our prototype detected a data race that we manually verified but were unable to trigger in a harmful manner. Table 2 shows two variants of GNU sort: the baseline version with eager parallelization (8.31) and a version with added locking to prevent the data race (8.31+).
Surprisingly, version 8.31 finishes the exploration, as all paths either exit, encounter the data race and are terminated, or are cut off. By fixing the data race in version 8.31+, we make it possible for the exploration to continue beyond this point, which results in a full 4 h run that retires a full billion instructions while encountering more than one million unique events.
Related Work
Thread-modular abstract interpretation [34,31,18] and unfolding-based abstract interpretation [48] aim at proving safety rather than finding bugs. They use over-approximations to explore all behaviors, while we focus on testing and never produce false alarms. Sequentialization techniques [42,27,38] encode a multi-threaded program into a sequential one. While these encodings can be very effective for small programs [27], they grow quickly with large context bounds (5 or more, see [38]). By contrast, some of the bugs found by our technique (Sec. 5) require more than 25 context switches.
Bounded model checking for multi-threaded programs [7,41,52,15,29] encodes multiple program paths into a single logic formula, while our technique encodes a single path. Its main disadvantage is that, for very large programs, even constructing the multi-path formula can be extremely challenging, often producing an upfront failure and no result. Conversely, while our approach faces path explosion, it is always able to test some program paths.
Techniques like [17,28,46] operate on a data structure conceptually very similar to our unfolding. They track read/write operations to every variable, which becomes a liability on very large executions. In contrast, we only use POSIX synchronization primitives and compactly represent memory accesses to detect data races. Furthermore, they do not exploit anything similar to cutoff events for additional trace pruning.
Interpolation (2) we support condition variables, providing algorithms to compute conflicting extensions for them; and (3) here we use hash-based fingerprints to compute cutoff events, thus handling much more complex partial orders than the approach described in [48].

Conclusion
Our approach combines POR and symbolic execution to analyze programs w.r.t. both input (data) and concurrency non-determinism. We model a significant portion of the pthread API, including try-lock operations and robust mutexes. We introduce two techniques to cope with state-space explosion in real-world programs. We compute cutoff events by using efficiently-computed fingerprints that uniquely identify the total state of the program. We restrict scheduling to synchronization points and report data races as errors. Our experiments found previously unknown bugs in real-world software projects (memcached, GNU sort).

A Model of Computation
In this section we present a model of computation suitable for describing multi-threaded programs that use POSIX threading. A concurrent program is a structure P := ⟨Loc, M, L, C, T, m₀, p₀⟩, where Loc is the set of program locations, M is the set of memory states (valuations of program variables), L is the set of mutexes, C is the set of condition variables, m₀ ∈ M is the initial memory state, p₀ : ℕ → Loc is a function that maps every thread identifier (in ℕ) to its initial location, and T ⊆ ℕ × Loc × Loc × Q × 2^(M×M) is the set of thread statements. A thread statement t := ⟨i, n, n′, q, r⟩ ∈ T intuitively represents that thread i can execute operation q, updating the program pointer from n to n′ and the memory from m ∈ M to m′ ∈ M if ⟨m, m′⟩ ∈ r. We assume that threads are identified by strictly positive numbers i ∈ ℕ \ {0}.
An operation characterizes the nature of the activity performed by a statement. We distinguish the following operations: local, ⟨lock, l⟩, ⟨unlock, l⟩, ⟨wait, c, l⟩, ⟨signal, c⟩, and ⟨bcast, c⟩, for l ∈ L and c ∈ C. Statements carrying a local operation model thread-local code. A statement with a ⟨lock, l⟩ operation represents a request to acquire mutex l, and one with an ⟨unlock, l⟩ operation a request to release it. A ⟨wait, c, l⟩ operation models a request to wait until condition variable c becomes available. Finally, the operations ⟨signal, c⟩ and ⟨bcast, c⟩ represent, respectively, a signal and a broadcast operation on the condition variable c.

B Transition System Semantics
We use labeled transition system (LTS) [14] semantics for our programs. We associate a program P with the LTS M_P := ⟨S, →, A, s₀⟩. The set S contains the states of M_P, i.e., tuples of the form ⟨p, m, u, v⟩ where p is a function that indicates the program location of every thread, m is the state of the (global) memory, u indicates for each mutex whether it is locked (by thread i ≥ 1) or unlocked (0), and v maps every condition variable to a set of integers containing the identifiers of those threads that currently wait on that condition variable. The initial state is s₀ := ⟨p₀, m₀, u₀, v₀⟩, where p₀ and m₀ come from P, u₀ : L → {0} is the function that maps every mutex to the number 0, and v₀ : C → {∅} is the function that maps every condition variable to the empty set.
An action in A ⊆ ℕ × B is a pair ⟨i, b⟩ where i is the identifier of the thread that executes some statement and b is the effect of the statement. Effects characterize the nature of an LTS transition in M_P, similarly to how operations capture the nature of a statement in P. We consider the same set of effects as in Sec. 3.1: ⟨loc, t⟩, ⟨acq, l⟩, ⟨rel, l⟩, ⟨w₁, c, l⟩, ⟨w₂, c, l⟩, ⟨sig, c, j⟩ with j ≥ 0, and ⟨bro, c, W⟩ with W ⊆ ℕ. As we will see below, the execution of a statement gives rise to a transition whose effect is in correspondence with the operation of the statement. The transition relation → ⊆ S × A × S contains triples of the form s −⟨i, b⟩→ s′ that represent the interleaved execution of a statement of thread i with effect b, updating the global state from s to s′. The precise definition of → is given by the inference rules in Fig. 4. These rules insert transitions into M_P that reflect the execution of statements on individual states.
As announced above, the effect of the inserted transition is in correspondence with the operation of the statement. For instance, the Loc rule inserts a transition labeled with a loc effect when it finds that a thread statement of local operation can be executed in a given LTS state. Similarly, ⟨lock, l⟩ and ⟨unlock, l⟩ operations produce transitions labeled with ⟨acq, l⟩ and ⟨rel, l⟩ effects. However, a ⟨wait, c, l⟩ operation produces two successive LTS transitions, as explained in Sec. 3.1.
The POSIX standard leaves undefined the behavior of the program under various circumstances, such as when a thread attempts to unlock a mutex not already locked by the calling thread. The rules in Fig. 4 halt the execution of the calling thread whenever undefined behavior would arise after executing an operation. To the best of our knowledge, the only exception to this in our semantics is when two threads concurrently attempt to wait on the same condition variable using different mutexes. The standard declares this as undefined behavior [25] but rule W1 lets both threads run normally.
Our semantics only implements the so-called non-robust NORMAL mutexes [25]. RECURSIVE and ERRORCHECK mutexes can easily be implemented using auxiliary local variables.
We finish this section with some additional definitions about LTSs. If s −a→ s′ is a transition, then we say that action a is enabled at s. Let enabl(s) denote the set of actions enabled at s. As actions may be non-deterministic, firing a may produce more than one such s′. For a sequence σ := a₁ ... aₙ ∈ A* and a state s ∈ S we inductively define state(s, σ) := {s} if |σ| = 0, and state(s, σ) := {s″ ∈ S : s′ ∈ state(s, σ′) ∧ s′ −aₙ→ s″} otherwise, where σ = σ′aₙ. By extension we let state(σ) := state(s₀, σ) denote the states reachable by σ from the initial state. We say that σ is a run when state(σ) ≠ ∅. We let runs(M_P) denote the set of all runs and reach(M_P) := ⋃_{σ ∈ runs(M_P)} state(σ) the set of all reachable states. By extension, for a set S′ ⊆ S of states we say that a is enabled at S′ if a is enabled at some state in S′.
In this work, we assume that the model of computation P satisfies the following well-formedness condition:

Definition 1. A concurrent program P is well-formed if for any reachable state s ∈ reach(M_P) and any two actions a, a′ ∈ A enabled at s, both actions are local, i.e., their effects are ⟨loc, t⟩ and ⟨loc, t′⟩ for some t, t′ ∈ T.
In simple words, a program P is therefore well-formed when the only source of data non-determinism is local statements of P. In this work we assume that any given program is well-formed.

[Fig. 4: inference rules defining the transition relation →. For instance, rule Loc reads: if t := ⟨i, n, n′, local, r⟩ ∈ T, p(i) = n, and ⟨m, m′⟩ ∈ r, then ⟨p, m, u, v⟩ −⟨i, loc, t⟩→ ⟨p[i ↦ n′], m′, u, v⟩. Rule Acq has the premises ⟨i, n, n′, ⟨lock, l⟩, r⟩ ∈ T, p(i) = n, and u(l) = 0.]

C Independence
Many partial-order methods use a notion called independence to avoid exploring concurrent interleavings that lead to the same state. We recall the standard notion of independence for actions from [20]. Given two actions a, a′ ∈ A and a state s ∈ S, we say that a commutes with a′ at state s iff for all s′ ∈ S we have:
- if a ∈ enabl(s) and s −a→ s′, then a′ ∈ enabl(s) iff a′ ∈ enabl(s′); and (1)
- if a, a′ ∈ enabl(s), then state(s, aa′) = state(s, a′a). (2)
Independence between actions is an under-approximation of commutativity. A binary relation ♦ ⊆ A × A is a valid independence on M_P if it is symmetric, irreflexive, and every pair ⟨a, a′⟩ in ♦ commutes at every state in reach(M_P).
In general, M_P has multiple independence relations; clearly, ∅ is always one of them. Broadly speaking, the larger an independence relation is, the fewer interleaved executions a partial-order method will explore, as more of them will be regarded as equivalent to each other.
Given a valid independence relation ♦, the complementary relation (A×A)\♦ is a dependency relation, which we often denote by ⊞ .

D Independence for Programs
Let P be a program.
Definition 2. We define the dependence relation ⊞_P ⊆ A × A as the only reflexive, symmetric relation where a ⊞_P a′ holds if either both a and a′ are actions of the same thread, or a matches the action described in the left column of Table 3 and a′ matches the right column. Finally, we define the independence relation ♦_P := (A × A) \ ⊞_P as the complement of ⊞_P in A × A.

Table 3. For each action (left), the actions dependent on it (right):
- ⟨i, loc, t⟩: no other action is dependent.
- ⟨i, acq, l⟩ or ⟨i, rel, l⟩: for any j ≥ 1 and any c ∈ C, the following actions are dependent: ⟨j, acq, l⟩, ⟨j, rel, l⟩, ⟨j, w₁, c, l⟩, ⟨j, w₂, c, l⟩.
- ⟨i, w₂, c, l⟩: for any j ≥ 1, any c′ ∈ C, and any W ⊆ ℕ such that i ∈ W, the following actions are dependent: ⟨j, acq, l⟩, ⟨j, rel, l⟩, ⟨j, w₁, c′, l⟩, ⟨j, w₂, c′, l⟩, ⟨j, sig, c, i⟩, ⟨j, bro, c, W⟩.
- ⟨i, sig, c, k⟩ with k ≠ 0: for any j ≥ 1, any l ∈ L, and any W ⊆ ℕ, the following actions are dependent: ⟨k, w₁, c, l⟩, ⟨k, w₂, c, l⟩, ⟨j, sig, c, 0⟩, ⟨j, sig, c, k⟩, ⟨j, bro, c, W⟩.
- ⟨i, sig, c, 0⟩ or ⟨i, bro, c, ∅⟩: for any j, k ≥ 1, any l ∈ L, and any W ⊆ ℕ such that W ≠ ∅, the following actions are dependent: ⟨j, w₁, c, l⟩, ⟨j, sig, c, k⟩, ⟨j, bro, c, W⟩.
- ⟨i, bro, c, W⟩ with W ≠ ∅: for any j ≥ 1, any j′ ∈ W, any k′ ≥ 0, any l ∈ L, and any W′ ⊆ ℕ, the following actions are dependent: ⟨j, w₁, c, l⟩, ⟨j′, w₂, c, l⟩, ⟨j, sig, c, k′⟩, ⟨j, bro, c, W′⟩.

We say that P is data-race free iff any two local actions a := ⟨i, loc, t⟩ and a′ := ⟨i′, loc, t′⟩ from different threads (i.e., i ≠ i′) commute at every reachable state s ∈ reach(M_P). This ensures that local statements of P modify the memory in a manner which does not interfere with local statements of other threads.
Theorem 4. If P is data-race free, then ♦ P is a valid independence relation.
Proof. Clearly, by construction, ♦_P is symmetric and irreflexive. Let a := ⟨i, b⟩ and a′ := ⟨i′, b′⟩ be two actions of M_P, and let s := ⟨p, m, u, v⟩ ∈ reach(M_P) be any reachable state of M_P. Assume that a ♦_P a′, and so i ≠ i′. We need to show that a commutes with a′.
Remark that the fact that a commutes with a′ at s does not imply that a′ commutes with a at s, due to the way (1) is defined. To simplify and better organize the reasoning below, we will not only prove that a commutes with a′ at s, but for some combinations of a and a′ we will also prove that a′ commutes with a at s. We define the following claims:

(3) If a ∈ enabl(s) and s --a--> s′, then a′ ∈ enabl(s) iff a′ ∈ enabl(s′).
(4) If a′ ∈ enabl(s) and s --a′--> s′, then a ∈ enabl(s) iff a ∈ enabl(s′).
(5) If a, a′ ∈ enabl(s), then state(s, aa′) = state(s, a′a).
Showing (3) and (5) will prove that a commutes with a′ at s, and showing (4) and (5) will prove that a′ commutes with a at s. The proof is by cases on a:

- Assume that b = ⟨loc, t⟩. If b′ = ⟨loc, t′⟩, then a commutes with a′ at s (and a′ commutes with a) because we assumed that P is data-race free. So assume that b′ has one of the effects acq, rel, w1, w2, sig, or bro. We show (5): firing rule Loc followed by any other rule, or firing such other rule followed by Loc, yields the same state, because Loc is the only rule that reads or updates the memory m, and it touches no locks or condition variables. The reasoning for (3) and (4) is analogous. This proves that ⟨i, ⟨loc, t⟩⟩ commutes with a′ at s, and that a′ commutes with ⟨i, ⟨loc, t⟩⟩ at s.

- Assume that b ∈ {⟨acq, l⟩, ⟨rel, l⟩}. If b′ = ⟨loc, t⟩ for any t ∈ T, then we have already proven (above) that a commutes with a′ at s. So assume that b′ ∈ {⟨acq, l′⟩, ⟨rel, l′⟩}. By Def. 2 we know that l ≠ l′. We need to prove (3) and (5); we do not need to prove (4) because b and b′ are symmetric. Both claims hold for the same reason: applying rule Acq (resp. Rel) to a ⟨lock, l⟩ (resp. ⟨unlock, l⟩) operation of thread i does not interfere with applying Acq to ⟨lock, l′⟩ or Rel to ⟨unlock, l′⟩ on a different thread i′, because Acq (resp. Rel) only checks and updates information that is local to each application, i.e., the program counter and the lock in question. Since the program counters and the locks are different in these applications, they cannot interfere with each other; in particular, observe that the global memory remains unchanged. Assume now that b′ ∈ {⟨w1, c, l′⟩, ⟨w2, c, l′⟩}. Again by Def. 2 we know that l ≠ l′. We need to prove (3), (4), and (5); all three hold for the same reason as before, since rules W1 and W2 only check and update the state of lock l′ and of condition variable c, neither of which is touched by Acq or Rel on lock l.
- Assume that b = ⟨w2, c, l⟩. If b′ ∈ {⟨loc, t⟩, ⟨acq, l′⟩, ⟨rel, l′⟩, ⟨w1, c′, l′⟩} for any t ∈ T, any l′ ∈ L, and any c′ ∈ C, then we have already shown that a commutes with a′ at s. Assume that b′ = ⟨w2, c′, l′⟩ for some l′ ∈ L and some c′ ∈ C. By Def. 2 we know that l ≠ l′. In this case rule W2 commutes with itself because each application only checks and updates the state of, respectively, locks l and l′, and they are different; note that condition variables and memory are not modified. Assume that b′ = ⟨sig, c′, j⟩ for some c′ ∈ C and j ≥ 0. By Def. 2 we know that either c′ ≠ c or j ≠ i. If j = 0, rules W2 and Sig' clearly commute because, while W2 updates u(l), Sig' neither checks nor updates lock information; and while W2 checks v(c), Sig' does not update any condition variable information. If j ≠ 0, then rules W2 and Sig also clearly commute. This is because, while W2 checks and updates lock l, Sig neither checks nor updates any lock information. Additionally, while both rules check the state of condition variables c and c′, they clearly commute if c ≠ c′, as Sig only updates v(c′). Furthermore, if c = c′, then j ≠ i, and so updating v(c) in Sig by removing thread j cannot interfere with checking whether i ∈ v(c) in W2. Finally, assume that b′ = ⟨bro, c′, W⟩ for some c′ ∈ C and some W ⊆ N. By Def. 2 we know that either c′ ≠ c or i ∉ W. Clearly W2 and Bro commute if c ≠ c′, as Bro neither checks nor updates lock information and only updates (and checks) v(c′). So assume that c = c′, and hence i ∉ W. We show (4): applying Bro neither enables nor disables W2. If a′ is enabled at s (Bro is enabled), then v(c′) = v(c) = W, and so i ∉ v(c). Firing a′ at s (applying Bro at s) clears v(c), but this does not modify the validity of the condition i ∉ v(c); since Bro does not update lock information (in u), applying Bro neither enables nor disables rule W2. We show (3): applying rule W2 only updates lock information, and rule Bro does not test the state of locks (in fact, it only checks the instruction pointer of thread i′); consequently, firing a neither enables nor disables a′. Finally, proving (5) is an easy exercise.
- Assume that a = ⟨i, ⟨sig, c, k⟩⟩ with k ≠ 0. If for some l′ ∈ L and some c′ ∈ C we have b′ ∈ {⟨loc, t⟩, ⟨acq, l′⟩, ⟨rel, l′⟩, ⟨w1, c′, l′⟩, ⟨w2, c′, l′⟩}, then we have already shown that a commutes with a′ at s. Assume that b′ = ⟨sig, c′, j⟩ for some c′ ∈ C and some j ≥ 1. By Def. 2 we know that either c ≠ c′ or j ≠ k. Clearly a commutes with a′, because rule Sig commutes with itself: either the two applications update (and check) different condition variables, or they remove different threads from the wait set of the same condition variable. Assume that b′ = ⟨sig, c′, 0⟩ for some c′ ∈ C. By Def. 2 we know that c ≠ c′, so rules Sig and Sig' trivially commute because they work on different condition variables. Assume that b′ = ⟨bro, c′, W⟩ for some c′ ∈ C and some W ⊆ N. By Def. 2 we know that c ≠ c′; similarly, rules Sig and Bro trivially commute because they work on different condition variables.

- Assume that a = ⟨i, ⟨sig, c, 0⟩⟩. If for some l′ ∈ L, some c′ ∈ C, and some k ∈ N we have b′ ∈ {⟨loc, t⟩, ⟨acq, l′⟩, ⟨rel, l′⟩, ⟨w1, c′, l′⟩, ⟨w2, c′, l′⟩, ⟨sig, c′, k⟩}, then we have already shown that a commutes with a′ at s. So assume that b′ = ⟨sig, c′, 0⟩ or b′ = ⟨bro, c′, ∅⟩. If c ≠ c′, the rules work on different condition variables and trivially commute, so assume that c = c′. If both actions are enabled at s, then rule Sig' requires v(c) = ∅, and rule Bro with an empty wait set requires v(c′) = ∅. Consequently v(c) = v(c′) = ∅. While rule Bro updates v(c′) by clearing it, the update is immaterial, so applying both rules in either order produces the same state.

- Assume that a = ⟨i, ⟨bro, c, W⟩⟩ with W ⊆ N. If for some l′ ∈ L, c′ ∈ C, and k ∈ N we have b′ ∈ {⟨loc, t⟩, ⟨acq, l′⟩, ⟨rel, l′⟩, ⟨w1, c′, l′⟩, ⟨w2, c′, l′⟩, ⟨sig, c′, k⟩}, then we have already shown that a commutes with a′ at s. Assume that b′ = ⟨bro, c′, W′⟩ for some c′ ∈ C and some W′ ⊆ N. By Def. 2 we know that either c ≠ c′ or W = W′ = ∅. If c ≠ c′, rule Bro clearly commutes with itself because each application checks and updates a different condition variable. If c = c′, and so W = W′ = ∅, rule Bro also commutes with itself because v(c) = v(c′) = ∅ at s, and so the update v[c ↦ ∅] is immaterial.

Therefore, we have shown that in every case a and a′ satisfy (3), (4), and (5) at s.

E Unfolding Semantics
We recall the program pes semantics of [48] (modulo notation differences). For a program P and any independence ♦ on M_P we define a pes U_P,♦ that represents the behavior of P, i.e., such that the interleavings of its set of configurations equal runs(M_P). Each event in U_P,♦ is inductively defined as a pair e := ⟨a, H⟩, where a ∈ A is an action of M_P and H is a configuration of U_P,♦. Intuitively, e represents the occurrence of a after the causes (or the history) H, as described in Sec. 3.5. Note the inductive nature of the name, and how it allows us to uniquely identify each event. We define the state of a configuration as the set of states reached by any of its interleavings. Formally, for C ∈ conf(U_P,♦) we define state(C) as {s_0} if C = ∅, and as state(σ) for some σ ∈ inter(C) if C ≠ ∅. Despite its appearance, state(C) is well-defined because all sequences in inter(C) reach the same set of states; see [49] for a proof.
Definition 3 (Unfolding). Given a program P and some independence relation ♦ on M_P := ⟨S, →, A, s_0⟩, the unfolding of P under ♦, denoted U_P,♦, is the pes over A constructed by the following fixpoint rules:

1. Start with a pes E := ⟨E, <, #, h⟩ equal to ⟨∅, ∅, ∅, ∅⟩.
2. Add a new event e := ⟨a, C⟩ to E for any configuration C ∈ conf(E) and any action a ∈ A if a is enabled in state(C) and ¬(a ♦ h(e′)) holds for every <-maximal event e′ in C.
3. For any new e in E, update <, #, and h as follows: for every e′ ∈ C, set e′ < e; for any e′ ∈ E \ C, set e′ # e if e ≠ e′ and ¬(a ♦ h(e′)); set h(e) := a.
4. Repeat steps 2 and 3 until no new event can be added to E; return E.
Step 1 creates an empty pes with only one (empty) configuration.
Step 2 inserts a new event ⟨a, C⟩ by finding a configuration C that enables an action a which is dependent with all causally maximal events in C.
After inserting an event e := ⟨a, C⟩, Def. 3 declares all events in C causal predecessors of e. For any event e′ in E but not in [e] such that h(e′) is dependent with a, the order of execution of e and e′ yields different states. We thus set them in conflict.
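A brute-force rendering of this fixpoint, feasible only for tiny finite unfoldings, could look as follows. The helpers configs, enabled and state_of are hypothetical, the dependency dep can be the dependent() sketch of App. D, and the conflict bookkeeping of step 3 is left implicit: h(e) is simply the first component of e.

    # Sketch of Def. 3 for a finite unfolding. Events are pairs
    # (a, C) with C the frozenset of causes; configs(E) enumerates the
    # configurations of the pes (hypothetical helper).

    def unfold(configs, enabled, state_of, dep):
        E = set()                            # step 1: start empty
        changed = True
        while changed:                       # step 4: iterate to fixpoint
            changed = False
            for C in configs(E):
                # <-maximal events of C: those below no other event of C
                maximal = [e for e in C if not any(e in c for (_, c) in C)]
                for a in enabled(state_of(C)):            # step 2
                    if all(dep(a, h) for (h, _) in maximal):
                        e = (a, frozenset(C))             # e := <a, C>
                        if e not in E:
                            E.add(e)
                            changed = True
        return E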

Theorem 5 (Soundness and completeness [48]). For any program P and any independence relation ♦ on M_P we have:

1. U_P,♦ is uniquely defined.
2. For any configuration C of U_P,♦, if σ, σ′ ∈ inter(C) then state(σ) = state(σ′).
3. Any interleaving of a local configuration of U_P,♦ is a run of M_P.
4. For any run σ of M_P there is a configuration C of U_P,♦ with σ ∈ inter(C).

F Computing Conflicting Extensions
Given a configuration C of U_P,♦, procedure cex in Alg. 3 computes the conflicting extensions cex(C) of C:

    3  add to C any event e ∈ en(C) such that cutoff(e)
    4  foreach event e ∈ C of loc effect do R := R ∪ cex-local(e)
    5  foreach event e ∈ C of acq or w2 effect do R := R ∪ cex-acq-w2(e)
    6  foreach event e ∈ C of w1 effect do R := R ∪ cex-w1(e)
    7  foreach event e ∈ C of sig or bro effect do R := R ∪ cex-notify(C, e)
    8  return R

The algorithm works by selecting some event e from C and trying to find one or more events that are similar to e and in conflict with C. Except for events of loc effect, we try to reorder an event with respect to those of its immediate causal predecessors that synchronize with other threads. We call an event e′ an immediate causal predecessor of e iff e′ < e and there is no other event e′′ ∈ [e] such that e′ < e′′ < e.
For events dealing with locks or condition variables, we aim to find different possibilities for an action to synchronize with events on other threads. For any event of acq, w 1 or w 2 effect, we try to execute its action earlier than its causal predecessors by including only some of the original immediate causes in any conflicting extension. For any event e of bro or sig effect, however, we try to execute its action before or after certain other events in C that are concurrent to e. An event of rel effect does not itself introduce a new synchronization with another thread (compared to its sole immediate causal predecessor), as it has to occur on the same thread that previously held the lock to be released. Thus, we cannot create any reorderings for this kind of event.
Although events of loc effect are similar to those of rel effect in lacking immediate causal predecessors on other threads, we still have to compute conflicting extensions for them. Instead of reordering, we change the effect of any event that represents branching on symbolic values.
Since the effect of an event determines how its conflicting extensions are to be computed, cex internally relays this task to corresponding functions in Alg. 4, 5, 7 and 8. These functions share a similar structure: each of them receives an event e (cex-notify additionally takes the containing (maximal) configuration C as a parameter) and returns a set of conflicting extensions, stored in variable R during the computation. The union of these sets is the result of cex, which is returned in line 8. Since the main algorithm that calls this function deals with cutoff-free maximal configurations, we add enabled cutoff events to C before calling any of the effect-specific algorithms (line 3).
We have proofs that our algorithms compute exactly the set of conflicting extensions of a given configuration, but we do not transcribe them here.

F.1 Events of loc Effect
In our approach, branches on symbolic values correspond to two (or more) distinct LTS actions of loc effect that differ in the thread statements they represent. In Fig. 2, for example, we branch on x, which contains a symbolic value after the execution of x = in() (event 1). Event 3 corresponds to the then-branch and event 9 to the else-branch resulting from executing if(x < 0) after events 1 and 2. Thus, both events share the same set of causes: ⌈3⌉ = ⌈9⌉ = {1, 2}. This can also be seen in Fig. 3a, along with the fact that these events are in (immediate) conflict with each other. Accordingly, a call to cex-local on event 3 returns a singleton set containing event 9 (and vice versa).
In both figures, we can also see another pair of events resulting from the same branching statement in the program, events 20 and 22. However, these events follow a common set of causes that is different from that of 3 and 9.
Function cex-local (Alg. 4) receives an event e of loc effect representing the execution of thread statement t on thread i (line 2). In line 4, we iterate over alternative thread statements from T_i, the set of thread statements restricted to thread i. For each such thread statement t′ ≠ t, we test whether the corresponding action a is enabled in the set of states reached by K := ⌈e⌉ (line 6). For any action that is enabled, we add an event that executes a after the causes K to R (line 7). Finally, cex-local returns the set of conflicting extensions (line 8).
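Under the same hypothetical encoding as in App. D, cex-local admits a direct rendering; thread_stmts, enabled_at and state_of are assumed helpers.

    def cex_local(e, thread_stmts, enabled_at, state_of):
        (kind, i, t), K = e            # e fires statement t on thread i
        assert kind == "loc"
        R = set()
        for t2 in thread_stmts[i]:     # alternative statements of thread i
            if t2 == t:
                continue
            a = ("loc", i, t2)
            if enabled_at(state_of(K), a):   # enabled after the causes K
                R.add((a, K))                # fire a after the same causes
        return R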

F.2 Events of acq or w 2 Effect
The function cex-acq-w2 in Alg. 5 is identical to the one presented in Sec. 3.6 for computing conflicting extensions for events of acq or w2 effect. Due to space constraints, its operation was not described there in adequate detail; we do so here.
To find conflicting extensions, the algorithm systematically constructs sets of causes smaller than that of a given event of acq or w2 effect, in order to execute the action of such an event earlier w.r.t. its causes. To explain how this works, we first look at structural commonalities in the sets of causes of events of acq or w2 effect. Along the way, we also introduce some utility functions (see Alg. 6) used in the algorithm.
One event that we can always distinguish in this set is the thread predecessor, the only event immediately preceding the event at hand on the same thread. Our utility function last-of(C, i) (Alg. 6) returns the only <-maximal event of thread i in a configuration C, so to retrieve the thread predecessor of a given event e on thread t we can call last-of(⌈e⌉, t) or last-of([e], t). As we are dealing with lock operations here, another important event is the lock predecessor. It can be retrieved by calling last-lock(C, l) (Alg. 6), which returns the only <-maximal event that manipulates lock l in C (an event of effect acq, rel, w1 or w2), or ⊥ if no such event exists. In the case of events of w2 effect, we can also distinguish another event from the rest of the causes: the notification predecessor, which is an event of sig or bro effect immediately preceding a given w2 event. If e is an event of w2 effect, executed on thread i and relating to condition variable c, we can retrieve its notification predecessor using last-notify(e, c, i) (Alg. 6), which returns the only immediate <-predecessor e′ of e such that the effect of h(e′) is either ⟨sig, c, i⟩ or ⟨bro, c, S⟩ with i ∈ S.
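These utility functions could be sketched as follows, again under our encoding; tid, h, local_cfg (= [e]) and immediate_predecessors are assumed helpers.

    def last_of(C, i):
        # the unique <-maximal event of thread i in C (events of one
        # thread are totally ordered, so the largest local configuration
        # identifies the <-maximal one)
        mine = [e for e in C if tid(e) == i]
        return max(mine, key=lambda e: len(local_cfg(e))) if mine else None

    def last_lock(C, l):
        # the unique <-maximal event manipulating lock l in C, or None
        def on_l(a):
            return (a[0] in ("acq", "rel") and a[2] == l) or \
                   (a[0] in ("w1", "w2") and a[3] == l)
        ops = [e for e in C if on_l(h(e))]
        return max(ops, key=lambda e: len(local_cfg(e))) if ops else None

    def last_notify(e, c, i):
        # the immediate <-predecessor of e with effect <sig,c,i> or
        # <bro,c,S> such that i in S
        for p in immediate_predecessors(e):
            a = h(p)
            if (a[0] == "sig" and a[2] == c and a[3] == i) or \
               (a[0] == "bro" and a[2] == c and i in a[3]):
                return p
        return None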
Conceptually, the algorithm tries to find lock releases (events of either rel or w 1 effect) in the past, i.e. the set of causes, that a potential conflicting extension could use as its lock predecessor.
Function cex-acq-w2 receives an event e of acq or w2 effect (line 2). Central to its mode of operation is a subset of e's causal predecessors, stored in a variable called P. This set is (included in) the set of causes of any conflicting extension added to R (lines 14 and 17), and how it is constructed is the major difference between handling acq (line 6) and w2 (line 9) effects.
For constructing P, the algorithm determines the thread predecessor, which is stored in a variable called e_t (line 4), and then decides how to build P based on the effect of e. In case e = ⟨⟨i, ⟨acq, l⟩⟩, K⟩, it assigns P the local configuration of e_t (line 6). This partitions the causes of e such that K \ P contains exactly all events that were synchronized on by the action of e, namely the locking of l following the execution of e_t. In the other case, e = ⟨⟨i, ⟨w2, c, l⟩⟩, K⟩, the set P also includes the local configuration of e_s, the notification predecessor, along with [e_t] (line 9).
We include e_s in the set of causal predecessors (which will contain P for any conflicting extension) because the notification predecessor is a unique event waking up exactly e_t (which is of w1 effect iff e is of w2 effect). Since we set out not to modify any event in [e_t], there cannot be any conflicting extension that excludes e_s from its set of causes. Thus, for the following steps, we can essentially treat any event of w2 effect as if it were one of acq effect, since we only have to concern ourselves with the reacquisition of the lock done as part of its action. The only exception is making sure that conflicting extensions for e always execute the same action h(e) (lines 14 and 17).
In addition to P , the algorithm determines two lock events: e r , which is the lock predecessor of e (line 11), and e m , which is the <-maximal lock event in P (line 10). For the former, we know that it has to be a lock release (an event of either rel or w 1 effect), while e m might not exist (⊥) or be any event of acq, rel, w 1 or w 2 effect. If e m = e r , then cex-acq-w2 can abort the search and return an empty set (line 12), since we know that outside of P , there exists no further relevant lock event.
Otherwise, the algorithm determines whether the lock can be acquired coming from e_m (line 13). This is true when e_m is of rel or w1 effect, but also when e_m = ⊥: in the latter case, there is no prior acquisition of the same lock in P, obviating the need for a release. In both cases the lock can be acquired after e_m, so the algorithm adds a conflicting extension to R that uses only P as its causes (line 14). After handling these special cases involving e_m, the algorithm searches for other possible lock predecessors in K, outside of P and excluding e_r (line 15). For any event e′ releasing the relevant lock (line 16), we add a conflicting extension to R (line 17). Note that we include the local configuration of e′ in the set of causes, since e′ might have causes outside of P. We exclude e_r in line 15, as otherwise we would (at least) add e itself to R, which is (trivially) not in conflict with [e]. Finally, cex-acq-w2 returns R (line 18).
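Putting the pieces together, a sketch of cex-acq-w2 under the same assumptions (lock_of and releases are hypothetical helpers) might read:

    def cex_acq_w2(e):
        a, K = e
        i, l = a[1], lock_of(a)
        et = last_of(K, i)                      # thread predecessor (l. 4)
        if a[0] == "acq":
            P = set(local_cfg(et))              # P := [e_t] (l. 6)
        else:                                   # w2: keep the notifier too
            es = last_notify(e, a[2], i)
            P = set(local_cfg(et)) | set(local_cfg(es))     # (l. 9)
        em = last_lock(P, l)                    # maximal lock event in P
        er = last_lock(K, l)                    # lock predecessor of e
        if em == er:                            # nothing outside P (l. 12)
            return set()
        R = set()
        if em is None or h(em)[0] in ("rel", "w1"):
            R.add((a, frozenset(P)))            # acquire right after P
        for e2 in (K - P) - {er}:               # other releases (l. 15)
            if releases(e2, l):
                R.add((a, frozenset(P | set(local_cfg(e2)))))
        return R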
There is one situation not covered by the algorithm as just described: configurations caught in a deadlock while trying to acquire a lock. That is, in an LTS state reached by some configuration C, the only available action of some thread is not enabled, and this action is of acq or w2 effect. Even though there cannot be an event e resulting from the execution of such an action in C, there might be a viable event (which is in fact a conflicting extension of C) that can be found akin to cex-acq-w2. To find such events, we take the following steps for each deadlocked thread: we determine e_m and P as before, albeit modified to account for the lack of e; afterwards, we search within the set (C \ P) ∪ {e_m}, analogously to lines 15 to 17.

F.3 Events of w 1 Effect
When discussing the computation of conflicting extensions for events of w2 effect, we briefly covered the interplay of such events with those of w1 effect. We concluded that any reordering not caused by w2's interaction with locks hinges on the related event of w1 effect. Conversely, we do not have to consider the lock-releasing part of the wait operation for w1 events.
Condition variables, in contrast to locks, which can only ever be manipulated by one event at a time, allow for concurrent operations. To account for this, we introduce another auxiliary function: a call to conc(S) (Alg. 6) returns true iff there is no causal dependency between any two events in S, a set of events.
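A one-line sketch, with leq standing for the (assumed) causal order:

    def conc(S):
        # true iff no two distinct events of S are causally related
        return not any(e != f and (leq(e, f) or leq(f, e))
                       for e in S for f in S)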
Function cex-w1 (Alg. 7) receives an event e of w1 effect (line 2) and determines its thread predecessor e_t (line 4). The local configuration of e_t is subtracted from the set of e's causes (line 5), and the resulting set is filtered such that only certain events of sig and bro effect remain (line 6). The events that make up this filtered set, X′, are lost signals, as well as events of bro effect that did not notify e's thread (including lost broadcasts), as they are included in ⌈e⌉. However, each of them corresponds to an action that can notify events of w1 effect that have e_t as thread predecessor. Thus, the underlying operation of any event in X′ would have resulted in a successful notification, had it happened right after, and not (as is the case) before, e's action. This is because of how the involved actions depend on each other (see App. D). For instance, a lost signal or broadcast from X′ can only happen before (causally preceding) e, and no event representing the same action can happen before an event of sig or bro effect that includes e in its set of causes (indicating a successful notification).
In line 8, conflicting extensions are added to R, each corresponding to some set M ⊆ X′, i.e., with ⌈M⌉ ∪ ⌈e_t⌉ as causes. Each M is chosen such that its elements are not causally related (using conc) and the resulting event does not equal e (line 7). As each conflicting extension excludes at least one event of X′ from its causes (compared to e), an event of w2 effect following it might receive its notification earlier (i.e., with fewer events in its local configuration) than one following e. Once all suitable subsets have been found, cex-w1 returns R (line 9).
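Combining the above, a naive sketch of cex-w1 could enumerate the subsets of X′ explicitly; lost_or_missed and causes_of (= ⌈·⌉) are hypothetical helpers, and the powerset loop is exponential by design.

    # lost_or_missed(a, c, i) tests for the effects <sig,c,0>,
    # <bro,c,{}> or <bro,c,S> with i not in S (hypothetical helper).
    from itertools import combinations

    def cex_w1(e):
        a, K = e                                  # a = ("w1", i, c, l)
        et = last_of(K, a[1])                     # thread predecessor (l. 4)
        X = K - set(local_cfg(et))                # drop [e_t] (l. 5)
        X1 = {x for x in X if lost_or_missed(h(x), a[2], a[1])}   # (l. 6)
        R = set()
        for n in range(len(X1) + 1):              # all subsets M of X'
            for M in map(set, combinations(X1, n)):
                causes = causes_of(M) | set(local_cfg(et))
                if conc(M) and causes != set(K):  # (l. 7)
                    R.add((a, frozenset(causes))) # (l. 8)
        return R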
Lemma 1. If C is a maximal configuration, then line 6 of algorithm cex(C) (Alg. 3) produces exactly all conflicting extensions e′ of C whose action has w1 effect, i.e., h(e′) = ⟨·, ⟨w1, ·, ·⟩⟩.

Proof. As cex(C) computes conflicting extensions of w1 effect using a loop, calling cex-w1 (Alg. 7) on all events of w1 effect in C, we have to show the following relationship:

  {e′ ∈ cex(C) : e′ has w1 effect} = ∪_{e ∈ C_w1} cex-w1(e),

where C_w1 := {e ∈ C : e has w1 effect}. We do this by first showing that every event returned by cex-w1(e) for any event e ∈ C of w1 effect is a conflicting extension of C of the same effect (soundness). In a second step, we show that every w1 event contained in cex(C) is eventually computed by some call to cex-w1 in the loop (completeness). In the following, let C be a maximal configuration and e an event of the form e := ⟨⟨i, ⟨w1, c, l⟩⟩, K⟩ (corresponding to line 2).
Soundness. We show that any event e′ returned in line 9 of cex-w1(e) satisfies e′ ∈ cex(C), with h(e′) of the form ⟨·, ⟨w1, ·, ·⟩⟩. Any event e′ returned by the algorithm is constructed in line 8, firing action h(e) after the causes ⌈M⌉ ∪ ⌈e_t⌉. Because cex calls cex-w1 only for w1 events (line 6), which in turn only produces events of the same effect (line 8), it remains to show that e′ ∈ cex(C).
Before we can do that, however, we need to prove that any member of R (as returned in line 9) is indeed an event. To show that e′ is an event, by item 2 of Def. 3 we need to show that (1) its causes ⌈e′⌉ form a configuration, (2) its action h(e′) is enabled at state(⌈e′⌉), and (3) ¬(h(e′) ♦ h(e′′)) holds for every <-maximal event e′′ in ⌈e′⌉.
1. We show that ⌈e′⌉ is a configuration. Because e is an event, we know that K is a configuration; ⌈e′⌉ = ⌈M⌉ ∪ ⌈e_t⌉ is a causally closed, conflict-free subset of K, and hence a configuration as well.

2. We show that h(e′) is enabled at s′ := state(⌈e′⌉). Let s := state(K). Since h(e′) = h(e) is enabled at s, it suffices to show that the premises of rule W1 (Fig. 4) hold at s′:

- ⟨i, n, n′, ⟨wait, c, l⟩⟩ ∈ T: As e′ executes the same action as e, we know that there exists such a thread statement in T, giving rise to h(e). This thread statement executes the operation ⟨wait, c, l⟩ on thread i while this thread is at program location n.

- p(i) = n holds at state s′: Observe that every M, as a subset of X′ and subsequently X, only contains events on threads other than i (line 5). This is because we exclude [e_t], the local configuration of e_t, which is defined as the only <-maximal predecessor of e on thread i (line 4). Further, as we include [e_t], event e_t is not only included in ⌈e′⌉, but is also the maximal event on thread i. Because e_t is the <-maximal event on thread i in both K and ⌈e′⌉, both s and s′ are at the same program location n for thread i (also corresponding to the above thread statement). Thus, p(i) = n is true.

- u(l) = i holds at state s′: Since h(e) is enabled at s, we know that u(l) = i at s. This means that there must be some event in K that acquires lock l for thread i; call it e_m. As it is on the same thread as e (and e′), we conclude that e_m ∈ [e_t]. Note that X does not contain any event of thread i (line 5). Only events of effect ⟨acq, l⟩, ⟨rel, l⟩, ⟨w1, ·, l⟩ or ⟨w2, ·, l⟩ can modify u(l). Now, X cannot contain any event of those effects, as otherwise u(l) ≠ i in state(K): such an event would be a <-successor of e_m and would set u(l) = 0 or u(l) = j with j ≠ i, and no event in X could restore u(l) = i because X contains no events of thread i. Consequently, state s′ satisfies u(l) = i.

3. Finally, we show that ¬(h(e′) ♦ h(e′′)) holds for every <-maximal event e′′ in ⌈e′⌉. Since ⌈e′⌉ = ⌈M⌉ ∪ ⌈e_t⌉, we can deduce that max_<(⌈e′⌉) is a subset of M ∪ {e_t}. Regardless of whether e_t is <-maximal in ⌈e′⌉, h(e_t) is always trivially dependent on h(e′), as both operate on the same thread. As for M, recall that it is a concurrent subset of X′ (line 7). Because of how X′ is defined (line 6), this set only contains events with one of the following effects: ⟨sig, c, 0⟩, ⟨bro, c, ∅⟩, or ⟨bro, c, S⟩ with i ∉ S. According to Table 3 (Def. 2), any action with one of these effects (on any thread) is dependent on h(e′). Thus, the actions of all events in M are dependent on h(e′). Together with the statement about e_t, we conclude that h(e′) is dependent on the action of every event in M ∪ {e_t}, and since max_<(⌈e′⌉) ⊆ M ∪ {e_t}, we have shown that ¬(h(e′) ♦ h(e′′)) holds for every <-maximal event e′′ in ⌈e′⌉.

Now that we have established that e′ is indeed an event, it remains to show that e′ ∈ cex(C). Since ⌈e′⌉ only contains events from ⌈e⌉ ⊆ C, we have that e′ ∈ ex(C), i.e., all causes of e′ are contained in C. To show that an event e′ ∈ ex(C) is also in cex(C), we have to show that it is in conflict with some other event in C (see Sec. 3.6). We will show that e′ is in conflict with e, i.e., e′ # e. For this, the following needs to hold according to the definition given in Sec. 3.5:

1. e ≠ e′: Since we explicitly prevent the creation of any conflicting extension with K = ⌈e⌉ as its set of causes in line 7, we have e ≠ e′.
2. ¬(e < e′): The set of causes of e′ comes from ⌈e⌉, so clearly e ∉ ⌈e′⌉.
3. ¬(e′ < e): By contradiction, so assume that e′ ∈ ⌈e⌉.
As [e_t] is included entirely in the causes of e′, we know that e′ ∉ [e_t], so e′ ∈ X. Because e_t is defined as the <-maximal predecessor of e on thread i, event e′ has to be on some other thread j (i.e., j ≠ i). We also know that at state([e_t]), thread i is holding lock l. However, e′ requires lock l to be held by thread j, which is impossible without another event on thread i following e_t (contradiction).

4. h(e) ⊞ h(e′): Both h(e) and h(e′) are of w1 effect and refer to the same lock l. Such actions are always dependent, regardless of condition variable or thread (see Table 3, Def. 2).
Completeness. We now show that any event e′ of w1 effect in cex(C) is eventually computed by some call to cex-w1. Let e′ be an event of thread i. To show that e′ is found by cex-w1, we show that C contains some event e such that h(e) = h(e′). Such an e will be found in line 6 of cex, and we will show that executing cex-w1(e) computes and returns e′.
First, we establish that there exists some event e_t of thread i causally preceding e′. Such an event always exists in the case of a w1 event, and by its definition e_t is in C. Now, based on such an event e_t, we prove that there exists some event e in C such that h(e) = h(e′), e is a causal successor of e_t, and ⌈e′⌉ ⊆ ⌈e⌉.
- We define C′ := C \ ⌊S⌋ with S := {e ∈ C : e_t < e ∧ tid(e) = i}. By ⌊e⌋ we denote the set containing all events of U_P,♦ that causally depend on e, as well as e itself; similarly, for a set S we write ⌊S⌋ := ∪_{e∈S} ⌊e⌋. This way, C′ excludes exactly those events of C that depend on any event of thread i succeeding e_t.

- An event e can only exist if its action is enabled after its causes. Let σ = e_1 … e_n ∈ inter(C′ \ ⌈e′⌉), where the events e_1, …, e_n are numbered such that σ conforms to an interleaving of C′ \ ⌈e′⌉, and define C_j := ⌈e′⌉ ∪ {e_1, …, e_j}; thus, for 0 ≤ j < n, event e_{j+1} is enabled at state(C_j). We show by induction over j that h(e′) is enabled at state(C_j):

  • Base case: state(C_0) = state(⌈e′⌉) enables h(e′), because e′ is an event of the unfolding (Def. 3, item 2).

  • Inductive case: Assume state(C_{j−1}) enables h(e′). By construction, state(C_{j−1}) also enables h(e_j). If h(e_j) ♦ h(e′), then by the definition of independence (App. C) they commute, and so state(C_j) also enables h(e′). If, however, h(e_j) ⊞ h(e′), then clearly k := tid(e_j) ≠ i, because C′ \ ⌈e′⌉ contains no event of thread i. For h(e′) to remain enabled, u(l) = i needs to hold (as it does, because state(C_{j−1}) already enables the action) and p(i) must not change (see Table 3). Because no rule changes p for a thread other than that of the action, we only need to consider changes to u. According to Def. 2, we have the following cases for h(e_j):
    * ⟨k, ⟨bro, c, ·⟩⟩ or ⟨k, ⟨sig, c, 0⟩⟩: by rule Bro or Sig', respectively, u(l) remains untouched, so state(C_j) enables h(e′).
    * ⟨k, ⟨sig, c, i⟩⟩: rule Sig does not modify u, so h(e′) remains enabled.
    * ⟨k, ⟨acq, l⟩⟩: impossible; h(e_j) cannot be enabled at state(C_{j−1}), because u(l) = i there, and so thread k cannot take the lock.
    * ⟨k, ⟨rel, l⟩⟩: impossible, because we have u(l) = i, but rule Rel requires u(l) = k to release lock l on thread k.
    * ⟨k, ⟨w1, c′, l⟩⟩: impossible, because we have u(l) = i, but rule W1 requires u(l) = k to release lock l on thread k.
    * ⟨k, ⟨w2, c′, l⟩⟩: impossible, because we have u(l) = i, but rule W2 requires u(l) = 0 to acquire lock l on thread k.
  In both cases (♦ and ⊞), we have established that h(e′) is enabled at state(C_j). In particular, h(e′) is enabled at state(C_n) = state(C′). Because C is maximal, we can conclude that C must contain some event e with h(e) = h(e′): h(e′) is an action of w1 effect, and such an action cannot be disabled by any action of another thread; moreover, under our assumption of well-formedness (Def. 1), stating that only actions of loc effect may introduce non-determinism, h(e′) must eventually be fired in C, as otherwise C could not be maximal.
- We now show that there exists some event e ∈ en(C′) such that h(e) = h(e′) and ⌈e′⌉ ⊆ ⌈e⌉. Proof idea: take configuration C′. If all its <-maximal events are dependent with h(e′), then ⟨h(e′), C′⟩ is an event of the unfolding, and it is the event we are searching for. If not, remove all <-maximal events independent of h(e′) from C′ until we find a configuration C′′ ⊆ C′ with C′′ ⊇ ⌈e′⌉ (the causes ⌈e′⌉ are preserved, because all maximal events in ⌈e′⌉ are dependent on h(e′)).
Now that we know that C contains e, with the characteristics defined above, we run cex(C), where line 6 calls cex-w1(e). This function then eventually adds e′ to R (line 8) and returns it (line 9). Sketch: to show this, we establish that the e_t above is identical to the one selected in line 4. Then, we show (by contradiction, considering all other effects) that the effects of max_<(⌈e′⌉ \ [e_t]) are exactly those that we select for X′ in line 6. We then show that max_<(⌈e′⌉) is concurrent and that max_<(⌈e⌉) ≠ max_<(⌈e′⌉). From this, it follows that ⌈e′⌉ will be considered in the loop of line 7.
Lemma 2. Let C be a finite configuration of U P,♦ . Let a ∈ A be an action enabled at state(C). Then there exists exactly one event e ∈ en(C) such that h(e) = a.
Proof. Let {e_1, …, e_n} be the <-maximal events of C. If a is dependent (w.r.t. ♦) with all of them, then e := ⟨a, C⟩ is an event (by Def. 3, item 2), and it is the event we are searching for. Otherwise, assume that e_i is such that h(e_i) ♦ a. Then let C′ := C \ {e_i} be the result of removing e_i from C, and try again. If all maximal events of C′ are now dependent with a, then ⟨a, C′⟩ is the event we are searching for. If not, iterate this reasoning on C′ to "peel off" all events independent with a. The resulting configuration C′′ is necessarily smaller than C. Since C is finite, this procedure terminates; it produces a uniquely defined event ⟨a, C′′⟩, and this event is enabled at C.

F.4 Events of sig or bro Effect
To find conflicting extensions, the algorithms cex-acq-w2 and cex-w1 pick a subset of a given event's causes. For events of sig and bro effect, however, we will also consider events that happen concurrently to a given event. Thus, cex-notify (Alg. 8) receives a (maximal) configuration C in addition to an event e of sig or bro effect (line 2), with e ∈ C. After determining the thread predecessor as usual (line 3), instead of using subsets of K \ [e_t] as the basis for conflicting extensions, the algorithm prepares a set that additionally includes events concurrent to e. This set X is constructed by subtracting both the events that causally precede e_t and the events that succeed e (line 4). The latter are denoted by ⌊e⌋, the set containing all events of U_P,♦ that causally depend on e, as well as e itself. In other words, X is the set of all events in the configuration that are neither e, nor causally dependent on e, nor in e_t's local configuration. To avoid adding a conflicting extension that equals e, we additionally prepare M as the set of immediate causal predecessors of e (line 5).
The algorithm is divided into three sections, adding conflicting extensions in the form of successful signals (lines 7 to 11), successful (non-empty) broadcasts (lines 12 to 21), and lost notifications (lines 22 to 30). For the former two, the algorithm only executes the section that corresponds to the effect of e, while the last (for lost notifications) is always executed.
Common to all three sections is that they need to reason about events of w1 effect. For that, we introduce another utility function from Alg. 6: with outstanding-w1(C, c), we compute the (concurrent) set of w1 events on a condition variable c that are not matched by a corresponding event of sig or bro effect in a configuration C. In other words, we receive the set of w1 events corresponding to c that are still awaiting a notification in C.
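A possible sketch, where wakes(f, e) is an assumed helper testing whether the sig/bro event f notifies the thread blocked by the w1 event e:

    def outstanding_w1(C, c):
        # w1 events on c in C not yet matched by a notification in C
        return {e for e in C
                if h(e)[0] == "w1" and h(e)[2] == c
                and not any(wakes(f, e) for f in C)}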
We start the description with the section concerning successful signals (lines 7 to 11). As already established, this part of the function is only executed when e is of sig effect (line 7). It adds an event to R in the form of a signal for each w1 event (on the same condition variable c) that can be notified following e_t. To this end, a set W is built (line 8), containing all events of w1 effect that are concurrent to e or outstanding in [e_t]. For each event e′ in W (line 9), we add an event of sig effect to R (line 11), notifying the thread of e′, which is denoted by tid(e′). Note, however, that we do not add a conflicting extension for the w1 event that was already notified by e (line 10).
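The signal section could be sketched as below. The causes we give each new event, [e_t] joined with the local configuration of the notified w1 event, are our reading of line 11 rather than a verbatim transcription; floor_of(e) stands for ⌊e⌋ (hypothetical helper).

    def cex_notify_signals(C, e):
        a, K = e                                  # a = ("sig", i, c, k)
        i, c = a[1], a[2]
        et = last_of(K, i)                        # thread predecessor (l. 3)
        X = (C - set(local_cfg(et))) - floor_of(e)        # (l. 4)
        W = {w for w in X if h(w)[0] == "w1" and h(w)[2] == c} \
            | outstanding_w1(set(local_cfg(et)), c)       # (l. 8)
        R = set()
        for w in W:                               # (l. 9)
            if tid(w) == a[3]:                    # skip the w1 already
                continue                          # notified by e (l. 10)
            a2 = ("sig", i, c, tid(w))            # signal thread tid(w)
            R.add((a2, frozenset(set(local_cfg(et)) | set(local_cfg(w)))))
        return R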
In case e is of bro effect, the algorithm tries to add successful broadcasts as conflicting extensions in lines 12 to 21. As broadcasts, in contrast to signals, do not only notify a single event of w1 effect, we need to reason about sets of those events. We choose candidate subsets M′ of X in lines 13 to 17, such that they adhere to three conditions. First, any M′ ⊆ X has to be a concurrent set (line 14). We do this as only the maximal event of each thread determines which events are included in [M′], i.e., additionally including any preceding events in M′ does not change [M′]. This also easily allows us to compare M to M′ and mandate that they not be equal (line 15), to avoid adding the event that equals e to R. Finally, we are only interested in sets that contain only events of w1 (line 16) or successful sig effect (line 17). Events representing successful signals are included in the set of causes, as they are dependent (see App. D) on events of bro effect in the following way: the signal must have happened before the broadcast, as otherwise the broadcast would have included the notified thread in its set, eliminating the need for the signal. For each suitable subset (line 18), we determine the outstanding events of w1 effect in [M′] ∪ [e_t] (line 19), and if the resulting set W is nonempty (i.e., the resulting broadcast will not be lost, line 20), we add a conflicting extension to R (line 21). This new event is a broadcast notifying exactly the set of threads for which there is an outstanding event of w1 effect in W (retrieved by tid(W)).
Finally, we look at the generation of conflicting extensions representing lost notifications (lines 22 to 30). Again, we determine candidate subsets, with the first two conditions matching those of the broadcast section (lines 23 and 24). Then, however, we mandate that there are no outstanding events of w1 effect (line 25), as the resulting event will not notify any thread. We also restrict M′ to contain either a single event of bro effect notifying at least one thread (line 26), or a number of successful sig events (line 27). These two possibilities cover all the immediate-predecessor sets a lost notification can have (see App. D). In line 28, the algorithm adds the resulting lost notification to R.

G Exploration Algorithm

The algorithm visits all cutoff-free, ⊆-maximal configurations of the unfolding and organizes the exploration into a tree. Each node of the tree is an instance of the data structure Node, shown in Alg. 9. Conceptually, a Node is a tuple n := ⟨C, D, e, l, r, s⟩ that represents a state of the exploration algorithm. Specifically, n represents that we need to explore all maximal configurations that include C and exclude D, and that all the maximal configurations visited through the left child node of n include e. The event e always satisfies e ∈ en(C). The left child node, field l, represents a state of the search where we explore event e (adding e to the C-component). The right child node, field r, represents a state of the exploration where we have decided to exclude e from any subsequently visited maximal configuration; we keep track of this by adding e to the D-component of that node. The node is a sweep node iff bit s is true; we call s the sweep bit. We explain in App. G.3 what the sweep bit is useful for.
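For reference, here is the Node record of Alg. 9 rendered as a Python dataclass (a sketch; field meanings as described above):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        C: frozenset            # events committed to be included
        D: frozenset            # events committed to be excluded
        e: object               # event in en(C) this node branches on
        l: Optional["Node"]     # left child: explore e (C plus {e})
        r: Optional["Node"]     # right child: exclude e (D plus {e})
        s: bool                 # sweep bit (see App. G.3)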

G.1 Exploring the Left Branch
The algorithm maintains a queue of nodes to be explored, variable W (line 12). Initially the queue contains a node representing the exploration of the empty configuration (line 13); note that the function alloc-node() adds the new node to N. The user is free to select which node from the queue is explored next (line 16). After a node n is selected, the first task is expanding the leftmost branch of the tree rooted at n; we do this in the call to expand-left(), line 17. Figure 5 shows an example of the tree explored when the algorithm is executed on the program shown in Fig. 2a (whose unfolding is shown in Fig. 3a). Initially we have n_0 := ⟨∅, ∅, null, null, null⟩ in the queue. At line 16 we extract n_0 from the queue. The call to expand-left(n_0) creates all nodes from n_1 to n_8 (and indirectly updates the left-child pointer of node n_0 so that it becomes n_1). The call also returns the last (leaf) node (n_8 in our example) and the set B of nodes in the branch ({n_0, …, n_8} in our example). As new events are discovered, they are also added to the set U (line 37).
Back in the main() function, we now add (line 18) all conflicting extensions of the maximal configuration visited by the leaf node. In our example, the maximal configuration is [8] = {1, 2, 3, 4, 5, 6, 7, 8}, and we add events 9 and 16 (see the full unfolding in Fig. 3a). We use the algorithms described in App. F to do this.

G.2 Exploring the Right Branch
Next we call create-right-brs(B) in line 19. This function is defined in Alg. 10. The purpose of this call is to insert into the tree (set N) some of the right child nodes of the nodes contained in B. Some right child nodes that the tree will eventually have might not be inserted here, because the algorithm does not yet have visibility over (the conflicting extensions that trigger) them. Those will provably be inserted in the call to backtrack(), line 20 of Alg. 9, at some point in the future (at a later iteration of the loop in main).

[Fig. 5: the exploration tree for the program of Fig. 2a. Symbol • denotes a null pointer; the sweep bit is not represented. Nodes n_0, n_10, n_13, n_27 will be added to the queue W of Alg. 9.]
A right child node of a node n = ⟨C, D, e, ·, ·, ·⟩ represents the exploration of all maximal configurations that include C and exclude D ∪ {e}. To create a right branch, the algorithm first needs to decide whether one such maximal configuration (including C and excluding D ∪ {e}) exists, which is an NP-complete problem [36]. In this work we use k-partial alternatives [36] with k = 1 to compute (or rather, approximate) this decision. k-partial alternatives can be computed in polynomial time and provide a good approximation for the algorithm.
In simple terms, a k-partial alternative for node n is a hint for the algorithm that a maximal configuration which includes C and excludes D ∪ {e} exists. When such a maximal configuration exists, a 1-partial alternative will also exist. When such a maximal configuration does not exist, a 1-partial alternative may still exist, thereby misguiding the algorithm. This is the price we pay for computing an approximate solution to an NP-complete problem with a polynomial-time algorithm.
We compute 1-partial alternatives in function alt, shown in Alg. 10 and called from create-right-brs. In Fig. 5, create-right-brs would find that both n_2 and n_1 have right branches. The created right branch always consists of one right-node (line 10 in make-right-br) followed by one or more left-nodes (line 13 in make-right-br). In our example, this creates the nodes n_9 and n_10 for n_2, and n_11, n_12, n_13 for n_1. The terminal node of any created right branch is added to the queue W.

G.3 The Sweep Bit

When the user selects the sweep node, the exploration simulates (part of) the in-order DFS traversal of the tree, updating the sweep bit accordingly. When the user selects a node whose sweep bit is off, the sweep node remains unchanged, and the algorithm is not simulating the DFS traversal at that step.
The management of the sweep bit is done in the backtrack and make-left functions. Function make-left(n, e) creates a new left child node for node n. The sweep bit of the new node equals that of the parent node n (line 43 of make-left); one line below, we also turn off the sweep bit of the parent. If the parent was the sweep node, the left child now is; if the parent was not the sweep node, neither is the left child. When the sweep node is selected in line 16 of main, the calls to make-left issued by expand-left simulate the DFS exploring the leftmost branch rooted at the sweep node.
Function backtrack updates the sweep bit to simulate the backtracking operations of the DFS. When backtrack(n) is called (line 20 of main), n is always a leaf node. If n is not the sweep node, then there is nothing to simulate and backtrack returns (line 3). When n is the sweep node, we backtrack by moving upwards (line 5) along the branch, checking every node again for right branches (line 6). If the node has a right child, then the sweep node becomes the leaf node of the leftmost branch rooted at that right child. If the node does not have a right child, then we remove it, because we know that the algorithm has already explored the entire left and right subtrees. Note that we already checked, at some point in the past, whether such nodes had right branches, in line 19 of main. Back then the check was inconclusive, as explained in App. G.2; the check we do now is decisive: if no right branch is found now, then no maximal configuration exists that would be found by exploring a right branch from this node.

G.4 Correctness
We re-state now the two theorems of Sec. 3.7.

Theorem 6 (Completeness, see Theorem 2). For any reachable state s ∈ reach(M_P), function main in Alg. 9 explores a configuration C such that for some C′ ⊆ C it holds that state(C′) = s.

Proof (Sketch).
Alg. 9 explores the same execution tree that we have characterized in Appendix B of [37]. The procedure alt (Alg. 10), which computes alternatives, implements k-partial alternatives [37] with k = 1. Theorem 2 in [37] guarantees that all maximal configurations of the unfolding will be explored.
Theorem 7 (Termination, see Theorem 3). Algorithm 9 terminates for any program P such that reach(P ) is finite.
Proof (Sketch). Alg. 9 explores the same execution tree that we have characterized in Appendix B of [37]. Theorem 1 in [37] guarantees that the exploration always finishes.