Extracting safe thread schedules from incomplete model checking results

Model checkers frequently fail to completely verify a concurrent program, even if partial-order reduction is applied. The verification engineer is left in doubt whether the program is safe and the effort toward verifying the program is wasted. We present a technique that uses the results of such incomplete verification attempts to construct a (fair) scheduler that allows the safe execution of the partially verified concurrent program. This scheduler restricts the execution to schedules that have been proven safe (and prevents executions that were found to be erroneous). We evaluate the performance of our technique and show how it can be improved using partial-order reduction. While constraining the scheduler results in a considerable performance penalty in general, we show that in some cases our approach—somewhat surprisingly—even leads to faster executions.


Introduction
Automated verification of concurrent programs is inherently difficult because of exponentially large state spaces [41]. State space reductions such as partial-order reduction (POR) [10,16,17] allow a model checker to focus on a subset of all reachable states, while the verification result is valid for all reachable states. However, even reduced state spaces may be too large [17] and the corresponding programs infeasible to (automatically) verify, requiring manual intervention. (P. Metzler was supported by the German Academic Exchange Service (DAAD). N. Suri was supported in part by H2020-SU-ICT-2018-2 CONCORDIA GA #830927 and BMBF-Hessen TUD CRISP. G. Weissenbacher was funded by the Vienna Science and Technology Fund (WWTF) through the project Heisenbugs (VRG11-005) and the LogiCS doctoral program W1255-N23 of the Austrian Science Fund (FWF).)
We propose a novel model checking approach for safety verification of potentially nonterminating programs with a bounded number of threads, nondeterministic scheduling, and shared memory. Our approach iteratively generates incomplete verification results (IVRs) to prove the safety of a program under a (semi-)deterministic scheduler. Our contribution is the novel generation and use of IVRs based on existing model checking algorithms; we use lazy abstraction with interpolants [42] to instantiate our approach. The scheduling constraints induced by an IVR can be enforced by iteratively relaxed scheduling [29], a technique to enforce fine-grained orderings of concurrent memory events. When the scheduling constraints of an IVR are enforced, all executions (for all possible inputs) are safe, even if the underlying (operating system) scheduler is nondeterministic. Therefore, the program can be executed safely before a complete verification result is available. Executions can still exploit concurrency, and the number of memory accesses that are executed concurrently may even be increased. As the model checking problem is eased, additional programs become tractable. Furthermore, IVRs can be used to safely execute unsafe programs which are safe under at least one scheduler.

We use the producer-consumer example from Fig. 1 to explain our approach. The verifier analyzes an initial schedule, e.g., where threads T 1 and T 2 produce and consume in turns, and emits an IVR R 1 , guaranteeing safe executions under this schedule. With its second IVR, the verifier might verify the correctness of producing two items in a row, and the scheduling constraints can be relaxed accordingly. When the verifier hits an unsafe execution (the producer causes an overflow or the consumer causes an underflow), it emits an unsafe IVR for debugging.
If the verifier manages to analyze all possible executions of the program, it will report the final result partially safe, as the program can be used safely under all inputs although unsafe executions exist. Had there been no unsafe or no safe IVRs, the final result would be safe or unsafe, respectively. This paper shows how to instantiate our approach by answering the following questions:
1. Which state space abstractions are suitable for iterative model checking? The abstraction should be able to represent nonterminating executions and facilitate the extraction of schedules.
2. How to formalize and represent suitable IVRs? IVRs should be as small as possible in order to allow short iterations, while they must be large enough to guarantee fully functional executions under all possible program inputs. More precisely, for every possible program input, an IVR must cover a program execution.
3. What are suitable model checking algorithms that can be adapted to produce IVRs? A suitable algorithm should easily allow selecting schedules for exploration.
Beyond the contributions of a previous version of this paper [31], this extended version contains proofs of our formal statements, a more detailed description of constructing ARTs with the monolithic Impact algorithm for concurrent programs and our iterative extension, a more detailed description of the implementation for our evaluation, additional experimental performance measurements, additional illustration of our case studies, and a more detailed discussion of section schedules and their optimization.

Basic definitions
A program P comprises a set S of states (including a distinct initial state) and a finite set T of threads. Each state s ∈ S maps program counters and variables to values. We use l(s) to denote the program location of a state s, which comprises a local location l T (s) for each thread T ∈ T . W.l.o.g. we assume the existence of a single error location that is only reachable if the program P is not safe.
A state formula φ is a predicate over the program variables encoding all states s in which φ(s) evaluates to true. A transition relation R relates states s and their successor states s'. The transition relation of each thread T is partitioned into local transitions R l,l' such that l = l T (s) and l' = l T (s') for all s, s' satisfying R l,l' (s, s'), and R l,l' leaves the program locations and variables of other threads unchanged. We use Guard(R) to denote a predicate encoding ∃s'. R(s, s'); e.g., Guard(R 13,14 ) is (count < N) for the transition from location 13 to 14 in Fig. 1.
We say that R l,l' (or T , respectively) is active at location l and enabled in a state s iff l(s) = l and s satisfies Guard(R l,l' ). We write enabled(s) for the set of enabled transitions at s. Multiple transitions of a thread T at a location can be active, but we allow only one transition R to be enabled at a given state. If such an R exists, we write enabled T (s) := {R}, and enabled T (s) := ∅ otherwise.
If there exist states s for which no transition of a thread T is enabled (e.g., in line 14 in Fig. 1), T may block. We assume that such locations l T (s) are (conservatively) marked by may-block(l T (s)).
An execution is a sequence s 0 , T 1 , s 1 , . . . , where s 0 is the initial state and the states s i and s i+1 in every adjacent triple (s i , T i+1 , s i+1 ) are related by a transition of T i+1 . An execution that does not reach the error location is safe. A deadlock is a state s in which no transitions are enabled. W.l.o.g. we assume that all finite executions correspond to deadlocks and are undesirable; intentionally terminating executions can be modeled using terminal locations with self-loops.
An execution τ is (strongly) fair if every thread T i enabled infinitely often in τ is also scheduled infinitely often [4]. We assume that fairness is desirable and enforce it by our algorithm presented in Sect. 3. Other notions of fairness, such as weak fairness, can be enforced analogously to our use of strong fairness.
Nondeterminism can arise both through scheduling and nondeterministic transitions. A scheduler can resolve the former kind of nondeterminism.

Definition 1 (scheduler) A scheduler ζ : (S × T )* × S → T of a program P is a function that takes an execution prefix s 0 , T 1 , . . . , T n , s n and selects a thread that is enabled at s n , if such a thread exists. A scheduler ζ is deadlock-free (fair, respectively) if all executions possible under ζ are deadlock-free (fair).
A scheduler for the program of Fig. 1, for instance, must select T 1 rather than T 2 for the prefix s init , T 1 , s 1 , T 1 , s 2 , T 1 , s 3 , T 2 , s 4 , T 2 , s 5 , since at that point the lock is held by T 1 and enabled T 2 (s 5 ) = ∅.
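As an illustration only (this is our own sketch, not the paper's implementation), a scheduler in the sense of Definition 1 can be modeled as a function from an execution prefix and a state to an enabled thread. The two-thread lock scenario, thread names, and state encoding below are assumptions for the example:

```python
def enabled(state):
    """Return the set of threads with an enabled transition in `state`."""
    threads = set()
    if state["pc1"] == "acquire" and state["mutex"] == 0:
        threads.add("T1")
    if state["pc1"] == "release":          # releasing is always enabled
        threads.add("T1")
    if state["pc2"] == "acquire" and state["mutex"] == 0:
        threads.add("T2")
    if state["pc2"] == "release":
        threads.add("T2")
    return threads

def scheduler(prefix, state):
    """Select some thread enabled at the last state, if one exists."""
    candidates = sorted(enabled(state))    # deterministic tie-break
    return candidates[0] if candidates else None

# T1 holds the lock, so T2's acquire is disabled and T1 must be selected:
s = {"pc1": "release", "pc2": "acquire", "mutex": 1}
print(scheduler([], s))  # T1
```

The prefix argument is unused here; a stateful scheduler (e.g., one enforcing an IVR) would inspect it.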
Nondeterministic transitions are the second source of nondeterminism. If R l,l' of thread T allows multiple successor states for a state s, we presume the existence of input symbols X such that each ι ∈ X determines a unique successor state s' by selecting an R^ι l,l' ⊆ R l,l' with R^ι l,l' (s, s').

Definition 2 (input) An input is a function χ : (S × T )* → X , which chooses an input symbol depending on the current execution prefix.
In conjunction, an input and a scheduler render a program completely deterministic: the input χ and the scheduler ζ select a transition in each step, so that each adjacent triple of the resulting execution is uniquely determined.

For partial-order reduction (POR), we assume that a symmetric independence relation on transitions of different threads is given, which induces an equivalence relation on executions. Two transitions R 1 and R 2 are only independent if they are from distinct threads, they are commutative at states where both R 1 and R 2 are enabled, and executing R 1 neither enables nor disables R 2 . If R 1 and R 2 are not independent, we write R 1 ∦ R 2 .
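The commutativity part of these independence conditions can be tested on concrete states. The following sketch is our own illustration, with transitions modeled as guard/effect pairs over dictionary states (all names are assumptions, not from the paper):

```python
def step(t, s):
    """Execute transition t = (guard, effect) at state s, or None if disabled."""
    guard, effect = t
    return effect(dict(s)) if guard(s) else None

def commute_at(t1, t2, s):
    """Check that, where both are enabled, both orders succeed and agree."""
    if not (t1[0](s) and t2[0](s)):
        return True  # the condition only constrains states enabling both
    a = step(t2, step(t1, s))
    b = step(t1, step(t2, s))
    return a is not None and b is not None and a == b

inc_x = (lambda s: True, lambda s: {**s, "x": s["x"] + 1})
inc_y = (lambda s: True, lambda s: {**s, "y": s["y"] + 1})
set_x = (lambda s: True, lambda s: {**s, "x": 0})

s0 = {"x": 1, "y": 0}
print(commute_at(inc_x, inc_y, s0))  # True: disjoint variables commute
print(commute_at(inc_x, set_x, s0))  # False: both write x
```

A full independence check would additionally verify that neither transition enables or disables the other, analogously over the relevant states.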

Requirements on incomplete verification results
Our goal is to ease the verification task by producing incomplete verification results (IVRs) which prove the safety of the program under reduced nondeterminism, i.e., only for a certain scheduler. We only allow "legitimate" restrictions of the scheduler that do not introduce deadlocks or exclude threads. Inputs must not be restricted, since this might reduce functionality and result in unhandled inputs.
Hence, we define an IVR to be a function R that maps execution prefixes to sets of threads, representing scheduling constraints. An IVR for the program from Fig. 1, for instance, may output {T 1 } in states with an empty buffer, meaning that only thread T 1 may be scheduled here, and {T 2 } otherwise, so that an item is produced if and only if the buffer is empty. A scheduler ζ R enforces (the scheduling constraints of) an IVR R if ζ R (τ ) ∈ R(τ ) for all execution prefixes τ . IVR R permits all executions possible under a scheduler that enforces R.
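The buffer-based IVR just described can be sketched as follows (our own toy encoding; the thread names and state shape are assumptions for illustration):

```python
def ivr(prefix, state):
    """Permit producing (T1) on an empty buffer and consuming (T2) otherwise."""
    return {"T1"} if state["count"] == 0 else {"T2"}

def enforcing_scheduler(prefix, state, enabled):
    """A scheduler ζ_R with ζ_R(τ) ∈ R(τ): pick a permitted, enabled thread."""
    allowed = ivr(prefix, state) & enabled
    return min(allowed) if allowed else None

print(enforcing_scheduler([], {"count": 0}, {"T1", "T2"}))  # T1
print(enforcing_scheduler([], {"count": 3}, {"T1", "T2"}))  # T2
```

Any scheduler whose choice always lies in the IVR's output set enforces it; the `min` tie-break here is arbitrary.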

The remainder of this subsection discusses the requirements on useful IVRs. We define safe, realizable, deadlock-free, fairness-admitting, and fair IVRs. In the following subsection, we instantiate IVRs with abstract reachability trees (ARTs). Figure 2 gives an overview of the logical relationship between properties of ARTs (left) and IVRs (right).
Safety. An IVR R can either expose a bug in a program or guarantee that all permitted executions are safe. Here, we are only concerned with the latter case. An IVR R is safe if all executions permitted by R are safe. An unsafe IVR permits an unsafe execution and is called a counterexample.
Completeness. To reduce the work for the model checker, a safe IVR R should ideally have to prove the correctness of as few executions as possible. At the same time, it should cover sufficiently many executions so that the program can be used without functional restrictions. For instance, the IVR R(τ ) := ∅, for all τ , is safe but not useful, as it does not permit any execution. Consequently, R should permit at least one enabled transition in all nondeadlock states, which is achieved by realizable IVRs: An IVR R is realizable if at least one scheduler that enforces R exists. Furthermore, an IVR should never introduce a deadlock: An IVR R is deadlock-free if all schedulers that enforce R are deadlock-free.
Fairness. In general, we deem only fair executions desirable. The IVR R(τ ) := {T 1 }, for instance, is deadlock-free for the program of Fig. 1 but useless, as no item is consumed. A deadlock-free IVR admits fairness if there exists a fair scheduler enforcing R (i.e., a fair execution of the program is possible).
If a scheduler permits both fair and unfair executions, it might be difficult to guarantee fairness at runtime. In such cases, a fair IVR can be used: A deadlock-free IVR R is fair if all schedulers enforcing R are fair.

Abstract reachability trees as incomplete verification results
In this subsection, we instantiate the notion of IVRs using abstract reachability trees (ARTs), which underlie a range of software model checking tools [9,21,23,28] and have recently been used for concurrent programs [42]. Due to the explicit representation of scheduling choices from the beginning of an execution up to an (abstract) state, ARTs are well suited to represent IVRs. Model checking algorithms based on ARTs perform a pathwise exploration of program executions and represent the current state of the exploration using a tree in which each node v corresponds to a set of states at a program location l(v). These states, represented by a predicate φ(v), (safely) over-approximate the states reachable via the program path from the root ε of the ART to v. Edges expanded at v correspond to transitions starting at l(v). A node w may cover v (written v ⊑ w) if the states at w include all states at v (φ(v) ⇒ φ(w)); in this case, v is covered (covered(v)) and its successors need not be further explored. (Intuitively, executions reaching v are continued from w.) Formally, an ART is defined as follows:

Definition 3 (abstract reachability tree [28,42]) An ART A is a tree (V A , − → A ) with root ε whose nodes v are labeled with a location l(v) and a state formula φ(v), whose edges are labeled with a thread T and a transition R l,l' , together with a covering relation ⊑ between nodes.

Intuitively, an ART A is well-labeled [28] if A 's − →-edges represent the transitions of the program and edges v ⊑ w indicate that all states modeled by node v are also modeled by node w. Formally, A is well-labeled if (i) φ(ε) represents the initial state, (ii) for every edge v − → w labeled with T and R l,l' in A , φ(v)(s) ∧ R l,l' (s, s') ⇒ φ(w)(s') and l T (v) = l and l T (w) = l', and (iii) for every v, w with v ⊑ w, φ(v) ⇒ φ(w) and ¬covered(w).
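For intuition only, the covering test can be sketched with explicit state sets standing in for the predicates φ (a toy encoding of our own; actual tools discharge the implication φ(v) ⇒ φ(w) with a theorem prover rather than enumerating states):

```python
def may_cover(phi_v, phi_w, w_covered):
    """v ⊑ w is admissible iff φ(v) ⇒ φ(w) and w is not itself covered."""
    return phi_v <= phi_w and not w_covered

phi_v = {("l1", 0)}                  # states as (location, count) pairs
phi_w = {("l1", 0), ("l1", 1)}       # a strictly weaker label
print(may_cover(phi_v, phi_w, w_covered=False))  # True
print(may_cover(phi_w, phi_v, w_covered=False))  # False: φ(w) is weaker
```

The `w_covered` flag reflects condition (iii): a covered node must not itself cover others.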
An incomplete ART A p-c for the producer-consumer problem of Fig. 1 is shown in Fig. 3. Nodes show the state formulas, and edges are labeled with the thread and statement corresponding to the transition. The dashed edge is a ⊑-edge.

ART-induced schedulers. A well-labeled ART A directly corresponds to an IVR R A that simulates an execution by traversing A . We define R A as follows: Let τ = s 0 , T 1 , s 1 , . . . , s n be an execution prefix. If A contains no path that corresponds to τ , R A leaves the schedules for this execution unconstrained. Otherwise, let v n be the last node of the path in A that corresponds to τ . R A permits exactly those threads that are expanded at v n (or at w if v n is covered by some node w). Execution prefixes are matched with (⊑ ∪ − →)-paths, which is, in particular, necessary to build infinite executions. For example, an execution prefix τ in which one item is produced and consumed corresponds to the path in A p-c from ε over v 1 , . . . , v 12 back to ε. As only T 1 is expanded at ε, R A p-c allows only {T 1 } after τ .
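The traversal underlying R A can be sketched as follows. The three-node ART below is a toy stand-in of our own (not Fig. 3 itself), with a single covering edge leading back to the root:

```python
art_edges = {                     # node -> {thread expanded: successor}
    "eps": {"T1": "v1"},
    "v1": {"T1": "v2"},
    "v2": {"T2": "v3"},
}
covering = {"v3": "eps"}          # v ⊑ w: exploration continues from w

def r_a(thread_prefix):
    """Match a prefix of scheduled threads against (⊑ ∪ →)-paths of the ART."""
    node = "eps"
    for t in thread_prefix:
        if t not in art_edges.get(node, {}):
            return None           # prefix leaves A: schedule unconstrained
        node = art_edges[node][t]
        node = covering.get(node, node)   # follow ⊑-edges
    return set(art_edges.get(node, {}))   # threads expanded at the node

print(r_a(["T1", "T1", "T2"]))  # back at the root: {'T1'}
```

Returning `None` models the "unconstrained" case; a real enforcement layer would then fall back to the underlying scheduler.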
Safety. An ART is safe if whenever l T (v) is the error location then φ(v) = false. As only safe executions may correspond to a path in a safe ART (cf. Theorem 3.3 of [42]), R A is a safe IVR.
Completeness. In order to derive a deadlock-free IVR from a well-labeled ART A , we have to fully expand at least one thread T at each node v that represents reachable states (where T is fully expanded at v if v has an outgoing edge for every active transition of T at l T (v)). However, there may exist reachable states s represented by φ(v) for which no transition of T is enabled (i.e., enabled T (s) = ∅). If T is the only thread expanded at v, R A is not realizable. This situation can arise for locations l at which T may block (marked with may-block(l T )).
Consequently, whenever may-block(l T (v)) in a deadlock-free ART A , we require that φ(v) is strong enough to entail that the transition R of T expanded at v (or at the node covering v, respectively) is enabled (i.e., φ(v) ⇒ Guard(R)). For instance, φ(v 1 ) in the ART shown above proves the enabledness of T 1 at v 1 , as φ(v 1 ) ⇒ mutex = 0 and lock(mutex) is enabled if mutex = 0.

Lemma 1 If an ART A is deadlock-free, then R A is a realizable, deadlock-free IVR.
Proof Let R A be the IVR of a deadlock-free ART A . First, we construct a scheduler that enforces R A , which proves that R A is realizable. Second, we show that all schedulers that enforce R A are deadlock-free, which concludes the proof that R A is deadlock-free.
For arbitrary execution prefixes of the form τ = s 0 , T 1 , s 1 , . . . , s n , let T(τ ) = R A (τ ) ∩ {T ∈ T : enabled T (s n ) ≠ ∅}. Let ζ : (S × T )* × S → T be an arbitrary function such that ζ(τ ) ∈ T(τ ) whenever T(τ ) is not empty. (A description of how ζ can be constructed is given by the definition of R A .) By construction, ζ enforces R A if ζ is a scheduler. We show that ζ is a scheduler by contradiction. Assume that ζ is not a scheduler. Then, there exists an execution prefix τ = s 0 , T 1 , s 1 , . . . , s n with enabled(s n ) ≠ ∅ but T(τ ) = ∅. We distinguish three cases.
Case τ does not correspond to a path in A : By the definition of R A , the schedule is unconstrained, i.e., R A (τ ) = T , and hence T(τ ) = {T ∈ T : enabled T (s n ) ≠ ∅} ≠ ∅. Contradiction.
Case τ corresponds to a path ending in v n and may-block(l T (v n )) for the thread T expanded at v n : As A is deadlock-free, φ(v n ) entails that the transition of T expanded at v n is enabled, i.e., enabled T (s n ) ≠ ∅ and thus T ∈ T(τ ). Contradiction.
Case not may-block(l T (v n )): By the definition of may-block, enabled T (s n ) ≠ ∅ for the thread T expanded at v n . Contradiction to T(τ ) = ∅.
It remains to show that all schedulers that enforce R A are deadlock-free. Let ζ be an arbitrary scheduler that enforces R A . Assume that ζ is not deadlock-free. Then, there exists an execution τ = s 0 , T 1 , s 1 , . . . , s n that is possible under ζ such that s n is a deadlock, i.e., ∀T ∈ T . enabled T (s n ) = ∅ although ∃T ∈ T . ∃R l,l' . l T (s n ) = l, i.e., some transition is still active. As τ is an execution permitted by R A , τ corresponds to a path π = v 0 , . . . , v n in A . With the same argument as above, in case may-block(l T (v n )) we obtain enabled T (s n ) ≠ ∅ for the thread T expanded at v n , a contradiction to enabled(s n ) = ∅; in case not may-block(l T (v n )), we likewise have enabled T (s n ) ≠ ∅, a contradiction to enabled T (s n ) = ∅.
Fairness. IVRs derived from deadlock-free ARTs do not necessarily admit fairness if the underlying ART contains cycles (across ⊑ and − → edges) that represent unfair executions. In order to make sure a deadlock-free ART admits fairness, we implement a scheduler that schedules each thread infinitely often (whenever it is enabled infinitely often) by requiring that every (⊑ ∪ − →)-cycle is "fair," defined as follows.

Definition 4 (ART admitting fairness) A deadlock-free ART A admits fairness if every (⊑ ∪ − →)-cycle contains, for every thread T that is enabled at a node of the cycle, a node v such that T is expanded at v.
Before we prove the fairness of IVRs induced by fair ARTs, we state the following auxiliary proposition.
Proposition 1 Let G = (V, E) be a directed, finite graph. For all infinite paths π ∈ V^ω through G and for all nodes v ∈ V that occur infinitely often in π , there exists a cycle π' in G such that π' contains v and all nodes of π' are visited infinitely often by π .

Lemma 2 If an ART A admits fairness, R A is an IVR that admits fairness.
Proof We need to show that, for an arbitrary ART A that admits fairness, there exists a fair scheduler ζ that enforces R A . After constructing ζ , we show that ζ is fair by contradiction.
Let τ = s 0 , T 1 , s 1 , . . . , s n be an execution prefix and let π = v 0 , T 1 , . . . , v n be a path such that τ corresponds to π . By γ (T ), we denote the number of occurrences of T in π . Let T be the set of threads that are both enabled at s n and permitted by A , i.e., T = R A (τ ) ∩ {T : enabled T (s n ) ≠ ∅}. We let ζ schedule an arbitrary thread T ∈ T such that no other thread in T occurs less often in π , i.e., ζ(τ ) = T ∈ T such that ∀T' ∈ T. γ (T ) ≤ γ (T'). By Lemma 1 and as A admits fairness, ζ is indeed a scheduler (T is only empty when enabled(s n ) is empty).
It remains to show that ζ is fair, i.e., that every execution scheduled by ζ is fair. Let τ be an execution that is scheduled by ζ (τ is of the form τ = s init , ζ(s init ), s 1 , . . .). If τ is finite, it is trivially fair. Otherwise, assume that τ is not fair. Then, there exists a thread T that is infinitely often enabled in τ but does not occur in τ after some prefix of τ . Let π be a path in A such that τ corresponds to π . Let v T be a node at which T is enabled and that occurs infinitely often in π . As A is finite and by Proposition 1, there exists a cycle that contains v T such that π visits all nodes in this cycle infinitely often. As A admits fairness, this cycle contains an edge v −T,a→ A v' such that a ∈ enabled(s) for all states s that correspond to v. As T is not scheduled in τ after some finite number i of steps, one or more other threads are scheduled instead; let t be the set of those threads. By the construction of the scheduler, γ (T ) ≤ γ (T') for all T' ∈ t. After only finitely many steps l, γ (T ) < γ (T') for all T' ∈ t (e.g., take l to be the product of the maximum path length from v to v' and the number Σ T'∈t (1 + γ (T') − γ (T )) of required visits of v). Hence, there exists a prefix of π of length l' ≥ l in which v −T,a→ A v' is the last step, i.e., T has been scheduled. Contradiction to the assumption that T is not scheduled after i steps in π .

Note that the expansion of a thread T at a node in a cycle does not guarantee that the transition is part of the cycle. A slight modification of the fairness condition for ARTs leads to a sufficient condition for ARTs as fair IVRs, as the following definition and lemma show. The difference in the fairness condition is that all enabled threads are expanded within each (⊑ ∪ − →)-cycle c, which we denote by fair(c). The (⊑ ∪ − →)-cycle shown in Fig. 4, for instance, is fair.
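The scheduler constructed in this proof always picks a permitted, enabled thread that has been scheduled least often so far. A minimal sketch (our own illustration; the counts play the role of γ(T)):

```python
from collections import Counter

def fair_pick(permitted, enabled, history):
    """Among permitted ∩ enabled, pick a thread scheduled least often so far."""
    candidates = permitted & enabled
    if not candidates:
        return None
    gamma = Counter(history)                      # γ(T): occurrences so far
    return min(candidates, key=lambda t: (gamma[t], t))

history = ["T1", "T1", "T2"]
print(fair_pick({"T1", "T2"}, {"T1", "T2"}, history))  # T2: scheduled less often
```

Repeating this choice bounds how long any enabled thread can be starved, which is the core of the fairness argument.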

Lemma 3 (fairness) For all fair ARTs A , R A is a fair IVR.
Proof Let A be a fair ART. By Lemma 1 and as A is deadlock-free, there exists a scheduler ζ that enforces R A . It remains to show that ζ is fair, which we prove by contradiction. Suppose that an unfair execution τ is possible under ζ . Then there exists a thread T that is enabled infinitely often in τ but does not occur in τ after a finite prefix. Let π be a path through A such that τ corresponds to π . As V A is finite, there exists a node v that occurs infinitely often in π and at which T is enabled. As A is finite and by Proposition 1, v is part of a cycle of which all nodes occur infinitely often in π . By fairness, one edge in this cycle is labeled with T . By the definition of ARTs ((V A , − → A ) is a tree), this edge occurs infinitely often in π . Contradiction.
Given an ART A that admits fairness, one can generate a fair ART A' such that R A' permits all executions permitted by R A .

Iterative model checking
A suitable algorithm for our framework must generate fair IVRs. We use model checking based on ARTs (cf. Sect. 2.3), which allows us to check infinite executions and explicitly represent scheduling. Nevertheless, other program analysis techniques such as symbolic execution are also suitable to generate IVRs. In particular, our algorithm (Alg. 1) constitutes an iterative extension of the Impact algorithm [28] for concurrent programs [42]. We chose Impact as a base for our algorithm because it has an available implementation for multithreaded programs, which we use to evaluate our approach in Sect. 5.
Impact generates an ART by pathwise unwinding the transitions of a program. Once an error location is reached at a node v, Impact checks whether the path π from the ART's root to v corresponds to a feasible execution. If this is the case, a property violation is reported; otherwise, the node labeling is strengthened via interpolation. Thereby, a well-labeled ART is maintained. Once the ART is complete, its node labeling provides a safety proof for the program.
To build an ART as in the producer-consumer example of Fig. 3, Impact starts by constructing the root node ε with φ(ε) = true and l(ε) = (8, 12), where we indicate locations by line numbers in Fig. 1. Initially, mutex = 0, count = 0, and the buffer size is bounded by an arbitrary constant N > 0. Thread T 1 is expanded by adding a node v 1 with φ(v 1 ) = true and l(v 1 ) = (14, 12). From v 1 , thread T 1 is expanded repeatedly until node v 6 with φ(v 6 ) = true and l(v 6 ) = (8, 12) is produced. At this point, all statements of the produce() procedure have been expanded once. As v 6 has the same global location as ε and φ(v 6 ) ⇒ φ(ε), a covering v 6 ⊑ ε can be inserted. However, when the else branch of thread T 1 at node v 1 is expanded, a node v error labeled with the error location is added. In order to check the feasibility of the error path ε − → v 1 − → v 2 − → v error , Impact tries to find a sequence interpolant for the corresponding path formula U. As we assume that the buffer is never of size 0, i.e., N > 0, U is unsatisfiable and a sequence interpolant exists. Hence, v error can be labeled with false, so that the ART remains safe, and the preceding labels can be updated accordingly. Due to the relabeling, the covering v 6 ⊑ ε has to be removed and v 6 has to be expanded.
When T 2 has been expanded six times beginning at v 6 , a node v 12 is added with l(v 12 ) = (8, 12). Impact applies a heuristic that attempts to introduce coverings eagerly, which results in a label φ(v 12 ) = (mutex = 0 ∧ count = 0), so that a covering v 12 ⊑ ε can be added. With this covering, the current ART is fair and can be used as an IVR. In contrast, Impact for concurrent programs would then continue to explore additional interleavings by expanding, e.g., T 2 at ε. A complete ART is found when both error paths and all interleavings of produce() and consume() that respect the available buffer size N are explored. Impact for concurrent programs does not terminate until such a complete ART is found and would not terminate at all if the buffer size is unbounded. Our algorithm, however, is able to yield a fair IVR each time a new interleaving has been explored.
In each iteration, our extended algorithm yields an IVR which is either unsafe (a counterexample) or fair (and can be used as scheduling constraints). If the algorithm terminates, it outputs "safe," "partially safe," or "unsafe," depending on whether the program is safe under all, some, or no schedulers. Procedure Main() repeatedly calls Iteration() (line 3), which, intuitively, corresponds to an execution of the original algorithm of [42] under a deterministic scheduler. Iteration() (potentially) extends the ART A . If no progress is made (A is unchanged), the algorithm terminates (lines 12, 14, and 16). Otherwise, an intermediate output is yielded: either A itself (line 7) or A with all previously found counterexamples removed, i.e., the largest fair ART that is a subgraph of A , denoted by Remove_Error_Paths().
Iteration() maintains a work list W of nodes v to be explored via Close(v), which tries to find (as in [42]) a node that covers v. In addition to the covering check of [42], we check fairness, where C A (v, w) denotes all cycles that would be closed by adding the edge v ⊑ w (line 43). If such a node w is found, any thread T that is expanded at v but not at w (line 46) must not be skipped at w by POR. Instead of expanding T instantaneously at w (as in [42]), which would explore another schedule, T is added to the set I so that it can be explored in a subsequent iteration. If no covering node for v is found, v is refined, which returns counterexample if v has a feasible error path (line 25). Otherwise (line 28), Check_Enabledness() performs a deadlock check by testing whether the last transition that leads to v is enabled in all states represented by the predecessor node. If not, deadlock freedom is not guaranteed and Backtrack() tries to find a substitute node where exploration can continue. The deterministic scheduler of Iteration() is controlled by New_Schedule_Start() and Schedule_Thread(). The former selects a set of initial nodes for the exploration (line 18); the latter decides which thread to expand at a given node (line 61). We use a simple heuristic for New_Schedule_Start that selects the first (in breadth-first order) node which is not yet fully expanded, and a round-robin scheduler for Schedule_Thread that switches to the next thread once a back jump occurs (e.g., the end of a loop body is reached). Additionally, Schedule_Thread returns only threads that are necessary to expand at the given node after POR (cf. Skip() [42]). More elaborate heuristics are conceivable but out of the scope of this paper.

Partial-order reduction
A naive enforcement of the context switches at the relevant nodes of a safe IVR R A would result in a strictly sequential execution of the transitions, foiling any benefits of concurrency. To enable parallel executions, we introduce program schedules that relax the scheduling constraints by means of partial-order reduction (POR). Note that this application of POR concerns the enforcement of scheduling constraints and occurs in addition to POR applied by our model checking algorithm when constructing an ART (cf. Sect. 3). Nevertheless, dependency information that is used for POR during model checking can be reused so that redundant computations are avoided.
The goal is to permit the parallel execution of independent transitions (in different threads) whose order does not affect the outcome of the execution represented by A (i.e., the resulting traces are Mazurkiewicz-equivalent). Using traditional POR to construct such scheduling constraints poses two challenges:
1. Executions may be infinite, but we need a finite representation of scheduling constraints.
2. The control flow of an execution may be unpredictable, i.e., it is a priori unclear which scheduling constraints will apply.
We solve issue 1 by partitioning ARTs into sections and associating a finite schedule with every section. To address issue 2, we require that sections do not contain branchings (control flow and nondeterministic transitions).
Consider the program and corresponding ART in Fig. 5a. The if statement of T 1 is modeled as a separate read transition followed by a branching at node v 3 . We define three section paths π 1 , π 2 , and π 3 . After π 1 has been executed, a scheduler can distinguish the cases y = 0 and y ≠ 0 and schedule π 2 or π 3 accordingly.
Formally, a section path v 1 −R 1→ · · · −R n→ v n+1 corresponds to a branching-free path in an ART whose first transition may be guarded. A section path follows − → A edges, skipping covering edges ⊑. The section schedule of a section path describes the Mazurkiewicz equivalence class of the contained transitions and is defined as the smallest partial order σ = (V σ , − → σ ) such that V σ = {e 1 , . . . , e n } and e i − → σ e j whenever i < j and the transitions of e i and e j are dependent. The section schedule σ (π 1 ) of π 1 is depicted in Fig. 5b. It consists of four events e 1 T 1 : x:=1, e 2 T 1 : read z, e 3 T 2 : x:=0, and e 4 T 2 : y:=0. An arrow e → e' indicates that σ (π 1 ) requires e to occur before e'. Events of the same thread are ordered according to the program order of the respective thread. Events e 1 and e 3 are from different threads and write to the same variable; hence, they are dependent and the section schedule needs to specify an ordering: e 1 must occur before e 3 . Accordingly, the complete section schedule is ({e 1 , e 2 , e 3 , e 4 }, {(e 1 , e 2 ), (e 3 , e 4 ), (e 1 , e 3 )}).
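Building such a section schedule can be sketched by ordering two events iff they belong to the same thread (program order) or conflict on a shared variable. The read/write sets below are assumptions chosen so that the result matches the section schedule ({e 1 , e 2 , e 3 , e 4 }, {(e 1 , e 2 ), (e 3 , e 4 ), (e 1 , e 3 )}) given above:

```python
def dependent(e1, e2):
    """Same thread, or a write overlapping the other's accesses."""
    (t1, _, r1, w1), (t2, _, r2, w2) = e1, e2
    if t1 == t2:
        return True
    return bool(w1 & (r2 | w2)) or bool(w2 & r1)

def section_schedule(events):
    """events in path order; return the ordered pairs of the partial order."""
    order = set()
    for i, ei in enumerate(events):
        for ej in events[i + 1:]:
            if dependent(ei, ej):
                order.add((ei[1], ej[1]))
    return order

events = [  # (thread, name, reads, writes)
    ("T1", "e1", set(), {"x"}),
    ("T1", "e2", {"z"}, set()),
    ("T2", "e3", set(), {"x"}),
    ("T2", "e4", set(), {"y"}),
]
print(sorted(section_schedule(events)))
# [('e1', 'e2'), ('e1', 'e3'), ('e3', 'e4')]
```

The sketch relates all dependent pairs, i.e., it computes a superset of the covering relation of the smallest partial order; the induced order is the same.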
By the following lemma, an execution from a state corresponding to the first node of a section and scheduled according to the respective section schedule will always lead to a state corresponding to the last node of the section. For instance, two execution fragments that differ only in the order of a pair of independent events both lead from the initial state to a state represented by v 4 .

Lemma 5 (Correctness of section schedules) Let τ be a linear extension of a section schedule σ (π) of a section path π in a deadlock-free ART A . τ is equivalent to a linear extension of σ (π) that corresponds to π .
Proof Let π be a section path, σ (π) its section schedule, and τ a linear extension of σ (π). As σ (π) is a partial order, all linear extensions of σ (π) are equivalent [17]; in particular, τ is equivalent to the linear extension of σ (π) that corresponds to π .
A program schedule comprises several section schedules.
Formally, a program schedule Σ for a deadlock-free ART A is a labeled graph (V Σ , − → Σ ). Each node v ∈ V Σ is the start of a section path π in A . Each edge is labeled with the section schedule of π and the guard Guard(R) of the first transition R in π . As A is deadlock-free, there exists a thread T which is fully expanded at v in A , and we require that Σ likewise has outgoing edges at v labeled with T for each transition of T at v. Figure 5c shows a program schedule for our example program.
A scheduler can enforce the scheduling constraints of a program schedule by picking a section schedule that matches the current execution prefix and scheduling an event whose predecessors (according to the section schedule) have already been executed. Hence, all independent events in a section can be executed concurrently without synchronization. All events of a section schedule have to appear before the first event of the next section schedule, so that the states reached between sections correspond to nodes of the program schedule. For example, the event T1: y := 1 from section π2 must not occur between the events T1: read z and T2: y := 0 from section π1.
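This enforcement rule can be sketched as a replay loop (a single-threaded simulation of our own; the event names and the `requests` sequence, standing in for the order proposed by the operating system scheduler, are illustrative):

```python
def enforce(requests, edges):
    """Replay `requests` (the order in which events are proposed),
    delaying each event until all of its predecessors in the section
    schedule `edges` have been executed."""
    executed, pending, order = set(), list(requests), []
    while pending:
        for e in pending:
            preds = {a for a, b in edges if b == e}
            if preds <= executed:      # all predecessors done: e is enabled
                pending.remove(e)
                executed.add(e)
                order.append(e)
                break
        else:
            raise RuntimeError("no enabled event")
    return order
```

For instance, if the OS proposes e3 first, enforcement delays it until e1 has run, but leaves the independent events e2 and e3 free to be reordered.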
A program schedule of an ART A that admits fairness permits exactly those executions that correspond to a path in A (modulo Mazurkiewicz equivalence). In particular, as Mazurkiewicz equivalence preserves safety properties [17], only safe executions are permitted.

Lemma 6 (Correctness of program schedules) Let A be an ART that admits fairness and Π a program schedule for A. All program executions that adhere to the scheduling constraints of Π are equivalent to an execution that corresponds to a path in A.
Proof Let A be an ART that admits fairness, Π a program schedule for A, and τ an execution that adheres to the scheduling constraints of Π. We show that all finite prefixes τ′ of τ are equivalent to an execution prefix that corresponds to a path from the root in A.
Induction on the length of τ′.
case τ′ is empty: τ′ corresponds to the empty path in A.
inductive case: Let πτ′ = v0 → · · · → vn+1 be the path in Π that τ′ corresponds to. Let τ′ = x1 x2 be partitioned so that x1 corresponds to the prefix v0 . . . vn of that path. Such a partition exists, as an event must occur after all events from the previous section schedule and before all events from the following section schedule. By the induction hypothesis, there exists an execution x̃1 that is equivalent to x1 and corresponds to the path π0 . . . πn−1 in A. By Lemma 5, there exists a linear extension x̃2 of σ(πn) that is equivalent to x2 and corresponds to πn in A. Thus, x̃1 x̃2 is equivalent to τ′ and corresponds to π0 . . . πn.

Evaluation
In five case studies, we evaluate our iterative model checking algorithm and scheduling based on IVRs. We use the Impara model checker [42], as it is the only available implementation we have found of model checking for nonterminating, multi-threaded programs based on a forward analysis on ARTs. Impara uses lazy abstraction with interpolants based on weakest preconditions. We extend the tool by implementing our algorithm presented in Sect. 3. Impara accepts C programs as input; however, some language features are not supported and we have rewritten programs accordingly.1 We refer to the (noniterative) Impara tool as Impara-C (for complete verification) and to our extension of Impara with iterative model checking as Impara-IMC. In addition to our modifications of Impara, we implement a custom (user space) scheduler to evaluate the enforcement of program schedules for infinite executions. The software used to conduct our experiments, including our modifications of Impara, our custom scheduler, and benchmarks, is available for reproduction [32].

Implementation
In the first step, we automatically translate ARTs constructed by Impara-IMC to program schedules encoded as vector clocks. To omit sections in the generated program schedule that would never be executed, and thereby reduce the size of the program schedule, we discard all paths in the ART that lead only to nodes labeled with false. As we use only deadlock-free ARTs, an alternative, feasible path always exists. A given ART is traversed from the root. Recursively, we build section paths by traversing the graph until a branching node is reached. At the branching node, a fully expanded thread T is chosen. The next sections are started at all child nodes of the branching node that are reached by a transition of T. For each section, the section schedule is generated based on the dependency information of memory accesses. Section schedules are represented by vector clocks. Additionally, each section schedule contains a link to all possible successor sections, i.e., those sections that start at a direct successor node of the current section. If there exist nodes v, w such that all possible (interleaved) paths between v and w are equivalent section paths, a single section path between v and w with relaxed scheduling constraints is sufficient; in this case, no dependencies between memory events need to be enforced. However, we use only the first IVR in our experiments (produced in a single iteration of Algorithm 1); hence, we do not evaluate this case. In summary, the first step generates all section schedules for the given ART by enumerating them, including link information about successor sections, and marks the initial section.
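A possible vector-clock encoding of a section schedule might look as follows (the representation details are our assumption; the paper does not fix them at this level). Each event stores, per thread, how many events of that thread must have been executed before the event may run; transitive constraints are enforced implicitly, because every predecessor event itself waits for its own predecessors.

```python
def to_vector_clocks(events_by_thread, edges):
    """events_by_thread: e.g. {"T1": ["e1", "e2"], "T2": ["e3", "e4"]}
    edges: cross-thread ordering constraints of the section schedule."""
    index = {e: (t, i)
             for t, es in events_by_thread.items()
             for i, e in enumerate(es)}
    clocks = {}
    for e, (t, i) in index.items():
        clock = {u: 0 for u in events_by_thread}
        clock[t] = i  # program order: i earlier events of the own thread
        for a, b in edges:
            if b == e and index[a][0] != t:
                ta, ia = index[a]              # cross-thread predecessor
                clock[ta] = max(clock[ta], ia + 1)
        clocks[e] = clock
    return clocks
```

On the example schedule, e3 obtains the clock {T1: 1, T2: 0}: one event of T1 (namely e1) must precede it.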
Secondly, we instrument the source code of benchmark programs manually with callbacks to our user space scheduler and code for time measurement. The user space scheduler is implemented in C++11 and uses the C++ standard library for atomic memory operations. Program schedules are included as header files. Every access to a non-thread-local, global variable (shared variable) is replaced by a C++ preprocessor macro that calls the user space scheduler, executes the original statement, and calls the user space scheduler again to notify it that the statement has been executed. In our selection of benchmark programs, we had to instrument assignments and if-then-else statements. In the case of control flow branchings that depend on a shared variable, i.e., an if-then-else statement where the branching expression depends on a shared variable, additional callbacks are necessary to notify the scheduler of the taken control flow path.

1 For example, Pthread mutexes, some uses of the address-of operator, and reuse of the same function by several threads are not supported. We solve these issues by rewriting our benchmark programs so that Impara handles them correctly and their semantics is not changed. We will publish our modifications to Impara, including two bug fixes.
To ensure that memory accesses enclosed by callbacks are indeed executed after the preceding callback and before the succeeding callback, memory fences are used.
The result of steps one and two is a multithreaded program that executes concurrent memory accesses according to a given program schedule. Threads are executed concurrently and are only forced to execute sequentially where required by the program schedule. Each time a thread T enters the callback preceding a memory access, T looks up the current section schedule and the program counters of the other threads. If the vector clock of the section schedule, at the position of the current event of T, shows an event of another thread that has to occur first, T waits until this event has been executed. If no more events are required by the section schedule to occur before the current event of T, T executes the current memory access and, in the succeeding callback, updates its program counter so that the other threads are notified that T has executed another event.
In case all events of the current section have already been executed, T chooses the successor section associated with its current event. Waiting for all threads to completely execute the current section before switching to a successor section ensures that the program, at the end of each section, reaches a state that is represented by a node in the program schedule (and therefore in the ART generated by the model checker). In case T has no successor section associated with its current event, T waits for another thread to choose the next section. In case the last node of the current section is a branching node, only the thread with a control flow branching chooses the next section: if T has a control flow branching at the end of the last section, T chooses the successor section based on the taken control flow branch.
Thirdly, we instrument the benchmark programs with code for time measurement. Each thread executes in an indefinite loop. Each time a thread has accomplished useful work in the current loop iteration, e.g., producing or consuming an item, writing a block or inode, or executing the critical section, it increments its performance counter. The main thread sleeps for 2 s, the time-out duration, and subsequently prints the sum of the performance counters of all threads and terminates the program. Such a single run of a benchmark program is executed five times, and we report the respective median value of performance counter sums. All experiments have been executed on a four-core Intel Core i5-6500 CPU at 3.2 GHz.
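The measurement setup can be sketched as follows (a minimal harness of our own, not the instrumentation used in the experiments; the time-out is shortened here). Worker threads count useful loop iterations in per-thread counters, and the main thread sleeps for the time-out before summing them.

```python
import threading
import time

def measure(workers, timeout=0.2):
    """Run each worker in a loop until the time-out; return the sum of
    per-thread counters of useful iterations."""
    counters = [0] * len(workers)
    stop = threading.Event()

    def run(i, work):
        while not stop.is_set():
            work()            # e.g., produce or consume one item
            counters[i] += 1  # one unit of useful work accomplished

    threads = [threading.Thread(target=run, args=(i, w))
               for i, w in enumerate(workers)]
    for t in threads:
        t.start()
    time.sleep(timeout)       # main thread sleeps for the time-out duration
    stop.set()
    for t in threads:
        t.join()
    return sum(counters)
```

In the experiments, this single run is repeated five times and the median of the counter sums is reported.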
While we manually instrumented the benchmark source code, an automated instrumentation is conceivable. The main tasks of such an automated instrumentation are to identify shared variables and all points in the program where dependent expressions are accessed. Relevant shared variables can either be overapproximated, so that all shared or global variables are included, or found by a static dependency analysis. Even if the variables to be instrumented are overapproximated, the expected additional execution time overhead is small, as our experiments show: a callback to our scheduler is fast if the current thread does not have to wait for other threads before executing the next variable access. Expressions that depend on a shared variable can likewise be found by a static dependency analysis. The automated instrumentation may of course be implemented at the level of the intermediate representation of a compiler and does not have to be conducted at the source code level.

Infeasible complete verification
Even for a moderate number of threads, complete verification, i.e., verification of a program under all possible schedules and inputs, may be infeasible. In particular, Impara-C times out (after 72 h) on a corrected variant of the producer-consumer problem (Fig. 13) with four producers and four consumers. Impara-IMC produces the first IVR R1 after 4:29:53 h. A simplification of R1 is depicted in Fig. 6; it covers all executions in which the threads appear to execute their loop bodies atomically in the order T1, T2, . . . , T8. While the main bottleneck for Impara-C is state explosion and finding many coverings for different schedules, we observe that the main difficulty in producing R1 is to find a single covering that comprises all threads, i.e., to find a fair cycle. The essential predicates that lead to a fair cycle are: count > 0, count + 1 > 0, count + 2 > 0, count + 3 > 0, count = 1000, count = 999, count = 998, and count = 997. The subsequent IVRs R2, . . . , R8 are found much faster than the first IVR, after 19:31, 12:03, 6:13, 28:00, 9:25, 8:27, and 8:40 min, respectively. We stop the model checker after eight IVRs. According to our implementation of New_Schedule_Start() in Alg. 1, IVR Ri permits, in addition to all executions permitted by Ri−1, those executions in which the threads appear in the order Ti, T1, . . . , Ti−1, Ti+1, . . . , T8. Hence, R8 gives the scheduler more freedom than R1, which may result in better execution performance, e.g., because a producer which has its item available earlier does not have to wait for all previous producers.

Deadlocks
A common issue with multithreaded programs is deadlocks, which may occur when multiple mutexes are acquired in the wrong order, as in the program in Fig. 7, in which two threads use two mutexes to protect their critical sections. A deadlock is reached, e.g., when T2 acquires mutex2 directly after T1 has acquired mutex1. A monolithic verification approach would try to verify one or more executions and, as soon as a deadlock is found, report the execution that leads to the deadlock as a counterexample. With manual intervention, this counterexample can be inspected in order to identify and fix the bug. In contrast, Impara-IMC logs both safe and unsafe IVRs. The first IVR found in this example covers all executions in which Threads 1 and 2 execute their loop bodies in turns, with Thread 1 beginning. The corresponding program schedule consists of a single section schedule, depicted in Fig. 8. As expected, executing the program while enforcing the first program schedule never leads to a deadlock. Executing the uninstrumented program (without scheduling constraints) leads to a deadlock after only a few hundred loop iterations. Hence, IMC enables the program to be used safely, deadlock-free, and without manual intervention.
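The two outcomes described above can be replayed on an abstract model of the program from Fig. 7 (the lock and thread names are our own; this is a sketch, not the benchmark code). T1 acquires mutex1 then mutex2, T2 acquires them in the opposite order; a state is a deadlock when every unfinished thread is blocked on a lock held by the other.

```python
PROGRAMS = {"T1": ["lock m1", "lock m2", "unlock m2", "unlock m1"],
            "T2": ["lock m2", "lock m1", "unlock m1", "unlock m2"]}

def deadlocks(schedule):
    """Replay `schedule` (a sequence of thread ids to try next);
    return True iff the replay ends in a deadlock."""
    pc = {t: 0 for t in PROGRAMS}
    owner = {}
    for t in schedule:
        if pc[t] >= len(PROGRAMS[t]):
            continue                      # thread already finished
        op, m = PROGRAMS[t][pc[t]].split()
        if op == "lock" and owner.get(m) not in (None, t):
            continue                      # blocked: lock held by the other thread
        owner[m] = t if op == "lock" else None
        pc[t] += 1

    def blocked(t):
        op, m = PROGRAMS[t][pc[t]].split()
        return op == "lock" and owner.get(m) not in (None, t)

    unfinished = [t for t in PROGRAMS if pc[t] < len(PROGRAMS[t])]
    return bool(unfinished) and all(blocked(t) for t in unfinished)
```

The alternating schedule of the first IVR (one full loop body at a time) never deadlocks, while the interleaved schedule does.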

Race conditions through erroneous synchronization
The program in Fig. 9a shows a variant of the producer-consumer problem with two producers and two consumers which uses erroneous synchronization: both the produce and consume procedures check the amount of free space without acquiring the mutex first. For example, a buffer underflow occurs if the buffer contains only one item and the two consumers concurrently find that the buffer is not empty; although the buffer becomes empty after the first consumer has removed the last item, the second consumer tries to remove another item. The first IVR found by Impara-IMC is depicted in simplified form in Fig. 9b. The simplification merges all individual edges of a procedure into a single edge, which is possible as Impara-IMC does not apply context switches inside procedures during the first iteration. Since both procedures appear to be executed atomically, no assertion violation is found during the first iteration. We ran the program with a program schedule corresponding to the first IVR. As expected, we have not observed any assertion violations.

Figure 10 shows an extension of a benchmark used in [15], which is a simplified extract of the multithreaded Frangipani file system. The program uses a time-varying mutex: depending on the current value of the busy bit, a disk block is protected by m_busy or m_inode. We want to evaluate whether we can use Impara-IMC to generate safe program schedules even if all mutexes are (intentionally) removed from the program.
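The underflow scenario of Fig. 9a can be replayed on a simplified model (our own illustration, not the benchmark code): each consumer first performs the racy emptiness check without the mutex and only later removes an item, so two consumers may both pass the check while a single item is left.

```python
def consume_unsafe(interleaving, count=1):
    """interleaving: list of (thread, step), steps are 'check' and 'remove'.
    Returns the final item count; a negative count is an underflow."""
    passed = set()
    for thread, step in interleaving:
        if step == "check":
            if count > 0:          # racy check: mutex not yet acquired
                passed.add(thread)
        elif step == "remove" and thread in passed:
            count -= 1             # removal happens later; the check is stale
    return count

# both consumers check before either removes: underflow
bad = [("C1", "check"), ("C2", "check"), ("C1", "remove"), ("C2", "remove")]
```

Under the first IVR, each consume procedure appears atomic, so the second consumer's check always sees the already-updated count and the underflow interleaving is excluded.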

Declarative synchronization
For this purpose, we use a variant of the file system benchmark where all mutexes are removed and synchronization constraints are declared as assume statements, shown in Fig. 11. It is sufficient to ensure for T1 that the block is written only if it is allocated, i.e., both inode and busy are true. For T2, it is sufficient to ensure that the block is only reset if it is not busy, i.e., busy = false. Finally, for T3, it is necessary to ensure that the block is deallocated only if it is already deallocated or fully allocated, i.e., inode = busy.
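The role of the assume statements can be illustrated as follows (a hedged sketch; the state fields and the update functions are our simplification of Fig. 11, not the benchmark code). A transition guarded by assume(p) is enabled only in states where p holds, and assume false marks a branch as never enabled, which prunes it during model checking.

```python
def enabled(transition, state):
    """A transition is a (guard, update) pair; its assume guard decides
    whether it may execute in the given state."""
    guard, _ = transition
    return guard(state)

def step(transition, state):
    guard, update = transition
    assert guard(state), "assume violated: transition not enabled"
    return update(state)

# T2's useful branch requires the block not to be busy (busy = false);
# the omitted else branch is encoded as `assume false`.
t2_then = (lambda s: not s["busy"], lambda s: {**s, "block": 0})
t2_else = (lambda s: False,         lambda s: s)
```

A scheduler that respects these guards only dispatches a thread when its next transition is enabled, which is exactly the constraint the model checker sees.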
Running Impara-IMC on the file system benchmark without mutexes yields a first program schedule that schedules T1, T2, T3 repeatedly in this order, according to our simple heuristic for an initial IVR. However, although all executions permitted by this schedule are fair, the if condition of T2 always evaluates to false and T2 never performs useful work. To obtain a more useful schedule, we inform the model checker that the (omitted) else branch of thread T2 is not useful. We encode this information by inserting else: assume false. After simplifying the code, we obtain T2 as depicted in Fig. 12. For the updated code, Impara-IMC yields a first schedule that schedules T3 before T2 before T1, so that all threads perform useful work.

Table 1 shows the performance impact of enforcing IVRs on several correct programs. Each program is model-checked once until the first IVR (Impara-IMC) and once completely (Impara-C). As a baseline, the program is run without schedule enforcement (unconstrained). The first IVR is enforced without optimizations (Opt0) and with optimizations (Opt1, Opt2). Opt1 applies POR and omits operations on synchronization objects (mutexes, barriers).2 Opt2 uses, in addition to Opt1, longer section schedules (by replicating a section eight times) and a stronger partial-order reduction that identifies independent accesses to distinct indices of an array. Additionally, for the producer-consumer benchmark, we apply a compiler-like optimization, removing and reordering events to reduce the number of constraints.3 Both Opt1 and Opt2 enable the concurrent execution of more memory accesses, e.g., because the beginning of a critical section can already be executed before a thread arrives at a constrained access that has to wait. The schedules for each benchmark (Opt0–Opt2) are obtained from the first IVR.
As all benchmarks use unbounded loops, we measure the execution time performance by counting useful loop iterations (i.e., iterations with a successful concurrent access, such as a produced item) and terminating the execution after 2 s. Using the example of a section schedule of the producer-consumer benchmark with two threads, Fig. 14a, b illustrates the difference between the optimizations. Figure 14a shows a section schedule for Opt0. All shared memory events are executed strictly sequentially, as is the case with unconstrained executions: only the thread holding the lock is allowed to access shared memory. Opt1 removes the lock operations while maintaining the same ordering of events. Opt2, cf. Fig. 14b, relaxes the original ordering, subsumes eight loop executions of both threads, and eliminates the redundant read event of count.

Performance
In Fig. 14b, when the consumer executes the scheduler callback before its first event (read count), it looks up the constraint e 12 → e 21 and waits for the producer to finish event e 12 . When the producer in the callback after e 12 has notified that e 12 has been executed, the consumer continues and executes e 21 . Similarly, the producer is permitted to execute e 14 before e 23 has been executed. Thus, the constrained execution under the optimized schedule permits "more" concurrency (i.e., more events to be executed concurrently) than the unconstrained execution with locks.
For instance, the consumer is allowed to read the counter already after the producer has written it and does not have to wait for the producer to also write an item to the buffer.
We use the producer-consumer implementation (with correct synchronization and buffer size 1000) from SV-COMP [5] (stack_safe), modified with an unbounded loop and with one, two, and four producers and consumers. The double lock benchmark is a corrected version (lock operations in T2 reversed) of the deadlock benchmark (Sect. 5.3), where the critical section is simulated by sleeping for 1 ms; the uncorrected version reached a deadlock after only 172 loop iterations. The file system benchmark from SV-COMP (time_var_mutex_safe) is extended with a third thread and again with unbounded loops as in Sect. 5.5. The barrier benchmark uses two barriers to implement ring communication between threads. As the model checking columns of Table 1 show, Impara-IMC often finds the first IVR much faster than, or at least as fast as, Impara-C completes model checking; it can produce an IVR even for our largest benchmarks, where Impara-C times out. For a buffer size of 5, Impara-C can verify the producer-consumer benchmark even with eight threads, but again, Impara-IMC is considerably faster in finding the first IVR. Subsequent IVRs were generated considerably faster than the first IVR, which might be caused by caching of facts in the model checker.
The verification time for the producer-consumer benchmark of both Impara-C and Impara-IMC appears to grow exponentially with the number of threads. This growth is not a limitation of our approach but a property of the application of lazy abstraction with interpolants in Impara. Potentially, Impara can be improved by including symmetry reduction, which would reduce the verification time for both Impara-C and Impara-IMC but is outside the scope of this work.
Somewhat surprisingly, some benchmarks are slower when executed unconstrained than under Opt2. We conjecture that this is caused by more memory accesses being executed in parallel under Opt2, as all other effects of Opt2 only improve handling by our user space scheduler and do not affect unconstrained executions. It is, however, not directly possible to measure the effect of parallelizing memory accesses: In order to re-sequentialize memory accesses under Opt2, synchronization (e.g., over a mutex) would have to be added, which produces additional overhead.
In all cases but one, Opt2 is considerably faster than Opt1, which is considerably faster than Opt0. The highest overhead is observed for the file system benchmark, where Opt2 is about 3.5 times slower than the unconstrained execution. We conjecture that the high overhead here stems from an unequal distribution of loop iterations among threads when executed unconstrained: the loop body of T2 was executed nearly 100 times more frequently than that of T1, while it is shorter and probably faster. Opt0–Opt2 execute all threads in a nearly balanced manner. In addition to the Pthread barriers used in the barrier benchmark, we tried a variant with busy-waiting barriers, where the unconstrained execution showed a performance of 13 567 135, which is still slower than Opt2.
Comparing the results for the producer-consumer benchmark with a buffer size of 1000 to those with a buffer size of 5, we observe no considerable effect on Opt0–Opt2, but an effect on most of the unconstrained executions. This observation is plausible, as the first IVR never uses more than four cells of the buffer (in the case of four producers). The performance of unconstrained executions decreases with a smaller buffer, as the chance that the buffer is full and a producer has to wait is higher. For all three configurations with a buffer size of 5, Opt2 shows the highest execution time performance.
Even in repeated executions of the experiment, the unconstrained variant of double lock showed only "starving" executions in the sense that the second thread was never able to acquire the mutexes before the time-out of 2 s. Hence, the constrained executions improve on the operating system scheduler in terms of a balanced execution of all threads.
In order to compare with the enforcement of input-covering schedules [7] (explained in Sect. 6), we measure the overhead of our scheduler implementation on the pfscan benchmark used there. Pfscan is a parallel implementation of grep and uses one producer and two consumer threads to distribute tasks, each consisting of reading and searching a file for a given query. As input, we use eight files with 100 MB of random content each. We evaluate four different schedules,4 which show an overhead between 3% and 10% (with Opt2). Hence, IVRs can perform much better than input-covering schedules (60% overhead reported in [7]). Table 2 contains our experimental results for the pfscan benchmark. We use two worker threads in addition to the main thread. The benchmark is executed with the scheduling constraints of several program schedules S1–S4 (column two) and unconstrained (column three). Execution times are given in seconds. The fourth column gives the relative execution time (overhead). In all constrained configurations, operations on synchronization objects have been omitted (Opt1). S1, S2, and S3 are program schedules as they can be produced during the first iteration of our model checking algorithm. Program schedule S4 allows any interleaving of critical sections, so that all executions of the unconstrained program are matched. S1 and S2 contain sections that comprise both worker threads, while S3 and S4 contain only single-threaded sections. S1 and S2 differ in the ordering of the worker threads. S3 causes an overhead of 10% with respect to the unconstrained execution. Although S4 allows any interleaving of critical sections, there remains an overhead of 10%, caused by looking up section schedules during the execution. S1 and S2 show only a small overhead of 3%. We conjecture that the lower number of section schedule look-ups (compared to S3 and S4) is responsible for the considerably lower overhead.

Related work
Unbounded model checking [18,20,35,42] is a technique to verify the correctness of potentially nonterminating programs. In our setting, we deploy algorithms that use abstract reachability trees (ARTs) [21,28,42] to represent the already explored state space and schedules, and that perform this exploration in a forward manner. Instead of discarding an ART after an unsuccessful attempt to verify a program, we use the ART to extract safe schedules.
Conditional model checking [8] reuses arbitrary intermediate verification results. In contrast to our approach, these results are not guaranteed to prove the safety of a program that is functional under all inputs, and conditional model checking does not enforce the preconditions (e.g., scheduling constraints) of an intermediate result.
Context bounding [34,38,39] eases the model checking problem by bounding the number of context switches. It is limited to finite executions and, unlike our approach, does not enforce schedules at runtime.
Automated fence insertion [1,2,13,24,26] transforms a program that is safe under sequential consistency to a program that is also safe under weaker memory models. While the amount of nondeterminism in the ordering of events is reduced, nondeterminism due to scheduling cannot be influenced. Synchronization synthesis [19] inserts synchronization primitives in order to prevent incorrect executions, but may introduce deadlocks.
Deterministic multi-threading (DMT) [3,6,7,11,12,27,33,37] reduces nondeterminism due to scheduling in multithreaded programs. Schedules are chosen dynamically, depending on the explicit input, and cannot be enforced by a model checker. Nevertheless, there are combinations with model checking [11] and instances which schedule based on previously recorded executions [12].
We are aware of only one DMT approach that supports symbolic inputs [7]. Similar to our sections, bounded epochs describe infinite schedules as permutations of finite schedules. Via symbolic execution, an input-covering set of schedules is generated, which contains a schedule for each permutation of bounded epochs. As all permutations need to be analyzed (even if they are infeasible), state space explosion through concurrency is only partially avoided; indeed, the experimental evaluation shows that the analysis is infeasible even for five threads when the program has many such permutations. In contrast, we do not require race-freedom, we use model checking, our sections may contain multiple threads, we omit infeasible schedules, and we allow a safe execution from the first schedule on, i.e., an IVR can be considerably smaller than an input-covering set of schedules.
Issues of how to efficiently enforce fine-grained schedules for multithreaded programs are discussed in [30]. For finite executions, the impact of scheduling constraints on execution time performance is investigated, however without generating scheduling constraints via model checking.
Deterministic concurrency requires a program to be deterministic regardless of scheduling. In [40], a deterministic variant of a concurrent program is synthesized based on constraints on conflicts learned by abstract interpretation. In contrast to DMT, symbolic inputs are supported; however, no verification of general safety properties is performed, and the degree of nondeterminism is not adjustable, in contrast to IVRs.
Sequentialized programs [14,22,25,35,36,39] emulate the semantics of a multithreaded program, allowing tools for sequential programs to be used. The number of possible schedules is either not reduced at all or reduced similarly to context bounding.

Conclusion
We present a formal framework for using IVRs to extract safe schedules. We state why it is legitimate to constrain scheduling (in contrast to inputs) and formulate general requirements on model checkers in our framework. We instantiate our framework with the Impact model checking algorithm and find in our evaluation that it can be used to (1) model check programs that are intractable for monolithic model checkers, (2) safely execute a program, given an IVR, even if there exist unsafe executions, (3) synthesize synchronization via assume statements, and (4) guarantee fair executions. A drawback of enforcing IVRs is a potential execution time overhead; however, in several cases, constrained executions turned out to be even faster than unconstrained executions.