Abstract
Modern software verification tools need to support development processes that involve frequent changes. Existing approaches for incremental verification hard-code specific verification techniques, and some of them must be tightly intertwined with the development process. To solve this open problem, we present the concept of difference verification with conditions. Difference verification with conditions is independent of any specific verification technique and can be integrated into software projects at any time. It first applies a change analysis that detects which parts of a software were changed between revisions and encodes that information in a condition. Based on this condition, an off-the-shelf verifier is used to verify only those parts of the software that are influenced by the changes. As a proof of concept, we propose a simple, syntax-based change analysis and use difference verification with conditions with three off-the-shelf verifiers. An extensive evaluation shows the competitiveness of difference verification with conditions.
Replication package available on Zenodo [12].
Funded in part by the Deutsche Forschungsgemeinschaft (DFG) – 418257054 (Coop) and 378803395 (GRK ConVeY).
1 Introduction
Software changes frequently during its lifecycle: developers fix bugs, adapt existing features, or add new features. In agile development, software construction is an intrinsically incremental process. Every change to a working system carries the risk of introducing a new defect. Since software failures are often costly and may even endanger human lives, it is an integral part of software development to find potential failures and ensure their absence.
However, running a full verification after each change is inadequate: Changes rarely affect the complete program behavior. For example, consider program absSum (Fig. 1, middle). If the assignment of program variable r is changed in the else-branch at location 5 (\(\texttt {absSum}^\texttt {mod}\), Fig. 1, right), only program executions that take that else-branch show different behavior. Program executions that take the if-branch (highlighted in gray) are not affected by the change. This is typical for program changes: A modified program \(P'\) exhibits some new or changed program executions compared to an original program P, but some executions also stay the same (Fig. 1, left). To ensure the safety of \(P'\), it is sufficient to inspect only the changed behavior \(\mathrm {ex}(P')\setminus \mathrm {ex}(P)\).
Many incremental verification approaches [39, 40] use this insight: Regression-test selection [62] tries to execute only those tests in a test suite that are relevant w.r.t. the change, and incremental formal verification techniques adapt existing proofs [33, 49, 53, 54], reuse intermediate results [16, 59], or skip the exploration of unchanged behavior [21, 47, 60, 61]. However, they (a) all focus on one fixed verification approach, (b) require a strong coupling between the original verification approach and the incremental technique, and (c) require an initial, full verification run. Often, this inflexibility makes an approach prohibitive.
As an alternative, we define the concept of difference verification with conditions: Given the original and the changed software, difference verification with conditions first identifies all executions that are affected by changes and encodes them in a condition, an exchange format already known from conditional model checking [10]; we call this first part diffCond. Then, a conditional verifier uses that condition to verify only the changed program behavior. For this step, any existing off-the-shelf verifier can be turned into a conditional verifier with the reducer-based approach [13].
Difference verification with conditions allows us to (a) use varying verification approaches for incremental verification, (b) automatically turn any existing verifier into an incremental verifier, and (c) skip an initial, costly verification run.
Contributions. We make the following contributions:

We propose difference verification with conditions, which is an incremental verification approach that combines existing tools and approaches.

We provide the algorithm diffCond, an integral part of difference verification with conditions, which outputs a description of the modified execution paths in an exchangeable condition format. We also prove its correctness.

We implemented diffCond in the verification framework CPAchecker and combined it with existing verifiers to construct difference verifiers.

To study the effectiveness and efficiency of difference verification with conditions, we performed an extensive evaluation on more than 10 000 C programs.

diffCond and all our data are available for replication and to construct further difference verifiers (see Sect. 7).
2 Background
Programs. For ease of presentation, we consider imperative programs with deterministic control flow, which execute statements from a set Ops. Our implementation supports C programs. Following the literature [8, 9, 30], we model programs as control-flow automata.
Definition 1
A controlflow automaton (CFA) \(P=(L,\ell _0,G)\) consists of

a set L of program locations with initial location \(\ell _0\in L\), and

a set \(G\subseteq L\times Ops\times L\) of controlflow edges.
CFA P is deterministic if \((\ell ,op,\ell '),(\ell ,op,\ell '')\in G ~\Rightarrow ~ \ell '=\ell ''\).
Figure 2 shows the CFA of the example program absSum from Fig. 1. A sequence \(\ell _0{\mathop {\rightarrow }\limits ^{op_1}}\ell _1\dots {\mathop {\rightarrow }\limits ^{op_n}}\ell _n\) is a syntactical path through CFA \(P = (L, \ell _0, G)\) if \(\forall i \in [1, n]: (\ell _{i-1}, op_i, \ell _i) \in G\). We rely on standard operational semantics and model a program state by a pair of (1) the program counter, whose value refers to a program location in the CFA, and (2) a concrete data state c, whose shape we do not further specify [8]. We denote the set of all concrete data states as C. The function \(sp_{op} : C \rightarrow 2^C\) describes the possible effects of operation \(op \in Ops\) on a concrete data state \(c \in C\). Based on this, a sequence \((\ell _0,c_0){\mathop {\rightarrow }\limits ^{op_1}}(\ell _1, c_1)\dots {\mathop {\rightarrow }\limits ^{op_n}}(\ell _n, c_n)\) is a program path through CFA \(P=(L, \ell _0,G)\) if \(\ell _0{\mathop {\rightarrow }\limits ^{op_1}}\ell _1\dots {\mathop {\rightarrow }\limits ^{op_n}}\ell _n\) is a syntactical path through P and \(\forall i \in [1, n]: c_i \in sp_{op_i}(c_{i-1})\). We denote the set of all program paths by \(\mathrm {paths}(P)\). Program executions are derived from program paths. If \(p=(\ell _0,c_0){\mathop {\rightarrow }\limits ^{op_1}}(\ell _1,c_1)\dots {\mathop {\rightarrow }\limits ^{op_n}}(\ell _n,c_n)\) is a program path, then \(\mathrm {ex}(p)=c_0{\mathop {\rightarrow }\limits ^{op_1}}c_1\dots {\mathop {\rightarrow }\limits ^{op_n}}c_n\) is a program execution. The executions of a program P are defined as \(\mathrm {ex}(P):=\{\mathrm {ex}(p)\mid p\in \mathrm {paths}(P)\}\).
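To make Definition 1 and the notion of a syntactical path concrete, the following sketch models a CFA as a (locations, initial location, edges) triple. The representation and the toy edges (loosely resembling the branch of absSum) are our own illustration, not the paper's implementation.

```python
def is_syntactic_path(cfa, path):
    """Check the syntactical-path property of Sect. 2.

    cfa:  (locations, initial_location, edges), where edges is a set of
          (source, operation, target) triples as in Definition 1.
    path: [l0, op1, l1, op2, l2, ...] alternating locations and operations.
    """
    _, l0, edges = cfa
    if path[0] != l0:                    # a path must start at l0
        return False
    # every consecutive (l_{i-1}, op_i, l_i) must be a control-flow edge
    for i in range(0, len(path) - 2, 2):
        if (path[i], path[i + 1], path[i + 2]) not in edges:
            return False
    return True

# Toy CFA: the if/else branching of absSum, with locations as strings.
cfa = (
    {"l0", "l1", "l2"},
    "l0",
    {("l0", "a < 0", "l1"), ("l0", "!(a < 0)", "l2")},
)
print(is_syntactic_path(cfa, ["l0", "a < 0", "l1"]))   # True
```

A program path additionally constrains the data states via \(sp_{op}\); the check above covers only the syntactic component.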
Conditions. A condition describes which program executions were already verified, e.g., in a previous verification run. We use automata to represent conditions and use accepting states to identify already verified executions [13].
Definition 2
A condition \(A=(Q,\delta ,q_0,F)\) consists of:

a finite set Q of states,

a transition relation \(\delta \subseteq Q\times Ops\times Q\) ensuring \(\forall (q, op, q')\in \delta : q\!\in \! F \Rightarrow q'\!\in \! F\),

the initial state \(q_0\in Q\), and a set \(F\subseteq Q\) of accepting states.^{Footnote 1}
The goal of absSum (left program in Fig. 2) is to compute \(r=\sum _{i=0}^{a}i\). However, the original program is buggy: In location \(\ell _5\), it must compute the product of a and \(a+1\), not the sum. The fixed program is shown in the middle of Fig. 2; the fix is highlighted in blue. The original and modified version of the program only differ in the else-branch. If we assume that the original program was already verified, we know that program executions passing through the if-branch have already been verified and do not need to be considered during a re-verification. In contrast, executions that pass through the else-branch and reach the modified statement must be verified. The condition shown on the right of Fig. 2 encodes this insight. Program executions that pass through the if-branch (\(a < 0\)) lead to the accepting state \(q_2\); we say they are covered by the condition. In contrast, program executions that pass through the else-branch (\(\lnot \, a < 0\)) never reach \(q_2\); they are not covered by the condition and must be analyzed.
Definition 3
A condition \(A=(Q,\delta ,q_0,F)\) covers an execution \(\pi =c_0{\mathop {\rightarrow }\limits ^{op_1}}c_1\dots {\mathop {\rightarrow }\limits ^{op_n}}c_n\) if there exists an index \(k\in [0, n]\) and a run \(\rho =q_0{\mathop {\rightarrow }\limits ^{op_1}}q_1\dots {\mathop {\rightarrow }\limits ^{op_k}}q_k\), s.t. \(q_k\in F\) and \(\forall i \in [1, k]: (q_{i-1},op_i,q_i)\in \delta \).
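Definition 3 can be read operationally: run the automaton over the execution's operation sequence and accept as soon as some prefix reaches an accepting state. A minimal sketch, where a condition is a (Q, delta, q0, F) tuple and the example transitions loosely mirror the condition of Fig. 2c (state names and operation strings are our own):

```python
def covers(condition, ops):
    """Does the condition cover an execution with operation sequence ops?

    Tracks the set of condition states reachable after each prefix
    (k = 0, 1, ..., n in Definition 3) and accepts once one of them
    is an accepting state.
    """
    _, delta, q0, accepting = condition
    current = {q0}
    for op in ops:
        if current & accepting:          # run of length k < n suffices
            return True
        current = {q2 for q1 in current
                   for (p, o, q2) in delta if p == q1 and o == op}
    return bool(current & accepting)     # run of length k = n

# Example: executions through the if-branch reach accepting state q2.
cond = ({"q0", "q1", "q2"},
        {("q0", "a < 0", "q1"), ("q1", "r = -a", "q2")},
        "q0", {"q2"})
print(covers(cond, ["a < 0", "r = -a"]))   # True: covered, already verified
print(covers(cond, ["!(a < 0)"]))          # False: must be analyzed
```

Note that coverage only needs a prefix of the execution to reach an accepting state; by the sink property of accepting states (Definition 2), any continuation stays accepted.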
Next, we introduce a simple and efficient way to systematically compute a condition that covers the common executions of an original and a modified program.
3 Component diffCond for Modular Construction
The ultimate goal of difference verification with conditions is to speed up re-verification of modified programs. To achieve this goal, we aim at ignoring unmodified program behavior during verification. Conditions are a well-fitting format to describe the unmodified program behavior. However, to benefit from difference verification with conditions, the construction of such conditions must be efficient, i.e., consume only a small portion of the overall execution time of the verification. Therefore, we use a syntactic approach to compute the condition, diffCond (Alg. 1), which runs in time linear in the size of the modified program.
diffCond gets as input the original program P and the modified program \(P'\). In lines 1 to 11, diffCond traverses the modified and the original program in parallel, stops traversal if the original and the modified program differ, and remembers the differing edge of the modified program.
It uses three data structures: Set \(E \subseteq L \times L' \times Ops \times L \times L'\) stores all compared edges \((\ell _1, op, \ell _2)\) and \((\ell _1', op, \ell _2')\) that are equal in both programs. These edges are called standard edges. They are stored in the composite form \(((\ell _1, \ell _1'), op, (\ell _2, \ell _2'))\). Set \(D \subseteq L \times L' \times Ops \times L'\) stores all edges \((\ell _1', op, \ell _2')\) of the modified program \(P'\) that represent a change from the original program P at \(\ell _1\), called difference edges. They are stored in the form \(((\ell _1, \ell _1'), op, \ell _2')\). Set \(\textit{waitlist}\subseteq L \times L'\) stores all pairs of program locations \((\ell _1, \ell _1')\) for which a program path with the same syntactic structure exists in P and \(P'\), and for which no outgoing edges have been considered yet. Initially, E and D are empty, since no edges have been checked so far, and the algorithm starts at the two initial program locations, i.e., \(\textit{waitlist}= \{(\ell _0, \ell _0')\}\) (lines 1 and 2). As long as \(\textit{waitlist}\) contains program locations, the algorithm picks one of them, here depicted as \((\ell _1, \ell _1')\) (line 4). It considers all outgoing edges \((\ell _1', op, \ell _2')\) of \(\ell _1'\) in the modified program. If the same operation op does not exist at any outgoing edge of \(\ell _1\), it is considered to be changed, and the difference edge \(((\ell _1, \ell _1'), op, \ell _2')\) is stored in D before continuing with the next state in \(\textit{waitlist}\). However, if the same operation op exists at an outgoing edge \((\ell _1, op, \ell _2)\), it is considered to be equal, and the standard edge \(((\ell _1, \ell _1'), op, (\ell _2, \ell _2'))\) is stored in E before continuing with the next state in \(\textit{waitlist}\). To this end, diffCond explores the syntactical composition of the original and modified program.
In addition, if the tuple \((\ell _2, \ell _2')\) of locations has not been detected before (line 10), it is added to the waitlist for further exploration. Figure 3 shows the graph built from edges E (black) and D (blue and dashed) when executing diffCond on absSum and \(\texttt {absSum}^\texttt {mod}\).
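The parallel traversal of lines 1 to 11 can be sketched as follows. This is our own simplified reconstruction, not the paper's Alg. 1; it assumes deterministic CFAs, so each location has at most one outgoing edge per operation, and uses the same tuple layouts for E and D as described above.

```python
from collections import deque

def diff_edges(cfa, cfa_mod):
    """Parallel traversal: collect standard edges E and difference edges D.

    cfa, cfa_mod: (locations, initial_location, edges) triples;
    E holds ((l1, l1'), op, (l2, l2')), D holds ((l1, l1'), op, l2').
    """
    (_, l0, G), (_, l0m, Gm) = cfa, cfa_mod
    succ = {}                            # original: source -> {op: target}
    for (l1, op, l2) in G:
        succ.setdefault(l1, {})[op] = l2
    E, D = set(), set()
    waitlist = deque([(l0, l0m)])
    seen = {(l0, l0m)}
    while waitlist:
        l1, l1m = waitlist.popleft()
        for (src, op, l2m) in Gm:        # outgoing edges of l1m in P'
            if src != l1m:
                continue
            l2 = succ.get(l1, {}).get(op)
            if l2 is None:               # operation changed at this point
                D.add(((l1, l1m), op, l2m))
            else:                        # same operation in both programs
                E.add(((l1, l1m), op, (l2, l2m)))
                if (l2, l2m) not in seen:
                    seen.add((l2, l2m))
                    waitlist.append((l2, l2m))
    return E, D
```

Because every composite location enters the waitlist at most once, the traversal visits each edge of the modified program a bounded number of times, which matches the linear-time claim above.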
To compute the condition, we first determine the condition's states. Lines 12 to 18 compute all nodes that can reach a successor of a difference edge. Figure 3 highlights these nodes in green. Nodes that are not discovered in lines 12–18 cannot lead to a difference edge and, thus, not to different program behavior. Consequently, undiscovered nodes that are successors of nodes discovered in lines 12–18 become final states (line 23). Figure 3 highlights these nodes in gray (only node \((\ell _2, \ell '_2)\)). The union of discovered and final states becomes the set of condition states. To complete the construction, we use the pair of initial program locations as the initial state \((\ell _0, \ell _0')\) and add to the transition relation all transitions from E and D that connect condition states. Figure 2c shows the condition created from Fig. 3.
Finally, note that lines 19–21 handle the special case that the set D of difference edges is empty, thus resulting in \(Q=\emptyset \) in line 19. The set D is empty if the original and the modified program only differ in the names of their program locations^{Footnote 2} or if the modified program is empty (\((\ell '_0,\cdot ,\cdot )\notin G'\)). In both cases, all executions of the modified program are covered by the executions of the original program. As a result, the condition covers all executions: its only state is both the initial and an accepting state, and the condition has no transitions.
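The condition-construction step (backward reachability to difference edges, final states, and the empty-D special case) might be sketched like this. This is a simplified reading of lines 12 to 27, not the exact Alg. 1; the fixed-point loop and the example edges are our own illustration.

```python
def build_condition(E, D, q0):
    """Build a condition (Q, delta, q0, F) from standard edges E and
    difference edges D, with initial composite location q0."""
    if not D:                       # special case: no change detected,
        return ({q0}, set(), q0, {q0})   # single state covers everything
    edges = set(E) | set(D)
    # Nodes that can reach a successor of a difference edge (green in Fig. 3):
    reach = {b for (_, _, b) in D}
    changed = True
    while changed:                  # naive backward fixed point
        changed = False
        for (a, _, b) in edges:
            if b in reach and a not in reach:
                reach.add(a)
                changed = True
    # Undiscovered successors of discovered nodes become accepting (gray):
    final = {b for (a, _, b) in E if a in reach and b not in reach}
    Q = reach | final
    delta = {(a, op, b) for (a, op, b) in edges if a in Q and b in Q}
    return (Q, delta, q0, final)

# Tiny example in the spirit of absSum: only the else-branch changed.
E = {(("l0", "m0"), "a < 0", ("l1", "m1")),
     (("l0", "m0"), "!(a < 0)", ("l2", "m2"))}
D = {(("l2", "m2"), "r = a * (a + 1) / 2", "m3")}
Q, delta, q0, F = build_condition(E, D, ("l0", "m0"))
print(F)   # {('l1', 'm1')}: the unchanged if-branch is covered
```

Executions entering the accepting state F need not be re-verified; all other states lie on paths toward a difference edge.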
The purpose of algorithm diffCond is to compute a condition that allows skipping unchanged behavior during re-verification of a modified program. To keep re-verification sound, the produced condition must not cover executions that do not occur in the original program. The following theorem states this property of algorithm diffCond.
Theorem 1
Let \(P=(L,\ell _0,G)\) and \(P'=(L',\ell '_0,G')\) be two CFAs. diffCond\((P,P')\) does not cover any execution from \(\mathrm {ex}(P')\setminus \mathrm {ex}(P)\).
Proof
Assume \(\mathrm {ex}(P')\setminus \mathrm {ex}(P)\ne \emptyset \). Hence, \({\textsc {diffCond}}(P,P') = (Q,\delta ,q_0, F)\) is returned in line 27. Let \((Q,\delta ,q_0,F) = A\), let \(\pi =c_0{\mathop {\rightarrow }\limits ^{op_1}}c_1\dots {\mathop {\rightarrow }\limits ^{op_n}}c_n\in \mathrm {ex}(P')\setminus \mathrm {ex}(P)\), and let \(\rho =q_0{\mathop {\rightarrow }\limits ^{op_1}}q_1\dots {\mathop {\rightarrow }\limits ^{op_k}}q_k\) be a run through A, s.t. \(0\le k\le n\) and \(\forall 1\le i\le k: (q_{i-1},op_i,q_i)\in \delta \). By construction, (1) \(q_0\notin F\), (2) \({\forall 1\le i<k}: (q_{i-1}, op_i, q_i)\in E\wedge q_i\notin F\), and (3) \((q_{k-1},op_k,q_k)\in E\cup D\). We need to show that \(q_k\notin F\). Case \(k=0\) follows from (1).
Next, consider the case \(k=n\). If \((q_{k-1},op_k,q_k)\in E\), by construction there exists a syntactical path \(sp=\ell _0{\mathop {\rightarrow }\limits ^{op_1}}\ell _1\dots {\mathop {\rightarrow }\limits ^{op_n}}\ell _n\) in P and, due to program semantics, \(\pi \in \mathrm {ex}(P)\). Since \(\pi \in \mathrm {ex}(P')\setminus \mathrm {ex}(P)\), we infer \((q_{k-1},op_k,q_k)\in D\) and thus \(q_k\notin F\).
Finally, consider the case \(k<n\). If \((q_{k-1},op_k,q_k)\in D\), we infer \(q_k\notin F\). Assume \((q_{k-1},op_k,q_k)\in E\). By construction, there exists a syntactical path \(sp=\ell _0{\mathop {\rightarrow }\limits ^{op_1}}\ell _1\dots {\mathop {\rightarrow }\limits ^{op_k}}\ell _k\) in program P and a syntactical path \(sp'=\ell '_0{\mathop {\rightarrow }\limits ^{op_1}}\ell '_1\dots {\mathop {\rightarrow }\limits ^{op_k}}\ell '_k\) in program \(P'\), s.t. \(\forall 0\le i\le k: q_i=(\ell _i,\ell '_i)\). Let \(\ell _0{\mathop {\rightarrow }\limits ^{op_1}}\ell _1\dots {\mathop {\rightarrow }\limits ^{op_k}}\ell _k{\mathop {\rightarrow }\limits ^{op_{k+1}}}\ell _{k+1}\dots {\mathop {\rightarrow }\limits ^{op_m}}\ell _m\) be an extension of the syntactical path sp s.t. \(m=n\) or \((\ell _m, op_{m+1},\cdot )\notin G\). Due to program semantics and \(\pi \in \mathrm {ex}(P')\setminus \mathrm {ex}(P)\), we conclude \(k\le m<n\). Due to program semantics, \(P'\) being deterministic, and \(\pi \in \mathrm {ex}(P')\), there exists an extension \(\ell '_0{\mathop {\rightarrow }\limits ^{op_1}}\ell '_1\dots {\mathop {\rightarrow }\limits ^{op_k}}\ell '_k{\mathop {\rightarrow }\limits ^{op_{k+1}}}\ell '_{k+1}\dots {\mathop {\rightarrow }\limits ^{op_m}}\ell '_m\) of the syntactical path \(sp'\). By construction, \(\forall 1\le i\le m: ((\ell _{i-1},\ell '_{i-1}), op_i,(\ell _i,\ell '_i))\in E\) and there exists \(((\ell _{m},\ell '_{m}), op_{m+1},\cdot )\in D\). Hence, \(\forall 0\le i\le m: (\ell _{i},\ell '_{i})\in Q\setminus F\). Since \(q_k=(\ell _k,\ell '_k)\) and \(k\le m\), \(q_k\notin F\).
Theoretical Limitations. The effectiveness of difference verification with conditions depends on the amount of program code potentially affected by a change, which is determined by the diffCond component. diffCond only excludes program parts that cannot be syntactically reached from a program change. Therefore, difference verification is ineffective if some initial variable assignments at the very beginning of the program or some global declarations change. Moreover, the structure of a program strongly influences the effectiveness of difference verification. For example, programs like \(\texttt {absSum}^\infty \) (Fig. 4) that mainly consist of a loop are problematic. Program \(\texttt {absSum}^\infty \) is similar to absSum, but has an additional, outer loop that dominates the program. So when location \(\ell _7\) is changed in \(\texttt {absSum}^\infty \), difference verification with conditions can only exclude the if-branch for the very first iteration of the outer loop. Thereafter, the change in location \(\ell _7\) may propagate into the if-branch.
In contrast, difference verification with conditions can be effective on programs that allow the exclusion of program parts, e.g., if the program is modular and, thus, consists of multiple, loosely coupled parts. Examples of modularity are the strategy design pattern, object-oriented software, or software applications with multiple program features.
When designing our experiments, we will consider these limitations of difference verification with conditions. Before we get to our experiments, we must describe the modular composition of the diffCond component with a verifier, which specifies the difference verifier.
4 Modular Combinations with Existing Verifiers
The diffCond algorithm can be combined with any off-the-shelf conditional verifier [10] to produce a difference verifier in a modular way. The goal of a difference verifier is to verify only modified program paths. To this end, it first uses diffCond to discover potentially modified program paths and then runs a conditional verifier to explore only those paths identified by diffCond. Figure 5 shows the construction template for difference verification with conditions. diffCond gets the original and modified program as input and encodes the modified paths in a condition. The constructed condition is forwarded to a conditional verifier, which uses the condition to restrict its analysis of the modified program to those paths that are not covered by the condition (i.e., the modified paths). Based on this template, we can construct difference verifiers from arbitrary conditional verifiers. Moreover, we can construct difference verifiers from non-conditional verifiers by using the concept of reducer-based conditional verifiers [13]. The idea of a reducer-based conditional verifier is shown on the right of Fig. 5. To turn an arbitrary verifier into a conditional one, a reducer-based conditional verifier puts a preprocessor (called reducer) in front of the verifier. The reducer gets a program and a condition and outputs a new, residual program that represents the program paths not covered by the condition. A full verification of this residual program is then equivalent to a conditional verification of the original program with the produced condition. However, note that the existing reducers are designed for model checkers and do not necessarily work with other verification technologies like deductive verifiers.
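The construction template amounts to a composition of three components. A minimal sketch with the components as abstract callables; the signatures are hypothetical, and a real integration would invoke diffCond, the reducer, and the off-the-shelf verifier as external tools rather than Python functions:

```python
def difference_verifier(diffcond, reducer, verifier):
    """Construction template of Fig. 5 as function composition.

    diffcond(original, modified) -> condition   (changed paths)
    reducer(modified, condition) -> residual    (uncovered paths only)
    verifier(residual)           -> verdict     (full run on residual)

    Chaining reducer and verifier yields a reducer-based conditional
    verifier; prepending diffcond yields a difference verifier.
    """
    def verify(original, modified):
        condition = diffcond(original, modified)
        residual = reducer(modified, condition)
        return verifier(residual)
    return verify

# Usage with stub components (placeholders, not real tools):
dv = difference_verifier(
    lambda p, pm: ("cond", p, pm),
    lambda pm, cond: ("residual", pm, cond),
    lambda res: ("verdict", res),
)
```

The modularity is visible in the code: swapping the verifier callable swaps the underlying verification technology without touching diffCond or the reducer.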
In this paper, we transform three verifiers into difference verifiers: CPA-Seq, UAutomizer, and Predicate. The first two are the best verifiers from SV-COMP 2020 [5], and the third is a predicate-abstraction approach. We use the off-the-shelf verifiers CPA-Seq and UAutomizer as non-conditional verifiers and thus add a reducer, while we use Predicate as conditional verifier. Since a difference verifier can now be built from any off-the-shelf verifier, we can also combine difference verification with other incremental verification techniques. As an example, we can use precision reuse [16]. This technique is implemented in CPAchecker [16] and UAutomizer [49] and can be used with the previously mentioned approaches. Next, we explain the technologies of the selected verifiers.
CPA-Seq uses several different strategies from the CPAchecker verification framework [6, 11, 14]. CPA-Seq first analyzes different features of the program under verification. The program features considered are: recursion, concurrency, occurrence of loops, and occurrence of complex data types like pointers and structs. Based on these features, CPA-Seq uses one of five different verification techniques (cf. [6]). For non-recursive, non-concurrent programs with a non-trivial control flow, CPA-Seq uses a sequential combination of four different analyses: It uses value analysis with and without counterexample-guided abstraction refinement (CEGAR) [24], a predicate analysis similar to Predicate, and k-induction with invariant generation [7]. Invariants are generated by numerical and predicate analyses and are forwarded to the k-induction analysis.
UAutomizer is the automata-based approach from the Ultimate verification framework [29, 31]. It uses a CEGAR approach to successively refine an overapproximation of the error paths, which is given in the form of automata. In each refinement step, a generalization of an infeasible error path is excluded from the overapproximation. The generalization of the error path is described by a Floyd-Hoare automaton [31], which assigns Boolean formulas over predicates to its states. The predicates are obtained via interpolation along the infeasible error path [43].
Predicate is the predicate-abstraction approach from the CPAchecker framework [14] with adjustable-block encoding (ABE) [15]. ABE is instructed to abstract at loop heads only. CEGAR together with lazy refinement [34] and interpolation [32] determines the necessary set of predicates.
PrecisionReuse is a competitive incremental approach that avoids recomputing the required abstraction levelÂ [16]. The idea is to start with the abstraction level determined in a previous verification run. To this end, it stores and reuses the precision, which describes the abstraction level, e.g., the set of predicates to be tracked. We use the version as implemented in CPAchecker.
5 Evaluation
We systematically evaluate our proposed approach along the following claims:
Claim 1
Difference verification with conditions can be more effective than a full verification. Evaluation Plan: For all verifiers, we compare the number of tasks solved by difference verification with conditions and by the pure verifier.
Claim 2
Difference verification with conditions is more effective when using multiple verifiers. Evaluation Plan: We compare the number of tasks solved by each difference verifier with the union of tasks solved by all difference verifiers.
Claim 3
Difference verification with conditions can be more efficient than a full verification. Evaluation Plan: For all verifiers, we compare the run time of difference verification with conditions and of the pure verifier.
Claim 4
The run time of difference verification with conditions is dominated by the run time of the verifier. Evaluation Plan: We relate the time for verification to the time required by the diffCond algorithm and the reducer.
Claim 5
Difference verification with conditions can complement existing incremental verification approaches. Evaluation Plan: We compare the results of difference verification with conditions with the results of precision reuseÂ [16], a competitive incremental verification approach.
Claim 6
Combining difference verification with conditions with existing incremental verification approaches can be beneficial. Evaluation Plan: We compare the results of difference verification with the results of a combination of difference verification with conditions and precision reuse.
5.1 Experiment Setup
Computing Environment. We performed all experiments on machines with an Intel Xeon E3-1230 v5 CPU, 3.4 GHz, with 8 cores each, and 33 GB of memory, running Ubuntu 18.04 with Linux kernel 4.15. We limited each analysis run to 15 GB of memory, a time limit of 900 s, and 4 CPU cores. To enforce these limits, we ran our experiments with BenchExec [17], version 2.3.
Verifiers. For our experiments, we use the software verifiers CPA-Seq^{Footnote 3} [6, 14] and UAutomizer^{Footnote 4} [29, 31] as submitted to SV-COMP 2020, and CPAchecker [14, 15] in revision 32864^{Footnote 5}. CPA-Seq and UAutomizer are used as verifiers. CPAchecker provides the verifier Predicate, but also the new diffCond component and the Reducer component for reducer-based conditional verification. The difference verifier based on Predicate is realized as a single run. In contrast, the difference verifiers based on CPA-Seq and UAutomizer are realized as a composition of two separate runs. The first run executes the diffCond algorithm followed by the reducer to generate the residual program. It is only executed once per task, i.e., the same residual programs are given to CPA-Seq and UAutomizer. In a second run, CPA-Seq and UAutomizer, respectively, verify the residual program. To deal with residual programs, we increased the Java stack size for CPA-Seq and UAutomizer.
Existing Incremental Verifier. We use Predicate with precision reuseÂ [16].
Verification Tasks. We use verification tasks from the public repository sv-benchmarks (tag svcomp20)^{Footnote 6}, which is the most diverse, largest, and well-established collection of verification tasks. Since difference verification with conditions is an incremental verification approach, we require different program versions. We searched the benchmark repository for programs that come with multiple versions and for which at least one version is hard to solve, i.e., at least one of the three considered verifiers takes more than 100 s for verification of that version, but is successful. From these programs, we arbitrarily picked the following: eca05 and eca12 (event-condition-action systems, 10 versions each), gcd (greatest-common-divisor computation, 4 versions), newton (approximation of sine, 24 versions), pals (leader election, 26 versions), sfifo (second-chance FIFO replacement, 5 versions), softflt (a software implementation of floats, 5 versions), square (square-root computation, 8 versions), and token (a communication protocol, 28 versions). Unfortunately, all of these programs are specialized implementations with a single purpose. Thus, their implementation is strongly coupled, and any reasonable program change affects the complete program. As explained before, this prohibits effective difference verification with conditions.
To get benchmark tasks that instead contain independent program parts, we create new combinations from the selected programs. We choose two programs, e.g., eca05 and token, and combine them according to the following scheme: We create a new program with all declarations and definitions of both original programs, but a new main function. This new main function randomly calls the main function of one of the two original programs. Name clashes are resolved via renaming. Figure 6 shows the conceptual structure of each program created through this combination. For our experiments, we consider the following combinations of programs: (1) eca05+token, (2) gcd+newton, (3) pals+eca12, (4) sfifo+token, (5) square+softflt. To obtain different versions of our combinations, we replace one of the two program parts with a different version of that part. For example, to get a different version of the original program eca05+token, we change the version of the eca05 part or the token part, but never both.
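The combination scheme could be sketched as a small generator. Everything here is our own simplification: the textual renaming is naive (real name-clash resolution needs a proper C parser), and the `__VERIFIER_nondet_int` dispatcher is one plausible way to realize the random call described above.

```python
def combine(src_a, src_b, name_a, name_b):
    """Combine two C program texts into one benchmark task.

    Renames each original main function, emits both program texts, and
    appends a new main that nondeterministically calls one of them.
    """
    a = src_a.replace("main", f"main_{name_a}")   # naive textual renaming
    b = src_b.replace("main", f"main_{name_b}")
    dispatcher = (
        "int main(void) {\n"
        f"  if (__VERIFIER_nondet_int()) return main_{name_a}();\n"
        f"  return main_{name_b}();\n"
        "}\n"
    )
    return a + "\n" + b + "\n" + dispatcher

combined = combine("int main(void) { return 0; }",
                   "int main(void) { return 1; }",
                   "eca05", "token")
```

Changing one part while keeping the other fixed then yields program pairs where the unchanged part should be excludable by diffCond.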
With this procedure, we get a large number of different versions of our program combinations. For our evaluation, we consider each pair (O, N) of versions O and N of program combinations that fulfills the following two conditions: (1) N reflects a change, i.e., the two programs are different. (2) Version O, version N, or both versions are bug-free. This ensures that verification and difference verification can only find the same bugs. With this construction of benchmark tasks for incremental verification, we get a total of 10 426 tasks that we use in our experiments.
5.2 Experimental Results
Claim 1
(Difference verification with conditions more effective). Table 1 gives an overview of our experimental results. Each column represents one task set. The rows refer to verifiers, i.e., pure verifiers (X) and difference verifiers (\(X^\varDelta \)). The last two rows are the union of the results of all three verifiers. For each task set and verifier, the table provides the number of tasks for which the verifier finds a proof ( ), finds a bug ( ), and only the difference verifier gives a conclusive answer ( ). It also shows the number of tasks ( ) that cannot be solved. Neither the pure nor the difference verifiers reported incorrect results.
The table shows that for each verifier there exist task sets on which the number of solved correct tasks ( ) is higher for the difference verifier. Looking at the respective columns, we observe that typically there exist tasks that only the difference verifier can solve. This shows that our new difference verification with conditions can be more effective.
Difference verification with conditions is not always more effective. In particular, \({\textsc {CPA-Seq}}^\varDelta \) and \({\textsc {UAutomizer}}^\varDelta \) sometimes perform worse. For example, \({\textsc {CPA-Seq}}^\varDelta \) finds significantly fewer bugs than CPA-Seq for eca05+token. The reason for this is the residual program constructed by the reducer, which is necessary to turn CPA-Seq into the required conditional verifier. The created residual programs, on which the off-the-shelf verifiers run, have a different structure than the original program. They make heavy use of goto statements and deeply nested branching structures. While semantically equivalent, this can have unexpected effects on analyses: In the case of the tasks in eca05+token, CPA-Seq was not able to detect required information about loops and thus aborted its verification. Note that this is not a direct issue of difference verification with conditions, but an orthogonal one. To fix the problem, verification tools must be improved to better deal with the generated residual programs, or the structure of the residual programs must be improved. Despite the problem with residual programs, difference verification can solve many tasks that a full verification run cannot solve.
Since Predicate is already a conditional model checker, \(\textsc{Predicate}^\varDelta\) does not suffer from the residual-program problem. Thus, the effectiveness of difference verification with conditions becomes even more obvious when comparing Predicate with \(\textsc{Predicate}^\varDelta\). For the first three task sets, \(\textsc{Predicate}^\varDelta\) solves all tasks that Predicate solves plus a significant number of additional tasks that Predicate cannot solve. For the last two task sets, \(\textsc{Predicate}^\varDelta\) fails to solve a few tasks that Predicate can solve. However, \(\textsc{Predicate}^\varDelta\) still solves more tasks in total. One reason is that the predicate abstraction used by Predicate may compute different predicates (due to a slightly different exploration of the state space), which may result in a more expensive abstraction. For some tasks, these different predicates are less suited to solve the task and thus require more time, so the analysis hits the time limit. Typically, we observe this phenomenon when Predicate is already expensive (in our experiments, when it takes at least 700 s). While difference verification may produce worse results for complicated tasks with large changes, \(\textsc{Predicate}^\varDelta\) is still more effective than Predicate in all categories.
Claim 2
(Better with several verifiers). To study the usefulness of using several verifiers in difference verification, we look at the tasks solved by the three difference verifiers together. We observe that \(\textsc{Predicate}^\varDelta\) solves the most tasks in all task sets except for pals+eca12, on which \(\textsc{CPA-Seq}^\varDelta\) is better. Moreover, looking at All\(^\varDelta\), which takes the union of all results, we observe that eca05+token contains multiple tasks without a property violation that cannot be solved by the best difference verifier for this task set (\(\textsc{Predicate}^\varDelta\)), but are solved by the union. Thus, difference verification is more effective when using several verifiers.
Claim 3
(Difference verification with conditions more efficient). We compare the run times of the pure verifiers with the run times of the difference verifiers. For all three verifiers, the scatter plots in Fig. 7 show the CPU time required to check a task without (x-axis) and with difference verification (y-axis). If a task was not solved because the verifier ran out of resources or encountered an error, we assume the maximum CPU time of 900 s. Figures 7a and 7b compare the two non-conditional verifiers CPA-Seq and UAutomizer, for which we use the reducer-based approach to construct conditional verifiers. For a significant number of tasks (below the diagonal), the difference verifier is faster than the respective verifier CPA-Seq or UAutomizer, and the tasks on the right edge can only be solved by the difference verifier. There are also tasks for which difference verification is slower (above the diagonal). Note that the problem here is the residual program, not our approach: many tasks located at the upper edge do not represent timeouts of the difference verification, but failures of the verifier caused by the structure of the residual program. Figure 7c considers the conditional verifier Predicate. For the majority of tasks, the CPU time required by \(\textsc{Predicate}^\varDelta\) is equal to or less than the time required by Predicate (tasks below the line). Moreover, there are only a few tasks for which \(\textsc{Predicate}^\varDelta\) is slower than Predicate (tasks above the line). The reason for this slowdown is most likely the computation of worse predicates (see Claim 1). To sum up, difference verification with conditions can successfully increase efficiency.
Claim 4
(Verifier dominates run time). We aim to show that the diffCond component and the residual-program construction (in the reducer-based approach to construct conditional verifiers) require a negligible run time compared to the complete verification. Figure 8a shows, for each task verified with \(\textsc{CPA-Seq}^\varDelta\) and \(\textsc{UAutomizer}^\varDelta\), the CPU time required by the full verification run (x-axis) and the CPU time of that run spent on diffCond plus the reducer (y-axis). The time required by diffCond + reducer does not depend on the run time of the verifier, and it is below 60 s for all tasks.
Claim 5
(Difference verification with conditions complementary). To show that difference verification with conditions complements existing incremental verification, we need to compare it against an existing incremental approach. Looking at existing approaches that are (1) available as a replication artifact and (2) able to run on verification tasks from sv-benchmarks, we identified two, both based on precision reuse: one implemented in CPAchecker [16] and one in Ultimate [49]. We use the one in CPAchecker. Figure 8b plots the CPU time of precision reuse with Predicate, called \(\textsc{Predicate}^{\varvec{\circlearrowright}}\) (x-axis), against our difference verification with Predicate, called \(\textsc{Predicate}^\varDelta\) (y-axis). Many tasks are solved efficiently by both techniques (large cluster in the lower left). For the remaining hard tasks, difference verification is often faster than precision reuse, or precision reuse cannot solve the task at all (points below the diagonal and on the right edge). This shows that difference verification with conditions can improve on precision reuse for a significant number of tasks and can thus complement existing incremental techniques.
Claim 6
(Combinations sometimes beneficial). We combined difference verification with conditions with precision reuse, called \(\textsc{Predicate}^{\varDelta\varvec{\circlearrowright}}\). Figure 8c shows that this combination rarely becomes faster than difference verification \(\textsc{Predicate}^\varDelta\) alone. In the worst case, the combination even slows down, because precision reuse tracks previously used predicates from the beginning, while difference verification detects the necessary predicates lazily. The resulting more precise abstraction leads to more, sometimes unnecessary, computations. Nevertheless, the combination can solve 29 tasks that neither Predicate, its difference verifier, nor precision reuse can solve alone. Thus, while a combination of the two incremental techniques is not beneficial in general, it can be.
5.3 Threats to Validity
External Validity. (1) Our benchmark tasks might not represent real program changes, and thus our results might not transfer to reality. However, we built our tasks from a well-established collection of software-verification problems that are considered relevant in the verification community. Moreover, many of the combined programs implement known algorithms (greatest common divisor, Newton approximation of a sine function, Taylor expansion of a square root) or are derived from real applications (OpenSSL, SystemC designs, leader election). Also, our combination is not uncommon in practice: such combination patterns result, e.g., from implementing the strategy pattern. Finally, our task set contains pairs of programs whose only difference is a bug fix that eliminates the reachability of the __VERIFIER_error() call. We believe that similar fixes are done in practice to eliminate bugs. (2) We compared our approach with only a single existing approach for incremental verification, and this comparison is restricted to a single verifier. Our observations may not apply to different incremental-verification approaches or different verifiers. The same holds for the combination of difference verification with orthogonal, incremental verification approaches.
Internal Validity. (3) The implementation of the diffCond algorithm may contain bugs and thus produce conditions that also exclude modified paths. We would expect that such a bug would also exclude error paths; since we never observed false proofs, we assume this is unlikely. (4) Difference verification with CPA-Seq and UAutomizer could appear improved simply because we separated the verification from the execution of diffCond + reducer and granted both runs a limit of 900 s. However, the sum of the two times is below 900 s for all correctly solved tasks.
6 Related Work
Equivalence Checking. Regression verification [27, 28, 55, 56], SymDiff [23], UC-Klee [48], and other approaches [4, 26] check whether the input-output behavior of the original and the modified method or program is the same. Differential assertion checking [38] inspects whether the original and modified program trigger the same assertions when given the same inputs. Equivalence checking need not be restricted to a simple yes/no answer: Semantic Diff [35] reports all dependencies between variables and input values that occur either in the original or the modified program. Conditional equivalence [37] infers under which input assumption the original and modified program produce the same output. Over-approximation of the differences between the original and modified program has also been investigated [45]. Differential symbolic execution [46] compares function summaries and constructs a delta that describes the input values on which the summaries are unequal. Partition-based regression verification [19] splits the program input space into inputs on which the original and modified program behave equivalently and those on which the two programs differ. Equivalence checking is not directly tailored to property verification, but determining when the original and modified programs may behave differently is similar to the goal of the diffCond algorithm.
Result Adaption. Incremental data-flow analysis [51], Reviser [3], and IncA [57, 58] adapt an existing data-flow solution to program modifications. Similarly, incremental abstract interpretation [52] adapts the solution of the abstract interpreter. Incremental model checking in the modal \(\mu\)-calculus [54] adapts a previous fixed point and restarts the fixed-point iteration. Other approaches [18, 20] model data-flow analysis and verification as computation of attributed parse trees; a change results in an update of the attributed parse tree. Extreme model checking [33] reuses valid parts of the abstract reachability graph (ARG) and resumes the state-space exploration from those nodes with invalid successors. Incremental state-space exploration [41] reuses a previous state-space graph to prune the current exploration. HiFrog [1] and eVolCheck [25] implement an approach that reuses function summaries and recomputes invalid summaries [53]. UAutomizer adapts a previous trace abstraction [49], a set of Floyd-Hoare automata that describe infeasible error paths, to reuse it on the modified program. While result adaption uses the same verification technique for the original and the modified program, our approach may use different techniques.
Reusing Intermediate Results. Green [59], GreenTrie [36], and Recal [2] support the reuse of constraint proofs. Similarly, iSaturn [44] supports the reuse of SAT results for Boolean constraints that are identical. Precision reuse [16] reuses the precision of an abstraction, e.g., which variables or predicates to track, from a previous verification run. These approaches are orthogonal to ours; in the experiments, we even combined precision reuse [16] with our approach.
Skipping Unaffected Verification Steps. Regression model checking [60] stops the exploration of a state as soon as no program change can be reached from that state. Directed incremental [47, 50] and memoized [61] symbolic execution restrict the exploration to paths that may be affected by the program change; additionally, memoized symbolic execution does not check constraints as long as the path prefix is unchanged. The Dafny verifier rechecks only methods affected by a change, reusing unchanged verification conditions [42]. iCoq [21, 22] detects and rechecks only those Coq proofs that are affected by a change in the Coq project. These ideas are similar to ours, but are tailored to specific techniques.
7 Conclusion
Software is frequently changed during development, so verification techniques must deal with repeatedly verifying nearly the same software. To construct efficient incremental verifiers from off-the-shelf components, we introduce difference verification with conditions, which steers an arbitrary existing verifier to re-verify only the changed parts. Compared to existing techniques, our approach is tool-agnostic and can be used with arbitrary algorithms for change analysis. We provide an implementation of a change analysis as a reusable component, which we combined with three existing verifiers. In a thorough evaluation on more than 10 000 tasks, we showed the effectiveness and efficiency of difference verification with conditions.
Data Availability Statement. diffCond and all our data are available for replication and for constructing further difference verifiers on our supplementary web page (Footnote 7) and in a replication package on Zenodo [12].
Notes
2. In practice, this can happen if empty lines are added or removed from the program.
References
Alt, L., Asadi, S., Chockler, H., Even-Mendoza, K., Fedyukovich, G., Hyvärinen, A.E.J., Sharygina, N.: HiFrog: SMT-based function summarization for software verification. In: Proc. TACAS, LNCS, vol. 10206, pp. 207–213. Springer (2017). https://doi.org/10.1007/978-3-662-54580-5_12
Aquino, A., Bianchi, F.A., Chen, M., Denaro, G., Pezzè, M.: Reusing constraint proofs in program analysis. In: Proc. ISSTA, pp. 305–315. ACM (2015). https://doi.org/10.1145/2771783.2771802
Arzt, S., Bodden, E.: Reviser: Efficiently updating IDE/IFDS-based data-flow analyses in response to incremental program changes. In: Proc. ICSE, pp. 288–298. ACM (2014). https://doi.org/10.1145/2568225.2568243
Backes, J., Person, S., Rungta, N., Tkachuk, O.: Regression verification using impact summaries. In: Proc. SPIN, LNCS, vol. 7976, pp. 99–116. Springer (2013). https://doi.org/10.1007/978-3-642-39176-7_7
Beyer, D.: Advances in automatic software verification: SV-COMP 2020. In: Proc. TACAS (2), LNCS, vol. 12079, pp. 347–367. Springer (2020). https://doi.org/10.1007/978-3-030-45237-7_21
Beyer, D., Dangl, M.: Strategy selection for software verification based on Boolean features: A simple but effective approach. In: Proc. ISoLA, LNCS, vol. 11245, pp. 144–159. Springer (2018). https://doi.org/10.1007/978-3-030-03421-4_11
Beyer, D., Dangl, M., Wendler, P.: Boosting k-induction with continuously-refined invariants. In: Proc. CAV, LNCS, vol. 9206, pp. 622–640. Springer (2015). https://doi.org/10.1007/978-3-319-21690-4_42
Beyer, D., Gulwani, S., Schmidt, D.: Combining model checking and data-flow analysis. In: Handbook of Model Checking, pp. 493–540. Springer (2018). https://doi.org/10.1007/978-3-319-10575-8_16
Beyer, D., Henzinger, T.A., Jhala, R., Majumdar, R.: The software model checker Blast. Int. J. Softw. Tools Technol. Transfer 9(5–6), 505–525 (2007). https://doi.org/10.1007/s10009-007-0044-z
Beyer, D., Henzinger, T.A., Keremoglu, M.E., Wendler, P.: Conditional model checking: A technique to pass information between verifiers. In: Proc. FSE. ACM (2012). https://doi.org/10.1145/2393596.2393664
Beyer, D., Henzinger, T.A., Théoduloz, G.: Configurable software verification: Concretizing the convergence of model checking and program analysis. In: Proc. CAV, LNCS, vol. 4590, pp. 504–518. Springer (2007). https://doi.org/10.1007/978-3-540-73368-3_51
Beyer, D., Jakobs, M.-C., Lemberger, T.: Replication package for article 'Difference verification with conditions'. Zenodo (2020). https://doi.org/10.5281/zenodo.3954933
Beyer, D., Jakobs, M.-C., Lemberger, T., Wehrheim, H.: Reducer-based construction of conditional verifiers. In: Proc. ICSE, pp. 1182–1193. ACM (2018). https://doi.org/10.1145/3180155.3180259
Beyer, D., Keremoglu, M.E.: CPAchecker: A tool for configurable software verification. In: Proc. CAV, LNCS, vol. 6806, pp. 184–190. Springer (2011). https://doi.org/10.1007/978-3-642-22110-1_16
Beyer, D., Keremoglu, M.E., Wendler, P.: Predicate abstraction with adjustable-block encoding. In: Proc. FMCAD, pp. 189–197. FMCAD (2010)
Beyer, D., Löwe, S., Novikov, E., Stahlbauer, A., Wendler, P.: Precision reuse for efficient regression verification. In: Proc. FSE, pp. 389–399. ACM (2013). https://doi.org/10.1145/2491411.2491429
Beyer, D., Löwe, S., Wendler, P.: Benchmarking and resource measurement. In: Proc. SPIN, LNCS, vol. 9232, pp. 160–178. Springer (2015). https://doi.org/10.1007/978-3-319-23404-5_12
Bianculli, D., Filieri, A., Ghezzi, C., Mandrioli, D.: Syntactic-semantic incrementality for agile verification. SCICO 97, 47–54 (2015). https://doi.org/10.1016/j.scico.2013.11.026
Böhme, M., Oliveira, B.C.d.S., Roychoudhury, A.: Partition-based regression verification. In: Proc. ICSE, pp. 302–311. IEEE (2013). https://doi.org/10.1109/ICSE.2013.6606576
Carroll, M.D., Ryder, B.G.: Incremental data-flow analysis via dominator and attribute updates. In: Proc. POPL, pp. 274–284. ACM (1988). https://doi.org/10.1145/73560.73584
Çelik, A., Palmskog, K., Gligoric, M.: iCoq: Regression proof selection for large-scale verification projects. In: Proc. ASE, pp. 171–182. IEEE (2017). https://doi.org/10.1109/ASE.2017.8115630
Çelik, A., Palmskog, K., Gligoric, M.: A regression proof selection tool for Coq. In: Proc. ICSE (Companion Volume), pp. 117–120. ACM (2018). https://doi.org/10.1145/3183440.3183493
Chaki, S., Gurfinkel, A., Strichman, O.: Regression verification for multi-threaded programs (with extensions to locks and dynamic thread creation). FMSD 47(3), 287–301 (2015). https://doi.org/10.1007/s10703-015-0237-0
Clarke, E.M., Grumberg, O., Jha, S., Lu, Y., Veith, H.: Counterexample-guided abstraction refinement for symbolic model checking. J. ACM 50(5), 752–794 (2003). https://doi.org/10.1145/876638.876643
Fedyukovich, G., Sery, O., Sharygina, N.: eVolCheck: Incremental upgrade checker for C. In: Proc. TACAS, LNCS, vol. 7795, pp. 292–307. Springer (2013). https://doi.org/10.1007/978-3-642-36742-7_21
Felsing, D., Grebing, S., Klebanov, V., Rümmer, P., Ulbrich, M.: Automating regression verification. In: Proc. ASE, pp. 349–360. ACM (2014). https://doi.org/10.1145/2642937.2642987
Godlin, B., Strichman, O.: Regression verification. In: Proc. DAC, pp. 466–471. ACM (2009). https://doi.org/10.1145/1629911.1630034
Godlin, B., Strichman, O.: Regression verification: Proving the equivalence of similar programs. Softw. Test. Verif. Reliab. 23(3), 241–258 (2013). https://doi.org/10.1002/stvr.1472
Heizmann, M., Chen, Y.-F., Dietsch, D., Greitschus, M., Hoenicke, J., Li, Y., Nutz, A., Musa, B., Schilling, C., Schindler, T., Podelski, A.: Ultimate Automizer and the search for perfect interpolants (competition contribution). In: Proc. TACAS (2), LNCS, vol. 10806, pp. 447–451. Springer (2018). https://doi.org/10.1007/978-3-319-89963-3_30
Heizmann, M., Hoenicke, J., Podelski, A.: Refinement of trace abstraction. In: Proc. SAS, LNCS, vol. 5673, pp. 69–85. Springer (2009). https://doi.org/10.1007/978-3-642-03237-0_7
Heizmann, M., Hoenicke, J., Podelski, A.: Software model checking for people who love automata. In: Proc. CAV, LNCS, vol. 8044, pp. 36–52. Springer (2013). https://doi.org/10.1007/978-3-642-39799-8_2
Henzinger, T.A., Jhala, R., Majumdar, R., McMillan, K.L.: Abstractions from proofs. In: Proc. POPL, pp. 232–244. ACM (2004). https://doi.org/10.1145/964001.964021
Henzinger, T.A., Jhala, R., Majumdar, R., Sanvido, M.A.A.: Extreme model checking. In: Verification: Theory and Practice, LNCS, vol. 2772, pp. 332–358 (2003). https://doi.org/10.1007/978-3-540-39910-0_16
Henzinger, T.A., Jhala, R., Majumdar, R., Sutre, G.: Lazy abstraction. In: Proc. POPL, pp. 58–70. ACM (2002). https://doi.org/10.1145/503272.503279
Jackson, D., Ladd, D.A.: Semantic Diff: A tool for summarizing the effects of modifications. In: Proc. ICSM, pp. 243–252. IEEE (1994). https://doi.org/10.1109/ICSM.1994.336770
Jia, X., Ghezzi, C., Ying, S.: Enhancing reuse of constraint solutions to improve symbolic execution. In: Proc. ISSTA, pp. 177–187. ACM (2015). https://doi.org/10.1145/2771783.2771806
Kawaguchi, M., Lahiri, S., Rebelo, H.: Conditional equivalence. Tech. rep., Microsoft Research (2010)
Lahiri, S.K., McMillan, K.L., Sharma, R., Hawblitzel, C.: Differential assertion checking. In: Proc. FSE, pp. 345–355. ACM (2013). https://doi.org/10.1145/2491411.2491452
Lahiri, S.K., Murawski, A., Strichman, O., Ulbrich, M.: Program Equivalence (Dagstuhl Seminar 18151). Dagstuhl Reports 8(4), 1–19 (2018). https://doi.org/10.4230/DagRep.8.4.1
Lahiri, S.K., Vaswani, K., Hoare, C.A.R.: Differential static analysis: Opportunities, applications, and challenges. In: Proc. FoSER, pp. 201–204. ACM (2010). https://doi.org/10.1145/1882362.1882405
Lauterburg, S., Sobeih, A., Marinov, D., Viswanathan, M.: Incremental state-space exploration for programs with dynamically allocated data. In: Proc. ICSE, pp. 291–300. ACM (2008). https://doi.org/10.1145/1368088.1368128
Leino, K.R.M., Wüstholz, V.: Fine-grained caching of verification results. In: Proc. CAV, LNCS, vol. 9206, pp. 380–397. Springer (2015). https://doi.org/10.1007/978-3-319-21690-4_22
McMillan, K.L.: Interpolation and SAT-based model checking. In: Proc. CAV, LNCS, vol. 2725, pp. 1–13. Springer (2003). https://doi.org/10.1007/978-3-540-45069-6_1
Mudduluru, R., Ramanathan, M.K.: Efficient incremental static analysis using path abstraction. In: Proc. FASE, LNCS, vol. 8411, pp. 125–139. Springer (2014). https://doi.org/10.1007/978-3-642-54804-8_9
Partush, N., Yahav, E.: Abstract semantic differencing for numerical programs. In: Proc. SAS, LNCS, vol. 7935, pp. 238–258. Springer (2013). https://doi.org/10.1007/978-3-642-38856-9_14
Person, S., Dwyer, M.B., Elbaum, S.G., Păsăreanu, C.S.: Differential symbolic execution. In: Proc. FSE, pp. 226–237. ACM (2008). https://doi.org/10.1145/1453101.1453131
Person, S., Yang, G., Rungta, N., Khurshid, S.: Directed incremental symbolic execution. In: Proc. PLDI, pp. 504–515. ACM (2011). https://doi.org/10.1145/1993498.1993558
Ramos, D.A., Engler, D.R.: Practical, low-effort equivalence verification of real code. In: Proc. CAV, LNCS, vol. 6806, pp. 669–685. Springer (2011). https://doi.org/10.1007/978-3-642-22110-1_55
Rothenberg, B., Dietsch, D., Heizmann, M.: Incremental verification using trace abstraction. In: Proc. SAS, LNCS, vol. 11002, pp. 364–382. Springer (2018). https://doi.org/10.1007/978-3-319-99725-4_22
Rungta, N., Person, S., Branchaud, J.: A change impact analysis to characterize evolving program behaviors. In: Proc. ICSM, pp. 109–118. IEEE (2012). https://doi.org/10.1109/ICSM.2012.6405261
Ryder, B.G.: Incremental data-flow analysis. In: Proc. POPL, pp. 167–176. ACM (1983). https://doi.org/10.1145/567067.567084
Seidl, H., Erhard, J., Vogler, R.: Incremental abstract interpretation. In: From Lambda Calculus to Cybersecurity Through Program Analysis – Essays Dedicated to Chris Hankin on the Occasion of His Retirement, LNCS, vol. 12065, pp. 132–148. Springer (2020). https://doi.org/10.1007/978-3-030-41103-9_5
Sery, O., Fedyukovich, G., Sharygina, N.: Incremental upgrade checking by means of interpolation-based function summaries. In: Proc. FMCAD, pp. 114–121. FMCAD Inc. (2012)
Sokolsky, O.V., Smolka, S.A.: Incremental model checking in the modal mu-calculus. In: Proc. CAV, LNCS, vol. 818, pp. 351–363. Springer (1994). https://doi.org/10.1007/3-540-58179-0_67
Strichman, O., Godlin, B.: Regression verification – a practical way to verify programs. In: Proc. VSTTE, LNCS, vol. 4171, pp. 496–501. Springer (2008). https://doi.org/10.1007/978-3-540-69149-5_54
Strichman, O., Veitsman, M.: Regression verification for unbalanced recursive functions. In: Proc. FM, LNCS, vol. 9995, pp. 645–658 (2016). https://doi.org/10.1007/978-3-319-48989-6_39
Szabó, T., Bergmann, G., Erdweg, S., Voelter, M.: Incrementalizing lattice-based program analyses in Datalog. PACMPL 2(OOPSLA), 139:1–139:29 (2018). https://doi.org/10.1145/3276509
Szabó, T., Erdweg, S., Voelter, M.: IncA: A DSL for the definition of incremental program analyses. In: Proc. ASE, pp. 320–331. ACM (2016). https://doi.org/10.1145/2970276.2970298
Visser, W., Geldenhuys, J., Dwyer, M.B.: Green: Reducing, reusing, and recycling constraints in program analysis. In: Proc. FSE, pp. 58:1–58:11. ACM (2012). https://doi.org/10.1145/2393596.2393665
Yang, G., Dwyer, M.B., Rothermel, G.: Regression model checking. In: Proc. ICSM, pp. 115–124. IEEE (2009). https://doi.org/10.1109/ICSM.2009.5306334
Yang, G., Păsăreanu, C.S., Khurshid, S.: Memoized symbolic execution. In: Proc. ISSTA, pp. 144–154. ACM (2012). https://doi.org/10.1145/2338965.2336771
Yoo, S., Harman, M.: Regression testing minimization, selection, and prioritization: A survey. STVR 22(2), 67–120 (2012). https://onlinelibrary.wiley.com/doi/abs/10.1002/stvr.430
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2020 The Author(s)
Cite this paper
Beyer, D., Jakobs, M.-C., Lemberger, T. (2020). Difference Verification with Conditions. In: de Boer, F., Cerone, A. (eds.) Software Engineering and Formal Methods. SEFM 2020. Lecture Notes in Computer Science, vol. 12310. Springer, Cham. https://doi.org/10.1007/978-3-030-58768-0_8
Print ISBN: 978-3-030-58767-3
Online ISBN: 978-3-030-58768-0