Abstract
Witness validation is an important technique to increase trust in verification results: descriptions of error paths (violation witnesses) and important parts of the correctness proof (correctness witnesses) are made available in an exchangeable format, so that the verification result can be validated independently of the verification in a second step. The problem is that there are unfortunately not many tools available for witness-based validation of verification results. We contribute to closing this gap with the approach of validation via verification, which is a way to automatically construct a set of validators from a set of existing verification engines. The idea is to take as input a specification, a program, and a verification witness, and to produce a new specification and a transformed version of the original program such that the transformed program satisfies the new specification if the witness is useful to confirm the result of the verification. An 'off-the-shelf' verifier can then be used to validate the previously computed result (as witnessed by the verification witness) via an ordinary verification task. We have implemented our approach in the validator MetaVal, which was successfully used in SV-COMP 2020 and confirmed 3 653 violation witnesses and 16 376 correctness witnesses. The results show that MetaVal improves the effectiveness (167 uniquely confirmed violation witnesses and 833 uniquely confirmed correctness witnesses) of the overall validation process on a large benchmark set. All components and experimental data are publicly available.
This work was funded by the Deutsche Forschungsgemeinschaft (DFG) – 378803395.
Keywords
 Computer-aided verification
 Software verification
 Program analysis
 Software model checking
 Certification
 Verification witnesses
 Validation of verification results
 Reducer
1 Introduction
Formal software verification is becoming more and more important in the development process for software systems of all types, and many verification tools are available to perform verification [4]. One of the open problems that was addressed only recently is the topic of results validation [10, 11, 12, 37]: The verification work is often done by untrusted verification engines, on untrusted computing infrastructure, or even on approximating computation systems, and static-analysis tools suffer from false positives, which engineers in practice dislike because they are tedious to refute [20]. Therefore, it is necessary to validate verification results, ideally with an independent verification engine that is unlikely to share the weaknesses of the original verifier. Witnesses also serve as an interface to the verification engine, in order to overcome integration problems [1].
The idea to witness the correctness of a program by annotating it with assertions is as old as programming [38], and from the beginning of model checking, the need to witness counterexamples was felt [21]. Certifying algorithms [30] do not only compute a solution but also produce a witness that a computationally much less expensive checker can use to (re)establish the correctness of the solution. In software verification, witnesses became standardized^{Footnote 1} and exchangeable about five years ago [10, 11]. Meanwhile, exchangeable witnesses can also be used for deriving tests from witnesses [12], such that an engineer can additionally study an error report with a debugger. The ultimate goal of this direction of research is to obtain witnesses that are certificates and can be checked by a fully trusted validator based on trusted theorem provers, such as Coq and Isabelle, as is done already for computational models that are 'easier' than C programs [40].
Yet, although considered very useful, not many witness validators are available. For example, the most recent competition on software verification (SV-COMP 2020)^{Footnote 2} showcases 28 software verifiers but only 6 witness validators. Two were published in 2015 [11], two more in 2018 [12], the fifth in 2020 [37], and the sixth is MetaVal, which we describe here. Witness validation is an interesting problem to work on, and there is a large, yet unexplored field of opportunities. It involves many different techniques from program analysis and model checking. However, it seems that this also requires a lot of engineering effort.
Our solution validation via verification is a construction that takes as input an off-the-shelf software verifier and a new program transformer, and composes a witness validator in the following way (see Fig. 1): First, the transformer takes the original input program and transforms it into a new program. In the case of a violation witness, which describes a path through the program to a specific program location, we transform the program such that all parts that the witness marks as unnecessary for the path are pruned. This is similar to the reducer for a condition in reducer-based conditional model checking [14]. In the case of a correctness witness, which describes invariants that can be used in a correctness proof, we transform the program such that the invariants are asserted (to check that they really hold) and assumed (to use them in a reconstructed correctness proof). A standard verification engine is then asked to verify that (1) the transformed program contains a feasible path that violates the original specification (violation witness) or (2) the transformed program satisfies the original specification and all assertions added to the program hold (correctness witness).
MetaVal is an implementation of this concept. It performs the transformation according to the witness type and specification, and can be configured to use any of the available software verifiers^{Footnote 3} as verification backend.
Contributions. MetaVal provides several important benefits:

The program transformer was a one-time effort and is available from now on.

Any existing standard verifier can be used as verification backend.

Once a new verification technology becomes available in a verification tool, it can immediately be turned into a validator using our new construction.

Technology bias can be avoided by complementing the verifier by a validator that is based on a different technology.

Selecting the strongest verifiers (e.g., by looking at competition results) can lead to strong validators.

All data and software that we describe are publicly available (see Sect. 6).
2 Preliminaries
For the theoretical part, we have to establish a common ground for the concepts of verification witnesses [10, 11] and reducers [14]. In both cases, programs are represented as control-flow automata (CFAs). A control-flow automaton \(C=(L,l_0,G)\) consists of a set L of control locations, an initial location \({l_0 \in L}\), and a set \(G\subseteq L \times Ops \times L\) of control-flow edges that are labeled with the operations in the program. In the mentioned literature on witnesses and reducers, a simple programming language is used in which operations are either assignments or assumptions over integer variables. Operations \(op \in Ops\) in such a language can be represented by formulas in first-order logic over the sets V and \(V'\) of program variables before and after the transition, which we denote by \(op(V,V')\). In order to simplify our construction later on, we also allow mixed operations of the form \(f(V) \wedge \left( x' = g(V)\right) \) that combine an assumption with an assignment, which would otherwise be represented as an assumption followed by an assignment operation.
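As a concrete illustration, a CFA over this simple language can be represented directly as a set of labeled edges. The following Python sketch is our own illustration (the toy program and all names are hypothetical, not from the paper); operations are kept as opaque label strings rather than logical formulas:

```python
# Minimal sketch of a control-flow automaton C = (L, l0, G).
# Locations are ints; each edge (l, op, l') carries an operation label,
# here an opaque string standing in for the formula op(V, V').

from typing import NamedTuple

class CFA(NamedTuple):
    locations: frozenset  # L
    initial: int          # l0
    edges: frozenset      # G, a set of (l, op, l') triples

# Hypothetical toy program:  x = 0; while (x < 2) x = x + 1;
cfa = CFA(
    locations=frozenset({0, 1, 2}),
    initial=0,
    edges=frozenset({
        (0, "x' = 0", 1),               # assignment
        (1, "x < 2 & x' = x + 1", 1),   # mixed assumption + assignment
        (1, "!(x < 2)", 2),             # assumption (loop exit)
    }),
)
```

The mixed edge in the loop shows why the combined form \(f(V) \wedge (x' = g(V))\) is convenient: one edge replaces an assumption edge followed by an assignment edge.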
The conversion from source code into a CFA and vice versa is straightforward, provided that the CFA is deterministic. A CFA is called deterministic if, whenever there are multiple outgoing CFA edges from a location l, the assumptions on those edges are mutually exclusive (but not necessarily exhaustive).
Since our goal is to validate (i.e., prove or falsify) the statement that a program fulfills a certain specification, we additionally need to model the property to be verified. For properties that can be translated into non-reachability, this can be done by defining a set \(T \subseteq L\) of target locations that shall not be reached. For the example program in Fig. 2, we want to verify that the call in line 10 is not reachable. In the corresponding CFA in Fig. 3, this is represented by the reachability of the location labeled with 10. Depending on whether or not a verifier accounts for the overflow in this example program, it will consider the program either safe or unsafe, which makes it a perfect example to illustrate both correctness and violation witnesses.
In order to reason about the soundness of our approach, we also need to formalize the program semantics. This is done using the concept of concrete data states. A concrete data state is a mapping from the set V of program variables to their domain \(\mathbb {Z}\), and a concrete state is a pair of a control location and a concrete data state. A concrete program path is then defined as a sequence \(\pi = (c_0,l_0)\xrightarrow {g_1}\dots \xrightarrow {g_n}(c_n,l_n)\) where \(c_0\) is the initial concrete data state, \(g_i = (l_{i-1}, op_i, l_i) \in G\), and \(c_{i-1}(V),c_i(V') \vDash op_i\). A concrete execution \(ex(\pi )\) is derived from a path \(\pi \) by only looking at the sequence \((c_0,l_0)\dots (c_n,l_n)\) of concrete states from the path. Note that we deviate here from the definition given in [14], where concrete executions do not contain information about the program locations. This is necessary here because we want to reason about the concrete executions that fulfill a given non-reachability specification, i.e., that never reach certain locations in the original program.
Witnesses are formalized using the concept of protocol automata [11]. A protocol automaton \(W=(Q,\varSigma ,\delta ,q_0,F)\) consists of a set Q of states, a set of transition labels \(\varSigma = 2^G \times \varPhi \), a transition relation \(\delta \subseteq Q \times \varSigma \times Q\), an initial state \(q_0\), and a set \(F \subseteq Q\) of final states. A state is a pair that consists of a name to identify the state and a predicate over the program variables V to represent the state invariant.^{Footnote 4} A transition label is a pair that consists of a subset of control-flow edges and a predicate over the program variables V to represent the guard condition for the transition to be taken. An observer automaton [11, 13, 32, 34, 36] is a protocol automaton that does not restrict the state space, i.e., for each state \(q \in Q\), the disjunction of the guard conditions of all outgoing transitions is a tautology. Violation witnesses are represented by protocol automata in which all state invariants are true. Correctness witnesses are represented by observer automata in which the set of final states is empty.
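The two witness kinds can be distinguished purely by these structural conditions. The following Python sketch (our own, with invariants kept as strings; it does not check the tautology condition for observer automata, which would require a solver) classifies a protocol automaton accordingly:

```python
# Classify a protocol automaton per the definitions above:
# - correctness witness: an observer automaton with no final states;
# - violation witness: all state invariants are trivially "true".
# Invariants are opaque strings here, not real predicates.

def witness_kind(state_invariants, final_states):
    if not final_states:
        return "correctness"
    if all(inv == "true" for inv in state_invariants.values()):
        return "violation"
    return "general protocol automaton"
```

For example, an automaton with an accepting error state and only trivial invariants is a violation witness, while one with invariants but an empty set of final states is a correctness witness.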
3 Approach
3.1 From Witnesses to Programs
When given a CFA \(C = (L,l_0,G)\), a specification \(T \subseteq L\), and a witness automaton \(W = (Q,\varSigma ,\delta ,q_0,F)\), we can construct a product automaton \({A_{C\times W} =(L\times Q,(l_0,q_0),\varGamma ,T \times F)}\) where \(\varGamma \subseteq (L\times Q)\times (Ops\times \varPhi ) \times (L \times Q)\). The new transition relation \(\varGamma \) is defined by allowing for each transition g in the CFA only those transitions \((S,\varphi )\) from the witness where \(g \in S\) holds:

$$\varGamma = \left\{ \big ((l,q),(op,\varphi ),(l',q')\big ) \mid g = (l,op,l') \in G,\ \big (q,(S,\varphi ),q'\big ) \in \delta ,\ g \in S \right\} $$
We can now define the semantics of a witness by looking at the paths in the product automaton and mapping them to concrete executions in the original program. A path of the product automaton \(A_{C\times W}\) is a sequence \({(l_0,q_0)\xrightarrow {\alpha _0}\dots \xrightarrow {\alpha _{n-1}} (l_n,q_n)}\) such that \({\big ((l_i,q_i),\alpha _i,(l_{i+1},q_{i+1})\big ) \in \varGamma }\) and \({\alpha _i=(op_i,\varphi _i)}\).
It is evident that the automaton \(A_{C \times W}\) can easily be mapped to a new program \(C_{C\times W}\) by reducing the pair \((op,\varphi )\) in its transition relation to an operation \(\overline{op}\). In case op is a pure assumption of the form f(V), then \(\overline{op}\) will simply be \(f(V) \wedge \varphi (V)\). If op is an assignment of the form \(f(V) \wedge \left( x' = g(V)\right) \), then \(\overline{op}\) will be \((f(V)\wedge \varphi (V)) \wedge \left( x' = g(V)\right) \). This construction has the drawback that the resulting CFA might be nondeterministic, but this is actually not a problem when the corresponding program is only used for verification. The nondeterminism can be expressed in the source code by using nondeterministic values, which are already formalized by the community and established in the SV-COMP rules, and therefore supported by all participating verifiers. The concrete executions of \(C_{C\times W}\) can be identified with concrete executions of C by projecting their pairs (l, q) to their first element. Let \(proj_C(ex(C_{C\times W}))\) denote the set of concrete executions that is derived this way. Due to how the relation \(\varGamma \) of \(A_{C \times W}\) is constructed, it is guaranteed that this is a subset of the executions of C, i.e., \(proj_C(ex(C_{C\times W})) \subseteq ex(C)\). In this respect, the witness acts in very much the same way as a reducer [14], and the reduction of the search space is also one of the desired properties of a validator for violation witnesses.
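The construction of \(\varGamma \) can be sketched in a few lines. The following Python fragment is our own illustration (operations and guards are opaque strings, and the example edges are hypothetical): it pairs each CFA edge g with exactly those witness transitions \((S,\varphi )\) that contain g:

```python
# Sketch of the product transition relation of A_{C×W}.
# A CFA edge is (l, op, l'); a witness transition is (q, (S, phi), q'),
# where S is a set of CFA edges and phi a guard predicate (a string here).

def product_edges(cfa_edges, witness_delta):
    gamma = set()
    for (l, op, l2) in cfa_edges:
        for (q, (S, phi), q2) in witness_delta:
            if (l, op, l2) in S:  # witness transition covers this CFA edge
                gamma.add(((l, q), (op, phi), (l2, q2)))
    return gamma

cfa_edges = {(0, "x' = 0", 1), (1, "x < 2", 2)}
delta = [
    (0, (frozenset({(0, "x' = 0", 1)}), "true"), 1),
    (1, (frozenset({(1, "x < 2", 2)}), "x == 0"), 1),
]
gamma = product_edges(cfa_edges, delta)
# gamma pairs each edge with the matching witness guard, e.g.
# ((0, 0), ("x' = 0", "true"), (1, 1))
```

Mapping each resulting pair \((op,\varphi )\) back to a single operation \(\overline{op}\), as described above, then yields the source code of the new program.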
3.2 Programs from Violation Witnesses
For explaining the validation of results based on a violation witness, we consider the witness in Fig. 4 for our example C program in Fig. 2. The program \(C_{C\times W_V}\) that results from the product automaton \(A_{C\times W_V}\) in Fig. 5 can be passed to a verifier. If this verification finds an execution that reaches a specification violation, then this violation is guaranteed to also be present in the original program. There is, however, one caveat: In the example in Fig. 5, a reachable state \((10,q_0)\) at program location 10 (i.e., a state that violates the specification) can be found that is not marked as an accepting state in the witness automaton \(W_V\). For a strict version of witness validation, we can remove all states that are in \(T\times Q\) but not in \(T \times F\) from the product automaton, and thus, from the generated program. This ensures that if the verifier finds a violation in the generated program, the witness automaton also accepts the found error path. The version of MetaVal that was used in SV-COMP 2020 did not yet support strict witness validation.
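The strict-validation filter described above can be sketched as a simple set comprehension over product states. This Python fragment is our own simplified illustration (it works on an explicit state set, whereas an implementation would prune the product automaton):

```python
# Sketch of strict violation-witness validation: drop every product
# state (l, q) whose location l is a target (l ∈ T) but whose witness
# state q is not accepting (q ∉ F). The verifier then can only report
# violations that the witness automaton actually accepts.

def strict_filter(states, targets, accepting):
    return {(l, q) for (l, q) in states
            if not (l in targets and q not in accepting)}

states = {(10, "q0"), (10, "qE"), (3, "q0")}
kept = strict_filter(states, targets={10}, accepting={"qE"})
# (10, "q0") is pruned; (10, "qE") and (3, "q0") remain
```

In the example of Fig. 5, this would remove the spurious reachable state \((10,q_0)\) while keeping the accepting violation states.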
3.3 Programs from Correctness Witnesses
Correctness witnesses are represented by observer automata. Figure 6 shows a potential correctness witness \(W_C\) for our example program C in Fig. 2, where the invariants are annotated in bold font next to the corresponding state. The construction of the product automaton \(A_{C\times W_C}\) in Fig. 7 is a first step towards reestablishing the proof of correctness: the product states tell us to which control locations of the CFA for the program the invariants from the witness belong.
The idea of a result validator for correctness witnesses is to

1. check the invariants in the witness and

2. use the invariants to establish that the original specification holds.
We can achieve the second goal by extracting the invariants from each state in the product automaton \(A_{C\times W_C}\) and adding them as conditions to all edges by which the state can be reached. This is semantically equivalent to assuming that the invariants hold at the state and potentially makes the consecutive proof easier. For soundness, we also need to ensure the first goal. To achieve that, we add a transition into a (new) accepting state whenever we transition into a state q and the invariant of q does not hold, and we add self-loops such that the automaton stays in the new accepting state forever. In sum, for each invariant, there are two transitions: one with the invariant as guard (to assume that the invariant holds) and one with the negation of the invariant as guard (to assert that the invariant holds, going to an accepting (error) state if it does not hold). This transformation ensures that the resulting automaton is still a proper observer automaton.
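The assume/assert instrumentation described above can be sketched on the edge level. The following Python fragment is our own simplified illustration (guards and operations are strings, and `ERR` is a hypothetical fresh error state, not MetaVal's actual representation):

```python
# Sketch of correctness-witness instrumentation: for each edge entering
# a state with invariant inv, emit one edge that assumes inv and one
# edge that asserts it, i.e., jumps to a fresh error state when inv is
# violated. The error state has a self-loop so it is never left.

ERR = ("err", "err")  # hypothetical fresh accepting (error) state

def instrument(edges, invariant_of):
    out = set()
    for (src, op, dst) in edges:
        inv = invariant_of.get(dst, "true")
        out.add((src, f"{op} & ({inv})", dst))       # assume the invariant
        if inv != "true":
            out.add((src, f"{op} & !({inv})", ERR))  # assert the invariant
    out.add((ERR, "true", ERR))                      # stay in error state
    return out

edges = {("a", "x' = x + 1", "b")}
new_edges = instrument(edges, {"b": "x >= 0"})
```

Because the guards of the two emitted edges cover both \(inv\) and \(\lnot inv\), the disjunction of outgoing guards remains a tautology, which is why the result is still an observer automaton.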
4 Evaluation
This section describes the results obtained in the 9th Competition on Software Verification (SV-COMP 2020), in which MetaVal participated as a validator. We did not perform a separate evaluation because the results of SV-COMP are complete, accurate, and reproducible; all data and tools are publicly available for inspection and replication studies (see data availability in Sect. 6).
4.1 Experimental Setup
Execution Environment. In SV-COMP 2020, the validators were executed in a benchmark environment that uses a cluster of 168 machines, each with an Intel Xeon E3-1230 v5 CPU with 8 processing units, 33 GB of RAM, and the GNU/Linux operating system Ubuntu 18.04. Each validation run was limited to 2 processing units and 7 GB of RAM, in order to allow up to 4 validation runs to be executed on the same machine at the same time. The time limit for a validation run was set to 15 min for correctness witnesses and to 90 s for violation witnesses. The benchmarking framework BenchExec 2.5.1 was used to ensure that the different runs do not influence each other and that the resource limits are measured and enforced reliably [15]. The exact information to replicate the runs of SV-COMP 2020 can be found in Sect. 3 of the competition report [4].
Benchmark Tasks. The verification tasks^{Footnote 5} of SV-COMP can be partitioned w.r.t. their specification into ReachSafety, MemSafety, NoOverflows, and Termination. Validators can be configured using different options for each specification.
Validator Configuration. Since our architecture (cf. Fig. 1) allows a wide range of verifiers to be used for validation, there are many interesting configurations for constructing a validator. Exploring all of these in order to find the best configuration, however, would require significant computational resources, and would also be susceptible to overfitting. Instead, we chose a heuristic based on the results of the competition from the previous year, i.e., SV-COMP 2019 [3]. The idea is that a verifier that performed well at verifying tasks for a specific specification is also a promising candidate for validating results for that specification. Therefore, the configuration of our validator uses CPAchecker as verifier for tasks with specification ReachSafety, UAutomizer for NoOverflow and Termination, and Symbiotic for MemSafety.
4.2 Results
The results of the validation phase in SV-COMP 2020 [5] are summarized in Table 1 (for violation witnesses) and Table 2 (for correctness witnesses). For each specification, MetaVal was able not only to confirm a large number of results that were also validated by other tools, but also to confirm results that were not previously validated by any of the other tools.^{Footnote 6}
For violation witnesses, we can observe that MetaVal confirms significantly fewer witnesses than the other validators. This can be explained partially by the restrictive time limit of 90 s. Our approach not only adds overhead when generating the program from the witness, but this new program can also be harder to parse and analyze for the verifier we use in the backend. It is also the case that the verifiers we use in MetaVal are not tuned for such a short time limit, as a verifier in the competition always gets the full 15 min. For specification ReachSafety, for example, we use CPAchecker, which starts with a very simple analysis and switches verification strategies after a fixed time that happens to be also 90 s. So in this case we never benefit from the more sophisticated strategies that CPAchecker offers.
For validation of correctness witnesses, where the time limit is higher, this effect is less noticeable, such that the number of results confirmed by MetaVal is more in line with the numbers achieved by the other validators. For specification MemSafety, MetaVal even confirms more correctness witnesses than the only other available validator. This indicates that Symbiotic was a good choice in our configuration for that specification: Symbiotic generally performs much better in the verification of MemSafety tasks, so this result was expected.
Before the introduction of MetaVal, there was only one validator for correctness witnesses in the categories NoOverflow and MemSafety, while constructing a validator for those categories with our approach did not require any additional development effort.
5 Related Work
Programs from Proofs. Our approach for generating programs can be seen as a variant of the Programs from Proofs (PfP) framework [27, 41]. Both generate programs from an abstract reachability graph of the original program. The difference is that PfP tries to remove all specification violations from the graph, while we just encode them into the generated program as violations of the standard reachability property. We do this for the original specification and for the invariants in the witness, which we treat as additional specifications.
Automata-Based Software Model Checking. Our approach is also similar to that of the validator CPAchecker [10]. For violation witnesses, it also constructs the product of CFA and witness. For correctness witnesses, it instruments the invariants directly into the CFA of the program (see [10], Sect. 4.2) and passes the result to its verification engine, while MetaVal constructs the product of CFA and witness, and applies a similar instrumentation. In both cases, MetaVal's transformer produces a C program, which can be passed to an independent verifier.
Reducer-Based Conditional Model Checking. The concept of generating programs from an ARG has also been used to successfully construct conditional verifiers [14]. Our approach for correctness witnesses can be seen as a special case of this technique, where MetaVal acts as an initial verifier that does not try to reduce the search space and instead just instruments the invariants from the correctness witness into the program as an additional specification.
Verification Artifacts and Interfacing. The problem that verification results are not treated well enough by the developers of verification tools is known [1] and there are also other works that address the same problem, for example, the work on execution reports [19] or on cooperative verification [17].
Test-Case Generation. The idea to generate test cases from verification counterexamples is more than ten years old [8, 39]; it has since been used to create debuggable executables [31, 33], and was extended and combined into various successful automatic test-case generation approaches [24, 25, 29, 35].
Execution. Other approaches [18, 22, 28] focus on creating tests from concrete and tool-specific counterexamples. In contrast, witness validation does not require full counterexamples, but works on more flexible, possibly abstract, violation witnesses from a wide range of verification tools.
Debugging and Visualization. Besides executing a test, it is important to understand the cause of the error path [23], and there are tools and methods to debug and visualize program paths [2, 9, 26].
6 Conclusion
We address the problem of constructing a tool for witness validation in a systematic and generic way: We developed the concept of validation via verification, a two-step approach that first applies a program transformation and then applies an off-the-shelf verification tool, without additional development effort.
The concept is implemented in the witness validator MetaVal, which has already been successfully used in SV-COMP 2020. The validation results are impressive: the new validator enriches the competition's validation capabilities by 164 uniquely confirmed violation results and 834 uniquely confirmed correctness results, based on the witnesses provided by the verifiers. This paper does not contain its own evaluation, but refers to results from the recent competition in the field.
The major benefit of our concept is that it is now possible to configure a spectrum of validators with different strengths, based on different verification engines. The ‘time to market’ of new verification technology into validators is negligibly small because there is no development effort necessary to construct new validators from new verifiers. A potential technology bias is also reduced.
Data Availability Statement. All data from SV-COMP 2020 are publicly available: witnesses [7], verification and validation results as well as log files [5], and benchmark programs and specifications [6]^{Footnote 7}. The validation statistics in Tables 1 and 2 are available in the archive [5] and on the SV-COMP website^{Footnote 8}. MetaVal 1.0 is available on GitLab^{Footnote 9} and in our AEC-approved virtual machine [16].
Notes
 1.
Latest version of the standardized witness format: https://github.com/sosy-lab/sv-witnesses
 4.
These invariants are the central piece of information in correctness witnesses. While invariants that prove a program correct can be hard to come up with, they are usually easier to check.
 6.
In the statistics, a witness is only counted as confirmed if the verifier correctly stated whether the input program satisfies the respective specification.
References
Alglave, J., Donaldson, A.F., Kroening, D., Tautschnig, M.: Making software verification tools really work. In: Proc. ATVA, LNCS, vol. 6996, pp. 28–42. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-24372-1_3
Artho, C., Havelund, K., Honiden, S.: Visualization of concurrent program executions. In: Proc. COMPSAC, pp. 541–546. IEEE (2007). https://doi.org/10.1109/COMPSAC.2007.236
Beyer, D.: Automatic verification of C and Java programs: SV-COMP 2019. In: Proc. TACAS (3), LNCS, vol. 11429, pp. 133–155. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-17502-3_9
Beyer, D.: Advances in automatic software verification: SV-COMP 2020. In: Proc. TACAS (2), LNCS, vol. 12079, pp. 347–367. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-45237-7_21
Beyer, D.: Results of the 9th International Competition on Software Verification (SV-COMP 2020). Zenodo (2020). https://doi.org/10.5281/zenodo.3630205
Beyer, D.: SV-Benchmarks: Benchmark set of 9th Intl. Competition on Software Verification (SV-COMP 2020). Zenodo (2020). https://doi.org/10.5281/zenodo.3633334
Beyer, D.: Verification witnesses from SV-COMP 2020 verification tools. Zenodo (2020). https://doi.org/10.5281/zenodo.3630188
Beyer, D., Chlipala, A.J., Henzinger, T.A., Jhala, R., Majumdar, R.: Generating tests from counterexamples. In: Proc. ICSE, pp. 326–335. IEEE (2004). https://doi.org/10.1109/ICSE.2004.1317455
Beyer, D., Dangl, M.: Verification-aided debugging: An interactive web-service for exploring error witnesses. In: Proc. CAV (2), LNCS, vol. 9780, pp. 502–509. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-41540-6_28
Beyer, D., Dangl, M., Dietsch, D., Heizmann, M.: Correctness witnesses: Exchanging verification results between verifiers. In: Proc. FSE, pp. 326–337. ACM (2016). https://doi.org/10.1145/2950290.2950351
Beyer, D., Dangl, M., Dietsch, D., Heizmann, M., Stahlbauer, A.: Witness validation and stepwise testification across software verifiers. In: Proc. FSE, pp. 721–733. ACM (2015). https://doi.org/10.1145/2786805.2786867
Beyer, D., Dangl, M., Lemberger, T., Tautschnig, M.: Tests from witnesses: Execution-based validation of verification results. In: Proc. TAP, LNCS, vol. 10889, pp. 3–23. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-92994-1_1
Beyer, D., Gulwani, S., Schmidt, D.: Combining model checking and data-flow analysis. In: Handbook of Model Checking, pp. 493–540. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-10575-8_16
Beyer, D., Jakobs, M.C., Lemberger, T., Wehrheim, H.: Reducer-based construction of conditional verifiers. In: Proc. ICSE, pp. 1182–1193. ACM (2018). https://doi.org/10.1145/3180155.3180259
Beyer, D., Löwe, S., Wendler, P.: Reliable benchmarking: Requirements and solutions. Int. J. Softw. Tools Technol. Transfer 21(1), 1–29 (2017). https://doi.org/10.1007/s10009-017-0469-y
Beyer, D., Spiessl, M.: Replication package (virtual machine) for article ‘MetaVal: Witness validation via verification’ in Proc. CAV 2020. Zenodo (2020). https://doi.org/10.5281/zenodo.3831417
Beyer, D., Wehrheim, H.: Verification artifacts in cooperative verification: Survey and unifying component framework. arXiv/CoRR 1905(08505), May 2019. https://arxiv.org/abs/1905.08505
Cadar, C., Ganesh, V., Pawlowski, P.M., Dill, D.L., Engler, D.R.: EXE: Automatically generating inputs of death. In: Proc. CCS, pp. 322–335. ACM (2006). https://doi.org/10.1145/1180405.1180445
Castaño, R., Braberman, V.A., Garbervetsky, D., Uchitel, S.: Model checker execution reports. In: Proc. ASE, pp. 200–205. IEEE (2017). https://doi.org/10.1109/ASE.2017.8115633
Christakis, M., Bird, C.: What developers want and need from program analysis: An empirical study. In: Proc. ASE, pp. 332–343. ACM (2016). https://doi.org/10.1145/2970276.2970347
Clarke, E.M., Grumberg, O., McMillan, K.L., Zhao, X.: Efficient generation of counterexamples and witnesses in symbolic model checking. In: Proc. DAC, pp. 427–432. ACM (1995). https://doi.org/10.1145/217474.217565
Csallner, C., Smaragdakis, Y.: Check ‘n’ crash: Combining static checking and testing. In: Proc. ICSE, pp. 422–431. ACM (2005). https://doi.org/10.1145/1062455.1062533
Ermis, E., Schäf, M., Wies, T.: Error invariants. In: Proc. FM, LNCS, vol. 7436, pp. 187–201. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-32759-9_17
Godefroid, P., Klarlund, N., Sen, K.: Dart: Directed automated random testing. In: Proc. PLDI, pp. 213–223. ACM (2005). https://doi.org/10.1145/1065010.1065036
Gulavani, B.S., Henzinger, T.A., Kannan, Y., Nori, A.V., Rajamani, S.K.: Synergy: A new algorithm for property checking. In: Proc. FSE, pp. 117–127. ACM (2006). https://doi.org/10.1145/1181775.1181790
Gunter, E.L., Peled, D.A.: Path exploration tool. In: Proc. TACAS, LNCS, vol. 1579, pp. 405–419. Springer, Heidelberg (1999). https://doi.org/10.1007/3-540-49059-0_28
Jakobs, M.C., Wehrheim, H.: Programs from proofs: A framework for the safe execution of untrusted software. ACM Trans. Program. Lang. Syst. 39(2), 7:1–7:56 (2017). https://doi.org/10.1145/3014427
Li, K., Reichenbach, C., Csallner, C., Smaragdakis, Y.: Residual investigation: Predictive and precise bug detection. In: Proc. ISSTA, pp. 298–308. ACM (2012). https://doi.org/10.1145/2338965.2336789
Majumdar, R., Sen, K.: Hybrid concolic testing. In: Proc. ICSE, pp. 416–426. IEEE (2007). https://doi.org/10.1109/ICSE.2007.41
McConnell, R.M., Mehlhorn, K., Näher, S., Schweitzer, P.: Certifying algorithms. Comput. Sci. Rev. 5(2), 119–161 (2011). https://doi.org/10.1016/j.cosrev.2010.09.009
Müller, P., Ruskiewicz, J.N.: Using debuggers to understand failed verification attempts. In: Proc. FM, LNCS, vol. 6664, pp. 73–87. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-21437-0_8
Plasil, F., Visnovsky, S.: Behavior protocols for software components. IEEE Trans. Software Eng. 28(11), 1056–1076 (2002). https://doi.org/10.1109/TSE.2002.1049404
Rocha, H., Barreto, R.S., Cordeiro, L.C., Neto, A.D.: Understanding programming bugs in ANSI-C software using bounded model checking counterexamples. In: Proc. IFM, LNCS, vol. 7321, pp. 128–142. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-30729-4_10
Schneider, F.B.: Enforceable security policies. ACM Trans. Inf. Syst. Secur. 3(1), 30–50 (2000). https://doi.org/10.1145/353323.353382
Sen, K., Marinov, D., Agha, G.: Cute: A concolic unit testing engine for C. In: Proc. FSE, pp. 263–272. ACM (2005). https://doi.org/10.1145/1081706.1081750
Šerý, O.: Enhanced property specification and verification in Blast. In: Proc. FASE, LNCS, vol. 5503, pp. 456–469. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-00593-0_32
Svejda, J., Berger, P., Katoen, J.P.: Interpretation-based violation witness validation for C: NitWit. In: Proc. TACAS, LNCS, vol. 12078, pp. 40–57. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-45190-5_3
Turing, A.: Checking a large routine. In: Report on a Conference on High Speed Automatic Calculating Machines, pp. 67–69. Cambridge Univ. Math. Lab. (1949)
Visser, W., Păsăreanu, C.S., Khurshid, S.: Test-input generation with Java PathFinder. In: Proc. ISSTA, pp. 97–107. ACM (2004). https://doi.org/10.1145/1007512.1007526
Wimmer, S., von Mutius, J.: Verified certification of reachability checking for timed automata. In: Proc. TACAS, LNCS, vol. 12078, pp. 425–443. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-45190-5_24
Wonisch, D., Schremmer, A., Wehrheim, H.: Programs from proofs: A PCC alternative. In: Proc. CAV, LNCS, vol. 8044, pp. 912–927. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39799-8_65
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2020 The Author(s)
Beyer, D., Spiessl, M. (2020). MetaVal: Witness Validation via Verification. In: Lahiri, S., Wang, C. (eds) Computer Aided Verification. CAV 2020. Lecture Notes in Computer Science, vol 12225. Springer, Cham. https://doi.org/10.1007/978-3-030-53291-8_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-53290-1
Online ISBN: 978-3-030-53291-8