
Advances in Automatic Software Verification: SV-COMP 2020

  • Dirk Beyer
Open Access
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12079)

Abstract

This report describes the 2020 Competition on Software Verification (SV-COMP), the 9th edition of a series of comparative evaluations of fully automatic software verifiers for C and Java programs. The competition provides a snapshot of the current state of the art in the area, and has a strong focus on replicability of its results. The competition was based on 11 052 verification tasks for C programs and 416 verification tasks for Java programs. Each verification task consisted of a program and a property (reachability, memory safety, overflows, termination). SV-COMP 2020 had 28 participating verification systems from 11 countries.
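As an illustration of the task format described above, each task pairs a program with a property to check. The minimal C sketch below is hypothetical (it is not drawn from the SV-COMP benchmark set), but it follows the reach_error()/__VERIFIER_nondet_* conventions used for reachability tasks: a verifier must decide whether the error function can be called for some input.

    /* Hypothetical reachability task in the SV-COMP style (not from the
       official sv-benchmarks suite). Property: reach_error() is never called. */
    extern void reach_error(void);                     /* marks the error location */
    extern unsigned int __VERIFIER_nondet_uint(void);  /* nondeterministic input */

    int main(void) {
      unsigned int x = __VERIFIER_nondet_uint();
      unsigned int y = x + 2u;
      if (y < x) {        /* holds only when the unsigned addition wraps around */
        reach_error();    /* reachable, e.g. for x = UINT_MAX, so the expected
                             verdict for this sketch would be "false" */
      }
      return 0;
    }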

Keywords

Formal Verification · Program Analysis · Competition


Copyright information

© The Author(s) 2020

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  1. LMU Munich, Munich, Germany
