
Second Competition on Software Testing: Test-Comp 2020

Part of the Lecture Notes in Computer Science book series (LNTCS, volume 12076)

Abstract

This report describes the 2020 Competition on Software Testing (Test-Comp), the 2nd edition of a series of comparative evaluations of fully automatic software test-case generators for C programs. The competition provides a snapshot of the current state of the art in the area and has a strong focus on the replicability of its results. The competition was based on 3 230 test tasks for C programs. Each test task consisted of a program and a test specification (error coverage or branch coverage). Test-Comp 2020 had 10 participating test-generation systems.
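
The test specification is handed to each test generator as a property file. The following is a sketch of the two kinds of coverage properties, written as FQL-based coverage queries; the authoritative property files ship with the competition's benchmark repository, so treat the exact syntax here as an assumption. For error coverage, the goal is to cover a call to the error function reach_error:

    COVER( init(main()), FQL(COVER EDGES(@CALL(reach_error))) )

For branch coverage, the goal is to cover all branching edges of the program:

    COVER( init(main()), FQL(COVER EDGES(@DECISIONEDGE)) )

The generated test suites are exchanged in an XML-based format and validated by executing them against the program with TestCov. A minimal sketch of a single test case in that exchange format, assuming version 1.1 of the sosy-lab test-format DTD and placeholder input values:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE testcase PUBLIC "+//IDN sosy-lab.org//DTD test-format testcase 1.1//EN"
        "https://sosy-lab.org/test-format/testcase-1.1.dtd">
    <testcase>
      <!-- one input element per call to a __VERIFIER_nondet_* function,
           in the order in which the calls occur during execution -->
      <input>42</input>
      <input>-7</input>
    </testcase>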

Keywords

  • Software Testing
  • Test-Case Generation
  • Competition
  • Software Analysis
  • Software Validation
  • Test Validation
  • Test-Comp
  • Benchmarking
  • Test Coverage
  • Bug Finding
  • BenchExec
  • TestCov


Author information


Correspondence to Dirk Beyer.


Rights and permissions

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.


Copyright information

© 2020 The Author(s)

About this paper


Cite this paper

Beyer, D. (2020). Second Competition on Software Testing: Test-Comp 2020. In: Wehrheim, H., Cabot, J. (eds) Fundamental Approaches to Software Engineering. FASE 2020. Lecture Notes in Computer Science, vol 12076. Springer, Cham. https://doi.org/10.1007/978-3-030-45234-6_25


  • DOI: https://doi.org/10.1007/978-3-030-45234-6_25


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-45233-9

  • Online ISBN: 978-3-030-45234-6

  • eBook Packages: Computer Science, Computer Science (R0)