
Status Report on Software Testing: Test-Comp 2021

Part of the Lecture Notes in Computer Science book series (LNTCS, volume 12649)

Abstract

This report describes Test-Comp 2021, the 3rd edition of the Competition on Software Testing. The competition is a series of annual comparative evaluations of fully automatic software test generators for C programs. The competition has a strong focus on the reproducibility of its results, and its main goal is to provide an overview of the current state of the art in automatic test generation. The competition was based on 3 173 test-generation tasks for C programs. Each test-generation task consisted of a program and a test specification (error coverage, branch coverage). Test-Comp 2021 had 11 participating test generators from 6 countries.
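For concreteness, the two kinds of test specification mentioned above are expressed as FQL coverage properties in the competition's benchmark repository (SV-Benchmarks). The following sketch shows their usual form; the exact file names follow the repository's conventions and should be treated as assumptions here, not as definitions from this report.

Error coverage (a test suite must reach a call to the function reach_error; file coverage-error-call.prp):

```
COVER( init(main()), FQL(COVER EDGES(@CALL(reach_error))) )
```

Branch coverage (a test suite should cover all branches of the program; file coverage-branches.prp):

```
COVER( init(main()), FQL(COVER EDGES(@DECISIONEDGE)) )
```

A test generator receives a C program together with one such property and produces a test suite in the competition's exchange format, which the validator TestCov then executes to measure the achieved coverage.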

Keywords

  • Software Testing
  • Test-Case Generation
  • Competition
  • Program Analysis
  • Software Validation
  • Software Bugs
  • Test Validation
  • Test-Comp
  • Benchmarking
  • Test Coverage
  • Bug Finding
  • Test-Suites
  • BenchExec
  • TestCov

This report extends previous reports on Test-Comp [4,5,6].

Reproduction packages are available on Zenodo (see Table 3).

Funded in part by the Deutsche Forschungsgemeinschaft (DFG) – 418257054 (Coop).

References

  1. Alshmrany, K., Menezes, R., Gadelha, M., Cordeiro, L.: FuSeBMC: A white-box fuzzer for finding security vulnerabilities in C programs (competition contribution). In: Proc. FASE. LNCS 12649, Springer (2021)


  2. Bartocci, E., Beyer, D., Black, P.E., Fedyukovich, G., Garavel, H., Hartmanns, A., Huisman, M., Kordon, F., Nagele, J., Sighireanu, M., Steffen, B., Suda, M., Sutcliffe, G., Weber, T., Yamada, A.: TOOLympics 2019: An overview of competitions in formal methods. In: Proc. TACAS (3). pp. 3–24. LNCS 11429, Springer (2019). https://doi.org/10.1007/978-3-030-17502-3_1

  3. Beyer, D.: Second competition on software verification (Summary of SV-COMP 2013). In: Proc. TACAS. pp. 594–609. LNCS 7795, Springer (2013). https://doi.org/10.1007/978-3-642-36742-7_43

  4. Beyer, D.: Competition on software testing (Test-Comp). In: Proc. TACAS (3). pp. 167–175. LNCS 11429, Springer (2019). https://doi.org/10.1007/978-3-030-17502-3_11

  5. Beyer, D.: Second competition on software testing: Test-Comp 2020. In: Proc. FASE. pp. 505–519. LNCS 12076, Springer (2020). https://doi.org/10.1007/978-3-030-45234-6_25

  6. Beyer, D.: First international competition on software testing (Test-Comp 2019). Int. J. Softw. Tools Technol. Transf. (2021)


  7. Beyer, D.: Results of the 3rd Intl. Competition on Software Testing (Test-Comp 2021). Zenodo (2021). https://doi.org/10.5281/zenodo.4459470

  8. Beyer, D.: Software verification: 10th comparative evaluation (SV-COMP 2021). In: Proc. TACAS (2). LNCS 12652, Springer (2021), preprint available


  9. Beyer, D.: SV-Benchmarks: Benchmark set of 3rd Intl. Competition on Software Testing (Test-Comp 2021). Zenodo (2021). https://doi.org/10.5281/zenodo.4459132

  10. Beyer, D.: Test suites from Test-Comp 2021 test-generation tools. Zenodo (2021). https://doi.org/10.5281/zenodo.4459466

  11. Beyer, D., Chlipala, A.J., Henzinger, T.A., Jhala, R., Majumdar, R.: Generating tests from counterexamples. In: Proc. ICSE. pp. 326–335. IEEE (2004). https://doi.org/10.1109/ICSE.2004.1317455

  12. Beyer, D., Jakobs, M.C.: CoVeriTest: Cooperative verifier-based testing. In: Proc. FASE. pp. 389–408. LNCS 11424, Springer (2019). https://doi.org/10.1007/978-3-030-16722-6_23

  13. Beyer, D., Kanav, S.: CoVeriTeam: On-demand composition of cooperative verification systems. Unpublished manuscript (2021)


  14. Beyer, D., Lemberger, T.: Software verification: Testing vs. model checking. In: Proc. HVC. pp. 99–114. LNCS 10629, Springer (2017). https://doi.org/10.1007/978-3-319-70389-3_7

  15. Beyer, D., Lemberger, T.: TestCov: Robust test-suite execution and coverage measurement. In: Proc. ASE. pp. 1074–1077. IEEE (2019). https://doi.org/10.1109/ASE.2019.00105

  16. Beyer, D., Löwe, S., Wendler, P.: Reliable benchmarking: Requirements and solutions. Int. J. Softw. Tools Technol. Transfer 21(1), 1–29 (2019). https://doi.org/10.1007/s10009-017-0469-y

  17. Beyer, D., Wendler, P.: CPU Energy Meter: A tool for energy-aware algorithms engineering. In: Proc. TACAS (2). pp. 126–133. LNCS 12079, Springer (2020). https://doi.org/10.1007/978-3-030-45237-7_8

  18. Bürdek, J., Lochau, M., Bauregger, S., Holzer, A., von Rhein, A., Apel, S., Beyer, D.: Facilitating reuse in multi-goal test-suite generation for software product lines. In: Proc. FASE. pp. 84–99. LNCS 9033, Springer (2015). https://doi.org/10.1007/978-3-662-46675-9_6

  19. Cadar, C., Dunbar, D., Engler, D.R.: KLEE: Unassisted and automatic generation of high-coverage tests for complex systems programs. In: Proc. OSDI. pp. 209–224. USENIX Association (2008)


  20. Cadar, C., Nowack, M.: KLEE symbolic execution engine in 2019. Int. J. Softw. Tools Technol. Transf. (2020). https://doi.org/10.1007/s10009-020-00570-3

  21. Chalupa, M., Novák, J., Strejček, J.: Symbiotic 8: Parallel and targeted test generation (competition contribution). In: Proc. FASE. LNCS 12649, Springer (2021)


  22. Chalupa, M., Strejček, J., Vitovská, M.: Joint forces for memory safety checking. In: Proc. SPIN. pp. 115–132. Springer (2018). https://doi.org/10.1007/978-3-319-94111-0_7

  23. Chowdhury, A.B., Medicherla, R.K., Venkatesh, R.: VeriFuzz: Program-aware fuzzing (competition contribution). In: Proc. TACAS (3). pp. 244–249. LNCS 11429, Springer (2019). https://doi.org/10.1007/978-3-030-17502-3_22

  24. Cok, D.R., Déharbe, D., Weber, T.: The 2014 SMT competition. JSAT 9, 207–242 (2016)


  25. Gadelha, M.R., Menezes, R., Cordeiro, L.: ESBMC 6.1: Automated test-case generation using bounded model checking. Int. J. Softw. Tools Technol. Transf. (2020). https://doi.org/10.1007/s10009-020-00571-2

  26. Godefroid, P., Sen, K.: Combining model checking and testing. In: Handbook of Model Checking, pp. 613–649. Springer (2018). https://doi.org/10.1007/978-3-319-10575-8_19

  27. Harman, M., Hu, L., Hierons, R.M., Wegener, J., Sthamer, H., Baresel, A., Roper, M.: Testability transformation. IEEE Trans. Software Eng. 30(1), 3–16 (2004). https://doi.org/10.1109/TSE.2004.1265732

  28. Holzer, A., Schallhart, C., Tautschnig, M., Veith, H.: How did you specify your test suite. In: Proc. ASE. pp. 407–416. ACM (2010). https://doi.org/10.1145/1858996.1859084

  29. Jaffar, J., Maghareh, R., Godboley, S., Ha, X.L.: TracerX: Dynamic symbolic execution with interpolation (competition contribution). In: Proc. FASE. pp. 530–534. LNCS 12076, Springer (2020). https://doi.org/10.1007/978-3-030-45234-6_28

  30. Jaffar, J., Murali, V., Navas, J.A., Santosa, A.E.: Tracer: A symbolic execution tool for verification. In: Proc. CAV. pp. 758–766. LNCS 7358, Springer (2012). https://doi.org/10.1007/978-3-642-31424-7_61

  31. Jakobs, M.C., Richter, C.: CoVeriTest with adaptive time scheduling (competition contribution). In: Proc. FASE. LNCS 12649, Springer (2021)


  32. Kifetew, F.M., Devroey, X., Rueda, U.: Java unit-testing tool competition: Seventh round. In: Proc. SBST. pp. 15–20. IEEE (2019). https://doi.org/10.1109/SBST.2019.00014

  33. Kim, H.: Fuzzing with stochastic optimization (2020), Bachelor’s Thesis, LMU Munich


  34. King, J.C.: Symbolic execution and program testing. Commun. ACM 19(7), 385–394 (1976). https://doi.org/10.1145/360248.360252

  35. Le, H.M.: LLVM-based hybrid fuzzing with LibKluzzer (competition contribution). In: Proc. FASE. pp. 535–539. LNCS 12076, Springer (2020). https://doi.org/10.1007/978-3-030-45234-6_29

  36. Lemberger, T.: Plain random test generation with PRTest. Int. J. Softw. Tools Technol. Transf. (2020)


  37. Liu, D., Ernst, G., Murray, T., Rubinstein, B.: Legion: Best-first concolic testing (competition contribution). In: Proc. FASE. pp. 545–549. LNCS 12076, Springer (2020). https://doi.org/10.1007/978-3-030-45234-6_31

  38. Ruland, S., Lochau, M., Jakobs, M.C.: HybridTiger: Hybrid model checking and domination-based partitioning for efficient multi-goal test-suite generation (competition contribution). In: Proc. FASE. pp. 520–524. LNCS 12076, Springer (2020). https://doi.org/10.1007/978-3-030-45234-6_26

  39. Song, J., Alves-Foss, J.: The DARPA Cyber Grand Challenge: A competitor's perspective, part 2. IEEE Security and Privacy 14(1), 76–81 (2016). https://doi.org/10.1109/MSP.2016.14

  40. Stump, A., Sutcliffe, G., Tinelli, C.: StarExec: A cross-community infrastructure for logic solving. In: Proc. IJCAR, pp. 367–373. LNCS 8562, Springer (2014). https://doi.org/10.1007/978-3-319-08587-6_28

  41. Sutcliffe, G.: The CADE ATP system competition: CASC. AI Magazine 37(2), 99–101 (2016)


  42. Visser, W., Păsăreanu, C.S., Khurshid, S.: Test-input generation with Java PathFinder. In: Proc. ISSTA. pp. 97–107. ACM (2004). https://doi.org/10.1145/1007512.1007526

  43. Wendler, P., Beyer, D.: sosy-lab/benchexec: Release 3.6. Zenodo (2021). https://doi.org/10.5281/zenodo.4317433

Download references

Author information

Correspondence to Dirk Beyer.

Rights and permissions

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.


Copyright information

© 2021 The Author(s)

About this paper

Cite this paper

Beyer, D. (2021). Status Report on Software Testing: Test-Comp 2021. In: Guerra, E., Stoelinga, M. (eds) Fundamental Approaches to Software Engineering. FASE 2021. Lecture Notes in Computer Science, vol 12649. Springer, Cham. https://doi.org/10.1007/978-3-030-71500-7_17

  • DOI: https://doi.org/10.1007/978-3-030-71500-7_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-71499-4

  • Online ISBN: 978-3-030-71500-7

  • eBook Packages: Computer Science (R0)