Automatic Control and Computer Sciences, Volume 49, Issue 7, pp 466–472

Using a bounded model checker for test generation: How to kill two birds with one SMT solver


Abstract

Automated test generation has received a lot of attention in recent decades because it addresses two problems inherent to software testing: the effort of writing tests in the first place and the human factor in achieving adequate test coverage. The de facto most promising technique for automatic test generation is dynamic symbolic execution assisted by an automated constraint solver, e.g., an SMT solver. This process is very similar to bounded model checking, which also involves generating a logic model from source code, asserting logical properties over it, and processing the model returned by the solver. This paper describes a prototype unit test generator for C built on a working bounded model checker called Borealis and shows that the two techniques are very similar and can easily be implemented using the same basic components. The prototype test generator has been evaluated on a number of examples and has shown good results in terms of test coverage and test excessiveness.
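The core idea the abstract describes — encode each execution path of a program as a set of logical constraints, ask a solver for a satisfying model, and read that model back as a concrete test input — can be illustrated with a minimal sketch. The function under test, the path labels, and the brute-force `solve` routine below are all illustrative stand-ins; a real generator such as the one described here would hand the constraints to an SMT solver (e.g., Z3) instead of enumerating a small integer domain.

```python
from itertools import product

def solve(constraints, domain=range(-20, 21)):
    """Return one (x, y) assignment satisfying all constraints, or None.
    A toy stand-in for an SMT solver: the returned assignment plays the
    role of the solver's model, which becomes a generated test input."""
    for x, y in product(domain, repeat=2):
        if all(c(x, y) for c in constraints):
            return x, y
    return None

# Path conditions for a hypothetical function under test:
#   if (x > y) { if (x - y > 10) /* A */ else /* B */ } else /* C */
paths = {
    "A": [lambda x, y: x > y, lambda x, y: x - y > 10],
    "B": [lambda x, y: x > y, lambda x, y: not (x - y > 10)],
    "C": [lambda x, y: not (x > y)],
}

# One satisfying model per path = one test input per path.
tests = {name: solve(cs) for name, cs in paths.items()}
for name, inputs in tests.items():
    print(name, inputs)
```

Each entry of `tests` is a concrete input pair driving execution down one path, which is exactly the shape of output a unit test generator needs in order to emit one test case per feasible path.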

Keywords

automated test generation, dynamic symbolic execution, bounded model checking, satisfiability modulo theories, function contracts



Copyright information

© Allerton Press, Inc. 2015

Authors and Affiliations

  • M. Petrov (1)
  • K. Gagarski (1)
  • M. Belyaev (1)
  • V. Itsykson (1)

  1. St. Petersburg State Polytechnical University, St. Petersburg, Russia
