Using Dynamic Symbolic Execution to Improve Deductive Verification

  • Dries Vanoverberghe
  • Nikolaj Bjørner
  • Jonathan de Halleux
  • Wolfram Schulte
  • Nikolai Tillmann
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5156)

Abstract

One of the most challenging problems in deductive program verification is finding inductive program invariants, which are typically expressed using quantifiers. With sufficiently strong invariants, existing provers can often prove that a program satisfies its specification; however, provers by themselves do not find such invariants. We propose to automatically generate executable test cases from failed proof attempts using dynamic symbolic execution, exploring program code as well as contracts with quantifiers. A developer can analyze the test cases with a traditional debugger to determine the cause of the error, then correct the program or the contracts and repeat the process.
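
The idea can be illustrated with a minimal sketch. The paper itself targets Spec#-style contracts on .NET; the Java class, the SortedBuffer name, and the deliberately buggy add method below are hypothetical and stand in for any program whose quantified invariant is made executable, so that a dynamic-symbolic-execution engine searching for violations of a failed proof obligation can emit a concrete, debuggable test case:

import java.util.Arrays;

public class SortedBuffer {
    private int[] data = new int[0];

    // Intended class invariant (quantified):
    //   forall i in [0, data.length - 2] : data[i] <= data[i + 1]
    // Encoded as an executable check so a failing proof obligation can be
    // reproduced as a concrete, debuggable test case.
    boolean invariantHolds() {
        for (int i = 0; i + 1 < data.length; i++) {
            if (data[i] > data[i + 1]) {
                return false;
            }
        }
        return true;
    }

    // Deliberately buggy: appends instead of inserting in sorted order,
    // so the invariant is not preserved (not inductive) over this method.
    public void add(int value) {
        data = Arrays.copyOf(data, data.length + 1);
        data[data.length - 1] = value;
        assert invariantHolds() : "class invariant violated after add()";
    }

    // The kind of concrete counterexample that symbolic exploration of the
    // contract could generate; run with assertions enabled (java -ea).
    public static void main(String[] args) {
        SortedBuffer b = new SortedBuffer();
        b.add(2);
        b.add(1); // assertion fails: 2 > 1 breaks sortedness
    }
}

Running such a generated test under a debugger pinpoints the statement after which the invariant no longer holds; the developer can then strengthen the invariant or repair the method and re-run the prover.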


Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Dries Vanoverberghe (1)
  • Nikolaj Bjørner (1)
  • Jonathan de Halleux (1)
  • Wolfram Schulte (1)
  • Nikolai Tillmann (1)
  1. Microsoft Research, One Microsoft Way, Redmond, USA
