Abstract

We discuss how to perform symbolic execution of large programs in a manner that is both compositional (hence more scalable) and demand-driven. Compositional symbolic execution means finding feasible interprocedural program paths by composing symbolic executions of feasible intraprocedural paths. By demand-driven, we mean that as few intraprocedural paths as possible are symbolically executed in order to form an interprocedural path leading to a specific target branch or statement of interest (such as an assertion). A key novelty of this work is that our demand-driven compositional interprocedural symbolic execution is performed entirely using first-order logic formulas solved with an off-the-shelf SMT (Satisfiability Modulo Theories) solver; no procedure inlining or custom algorithm is required for the interprocedural part. This allows a uniform and elegant way of summarizing procedures at various levels of detail and of composing those summaries using logic formulas.
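To make the idea concrete, here is a small illustrative sketch, not the paper's implementation: a callee's feasible intraprocedural paths are summarized once as (guard, effect) formula pairs, and a target branch in the caller is reached by conjoining its branch condition with each summary disjunct and solving. All names (`callee`, `SUMMARY`, `solve`) are hypothetical, and a brute-force search over a small integer domain stands in for the SMT solver.

```python
def callee(x):
    # Two feasible intraprocedural paths.
    if x > 0:
        return 2 * x
    return -x

# Summary of callee as a disjunction of first-order formulas over
# input x and result r:  (x > 0 and r == 2*x)  or  (x <= 0 and r == -x).
# It is computed once and reused for every call site.
SUMMARY = [
    (lambda x: x > 0,  lambda x: 2 * x),   # path 1: guard, effect
    (lambda x: x <= 0, lambda x: -x),      # path 2: guard, effect
]

def solve(target_on_result, domain=range(-100, 101)):
    """Find an input that drives callee's result into the caller's target
    branch, using only the summary (callee's paths are never re-executed).
    Brute-force search here is a toy stand-in for an SMT solver."""
    for guard, effect in SUMMARY:          # compose each summary disjunct
        for x in domain:                   # ...with the target condition
            if guard(x) and target_on_result(effect(x)):
                return x
    return None

# Caller with a target branch to cover: r == 14.
x = solve(lambda r: r == 14)
print(x, callee(x))  # x = 7: path 1's guard x > 0 holds and 2*x == 14
```

The point of the composition is that the interprocedural search touches only the summary formulas, so adding more call sites does not multiply the number of callee paths that must be re-explored.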

We have implemented a prototype of this novel symbolic execution technique as an extension of Pex, a general automatic testing framework for .NET applications. Preliminary experimental results are encouraging. For instance, our prototype was able to generate tests triggering assertion violations in programs with large numbers of program paths that were beyond the scope of non-compositional test generation.

References

  1. Alur, R., Yannakakis, M.: Model Checking of Hierarchical State Machines. In: Proceedings of FSE 1998 (6th ACM SIGSOFT Symposium on the Foundations of Software Engineering), pp. 175–188. ACM (1998)
  2. Babic, D., Hu, A.J.: Structural Abstraction of Software Verification Conditions. In: Damm, W., Hermanns, H. (eds.) CAV 2007. LNCS, vol. 4590. Springer, Heidelberg (2007)
  3. Ball, T., Majumdar, R., Millstein, T., Rajamani, S.: Automatic Predicate Abstraction of C Programs. In: Proceedings of PLDI 2001 (2001)
  4. Bush, W.R., Pincus, J.D., Sielaff, D.J.: A static analyzer for finding dynamic programming errors. Software Practice and Experience 30(7), 775–802 (2000)
  5. Cadar, C., Ganesh, V., Pawlowski, P.M., Dill, D.L., Engler, D.R.: EXE: Automatically Generating Inputs of Death. In: ACM CCS (2006)
  6. Clarke, E., Kroening, D., Lerda, F.: A Tool for Checking ANSI-C Programs. In: Jensen, K., Podelski, A. (eds.) TACAS 2004. LNCS, vol. 2988. Springer, Heidelberg (2004)
  7. Csallner, C., Smaragdakis, Y.: Check 'n' Crash: Combining Static Checking and Testing. In: Proceedings of ICSE 2005. ACM (2005)
  8. de Moura, L., Bjørner, N.: Z3 (2007), http://research.microsoft.com/projects/Z3
  9. Engler, D., Dunbar, D.: Under-constrained execution: making automatic code destruction easy and scalable. In: Proceedings of ISSTA 2007 (2007)
  10. Godefroid, P.: Compositional Dynamic Test Generation. In: POPL 2007, pp. 47–54 (January 2007)
  11. Godefroid, P., Klarlund, N., Sen, K.: DART: Directed Automated Random Testing. In: PLDI 2005, Chicago, pp. 213–223 (June 2005)
  12. Gopan, D., Reps, T.: Low-level Library Analysis and Summarization. In: Damm, W., Hermanns, H. (eds.) CAV 2007. LNCS, vol. 4590, pp. 68–81. Springer, Heidelberg (2007)
  13. Gupta, N., Mathur, A.P., Soffa, M.L.: Generating Test Data for Branch Coverage. In: Proceedings of ASE 2000, pp. 219–227 (September 2000)
  14. Khurshid, S., Suen, Y.L.: Generalizing Symbolic Execution to Library Classes. In: PASTE 2005, Lisbon (September 2005)
  15. King, J.C.: Symbolic Execution and Program Testing. Communications of the ACM 19(7), 385–394 (1976)
  16. Korel, B.: A Dynamic Approach of Test Data Generation. In: ICSM, pp. 311–317 (November 1990)
  17. Livshits, V.B., Lam, M.: Tracking Pointers with Path and Context Sensitivity for Bug Detection in C Programs. In: Proceedings of ESEC/FSE 2003. ACM (2003)
  18. Majumdar, R., Sen, K.: Latest: Lazy dynamic test input generation. Technical report, UC Berkeley (2007)
  19. Reps, T., Horwitz, S., Sagiv, M.: Precise interprocedural dataflow analysis via graph reachability. In: Proceedings of POPL 1995, pp. 49–61 (1995)
  20. Tillmann, N., de Halleux, J.: Pex (2007), http://research.microsoft.com/Pex
  21. Tillmann, N., Schulte, W.: Parameterized unit tests. In: ESEC-FSE 2005, pp. 253–262. ACM, New York (2005)
  22. Visser, W., Pasareanu, C., Khurshid, S.: Test Input Generation with Java PathFinder. In: ISSTA 2004, Boston (July 2004)
  23. Xie, Y., Aiken, A.: Scalable Error Detection Using Boolean Satisfiability. In: Proceedings of POPL 2005 (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Saswat Anand, Georgia Institute of Technology
  • Patrice Godefroid, Microsoft Research
  • Nikolai Tillmann, Microsoft Research
