Symbolic Execution with CEGAR

  • Dirk Beyer
  • Thomas Lemberger
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9952)

Abstract

Symbolic execution, a standard technique in program analysis, is a particularly successful and popular component in systems for test-case generation. A central open research problem is that the approach suffers from path explosion. We apply abstraction to symbolic execution and refine the abstract model using counterexample-guided abstraction refinement (CEGAR), a standard technique from model checking. We also apply refinement selection with existing and new heuristics to steer the refinement procedure and further improve its performance. We implemented our technique in the open-source software-verification framework CPAchecker. Our experimental results show that the implementation is highly competitive.
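
The abstract describes the approach at a high level; the following is a minimal, self-contained Python sketch of the core loop. Everything here is illustrative: the toy path encoding and all names (abstract_reaches_error, concrete_check, cegar) are hypothetical and do not reflect CPAchecker's actual API, which implements the technique as a configurable program analysis in Java.

    # Minimal sketch of symbolic execution with CEGAR. Hypothetical
    # names and a toy path encoding; not CPAchecker's API.

    # A candidate error path: ('assign', var, const) writes a constant,
    # ('assume', var, op, const) is a branch condition on the path.
    ERROR_PATH = [
        ("assign", "x", 0),
        ("assume", "y", ">", 5),  # branch over an untracked variable
        ("assume", "x", ">", 0),  # contradicts x == 0: infeasible path
    ]

    def holds(val, op, c):
        return val > c if op == ">" else val == c

    def abstract_reaches_error(path, precision):
        """Execute the path abstractly, tracking only variables in
        'precision'. Assumptions over untracked variables are treated
        as satisfiable (over-approximation), so spurious error paths
        are possible."""
        env = {}
        for stmt in path:
            if stmt[0] == "assign" and stmt[1] in precision:
                env[stmt[1]] = stmt[2]
            elif stmt[0] == "assume" and stmt[1] in precision:
                _, var, op, c = stmt
                if var in env and not holds(env[var], op, c):
                    return False  # contradiction visible: path pruned
        return True               # error location abstractly reachable

    def concrete_check(path):
        """Precise feasibility check of the full path. Returns the set
        of variables in a contradiction, or None if the path is
        feasible. (A real refiner would query an SMT solver and derive
        the new precision from interpolants; here we re-run precisely.)"""
        env = {}
        for stmt in path:
            if stmt[0] == "assign":
                env[stmt[1]] = stmt[2]
            else:
                _, var, op, c = stmt
                if var in env and not holds(env[var], op, c):
                    return {var}  # culprit variable drives refinement
        return None

    def cegar(path):
        precision = set()         # start with the coarsest abstraction
        while abstract_reaches_error(path, precision):
            culprits = concrete_check(path)
            if culprits is None:
                return "bug confirmed"    # the error path is feasible
            precision |= culprits         # refine: track the culprits
        return "safe under precision %s" % sorted(precision)

    print(cegar(ERROR_PATH))      # -> safe under precision ['x']

Refinement selection enters where this sketch always refines on the first contradiction it finds: when an infeasible path admits several candidate refinements, a heuristic picks the one expected to keep the abstract model small.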

Keywords

Variable Assignment · Program Variable · Symbolic Execution · Program Location · Verification Task

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Dirk Beyer, LMU Munich, Munich, Germany
  • Thomas Lemberger, University of Passau, Passau, Germany
