SMACK: Decoupling Source Language Details from Verifier Implementations

  • Zvonimir Rakamarić
  • Michael Emmi
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8559)

Abstract

A major obstacle to putting software verification research into practice is the high cost of developing the infrastructure enabling the application of verification algorithms to actual production code, in all of its complexity. Handling an entire programming language is a huge endeavor that few researchers are willing to undertake; even fewer could invest the effort to implement a verification algorithm for many source languages. To decouple the implementations of verification algorithms from the details of source languages, and enable rapid prototyping on production code, we have developed SMACK. At its core, SMACK is a translator from the LLVM intermediate representation (IR) into the Boogie intermediate verification language (IVL). Sourcing LLVM exploits an increasing number of compiler front ends, optimizations, and analyses. Targeting Boogie exploits a canonical platform which simplifies the implementation of algorithms for verification, model checking, and abstract interpretation. Our initial experience in verifying C-language programs is encouraging: SMACK is competitive in SV-COMP benchmarks, is able to translate large programs (100 KLOC), and is being used in several verification research prototypes.
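
To make the pipeline in the abstract concrete, the following is a minimal sketch, not taken from the paper: a self-contained, SV-COMP-style C program of the kind such a Clang-to-LLVM-to-Boogie tool chain accepts. The __VERIFIER_nondet_int name follows the SV-COMP convention referenced below; the abs_value function and the stub body for the nondeterministic input are hypothetical additions so that the example also compiles and runs on its own.

  /* A minimal sketch, not from the paper: the kind of SV-COMP-style C
     program a SMACK-like pipeline (Clang -> LLVM IR -> Boogie -> verifier)
     takes as input.  The __VERIFIER_nondet_int name follows the SV-COMP
     convention; its stub body here is only so the file also compiles and
     runs natively, whereas a verifier treats the call as an unconstrained
     integer.  The abs_value function is illustrative. */
  #include <assert.h>
  #include <stdlib.h>

  static int __VERIFIER_nondet_int(void) {
    return rand();  /* stand-in for a nondeterministic input */
  }

  static int abs_value(int x) {
    return x < 0 ? -x : x;
  }

  int main(void) {
    int x = __VERIFIER_nondet_int();
    if (x > -1000 && x < 1000) {
      /* A translator in the style the abstract describes would turn the
         LLVM IR for this check into a Boogie assert, with memory accesses
         modeled as operations on Boogie map variables. */
      assert(abs_value(x) >= 0);
    }
    return 0;
  }

Under this reading, Clang lowers the program to LLVM bitcode, the translator maps each IR instruction to Boogie statements over map-typed heap variables, and a Boogie-based back end discharges the resulting assertion.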

References

  1. Albarghouthi, A., Li, Y., Gurfinkel, A., Chechik, M.: UFO: A framework for abstraction- and interpolation-based software verification. In: Madhusudan, P., Seshia, S.A. (eds.) CAV 2012. LNCS, vol. 7358, pp. 672–678. Springer, Heidelberg (2012)
  2. Arlt, S., Rubio-González, C., Rümmer, P., Schäf, M., Shankar, N.: The gradual verifier. In: Badger, J.M., Rozier, K.Y. (eds.) NFM 2014. LNCS, vol. 8430, pp. 313–327. Springer, Heidelberg (2014)
  3. Arlt, S., Rümmer, P., Schäf, M.: Joogie: From Java through Jimple to Boogie. In: ACM SIGPLAN International Workshop on State of the Art in Java Program Analysis (SOAP), pp. 3–8 (2013)
  4. Babić, D., Hu, A.J.: Calysto: Scalable and precise extended static checking. In: International Conference on Software Engineering (ICSE), pp. 211–220 (2008)
  5. Barnett, M., Leino, K.R.M.: Weakest-precondition of unstructured programs. In: ACM SIGPLAN-SIGSOFT Workshop on Program Analysis For Software Tools and Engineering (PASTE), pp. 82–87 (2005)
  6. Barrett, C., Stump, A., Tinelli, C.: The SMT-LIB standard: Version 2.0. In: International Workshop on Satisfiability Modulo Theories (SMT) (2010)
  7. Beyer, D., Keremoglu, M.E.: CPAchecker: A tool for configurable software verification. In: Gopalakrishnan, G., Qadeer, S. (eds.) CAV 2011. LNCS, vol. 6806, pp. 184–190. Springer, Heidelberg (2011)
  8. Bouajjani, A., Emmi, M., Enea, C., Hamza, J.: Verifying concurrent programs against sequential specifications. In: Felleisen, M., Gardner, P. (eds.) ESOP 2013. LNCS, vol. 7792, pp. 290–309. Springer, Heidelberg (2013)
  9. Cadar, C., Dunbar, D., Engler, D.: KLEE: Unassisted and automatic generation of high-coverage tests for complex systems programs. In: USENIX Conference on Operating Systems Design and Implementation (OSDI), pp. 209–224 (2008)
  10. Chatterjee, S., Lahiri, S.K., Qadeer, S., Rakamarić, Z.: A reachability predicate for analyzing low-level software. In: Grumberg, O., Huth, M. (eds.) TACAS 2007. LNCS, vol. 4424, pp. 19–33. Springer, Heidelberg (2007)
  11. Clang: A C language family frontend for LLVM, http://clang.llvm.org
  12. Clarke, E., Kroening, D., Sharygina, N., Yorav, K.: SATABS: SAT-based predicate abstraction for ANSI-C. In: Halbwachs, N., Zuck, L.D. (eds.) TACAS 2005. LNCS, vol. 3440, pp. 570–574. Springer, Heidelberg (2005)
  13. Clarke, E., Kroening, D., Lerda, F.: A tool for checking ANSI-C programs. In: Jensen, K., Podelski, A. (eds.) TACAS 2004. LNCS, vol. 2988, pp. 168–176. Springer, Heidelberg (2004)
  14. Cohen, E., Dahlweid, M., Hillebrand, M., Leinenbach, D., Moskal, M., Santen, T., Schulte, W., Tobies, S.: VCC: A practical system for verifying concurrent C. In: Berghofer, S., Nipkow, T., Urban, C., Wenzel, M. (eds.) TPHOLs 2009. LNCS, vol. 5674, pp. 23–42. Springer, Heidelberg (2009)
  15. Contiki: The open source OS for the Internet of things, http://www.contiki-os.org
  16. Cordeiro, L., Fischer, B., Marques-Silva, J.: SMT-based bounded model checking for embedded ANSI-C software. In: IEEE/ACM International Conference on Automated Software Engineering (ASE), pp. 137–148 (2009)
  17. Cuoq, P., Kirchner, F., Kosmatov, N., Prevosto, V., Signoles, J., Yakobowski, B.: Frama-C: A software analysis perspective. In: Eleftherakis, G., Hinchey, M., Holcombe, M. (eds.) SEFM 2012. LNCS, vol. 7504, pp. 233–247. Springer, Heidelberg (2012)
  18. de Moura, L., Bjørner, N.S.: Z3: An efficient SMT solver. In: Ramakrishnan, C.R., Rehof, J. (eds.) TACAS 2008. LNCS, vol. 4963, pp. 337–340. Springer, Heidelberg (2008)
  19. DeLine, R., Leino, K.R.M.: BoogiePL: A typed procedural language for checking object-oriented programs. Technical Report MSR-TR-2005-70, Microsoft Research (2005)
  20. Emmi, M., Qadeer, S., Rakamarić, Z.: Delay-bounded scheduling. In: ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), pp. 411–422 (2011)
  21. Filliâtre, J.-C., Marché, C.: Multi-prover verification of C programs. In: Davies, J., Schulte, W., Barnett, M. (eds.) ICFEM 2004. LNCS, vol. 3308, pp. 15–29. Springer, Heidelberg (2004)
  22. Fischer, B., Inverso, O., Parlato, G.: CSeq: A sequentialization tool for C (competition contribution). In: Piterman, N., Smolka, S.A. (eds.) TACAS 2013. LNCS, vol. 7795, pp. 616–618. Springer, Heidelberg (2013)
  23. Lal, A., Qadeer, S., Lahiri, S.K.: A solver for reachability modulo theories. In: Madhusudan, P., Seshia, S.A. (eds.) CAV 2012. LNCS, vol. 7358, pp. 427–443. Springer, Heidelberg (2012)
  24. Lattner, C., Adve, V.: LLVM: A compilation framework for lifelong program analysis & transformation. In: International Symposium on Code Generation and Optimization (CGO), pp. 75–86 (2004)
  25. Lattner, C., Lenharth, A., Adve, V.S.: Making context-sensitive points-to analysis with heap cloning practical for the real world. In: ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), pp. 278–289 (2007)
  26. Leino, K.R.M.: This is Boogie 2 (2008)
  27. The LLVM compiler infrastructure, http://llvm.org
  28. Merz, F., Falke, S., Sinz, C.: LLBMC: Bounded model checking of C and C++ programs using a compiler IR. In: Joshi, R., Müller, P., Podelski, A. (eds.) VSTTE 2012. LNCS, vol. 7152, pp. 146–161. Springer, Heidelberg (2012)
  29. Păsăreanu, C.S., Mehlitz, P.C., Bushnell, D.H., Gundy-Burlet, K., Lowry, M., Person, S., Pape, M.: Combining unit-level symbolic execution and system-level concrete execution for testing NASA software. In: International Symposium on Software Testing and Analysis (ISSTA), pp. 15–26 (2008)
  30. Rakamarić, Z., Hu, A.J.: Automatic inference of frame axioms using static analysis. In: IEEE/ACM International Conference on Automated Software Engineering (ASE), pp. 89–98 (2008)
  31. Rakamarić, Z., Hu, A.J.: A scalable memory model for low-level code. In: Jones, N.D., Müller-Olm, M. (eds.) VMCAI 2009. LNCS, vol. 5403, pp. 290–304. Springer, Heidelberg (2009)
  32. Siegel, S.F., Zirkel, T.K.: TASS: The toolkit for accurate scientific software. Mathematics in Computer Science 5(4), 395–426 (2011)
  33. International competition on software verification (SV-COMP), http://sv-comp.sosy-lab.org
  34. Wang, W., Barrett, C., Wies, T.: Cascade 2.0. In: McMillan, K.L., Rival, X. (eds.) VMCAI 2014. LNCS, vol. 8318, pp. 142–160. Springer, Heidelberg (2014)

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Zvonimir Rakamarić (1)
  • Michael Emmi (2)
  1. School of Computing, University of Utah, USA
  2. IMDEA Software Institute, Spain
