Development and Evaluation of LAV: An SMT-Based Error Finding Platform

System Description
  • Milena Vujošević-Janičić
  • Viktor Kuncak
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7152)

Abstract

We present the design and evaluation of LAV, a new open-source tool for statically checking program assertions and errors. LAV integrates into the popular LLVM infrastructure for compilation and analysis. LAV uses symbolic execution to construct a first-order logic formula that models the behavior of each basic block, and it models the relationships between basic blocks using propositional formulas. By combining these two kinds of formulas, LAV generates polynomial-sized verification conditions for loop-free code; loops are handled by underapproximating or overapproximating unrolling. LAV can pass the generated verification conditions to one of several SMT solvers: Boolector, MathSAT, Yices, or Z3. Our experiments on around 200 small benchmarks suggest that LAV is competitive with related tools and can serve as an effective alternative for certain verification tasks. Our experience also shows that LAV provides significant help in analyzing student programs and giving feedback to students in everyday university practice.



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Milena Vujošević-Janičić (1)
  • Viktor Kuncak (2)
  1. Faculty of Mathematics, Belgrade, Serbia
  2. School of Computer and Communication Sciences, EPFL, Lausanne, Switzerland
