Automatic Modularization of Large Programs for Bounded Model Checking

  • Marko Kleine Büning
  • Carsten Sinz
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11852)

Abstract

The verification of real-world applications is an ongoing challenge that has yielded numerous methods and approaches. However, the scalability of precise analysis methods on large programs is still limited. We thus propose a formal definition of modules that allows a partitioning of the program into smaller code fragments suitable for verification by bounded model checking. We consider programs written in C/C++ and use LLVM as an intermediate representation. A formal trace semantics for LLVM program runs is defined that also takes modularization into account. Using different abstractions and a selection of fragments of a program for each module, we describe four different modularization approaches. We define desirable properties of modularizations and show how a bounded model checking approach can be adapted to modularization. Two modularization approaches are implemented within the tool QPR-Verify, which is based on the bounded model checker LLBMC. We evaluate our approach on medium-sized embedded system software comprising approximately 160 KLoC.
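The core idea of partitioning a program into separately checkable modules can be illustrated over a call graph. The sketch below is not the paper's formal module definition: the function names, the example call graph, and the reachability-based partitioning heuristic are all hypothetical, chosen only to make the concept concrete.

```python
# Hypothetical call graph of a small program: caller -> list of callees.
CALL_GRAPH = {
    "main": ["init", "run"],
    "run": ["read_sensor", "update_state"],
    "read_sensor": [],
    "update_state": ["log"],
    "init": [],
    "log": [],
}

def module_for(entry, call_graph):
    """Collect `entry` plus every function reachable from it: one candidate
    module that could be handed to a bounded model checker in isolation."""
    seen, stack = set(), [entry]
    while stack:
        f = stack.pop()
        if f not in seen:
            seen.add(f)
            stack.extend(call_graph.get(f, []))
    return seen

# One candidate module per chosen entry point; calls leaving a module
# would be abstracted (e.g., replaced by nondeterministic stubs) during
# checking, which is where the abstractions described in the paper come in.
modules = {f: module_for(f, CALL_GRAPH) for f in ["init", "run"]}
```

Each resulting fragment is much smaller than the whole program, which is what makes bounded model checking of individual modules tractable where whole-program analysis is not.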

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany