
Deductive Verification of Legacy Code

  • Bernhard Beckert (email author)
  • Thorsten Bormer
  • Daniel Grahl
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9952)

Abstract

Deductive verification is about proving that a piece of code conforms to a given requirement specification. For legacy code, this task is notoriously hard for three reasons: (1) writing specifications post hoc is much more difficult than producing code and its specification simultaneously, (2) verification does not scale because legacy code is often badly modularized, and (3) legacy code may be written in a way that forces frequent user interaction during verification.

We give examples of characteristics of (imperative) legacy code that impede the specification and verification effort. We also discuss how to handle the challenges of legacy-code verification and suggest a strategy for post-hoc verification, together with possible improvements to existing verification approaches. Our experience draws on two case studies on imperative implementations (in Java and C) in which we verified legacy software, i.e., code that was provided by a third party and was not designed to be verified.
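
To make the notion of a requirement specification concrete, the following is a minimal, illustrative sketch of a JML-annotated Java method of the kind that deductive verifiers such as KeY take as input. The class, method, and contract (Account, balance, deposit) are invented for illustration and are not taken from the paper's case studies.

    public class Account {
        private int balance;

        // JML contract for deposit: precondition (requires),
        // postcondition (ensures), and frame condition (assignable).
        // The second conjunct of the precondition rules out integer
        // overflow, so the postcondition is provable with Java int
        // semantics.
        /*@ requires amount > 0 && balance <= Integer.MAX_VALUE - amount;
          @ ensures balance == \old(balance) + amount;
          @ assignable balance;
          @*/
        public void deposit(int amount) {
            balance += amount;
        }
    }

Details such as the overflow guard in the precondition are exactly the kind of information that is easy to miss when a contract is written post hoc for existing code.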

Keywords

Software Product Line, Verification Process, Proof Obligation, Verification Task, Interface Specification


Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Bernhard Beckert (1), email author
  • Thorsten Bormer (1)
  • Daniel Grahl (1)

  1. Karlsruhe Institute of Technology, Karlsruhe, Germany
