Constructing Semantic Models of Programs with the Software Analysis Workbench

  • Robert Dockins
  • Adam Foltzer
  • Joe Hendrix
  • Brian Huffman
  • Dylan McNamee
  • Aaron Tomb
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9971)


The Software Analysis Workbench (SAW) is a system for translating programs into logical expressions, transforming these expressions, and using external reasoning tools (such as SAT and SMT solvers) to prove properties about them. In implementing this translation, SAW combines efficient symbolic execution techniques in a novel way. It has been used most extensively to prove that implementations of cryptographic algorithms are functionally equivalent to reference specifications, but it can also be used to identify inputs to programs that lead to outputs with particular properties, and to prove other properties of programs. In this paper, we describe the structure of the SAW system and present experimental results demonstrating the benefits of its implementation techniques.
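To illustrate the equivalence-checking idea the abstract describes, the toy Python sketch below checks that a bit-twiddling implementation agrees with a straightforward reference specification. This is not SAW itself: SAW would translate both functions into logical terms and hand a single equivalence query to a SAT/SMT solver, covering all inputs symbolically, whereas this sketch simply enumerates the (tiny) 8-bit input space. The function names are illustrative, not taken from the paper.

```python
# Toy stand-in for solver-based equivalence checking: compare an
# "optimized" implementation against a reference specification on
# every 8-bit input. SAW would discharge the same query symbolically
# with a SAT/SMT solver instead of by enumeration.

def popcount_spec(x: int) -> int:
    """Reference specification: count set bits the obvious way."""
    return bin(x & 0xFF).count("1")

def popcount_impl(x: int) -> int:
    """Optimized implementation: 8-bit SWAR (parallel-add) bit trick."""
    x &= 0xFF
    x = (x & 0x55) + ((x >> 1) & 0x55)  # sum adjacent 1-bit fields
    x = (x & 0x33) + ((x >> 2) & 0x33)  # sum adjacent 2-bit fields
    x = (x & 0x0F) + ((x >> 4) & 0x0F)  # sum the two 4-bit fields
    return x

def equivalent() -> bool:
    """Exhaustive check over the full 8-bit input domain."""
    return all(popcount_spec(x) == popcount_impl(x) for x in range(256))
```

Exhaustive enumeration only works because the domain here has 256 elements; the point of translating programs to logic, as SAW does, is that a solver can answer the same question for 32- or 64-bit (or structured) inputs without enumerating them.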


Keywords: Equivalence checking · Cryptography · SAT · SMT · Symbolic execution · Verification



Acknowledgments. Much of the work on SAW and Cryptol has been funded by the NSA's Trusted Systems Research Group, whose team, including Brad Martin, Frank Taylor, Sean Weaver, and Jared Ziegler, also provided design input.



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Robert Dockins (1)
  • Adam Foltzer (1)
  • Joe Hendrix (1)
  • Brian Huffman (1)
  • Dylan McNamee (1)
  • Aaron Tomb (1)

  1. Galois, Inc., Portland, USA
