Using Debuggers to Understand Failed Verification Attempts

  • Conference paper
FM 2011: Formal Methods (FM 2011)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 6664)

Abstract

Automatic program verification allows programmers to detect program errors at compile time. When an attempt to automatically verify a program fails, the reason for the failure is often difficult to understand. Many program verifiers provide a counterexample for the failed attempt. These counterexamples are usually very complex and therefore not amenable to manual inspection. Moreover, the counterexample may be invalid, possibly misleading the programmer. We present a new approach that helps the programmer understand failed verification attempts by generating an executable program that reproduces the failed verification attempt described by the counterexample. The generated program (1) can be executed within a program debugger to systematically explore the counterexample, (2) encodes the program semantics used by the verifier, which allows us to detect errors in specifications as well as in programs, and (3) contains runtime checks for all specifications, which allows us to detect spurious errors. Our approach is implemented within the Spec# programming system.
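
To illustrate the idea, consider a small hypothetical Spec# method whose verification fails with a counterexample, together with a sketch of the kind of replay program the approach generates. The method, the counterexample value, and the harness structure below are illustrative assumptions, not the tool's actual output; the Debug.Assert call stands in for the generated runtime checks of the specification.

    using System.Diagnostics;

    class Example
    {
        // Hypothetical Spec# method that fails to verify: the verifier
        // reports a possible postcondition violation and produces a
        // counterexample with x == int.MinValue.
        static int Abs(int x)
            ensures result >= 0;    // Spec# postcondition
        {
            // -int.MinValue overflows back to int.MinValue, so the
            // postcondition does not hold for all inputs
            return x < 0 ? -x : x;
        }

        // Sketch of a generated replay program (illustrative structure):
        // plain C#, so it can be stepped through in an ordinary debugger.
        static void Main()
        {
            int x = int.MinValue;   // input value taken from the counterexample
            int result = Abs(x);    // set a breakpoint here and step into the call
            // Runtime check of the specification: if this assertion held,
            // the counterexample would be spurious; here it fails,
            // confirming a genuine error in the program.
            Debug.Assert(result >= 0, "postcondition of Abs violated");
        }
    }

A passing run of such a replay program would flag the counterexample as spurious, while a failing runtime check (as here) confirms a genuine error in the specification or the program.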




Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Müller, P., Ruskiewicz, J.N. (2011). Using Debuggers to Understand Failed Verification Attempts. In: Butler, M., Schulte, W. (eds) FM 2011: Formal Methods. FM 2011. Lecture Notes in Computer Science, vol 6664. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21437-0_8

  • DOI: https://doi.org/10.1007/978-3-642-21437-0_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-21436-3

  • Online ISBN: 978-3-642-21437-0

  • eBook Packages: Computer Science, Computer Science (R0)
