What Went Wrong: Explaining Counterexamples

  • Conference paper
Model Checking Software (SPIN 2003)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2648)

Abstract

One of the chief advantages of model checking is the production of counterexamples demonstrating that a system does not satisfy a specification. However, it may require a great deal of human effort to extract the essence of an error from even a detailed source-level trace of a failing run. We use an automated method for finding multiple versions of an error (and similar executions that do not produce an error), and analyze these executions to produce a more succinct description of the key elements of the error. The description produced includes identification of portions of the source code crucial to distinguishing failing and succeeding runs, differences in invariants between failing and non-failing runs, and information on the changes in scheduling and environmental actions needed to cause successful runs to fail.

Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Groce, A., Visser, W. (2003). What Went Wrong: Explaining Counterexamples. In: Ball, T., Rajamani, S.K. (eds) Model Checking Software. SPIN 2003. Lecture Notes in Computer Science, vol 2648. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44829-2_8

  • DOI: https://doi.org/10.1007/3-540-44829-2_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40117-9

  • Online ISBN: 978-3-540-44829-7
