Error explanation with distance metrics

  • Alex Groce
  • Sagar Chaki
  • Daniel Kroening
  • Ofer Strichman
Special section on Tools and Algorithms for the Construction and Analysis of Systems

Abstract

In the event that a system does not satisfy a specification, a model checker will typically automatically produce a counterexample trace that shows a particular instance of the undesirable behavior. Unfortunately, the important steps that follow the discovery of a counterexample are generally not automated. The user must first decide if the counterexample shows genuinely erroneous behavior or is an artifact of improper specification or abstraction. In the event that the error is real, there remains the difficult task of understanding the error well enough to isolate and modify the faulty aspects of the system. This paper describes a (semi-)automated approach for assisting users in understanding and isolating errors in ANSI C programs. The approach, derived from Lewis’ counterfactual approach to causality, is based on distance metrics for program executions. Experimental results show that the power of the model checking engine can be used to provide assistance in understanding errors and to isolate faulty portions of the source code.
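The core idea sketched in the abstract can be illustrated in miniature. The sketch below is not the paper's implementation (which works on an SSA-based unwinding of the C program and uses a pseudo-Boolean solver); it is a hypothetical simplification in which an execution is an aligned sequence of variable assignments, the distance between two executions is the number of assignments on which they differ, and the explanation is the set of differences between a failing execution and the closest passing one.

```python
# Illustrative sketch only: a Hamming-style distance over aligned
# executions, a stand-in for the paper's SSA-based distance metric.

def distance(exec_a, exec_b):
    """Count the positions where the two executions assign differently."""
    assert len(exec_a) == len(exec_b), "sketch assumes aligned executions"
    return sum(1 for a, b in zip(exec_a, exec_b) if a != b)

def closest_passing(failing, passing_candidates):
    """Return the passing execution closest to the failing one, plus the
    differing assignments -- the candidate 'causes' of the error."""
    best = min(passing_candidates, key=lambda p: distance(failing, p))
    deltas = [(i, f, p)
              for i, (f, p) in enumerate(zip(failing, best)) if f != p]
    return best, deltas

if __name__ == "__main__":
    # Hypothetical traces: each entry is a (variable, value) assignment.
    failing = [("x", 0), ("y", 3), ("z", 3)]
    passing = [
        [("x", 1), ("y", 3), ("z", 4)],   # distance 2 from the failing run
        [("x", 0), ("y", 3), ("z", 4)],   # distance 1: only z differs
    ]
    best, deltas = closest_passing(failing, passing)
    print(deltas)   # -> [(2, ('z', 3), ('z', 4))]
```

Under this (assumed) metric, the single differing assignment to `z` is reported as the explanation, mirroring the paper's principle that the differences between a counterexample and its nearest successful execution localize the fault.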

Keywords

Model checking · Error explanation · Fault localization · Automated debugging

References

  1. Agrawal, H., Horgan, J.R., London, S., Wong, W.E.: Fault localization using execution slices and dataflow tests. In: International Symposium on Software Reliability Engineering, pp. 143–151. Toulouse, France (1995)
  2. Aloul, F., Ramani, A., Markov, I., Sakallah, K.: PBS: A backtrack search pseudo Boolean solver. In: Symposium on the Theory and Applications of Satisfiability Testing (SAT), pp. 346–353. Cincinnati, OH (2002)
  3. Alpern, B., Wegman, M., Zadeck, F.: Detecting equality of variables in programs. In: Principles of Programming Languages, pp. 1–11. San Diego, CA (1988)
  4. Alpuente, M., Comini, M., Escobar, S., Falaschi, M., Lucas, S.: Abstract diagnosis of functional programs. In: Logic Based Program Synthesis and Transformation, 12th International Workshop (2002)
  5. Anderson, P., Teitelbaum, T.: Software inspection using CodeSurfer. In: Workshop on Inspection in Software Engineering. Paris, France (2001)
  6. Ball, T., Eick, S.: Software visualization in the large. Computer 29(4), 33–43 (1996)
  7. Ball, T., Naik, M., Rajamani, S.: From symptom to cause: Localizing errors in counterexample traces. In: Principles of Programming Languages, pp. 97–105. New Orleans, LA (2003)
  8. Ball, T., Rajamani, S.: Automatically validating temporal safety properties of interfaces. In: SPIN Workshop on Model Checking of Software, pp. 103–122. Toronto, Canada (2001)
  9. Biere, A., Artho, C., Schuppan, V.: Liveness checking as safety checking. In: ERCIM Workshop in Formal Methods for Industrial Critical Systems, vol. 66 of Electronic Notes in Theoretical Computer Science. University of Malaga, Spain (2002)
  10. Biere, A., Cimatti, A., Clarke, E., Zhu, Y.: Symbolic model checking without BDDs. In: Tools and Algorithms for the Construction and Analysis of Systems, pp. 193–207. Amsterdam, The Netherlands (1999)
  11. Chaki, S., Clarke, E., Groce, A., Jha, S., Veith, H.: Modular verification of software components in C. IEEE Trans. Softw. Eng. 30(6), 388–402 (2004)
  12. Chaki, S., Groce, A., Strichman, O.: Explaining abstract counterexamples. In: Foundations of Software Engineering, pp. 73–82. Newport Beach, CA (2004)
  13. Chan, W.: Temporal-logic queries. In: Computer-Aided Verification, pp. 450–463. Chicago, IL (2000)
  14. Chechik, M., Gurfinkel, A.: Proof-like counter-examples. In: Tools and Algorithms for the Construction and Analysis of Systems, pp. 160–175. Warsaw, Poland (2003)
  15. Choi, J., Zeller, A.: Isolating failure-inducing thread schedules. In: International Symposium on Software Testing and Analysis, pp. 210–220. Rome, Italy (2002)
  16. Clarke, E., Emerson, E.: The design and synthesis of synchronization skeletons using temporal logic. In: Workshop on Logics of Programs, pp. 52–71. Yorktown Heights, NY (1981)
  17. Clarke, E., Grumberg, O., McMillan, K., Zhao, X.: Efficient generation of counterexamples and witnesses in symbolic model checking. In: Design Automation Conference, pp. 427–432. San Francisco, CA (1995)
  18. Clarke, E., Grumberg, O., Peled, D.: Model Checking. MIT Press, Cambridge (2000)
  19. Cleve, H., Zeller, A.: Locating causes of program failures. In: International Conference on Software Engineering. St. Louis, MO (2005) (in press)
  20. Cobleigh, J., Giannakopoulou, D., Păsăreanu, C.: Learning assumptions for compositional verification. In: Tools and Algorithms for the Construction and Analysis of Systems, pp. 331–346. Warsaw, Poland (2003)
  21. Coen-Porisini, A., Denaro, G., Ghezzi, C., Pezze, M.: Using symbolic execution for verifying safety-critical systems. In: European Software Engineering Conference/Foundations of Software Engineering, pp. 142–151. Vienna, Austria (2001)
  22. Dodoo, N., Donovan, A., Lin, L., Ernst, M.: Selecting predicates for implications in program analysis. http://pag.lcs.mit.edu/~mernst/pubs/invariants-implications.ps (2000)
  23. Ernst, M., Cockrell, J., Griswold, W., Notkin, D.: Dynamically discovering likely program invariants to support program evolution. In: International Conference on Software Engineering, pp. 213–224. Los Angeles, CA (1999)
  24. Galles, D., Pearl, J.: Axioms of causal relevance. Artif. Intell. 97(1–2), 9–43 (1997)
  25. Groce, A.: Error explanation with distance metrics. In: Tools and Algorithms for the Construction and Analysis of Systems, pp. 108–122. Barcelona, Spain (2004)
  26. Groce, A., Kroening, D.: Making the most of BMC counterexamples. In: Workshop on Bounded Model Checking, pp. 67–81. Boston, MA (2004)
  27. Groce, A., Kroening, D., Lerda, F.: Understanding counterexamples with explain. In: Computer-Aided Verification, pp. 453–456. Boston, MA (2004)
  28. Groce, A., Visser, W.: What went wrong: Explaining counterexamples. In: SPIN Workshop on Model Checking of Software, pp. 121–135. Portland, OR (2003)
  29. Gurfinkel, A., Devereux, B., Chechik, M.: Model exploration with temporal logic query checking. In: Foundations of Software Engineering, pp. 139–148. Charleston, SC (2002)
  30. Harrold, M., Rothermel, G., Sayre, K., Wu, R., Yi, L.: An empirical investigation of the relationship between spectra differences and regression faults. Softw. Test. Verif. Reliab. 10(3), 171–194 (2000)
  31. Horwich, P.: Asymmetries in Time, pp. 167–176 (1987)
  32. Horwitz, S., Reps, T.: The use of program dependence graphs in software engineering. In: International Conference on Software Engineering, pp. 392–411. Melbourne, Australia (1992)
  33. Hume, D.: A Treatise of Human Nature. London (1739)
  34. Hume, D.: An Enquiry Concerning Human Understanding. London (1748)
  35. Jin, H., Ravi, K., Somenzi, F.: Fate and free will in error traces. In: Tools and Algorithms for the Construction and Analysis of Systems, pp. 445–458. Grenoble, France (2002)
  36. Jones, J., Harrold, M., Stasko, J.: Visualization of test information to assist fault localization. In: International Conference on Software Engineering, pp. 467–477. Orlando, FL (2002)
  37. Kim, J.: Causes and counterfactuals. J. Philos. 70, 570–572 (1973)
  38. Kókai, G., Harmath, L., Gyimóthy, T.: Algorithmic debugging and testing of Prolog programs. In: Workshop on Logic Programming Environments, pp. 14–21 (1997)
  39. Kroening, D., Clarke, E., Lerda, F.: A tool for checking ANSI-C programs. In: Tools and Algorithms for the Construction and Analysis of Systems, pp. 168–176. Barcelona, Spain (2004)
  40. Lewis, D.: Causation. J. Philos. 70, 556–567 (1973)
  41. Lewis, D.: Counterfactuals. Harvard University Press, Cambridge (1973) [revised printing 1986]
  42. Lucas, P.: Analysis of notions of diagnosis. Artif. Intell. 105(1–2), 295–343 (1998)
  43. Mateis, C., Stumptner, M., Wieland, D., Wotawa, F.: Model-based debugging of Java programs. In: Workshop on Automatic Debugging. Munich, Germany (2000)
  44. Moskewicz, M., Madigan, C., Zhao, Y., Zhang, L., Malik, S.: Chaff: Engineering an efficient SAT solver. In: Design Automation Conference, pp. 530–535. Las Vegas, NV (2001)
  45. Nayak, P., Williams, B.: Fast context switching in real-time propositional reasoning. In: National Conference on Artificial Intelligence, pp. 50–56. Providence, RI (1997)
  46. Queille, J., Sifakis, J.: Specification and verification of concurrent programs in CESAR. In: International Symposium on Programming, pp. 337–351. Torino, Italy (1982)
  47. Ravi, K., Somenzi, F.: Minimal assignments for bounded model checking. In: Tools and Algorithms for the Construction and Analysis of Systems, pp. 31–45. Barcelona, Spain (2004)
  48. Reiter, R.: A theory of diagnosis from first principles. Artif. Intell. 32(1), 57–95 (1987)
  49. Renieris, M., Reiss, S.: Fault localization with nearest neighbor queries. In: Automated Software Engineering, pp. 30–39. Montreal, Canada (2003)
  50. Reps, T., Ball, T., Das, M., Larus, J.: The use of program profiling for software maintenance with applications to the year 2000 problem. In: European Software Engineering Conference, pp. 432–449. Zurich, Switzerland (1997)
  51. Rothermel, G., Harrold, M.J.: Empirical studies of a safe regression test selection technique. Softw. Eng. 24(6), 401–419 (1999)
  52. Sankoff, D., Kruskal, J. (eds.): Time Warps, String Edits, and Macromolecules: The Theory and Practice of Sequence Comparison. Addison-Wesley, Reading (1983)
  53. Shapiro, E.: Algorithmic Program Debugging. MIT Press, Cambridge (1983)
  54. Sharygina, N., Peled, D.: A combined testing and verification approach for software reliability. In: Formal Methods Europe, pp. 611–628. Berlin, Germany (2001)
  55. Simmons, R., Pecheur, C.: Automating model checking for autonomous systems. In: AAAI Spring Symposium on Real-Time Autonomous Systems (2000)
  56. Sosa, E., Tooley, M. (eds.): Causation. Oxford University Press, Oxford (1993)
  57. Tan, L., Cleaveland, R.: Evidence-based model checking. In: Computer-Aided Verification, pp. 455–470. Copenhagen, Denmark (2002)
  58. Tip, F.: A survey of program slicing techniques. J. Program. Lang. 3, 121–189 (1995)
  59. Vesey, I.: Expertise in debugging computer programs. Int. J. Man-Mach. Stud. 23(5), 459–494 (1985)
  60. Visser, W., Havelund, K., Brat, G., Park, S., Lerda, F.: Model checking programs. Autom. Softw. Eng. 10(2), 203–232 (2003)
  61. Wotawa, F.: On the relationship between model-based debugging and program slicing. Artif. Intell. 135(1–2), 125–143 (2002)
  62. Zeller, A.: Isolating cause–effect chains from computer programs. In: Foundations of Software Engineering, pp. 1–10. Charleston, SC (2002)
  63. Zeller, A., Hildebrandt, R.: Simplifying and isolating failure-inducing input. IEEE Trans. Softw. Eng. 28(2), 183–200 (2002)
  64. Zhang, X., Gupta, R., Zhang, Y.: Precise dynamic slicing algorithms. In: International Conference on Software Engineering, pp. 319–329. Portland, OR (2003)

Copyright information

© Springer-Verlag 2005

Authors and Affiliations

  • Alex Groce (1)
  • Sagar Chaki (1)
  • Daniel Kroening (2)
  • Ofer Strichman (3)
  1. JPL Laboratory for Reliable Software, California Institute of Technology
  2. ETH Zurich, Zurich, Switzerland
  3. Technion, Haifa, Israel
