International Journal on Software Tools for Technology Transfer, Volume 8, Issue 3, pp 229–247

Error explanation with distance metrics


  • Alex Groce
    • JPL Laboratory for Reliable Software, California Institute of Technology
  • Sagar Chaki
  • Daniel Kroening
    • ETH Zurich
  • Ofer Strichman
    • Technion
Special section on Tools and Algorithms for the Construction and Analysis of Systems

DOI: 10.1007/s10009-005-0202-0

Cite this article as:
Groce, A., Chaki, S., Kroening, D. et al. Int J Softw Tools Technol Transfer (2006) 8: 229. doi:10.1007/s10009-005-0202-0


In the event that a system does not satisfy a specification, a model checker will typically automatically produce a counterexample trace that shows a particular instance of the undesirable behavior. Unfortunately, the important steps that follow the discovery of a counterexample are generally not automated. The user must first decide if the counterexample shows genuinely erroneous behavior or is an artifact of improper specification or abstraction. In the event that the error is real, there remains the difficult task of understanding the error well enough to isolate and modify the faulty aspects of the system. This paper describes a (semi-)automated approach for assisting users in understanding and isolating errors in ANSI C programs. The approach, derived from Lewis’ counterfactual approach to causality, is based on distance metrics for program executions. Experimental results show that the power of the model checking engine can be used to provide assistance in understanding errors and to isolate faulty portions of the source code.
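The distance-metric idea can be illustrated with a minimal sketch (this is not the authors' implementation, which operates on SSA-form program unwindings inside a bounded model checker): represent each execution as a sequence of variable assignments, and count the assignments on which a failing execution and a passing execution differ. The trace representation and function name below are illustrative assumptions.

```python
def execution_distance(trace_a, trace_b):
    """Count the assignments on which two executions differ.

    Each trace is a list of (variable, value) pairs, one per step.
    Steps are compared positionally; unmatched tail steps each
    count as one difference.
    """
    dist = sum(1 for a, b in zip(trace_a, trace_b) if a != b)
    dist += abs(len(trace_a) - len(trace_b))
    return dist

# Hypothetical counterexample vs. a closest passing execution:
failing = [("x", 0), ("y", 1), ("z", -1)]
passing = [("x", 0), ("y", 1), ("z", 2)]
print(execution_distance(failing, passing))  # differs only in z
```

A passing execution minimizing this distance pinpoints the smallest change that avoids the error, which is the intuition behind using Lewis-style counterfactuals for fault localization.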


Keywords: Model checking · Error explanation · Fault localization · Automated debugging

Copyright information

© Springer-Verlag 2005