Diagnosing Program Errors with Light-Weighted Specifications

  • Rong Chen
  • Franz Wotawa
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4031)


During the last decade, many computer-aided debugging tools have been developed to help users detect program errors in software systems. A good example is model checking, whose tools provide counterexamples when a given program violates its specified properties. However, even with a detailed erroneous run at hand, it remains difficult for users to understand the error well and to isolate its root cause quickly and cheaply. This paper presents object store models for diagnosing program errors with light-weighted specifications. The models we use keep track of object relations arising during program execution, detect counterexamples that violate user-provided properties, and highlight the statements responsible for the violation. We have used the approach to help students locate and correct program errors in their coursework.
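To make the idea concrete, the following is a minimal sketch, not the authors' implementation: the class ObjectStore, the method relate, the source-location strings, and the acyclicity property are all illustrative assumptions. It shows the general pattern the abstract describes: record object relations as pointer updates happen, re-check a user-provided property after each update, and point at the statement whose update caused the violation.

```java
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Map;
import java.util.Set;

/**
 * Hypothetical sketch of an object store that tracks a binary "next"
 * relation between objects and checks a light-weighted specification
 * (here: acyclicity) after every recorded update.
 */
public class ObjectStore {
    // The tracked relation: each object mapped to the objects it points to.
    private final Map<Object, Set<Object>> next = new IdentityHashMap<>();

    // Record that "from.next = to" executed at the given source location,
    // then re-check the user-provided property.
    public void relate(Object from, Object to, String location) {
        next.computeIfAbsent(from,
                k -> Collections.newSetFromMap(new IdentityHashMap<>())).add(to);
        if (hasCycle()) {
            // Highlight the statement whose update violated the property.
            System.out.println("Property 'next is acyclic' violated at " + location);
        }
    }

    // User-provided property: the 'next' relation must be acyclic.
    private boolean hasCycle() {
        Set<Object> visiting = Collections.newSetFromMap(new IdentityHashMap<>());
        Set<Object> done = Collections.newSetFromMap(new IdentityHashMap<>());
        for (Object o : next.keySet()) {
            if (dfs(o, visiting, done)) return true;
        }
        return false;
    }

    // Depth-first search; an edge back into the current path is a cycle.
    private boolean dfs(Object o, Set<Object> visiting, Set<Object> done) {
        if (done.contains(o)) return false;
        if (!visiting.add(o)) return true; // back edge found
        for (Object succ : next.getOrDefault(o, Collections.emptySet())) {
            if (dfs(succ, visiting, done)) return true;
        }
        visiting.remove(o);
        done.add(o);
        return false;
    }

    public static void main(String[] args) {
        ObjectStore store = new ObjectStore();
        Object a = new Object(), b = new Object();
        store.relate(a, b, "List.java:12"); // fine
        store.relate(b, a, "List.java:27"); // creates a cycle -> reported
    }
}
```

In the paper's setting, such a violation report would feed a diagnosis step that isolates the faulty statements; the sketch only illustrates the monitoring and blame-assignment side of the idea.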


Keywords: Model Checking, Binary Relation, Object Relation, Symbolic Execution, Class Creation





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Rong Chen (1)
  • Franz Wotawa (2)
  1. College of Computer Science and Technology, Dalian Maritime University, Dalian, China
  2. Institute for Software Technology, Technische Universität Graz, Graz, Austria
