Evaluation of Measures for Statistical Fault Localisation and an Optimising Scheme

  • David Landsberg
  • Hana Chockler
  • Daniel Kroening
  • Matt Lewis
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9033)

Abstract

Statistical Fault Localisation (SFL) is a widely used method for localising faults in software. SFL gathers coverage details of passed and failed executions over a faulty program and then uses a measure to assign a degree of suspiciousness to each of a chosen set of program entities (statements, predicates, etc.) in that program. The program entities are then inspected by the engineer in descending order of suspiciousness until the bug is found. The effectiveness of this process relies on the quality of the suspiciousness measure. In this paper, we compare 157 measures, 95 of which are new to SFL and borrowed from other branches of science and philosophy. We also present a new measure optimiser Lex_g, which optimises a given measure g according to a criterion of single bug optimality. An experimental comparison on benchmarks from the Software-artifact Infrastructure Repository (SIR) indicates that many of the new measures perform competitively with the established ones. Furthermore, the large-scale comparison reveals that the new measures Lex_Ochiai and Pattern-Similarity perform best overall.
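The Ochiai measure underlying Lex_Ochiai is a standard SFL suspiciousness measure: for a statement s, it is n_ef / sqrt((n_ef + n_nf) * (n_ef + n_ep)), where n_ef and n_ep count failed and passed tests covering s, and n_nf counts failed tests not covering s. The following is a minimal sketch of the ranking process the abstract describes, not the paper's implementation; the function names and the coverage representation (a set of covered statement ids per test) are illustrative assumptions.

```python
import math

def ochiai(n_ef, n_nf, n_ep):
    """Ochiai suspiciousness from the counts of the 2x2 coverage table."""
    denom = math.sqrt((n_ef + n_nf) * (n_ef + n_ep))
    return n_ef / denom if denom else 0.0

def rank_statements(coverage, failed):
    """Rank statements by descending suspiciousness.

    coverage: list of sets, coverage[i] = statement ids executed by test i
    failed:   list of bools, failed[i] = True iff test i failed
    """
    statements = set().union(*coverage)
    total_failed = sum(failed)
    scores = {}
    for s in statements:
        n_ef = sum(1 for cov, f in zip(coverage, failed) if f and s in cov)
        n_ep = sum(1 for cov, f in zip(coverage, failed) if not f and s in cov)
        n_nf = total_failed - n_ef  # failed tests that missed s
        scores[s] = ochiai(n_ef, n_nf, n_ep)
    # The engineer inspects statements in this order until the bug is found.
    return sorted(statements, key=lambda s: scores[s], reverse=True)
```

For example, a statement covered by every failed test and no passed test scores 1.0 and is ranked first, while a statement covered only by passed tests scores 0.0.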

Keywords

Negative Predictive Value, Test Suite, Fault Localisation, Program Entity, Fail Test Case
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • David Landsberg (University of Oxford, Oxford, UK)
  • Hana Chockler (King’s College London, London, UK)
  • Daniel Kroening (University of Oxford, Oxford, UK)
  • Matt Lewis (University of Oxford, Oxford, UK)