Will this localization tool be effective for this bug? Mitigating the impact of unreliability of information retrieval based bug localization tools

Abstract

Information retrieval (IR) based bug localization approaches process a textual bug report and a collection of source code files to find buggy files. They output a ranked list of files sorted by their likelihood of containing the bug. Recently, several IR-based bug localization tools have been proposed. However, no tool can successfully localize the fault within a small number of the most suspicious program elements for every input bug report, so it is difficult for developers to decide which tool would be effective for a given bug report. Furthermore, for some bug reports, no bug localization tool is useful at all. Even a state-of-the-art bug localization tool outputs many ranked lists in which the buggy files appear very low, which can cause developers to distrust bug localization tools. In this work, we build an oracle that automatically predicts whether a ranked list produced by an IR-based bug localization tool is likely to be effective. We consider a ranked list effective if a buggy file appears in the top-N positions of the list. If a ranked list is unlikely to be effective, developers need not waste time checking the recommended files one by one; in such cases, it is better for them to use traditional debugging methods or to request further information to localize the bug. To build this oracle, our approach extracts features in four categories: score features, textual features, topic model features, and metadata features. We build a separate prediction model for each category and combine the models into a composite prediction model, which serves as the oracle. We name this solution APRILE, which stands for Automated PRediction of IR-based Bug Localization’s Effectiveness. We further integrate APRILE with two other components that are learned using our bagging-based ensemble classification (BEC) method, and we refer to this extension as APRILE+. We have evaluated APRILE+ on predicting the effectiveness of three state-of-the-art IR-based bug localization tools for more than three thousand bug reports from AspectJ, Eclipse, SWT, and Tomcat. APRILE+ achieves an average precision, recall, and F-measure of 77.61%, 88.94%, and 82.09%, respectively. Furthermore, APRILE+ outperforms both a baseline approach by Le and Lo and APRILE, by up to a 17.43% and a 10.51% increase in F-measure, respectively.
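To make the two ideas above concrete, the following minimal Python sketch (ours, not the authors' implementation) trains one bagged classifier per feature category and averages their probabilities into a composite prediction of top-N effectiveness. The feature matrices, dimensions, ensemble size, and 0.5 decision threshold are all hypothetical placeholders; APRILE+ tunes its model combination on training data rather than using a plain average.

    import numpy as np
    from sklearn.ensemble import BaggingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import precision_score, recall_score, f1_score

    rng = np.random.default_rng(0)
    n = 1000  # hypothetical number of (bug report, ranked list) instances

    # Hypothetical feature matrices, one per APRILE feature category.
    features = {
        "score": rng.normal(size=(n, 5)),     # suspiciousness-score statistics
        "textual": rng.normal(size=(n, 8)),   # bug-report text statistics
        "topic": rng.normal(size=(n, 10)),    # topic-model similarities
        "metadata": rng.normal(size=(n, 4)),  # report metadata fields
    }
    # Label: 1 if a buggy file appears in the top-N positions, else 0.
    y = rng.integers(0, 2, size=n)

    # One classifier per category, each a bagging ensemble: bootstrap
    # resamples of the training data, predictions averaged over members.
    models = {
        name: BaggingClassifier(LogisticRegression(max_iter=1000),
                                n_estimators=25, random_state=0).fit(X, y)
        for name, X in features.items()
    }

    # Composite oracle: average the per-category probabilities, then
    # threshold. (Evaluation here reuses the training data for brevity;
    # in practice predictions are made for held-out bug reports.)
    prob = np.mean([models[name].predict_proba(X)[:, 1]
                    for name, X in features.items()], axis=0)
    pred = (prob >= 0.5).astype(int)

    print("precision:", precision_score(y, pred))
    print("recall:   ", recall_score(y, pred))
    print("F-measure:", f1_score(y, pred))

Averaging probabilities is the simplest way to combine per-category models; the point of bagging each one is to reduce the variance of the individual classifiers before they vote.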

Notes

  1. https://bugcenter.googlecode.com/files/BugLocator.zip

  2. http://dev.mysql.com/doc/refman/5.1/en/fulltext-stopwords.html

  3. https://www.st.cs.uni-saarland.de/ibugs/

  4. http://goo.gl/Ojqrrp

  5. https://bugcenter.googlecode.com/files/swt-3.1.zip

  6. http://svn.apache.org/repos/asf/tomcat/trunk/

  7. https://github.com/lebuitienduy/aprile_plus

  8. http://nlp.stanford.edu/software/tmt/tmt-0.4/

References

  1. Abreu R, Zoeteweij P, Golsteijn R, Van Gemund AJ (2009) A practical evaluation of spectrum-based fault localization. J Syst Softw 82(11):1780–1792

  2. Antoniol G, Ayari K, Di Penta M, Khomh F, Guéhéneuc YG (2008) Is it a bug or an enhancement?: A text-based approach to classify change requests. In: Proceedings of the 2008 Conference of the Center for Advanced Studies on Collaborative Research: Meeting of Minds, ACM, New York, NY, USA, CASCON ’08, pp 23:304–23:318

  3. Ayewah N, Pugh W (2010) The Google FindBugs fixit. In: Proceedings of the 19th International Symposium on Software Testing and Analysis, ACM, pp 241–252

  4. Bachmann A, Bernstein A (2009) Software process data quality and characteristics: a historical view on open and closed source projects. In: Proceedings of the joint international and annual ERCIM workshops on principles of software evolution (IWPSE) and software evolution (Evol) workshops, ACM, pp 119–128

  5. Bauer E, Kohavi R (1999) An empirical comparison of voting classification algorithms: bagging, boosting, and variants. Mach Learn 36(1-2):105–139

  6. Blei DM, Ng AY, Jordan MI (2003) Latent Dirichlet allocation. J Mach Learn Res 3:993–1022

  7. Bowring JF, Rehg JM, Harrold MJ (2004) Active learning for automatic classification of software behavior. In: Proceedings of the 2004 International Symposium on Software Testing and Analysis (ISSTA ’04), Boston, MA, pp 195–205

  8. Breiman L (1996a) Bagging predictors. Mach Learn 24(2):123–140. doi:10.1007/BF00058655

  9. Breiman L (1996b) Bagging predictors. Mach Learn 24:123–140

  10. Broomhead DS, Lowe D (1988) Multivariable functional interpolation and adaptive networks. Complex Syst 2:321–355

  11. Brun Y, Ernst MD (2004) Finding latent code errors via machine learning over program executions. In: Proceedings of the 26th International Conference on Software Engineering (ICSE ’04), Edinburgh, Scotland

  12. Cleve H, Zeller A (2005) Locating causes of program failures. In: Proceedings of the 27th International Conference on Software Engineering, ACM, New York, NY, USA, ICSE ’05, pp 342–351

  13. Cronen-Townsend S, Zhou Y, Croft WB (2002) Predicting query performance. In: Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, ACM, pp 299–306

  14. Han J, Kamber M (2006) Data Mining: Concepts and Techniques, 2nd edn. Morgan Kaufmann

  15. He B, Ounis I (2004) Inferring query performance using pre-retrieval predictors. In: String processing and information retrieval, Springer, pp 43–54

  16. Heckman S, Williams L (2011) A systematic literature review of actionable alert identification techniques for automated static code analysis. Inf Softw Technol 53(4):363–387

  17. Hovemeyer D, Pugh W (2004) Finding bugs is easy. ACM SIGPLAN Notices 39(12):92–106

  18. Jalbert N, Weimer W (2008) Automated duplicate detection for bug tracking systems. In: IEEE International Conference on Dependable Systems and Networks with FTCS and DCC (DSN 2008), pp 52–61

  19. Johnson B, Song Y, Murphy-Hill E, Bowdidge R (2013) Why don’t software developers use static analysis tools to find bugs?. In: 35th International Conference on Software Engineering (ICSE 2013), IEEE, pp 672–681

  20. Johnson S (1978) Lint, a C program checker. Computing Science Technical Report 65, Bell Laboratories

  21. Jones JA, Harrold MJ (2005) Empirical evaluation of the Tarantula automatic fault-localization technique. In: Proceedings of the 20th IEEE/ACM International Conference on Automated Software Engineering, ACM, pp 273–282

  22. Kim S, Ernst MD (2007) Which warnings should I fix first?. In: Proceedings of the 6th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on The Foundations of Software Engineering, ACM, pp 45–54

  23. Kochhar PS, Tian Y, Lo D (2014) Potential biases in bug localization: do they matter?. In: ACM/IEEE International Conference on Automated Software Engineering, ASE ’14, Västerås, Sweden, September 15-19, 2014, pp 803–814

  24. Kochhar PS, Xia X, Lo D, Li S (2016) Practitioners’ expectations on automated fault localization. In: Proceedings of the 25th International Symposium on Software Testing and Analysis, ACM, pp 165–176

  25. Kohavi R, Wolpert DH et al (1996) Bias plus variance decomposition for zero-one loss functions. In: ICML, vol 96, pp 275–283

  26. Lamkanfi A, Demeyer S, Giger E, Goethals B (2010) Predicting the severity of a reported bug. In: 7th IEEE Working Conference on Mining Software Repositories (MSR), IEEE, pp 1–10

  27. Lamkanfi A, Demeyer S, Soetens QD, Verdonck T (2011) Comparing mining algorithms for predicting the severity of a reported bug. In: 15th European Conference on Software Maintenance and Reengineering (CSMR), IEEE, pp 249–258

  28. Le TD, Lo D (2013) Will fault localization work for these failures? An automated approach to predict effectiveness of fault localization tools. In: 2013 IEEE International Conference on Software Maintenance, Eindhoven, The Netherlands, September 22-28, 2013, pp 310–319

  29. Le TD, Lo D, Thung F (2014a) Should I follow this fault localization tool’s output? Empir Softw Eng pp 1–38. doi:10.1007/s10664-014-9349-1

  30. Le TD, Thung F, Lo D (2014b) Predicting effectiveness of IR-based bug localization techniques. In: 25th IEEE International Symposium on Software Reliability Engineering, ISSRE 2014, Naples, Italy, November 3-6, 2014, pp 335–345

  31. Le TD, Oentaryo RJ, Lo D (2015) Information retrieval and spectrum based bug localization: better together. In: Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, ESEC/FSE 2015, Bergamo, Italy, August 30 - September 4, 2015. doi:10.1145/2786805.2786880, pp 579–590

  32. Le TD, Lo D, Le Goues C, Grunske L (2016) A learning-to-rank based fault localization approach using likely invariants. In: Proceedings of the 25th International Symposium on Software Testing and Analysis, ACM, pp 177–188

  33. Lemmens A, Croux C (2006) Bagging and boosting classification trees to predict churn. J Mark Res 43(2):276–286

  34. Lucia, Lo D, Jiang L, Thung F, Budi A (2014) Extended comprehensive study of association measures for fault localization. J Softw: Evol Process 26(2):172–219

  35. Lukins SK, Kraft NA, Etzkorn LH (2010) Bug localization using latent Dirichlet allocation. Inf Softw Technol 52(9):972–990

  36. Manning CD, Raghavan P, Schütze H (2008) Introduction to information retrieval. Cambridge University Press, New York

  37. Marcus A, Maletic JI (2003) Recovering documentation-to-source-code traceability links using latent semantic indexing. In: Proceedings of the 25th International Conference on Software Engineering, May 3-10, 2003, Portland, Oregon, USA, pp 125–137

  38. Menzies T, Marcus A (2008) Automated severity assessment of software defect reports. In: 24th IEEE International Conference on Software Maintenance (ICSM 2008), September 28 - October 4, 2008, Beijing, China, pp 346–355

  39. Mitchell T (1997) Machine Learning. McGraw-Hill

  40. Mothe J, Tanguy L (2005) Linguistic features to predict query difficulty. In: ACM SIGIR 2005 Workshop on Predicting Query Difficulty - Methods and Applications, pp 7–10

  41. Parnin C, Orso A (2011) Are automated debugging techniques actually helping programmers?. In: Proceedings of the 20th International Symposium on Software Testing and Analysis, ISSTA 2011, Toronto, ON, Canada, July 17-21, 2011, pp 199–209

  42. Porter MF (1980) An algorithm for suffix stripping. Program 14(3):130–137

  43. Prasad AM, Iverson LR, Liaw A (2006) Newer classification and regression tree techniques: bagging and random forests for ecological prediction. Ecosystems 9(2):181–199

  44. Rao S, Kak AC (2011) Retrieval from software libraries for bug localization: a comparative study of generic and composite text models. In: Proceedings of the 8th International Working Conference on Mining Software Repositories, MSR 2011 (co-located with ICSE), Waikiki, Honolulu, HI, USA, May 21-28, 2011, pp 43–52

  45. Saha RK, Lease M, Khurshid S, Perry DE (2013) Improving bug localization using structured information retrieval. In: 28th IEEE/ACM International Conference on Automated Software Engineering (ASE 2013), Silicon Valley, CA, USA, November 11-15, 2013, pp 345–355

  46. Seo H, Kim S (2012) Predicting recurring crash stacks. In: IEEE/ACM International Conference on Automated Software Engineering, ASE ’12, Essen, Germany, September 3-7, 2012, pp 180–189

  47. Shihab E, Ihara A, Kamei Y, Ibrahim WM, Ohira M, Adams B, Hassan AE, Matsumoto K (2010) Predicting re-opened bugs: a case study on the Eclipse project. In: 17th Working Conference on Reverse Engineering (WCRE 2010), 13-16 October 2010, Beverly, MA, USA, pp 249–258

  48. Shihab E, Ihara A, Kamei Y, Ibrahim WM, Ohira M, Adams B, Hassan AE, Matsumoto K (2013) Studying re-opened bugs in open source software. Empir Softw Eng 18(5):1005–1042

  49. Shtok A, Kurland O, Carmel D (2009) Predicting query performance by query-drift estimation. In: Advances in Information Retrieval Theory, Springer, pp 305–312

  50. Shtok A, Kurland O, Carmel D (2010) Using statistical decision theory and relevance models for query-performance prediction. In: Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval, ACM, pp 259–266

  51. Sisman B, Kak AC (2012) Incorporating version histories in information retrieval based bug localization. In: Proceedings of the 9th IEEE Working Conference on Mining Software Repositories, IEEE Press, pp 50–59

  52. Tantithamthavorn C, Ihara A, Matsumoto K (2013) Using co-change histories to improve bug localization performance. In: 14th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD 2013), Honolulu, Hawaii, USA, 1-3 July 2013, pp 543–548

  53. Tassey G (2002) The economic impacts of inadequate infrastructure for software testing. National Institute of Standards and Technology, Planning Report 02-3

  54. Thomas SW, Nagappan M, Blostein D, Hassan AE (2013) The impact of classifier configuration and classifier combination on bug localization. IEEE Trans Softw Eng 39(10):1427–1443

  55. Tian Y, Lo D, Sun C (2012a) Information retrieval based nearest neighbor classification for fine-grained bug severity prediction. In: 19th Working Conference on Reverse Engineering (WCRE), IEEE, pp 215–224

  56. Tian Y, Sun C, Lo D (2012b) Improved duplicate bug report identification. In: 16th European Conference on Software Maintenance and Reengineering, CSMR 2012, Szeged, Hungary, March 27-30, 2012, pp 385–390

  57. Valdivia Garcia H, Shihab E (2014) Characterizing and predicting blocking bugs in open source projects. In: Proceedings of the 11th Working Conference on Mining Software Repositories, ACM, pp 72–81

  58. Vinay V, Cox IJ, Milic-Frayling N, Wood K (2006) On ranking the effectiveness of searches. In: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, ACM, pp 398–404

  59. Wang S, Lo D (2014) Version history, similar report, and structure: Putting them together for improved bug localization. In: Proceedings of the 22nd International Conference on Program Comprehension, ACM, pp 53–63

  60. Xia X, Bao L, Lo D, Li S (2016) “Automated debugging considered harmful” considered harmful: a user study revisiting the usefulness of spectra-based fault localization techniques with professionals using real bugs from large systems. In: Proceedings of the 32nd International Conference on Software Maintenance and Evolution (ICSME)

  61. Xie X, Chen TY, Kuo FC, Xu B (2013a) A theoretical analysis of the risk evaluation formulas for spectrum-based fault localization. ACM Trans Softw Eng Methodol 22(4):31

  62. Xie X, Kuo FC, Chen TY, Yoo S, Harman M (2013b) Provably optimal and human-competitive results in SBSE for spectrum based fault localisation. In: International Symposium on Search Based Software Engineering, Springer, pp 224–238

  63. Xuan J, Monperrus M (2014) Learning to combine multiple ranking metrics for fault localization. In: Proceedings of the 2014 IEEE International Conference on Software Maintenance and Evolution, IEEE Computer Society, pp 191–200

  64. Yoo S (2012) Evolving human competitive spectra-based fault localisation techniques. In: International Symposium on Search Based Software Engineering, Springer, pp 244–258

  65. Zeller A (2002) Isolating cause-effect chains from computer programs. In: Proceedings of the Tenth ACM SIGSOFT Symposium on Foundations of Software Engineering, Charleston, South Carolina, USA, November 18-22, 2002, pp 1–10

  66. Zeller A, Hildebrandt R (2002) Simplifying and isolating failure-inducing input. IEEE Trans Softw Eng 28(2):183–200

  67. Zhou J, Zhang H, Lo D (2012) Where should the bugs be fixed? More accurate information retrieval-based bug localization based on bug reports. In: 34th International Conference on Software Engineering (ICSE 2012), June 2-9, 2012, Zurich, Switzerland, pp 14–24

  68. Zimmermann T, Nagappan N, Gall HC, Giger E, Murphy B (2009) Cross-project defect prediction: a large scale experiment on data vs. domain vs. process. In: Proceedings of the 7th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT International Symposium on Foundations of Software Engineering, Amsterdam, The Netherlands, August 24-28, 2009, pp 91–100

Author information

Corresponding author

Correspondence to Tien-Duy B. Le.

Additional information

Communicated by: Lin Tan

About this article

Cite this article

Le, T.D.B., Thung, F. & Lo, D. Will this localization tool be effective for this bug? Mitigating the impact of unreliability of information retrieval based bug localization tools. Empir Software Eng 22, 2237–2279 (2017). https://doi.org/10.1007/s10664-016-9484-y

Keywords

  • Text classification
  • Information retrieval
  • Bug reports
  • Bug localization
  • Effectiveness prediction