Towards building a universal defect prediction model with rank transformed predictors

Empirical Software Engineering

Abstract

Software defects can lead to undesired results. Correcting defects costs 50% to 75% of the total software development budget. To predict defective files, a prediction model must be built with predictors (e.g., software metrics) obtained either from a project itself (within-project) or from other projects (cross-project). A universal defect prediction model built from a large set of diverse projects would relieve the need to build and tailor prediction models for each individual project. A formidable obstacle to building a universal model is the variation in the distribution of predictors among projects of diverse contexts (e.g., size and programming language). Hence, we propose to cluster projects based on the similarity of the distribution of predictors, and to derive rank transformations using the quantiles of predictors within a cluster. We fit the universal model on the transformed data of 1,385 open source projects hosted on SourceForge and GoogleCode. The universal model obtains prediction performance comparable to the within-project models, yields similar results when applied to five external projects (one Apache and four Eclipse projects), and performs similarly across projects with different context factors. Finally, we investigate which predictors should be included in the universal model. We expect that this work could form a basis for future work on building a universal model and could lead to software support tools that incorporate it into a regular development workflow.
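
For illustration, the sketch below shows, in R, the three steps the abstract describes: clustering projects by the similarity of their predictor distributions, rank-transforming the predictors using quantiles computed per cluster, and fitting a logistic regression model on the pooled transformed data (hclust, cutree, and glm are the R functions referenced in the Notes). The synthetic data, the single "loc" predictor, the two clusters, and the ten-level ranking are illustrative assumptions rather than the paper's exact settings.

    set.seed(1)

    # Hypothetical per-file data from six projects: one predictor ("loc") and a
    # binary defect label. Two groups of projects are generated with very
    # different size distributions so that the clustering step is meaningful.
    files <- data.frame(
      project = rep(paste0("p", 1:6), each = 100),
      loc     = c(rexp(300, rate = 1/100), rexp(300, rate = 1/500)),
      buggy   = rbinom(600, 1, 0.2)
    )

    # Step 1: summarize each project's predictor distribution by its deciles and
    # cluster projects with similar distributions (hclust/cutree, as in the Notes).
    probs <- seq(0.1, 0.9, by = 0.1)
    proj_quantiles <- t(sapply(split(files$loc, files$project),
                               quantile, probs = probs))
    clusters <- cutree(hclust(dist(proj_quantiles)), k = 2)

    # Step 2: within each cluster, pool the predictor values, compute ten quantile
    # breaks, and replace each raw value by its rank among those breaks.
    rank_transform <- function(x, pooled) {
      breaks <- unique(quantile(pooled, probs = seq(0, 1, by = 0.1)))
      as.integer(cut(x, breaks = breaks, include.lowest = TRUE))
    }
    files$cluster  <- clusters[as.character(files$project)]
    files$loc_rank <- ave(files$loc, files$cluster,
                          FUN = function(x) rank_transform(x, x))

    # Step 3: fit the universal model (logistic regression via glm, as in the
    # Notes) on the rank-transformed data pooled across all projects.
    universal <- glm(buggy ~ loc_rank, data = files, family = binomial)
    summary(universal)

Because each value is replaced by its rank among cluster-level quantiles, predictors from projects with very different scales become directly comparable, which is what allows a single model to be fit across all projects.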


Notes

  1. https://www.openhub.net (NOTE: ‘Ohloh’ was changed to ‘Open Hub’ in 2014.)

  2. https://weka.sourceforge.net/doc.dev/weka/classifiers/bayes/NaiveBayes.html

  3. http://www.cs.waikato.ac.nz/ml/weka

  4. http://bug.inf.usi.ch/download.php

  5. http://stat.ethz.ch/R-manual/R-patched/library/stats/html/hclust.html

  6. http://stat.ethz.ch/R-manual/R-patched/library/stats/html/cutree.html

  7. http://stat.ethz.ch/R-manual/R-patched/library/stats/html/glm.html

  8. http://fengzhang.bitbucket.org/replications/universalModel.html


Acknowledgments

The authors would like to thank Professor Ahmed E. Hassan from the Software Analysis and Intelligence Lab (SAIL) at Queen’s University for his strong support of this work. The authors would also like to thank Professor Daniel German from the University of Victoria for his insightful advice. The authors greatly appreciate the help of Mr. Shane McIntosh from the Software Analysis and Intelligence Lab (SAIL) at Queen’s University in improving this work. The authors are also grateful to the anonymous reviewers of MSR and EMSE for their valuable and insightful comments.

Author information

Corresponding author

Correspondence to Feng Zhang.

Additional information

Communicated by: Sung Kim and Martin Pinzger


About this article

Cite this article

Zhang, F., Mockus, A., Keivanloo, I. et al. Towards building a universal defect prediction model with rank transformed predictors. Empir Software Eng 21, 2107–2145 (2016). https://doi.org/10.1007/s10664-015-9396-2
