Empirical Software Engineering, Volume 24, Issue 2, pp 602–636

On the relative value of data resampling approaches for software defect prediction

  • Kwabena Ebo Bennin
  • Jacky W. Keung
  • Akito Monden


Abstract

Software defect data sets are typically characterized by an imbalanced class distribution in which the defective modules are far fewer than the non-defective modules. The prediction performance of defect prediction models is detrimentally affected by this skewed distribution of faulty minority modules, since most learning algorithms assume both classes to be equally represented. Resampling approaches address this concern by modifying the class distribution so as to balance the minority and majority classes. However, very little is known about the best distribution for attaining high performance, especially in practical scenarios, and results remain inconclusive regarding the suitable ratio of defective to clean instances (Pfp), the statistical and practical impact of resampling approaches on prediction performance, and which resampling approach is the most stable across several performance measures. To assess these issues, we investigated the bias and effect of commonly used resampling approaches on prediction accuracy in software defect prediction. We analyzed six resampling approaches on 40 releases of 20 open-source projects across five performance measures and five imbalance rates. The experimental results indicate statistically significant differences between the prediction results with and without resampling methods when evaluated with the geometric mean, recall (pd), probability of false alarm (pf) and balance measures. However, resampling methods could not improve the AUC values across all prediction models, implying that resampling methods can help in defect classification but not in defect prioritization. A stable Pfp rate was dependent on the performance measure used: lower Pfp rates are required for lower pf values, while higher Pfp rates are required for higher pd values. Random Under-Sampling and Borderline-SMOTE proved to be the most stable of the studied resampling methods across several performance measures. The performance of resampling methods depends on the imbalance ratio, the evaluation measure and, to some extent, the prediction model. Newer oversampling methods should aim at generating relevant and informative data samples, not merely increasing the number of minority samples.
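To make the quantities in the abstract concrete, the following minimal Python sketch (illustrative only, not the authors' implementation; the names random_under_sample and classification_measures are assumptions introduced here) shows random under-sampling of the clean class down to a target defect ratio Pfp, and the computation of pd, pf, geometric mean and balance from a confusion matrix, using the definitions standard in the defect prediction literature, i.e. g-mean = sqrt(pd * (1 - pf)) and balance = 1 - sqrt(pf^2 + (1 - pd)^2) / sqrt(2).

    import math
    import random

    def random_under_sample(X, y, pfp, seed=0):
        """Randomly drop clean (y == 0) instances until defective instances
        (y == 1) form a fraction pfp of the training set (0 < pfp < 1)."""
        rng = random.Random(seed)
        defective = [i for i, label in enumerate(y) if label == 1]
        clean = [i for i, label in enumerate(y) if label == 0]
        # Solve n_def / (n_def + n_clean_kept) == pfp for n_clean_kept.
        n_clean_kept = round(len(defective) * (1 - pfp) / pfp)
        kept = defective + rng.sample(clean, min(n_clean_kept, len(clean)))
        rng.shuffle(kept)
        return [X[i] for i in kept], [y[i] for i in kept]

    def classification_measures(y_true, y_pred):
        """pd (recall), pf (false alarm rate), geometric mean and balance,
        with the defective class encoded as 1."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        pd = tp / (tp + fn) if tp + fn else 0.0
        pf = fp / (fp + tn) if fp + tn else 0.0
        g_mean = math.sqrt(pd * (1 - pf))
        # balance: normalized distance from the ideal point (pf = 0, pd = 1).
        balance = 1 - math.sqrt(pf ** 2 + (1 - pd) ** 2) / math.sqrt(2)
        return {"pd": pd, "pf": pf, "g-mean": g_mean, "balance": balance}

    # Example: 3 defective and 7 clean modules, resampled so that Pfp = 0.5.
    y = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
    X = list(range(len(y)))
    X_bal, y_bal = random_under_sample(X, y, pfp=0.5)
    assert sum(y_bal) == 3 and len(y_bal) == 6

Oversampling approaches such as SMOTE and Borderline-SMOTE work in the opposite direction: rather than discarding clean instances, they synthesize new minority instances (concentrated near the class boundary, in Borderline-SMOTE's case) until the same target Pfp is reached.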


Keywords

Software defect prediction · Imbalanced data · Data resampling approaches · Class imbalance · Empirical study



This work is supported in part by the General Research Fund of the Research Grants Council of Hong Kong (No. 11208017), the research funds of City University of Hong Kong (Nos. 7004474 and 7004683), and JSPS KAKENHI Grant No. 17K00102.



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Computer Science, City University of Hong Kong, Kowloon Tong, Hong Kong
  2. Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
