Software Quality Journal, Volume 23, Issue 2, pp 205–227

Predicting defective modules in different test phases

  • Bora Caglayan
  • Ayse Tosun Misirli
  • Ayse Basar Bener
  • Andriy Miranskyy

Abstract

Defect prediction is a well-established research area in software engineering. However, prediction models in the literature do not predict defect-prone modules in different test phases. We investigate the relationships between defects and test phases in order to build defect prediction models for each test phase. We mined the version history of a large-scale enterprise software product to extract churn and static code metrics. We used the three testing phases employed by our industry partner, namely function, system, and field testing, to build a learning-based model for each phase. We also examined how different defect symptoms relate to the testing phases. We compared the performance of our proposed models with a benchmark model constructed for the entire test phase. Our results show that building a model to predict defect-prone modules for each test phase significantly improves defect prediction performance and shortens defect detection time. The benefit analysis shows that, using the proposed models, defects are detected on average 7 months earlier than their actual detection dates. The outcome of a prediction model should lead to an action in a software development organization. Our proposed models give a more granular outcome by predicting defect-prone modules in each testing phase, so that managers can better organize testing teams and effort.
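To make the modeling setup concrete, the sketch below contrasts a single benchmark model trained over the entire test phase with one model per testing phase, in the spirit of the abstract. It is a minimal illustration in Python with scikit-learn, not the authors' implementation: the synthetic data, the per-phase labels, and the choice of a Naive Bayes learner are all assumptions introduced here.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    # Hypothetical data: churn and static code metrics per module, plus
    # per-phase defect labels (synthetic stand-ins, not data from the paper).
    rng = np.random.default_rng(0)
    n_modules, n_metrics = 500, 10
    X = rng.random((n_modules, n_metrics))
    phases = ["function", "system", "field"]
    y_phase = {p: rng.integers(0, 2, n_modules) for p in phases}

    # Benchmark model: one predictor over the entire test phase
    # (a module counts as defective if it failed in any phase).
    y_all = np.maximum.reduce([y_phase[p] for p in phases])
    bench = cross_val_score(GaussianNB(), X, y_all, cv=10, scoring="recall")
    print(f"benchmark  recall: {bench.mean():.2f}")

    # Proposed setup: one predictor per testing phase, yielding a
    # phase-specific list of defect-prone modules.
    for p in phases:
        score = cross_val_score(GaussianNB(), X, y_phase[p], cv=10, scoring="recall")
        print(f"{p:9s} recall: {score.mean():.2f}")

Only the experimental contrast is meant to carry over: phase-specific labels give phase-specific predictions, which is what lets managers target testing effort per phase.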

Keywords

Software testing · Testing phase · Defect prediction

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Bora Caglayan (1)
  • Ayse Tosun Misirli (2)
  • Ayse Basar Bener (3)
  • Andriy Miranskyy (4)

  1. Department of Computer Engineering, Bogazici University, Istanbul, Turkey
  2. Department of Information Processing Science, Oulu University, Oulu, Finland
  3. Data Science Lab, Department of Mechanical and Industrial Engineering, Ryerson University, Toronto, Canada
  4. Department of Computer Science, Ryerson University, Toronto, Canada