Assessment of the Software Defect Prediction Cost Effectiveness in an Industrial Project

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 504)

Abstract

Software defect prediction is a promising new approach to increasing both software quality and development pace. Unfortunately, pioneering companies rarely share information on the cost effectiveness of software defect prediction in industrial settings. In particular, the cost effectiveness of using the DePress open source software measurement framework, developed by Wroclaw University of Science and Technology and the Capgemini software development company, for defect prediction in commercial software development projects has not been previously investigated. In this paper, we therefore explore whether defect prediction can positively impact an industrial software development project by generating profit. To meet this goal, we performed defect prediction and simulated potential quality assurance costs based on the best prediction result and the proposed Quality Assurance (QA) strategy. The results of our investigation were optimistic: we estimated that quality assurance costs can be reduced by almost 30 % when the proposed approach is used, while the estimated Return on Investment (ROI) of DePress usage is fully 73 (7300 %) and the Benefits Cost Ratio (BCR) is 74. These promising results led Volvo Group to accept continued use of DePress-based software defect prediction in its industrial projects.
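For context, the figures above follow the standard definitions ROI = (benefits − cost) / cost and BCR = benefits / cost, so BCR = ROI + 1, which is consistent with the reported values of 73 and 74. The Python sketch below only illustrates this arithmetic; the cost and benefit values are hypothetical, chosen to reproduce the reported figures, and are not data from the study.

    # Standard ROI and BCR definitions; the input values below are
    # hypothetical and merely reproduce the figures quoted in the abstract.

    def roi(benefits: float, cost: float) -> float:
        """Return on Investment: net gain relative to cost."""
        return (benefits - cost) / cost

    def bcr(benefits: float, cost: float) -> float:
        """Benefits Cost Ratio: gross gain relative to cost."""
        return benefits / cost

    cost = 1.0       # hypothetical DePress adoption cost (one unit)
    benefits = 74.0  # benefits implied by the reported BCR of 74

    print(roi(benefits, cost))  # 73.0 -> 7300 %, the reported ROI
    print(bcr(benefits, cost))  # 74.0, the reported BCR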


Copyright information

© Springer International Publishing Switzerland 2017

Authors and Affiliations

  1. Faculty of Computer Science and Management, Wroclaw University of Science and Technology, Wrocław, Poland
  2. Volvo Group, Gothenburg, Sweden
