Software Quality Journal, Volume 23, Issue 2, pp 363–390

A quality cost reduction model for large-scale software development

  • Tihana Galinac Grbac
  • Željka Car
  • Darko Huljenić


Understanding quality costs is recognized as a prerequisite for decreasing the variability of the success of software development projects. This paper presents an empirical quality cost reduction (QCR) model to support the decision-making process for additional investment in the early phases of software verification. The main idea of the QCR model is to direct additional investment into software units that have fault-slip potential in later verification phases, with the aim of reducing costs and increasing product quality. The fault-slip potential of a software unit within a system is determined by analogy with historical projects. After a preliminary study on a sample of software units, which shows that quality costs can be lowered through additional investment in particular verification activities, we examine the effectiveness of the proposed QCR model using real project data. The results show that applying the model produces a positive business case: the model lowers quality costs and increases quality, resulting in economic benefit. The potential to reduce quality costs grows significantly with the evolution of software systems and the reuse of their software units. The proposed model is the result of a research project performed at Ericsson.
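The selection step described above (directing extra early-verification effort to units whose fault-slip potential, estimated by analogy with historical projects, is high) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual model; all function names, data shapes, and the threshold value are assumptions made for illustration.

```python
# Hypothetical sketch of the QCR selection idea: estimate each unit's
# fault-slip potential from historical fault records, then pick units
# that warrant additional early-verification investment.
# Names, data layout, and threshold are illustrative, not from the paper.

def fault_slip_potential(historical_faults):
    """Fraction of a unit's historical faults that slipped past early
    verification into later phases (0.0 to 1.0)."""
    if not historical_faults:
        return 0.0
    slipped = sum(1 for f in historical_faults if f["phase"] == "late")
    return slipped / len(historical_faults)

def select_units_for_investment(units, threshold=0.3):
    """Return names of units whose estimated fault-slip potential
    exceeds the threshold, i.e. candidates for extra early inspection."""
    return [name for name, faults in units.items()
            if fault_slip_potential(faults) > threshold]

# Example: fault histories for two units from analogous past projects.
units = {
    "unit_A": [{"phase": "early"}, {"phase": "late"}, {"phase": "late"}],
    "unit_B": [{"phase": "early"}, {"phase": "early"}],
}
print(select_units_for_investment(units))  # -> ['unit_A']
```

Here unit_A (two of three faults found late) crosses the illustrative threshold, while unit_B (all faults caught early) does not, so only unit_A would receive the additional investment.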


Keywords: Quality cost · Verification · Control model · Fault detection · Large-scale software



The first author is partially supported by a University of Rijeka research grant.



Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Tihana Galinac Grbac, Faculty of Engineering, University of Rijeka, Rijeka, Croatia
  • Željka Car, Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
  • Darko Huljenić, Ericsson Nikola Tesla, Zagreb, Croatia
