
Are delayed issues harder to resolve? Revisiting cost-to-fix of defects throughout the lifecycle

Published in Empirical Software Engineering.

Abstract

Many practitioners and academics believe in a delayed issue effect (DIE); i.e., the longer an issue lingers in the system, the more effort it requires to resolve. This belief is often used to justify major investments in new development processes that promise to retire more issues sooner. This paper tests for the delayed issue effect in 171 software projects conducted around the world between 2006 and 2014. To the best of our knowledge, this is the largest study yet published on this effect. We found no evidence for the delayed issue effect; i.e., the effort to resolve issues in a later phase was not consistently or substantially greater than when issues were resolved soon after their introduction. This paper documents the above study and explores reasons for the mismatch between this common rule of thumb and empirical data. In summary, DIE is not some constant across all projects. Rather, DIE might be a historical relic that occurs intermittently only in certain kinds of projects. This is a significant result since it predicts that new development processes that promise to retire more issues faster will not have a guaranteed return on investment (depending on the context where applied), and that a long-held truth in software engineering should not be considered a global truism.
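
The comparison the abstract describes can be sketched in miniature. The snippet below uses invented toy data (not the paper's dataset): for each issue we record the phase where it was introduced, the phase where it was resolved, and the fix effort. A strong delayed issue effect would make the median effort for delayed issues much larger than for same-phase fixes.

```python
import statistics

# Toy, invented data: (phase introduced, phase resolved, fix effort in hours).
issues = [
    ("requirements", "requirements", 1.5),
    ("requirements", "design",       2.0),
    ("requirements", "test",         1.8),
    ("design",       "design",       1.0),
    ("design",       "test",         1.2),
    ("coding",       "test",         0.9),
]

# Split efforts by whether the issue was fixed in its phase of origin.
same_phase = [e for p_in, p_out, e in issues if p_in == p_out]
delayed    = [e for p_in, p_out, e in issues if p_in != p_out]

# Under a strong delayed issue effect, the second median would dwarf the first.
print(statistics.median(same_phase), statistics.median(delayed))
```

With these invented numbers the two medians are close, which is the shape of the "no DIE" result the paper reports; the real analysis, of course, controls for project and phase in far more detail.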


(Figures 1–14 are available in the full-text article.)


Notes

  1. Endres & Rombach note that these are not laws of nature in the scientific sense, but theories with repeated empirical evidence.

  2. For example, popular sources such as Pressman (2005), Boehm and Basili (2001), Glass (2002), and Endres and Rombach (2003), with a combined citation count of over 14,500 on Google Scholar, can all trace their evidence to Software Engineering Economics (Boehm 1981).

  3. We use the RqtsErr formulation since this issue typically needs no supportive explanatory text. If we had asked respondents about our more general term “delayed issue effect”, we would have had to burden our respondents with extra explanations.

  4. We selected 30 for this threshold via the central limit theorem (Maxwell 2002).

  5. Recall that in a sorted list of numbers, the inter-quartile range, or IQR, is the difference between the 75th and 25th percentile value.

  6. In retrospect, empirical software engineering studies at that time were extremely rare, and guidance for reporting empirical case studies and experiments has improved substantially since. One of the seminal books on quasi-experimentation and reporting of validity concerns, Cook and Campbell (1979), had not been published when most of the DIE papers were written.
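
To illustrate note 4, here is a minimal sketch with synthetic data (not the study's): by the central limit theorem, means of samples of size 30 drawn from even a heavily skewed distribution are approximately normally distributed, which is the usual justification for treating 30 as a minimum group size.

```python
import random
import statistics

random.seed(1)

# Synthetic, skewed "effort" population (exponential, mean of about 4 hours).
population = [random.expovariate(1 / 4.0) for _ in range(10_000)]

# Means of many samples of size 30: the central limit theorem says these
# cluster tightly and symmetrically around the population mean, even
# though the raw data are strongly right-skewed.
means = [statistics.mean(random.sample(population, 30)) for _ in range(1_000)]

print(round(statistics.mean(means), 2), round(statistics.stdev(means), 2))
```

The standard deviation of the sample means shrinks by roughly a factor of sqrt(30) relative to the raw data, so groups of at least 30 give usably stable averages.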
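
Note 5's definition can be made concrete with a small sketch (invented numbers; linear interpolation between order statistics, as most statistics packages use by default):

```python
def iqr(values):
    """Inter-quartile range: 75th minus 25th percentile."""
    xs = sorted(values)

    def percentile(p):
        # Linear interpolation between the two nearest order statistics.
        k = (len(xs) - 1) * p / 100
        lo = int(k)
        hi = min(lo + 1, len(xs) - 1)
        return xs[lo] + (k - lo) * (xs[hi] - xs[lo])

    return percentile(75) - percentile(25)

print(iqr([1.0, 2.0, 2.5, 3.0, 4.0, 5.5, 7.0, 12.0]))  # 3.5
```

Because it ignores the top and bottom quarters of the data, the IQR is robust to the long-tailed effort values typical of defect data.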

References

  • Arcuri A, Briand L (2011) A practical guide for using statistical tests to assess randomized algorithms in software engineering. In: ICSE’11, pp 1–10

  • Aschwanden C (2010) Convincing the public to accept new medical guidelines. http://goo.gl/RT6SK7. FiveThirtyEight.com. Accessed: 2015-02-10

  • Aschwanden C (2015) Your brain is primed to reach false conclusions. http://goo.gl/OO3B7s. FiveThirtyEight.com. Accessed: 2015-02-10

  • Avram A (2014) IDC study: how many software developers are out there? infoq.com/news/2014/01/IDC-software-developers

  • Bachmann FH, Carballo L, McHale J, Nord RL (2013) Integrate end to end early and often. Softw IEEE 30(4):9–14

  • Baziuk W (1995) BNR/Nortel: path to improve product quality, reliability and customer satisfaction. In: Sixth international symposium on software reliability engineering, 1995. Proceedings. IEEE, pp 256–262

  • Beck K, Beedle M, van Bennekum A, Cockburn A, Cunningham W, Fowler M, Grenning J, Highsmith J, Hunt A, Jeffries R, Kern J, Marick B, Martin RC, Mallor S, Schwaber K, Sutherland J, Thomas D (2001) The agile manifesto. http://www.agilemanifesto.org

  • Beck K (2000) Extreme programming explained: embrace change. Addison Wesley

  • Bettenburg N, Nagappan M, Hassan AE (2012) Think locally, act globally: improving defect and effort prediction models. In: MSR’12

  • Bettenburg N, Nagappan M, Hassan AE (2014) Towards improving statistical modeling of software engineering data: think locally, act globally!. Emp Softw Eng:1–42

  • Boehm BW, Papaccio PN (1988) Understanding and controlling software costs. IEEE Trans Softw Eng 14(10):1462–1477

  • Boehm B (1976) Software engineering. IEEE Trans Comput C-25(12):1226–1241

  • Boehm B (1980) Developing small-scale application software products: some experimental results. In: Proceedings of the IFIP congress, pp 321–326

  • Boehm B (1981) Software engineering economics. Prentice Hall, Englewood Cliffs

  • Boehm B (2010) Architecting: how much and when? In: Oram A, Wilson G (eds) Making software: what really works, and why we believe it. O’Reilly Media, pp 141–186

  • Boehm B, Basili VR (2001) Software defect reduction top 10 list. IEEE Softw:135–137

  • Boehm B, Horowitz E, Madachy R, Reifer D, Clark BK, Steece B, Winsor Brown A, Chulani S, Abts C (2000) Software cost estimation with cocomo II. Prentice Hall

  • Boehm BW (1988) A spiral model of software development and enhancement. Computer 21(5):61–72

  • Boehm BW (2012) Architecting: how much and when. In: Oram A, Wilson G (eds) Making software: what really works, and why we believe it. O’Reilly, pp 161–186

  • Bright P (2015) What windows as a service and a ’free upgrade’ mean at home and at work. http://goo.gl/LOM1NJ/

  • Budgen D, Brereton P, Kitchenham B (2009) Is evidence based software engineering mature enough for practice & policy? In: 33rd Annual IEEE software engineering workshop 2009 (SEW-33). Skovde

  • Carlson MDA, Sean Morrison R (2009) Study design, precision, and validity in observational studies. J Palliative Med 12(1):77–82

  • Cook TD, Campbell DT (1979) Quasi-experimentation: design & analysis issues for field settings. Houghton Mifflin, Boston

  • Daly EB (1977) Management of software development. IEEE Trans Softw Eng SE-3(3):229–242

  • Deming WE (1986) Out of the crisis. MIT Press

  • Devanbu P, Zimmermann T, Bird C (2016) Belief & evidence in empirical software engineering. In: Proceedings of the 38th international conference on software engineering, pp 108–119. ACM

  • Dunsmore HE (1988) Evidence supports some truisms, belies others. (some empirical results concerning software development). IEEE Softw:96–99

  • Efron B, Tibshirani RJ (1993) An introduction to the bootstrap. Mono. Stat. Appl. Probab. Chapman and Hall, London

  • Elssamadisy A, Schalliol G (2002) Recognizing and responding to ”bad smells” in extreme programming. In: Proceedings of the 24th international conference on software engineering, ICSE ’02. ACM, New York, pp 617–622

  • Endres A, Rombach D (2003) A handbook of software and systems engineering: empirical observations, laws and theories. Addison Wesley

  • Fagan ME (1976) Design and code inspections to reduce errors in program development. IBM Syst J 15(3):182–211

  • Fenton NE, Neil M (2000) Software metrics: a roadmap. In: Finkelstein A (ed) Software metrics: a roadmap. Available from http://citeseer.nj.nec.com/fenton00software.html. ACM Press, New York

  • Fenton NE, Ohlsson N (2000) Quantitative analysis of faults and failures in a complex software system. IEEE Trans Softw Eng:797–814

  • Fenton NE, Pfleeger SL (1997) Software metrics: a rigorous & practical approach. International Thompson Press

  • Ghotra B, McIntosh S, Hassan AE (2015) Revisiting the impact of classification techniques on the performance of defect prediction models. In: Proc. of the international conference on software engineering (ICSE), pp 789–800

  • Glass RL (2002) Facts and fallacies of software engineering. Addison-Wesley Professional, Boston

  • Gordon P (2016) The cost of requirements errors. https://goo.gl/HSMQtP

  • Harter DE, Kemerer CF, Slaughter SA (2012) Does software process improvement reduce the severity of defects? A longitudinal field study. IEEE Trans Softw Eng 38(4):810–827

  • Humphrey WS (1995) A discipline for software engineering. Addison-Wesley Longman Publishing Co. Inc.

  • Humphrey WS (2000) Introduction to the team software process. Addison-Wesley Longman Ltd., Essex

  • Humphrey WS (2005) TSP(SM)-leading a development team (SEI series in software engineering). Addison-Wesley Professional

  • IEEE-1012 (1998) IEEE standard 1012-2004 for software verification and validation

  • IfSQ (2013) Catching defects during testing is 10 times more expensive. https://goo.gl/dXIwu4

  • Gartner Inc (2014) Gartner says worldwide software market grew 4.8 percent in 2013. gartner.com/newsroom/id/2696317

  • Jacobson I, Booch G, Rumbaugh J (1999) The unified software development process. Addison-Wesley Reading

  • Jones C (2007) Estimating software costs, 2nd edn. McGraw-Hill

  • Jones C, Bonsignour O (2012) The economics of software quality. Addison Wesley

  • Jørgensen M, Gruschke TM (2009) The impact of lessons-learned sessions on effort estimation and uncertainty assessments. IEEE Trans Softw Eng 35(3):368–383

  • Kampenes VB, Dybå T, Hannay JE, Sjøberg DIK (2007) A systematic review of effect size in software engineering experiments. Inf Softw Technol 49(11-12):1073–1086

  • Karg LM, Grottke M, Beckhaus A (2011) A systematic literature review of software quality cost research. J Syst Softw 84(3):415–427

  • Kim D (2013) Making agile mandatory at the Department of Defense

  • Kitchenham BA, Dyba T, Jørgensen M (2004) Evidence-based software engineering. In: ICSE ’04: Proceedings of the 26th international conference on software engineering. IEEE Computer Society, Washington, pp 273–281

  • Kocaguneli E, Zimmermann T, Bird C, Nagappan N, Menzies T (2013) Distributed development considered harmful? In: Proceedings - international conference on software engineering, pp 882–890

  • Lapham MA, Garcia-Miller S, Nemeth-Adams L, Brown N, Hackemack L, Hammons CB, Levine L, Schenker AR (2011) Agile methods: selected DoD management and acquisition concerns. Technical report, Carnegie Mellon University - Software Engineering Institute

  • Larman C, Basili VR (2003) Iterative and incremental development: a brief history. Computer 36(6):47–56

  • Leffingwell D (1996) Calculating your return on investment from more effective requirements management. http://goo.gl/3WHsla. Rational Software Corporation

  • Madigan D, Stang PE, Berlin JA, Schuemie M, Overhage MJ, Suchard MA, Dumouchel B, Hartzema AG, Ryan PB (2014) A systematic statistical approach to evaluating evidence from observational studies. Ann Rev Stat Appl 1:11–39

  • Maxwell KD (2002) Applied statistics for software managers. Prentice-Hall, Englewood Cliffs

  • McConnell S (1996) Software quality at top speed. Softw Develop 4(8):38–42

  • McConnell S (2001) An ounce of prevention. IEEE Softw 18(3):5–7

  • McHale J (2002) TSP: process costs and benefits. Crosstalk

  • Mead NR, Allen JH, Barnum S, Ellison RJ, McGraw G (2004) Software security engineering: a guide for project managers. Addison-Wesley Professional

  • Menzies T, Benson M, Costello K, Moats C, Northey M, Richarson J (2008) Learning better IV&V practices. Innovations in Systems and Software Engineering. Available from http://menzies.us/pdf/07ivv.pdf

  • Menzies T, Butcher A, Cok DR, Marcus A, Layman L, Shull F, Turhan B, Zimmermann T (2013) Local versus global lessons for defect prediction and effort estimation, vol 39. Available from http://menzies.us/pdf/12localb.pdf

  • Menzies T, Butcher A, Marcus A, Zimmermann T, Cok D (2011) Local vs global models for effort estimation and defect prediction. In: IEEE ASE’11. Available from http://menzies.us/pdf/11ase.pdf

  • Minku LL, Yao X (2013) Ensembles and locality: insight on improving software effort estimation. Inf Softw Technol 55(8):1512–1528

  • Mittas N, Angelis L (2013) Ranking and clustering software cost estimation models through a multiple comparisons algorithm. IEEE Trans Softw Eng 39(4):537–551

  • Paivarinta T, Smolander K (2015) Theorizing about software development practices. Sci Comput Program 101:124–135

  • Parker J (2013) Good requirements deliver a high roi. http://goo.gl/JvB9BW

  • Passos C, Braun AP, Cruzes DS, Mendonca M (2011) Analyzing the impact of beliefs in software project practices. In: ESEM’11

  • Paul R Exclusive: a behind-the-scenes look at facebook release engineering. http://arstechnica.com/business/2012/04/exclusive-a-behind-the-scenes-look-at-facebook-release-engineering/1/. Accessed: 2016-06-14

  • Popper K (1959) The logic of scientific discovery. Basic Books, New York

  • Posnett D, Filkov V, Devanbu P (2011) Ecological inference in empirical software engineering. In: Proceedings of ASE’11

  • Prasad V, Vandross A, Toomey C, Cheung M, Rho J, Quinn S, Jacob Chacko S, Borkar D, Gall V, Selvaraj S, Ho N, Cifu A (2013) A decade of reversal: an analysis of 146 contradicted medical practices. Mayo Clinic Proc 88(8):790–798

  • Pressman RS (2005) Software engineering: a practitioner’s approach. Palgrave Macmillan

  • Ray B, Posnett D, Filkov V, Devanbu P (2014) A large scale study of programming languages and code quality in github. In: Proceedings of the ACM SIGSOFT 22nd international symposium on the foundations of software engineering, FSE ’14. ACM

  • Reifer DJ (2007) Profiles of Level 5 CMMI organizations. Crosstalk: J Defense Softw Eng:24–28

  • Royce W (1998) Software project management: a unified framework. Addison-Wesley, Reading

  • Shepperd MJ, MacDonell SG (2012) Evaluating prediction systems in software project estimation. Inf Softw Technol 54(8):820–827

  • Shepperd MJ, Song Q, Sun Z, Mair C (2013) Data quality: some comments on the nasa software defect datasets. IEEE Trans Software Eng 39(9):1208–1215

  • Shirai Y, Nichols W, Kasunic M (2014) Initial evaluation of data quality in a tsp software engineering project data repository. In: Proceedings of the 2014 international conference on software and system process, ICSSP 2014. ACM, New York, pp 25–29

  • Shull F, Basili V, Boehm B, Winsor Brown A, Costa P, Lindvall M, Port D, Rus I, Tesoriero R, Zelkowitz M (2002) What we have learned about fighting defects. In: Eighth IEEE symposium on software metrics, 2002. Proceedings, pp 249–258

  • Shull F, Feldmann R (2008) Building theories from multiple evidence sources. In: Shull F, Singer J, Sjoberg DIK (eds) Guide to advanced empirical software engineering. Springer-Verlag, pp 337–364

  • Sjøberg DIK, Dybå T, Anda BCD, Hannay JE (2008) Building theories in software engineering. In: Shull F, Singer J, Sjøberg DIK (eds) Guide to advanced empirical software engineering. Springer-Verlag, pp 312–336

  • Soni M (2016) Defect prevention: reducing costs and enhancing quality. https://goo.gl/k2cBnW

  • Stecklein J, Dabney J, Dick B, Haskins B, Lovell R, Moroney G (2004) Error cost escalation through the project life cycle. In: 14th Annual INCOSE international symposium. Toulouse

  • Stephenson WE (1976) An analysis of the resources used in the safeguard system software development. In: Proceedings of the 2Nd International Conference On Software Engineering, ICSE ’76. IEEE Computer Society Press, Los Alamitos, pp 312–321

  • Stol K-J, Fitzgerald B (2015) Theory-oriented software engineering. Sci Comput Program 101:79–98

  • Tassey G (2002) The economic impacts of inadequate infrastructure for software testing. Technical report, National Institute of Standards and Technology

  • Westland CJ (2002) The cost of errors in software development: evidence from industry. J Syst Softw 62(1):1–9

  • Willis RR, Rova RM, Scott MD, Johnson MJ, Ryskowski JF, Moon JA, Winfield TO, Shumate KC (1998) Hughes Aircraft's widespread deployment of a continuously improving software process. Technical report, Carnegie Mellon University - Software Engineering Institute

  • Wohlin C (2014) Guidelines for snowballing in systematic literature studies and a replication in software engineering. In: Proceedings of the 18th international conference on evaluation and assessment in software engineering, Article 38

  • Wohlin C, Runeson P, Höst M, Ohlsson MC, Regnell B, Wesslén A (2012) Experimentation in software engineering. Springer Science & Business Media

  • Yang Y, He Z, Mao K, Li Q, Nguyen V, Boehm BW, Valerdi R (2013) Analyzing and handling local bias for calibrating parametric cost estimation models. Inf Softw Technol 55(8):1496–1511

  • Ye Y, Xie L, He Z, Qi L, Nguyen V, Boehm BW, Valerdi R (2011) Local bias and its impacts on the performance of parametric estimation models. In: PROMISE


Acknowledgments

The authors wish to thank David Tuma and Yasutaka Shirai for their work on the SEI databases that made this analysis possible. In particular, we thank Tuma Solutions for providing the Team Process Data Warehouse software. Also, the authors gratefully acknowledge the careful comments of anonymous reviewers from the FSE and ICSE conferences. This work was partially funded by National Science Foundation grants NSF-CISE 1302169 and CISE 1506586.

This material is based upon work funded and supported by TSP Licensing under Contract No. FA8721-05-C-0003 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center sponsored by the United States Department of Defense. This material has been approved for public release and unlimited distribution. DM-0003956

Personal Software Process SM, Team Software Process SM, and TSP SM are service marks of Carnegie Mellon University.

Author information

Correspondence to Tim Menzies.

Additional information

Communicated by: Per Runeson


Cite this article

Menzies, T., Nichols, W., Shull, F. et al. Are delayed issues harder to resolve? Revisiting cost-to-fix of defects throughout the lifecycle. Empir Software Eng 22, 1903–1935 (2017). https://doi.org/10.1007/s10664-016-9469-x
