Dynamic test planning: a study in an industrial context

  • Gabriella Carrozza
  • Roberto Pietrantuono
  • Stefano Russo


Testing accounts for a significant part of the production cost of complex or critical software systems. Nevertheless, the time and resources budgeted for testing are often underestimated with respect to the target quality goals. Test managers need engineering methods to make informed choices about how to spend testing resources, so as to maximize their outcome. We present a method to dynamically allocate testing resources to software components, minimizing the estimated number of residual defects and/or the estimated residual defect density. We discuss its application to a real-world critical system in the homeland security domain. We describe a support tool aimed at easing industrial technology transfer by hiding from practitioners the mathematical details of applying the method.
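To make the idea of dynamic allocation concrete, the following sketch illustrates one simple way such a scheme can work. This is not the authors' method: it assumes, purely for illustration, that each component's defect discovery follows a Goel–Okumoto NHPP model, m(t) = a(1 − e^(−bt)), and greedily assigns testing-time slices to whichever component's next slice is expected to remove the most residual defects. The component names and parameter values are hypothetical.

```python
import math

def residual_defects(a, b, t):
    """Expected defects still latent after t units of testing,
    under a Goel-Okumoto model: a - m(t) = a * exp(-b * t)."""
    return a * math.exp(-b * t)

def allocate(components, budget, slice_=1.0):
    """Greedy allocation sketch (illustrative, not the paper's algorithm).

    components: dict mapping a component name to its fitted (a, b)
                Goel-Okumoto parameters.
    budget:     total testing time available.
    Returns a dict mapping each component to its allocated testing time.
    """
    alloc = {name: 0.0 for name in components}
    spent = 0.0
    while spent < budget:
        # Pick the component whose next time slice yields the largest
        # expected drop in residual defects (the marginal gain).
        best = max(
            components,
            key=lambda n: residual_defects(*components[n], alloc[n])
                          - residual_defects(*components[n], alloc[n] + slice_),
        )
        alloc[best] += slice_
        spent += slice_
    return alloc

# Hypothetical fitted parameters for two components.
params = {"parser": (50.0, 0.10), "gui": (20.0, 0.30)}
plan = allocate(params, budget=40)
```

A real application would refit the model parameters as new failure data arrives during testing and re-run the allocation, which is what makes the planning dynamic.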


Keywords: Test planning · Reliability growth model · Resource allocation · Risk-based testing · Mission-critical systems



This work has been partially supported by MIUR under project SVEVIA (PON02_00485_3487758) of the public-private laboratory COSMIC (PON02_00669) and by the European Commission in the context of the FP7 project ICEBERG, Marie Curie Industry-Academia Partnerships and Pathways (IAPP) number 324356. The work of Dr. Pietrantuono is supported by the project Embedded Systems in Critical Domains (CUP B25B09000100007) in the framework of POR Campania FSE 2007–2013.



Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Gabriella Carrozza (1)
  • Roberto Pietrantuono (2)
  • Stefano Russo (2, 3)

  1. SESM, a Finmeccanica Company, Naples, Italy
  2. DIETI, Università degli Studi di Napoli Federico II, Naples, Italy
  3. Critiware spin-off, Incubatore Incipit, Complesso Univ. di Monte S. Angelo, Naples, Italy
