
Dynamic test planning: a study in an industrial context

International Journal on Software Tools for Technology Transfer

Abstract

Testing accounts for a substantial part of the production cost of complex or critical software systems. Nevertheless, the time and resources budgeted for testing are often underestimated with respect to the target quality goals. Test managers need engineering methods to make sound choices in spending testing resources, so as to maximize the outcome. We present a method to dynamically allocate testing resources to software components so as to minimize the estimated number of residual defects and/or the estimated residual defect density. We discuss its application to a real-world critical system in the homeland security domain. We describe a support tool aimed at easing industrial technology transfer by hiding the mathematical details of the method from practitioners.
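The core allocation idea can be illustrated by a minimal sketch of one re-planning step. This is not the authors' implementation: the exponential (Goel–Okumoto) SRGM, the purely proportional split of the remaining budget, and all names in the snippet are assumptions made only for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical sketch of one re-planning step: fit an exponential SRGM
# m(t) = a * (1 - exp(-b t)) to the cumulative defects observed so far for
# each component, estimate the residual defects a - m(T), and split the
# remaining budget proportionally to those estimates.

def exp_mvf(t, a, b):
    """Goel-Okumoto mean value function: expected cumulative defects by week t."""
    return a * (1.0 - np.exp(-b * t))

def replan(defect_histories, remaining_budget):
    """defect_histories: {component: list of cumulative defects per week} (assumed interface)."""
    residual = {}
    for comp, history in defect_histories.items():
        weeks = np.arange(1, len(history) + 1, dtype=float)
        (a, b), _ = curve_fit(exp_mvf, weeks, history,
                              p0=[1.5 * max(history), 0.1], maxfev=10000)
        residual[comp] = max(a - history[-1], 0.0)   # defects estimated to remain
    total = sum(residual.values()) or 1.0
    return {c: remaining_budget * r / total for c, r in residual.items()}

# Example with two (made-up) components observed for 10 weeks and 40 man-weeks left.
print(replan({"CSCI_1": [3, 7, 10, 14, 16, 18, 19, 20, 21, 21],
              "CSCI_2": [2, 5, 9, 12, 15, 17, 18, 20, 21, 22]}, 40))
```

In the paper, the allocation additionally considers the residual defect density and handles components whose SRGM is not yet statistically valid (see notes 7–10 below).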


Notes

  1. Note that detecting more faults does not necessarily imply improving reliability: this requires removing the faults that occur most frequently.

  2. The term fault (defect) is preferred in the fault tolerance (software engineering) community; here, we use the two terms as synonyms.

  3. Improving operational reliability is more desirable, but it requires knowledge of the operational profile, which is seldom available.

  4. We assume that the number of man-weeks worked per week is fixed; namely, the assigned resources are spent uniformly across the weeks.

  5. This is the worst case; past data may indeed exist and can be used as a starting point to build a model.

  6. The authors of [35] indicate that about 25 % of the testing time is needed to obtain an SRGM with a definitive accuracy deviation of 20 %; in our method, however, much less time suffices in practice, since we do not select a definitive model at this point, but iteratively re-select the best model (and thus improve the initial accuracy deviation) as testing proceeds.

  7. We guarantee that such components receive a certain amount of resources, so that their testing is not stopped completely and more data are available for them at the next iteration. This amount is the one of the previous iteration, diminished by a factor \(\alpha \in [0,1]\) computed as \(\alpha_j = [\textit{DR}_j-\min_i(\textit{DR})]/[\max_i(\textit{DR})-\min_i(\textit{DR})]\), where \(\textit{DR}_j\) is the current detection rate (defects/week) of component \(j\) and the index \(i\) spans the components. Thus, if \(C_j\) has the best detection rate, it receives the same amount as in the previous cycle; otherwise, it is penalized (see the first sketch after these notes).

  8. \(B^*\) also takes into account the resources possibly already allocated to components without a statistically valid SRGM (see note 7).

  9. This function also handles the case in which the tester wants to proceed with only a subset of statistically valid components; in that case, it sets ready to true as soon as at least \(x\) components are valid, with \(x\) chosen by the tester (omitted for simplicity).

  10. It also applies the detection-rate proportional allocation described above (note 7), in case some component still has no valid SRGM.

  11. We assume a budget lower than 326 man-weeks, so as to avoid allocating to a CSCI more man-weeks than were actually available (e.g., CSCI 1 was tested with 32 man-weeks; if the scheme allocated more than 32 man-weeks to it, the experiment could be invalid).

    Fig. 1  Cumulative number of detected defects for each component

  12. The models that most often fitted the data were: the exponential model, especially at the beginning, the truncated logistic, and the truncated normal (see the second sketch after these notes).

  13. The Analyzer is left out of the explanation, since it does not concern the test planning issue presented here.

    Fig. 3  Architecture of the effecT! \(^{\copyright }\) support tool

  14. The input of the re-allocation step can also be empty, allowing the tester to request a re-allocation at any time by simply pressing a button.

  15. The best fitting SRGM is chosen by default.
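The \(\alpha\)-based scaling of note 7 can be transcribed almost literally. The following is a minimal sketch under assumed names (dictionaries keyed by component); it is not taken from the effecT! tool.

```python
# Components whose SRGM is not yet statistically valid keep their previous
# allocation, scaled by alpha_j = (DR_j - min_i DR) / (max_i DR - min_i DR),
# where DR_j is the current detection rate (defects/week) of component j.

def scale_by_detection_rate(prev_alloc, detection_rate):
    """prev_alloc, detection_rate: dicts keyed by component name (assumed interface)."""
    lo, hi = min(detection_rate.values()), max(detection_rate.values())
    span = (hi - lo) or 1.0   # avoid division by zero if all rates are equal (case not covered by the note)
    return {c: prev_alloc[c] * (detection_rate[c] - lo) / span for c in prev_alloc}

# The component with the best detection rate keeps its previous amount (alpha = 1);
# in this literal reading, the one with the worst rate is penalized down to zero.
print(scale_by_detection_rate({"C1": 4.0, "C2": 4.0, "C3": 4.0},
                              {"C1": 2.0, "C2": 1.2, "C3": 0.5}))
```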
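Note 12 names the model families that most often fitted the data. The sketch below shows one way such candidates could be fitted to cumulative defect data and compared; the parameterizations (standard NHPP mean value functions) and the least-squares selection criterion are assumptions, since the exact forms used by the tool are not given in this excerpt.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Candidate mean value functions m(t): expected cumulative defects by week t.

def exponential(t, a, b):                  # Goel-Okumoto
    return a * (1.0 - np.exp(-b * t))

def truncated_logistic(t, a, loc, scale):  # logistic CDF truncated at t = 0
    cdf = lambda x: 1.0 / (1.0 + np.exp(-(x - loc) / scale))
    return a * (cdf(t) - cdf(0.0)) / (1.0 - cdf(0.0))

def truncated_normal(t, a, mu, sigma):     # normal CDF truncated at t = 0
    return a * (norm.cdf(t, mu, sigma) - norm.cdf(0.0, mu, sigma)) / (1.0 - norm.cdf(0.0, mu, sigma))

def best_model(weeks, cum_defects):
    """Fit each candidate and return (sse, name, params) of the best-fitting one."""
    n = max(cum_defects)
    candidates = [(exponential,        [1.5 * n, 0.1]),
                  (truncated_logistic, [1.5 * n, np.median(weeks), 2.0]),
                  (truncated_normal,   [1.5 * n, np.median(weeks), 5.0])]
    fits = []
    for f, p0 in candidates:
        try:
            params, _ = curve_fit(f, weeks, cum_defects, p0=p0, maxfev=20000)
            sse = float(np.sum((f(weeks, *params) - cum_defects) ** 2))
            fits.append((sse, f.__name__, params))
        except RuntimeError:               # a family may simply not converge on this data
            continue
    return min(fits) if fits else None

# Example: ten weeks of (made-up) cumulative defect counts for one component.
weeks = np.arange(1, 11, dtype=float)
defects = np.array([3, 7, 10, 14, 16, 18, 19, 20, 21, 21], dtype=float)
print(best_model(weeks, defects))
```

Re-running such a selection at every re-planning point is what note 6 refers to: the initially coarse accuracy deviation shrinks as more weekly data points become available.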

References

  1. Cotroneo, D., Pietrantuono, R., Russo, S.: Testing techniques selection based on ODC fault types and software metrics. Journal of Systems and Software 86(6), 1613–1637 (2013)

  2. Goel, A.L.: Software Reliability Models: Assumptions, Limitations and Applicability. IEEE Transactions on Software Engineering SE–11(12), 1411–1423 (1985)

  3. Cotroneo, D., Pietrantuono, R., Russo, S.: Combining operational and debug testing for improving reliability. IEEE Transactions on Reliability 62(2), 408–423 (2013)

  4. Catal, C., Diri, B.M.: A systematic review of software fault prediction studies. Expert Systems with Applications 36(4), 7346–7354 (2009)

  5. Halstead, M.: Elements of Software Science. Elsevier Science, New York (1977)

  6. Chidamber, S.R., Kemerer, C.F.: A Metrics Suite for Object Oriented Design. IEEE Transactions on Software Engineering 20(6), 476–493 (1994)

  7. Gokhale, S.S., Lyu, M.R.: Regression Tree Modeling for the Prediction of Software Quality. In: Proc. 3rd ISSAT (1997)

  8. Subramanyam, R., Krishnan, M.S.: Empirical Analysis of CK Metrics for Object-Oriented Design Complexity: Implications for Software Defects. IEEE Transactions on Software Engineering 29(4), 297–310 (2003)

  9. Basili, V.R., Briand, L.C., Melo, W.L.: A Validation of Object-Oriented Design Metrics as Quality Indicators. IEEE Transactions on Software Engineering 22(10), 751–761 (1996)

  10. Ohlsson, N., Alberg, H.: Predicting fault-prone software modules in telephone switches. IEEE Transactions on Software Engineering. 22(12), 886–894 (1996)

  11. Denaro, G., Pezzè, M.: An Empirical Evaluation of Fault-proneness Models. In: Proc. 24th Int. Conference on Software Engineering (ICSE), pp. 241–251 (2002)

  12. Nagappan, N., Ball, T., Zeller, A.: Mining Metrics to Predict Component Failures. In: Proc. 28th Int. Conference on Software Engineering (ICSE), pp. 452–461 (2006)

  13. Ostrand, T., Weyuker, E., Bell, R.: Predicting the Location and Number of Faults in Large Software Systems. IEEE Transactions on Software Engineering 31(4), 340–355 (2005)

  14. Menzies, T., Greenwald, J., Frank, A.: Data Mining Static Code Attributes to Learn Defect Predictors. IEEE Transactions on Software Engineering 33(1), 2–13 (2007)

  15. Nam, J., Jialin Pan, S., Kim, S.: Transfer Defect Learning. In: Proc. 35th Int. Conference on Software Engineering (ICSE), pp. 382–391 (2013)

  16. Zimmermann T., et al.: Cross-project Defect Prediction: A Large Scale Experiment on Data vs. Domain vs. Process. In: Proc. 7th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Eng., pp. 91–100 (2009)

  17. Dugan, J.B.: Automated Analysis of Phase-Mission Reliability. IEEE Transactions on Reliability 40, 45–52 (1991)

  18. Garzia, M.R.: Assessing the Reliability of Windows Servers. In: Proc. of IEEE Dependable Systems and Networks conference (2002)

  19. Pietrantuono, R., Russo, S., Trivedi, K.S.: Online Monitoring of Software System Reliability. In: Proc. of the European Dependable Computing Conference (EDCC), 209–218 (2010)

  20. Goel, A.L., Okumoto, K.: Time-dependent error-detection rate model for software reliability and other performance measures. IEEE Transactions on Reliability R–28(3), 206–211 (1979)

  21. Yamada, S., Ohba, M., Osaki, S.: S-Shaped Reliability Growth Modeling for Software Error Detection. IEEE Transactions on Reliability R–32(5), 475–485 (1983)

  22. Gokhale, S.S., Trivedi, K.S.: Log-logistic software reliability growth model. In: Proc. 3rd Int. High-Assurance Systems Engineering Symposium, pp. 34–41 (1998)

  23. Mullen, R.E.: The lognormal distribution of software failure rates: application to software reliability growth modeling. In: Proc. 9th Int. Symposium on Software Reliability Engineering (ISSRE), pp. 134–142 (1998)

  24. Okamura, H., Dohi, T., Osaki, S.: EM algorithms for logistic software reliability models. In: Proc. 22nd IASTED Int. Conference on Software Engineering, pp. 263–268 (2004)

  25. Yamada, S., Ichimori, T., Nishiwaki, M.: Optimal Allocation Policies for Testing-Resource Based on a Software Reliability Growth Model. Int. Journal of Mathematical and Computer Modeling. 22(10–12), 295–301 (1995)

  26. Huang, C., Kuo, S., Lyu, M.R.: An Assessment of Testing-Effort Dependent Software Reliability Growth Models. IEEE Transactions on Reliability 56(2), 198–211 (2007)

  27. Yamada, S., Ohtera, H., Narihisa, H.: Software reliability growth models with testing effort. IEEE Transactions on Reliability R–35, 19–23 (1986)

  28. Lyu, M.R., Rangarajan, S., van Moorsel, A.P.A.: Optimal Allocation of Test Resources for Software Reliability Growth Modeling in Software Development. IEEE Transactions on Reliability 51(2), 336–347 (2002)

  29. Huang, C.Y., Lo, J.H., Kuo, S.Y., Lyu, M.R.: Optimal Allocation of Testing Resources for Modular Software Systems. In: Proc. 13th Int. Symposium on Software Reliability Engineering (ISSRE), pp. 129–138 (2002)

  30. Huang, C.Y., Lo, J.H.: Optimal Resource Allocation for Cost and Reliability of Modular Software Systems in the Testing Phase. Journal of Systems and Software 79(5), 653–664 (2006)

  31. Hou, R.H., Kuo, S.Y., Chang, Y.P.: Efficient allocation of testing resources for software module testing based on the hyper-geometric distribution software reliability growth model. In: Proc. 7th Int. Symposium on Software Reliability Engineering (ISSRE), pp. 289–298 (1996)

  32. Everett, W.: Software Component Reliability Analysis. In: Proc. Symposium on Application-specific Systems and Software Eng. and Techn. (ASSET), pp. 204–211 (1999)

  33. Pietrantuono, R., Russo, S., Trivedi, K.S.: Software Reliability and Testing Time Allocation: An Architecture-Based Approach. IEEE Transactions on Software Engineering 36(3), 323–337 (2010)

  34. U.S. Department of Defense, MIL-STD-498. Overview and Tailoring Guidebook, 1996. [Online]. Available at: www.abelia.com/498pdf/498GBOT.PDF

  35. Almering, V., Van Genuchten, M., Cloudt, G., Sonnemans, P.J.M.: Using Software Reliability Growth Models in Practice. IEEE Software 24(6), 82–88 (2007)

  36. Stringfellow, C., Amschler Andrews, A.: An Empirical Method for Selecting Software Reliability Growth Models. Empirical Software Engineering 7(4), 319–343 (2002)

  37. Farr, W.: Software Reliability Modeling Survey. In: Lyu, M.R. (ed.) Handbook of Software Reliability Engineering, pp. 71–117. McGraw-Hill, New York (1996)

  38. Musa, J.D., Okumoto, K.: A logarithmic Poisson execution time model for software reliability measurement. In: Proc. 7th Int. Conference on Software Engineering (ICSE), pp. 230–238 (1984)

  39. Zachariah, B., Rattihalli, R.N.: Failure Size Proportional Models and an Analysis of Failure Detection Abilities of Software Testing Strategies. IEEE Transactions on Reliability 56(2), 246–253 (2007)

  40. Okamura, H., Watanabe, Y., Dohi, T.: An iterative scheme for maximum likelihood estimation in software reliability modeling. In: Proc. 14th Int. Symposium on Software Reliability Engineering (ISSRE), pp. 246–256 (2003)

  41. Ohishi, K., Okamura, H., Dohi, T.: Gompertz software reliability model: Estimation algorithm and empirical validation. Journal of Systems and Software 82(3), 535–543 (2009)

  42. Okamura, H., Dohi, T., Osaki, S.: Software reliability growth model with normal distribution and its parameter estimation. In: Proc. Int. Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering (ICQR2MSE), pp. 411–416 (2011)

Acknowledgments

This work has been partially supported by MIUR under project SVEVIA (PON02_00485_3487758) of the public-private laboratory COSMIC (PON02_00669) and by the European Commission in the context of the FP7 project ICEBERG, Marie Curie Industry-Academia Partnerships and Pathways (IAPP) number 324356. The work of Dr. Pietrantuono is supported by the project Embedded Systems in Critical Domains (CUP B25B09000100007) in the framework of POR Campania FSE 2007–2013.

Author information

Corresponding author

Correspondence to Gabriella Carrozza.


Cite this article

Carrozza, G., Pietrantuono, R. & Russo, S. Dynamic test planning: a study in an industrial context. Int J Softw Tools Technol Transfer 16, 593–607 (2014). https://doi.org/10.1007/s10009-014-0319-0
