Predicting Total Number of Failures in a Software Using NHPP Software Reliability Growth Models

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 259)

Abstract

In a software development project, management often faces the dilemma of when to stop testing the software and release it for operation. Estimating the remaining defects (or failures) in the software can help test management make release decisions. Several methods exist for estimating the defect content of software; among them is a variety of software reliability growth models (SRGMs). SRGMs rest on underlying assumptions that are often violated in practice, but empirical evidence has shown that a number of models are quite robust despite these violations. However, it is often difficult to know which model to apply in practice. In the present study, a method for selecting SRGMs to predict the total number of defects in software is proposed. The method is applied to a case study of three datasets of defect reports from system testing of three releases of a large medical record system, to see how well it predicts the expected total number of failures.
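The keywords below point to least squares estimation of NHPP mean value functions. As a minimal illustrative sketch (not the paper's actual selection method), the following Python snippet fits the Goel-Okumoto model m(t) = a(1 − e^(−bt)) to a hypothetical cumulative failure curve and reads the estimated total failure count off the fitted asymptote a; the dataset and variable names here are assumptions, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Goel-Okumoto NHPP mean value function m(t) = a * (1 - exp(-b*t)):
# `a` is the expected total number of failures, `b` the per-fault
# detection rate.
def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

# Hypothetical weekly cumulative failure counts from system testing
# (illustrative numbers only).
weeks = np.arange(1, 11, dtype=float)
cum_failures = np.array([12, 21, 28, 34, 38, 41, 43, 45, 46, 47],
                        dtype=float)

# Least squares fit of the mean value function to the observed curve.
(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_failures,
                              p0=[cum_failures[-1], 0.1])

print(f"estimated total failures  a = {a_hat:.1f}")
print(f"estimated detection rate  b = {b_hat:.3f}")
print(f"predicted remaining failures = {a_hat - cum_failures[-1]:.1f}")
```

The fitted asymptote a minus the failures observed so far gives the predicted number of failures remaining, which is the quantity a release decision would rest on.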

Keywords

Reliability testing · Software reliability growth models · Goodness of fit · Least squares estimation · Release time

Notation

m(t): mean value function
a(t): error content function
b(t): error detection rate per error at time t
N(t): random variable representing the cumulative number of software errors predicted by time t
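For reference, these quantities are tied together by the standard NHPP formulation (general facts about the model class, not results specific to this paper): N(t) is Poisson-distributed with mean m(t), and m(t) is commonly characterized by a detection-rate differential equation:

```latex
\Pr\{N(t) = n\} = \frac{[m(t)]^{n}}{n!}\, e^{-m(t)}, \qquad n = 0, 1, 2, \ldots

\frac{dm(t)}{dt} = b(t)\,\bigl[a(t) - m(t)\bigr]
```

With constant error content a(t) = a and constant detection rate b(t) = b, the solution is the Goel-Okumoto model m(t) = a(1 − e^(−bt)), whose asymptote a is the expected total number of failures.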


Copyright information

© Springer India 2014

Authors and Affiliations

Thapar University, Patiala, India
