
Modeling reliability growth during non‐representative testing

Published in: Annals of Software Engineering

Abstract

A reliability growth model is presented that permits prediction of operational reliability without requiring that testing be conducted according to the operational profile of the program's input space. Compared to prior growth models, this one shifts the observed random variable from interfailure time to the failure rates of individual debugged faults, obtained by post-mortem analysis, and uses order statistics to combine those rates regardless of how the faults were detected. The primary advantages of this model are:

  • flexibility for test planners, since the choice of testing method is no longer dictated solely by the need to predict operational reliability, and

  • more robust experimental designs, which can be formulated by taking advantage of a wider variety of options for data collection.
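To make the abstract's central idea concrete, the following Python sketch shows one hypothetical way a reliability growth curve could be reconstructed from per-fault failure rates recovered by post-mortem analysis. It is a minimal illustration under stated assumptions, not the estimator developed in the paper: the function name growth_curve, the example rates, and the independence assumption are all introduced here for illustration only; the post-mortem step that estimates each fault's rate under the operational profile is not shown.

"""Minimal sketch (not the authors' estimator): building a reliability growth
curve from per-fault failure rates obtained by post-mortem analysis.

Assumptions (hypothetical, not taken from the paper):
  * each debugged fault i has an estimated rate theta[i], the probability that
    a single operational-profile input triggers that fault;
  * faults fail (approximately) independently and rates are small, so the
    program failure probability is roughly the sum of the rates of the faults
    still present (1 - prod(1 - theta_i) ~ sum(theta_i)).
"""


def growth_curve(theta):
    """Return estimated program failure rates after 0, 1, ..., n repairs.

    The per-fault rates are combined through their order statistics: faults
    are ranked by estimated rate, so the resulting curve does not depend on
    the order in which testing happened to expose them.
    """
    ordered = sorted(theta, reverse=True)   # order statistics, largest rate first
    remaining = sum(ordered)                # failure rate with every fault present
    curve = [remaining]
    for rate in ordered:
        remaining -= rate                   # repair the next-largest fault
        curve.append(max(remaining, 0.0))   # guard against floating-point drift
    return curve


if __name__ == "__main__":
    # Per-fault failure rates from a hypothetical post-mortem analysis.
    rates = [0.012, 0.0004, 0.0031, 0.00007, 0.0008]
    for k, lam in enumerate(growth_curve(rates)):
        print(f"after {k} repairs: estimated program failure rate = {lam:.5f}")

Because the curve is built from the ranked per-fault rates rather than from interfailure times, it is unaffected by how the (possibly non-representative) testing happened to uncover the faults, which is the flexibility the abstract describes.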




Cite this article

Mitchell, B., Zeil, S.J. Modeling reliability growth during non‐representative testing. Annals of Software Engineering 4, 11–29 (1997). https://doi.org/10.1023/A:1018970928797
