
SAFECOMP ’93, pp. 197–206

Confidently Assessing a Zero Probability of Software Failure

  • Jeffrey M. Voas
  • Christoph C. Michael
  • Keith W. Miller
Conference paper

Abstract

Randomly generated software tests are an established method of estimating software reliability [5, 7]. But as software applications require higher and higher reliabilities, practical difficulties with random testing have become increasingly problematic. These practical problems are particularly acute in life-critical applications, where system reliability requirements of 10⁻⁷ failures per hour translate into a probability of failure (pof) of perhaps 10⁻⁹ or less for each individual execution of the software [4]. We refer to software with reliability requirements of this magnitude as ultra-reliable software.
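
To see why such levels are impractical to demonstrate by testing alone, note that bounding the pof at θ with confidence C requires on the order of ln(1−C)/ln(1−θ) failure-free random tests. The following is a minimal sketch of that arithmetic (the function name and the chosen numbers are illustrative, not from the paper); it reproduces the infeasibility argument of Butler and Finelli [1]:

```python
import math

def tests_needed(theta: float, confidence: float) -> int:
    """Failure-free random tests needed before one can claim
    pof < theta with the given confidence, from the bound
    (1 - theta) ** N <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - theta))

# A 1e-9 per-execution pof bound at 99% confidence needs roughly
# 4.6 billion failure-free executions.
print(tests_needed(1e-9, 0.99))
```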

This paper presents a method for assessing confidence that the software contains no faults, given that software testing and software testability analysis have been performed. The method assumes that testing of the current version has produced no failures, and that the testing has not been exhaustive. In previous publications we have termed this combination of testability and testing to assess a confidence in correctness the “Squeeze Play” and “Reliability Amplification” [15, 13]; however, we have not formally developed the mathematical foundation for quantifying a confidence that the software is correct. We do so in this paper.
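
To illustrate the intuition behind the squeeze play (a hedged sketch, not the paper's formal development; the Bayesian form, the 0.5 prior, and all names here are our assumptions): if testability analysis guarantees that any fault present would cause a failure with probability at least h on each random test, then T failure-free tests drive the posterior probability of a hidden fault toward zero far faster than the same tests tighten a direct pof estimate.

```python
def confidence_correct(h: float, tests: int, prior_faulty: float = 0.5) -> float:
    """Posterior confidence that the program is fault-free after
    `tests` failure-free random tests, assuming testability analysis
    guarantees any fault would fail each test with probability >= h.
    Bayes: P(faulty | all pass) = p*(1-h)^T / (p*(1-h)^T + (1-p)),
    where p is an assumed prior probability that a fault exists."""
    survive = prior_faulty * (1.0 - h) ** tests  # a faulty program passes all T tests
    correct = 1.0 - prior_faulty                 # a correct program always passes
    return correct / (correct + survive)

# If any fault would show itself with probability >= 1e-3 per test,
# 10,000 clean tests already yield ~0.99995 confidence in correctness,
# far beyond what those tests say about a 1e-9 pof bound directly.
print(confidence_correct(h=1e-3, tests=10_000))
```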

Keywords

Random Testing · Software Reliability · True Probability · Software Fault · Input Distribution


References

  [1] R. Butler and G. Finelli. The infeasibility of experimental quantification of life-critical software reliability. Proceedings of SIGSOFT ’91: Software for Critical Systems (December 4–6, 1991), New Orleans, LA, 66–76.
  [2] D. R. Miller. Making statistical inferences about software reliability. NASA Contractor Report 4197 (December 1988).
  [3] K. Miller, L. Morell, R. Noonan, S. Park, D. Nicol, B. Murrill, and J. Voas. Estimating the probability of failure when testing reveals no errors. IEEE Trans. on Software Engineering 18(1):33–44, January 1992.
  [4] I. Peterson. Software failure: counting up the risks. Science News, Vol. 140, No. 24 (December 14, 1991), 140–141.
  [5] T. A. Thayer, M. Lipow, and E. C. Nelson. Software Reliability (TRW Series of Software Technology, Vol. 2). New York: North-Holland, 1978.
  [6] J. Voas, L. Morell, and K. Miller. Predicting where faults can hide from testing. IEEE Software (March 1991), 41–48.
  [7] S. N. Weiss and E. J. Weyuker. An extended domain-based model of software reliability. IEEE Trans. on Software Engineering, Vol. 14, No. 10 (October 1988), 1512–1524.
  [8] L. J. Morell. Theoretical insights into fault-based testing. Proc. of the Second Workshop on Software Testing, Validation, and Analysis, July 1988, 45–62.
  [9] J. Voas and K. Miller. The revealing power of a test case. Journal of Software Testing, Verification, and Reliability 2(1), 1992.
  [10] J. Voas and K. Miller. PA: A dynamic method for debugging certain classes of software faults. To appear in Software Quality Journal, 1993.
  [11] J. Voas. PIE: A dynamic failure-based technique. IEEE Transactions on Software Engineering 18(8):717–727, August 1992.
  [12] R. A. DeMillo, R. J. Lipton, and F. G. Sayward. Hints on test data selection: Help for the practicing programmer. IEEE Computer 11(4):34–41, April 1978.
  [13] J. Voas and K. Miller. Improving the software development process using testability research. Proc. of the 3rd International Symposium on Software Reliability Engineering, October 1992, Research Triangle Park, NC.
  [14] J. Musa. Operational profiles in software-reliability engineering. IEEE Software 10(2):14–32, March 1993.
  [15] R. Hamlet and J. Voas. Faults on its sleeve: Amplifying software reliability testing. Proc. of the International Symposium on Software Testing and Analysis, June 28–30, 1993.
  [16] R. Hamlet. Probable correctness theory. Information Processing Letters 25(1):17–25, April 1987.
  [17] W. Hoeffding. Probability inequalities for sums of bounded random variables. American Statistical Association Journal, March 1963, pp. 13–30.
  [18] J. Voas, K. Miller, and J. Payne. PISCES: A tool for predicting software testability. Proc. of the 2nd Symposium on Assessment of Quality Software Development Tools, May 1992. IEEE Computer Society.

Copyright information

© Springer-Verlag London Limited 1993

Authors and Affiliations

  • Jeffrey M. Voas
  • Christoph C. Michael (1)
  • Keith W. Miller (2)
  1. Reliable Software Technologies Corporation, Arlington, USA
  2. Department of Computer Science, College of William & Mary, Williamsburg, USA
