Acceptance Criteria for Critical Software Based on Testability Estimates and Test Results
Testability is defined as the probability that a program will fail a test, conditional on the program containing some fault. In this paper, we show that statements about the testability of a program can be described more simply as assumptions on the probability distribution of the program's failure intensity. We can thus state general acceptance conditions in clear mathematical terms using Bayesian inference. We develop two scenarios: one in which the reliability requirement is that the software be completely fault-free, and another in which the requirement is stated as an upper bound on the acceptable failure probability.
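The fault-free scenario can be illustrated with a minimal sketch (an assumed textbook-style model, not necessarily the paper's exact formulation): take a prior probability that the program is fault-free, a testability estimate θ = P(a test fails | the program is faulty), and update by Bayes' rule after observing n failure-free tests.

```python
def posterior_fault_free(prior_fault_free: float,
                         testability: float,
                         n_passed: int) -> float:
    """Posterior P(program is fault-free) after n_passed failure-free tests.

    Sketch under assumed, simplified conditions: independent tests drawn from
    the operational profile, and a single known testability value
    theta = P(test fails | program is faulty).
    """
    p = prior_fault_free
    theta = testability
    # Likelihood of n failure-free tests:
    #   1 if the program is fault-free, (1 - theta)^n if it is faulty.
    evidence = p + (1 - p) * (1 - theta) ** n_passed
    return p / evidence

# Example: a 50% prior of correctness and testability 0.01.
# 100 passing tests raise the posterior to roughly 0.73.
print(posterior_fault_free(0.5, 0.01, 100))
```

High testability helps here: the larger θ is, the faster (1 - θ)^n shrinks, so each failure-free test lends more weight to the hypothesis that the program is fault-free.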