Safe Comp 95, pp. 173–188

A Bayesian Model that Combines Disparate Evidence for the Quantitative Assessment of System Dependability

  • Bev Littlewood
  • David Wright


For safety-critical systems, the required reliability (or safety) is often extremely high. Assessing the system, to gain confidence that the requirement has been achieved, is correspondingly hard, particularly when the system depends critically upon extensive software. In practice, such an assessment is often carried out rather informally, taking account of many different types of evidence—experience of previous, similar systems; evidence of the efficacy of the development process; testing; expert judgement, etc. Ideally, the assessment would allow all such evidence to be combined into a final numerical measure of reliability in a scientifically rigorous way. In this paper we address one part of this problem: we present a means whereby our confidence in a new product can be augmented beyond what we would believe merely from testing that product, by using evidence of the high dependability in operation of previous products. We present some illustrative numerical results that seem to suggest that such experience of previous products, even when these have shown very high dependability in operational use, can improve our confidence in a new product only modestly.
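The paper's actual model is not reproduced in this abstract. As a minimal illustrative sketch only (not the authors' model), the following conjugate Beta-Binomial update shows why *naively* pooling a previous product's operational record into the prior for a new product can drive confidence to implausibly optimistic levels; all numbers (target pfd, demand counts) are hypothetical and chosen purely for illustration:

```python
def confidence(prior_b, n_test, target):
    """P(pfd < target) under a Beta(1, prior_b) prior after n_test
    failure-free Bernoulli demands: the posterior is
    Beta(1, prior_b + n_test), whose CDF at x is 1 - (1 - x)**(prior_b + n_test)."""
    return 1.0 - (1.0 - target) ** (prior_b + n_test)

# Hypothetical numbers, for illustration only.
TARGET = 1e-3     # required probability of failure on demand (pfd)
N_TEST = 1_000    # failure-free test demands on the new product
N_PREV = 10_000   # failure-free operational demands on a previous product

# Testing alone, starting from an uninformative Beta(1, 1) prior:
test_only = confidence(prior_b=1, n_test=N_TEST, target=TARGET)

# Naive pooling: treating the previous product's demands as if they
# had been run on the new product itself.
pooled = confidence(prior_b=1 + N_PREV, n_test=N_TEST, target=TARGET)

print(f"testing alone: P(pfd < {TARGET}) = {test_only:.3f}")
print(f"naive pooling: P(pfd < {TARGET}) = {pooled:.5f}")
```

Naive pooling pushes the posterior confidence close to certainty, which is over-optimistic because the new product may differ from its predecessors; a model that treats the products as distinct but related members of a family attenuates the contribution of the earlier evidence, which is consistent with the authors' finding that such experience improves confidence in the new product only modestly.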


Keywords: Prior Distribution; Failure Probability; Product Family; Failure Behaviour; Software Reliability





Copyright information

© Springer-Verlag London 1995

Authors and Affiliations

  • Bev Littlewood, Centre for Software Reliability, City University, London, England
  • David Wright, Centre for Software Reliability, City University, London, England
