
The Need for Evidence from Disparate Sources to Evaluate Software Safety

  • Bev Littlewood

Abstract

A system may fail because of its engineering hardware, its computer hardware, its computer software, or because of a human component. The impact of hardware on overall system dependability is well understood, provided that the hardware is free from design faults. Furthermore, we can often engineer our systems so that the impact of these sources of unreliability is negligible.

However, some hardware failures and all software failures are due to design faults, and reliability in the presence of design faults and human operator errors is not well understood. This poses acute problems for the assessment of safety-critical systems whose behaviour is affected by human errors, whether made during the design process or during operation. It can easily be shown that only modest levels of reliability can be demonstrated by direct observation of the system in test or operation. In this paper we discuss these problems in detail and consider some ways in which evidence from other sources might be used to increase our confidence. This work forms the basis of the project DATUM: Dependability Assessment of Safety-critical Systems Through the Unification of Measurable Evidence.
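To see why direct observation can only demonstrate modest reliability, consider the standard back-of-the-envelope argument: under a constant-failure-rate model, the failure-free operating time needed to support a claimed rate bound grows in inverse proportion to that bound. The sketch below illustrates this; the constant-rate assumption, the 99% confidence level and the example rates are illustrative choices, not figures taken from the paper.

```python
import math

def required_failure_free_hours(target_rate, confidence=0.99):
    """Hours of failure-free operation needed to claim, at the given
    confidence, that a constant failure rate is below target_rate.
    With zero observed failures, P(no failure in T) = exp(-rate * T),
    so we need T >= -ln(1 - confidence) / target_rate.
    """
    return -math.log(1.0 - confidence) / target_rate

# Illustrative targets: modest rates need feasible test durations,
# ultra-high dependability (e.g. 1e-9/hr) needs billions of hours.
for rate in (1e-3, 1e-4, 1e-6, 1e-9):
    hours = required_failure_free_hours(rate)
    print(f"rate < {rate:g}/hr at 99% confidence: {hours:.3g} failure-free hours")
```

For a target of 10^-4 failures per hour this gives roughly 4.6 x 10^4 failure-free hours, which is already demanding; for 10^-9 per hour it gives about 4.6 x 10^9 hours, which is clearly infeasible, hence the need for evidence from sources other than direct observation.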

Keywords

Formal Method · Software Reliability · Design Fault · Human Information Processing · Reliability Growth



Copyright information

© Springer-Verlag London Limited 1993

Authors and Affiliations

  • Bev Littlewood
    Centre for Software Reliability, City University, London, UK
