A Methodology for Software Failure Risk Measurement

  • Susan A. Sherer
Part of the Applications of Modern Technology in Business book series (AMTB)

Abstract

This chapter addresses the economic significance of software malfunction. It presents a framework for measuring software failure risk, followed by a detailed examination of the components of the methodology used to assess that risk.

Keywords

Prior Distribution, Software Reliability, Fault Tree, Risk Identification, External Exposure

Copyright information

© Springer Science+Business Media New York 1992

Authors and Affiliations

  • Susan A. Sherer
    1. College of Business and Economics, Lehigh University, Bethlehem, USA