Empirical Software Engineering, Volume 21, Issue 6, pp 2413–2455

An automated software reliability prediction system for safety critical software

  • Xiang Li
  • Chetan Mutha
  • Carol S. Smidts

Abstract

Software reliability is one of the most important software quality indicators. It is concerned with the probability that the software can execute without any unintended behavior in a given environment. In previous research we developed the Reliability Prediction System (RePS) methodology to predict the reliability of safety critical software such as that used in the nuclear industry. A RePS relates software engineering measures to software reliability through various models, and RePSs using Extended Finite State Machine (EFSM) models together with fault data collected through various software engineering measures were found to possess the best prediction capability. In this research the EFSM-based RePS methodology is improved and implemented in a tool called the Automated Reliability Prediction System (ARPS). The features of the ARPS tool are introduced through a simple case study. An experiment with human subjects was also conducted to evaluate the usability of the tool; the results demonstrate that the ARPS tool indeed helps the analyst apply the EFSM-based RePS methodology with fewer errors and lower error criticality.
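
To make the idea concrete, the following is a minimal sketch, in Python, of how an EFSM-based reliability estimate can be organized: the software's behavior is modeled as states and transitions, each transition carries an operational-profile probability and an estimated failure probability for any fault mapped onto it, and reliability is approximated as the probability of a failure-free traversal. All class names, fields, and numbers below are hypothetical illustrations, not the paper's actual RePS/ARPS models.

from dataclasses import dataclass, field

# Hypothetical illustration only: the real RePS/ARPS quantification models
# are defined in the paper and are not reproduced here.

@dataclass(frozen=True)
class Transition:
    source: str
    target: str
    event: str
    op_prob: float    # operational-profile probability of exercising this transition
    p_failure: float  # estimated probability that a fault mapped here causes failure

@dataclass
class EFSM:
    states: set = field(default_factory=set)
    transitions: list = field(default_factory=list)

    def add_transition(self, t: Transition) -> None:
        self.states.update({t.source, t.target})
        self.transitions.append(t)

    def reliability(self) -> float:
        # Probability that no exercised transition triggers a mapped fault,
        # weighting each transition by how often the operational profile uses it.
        r = 1.0
        for t in self.transitions:
            r *= 1.0 - t.op_prob * t.p_failure
        return r

# Toy example with made-up numbers.
m = EFSM()
m.add_transition(Transition("Idle", "Armed", "arm", op_prob=0.6, p_failure=0.0))
m.add_transition(Transition("Armed", "Trip", "sensor_high", op_prob=0.1, p_failure=0.02))
print(f"Estimated reliability: {m.reliability():.4f}")

A full EFSM additionally carries context variables and guard predicates on its transitions; the single-pass product above is only a stand-in for the execution-path analysis the paper describes.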

Keywords

Software reliability · Reliability modeling · Experimental validation · Operational profile · Finite state machine

Acknowledgments

This paper was prepared as an account of work sponsored by an agency of the U.S. Government. Neither the U.S. Government nor any agency thereof, nor any of their employees, makes any warranty, expressed or implied, or assumes any legal liability or responsibility for any third party’s use, or the results of such use, of any information, apparatus, product, or process disclosed in this report, or represents that its use by such third party would not infringe privately owned rights. The views expressed in this paper are not necessarily those of the U.S. Nuclear Regulatory Commission. We are grateful to Kevin Smearsoll and Boyuan Li for supporting this research.

Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  1. Nuclear Engineering Program, The Ohio State University, Columbus, OH, USA
  2. Oblon, McClelland, Maier & Neustadt, LLP, Washington, D.C. Metro Area, USA
  3. Department of Mechanical and Aerospace Engineering, The Ohio State University, Columbus, OH, USA