
Annals of Software Engineering, Volume 11, Issue 1, pp. 89–106

A Measurement-Based Framework for Software Reliability Improvement

  • Karama Kanoun

Abstract

Measurement-based programs for software reliability improvement require the collection and analysis of comprehensive, consistent data sets from several software projects. This paper focuses on data collection and analysis programs for software reliability improvement. We first present the objectives of data collection programs and report some success stories in software reliability improvement, then discuss the practical aspects of data collection, validation, and processing, before giving recommendations for successful data collection and analysis programs. The success stories show that gains in productivity and reliability come at almost no extra cost, and most of the time with an overall cost reduction. Data processing consists of statistical treatment of the collected data. For reliability purposes, we consider three main activities: descriptive analysis, trend analysis, and reliability evaluation. The recommendations and examples of results given at the end of the paper are based on our experience in processing failure data collected on real-life software systems. In particular, we discuss the relevance of software reliability evaluation according to the life-cycle phase considered.

Keywords: data collection, data processing, software reliability, reliability improvement
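Of the three activities named in the abstract, trend analysis is the most mechanical to illustrate. A standard tool for it is the Laplace trend test applied to failure instants: a negative factor suggests reliability growth (inter-failure times lengthening), a positive one suggests reliability decrease. The sketch below is an illustration of that general technique under common assumptions, not a reproduction of the paper's own procedure or of the SoRel tool.

```python
import math

def laplace_factor(failure_times, observation_end):
    """Laplace trend factor for failure instants observed over (0, T].

    failure_times: times at which failures occurred, each in (0, T].
    observation_end: T, the length of the observation window.

    Under the no-trend hypothesis the factor is approximately standard
    normal; values below about -2 indicate significant reliability
    growth, values above about +2 indicate reliability decrease.
    """
    n = len(failure_times)
    if n == 0:
        raise ValueError("need at least one failure time")
    mean_time = sum(failure_times) / n
    # Compare the mean failure instant with the window midpoint T/2,
    # normalised by the standard deviation T * sqrt(1 / (12 n)).
    return (mean_time - observation_end / 2) / (
        observation_end * math.sqrt(1.0 / (12 * n))
    )
```

For example, failures clustered early in the observation window (mean instant well before T/2) yield a negative factor, consistent with reliability growth during debugging; failures clustered late yield a positive one.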



Copyright information

© Kluwer Academic Publishers 2001

Authors and Affiliations

  • Karama Kanoun
    LAAS-CNRS, France
