Abstract
Software metrics are used to evaluate the software development process and the quality of the resulting product. We used five metrics during the testing phase of the Shuttle Mission Control Center (MCC) Upgrade (MCCU) at the National Aeronautics and Space Administration's (NASA) Johnson Space Center. All but one metric provided useful information. Based on our experience, we recommend using metrics during the test phase of software development and propose additional candidate metrics for further study.
Cite this article
Stark, G.E., Durst, R.C. & Pelnik, T.M. Evaluation of software testing metrics for NASA's Mission Control Center. Software Qual J 1, 115–132 (1992). https://doi.org/10.1007/BF01845743