Abstract
There has been no shortage of proposed indicators of software quality, but these indicators have engendered little confidence. Presumably this is because they fail to satisfy certain properties that are expected of them. But what are these properties?
Recently, there have been several attempts to identify properties that good quality indicators should have. Some of these attempts are directed at specific areas of software quality, such as complexity or test data adequacy, while others are general in nature. Some of the proposed properties require a subjective judgment to determine whether they are satisfied, while others can be assessed objectively. We examine some of these “meta-quality-indicators,” as they might be called, and use them to evaluate several well-known quality indicators, including popular (non-computational) complexity metrics and test data adequacy criteria.
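To make concrete the kind of indicator under evaluation, the following is a minimal sketch of one of the popular complexity metrics the paper considers: McCabe's cyclomatic complexity, V(G) = E - N + 2P for a control-flow graph with E edges, N nodes, and P connected components. The function name and graph representation below are illustrative assumptions, not taken from the paper.

# Minimal sketch (an assumption, not the paper's code): McCabe's
# cyclomatic complexity V(G) = E - N + 2P, computed from a
# control-flow graph given as an adjacency list.

def cyclomatic_complexity(cfg):
    """cfg maps each node to a list of its successor nodes.

    Assumes the control-flow graph of a single procedure, i.e.
    one connected component (P = 1).
    """
    nodes = set(cfg)
    for successors in cfg.values():
        nodes.update(successors)  # include nodes appearing only as targets
    edges = sum(len(s) for s in cfg.values())
    return edges - len(nodes) + 2  # E - N + 2P with P = 1

# Example: an if-then-else whose two arms rejoin at a single exit.
cfg = {
    "entry": ["then", "else"],
    "then": ["exit"],
    "else": ["exit"],
    "exit": [],
}
print(cyclomatic_complexity(cfg))  # prints 2 (one decision point)

A meta-quality-indicator would then ask questions about the measure itself, for example whether it can distinguish programs that differ in ways the measure is supposed to capture, rather than about any particular program's score.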
Copyright information
© 1992 Springer-Verlag New York, Inc.
Cite this paper
Zweben, S.H. (1992). Evaluating the Quality of Software Quality Indicators. In: Page, C., LePage, R. (eds) Computing Science and Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-2856-1_34
DOI: https://doi.org/10.1007/978-1-4612-2856-1_34
Publisher Name: Springer, New York, NY
Print ISBN: 978-0-387-97719-5
Online ISBN: 978-1-4612-2856-1