
Evaluating the Quality of Software Quality Indicators

  • Conference paper
Computing Science and Statistics

Abstract

There has been no shortage of proposed indicators of software quality, but these indicators have not engendered a great deal of confidence. Presumably this is because the proposed indicators fail to satisfy certain properties that are expected of them. But what are these properties?

Recently, there have been several attempts to identify properties that good quality indicators should have. Some of these attempts are directed at specific areas of software quality, such as complexity or test data adequacy, while others are of a general nature. For certain properties, a subjective judgment is required to determine whether the property is met, while other properties can be assessed objectively. We examine some of these “meta-quality-indicators,” as they might be called. We also evaluate several well-known quality indicators, including popular (non-computational) complexity metrics and test data adequacy criteria, using these meta-quality-indicators.
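To make one of the indicators discussed here concrete, the sketch below (in Python, purely illustrative and not drawn from the paper) computes McCabe's cyclomatic complexity, V(G) = E - N + 2P, for a control-flow graph given as an adjacency list. The graph encoding, the function name, and the example routine are assumptions made only for this illustration.

# Minimal sketch, not from the paper: McCabe's cyclomatic complexity
# V(G) = E - N + 2P, computed from a control-flow graph given as an
# adjacency list. The graph encoding and function name are assumptions
# made purely for illustration.

def cyclomatic_complexity(cfg, components=1):
    """Return V(G) = E - N + 2P for a control-flow graph.

    cfg        -- dict mapping each node to a list of successor nodes
    components -- number of connected components P (1 for one routine)
    """
    nodes = set(cfg)
    for successors in cfg.values():
        nodes.update(successors)
    edges = sum(len(successors) for successors in cfg.values())
    return edges - len(nodes) + 2 * components

# Control-flow graph of a routine with a single if/else decision:
# entry -> test, test -> then | else, then -> exit, else -> exit
example_cfg = {
    "entry": ["test"],
    "test": ["then", "else"],
    "then": ["exit"],
    "else": ["exit"],
}
print(cyclomatic_complexity(example_cfg))  # 5 edges - 5 nodes + 2 = 2

A meta-quality-indicator of the kind studied here would then ask, for example, whether such a measure is insensitive to renaming or behaves sensibly when program bodies are composed, questions that can often be settled objectively.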





Copyright information

© 1992 Springer-Verlag New York, Inc.

About this paper

Cite this paper

Zweben, S.H. (1992). Evaluating the Quality of Software Quality Indicators. In: Page, C., LePage, R. (eds) Computing Science and Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-2856-1_34

  • DOI: https://doi.org/10.1007/978-1-4612-2856-1_34

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-0-387-97719-5

  • Online ISBN: 978-1-4612-2856-1

