
ASSISTing Management Decisions in the Software Inspection Process


Abstract

Software inspection is a valuable technique for detecting defects in the products of software development. One avenue of research concerns the development of computer support tools intended to yield further improvements in the software inspection process. A number of prototype systems have been developed, but to date all suffer from fundamental limitations. One of the most serious is the lack of facilities to monitor the inspection process and thereby provide the moderator with quantitative information on its performance. This paper begins by briefly describing research undertaken at Strathclyde on computer support for software inspection (ASSIST). It then outlines a measurement component, based on capture–recapture techniques, that has been added to the system to help the moderator decide when to terminate the inspection process. Finally, an evaluation of different capture–recapture models is presented, using data collected from software inspection experiments, demonstrating that the approach is viable and can contribute to improved inspection performance.
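The paper itself describes the specific capture–recapture models evaluated within ASSIST; the sketch below is only a rough illustration of the underlying idea, applying a simple two-inspector capture–recapture estimate (Chapman's bias-corrected Lincoln–Petersen estimator) to hypothetical defect lists. The function name, defect labels, and data are illustrative assumptions, not part of ASSIST.

```python
"""Hedged sketch: estimating total and remaining defects from two
independent inspectors using a two-sample capture-recapture model
(Chapman's bias-corrected Lincoln-Petersen estimator).

This is NOT the ASSIST implementation; names and data are illustrative."""


def estimate_total_defects(defects_a: set[str], defects_b: set[str]) -> float:
    """Return a Chapman estimate of the total defect population, given the
    defect sets found independently by two inspectors."""
    n1 = len(defects_a)             # defects "captured" by inspector A
    n2 = len(defects_b)             # defects "captured" by inspector B
    m = len(defects_a & defects_b)  # defects found by both ("recaptured")
    # Chapman estimator: (n1 + 1)(n2 + 1)/(m + 1) - 1, defined even when m = 0
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1


if __name__ == "__main__":
    found_by_a = {"D1", "D2", "D3", "D5", "D8"}
    found_by_b = {"D2", "D3", "D4", "D8"}

    total_found = len(found_by_a | found_by_b)
    estimated_total = estimate_total_defects(found_by_a, found_by_b)
    remaining = max(0.0, estimated_total - total_found)

    print(f"Defects found so far:      {total_found}")
    print(f"Estimated total defects:   {estimated_total:.1f}")
    print(f"Estimated remaining:       {remaining:.1f}")
```

In a setting like the one the abstract describes, a moderator could compare such an estimate of remaining defects against a project-specific threshold when deciding whether to re-inspect the product or exit the inspection.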






Cite this article

Miller, J., Macdonald, F. & Ferguson, J. ASSISTing Management Decisions in the Software Inspection Process. Information Technology and Management 3, 67–83 (2002). https://doi.org/10.1023/A:1013112826330

