Validating the defect detection performance advantage of group designs for software reviews: Report of a laboratory experiment using program code

  • Lesley Pek Wee Land
  • Chris Sauer
  • Ross Jeffery
Regular Sessions: Empirical Studies
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1301)

Abstract

It is widely accepted that software development technical reviews (SDTRs) are a useful technique for finding defects in software products. Recent debates have centred on the need for review meetings (Porter and Votta 1994, Porter et al 1995, McCarthy et al 1996, Lanubile and Visaggio 1996). This paper reports an experiment conducted to investigate the performance advantage of interacting groups over average individuals and artificial (nominal) groups. We found that interacting groups outperform both average individuals and nominal groups. The source of the interacting groups' advantage lies not in finding defects, but in discriminating between true defects and false positives. The practical implication of this research is that nominal groups are a viable alternative review design in situations where individuals report few false positives.
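
To make the nominal-group comparison concrete, the sketch below is a minimal illustration in Python of how a nominal group's score is conventionally pooled from its members' individual results. The defect labels and reviewer reports are hypothetical, not the experiment's actual materials: the nominal group is credited with the union of everything any member reported, so individual false positives carry straight through, whereas an interacting group can discard them in discussion.

    # Illustrative only: hypothetical true-defect set and reviewer reports.
    TRUE_DEFECTS = {"D1", "D2", "D3", "D4", "D5"}

    # Each reviewer reports a set of suspected defects; F1..F3 are false positives.
    reports = {
        "reviewer_a": {"D1", "D2", "F1"},
        "reviewer_b": {"D2", "D3", "F2"},
        "reviewer_c": {"D1", "D4", "F1", "F3"},
    }

    # Nominal-group score: pool the union of all individual reports.
    # There is no meeting, so nothing filters out the false positives.
    pooled = set().union(*reports.values())

    true_found = pooled & TRUE_DEFECTS        # {"D1", "D2", "D3", "D4"}
    false_positives = pooled - TRUE_DEFECTS   # {"F1", "F2", "F3"}

    print(f"True defects found: {len(true_found)} of {len(TRUE_DEFECTS)}")
    print(f"False positives carried forward: {len(false_positives)}")

Under this pooling rule, the nominal group matches an interacting group on defects found but cannot reject F1, F2 and F3; this is why the recommendation of nominal groups is conditional on individuals producing few false positives.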

Keywords

Software Development Technical Review, defect detection, interacting group, nominal group, false positives


References

  1. Basili, V. R., Green, S., Laitenberger, O. U., Lanubile, F., Shull, F., Sorumgaard, S., Zelkowitz, M. V., The Empirical Investigation of Perspective-Based Reading, Empirical Software Engineering, 1(2), 1996.
  2. Bottger, P. C., Yetton, P. W., Improving Group Performance by Training in Individual Problem Solving, Journal of Applied Psychology, 42, 234–249, 1988.
  3. Eick, S. G., Loader, C. R., Long, M. D., Votta, L. G., Wiel, S. V., Estimating Software Fault Content Before Coding, Proceedings of the 14th International Conference on Software Engineering, 59–65, May 11–15, 1992.
  4. Fagan, M. E., Design and Code Inspections to Reduce Errors in Program Development, IBM Systems Journal, 15(3), 1976.
  5. Freedman, D. P., Weinberg, G. M., Handbook of Walkthroughs, Inspections, and Technical Reviews: Evaluating Programs, Projects, and Products, Third Edition, Dorset House Publishing, New York, 1990.
  6. Gilb, T., Graham, D., Software Inspection, Addison-Wesley, 1993.
  7. Jelinski, Z., Moranda, P. B., Applications of a Probability-Based Model to a Code Reading Experiment, IEEE Symposium on Computer Software Reliability, New York City, 1973.
  8. Johnson, P. M., Tjahjono, D., Assessing Software Review Meetings: A Controlled Experimental Study Using CSRS, Technical Report 96-06, Department of Information and Computer Sciences, University of Hawaii, Honolulu, HI, USA, 1996.
  9. Judd, C. M., Smith, E. R., Kidder, L. H., Research Methods in Social Relations, Sixth Edition, Harcourt Brace Jovanovich College Publishers, 1991.
  10. Kamsties, E., Lott, C., An Empirical Evaluation of Three Defect Detection Techniques, Proceedings of the 5th European Software Engineering Conference, September 1995.
  11. Kim, L. P. W., Sauer, C., Jeffery, R., A Framework of Software Development Technical Reviews, in Lee, M., Barta, B.-Z., Juliff, P. (eds.), Software Quality and Productivity: Theory, Practice, Education and Training, Chapman and Hall, 294–299, IFIP 1995.
  12. Knight, J. C., Myers, A. N., An Improved Inspection Technique, Communications of the ACM, 36(11), November 1993.
  13. Lanubile, F., Visaggio, G., Assessing Defect Detection Methods for Software Requirements Inspections Through External Replication, International Software Engineering Research Network, Technical Report ISERN-96-01, January 1996.
  14. Lau, L. P. W., Sauer, C., Jeffery, R., Validating the Defect Detection Performance Advantage of Group Designs for Software Reviews: Report of a Replicated Experiment, Technical Report 9618, Centre for Advanced Empirical Software Research, The University of New South Wales, 1997.
  15. Lorge, I., Fox, D., Davitz, J., Brenner, M., A Survey of Studies Contrasting the Quality of Group Performance and Individual Performance, Psychological Bulletin, 55, 337–371, 1958.
  16. McCarthy, P., Porter, A., Siy, H., Votta, L., An Experiment to Assess Cost-Benefits of Inspection Meetings and their Alternatives: A Pilot Study, Proceedings of the Third International Software Metrics Symposium, Berlin, Germany, March 25–26, 1996.
  17. Myers, G. J., A Controlled Experiment in Program Testing and Code Walkthroughs/Inspections, Communications of the ACM, 21(9), September 1978.
  18. Norušis, M. J., SPSS 6.1 Guide to Data Analysis, Prentice Hall, Englewood Cliffs, New Jersey, 1995.
  19. Parnas, D. L., Weiss, D. M., Active Design Reviews: Principles and Practices, The Journal of Systems and Software, 7, 259–265, 1987.
  20. Porter, A. A., Votta, L. G., An Experiment to Assess Different Defect Detection Methods for Software Requirements Inspections, Proceedings of the Sixteenth International Conference on Software Engineering, Sorrento, Italy, May 1994.
  21. Porter, A. A., Votta, L. G., Basili, V. R., Comparing Detection Methods for Software Requirements Inspections: A Replicated Experiment, IEEE Transactions on Software Engineering, 21(6), 563–575, June 1995.
  22. Sauer, C., Jeffery, R., Lau, L. P. W., Yetton, P., A Behaviourally Motivated Programme for Empirical Research into Software Development Technical Reviews, Technical Report 9615, Centre for Advanced Empirical Software Research, School of Information Systems, University of New South Wales, Sydney, 1996.
  23. Schneider, G. M., Martin, J., Tsai, W. T., An Experimental Study of Fault Detection in User Requirements Documents, ACM Transactions on Software Engineering and Methodology, 1(2), April 1992.
  24. Shaw, M. E., Group Dynamics: The Psychology of Small Group Behaviour, Third Edition, McGraw-Hill, 1981.
  25. Siy, H. P., Identifying the Mechanisms Driving Code Inspection Costs and Benefits, PhD Dissertation, 1996.
  26. Steiner, I. D., Group Process and Productivity, Academic Press, New York, 1972.
  27. Strauss, S. H., Ebenau, R. G., Software Inspection Process, McGraw-Hill, 1994.
  28. Votta, L. G., Does Every Inspection Need a Meeting?, Proceedings of the ACM SIGSOFT Symposium on Foundations of Software Engineering, December 1993.
  29. Welburn, T., Structured COBOL: Fundamentals and Style, Mayfield Publishing Company, 1981.
  30. Yetton, P. W., Bottger, P. C., Individual Versus Group Problem Solving: An Experimental Test of a Best-Member Strategy, Organizational Behavior and Human Performance, 307–321, June 1982.
  31. Yourdon, E., Structured Walkthroughs, Fourth Edition, Prentice-Hall, 1989.

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Lesley Pek Wee Land (1)
  • Chris Sauer (2)
  • Ross Jeffery (1)

  1. School of Information Systems, University of New South Wales, Sydney, Australia
  2. Australian Graduate School of Management, Fujitsu Centre for Managing Information Technology in Organisations, University of New South Wales, Sydney, Australia
