
Empirical Software Engineering, Volume 9, Issue 1–2, pp 77–110

Evaluation of Usage-Based Reading—Conclusions after Three Experiments

  • Thomas Thelin
  • Per Runeson
  • Claes Wohlin
  • Thomas Olsson
  • Carina Andersson

Abstract

Software inspections have been introduced in software engineering in order to detect faults before testing is performed. Reading techniques provide reviewers in software inspections with guidelines on how to check the documents under inspection. Several reading techniques with different purposes have been introduced and empirically evaluated. In this paper, we describe a reading technique specifically aimed at detecting faults that are severe from a user’s point of view. The technique is named usage-based reading (UBR) and can be used to inspect any software artefact. In the series of experiments, a high-level design document is used. The main focus of the paper is the third experiment, which investigates the information needed for UBR in the individual preparation and the meeting parts of software inspections. Hence, the paper discusses (1) the series of three UBR experiments, (2) the individual preparation part of the third experiment, and (3) the meeting part of the third experiment. Results are presented for each of these three parts. The main results are that (1) UBR is an efficient and effective reading technique for user-focused software inspections, (2) UBR is more efficient and effective if the information it uses is developed prior to, rather than during, the individual preparation, and (3) the inspection meeting increases effectiveness but decreases efficiency. In summary, the empirical evidence shows that UBR is an efficient and effective reading technique for software organizations that produce software for which user-perceived quality is important.

Keywords: empirical study, reading technique, software inspection, usage-based reading



Copyright information

© Kluwer Academic Publishers 2004

Authors and Affiliations

  • Thomas Thelin (1)
  • Per Runeson (1)
  • Claes Wohlin (2)
  • Thomas Olsson (3)
  • Carina Andersson (1)
  1. Department of Communication Systems, Lund University, Lund, Sweden
  2. Department of Software Eng. and Computer Science, Blekinge Institute of Technology, Ronneby, Sweden
  3. Fraunhofer IESE, Kaiserslautern, Germany
