ISERN: A Distributed Inspection Experiment

  • Marcus Ciolkowski
  • Stefan Biffl
  • Dieter Rombach

Abstract

In the long run, high software quality is a prerequisite for software companies to survive. Inspections of software products help to detect and remove defects early and cost-effectively during software development, and thus help to enhance the quality of software products. A main goal in practice is to select a suitable inspection technique for a given situation. Empirical studies are necessary to support such a selection, as they provide real-world data from a documented context.

The results of individual empirical studies, however, are not easily transferable to other situations without knowledge about their context. In particular, it is unclear to what degree experimental results depend on their context, that is, which context factors influence the results. Insight into how results vary across contexts can be gained, for example, by repeating, or replicating, experiments in those contexts. Distributed experiments that are conducted in many different contexts and that systematically measure context factors are therefore especially desirable, as they can improve the generalization of experimental results.

This paper describes a concept for a distributed inspection experiment that is being planned and conducted by the International Software Engineering Research Network (ISERN), in which about 20 organizations from industry and research collaborate. The first step of the empirical study is a broad survey of software companies to reveal the state of the practice regarding inspection processes and inspection techniques. The second step is to conduct experiments with documents from within ISERN and state-of-the-art inspection techniques, as well as with the participating organizations' own documents and inspection techniques. The goal of the distributed experiment is to characterize where inspection techniques vary in practice and to observe the effectiveness of these techniques with respect to relevant context factors. From a research perspective, the experiment can help to find out how experimental results can be generalized more effectively.



Copyright information

© Springer Fachmedien Wiesbaden 2002

Authors and Affiliations

  • Marcus Ciolkowski (1)
  • Stefan Biffl (2)
  • Dieter Rombach (1, 2)

  1. Universität Kaiserslautern, Germany
  2. Fraunhofer Institut Experimentelles Software Engineering, Kaiserslautern, Germany
