ISERN: A Distributed Inspection Experiment
In the long run, high software quality is a prerequisite for software companies to survive. Inspections of software products help to detect and remove errors early and cost-effectively in software development, and thus help to enhance the quality of software products. One main goal in practice is to select a suitable inspection technique for a given situation. Empirical studies are necessary for making such a selection, as they provide real-world data from a documented context.
The results of particular empirical studies, however, are not easily transferable to other situations without knowledge about their context. In particular, it is unclear to what degree experimental results depend on their context, that is, which context factors influence the results. Insight into experimental results in different contexts can, for example, be gained by repeating, or replicating, experiments in these contexts. Distributed experiments that are conducted in many different contexts and that systematically measure context factors are therefore especially desirable, as they can improve the generalization of experimental results.
This paper describes a concept for a distributed inspection experiment planned and conducted by the International Software Engineering Research Network (ISERN), in which about 20 organizations from industry and research collaborate. The first step of the empirical study is a broad survey of software companies to reveal the state of the practice regarding inspection processes and inspection techniques. The second step is to conduct experiments with documents from within ISERN and state-of-the-art inspection techniques, as well as with the participating organizations’ own documents and inspection techniques. The goal of the distributed experiment is to describe where inspection techniques vary in practice and to observe the effectiveness of these inspection techniques with respect to relevant context factors. From a research perspective, this experiment can help to find out how experimental results can be generalized more effectively.