Abstract
As the demand for empirical evidence supporting claims of improvement in software development and evolution has increased, so has the use of empirical methods such as case studies. In case study methodology, various types of triangulation are commonly recommended techniques for increasing validity. This study investigates a multiple data source case study with the objective of identifying whether more findings, more trustworthy findings, and different findings are made using multiple data source triangulation than would have been made had a single data source been used. The case study under investigation analyses key lead-time success factors for a software evolution project in a large organization developing eBusiness systems with high-availability, high-throughput transaction characteristics. By tracing each finding in that study to the individual pieces of evidence motivating it, it is suggested that a multiple data source exploratory case study can have higher validity than a single data source study. It is concluded that a careful case study design with multiple sources of evidence can yield not only better justified findings than a single data source study, but also different findings. Thus this study provides empirically derived evidence that a multiple data source case study is more trustworthy than a comparable single data source case study.
Cite this article
Bratthall, L., Jørgensen, M. Can you Trust a Single Data Source Exploratory Software Engineering Case Study?. Empirical Software Engineering 7, 9–26 (2002). https://doi.org/10.1023/A:1014866909191