
Empirical Strategies

  • Claes Wohlin
  • Per Runeson
  • Martin Höst
  • Magnus C. Ohlsson
  • Björn Regnell
  • Anders Wesslén
Chapter

Abstract

There are two types of research paradigms that have different approaches to empirical studies. Exploratory research is concerned with studying objects in their natural setting and letting the findings emerge from the observations. This implies that a flexible research design [1] is needed to adapt to changes in the observed phenomenon. Flexible design research is also referred to as qualitative research, as it is primarily informed by qualitative data. Inductive research attempts to interpret a phenomenon based on the explanations that people bring forward. It is concerned with discovering the causes noticed by the subjects in the study and with understanding their view of the problem at hand. The subject is the person who takes part in an empirical study in order to evaluate an object.

Keywords

Technology Transfer · Software Engineering · Systematic Literature Review · Empirical Strategy · Case Study Research

References

  1. Anastas, J.W., MacDonald, M.L.: Research Design for Social Work and the Human Services, 2nd edn. Columbia University Press, New York (2000)
  2. Andersson, C., Runeson, P.: A spiral process model for case studies on software quality monitoring – method and metrics. Softw. Process: Improv. Pract. 12(2), 125–140 (2007). doi: 10.1002/spip.311
  3. Andrews, A.A., Pradhan, A.S.: Ethical issues in empirical software engineering: the limits of policy. Empir. Softw. Eng. 6(2), 105–110 (2001)
  4. American Psychological Association: Ethical principles of psychologists and code of conduct. Am. Psychol. 47, 1597–1611 (1992)
  5. Avison, D., Baskerville, R., Myers, M.: Controlling action research projects. Inf. Technol. People 14(1), 28–45 (2001). doi: 10.1108/09593840110384762
  6. Babbie, E.R.: Survey Research Methods. Wadsworth, Belmont (1990)
  7. Basili, V.R.: Quantitative evaluation of software engineering methodology. In: Proceedings of the First Pan Pacific Computer Conference, vol. 1, pp. 379–398. Australian Computer Society, Melbourne (1985)
  8. Basili, V.R.: Software development: a paradigm for the future. In: Proceedings of the 13th Annual International Computer Software and Applications Conference, COMPSAC'89, Orlando, pp. 471–485. IEEE Computer Society Press, Washington (1989)
  9. Basili, V.R.: The experimental paradigm in software engineering. In: Rombach, H.D., Basili, V.R., Selby, R.W. (eds.) Experimental Software Engineering Issues: Critical Assessment and Future Directions. Lecture Notes in Computer Science, vol. 706. Springer, Berlin/Heidelberg (1993)
  10. Basili, V.R.: Evolving and packaging reading technologies. J. Syst. Softw. 38(1), 3–12 (1997)
  11. Basili, V.R., Weiss, D.M.: A methodology for collecting valid software engineering data. IEEE Trans. Softw. Eng. 10(6), 728–737 (1984)
  12. Basili, V.R., Selby, R.W.: Comparing the effectiveness of software testing strategies. IEEE Trans. Softw. Eng. 13(12), 1278–1298 (1987)
  13. Basili, V.R., Rombach, H.D.: The TAME project: towards improvement-oriented software environments. IEEE Trans. Softw. Eng. 14(6), 758–773 (1988)
  14. Basili, V.R., Green, S.: Software process evaluation at the SEL. IEEE Softw. 11(4), 58–66 (1994)
  15. Basili, V.R., Selby, R.W., Hutchens, D.H.: Experimentation in software engineering. IEEE Trans. Softw. Eng. 12(7), 733–743 (1986)
  16. Basili, V.R., Caldiera, G., Rombach, H.D.: Experience factory. In: Marciniak, J.J. (ed.) Encyclopedia of Software Engineering, pp. 469–476. Wiley, New York (1994)
  17. Basili, V.R., Caldiera, G., Rombach, H.D.: Goal Question Metric paradigm. In: Marciniak, J.J. (ed.) Encyclopedia of Software Engineering, pp. 528–532. Wiley, New York (1994)
  18. Basili, V.R., Green, S., Laitenberger, O., Lanubile, F., Shull, F., Sørumgård, S., Zelkowitz, M.V.: The empirical investigation of perspective-based reading. Empir. Softw. Eng. 1(2), 133–164 (1996)
  19. Basili, V.R., Green, S., Laitenberger, O., Lanubile, F., Shull, F., Sørumgård, S., Zelkowitz, M.V.: Lab package for the empirical investigation of perspective-based reading. Technical report, University of Maryland (1998). http://www.cs.umd.edu/projects/SoftEng/ESEG/manual/pbr_package/manual.html
  20. Basili, V.R., Shull, F., Lanubile, F.: Building knowledge through families of experiments. IEEE Trans. Softw. Eng. 25(4), 456–473 (1999)
  21. Baskerville, R.L., Wood-Harper, A.T.: A critical perspective on action research as a method for information systems research. J. Inf. Technol. 11(3), 235–246 (1996). doi: 10.1080/026839696345289
  22. Benbasat, I., Goldstein, D.K., Mead, M.: The case research strategy in studies of information systems. MIS Q. 11(3), 369 (1987). doi: 10.2307/248684
  23. Bergman, B., Klefsjö, B.: Quality from Customer Needs to Customer Satisfaction. Studentlitteratur, Lund (2010)
  24. Brereton, P., Kitchenham, B.A., Budgen, D., Turner, M., Khalil, M.: Lessons from applying the systematic literature review process within the software engineering domain. J. Syst. Softw. 80(4), 571–583 (2007). doi: 10.1016/j.jss.2006.07.009
  25. Brereton, P., Kitchenham, B.A., Budgen, D.: Using a protocol template for case study planning. In: Proceedings of the 12th International Conference on Evaluation and Assessment in Software Engineering. University of Bari, Italy (2008)
  26. Briand, L.C., Differding, C.M., Rombach, H.D.: Practical guidelines for measurement-based process improvement. Softw. Process: Improv. Pract. 2(4), 253–280 (1996)
  27. Briand, L.C., El Emam, K., Morasca, S.: On the application of measurement theory in software engineering. Empir. Softw. Eng. 1(1), 61–88 (1996)
  28. Briand, L.C., Bunse, C., Daly, J.W.: A controlled experiment for evaluating quality guidelines on the maintainability of object-oriented designs. IEEE Trans. Softw. Eng. 27(6), 513–530 (2001)
  29. British Psychological Society: Ethical principles for conducting research with human participants. Psychologist 6(1), 33–35 (1993)
  30. Budgen, D., Kitchenham, B.A., Charters, S., Turner, M., Brereton, P., Linkman, S.: Presenting software engineering results using structured abstracts: a randomised experiment. Empir. Softw. Eng. 13, 435–468 (2008). doi: 10.1007/s10664-008-9075-7
  31. Budgen, D., Burn, A.J., Kitchenham, B.A.: Reporting computing projects through structured abstracts: a quasi-experiment. Empir. Softw. Eng. 16(2), 244–277 (2011). doi: 10.1007/s10664-010-9139-3
  32. Campbell, D.T., Stanley, J.C.: Experimental and Quasi-experimental Designs for Research. Houghton Mifflin Company, Boston (1963)
  33. Chrissis, M.B., Konrad, M., Shrum, S.: CMMI: Guidelines for process integration and product improvement. Technical report, SEI (2003)
  34. Ciolkowski, M., Differding, C.M., Laitenberger, O., Münch, J.: Empirical investigation of perspective-based reading: a replicated experiment. Technical Report 97-13, ISERN (1997)
  35. Coad, P., Yourdon, E.: Object-Oriented Design, 1st edn. Prentice-Hall, Englewood (1991)
  36. Cohen, J.: Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychol. Bull. 70, 213–220 (1968)
  37. Cook, T.D., Campbell, D.T.: Quasi-experimentation – Design and Analysis Issues for Field Settings. Houghton Mifflin Company, Boston (1979)
  38. Corbin, J., Strauss, A.: Basics of Qualitative Research, 3rd edn. SAGE, Los Angeles (2008)
  39. Cruzes, D.S., Dybå, T.: Research synthesis in software engineering: a tertiary study. Inf. Softw. Technol. 53(5), 440–455 (2011). doi: 10.1016/j.infsof.2011.01.004
  40. Dalkey, N., Helmer, O.: An experimental application of the Delphi method to the use of experts. Manag. Sci. 9(3), 458–467 (1963)
  41. DeMarco, T.: Controlling Software Projects. Yourdon Press, New York (1982)
  42. Deming, W.E.: Out of the Crisis. MIT Center for Advanced Engineering Study, MIT Press, Cambridge, MA (1986)
  43. Dieste, O., Grimán, A., Juristo, N.: Developing search strategies for detecting relevant experiments. Empir. Softw. Eng. 14, 513–539 (2009). doi: 10.1007/s10664-008-9091-7
  44. Dittrich, Y., Rönkkö, K., Eriksson, J., Hansson, C., Lindeberg, O.: Cooperative method development. Empir. Softw. Eng. 13(3), 231–260 (2007). doi: 10.1007/s10664-007-9057-1
  45. Doolan, E.P.: Experiences with Fagan's inspection method. Softw. Pract. Exp. 22(2), 173–182 (1992)
  46. Dybå, T., Dingsøyr, T.: Empirical studies of agile software development: a systematic review. Inf. Softw. Technol. 50(9–10), 833–859 (2008). doi: 10.1016/j.infsof.2008.01.006
  47. Dybå, T., Dingsøyr, T.: Strength of evidence in systematic reviews in software engineering. In: Proceedings of the 2nd ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, ESEM '08, Kaiserslautern, pp. 178–187. ACM, New York (2008). doi: 10.1145/1414004.1414034
  48. Dybå, T., Kitchenham, B.A., Jørgensen, M.: Evidence-based software engineering for practitioners. IEEE Softw. 22, 58–65 (2005). doi: 10.1109/MS.2005.6
  49. Dybå, T., Kampenes, V.B., Sjøberg, D.I.K.: A systematic review of statistical power in software engineering experiments. Inf. Softw. Technol. 48(8), 745–755 (2006). doi: 10.1016/j.infsof.2005.08.009
  50. Easterbrook, S., Singer, J., Storey, M.-A., Damian, D.: Selecting empirical methods for software engineering research. In: Shull, F., Singer, J., Sjøberg, D.I.K. (eds.) Guide to Advanced Empirical Software Engineering. Springer, London (2008)
  51. Eick, S.G., Loader, C.R., Long, M.D., Votta, L.G., Vander Wiel, S.A.: Estimating software fault content before coding. In: Proceedings of the 14th International Conference on Software Engineering, Melbourne, pp. 59–65. ACM Press, New York (1992)
  52. Eisenhardt, K.M.: Building theories from case study research. Acad. Manag. Rev. 14(4), 532 (1989). doi: 10.2307/258557
  53. Endres, A., Rombach, H.D.: A Handbook of Software and Systems Engineering – Empirical Observations, Laws and Theories. Pearson Addison-Wesley, Harlow/New York (2003)
  54. Fagan, M.E.: Design and code inspections to reduce errors in program development. IBM Syst. J. 15(3), 182–211 (1976)
  55. Fenton, N.: Software measurement: a necessary scientific basis. IEEE Trans. Softw. Eng. 20(3), 199–206 (1994)
  56. Fenton, N., Pfleeger, S.L.: Software Metrics: A Rigorous and Practical Approach, 2nd edn. International Thomson Computer Press, London (1996)
  57. Fenton, N., Pfleeger, S.L., Glass, R.: Science and substance: a challenge to software engineers. IEEE Softw. 11, 86–95 (1994)
  58. Fink, A.: The Survey Handbook, 2nd edn. SAGE, Thousand Oaks/London (2003)
  59. Flyvbjerg, B.: Five misunderstandings about case-study research. In: Qualitative Research Practice, concise paperback edn., pp. 390–404. SAGE, London (2007)
  60. Frigge, M., Hoaglin, D.C., Iglewicz, B.: Some implementations of the boxplot. Am. Stat. 43(1), 50–54 (1989)
  61. Fusaro, P., Lanubile, F., Visaggio, G.: A replicated experiment to assess requirements inspection techniques. Empir. Softw. Eng. 2(1), 39–57 (1997)
  62. Glass, R.L.: The software research crisis. IEEE Softw. 11, 42–47 (1994)
  63. Glass, R.L., Vessey, I., Ramesh, V.: Research in software engineering: an analysis of the literature. Inf. Softw. Technol. 44(8), 491–506 (2002). doi: 10.1016/S0950-5849(02)00049-6
  64. Gómez, O.S., Juristo, N., Vegas, S.: Replication types in experimental disciplines. In: Proceedings of the 4th ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, Bolzano-Bozen (2010)
  65. Gorschek, T., Wohlin, C.: Requirements abstraction model. Requir. Eng. 11, 79–101 (2006). doi: 10.1007/s00766-005-0020-7
  66. Gorschek, T., Garre, P., Larsson, S., Wohlin, C.: A model for technology transfer in practice. IEEE Softw. 23(6), 88–95 (2006)
  67. Gorschek, T., Garre, P., Larsson, S., Wohlin, C.: Industry evaluation of the requirements abstraction model. Requir. Eng. 12, 163–190 (2007). doi: 10.1007/s00766-007-0047-z
  68. Grady, R.B., Caswell, D.L.: Software Metrics: Establishing a Company-Wide Program. Prentice-Hall, Englewood (1994)
  69. Grant, E.E., Sackman, H.: An exploratory investigation of programmer performance under on-line and off-line conditions. IEEE Trans. Hum. Factors Electron. HFE-8(1), 33–48 (1967)
  70. Gregor, S.: The nature of theory in information systems. MIS Q. 30(3), 491–506 (2006)
  71. Hall, T., Flynn, V.: Ethical issues in software engineering research: a survey of current practice. Empir. Softw. Eng. 6, 305–317 (2001)
  72. Hannay, J.E., Sjøberg, D.I.K., Dybå, T.: A systematic review of theory use in software engineering experiments. IEEE Trans. Softw. Eng. 33(2), 87–107 (2007). doi: 10.1109/TSE.2007.12
  73. Hannay, J.E., Dybå, T., Arisholm, E., Sjøberg, D.I.K.: The effectiveness of pair programming: a meta-analysis. Inf. Softw. Technol. 51(7), 1110–1122 (2009). doi: 10.1016/j.infsof.2009.02.001
  74. Hayes, W.: Research synthesis in software engineering: a case for meta-analysis. In: Proceedings of the 6th International Software Metrics Symposium, Boca Raton, pp. 143–151 (1999)
  75. Hetzel, B.: Making Software Measurement Work: Building an Effective Measurement Program. Wiley, New York (1993)
  76. Hevner, A.R., March, S.T., Park, J., Ram, S.: Design science in information systems research. MIS Q. 28(1), 75–105 (2004)
  77. Höst, M., Regnell, B., Wohlin, C.: Using students as subjects – a comparative study of students and professionals in lead-time impact assessment. Empir. Softw. Eng. 5(3), 201–214 (2000)
  78. Höst, M., Wohlin, C., Thelin, T.: Experimental context classification: incentives and experience of subjects. In: Proceedings of the 27th International Conference on Software Engineering, St. Louis, pp. 470–478 (2005)
  79. Höst, M., Runeson, P.: Checklists for software engineering case study research. In: Proceedings of the 1st International Symposium on Empirical Software Engineering and Measurement, Madrid, pp. 479–481 (2007)
  80. Hove, S.E., Anda, B.: Experiences from conducting semi-structured interviews in empirical software engineering research. In: Proceedings of the 11th IEEE International Software Metrics Symposium, pp. 1–10. IEEE Computer Society Press, Los Alamitos (2005)
  81. Humphrey, W.S.: Managing the Software Process. Addison-Wesley, Reading (1989)
  82. Humphrey, W.S.: A Discipline for Software Engineering. Addison-Wesley, Reading (1995)
  83. Humphrey, W.S.: Introduction to the Personal Software Process. Addison-Wesley, Reading (1997)
  84. IEEE: IEEE standard glossary of software engineering terminology. Technical Report, IEEE Std 610.12-1990, IEEE (1990)
  85. Iversen, J.H., Mathiassen, L., Nielsen, P.A.: Managing risk in software process improvement: an action research approach. MIS Q. 28(3), 395–433 (2004)
  86. Jedlitschka, A., Pfahl, D.: Reporting guidelines for controlled experiments in software engineering. In: Proceedings of the 4th International Symposium on Empirical Software Engineering, Noosa Heads, pp. 95–104 (2005)
  87. Johnson, P.M., Tjahjono, D.: Does every inspection really need a meeting? Empir. Softw. Eng. 3(1), 9–35 (1998)
  88. Juristo, N., Moreno, A.M.: Basics of Software Engineering Experimentation. Kluwer Academic Publishers, Boston (2001)
  89. Juristo, N., Vegas, S.: The role of non-exact replications in software engineering experiments. Empir. Softw. Eng. 16, 295–324 (2011). doi: 10.1007/s10664-010-9141-9
  90. Kachigan, S.K.: Statistical Analysis: An Interdisciplinary Introduction to Univariate and Multivariate Methods. Radius Press, New York (1986)
  91. Kachigan, S.K.: Multivariate Statistical Analysis: A Conceptual Introduction, 2nd edn. Radius Press, New York (1991)
  92. Kampenes, V.B., Dybå, T., Hannay, J.E., Sjøberg, D.I.K.: A systematic review of effect size in software engineering experiments. Inf. Softw. Technol. 49(11–12), 1073–1086 (2007). doi: 10.1016/j.infsof.2007.02.015
  93. Karahasanović, A., Anda, B., Arisholm, E., Hove, S.E., Jørgensen, M., Sjøberg, D., Welland, R.: Collecting feedback during software engineering experiments. Empir. Softw. Eng. 10(2), 113–147 (2005). doi: 10.1007/s10664-004-6189-4
  94. Karlström, D., Runeson, P., Wohlin, C.: Aggregating viewpoints for strategic software process improvement. IEE Proc. Softw. 149(5), 143–152 (2002). doi: 10.1049/ip-sen:20020696
  95. Kitchenham, B.A.: The role of replications in empirical software engineering – a word of warning. Empir. Softw. Eng. 13, 219–221 (2008). doi: 10.1007/s10664-008-9061-0
  96. Kitchenham, B.A., Charters, S.: Guidelines for performing systematic literature reviews in software engineering (version 2.3). Technical Report EBSE-2007-01, Keele University and Durham University (2007)
  97. Kitchenham, B.A., Pickard, L.M., Pfleeger, S.L.: Case studies for method and tool evaluation. IEEE Softw. 12(4), 52–62 (1995)
  98. Kitchenham, B.A., Pfleeger, S.L., Pickard, L.M., Jones, P.W., Hoaglin, D.C., El Emam, K., Rosenberg, J.: Preliminary guidelines for empirical research in software engineering. IEEE Trans. Softw. Eng. 28(8), 721–734 (2002). doi: 10.1109/TSE.2002.1027796
  99. Kitchenham, B., Fry, J., Linkman, S.G.: The case against cross-over designs in software engineering. In: Proceedings of the 11th International Workshop on Software Technology and Engineering Practice, Amsterdam, pp. 65–67. IEEE Computer Society, Los Alamitos (2003)
  100. Kitchenham, B.A., Dybå, T., Jørgensen, M.: Evidence-based software engineering. In: Proceedings of the 26th International Conference on Software Engineering, Edinburgh, pp. 273–281 (2004)
  101. Kitchenham, B.A., Al-Khilidar, H., Babar, M.A., Berry, M., Cox, K., Keung, J., Kurniawati, F., Staples, M., Zhang, H., Zhu, L.: Evaluating guidelines for reporting empirical software engineering studies. Empir. Softw. Eng. 13(1), 97–121 (2007). doi: 10.1007/s10664-007-9053-5
  102. Kitchenham, B.A., Jeffery, D.R., Connaughton, C.: Misleading metrics and unsound analyses. IEEE Softw. 24, 73–78 (2007). doi: 10.1109/MS.2007.49
  103. Kitchenham, B.A., Brereton, P., Budgen, D., Turner, M., Bailey, J., Linkman, S.G.: Systematic literature reviews in software engineering – a systematic literature review. Inf. Softw. Technol. 51(1), 7–15 (2009). doi: 10.1016/j.infsof.2008.09.009
  104. Kitchenham, B.A., Pretorius, R., Budgen, D., Brereton, P., Turner, M., Niazi, M., Linkman, S.: Systematic literature reviews in software engineering – a tertiary study. Inf. Softw. Technol. 52(8), 792–805 (2010). doi: 10.1016/j.infsof.2010.03.006
  105. Kitchenham, B.A., Sjøberg, D.I.K., Brereton, P., Budgen, D., Dybå, T., Höst, M., Pfahl, D., Runeson, P.: Can we evaluate the quality of software engineering experiments? In: Proceedings of the 4th ACM-IEEE International Symposium on Empirical Software Engineering and Measurement. ACM, Bolzano/Bozen (2010)
  106. Kitchenham, B.A., Budgen, D., Brereton, P.: Using mapping studies as the basis for further research – a participant-observer case study. Inf. Softw. Technol. 53(6), 638–651 (2011). doi: 10.1016/j.infsof.2010.12.011
  107. Laitenberger, O., Atkinson, C., Schlich, M., El Emam, K.: An experimental comparison of reading techniques for defect detection in UML design documents. J. Syst. Softw. 53(2), 183–204 (2000)
  108. Larsson, R.: Case survey methodology: quantitative analysis of patterns across case studies. Acad. Manag. J. 36(6), 1515–1546 (1993)
  109. Lee, A.S.: A scientific methodology for MIS case studies. MIS Q. 13(1), 33 (1989). doi: 10.2307/248698
  110. Lehman, M.M.: Programs, life cycles, and laws of software evolution. Proc. IEEE 68(9), 1060–1076 (1980)
  111. Lethbridge, T.C., Sim, S.E., Singer, J.: Studying software engineers: data collection techniques for software field studies. Empir. Softw. Eng. 10, 311–341 (2005)
  112. Linger, R.: Cleanroom process model. IEEE Softw. 11(2), 50–58 (1994)
  113. Linkman, S., Rombach, H.D.: Experimentation as a vehicle for software technology transfer – a family of software reading techniques. Inf. Softw. Technol. 39(11), 777–780 (1997)
  114. Lucas, W.A.: The case survey method: aggregating case experience. Technical Report R-1515-RC, The RAND Corporation, Santa Monica (1974)
  115. Lucas, H.C., Kaplan, R.B.: A structured programming experiment. Comput. J. 19(2), 136–138 (1976)
  116. Lyu, M.R. (ed.): Handbook of Software Reliability Engineering. McGraw-Hill, New York (1996)
  117. Maldonado, J.C., Carver, J., Shull, F., Fabbri, S., Dória, E., Martimiano, L., Mendonça, M., Basili, V.: Perspective-based reading: a replicated experiment focused on individual reviewer effectiveness. Empir. Softw. Eng. 11, 119–142 (2006). doi: 10.1007/s10664-006-5967-6
  118. Manly, B.F.J.: Multivariate Statistical Methods: A Primer, 2nd edn. Chapman and Hall, London (1994)
  119. Marascuilo, L.A., Serlin, R.C.: Statistical Methods for the Social and Behavioral Sciences. W. H. Freeman and Company, New York (1988)
  120. Miller, J.: Estimating the number of remaining defects after inspection. Softw. Test. Verif. Reliab. 9(4), 167–189 (1999)
  121. Miller, J.: Applying meta-analytical procedures to software engineering experiments. J. Syst. Softw. 54(1), 29–39 (2000)
  122. Miller, J.: Statistical significance testing: a panacea for software technology experiments? J. Syst. Softw. 73, 183–192 (2004). doi: 10.1016/j.jss.2003.12.019
  123. Miller, J.: Replicating software engineering experiments: a poisoned chalice or the holy grail. Inf. Softw. Technol. 47(4), 233–244 (2005)
  124. Miller, J., Wood, M., Roper, M.: Further experiences with scenarios and checklists. Empir. Softw. Eng. 3(1), 37–64 (1998)
  125. Montgomery, D.C.: Design and Analysis of Experiments, 5th edn. Wiley, New York (2000)
  126. Myers, G.J.: A controlled experiment in program testing and code walkthroughs/inspections. Commun. ACM 21, 760–768 (1978). doi: 10.1145/359588.359602
  127. Noblit, G.W., Hare, R.D.: Meta-Ethnography: Synthesizing Qualitative Studies. Sage Publications, Newbury Park (1988)
  128. Ohlsson, M.C., Wohlin, C.: A project effort estimation study. Inf. Softw. Technol. 40(14), 831–839 (1998)
  129. Owen, S., Brereton, P., Budgen, D.: Protocol analysis: a neglected practice. Commun. ACM 49(2), 117–122 (2006). doi: 10.1145/1113034.1113039
  130. Paulk, M.C., Curtis, B., Chrissis, M.B., Weber, C.V.: Capability maturity model for software. Technical Report CMU/SEI-93-TR-24, Software Engineering Institute, Pittsburgh (1993)
  131. Petersen, K., Feldt, R., Mujtaba, S., Mattsson, M.: Systematic mapping studies in software engineering. In: Proceedings of the 12th International Conference on Evaluation and Assessment in Software Engineering, Electronic Workshops in Computing (eWIC). BCS, University of Bari, Italy (2008)
  132. Petersen, K., Wohlin, C.: Context in industrial software engineering research. In: Proceedings of the 3rd ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, Lake Buena Vista, pp. 401–404 (2009)
  133. Pfleeger, S.L.: Experimental design and analysis in software engineering, parts 1–5. ACM SIGSOFT Softw. Eng. Notes 19(4), 16–20; 20(1), 22–26; 20(2), 14–16; 20(3), 13–15; 20 (1994)
  134. Pfleeger, S.L., Atlee, J.M.: Software Engineering: Theory and Practice, 4th edn. Pearson Prentice-Hall, Upper Saddle River (2009)
  135. Pickard, L.M., Kitchenham, B.A., Jones, P.W.: Combining empirical results in software engineering. Inf. Softw. Technol. 40(14), 811–821 (1998). doi: 10.1016/S0950-5849(98)00101-3
  136. Porter, A.A., Votta, L.G.: An experiment to assess different defect detection methods for software requirements inspections. In: Proceedings of the 16th International Conference on Software Engineering, Sorrento, pp. 103–112 (1994)
  137. Porter, A.A., Votta, L.G.: Comparing detection methods for software requirements inspection: a replicated experiment. IEEE Trans. Softw. Eng. 21(6), 563–575 (1995)
  138. Porter, A.A., Votta, L.G.: Comparing detection methods for software requirements inspections: a replication using professional subjects. Empir. Softw. Eng. 3(4), 355–380 (1998)
  139. Porter, A.A., Siy, H.P., Toman, C.A., Votta, L.G.: An experiment to assess the cost-benefits of code inspections in large scale software development. IEEE Trans. Softw. Eng. 23(6), 329–346 (1997)
  140. Potts, C.: Software engineering research revisited. IEEE Softw. 10(5), 19–28 (1993)
  141. Rainer, A.W.: The longitudinal, chronological case study research strategy: a definition, and an example from IBM Hursley Park. Inf. Softw. Technol. 53(7), 730–746 (2011)
  142. Robinson, H., Segal, J., Sharp, H.: Ethnographically-informed empirical studies of software practice. Inf. Softw. Technol. 49(6), 540–551 (2007). doi: 10.1016/j.infsof.2007.02.007
  143. Robson, C.: Real World Research: A Resource for Social Scientists and Practitioner-Researchers, 1st edn. Blackwell, Oxford/Cambridge (1993)
  144. Robson, C.: Real World Research: A Resource for Social Scientists and Practitioner-Researchers, 2nd edn. Blackwell, Oxford/Madden (2002)
  145. Runeson, P., Skoglund, M.: Reference-based search strategies in systematic reviews. In: Proceedings of the 13th International Conference on Empirical Assessment and Evaluation in Software Engineering, Electronic Workshops in Computing (eWIC). BCS, Durham University, UK (2009)
  146. Runeson, P., Höst, M., Rainer, A.W., Regnell, B.: Case Study Research in Software Engineering: Guidelines and Examples. Wiley, Hoboken (2012)
  147. Sandahl, K., Blomkvist, O., Karlsson, J., Krysander, C., Lindvall, M., Ohlsson, N.: An extended replication of an experiment for assessing methods for software requirements. Empir. Softw. Eng. 3(4), 381–406 (1998)
  148. Seaman, C.B.: Qualitative methods in empirical studies of software engineering. IEEE Trans. Softw. Eng. 25(4), 557–572 (1999)
  149. Selby, R.W., Basili, V.R., Baker, F.T.: Cleanroom software development: an empirical evaluation. IEEE Trans. Softw. Eng. 13(9), 1027–1037 (1987)
  150. Shepperd, M.: Foundations of Software Measurement. Prentice-Hall, London/New York (1995)
  151. Shneiderman, B., Mayer, R., McKay, D., Heller, P.: Experimental investigations of the utility of detailed flowcharts in programming. Commun. ACM 20, 373–381 (1977). doi: 10.1145/359605.359610
  152. Shull, F.: Developing techniques for using software documents: a series of empirical studies. Ph.D. thesis, Computer Science Department, University of Maryland, USA (1998)
  153. Shull, F., Basili, V.R., Carver, J., Maldonado, J.C., Travassos, G.H., Mendonça, M.G., Fabbri, S.: Replicating software engineering experiments: addressing the tacit knowledge problem. In: Proceedings of the 1st International Symposium on Empirical Software Engineering, Nara, pp. 7–16 (2002)
  154. Shull, F., Mendonça, M.G., Basili, V.R., Carver, J., Maldonado, J.C., Fabbri, S., Travassos, G.H., Ferreira, M.C.: Knowledge-sharing issues in experimental software engineering. Empir. Softw. Eng. 9, 111–137 (2004). doi: 10.1023/B:EMSE.0000013516.80487.33
  155. Shull, F., Carver, J., Vegas, S., Juristo, N.: The role of replications in empirical software engineering. Empir. Softw. Eng. 13, 211–218 (2008). doi: 10.1007/s10664-008-9060-1
  156. Sieber, J.E.: Protecting research subjects, employees and researchers: implications for software engineering. Empir. Softw. Eng. 6(4), 329–341 (2001)
  157. Siegel, S., Castellan, N.J.: Nonparametric Statistics for the Behavioral Sciences, 2nd edn. McGraw-Hill International Editions, New York (1988)
  158. Singer, J., Vinson, N.G.: Why and how research ethics matters to you. Yes, you! Empir. Softw. Eng. 6, 287–290 (2001). doi: 10.1023/A:1011998412776
  159. Singer, J., Vinson, N.G.: Ethical issues in empirical studies of software engineering. IEEE Trans. Softw. Eng. 28(12), 1171–1180 (2002). doi: 10.1109/TSE.2002.1158289
  160. Singh, S.: Fermat's Last Theorem. Fourth Estate, London (1997)
  161. Sjøberg, D.I.K., Hannay, J.E., Hansen, O., Kampenes, V.B., Karahasanović, A., Liborg, N.-K., Rekdal, A.C.: A survey of controlled experiments in software engineering. IEEE Trans. Softw. Eng. 31(9), 733–753 (2005). doi: 10.1109/TSE.2005.97
  162. Sjøberg, D.I.K., Dybå, T., Anda, B., Hannay, J.E.: Building theories in software engineering. In: Shull, F., Singer, J., Sjøberg, D.I.K. (eds.) Guide to Advanced Empirical Software Engineering. Springer, London (2008)
  163. Sommerville, I.: Software Engineering, 9th edn. Addison-Wesley, Wokingham/Reading (2010)
  164. Sørumgård, S.: Verification of process conformance in empirical studies of software development. Ph.D. thesis, Department of Computer and Information Science, The Norwegian University of Science and Technology, Norway (1997)
  165. Stake, R.E.: The Art of Case Study Research. SAGE Publications, Thousand Oaks (1995)
  166. Staples, M., Niazi, M.: Experiences using systematic review guidelines. J. Syst. Softw. 80(9), 1425–1437 (2007). doi: 10.1016/j.jss.2006.09.046
  167. Thelin, T., Runeson, P.: Capture-recapture estimations for perspective-based reading – a simulated experiment. In: Proceedings of the 1st International Conference on Product Focused Software Process Improvement (PROFES), Oulu, pp. 182–200 (1999)
  168. Thelin, T., Runeson, P., Wohlin, C.: An experimental comparison of usage-based and checklist-based reading. IEEE Trans. Softw. Eng. 29(8), 687–704 (2003). doi: 10.1109/TSE.2003.1223644
  169. Tichy, W.F.: Should computer scientists experiment more? IEEE Comput. 31(5), 32–39 (1998)
  170. Tichy, W.F., Lukowicz, P., Prechelt, L., Heinz, E.A.: Experimental evaluation in computer science: a quantitative study. J. Syst. Softw. 28(1), 9–18 (1995)
  171. Trochim, W.M.K.: The Research Methods Knowledge Base, 2nd edn. Cornell Custom Publishing, Cornell University, Ithaca (1999)
  172. van Solingen, R., Berghout, E.: The Goal/Question/Metric Method: A Practical Guide for Quality Improvement of Software Development. McGraw-Hill International, London/Chicago (1999)
  173. Verner, J.M., Sampson, J., Tosic, V., Abu Bakar, N.A., Kitchenham, B.A.: Guidelines for industrially-based multiple case studies in software engineering. In: Proceedings of the Third International Conference on Research Challenges in Information Science, Fez, pp. 313–324 (2009)
  174. Vinson, N.G., Singer, J.: A practical guide to ethical research involving humans. In: Shull, F., Singer, J., Sjøberg, D.I.K. (eds.) Guide to Advanced Empirical Software Engineering. Springer, London (2008)
  175. Votta, L.G.: Does every inspection need a meeting? In: Proceedings of the ACM SIGSOFT Symposium on Foundations of Software Engineering, ACM Software Engineering Notes, vol. 18, pp. 107–114. ACM Press, New York (1993)
  176. Wallace, C., Cook, C., Summet, J., Burnett, M.: Human centric computing languages and environments. In: Proceedings of the Symposia on Human Centric Computing Languages and Environments, Arlington, pp. 63–65 (2002)
  177. Wohlin, C., Gustavsson, A., Höst, M., Mattsson, C.: A framework for technology introduction in software organizations. In: Proceedings of the Conference on Software Process Improvement, Brighton, pp. 167–176 (1996)
  178. Wohlin, C., Runeson, P., Höst, M., Ohlsson, M.C., Regnell, B., Wesslén, A.: Experimentation in Software Engineering: An Introduction. Kluwer, Boston (2000)
  179. Wohlin, C., Aurum, A., Angelis, L., Phillips, L., Dittrich, Y., Gorschek, T., Grahn, H., Henningsson, K., Kågström, S., Low, G., Rovegård, P., Tomaszewski, P., van Toorn, C., Winter, J.: Success factors powering industry-academia collaboration in software research. IEEE Softw. (PrePrints) (2011). doi: 10.1109/MS.2011.92
  180. Yin, R.K.: Case Study Research: Design and Methods, 4th edn. Sage Publications, Beverly Hills (2009)
  181. Zelkowitz, M.V., Wallace, D.R.: Experimental models for validating technology. IEEE Comput. 31(5), 23–31 (1998)
  182. Zendler, A.: A preliminary software engineering theory as investigated by published experiments. Empir. Softw. Eng. 6, 161–180 (2001). doi: 10.1023/A:1011489321999

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Claes Wohlin (1)
  • Per Runeson (2)
  • Martin Höst (2)
  • Magnus C. Ohlsson (3)
  • Björn Regnell (2)
  • Anders Wesslén (4)

  1. School of Computing, Blekinge Institute of Technology, Karlskrona, Sweden
  2. Department of Computer Science, Lund University, Lund, Sweden
  3. System Verification Sweden AB, Malmö, Sweden
  4. ST-Ericsson AB, Lund, Sweden
