A method for evaluating rigor and industrial relevance of technology evaluations

Abstract

One of the main goals of an applied research field such as software engineering is the transfer and widespread use of research results in industry. To impact industry, researchers developing technologies in academia need to provide tangible evidence of the advantages of using them. This can be done through step-wise validation, enabling researchers to gradually test and evaluate technologies before finally trying them in real settings with real users and applications. The evidence obtained, together with detailed information on how the validation was conducted, offers rich decision support material for industry practitioners seeking to adopt new technologies and for researchers looking for an empirical basis on which to build new or refined technologies. This paper presents a model for evaluating the rigor and industrial relevance of technology evaluations in software engineering. The model is applied and validated in a comprehensive systematic literature review of evaluations of requirements engineering technologies published in software engineering journals. The aim is to show the applicability of the model and to characterize how evaluations are carried out and reported, in order to assess the state of research. The review shows that the model can be applied to characterize evaluations in requirements engineering. The findings from applying the model also show that the majority of technology evaluations in requirements engineering lack both industrial relevance and rigor. In addition, the research field shows no improvement in industrial relevance over time.
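The abstract does not spell out the model's scoring mechanics, but a rubric-based scheme of this kind can be sketched as follows. All aspect names, scales, and the additive aggregation in this sketch are illustrative assumptions, not details taken from the paper:

```python
# Hypothetical sketch of a rubric-based rigor/relevance scoring scheme.
# Aspect names and scales below are assumptions for illustration only.

RIGOR_SCALE = (0.0, 0.5, 1.0)  # aspect not / partially / fully reported


def rigor_score(context, study_design, validity):
    """Sum per-aspect reporting scores, each in {0, 0.5, 1}."""
    scores = (context, study_design, validity)
    assert all(s in RIGOR_SCALE for s in scores), "invalid rigor score"
    return sum(scores)


def relevance_score(subjects, context, scale, research_method):
    """Each aspect contributes 1 if judged realistic, else 0."""
    scores = (subjects, context, scale, research_method)
    assert all(s in (0, 1) for s in scores), "invalid relevance score"
    return sum(scores)


# Example: a study with a fully described context, partially described
# design, and no validity discussion, evaluated in a realistic setting
# except for scale.
print(rigor_score(1.0, 0.5, 0.0))    # 1.5
print(relevance_score(1, 1, 0, 1))   # 3
```

Summing per-aspect scores makes it easy to plot how rigor and relevance of published evaluations evolve over time, which matches the kind of trend analysis the abstract reports.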


Notes

  1. The number of citations was retrieved on August 27, 2010, using Scopus. Eighty-eight percent of the included papers were indexed in Scopus.


Author information

Corresponding author

Correspondence to Martin Ivarsson.

Additional information

Editor: Forrest Shull

Cite this article

Ivarsson, M., Gorschek, T. A method for evaluating rigor and industrial relevance of technology evaluations. Empir Software Eng 16, 365–395 (2011). https://doi.org/10.1007/s10664-010-9146-4

Keywords

  • Systematic review
  • Requirements engineering
  • Technology evaluation