
Empirical Software Engineering, Volume 15, Issue 3, pp 250–276

Applying empirical software engineering to software architecture: challenges and lessons learned

  • Davide Falessi
  • Muhammad Ali Babar
  • Giovanni Cantone
  • Philippe Kruchten
Experience Report

Abstract

In the last 15 years, software architecture has emerged as an important software engineering field for managing the development and maintenance of large, software-intensive systems. The software architecture community has developed numerous methods, techniques, and tools to support the architecture process (analysis, design, and review). Historically, most advances in software architecture have been driven by talented people and industrial experience, but there is now a growing need to systematically gather empirical evidence about the advantages, or otherwise, of tools and methods, rather than relying on promotional anecdotes or rhetoric. The aim of this paper is to promote and facilitate the application of the empirical paradigm to software architecture. To this end, we describe the challenges and lessons learned when assessing software architecture research that used controlled experiments, replications, expert opinion, systematic literature reviews, observational studies, and surveys. Our research will support the emergence of a body of knowledge consisting of the more widely accepted and well-formed software architecture theories.

Keywords

Software architecture; Empirical software engineering

Acknowledgement

The authors wish to thank the colleagues involved in the empirical studies from which the reported challenges and lessons were drawn, Professor June Verner for helping in the preparation of the proof version, and the anonymous reviewers for their very insightful comments and helpful suggestions. When working on this article, Dr. Ali Babar was with Lero, which is funded by Science Foundation Ireland under grant number 03/CE2/I303-1.


Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  • Davide Falessi (corresponding author), University of Rome Tor Vergata, DISP, Rome, Italy
  • Muhammad Ali Babar, Software Development Group, IT University of Copenhagen, Copenhagen S, Denmark
  • Giovanni Cantone, University of Rome Tor Vergata, DISP, Rome, Italy
  • Philippe Kruchten, University of British Columbia, ECE, Vancouver, Canada
