Empirical Software Engineering, Volume 16, Issue 5, pp 623–666

Assessing architectural evolution: a case study

  • Michel Wermelinger
  • Yijun Yu
  • Angela Lozano
  • Andrea Capiluppi

Abstract

This paper proposes to use a historical perspective on generic laws, principles, and guidelines, such as Lehman's laws of software evolution and Martin's design principles, to achieve a multi-faceted process and structural assessment of a system's architectural evolution. We present a simple structural model with associated historical metrics and visualizations that could form part of an architect's dashboard. We perform such an assessment for the Eclipse SDK, as a case study of a large, complex, and long-lived system for which sustained, effective architectural evolution is paramount. The twofold aim of checking generic principles against a well-known system is, on the one hand, to see whether there are lessons to be learned for best practice in architectural evolution, and, on the other hand, to gain insight into the applicability of such principles. We find that while the Eclipse SDK does follow several of the laws and principles, there are some deviations; we discuss areas of architectural improvement and limitations of the assessment approach.
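To illustrate the kind of historical, principle-based metrics the abstract refers to, the sketch below computes Martin's instability metric (I = Ce / (Ca + Ce)) and distance from the main sequence (D = |A + I − 1|) for one module across several releases. This is only a minimal sketch of the general technique, not the paper's actual model; the release labels, dependency counts, and abstractness values are hypothetical.

```python
def instability(ca: int, ce: int) -> float:
    """Martin's instability I = Ce / (Ca + Ce).

    ca: afferent couplings (incoming dependencies),
    ce: efferent couplings (outgoing dependencies).
    A module with no couplings is treated as maximally stable (0.0).
    """
    total = ca + ce
    return ce / total if total else 0.0


def distance_from_main_sequence(abstractness: float, inst: float) -> float:
    """Martin's normalized distance D = |A + I - 1| from the main sequence."""
    return abs(abstractness + inst - 1.0)


# Hypothetical per-release snapshots of one module's dependency profile.
history = [
    {"release": "3.0", "ca": 12, "ce": 4, "a": 0.5},
    {"release": "3.1", "ca": 15, "ce": 9, "a": 0.4},
    {"release": "3.2", "ca": 14, "ce": 20, "a": 0.3},
]

# Tracking I and D over releases is the kind of historical series
# an architect's dashboard could plot to flag drift from the principles.
for snap in history:
    i = instability(snap["ca"], snap["ce"])
    d = distance_from_main_sequence(snap["a"], i)
    print(f"{snap['release']}: I={i:.2f} D={d:.2f}")
```

A rising I or D across releases would be one concrete signal, in the spirit of the paper's assessment, that a module is drifting away from the design principles being checked.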

Keywords

Software architecture · Software evolution · Design principles · Structured design · Metrics · Eclipse

References

  1. Basili V, Briand L, Melo W (1996) A validation of object-oriented design metrics as quality indicators. IEEE Trans Softw Eng 22(10):751–761
  2. Ben-Ari M (1982) Principles of concurrent programming. Prentice-Hall, Englewood Cliffs
  3. Beyer D (2008) CCVisu: automatic visual software decomposition. In: Proc int’l conf on software engineering, companion volume. ACM, New York, pp 967–968
  4. Beyer D, Noack A, Lewerentz C (2005) Efficient relational calculation for software analysis. IEEE Trans Softw Eng 31(2):137–149
  5. Bloch J (2001) Effective Java. Addison-Wesley, Reading
  6. Bois BD, Rompaey BV, Meijfroidt K, Suijs E (2008) Supporting reengineering scenarios with FETCH: an experience report. In: Electronic communications of the EASST 8, selected papers from the 2007 ERCIM symp on software evolution
  7. Briand LC, Morasca S, Basili VR (1996) Property-based software engineering measurement. IEEE Trans Softw Eng 22(1):68–86
  8. Briand LC, Morasca S, Basili VR (1997) Response to: comments on “Property-based software engineering measurement: refining the additivity properties”. IEEE Trans Softw Eng 23(3):196–197
  9. Briand LC, Wüst J, Daly JW, Porter DV (2000) Exploring the relationship between design measures and software quality in object-oriented systems. J Syst Softw 51(3):245–273
  10. Brown W, Malveau R, Mowbray T (1998) AntiPatterns: refactoring software, architectures, and projects in crisis. Wiley, New York
  11. Ciupke O (1999) Automatic detection of design problems in object-oriented reengineering. In: Proc 30th int’l conf on technology of object-oriented languages and systems. IEEE, Piscataway, pp 18–32
  12. Crespo Y, López C, Marticorena R, Manso E (2005) Language independent metrics support towards refactoring inference. In: Int’l workshop on quantitative approaches in object-oriented software engineering
  13. Dagpinar M, Jahnke JH (2003) Predicting maintainability with object-oriented metrics—an empirical comparison. In: Proc working conf on reverse engineering. IEEE, Piscataway, pp 155–164
  14. Eldredge N, Gould SJ (1972) Punctuated equilibria: an alternative to phyletic gradualism. In: Schopf T (ed) Models in palaeobiology. Freeman and Cooper, San Francisco, pp 82–115
  15. Emam KE, Benlarbi S, Goel N, Rai SN (2001) The confounding effect of class size on the validity of object-oriented metrics. IEEE Trans Softw Eng 27(7):630–650
  16. Eysenck HJ (1976) Case studies in behaviour therapy. Routledge, Evanston, chap: Introduction
  17. Fernández-Ramil J, Lozano A, Wermelinger M, Capiluppi A (2008) Empirical studies of open source evolution. In: Software evolution, chap 11. Springer, New York, pp 263–288
  18. Flyvbjerg B (2006) Five misunderstandings about case-study research. Qual Inq 12(2):219–245
  19. Fowler M, Beck K, Brant J, Opdyke W, Roberts D (1999) Refactoring: improving the design of existing code. Addison-Wesley, Reading
  20. Gamma E, Helm R, Johnson R, Vlissides J (1995) Design patterns: elements of reusable object-oriented software. Addison-Wesley, Reading
  21. Godfrey MW, Tu Q (2000) Evolution in open source software: a case study. In: Int’l conf on software maintenance. IEEE, Piscataway, pp 131–142
  22. Hansen KM, Jónasson K, Neukirchen H (2009) An empirical study of open source software architectures’ effect on product quality. Tech Rep VHI-01-2009, Engineering Research Institute, Univ of Iceland
  23. Hou D (2007) Studying the evolution of the Eclipse Java editor. In: Proc OOPSLA workshop on eclipse technology exchange. ACM, New York, pp 65–69
  24. Johnson RE, Foote B (1988) Designing reusable classes. J Object-Oriented Program 1(2):22–35
  25. Juergens E, Deissenboeck F, Hummel B, Wagner S (2009) Do code clones matter? In: Proc int’l conference on software engineering. IEEE, Piscataway, pp 485–495
  26. Kuhn TS (1987) What are scientific revolutions? In: The probabilistic revolution, vol 1. MIT, Cambridge, pp 7–22
  27. Lakos J (1996) Large-scale C++ software design. Addison-Wesley, Reading
  28. Lehman MM, Belady LA (1985) Program evolution: processes of software change. Academic, New York
  29. Lehman MM, Ramil JF, Wernick PD, Perry DE, Turski WM (1997) Metrics and laws of software evolution—the nineties view. In: Proc symp on software metrics. IEEE, Piscataway, pp 20–32
  30. Lieberherr KJ, Holland I, Riel A (1988) Object-oriented programming: an objective sense of style. In: Proc int’l conf on object oriented programming, systems, languages, and applications, pp 323–334
  31. Liskov B (1987) Data abstraction and hierarchy. In: Proc int’l conf on object oriented programming, systems, languages, and applications. ACM, New York, pp 17–34
  32. Marinescu R (2001) Detecting design flaws via metrics in object oriented systems. In: Proc int’l conf on technology of object-oriented languages and systems. IEEE, Piscataway, pp 173–182
  33. Martin RC (1996) Granularity. C++ Report 8(10):57–62
  34. Martin RC (1997) Large-scale stability. C++ Report 9(2):54–60
  35. Medvidovic N, Dashofy EM, Taylor RN (2007) Moving architectural description from under the technology lamppost. Inf Softw Technol 49(1):12–31
  36. Melton H (2006) On the usage and usefulness of OO design principles. In: Companion to the 21st OOPSLA. ACM, New York, pp 770–771
  37. Melton H, Tempero E (2007) An empirical study of cycles among classes in Java. Empir Software Eng 12(4):389–415
  38. Mens T, Fernández-Ramil J, Degrandsart S (2008) The evolution of Eclipse. In: Proc 24th int’l conf on software maintenance. IEEE, Piscataway, pp 386–395
  39. Meyer B (1988) Object-oriented software construction. Prentice-Hall, Englewood Cliffs
  40. Meyer B (1992) Applying ‘design by contract’. Computer 25(10):40–51
  41. Moha N, Guéhéneuc YG, Leduc P (2006) Automatic generation of detection algorithms for design defects. In: Proc int’l conf on automated software engineering. IEEE, Piscataway, pp 297–300
  42. Munro M (2005) Product metrics for automatic identification of “bad smell” design problems in Java source-code. In: Proc int’l symp on software metrics. IEEE, Piscataway, pp 15–24
  43. Parnas DL (1972) On the criteria to be used in decomposing systems into modules. Commun ACM 15(12):1053–1058
  44. Popper KR (1959) The logic of scientific discovery. Hutchinson, London
  45. Ratiu D, Ducasse S, Girba T, Marinescu R (2004) Using history information to improve design flaws detection. In: Proc European conf on software maintenance and reengineering. IEEE, Piscataway, pp 223–232
  46. Riel A (1996) Object-oriented design heuristics. Addison-Wesley, Reading
  47. Simon HA (1962) The architecture of complexity. Proc Am Philos Soc 106(6):467–482
  48. Stevens W, Myers G, Constantine L (1979) Structured design. In: Classics in software engineering. Yourdon Press, Upper Saddle River, pp 205–232
  49. Tahvildari L, Kontogiannis K (2003) A metric-based approach to enhance design quality through meta-pattern transformations. In: Proc European conf on software maintenance and reengineering. IEEE, Piscataway, pp 183–192
  50. Tourwe T, Mens T (2003) Identifying refactoring opportunities using logic meta programming. In: Proc European conf on software maintenance and reengineering. IEEE, Piscataway, pp 91–100
  51. van Belle T (2004) Modularity and the evolution of software evolvability. PhD thesis, University of New Mexico
  52. Walter B, Pietrzak B (2005) Multi-criteria detection of bad smells in code with UTA method. In: Extreme programming and agile processes in software engineering. Springer, New York, pp 154–161
  53. Wermelinger M, Yu Y (2011) Some issues in the ‘archaeology’ of software evolution. In: Generative and transformational techniques in software engineering III. LNCS, vol 6491. Springer, New York, pp 426–445
  54. Wermelinger M, Yu Y, Lozano A (2008) Design principles in architectural evolution: a case study. In: Proc 24th int’l conf on software maintenance. IEEE, Piscataway, pp 396–405
  55. Wong K (1998) The Rigi user’s manual, version 5.4.4
  56. Wu J, Spitzer C, Hassan A, Holt R (2004) Evolution spectrographs: visualizing punctuated change in software evolution. In: Proc 7th int’l workshop on principles of software evolution. IEEE, Piscataway, pp 57–66
  57. Xing Z, Stroulia E (2004) Understanding class evolution in object-oriented software. In: Proc int’l workshop on program comprehension. IEEE, Piscataway, pp 34–43

Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  • Michel Wermelinger (1)
  • Yijun Yu (1)
  • Angela Lozano (2)
  • Andrea Capiluppi (3)

  1. Computing Department & Centre for Research in Computing, The Open University, Milton Keynes, UK
  2. ICTEAM, Université catholique de Louvain, Louvain-la-Neuve, Belgium
  3. School of Computing, Information Technology and Engineering, University of East London, London, UK
