Experiences in System-of-Systems-Wide Architecture Evaluation over Multiple Product Lines

  • Juha Savolainen
  • Tomi Männistö
  • Varvana Myllärniemi
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8919)

Abstract

Software architecture evaluation, both for software products and software product lines, has become a mainstream activity in industry. A significant amount of practical experience exists in applying architecture evaluation in real projects. However, most of the methods and practices focus on evaluating individual products or product lines. In this paper, we study how to evaluate a system-of-systems consisting of several cooperating software product lines. In particular, the intent is to evaluate the system-of-systems-wide architecture for its ability to satisfy a new set of crosscutting requirements. We describe the experiences and practices of performing a system-of-systems-wide architecture evaluation in industry: the system-of-systems in question is a set of product lines whose products are used to create the All-IP 3G telecommunications network. The results indicate that there are significant differences in evaluating the architecture of a system-of-systems compared with traditional evaluations targeting single systems. The two main differences affecting architecture evaluation were the heterogeneity in the maturity levels of the individual systems, i.e., the product lines, and the option of moving responsibilities from one product line to another to satisfy the system-of-systems-level requirements, instead of simply evaluating each product line individually.
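To make the second difference concrete, below is a minimal, hypothetical sketch (not from the paper) of the idea that a system-of-systems-level requirement can be satisfied by reallocating a responsibility from one product line to another, rather than by fixing each product line in isolation. All names, maturity values, and responsibilities here are illustrative assumptions.

```python
# Illustrative sketch only: models product lines in a system-of-systems and a
# responsibility reallocation as an SoS-level design move. Not the paper's method.
from dataclasses import dataclass, field


@dataclass
class ProductLine:
    name: str
    maturity: int                                   # e.g., 1 (new) .. 5 (highly mature)
    responsibilities: set[str] = field(default_factory=set)


def satisfies(system: list[ProductLine], required: set[str]) -> bool:
    """True if the system-of-systems as a whole covers every required responsibility."""
    covered = set().union(*(pl.responsibilities for pl in system))
    return required <= covered


def move_responsibility(src: ProductLine, dst: ProductLine, resp: str) -> None:
    """Reallocate one responsibility between product lines instead of changing both."""
    src.responsibilities.discard(resp)
    dst.responsibilities.add(resp)


if __name__ == "__main__":
    # Hypothetical product lines with differing maturity levels.
    radio = ProductLine("radio-access", maturity=5,
                        responsibilities={"session-setup", "ip-transport"})
    core = ProductLine("core-network", maturity=2,
                       responsibilities={"subscriber-data"})
    sos = [radio, core]

    crosscutting = {"session-setup", "ip-transport", "subscriber-data"}
    assert satisfies(sos, crosscutting)

    # Suppose the evaluation finds "ip-transport" is better placed in the
    # less mature, still-evolving product line; move it rather than rework both.
    move_responsibility(radio, core, "ip-transport")
    assert satisfies(sos, crosscutting)             # SoS-level requirement still met
```

The point of the sketch is that the unit of evaluation is the whole system: a requirement can remain satisfied at the system-of-systems level even as responsibilities migrate between the constituent product lines.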

Keywords

Architecture evaluation · System-of-systems · Industrial experience


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Juha Savolainen (1)
  • Tomi Männistö (2)
  • Varvana Myllärniemi (3)
  1. Danfoss Power Electronics A/S, Global Research and Development, Graasten, Denmark
  2. Department of Computer Science, University of Helsinki, Helsinki, Finland
  3. School of Science, Aalto University, Espoo, Finland
