The Supportive Effect of Traceability Links in Change Impact Analysis for Evolving Architectures – Two Controlled Experiments

  • Muhammad Atif Javed
  • Uwe Zdun
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8919)

Abstract

The documentation of software architecture relations as a kind of traceability information is considered important to help people understand the consequences, or ripple effects, of architecture evolution. Traceability information provides a basis for analysing and evaluating software evolution and can consequently be used for tasks such as reuse evaluation and improvement throughout the evolution of software. To date, however, none of the published empirical studies on software architecture traceability have examined the validity of these propositions. In this paper, we hypothesize that impact analysis of changes in a software architecture can be more efficient when it is supported by traceability links. To test this hypothesis, we conducted two controlled experiments that investigate the influence of traceability links on the quantity and quality of retrieved assets during architecture evolution analysis. The results provide statistical evidence that a focus on architecture traceability significantly reduces the number of missing and incorrect assets and increases the overall quality of architecture impact analysis for evolution.
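
The abstract summarizes the statistical evidence only at a high level. Below is a minimal sketch of the kind of analysis commonly applied in such controlled experiments, assuming hypothetical per-participant correctness scores (e.g., an F-measure over the retrieved assets) for a traceability-supported group and a control group; the data, group names, and the choice of a Shapiro-Wilk normality check followed by a one-sided Mann-Whitney U test are illustrative assumptions, not the paper's reported analysis.

```python
# Illustrative sketch only: hypothetical per-participant correctness scores
# (e.g., F-measure of correctly identified impacted assets). Data and names
# are assumptions for demonstration, not the paper's experimental data.
from scipy.stats import shapiro, mannwhitneyu

traceability_group = [0.82, 0.75, 0.91, 0.68, 0.88, 0.79, 0.85, 0.73]
control_group = [0.61, 0.55, 0.70, 0.48, 0.66, 0.58, 0.63, 0.52]

# Shapiro-Wilk normality check; small samples in controlled experiments
# often fail it, which motivates a non-parametric comparison.
for name, scores in [("traceability", traceability_group), ("control", control_group)]:
    w_stat, p = shapiro(scores)
    print(f"Shapiro-Wilk ({name}): W={w_stat:.3f}, p={p:.3f}")

# Mann-Whitney U test: are traceability-supported scores stochastically larger?
u_stat, p_value = mannwhitneyu(traceability_group, control_group, alternative="greater")
print(f"Mann-Whitney U={u_stat:.1f}, one-sided p={p_value:.4f}")
```

A one-sided alternative is used here because the stated hypothesis is directional: traceability support is expected to improve, not merely change, the quality of the impact analysis.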

Keywords

Software architecture traceability · Architecture evolution · Change impact analysis · Empirical software engineering · Controlled experiment



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Muhammad Atif Javed¹
  • Uwe Zdun¹

  1. Software Architecture Research Group, University of Vienna, Austria
