Empirical Software Engineering, Volume 16, Issue 5, pp 587–622

From monolithic to component-based performance evaluation of software architectures

A series of experiments analysing accuracy and effort
  • Anne Martens
  • Heiko Koziolek
  • Lutz Prechelt
  • Ralf Reussner

Abstract

Model-based performance evaluation methods for software architectures can help architects assess design alternatives and save the costs of late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the effort needed for modelling are heavily influenced by human factors, which are so far hardly understood empirically. Do component-based methods allow making performance predictions with comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users. We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for the later comparison. In the second experiment, we compared the accuracy and effort of this monolithic method and the component-based method for the model creation case. In the third, we studied the effort reduction gained from reusing component-based models. Data were collected from the resulting artefacts, questionnaires, and screen recordings, and were analysed using hypothesis testing, linear models, and analysis of variance. For the monolithic methods, we found that SPE and CP yielded accurate predictions, while umlPSI produced overestimates. Comparing the component-based method PCM with SPE, we found that creating reusable models with PCM takes more (but not drastically more) time than with SPE, and that participants can create accurate models with both techniques. Finally, we found that reusing PCM models can save time, because the effort of reuse can be explained by a model that is independent of the inner complexity of a component. The tasks performed in our experiments reflect only a subset of the activities involved in applying model-based performance evaluation methods in a software development process. Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort of component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.
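
The statistical techniques named above (hypothesis testing, linear models, and analysis of variance) can be illustrated with a minimal sketch. The following Python fragment is purely illustrative and is not the authors' analysis: the data, group sizes, effect sizes, and variable names are hypothetical, chosen only to show the shape of such an effort comparison.

    # Illustrative sketch only: hypothetical effort data (hours) for two methods.
    # None of the numbers below are taken from the study.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=0)
    effort_spe = rng.normal(loc=5.0, scale=1.0, size=19)   # hypothetical SPE group
    effort_pcm = rng.normal(loc=7.0, scale=1.5, size=19)   # hypothetical PCM group

    # Hypothesis test on mean effort (Welch's two-sample t-test).
    t_stat, p_value = stats.ttest_ind(effort_spe, effort_pcm, equal_var=False)
    print(f"Welch t-test: t = {t_stat:.2f}, p = {p_value:.3f}")

    # One-way analysis of variance across the groups (generalises to more methods).
    f_stat, p_anova = stats.f_oneway(effort_spe, effort_pcm)
    print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")

    # Simple linear model: does reuse effort depend on component complexity?
    complexity = rng.uniform(1, 10, size=19)                # hypothetical predictor
    reuse_effort = 2.0 + 0.1 * complexity + rng.normal(0.0, 0.5, size=19)
    fit = stats.linregress(complexity, reuse_effort)
    print(f"Linear model: slope = {fit.slope:.2f} (p = {fit.pvalue:.3f}), "
          f"R^2 = {fit.rvalue**2:.2f}")

In such a regression, a slope indistinguishable from zero would correspond to the finding reported in the abstract that reuse effort can be explained by a model independent of a component's inner complexity.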

Keywords

Empirical study · Software architecture · Performance evaluation · Performance modelling · Performance prediction

Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  • Anne Martens (1)
  • Heiko Koziolek (2)
  • Lutz Prechelt (3)
  • Ralf Reussner (1)

  1. Karlsruhe Institute of Technology, Karlsruhe, Germany
  2. ABB Corporate Research, Ladenburg, Germany
  3. Freie Universität Berlin, Berlin, Germany
