
Software & Systems Modeling

Volume 13, Issue 4, pp 1345–1365

Deriving performance-relevant infrastructure properties through model-based experiments with Ginpex

  • Michael Hauck (corresponding author)
  • Michael Kuperberg
  • Nikolaus Huber
  • Ralf Reussner
Theme Section Paper

Abstract

To predict the performance of an application, it is crucial to consider the performance of the underlying infrastructure. Thus, to yield accurate prediction results, performance-relevant properties and behaviour of the infrastructure have to be integrated into performance models. However, capturing these properties is a cumbersome and error-prone task, as it requires carefully engineered measurements and experiments. Existing approaches for creating infrastructure performance models require manual coding of these experiments, or ignore the detailed properties in the models. The contribution of this paper is the Goal-oriented INfrastructure Performance EXperiments (Ginpex) approach, which introduces goal-oriented and model-based specification and generation of executable performance experiments for automatically detecting and quantifying performance-relevant infrastructure properties. Ginpex provides a metamodel for experiment specification and comes with predefined experiment templates that provide automated experiment execution on the target platform and also automate the evaluation of the experiment results. We evaluate Ginpex using three case studies, where experiments are executed to quantify various infrastructure properties.
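To illustrate the experiment-template idea sketched in the abstract, the following minimal Java example shows how a simple CPU-load experiment could be described as a set of tasks and executed repeatedly to collect response-time measurements. This is a self-contained sketch; all class and method names (Task, CpuLoadTask, runExperiment) are hypothetical and do not reflect the actual Ginpex metamodel or API.

```java
import java.util.ArrayList;
import java.util.List;

public class ExperimentSketch {

    // A task in the (hypothetical) experiment description: execute some load and measure its duration.
    interface Task {
        long executeAndMeasureMillis();
    }

    // Simple CPU demand task: busy computation over a fixed number of iterations.
    static class CpuLoadTask implements Task {
        private final long iterations;

        CpuLoadTask(long iterations) {
            this.iterations = iterations;
        }

        public long executeAndMeasureMillis() {
            long start = System.nanoTime();
            double x = 0.0;
            for (long i = 0; i < iterations; i++) {
                x += Math.sqrt(i);              // artificial CPU demand
            }
            long end = System.nanoTime();
            if (x < 0) System.out.println(x);   // keep the loop from being optimized away
            return (end - start) / 1_000_000;
        }
    }

    // "Experiment template": run a task repeatedly and collect the measurements for later analysis.
    static List<Long> runExperiment(Task task, int repetitions) {
        List<Long> measurements = new ArrayList<>();
        for (int i = 0; i < repetitions; i++) {
            measurements.add(task.executeAndMeasureMillis());
        }
        return measurements;
    }

    public static void main(String[] args) {
        List<Long> results = runExperiment(new CpuLoadTask(50_000_000L), 10);
        double mean = results.stream().mapToLong(Long::longValue).average().orElse(0.0);
        System.out.printf("Mean CPU task response time: %.1f ms%n", mean);
    }
}
```

In the approach described above, such experiments are not hand-coded: they are specified as instances of the experiment metamodel, generated into executable code for the target platform, and their results are evaluated automatically.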

Keywords

Metamodelling · Experiments · Measurements · Infrastructure · Deriving infrastructure properties · Performance prediction

Acknowledgments

The work presented in this paper was partially developed in the context of the project EMERGENT: Grundlagen emergenter Software ("Foundations of Emergent Software"), which is funded by the German Federal Ministry of Education and Research (BMBF) under grant 01IC10S01A.


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Michael Hauck (1), corresponding author
  • Michael Kuperberg (2)
  • Nikolaus Huber (3)
  • Ralf Reussner (3)

  1. FZI Research Center for Information Technology, Karlsruhe, Germany
  2. DB Systel GmbH, Frankfurt, Germany
  3. Karlsruhe Institute of Technology, Karlsruhe, Germany
