A Method for Experimental Analysis and Modeling of Virtualization Performance Overhead

  • Nikolaus Huber
  • Marcel von Quast
  • Fabian Brosig
  • Michael Hauck
  • Samuel Kounev
Chapter

Abstract

Virtualization solutions are gaining increasing importance. By enabling the sharing of physical resources, they make resource usage more efficient and promise energy and cost savings. Virtualization is also a key enabling technology for cloud computing and server consolidation. However, resource sharing and other factors have direct effects on system performance that are not yet well understood. Performance prediction and performance management of services deployed in virtualized environments such as public and private clouds is therefore a challenging task. Given the large variety of virtualization solutions, a generic approach to predicting the performance overhead of services running on virtualization platforms is highly desirable. In this paper, we present a methodology to quantify the influence of performance-relevant factors of virtualization platforms, based on an empirical approach using benchmarks. We present experimental results on two popular state-of-the-art virtualization platforms, Citrix XenServer 5.5 and VMware ESX 4.0, as representatives of the two major hypervisor architectures. Based on these results, we propose a basic, generic performance prediction model for the two types of hypervisor architectures. The goal is to predict the performance overhead of executing services on virtualized platforms.
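To illustrate the basic idea of such a prediction model, the following minimal Python sketch scales natively measured resource demands of a service by overhead factors that would be derived from benchmark measurements on a given hypervisor architecture. All names, resource categories, and factor values below are hypothetical placeholders chosen for illustration; they are not results or APIs from the chapter.

    # Minimal sketch (assumptions, not the authors' model): predict the resource
    # demands of a service on a virtualized platform by multiplying its natively
    # measured demands with benchmark-derived overhead factors.

    # Hypothetical overhead factors per resource type and hypervisor architecture.
    OVERHEAD_FACTORS = {
        "xenserver": {"cpu": 1.05, "memory": 1.40, "disk_io": 1.20, "network_io": 1.30},
        "esx":       {"cpu": 1.03, "memory": 1.35, "disk_io": 1.15, "network_io": 1.25},
    }

    def predict_virtualized_demand(native_demands, hypervisor):
        """Scale natively measured resource demands (e.g., service times in ms)
        by the overhead factors of the target hypervisor architecture."""
        factors = OVERHEAD_FACTORS[hypervisor]
        return {resource: demand * factors[resource]
                for resource, demand in native_demands.items()}

    if __name__ == "__main__":
        # Illustrative natively measured demands of a service.
        native = {"cpu": 10.0, "memory": 2.0, "disk_io": 5.0, "network_io": 3.0}
        print(predict_virtualized_demand(native, "xenserver"))

In this sketch the overhead factors are constants per resource type; in practice they would be obtained experimentally and may depend on further factors such as the number of co-located virtual machines.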


Copyright information

© Springer Science+Business Media New York 2012

Authors and Affiliations

  • Nikolaus Huber (1)
  • Marcel von Quast (1)
  • Fabian Brosig (1)
  • Michael Hauck (2)
  • Samuel Kounev (1)
  1. Software Design and Quality, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
  2. FZI Forschungszentrum Informatik, Karlsruhe, Germany
