
Enhancing the Performance of High Availability Lightweight Live Migration

  • Peng Lu
  • Binoy Ravindran
  • Changsoo Kim
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7109)

Abstract

Remus is one of the first systems to implement whole-virtual-machine replication to achieve high availability (HA). Recently, a fast, lightweight live migration mechanism (LLM) was proposed to reduce Remus's long network delay. However, both virtualized systems still suffer from long downtime, which is a bottleneck to achieving HA. Building on LLM, in this paper we describe a fine-grained block identification (FGBI) mechanism that reduces downtime in virtualized systems so as to achieve HA, supported by a block sharing mechanism and a hybrid compression method. We implement FGBI and evaluate it against LLM and Remus using several benchmarks, including Apache, SPECweb, NPB, and SPECsys. Our experimental results reveal that FGBI reduces type I downtime by as much as 77% over LLM and 45% over Remus, and reduces type II downtime by more than 90% and more than 70% compared with LLM and Remus, respectively. Moreover, in all cases, the performance overhead of FGBI is less than 13%.
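
To make the abstract's mechanisms concrete, below is a minimal Python sketch of the two core ideas: identifying dirty memory at sub-page block granularity by comparing per-block hashes across checkpoint epochs, and a hybrid compression step that sends each dirty block either raw or as a compressed XOR delta, whichever is smaller. The 256-byte block size, MD5 hashing, and zlib delta compression are illustrative assumptions, not the paper's actual implementation.

    import hashlib
    import zlib

    BLOCK_SIZE = 256  # assumed sub-page granularity: one 4 KiB page -> 16 blocks

    def block_hashes(memory: bytes) -> list:
        """Hash each fixed-size block; a changed hash marks a dirty block."""
        return [hashlib.md5(memory[i:i + BLOCK_SIZE]).digest()
                for i in range(0, len(memory), BLOCK_SIZE)]

    def dirty_blocks(old_hashes, memory: bytes):
        """Yield (index, block) for each block whose hash changed this epoch."""
        for i, h in enumerate(block_hashes(memory)):
            if h != old_hashes[i]:
                yield i, memory[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]

    def encode_block(old_block: bytes, new_block: bytes):
        """Hybrid compression: a zlib-compressed XOR delta wins when only a few
        bytes changed; otherwise fall back to sending the raw block."""
        delta = bytes(a ^ b for a, b in zip(old_block, new_block))
        compressed = zlib.compress(delta)
        return ("delta", compressed) if len(compressed) < len(new_block) else ("raw", new_block)

    # One checkpoint epoch: dirty a single byte, then transfer only block 1.
    mem_old = bytes(4096)
    hashes = block_hashes(mem_old)
    mem_new = bytearray(mem_old)
    mem_new[300] = 0xFF  # falls in block 1 (300 // 256 == 1)
    for idx, block in dirty_blocks(hashes, bytes(mem_new)):
        old = mem_old[idx * BLOCK_SIZE:(idx + 1) * BLOCK_SIZE]
        kind, payload = encode_block(old, block)
        print(f"block {idx}: send {kind}, {len(payload)} bytes instead of {BLOCK_SIZE}")

Tracking dirty state at 256-byte blocks rather than whole 4 KiB pages means a one-byte write forces at most 256 bytes (here, a few dozen after delta compression) across the network per epoch, which is why finer granularity shortens checkpoint transfer and, in turn, downtime.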

Keywords

Virtual Machine · Memory Block · Virtual Machine Migration · Virtual Machine Monitor · Live Migration


References

  1.
  2. Akoush, S., Sohan, R., Rice, A., Moore, A.W., Hopper, A.: Predicting the performance of virtual machine migration. In: International Symposium on Modeling, Analysis, and Simulation of Computer Systems, pp. 37–46 (2010)
  3. Barham, P., Dragovic, B., Fraser, K., Hand, S., Harris, T., Ho, A., Neugebauer, R., Pratt, I., Warfield, A.: Xen and the art of virtualization. In: Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles, SOSP 2003, pp. 164–177. ACM, New York (2003)
  4. Clark, C., Fraser, K., Hand, S., Hansen, J.G., Jul, E., Limpach, C., Pratt, I., Warfield, A.: Live migration of virtual machines. In: Proceedings of the 2nd Conference on Symposium on Networked Systems Design & Implementation, NSDI 2005, vol. 2, pp. 273–286. USENIX Association, Berkeley (2005)
  5. Cully, B., Lefebvre, G., Hutchinson, N., Warfield, A.: Remus source code, http://dsg.cs.ubc.ca/remus/
  6. Cully, B., Lefebvre, G., Meyer, D., Feeley, M., Hutchinson, N., Warfield, A.: Remus: high availability via asynchronous virtual machine replication. In: Proceedings of the 5th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2008, pp. 161–174. USENIX Association, Berkeley (2008)
  7. Ekman, M., Stenstrom, P.: A robust main-memory compression scheme. In: ISCA 2005: Proceedings of the 32nd Annual International Symposium on Computer Architecture, pp. 74–85. IEEE Computer Society, Washington, DC (2005)
  8. Gupta, D., Lee, S., Vrable, M., Savage, S., Snoeren, A.C., Varghese, G., Voelker, G.M., Vahdat, A.: Difference engine: harnessing memory redundancy in virtual machines. Commun. ACM 53, 85–93 (2010)
  9. Henson, V.: An analysis of compare-by-hash. In: Proceedings of the 9th Conference on Hot Topics in Operating Systems, vol. 9, pp. 3–3 (2003)
  10. Henson, V., Henderson, R.: Guidelines for using compare-by-hash, http://infohost.nmt.edu/~val/review/hash2.pdf
  11. Hines, M.R., Gopalan, K.: Post-copy based live virtual machine migration using adaptive pre-paging and dynamic self-ballooning. In: Proceedings of the 2009 ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments, VEE 2009, pp. 51–60. ACM, New York (2009)
  12. Huang, W., Gao, Q., Liu, J., Panda, D.K.: High performance virtual machine migration with RDMA over modern interconnects. In: CLUSTER 2007: Proceedings of the 2007 IEEE International Conference on Cluster Computing, pp. 11–20. IEEE Computer Society, Washington, DC (2007)
  13. Jiang, B., Ravindran, B., Kim, C.: Lightweight live migration for high availability cluster service. In: Dolev, S., Cobb, J., Fischer, M., Yung, M. (eds.) SSS 2010. LNCS, vol. 6366, pp. 420–434. Springer, Heidelberg (2010)
  14. Jin, H., Deng, L., Pan, X.: Live virtual machine migration with adaptive memory compression. In: 2009 IEEE International Conference on Cluster Computing and Workshops, pp. 1–10 (2009)
  15. Liu, H., Jin, H., Liao, X., Hu, L., Yu, C.: Live migration of virtual machine based on full system trace and replay. In: HPDC 2009: Proceedings of the 18th ACM International Symposium on High Performance Distributed Computing, pp. 101–110. ACM, New York (2009)
  16. Lu, M., Chiueh, T.C.: Fast memory state synchronization for virtualization-based fault tolerance. In: IEEE/IFIP International Conference on Dependable Systems and Networks, pp. 534–543 (2009)
  17. Nelson, M., Lim, B.H., Hutchins, G.: Fast transparent migration for virtual machines. In: ATEC 2005: Proceedings of the Annual Conference on USENIX Annual Technical Conference, pp. 25–25. USENIX Association, Berkeley (2005)
  18. Bradford, R., Kotsovinos, E., Feldmann, A., Schiöberg, H.: Live wide-area migration of virtual machines including local persistent state. In: VEE 2007: Proceedings of the Third International Conference on Virtual Execution Environments, pp. 169–179. ACM Press, San Diego (2007)
  19. Sun, Y., Luo, Y., Wang, X., Wang, Z., Zhang, B., Chen, H., Li, X.: Fast live cloning of virtual machine based on Xen. In: Proceedings of the 2009 11th IEEE International Conference on High Performance Computing and Communications, pp. 392–399. IEEE Computer Society, Washington, DC (2009)
  20. Tamura, Y., Sato, K., Kihara, S., Moriai, S.: Kemari: Virtual machine synchronization for fault tolerance using DomT (technical report) (June 2008), http://wiki.xen.org/xenwiki/Open_Topics_For_Discussion?action=AttachFile&do=get&target=Kemari_08.pdf
  21. Vrable, M., Ma, J., Chen, J., Moore, D., Vandekieft, E., Snoeren, A.C., Voelker, G.M., Savage, S.: Scalability, fidelity, and containment in the Potemkin virtual honeyfarm. In: Proceedings of the Twentieth ACM Symposium on Operating Systems Principles, SOSP 2005, pp. 148–162. ACM, New York (2005)
  22. Waldspurger, C.A.: Memory resource management in VMware ESX Server. SIGOPS Oper. Syst. Rev. 36, 181–194 (2002)
  23. XenCommunity: Xen unstable source, http://xenbits.xensource.com/xen-unstable.hg
  24. Zhao, M., Figueiredo, R.J.: Experimental study of virtual machine migration in support of reservation of cluster resources. In: VTDC 2007: Proceedings of the 2nd International Workshop on Virtualization Technology in Distributed Computing, pp. 5:1–5:8. ACM, New York (2007)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Peng Lu (1)
  • Binoy Ravindran (1)
  • Changsoo Kim (2)
  1. ECE Department, Virginia Tech, USA
  2. ETRI, Daejeon, South Korea
