Adaptive Resource Remapping through Live Migration of Virtual Machines

  • Muhammad Atif
  • Peter Strazdins
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7016)


In this paper we present ARRIVE-F, a novel open-source framework which addresses the issue of heterogeneity in compute farms. Unlike previous approaches, our framework is not based on linear frequency models and requires neither source code modifications nor off-line profiling. The heterogeneous compute farm is first divided into a number of virtualized homogeneous sub-clusters. The framework then carries out a lightweight ‘online’ profiling of the CPU, communication and memory subsystems of all the active jobs in the compute farm. From this, it constructs a performance model to predict the execution time of each job on every distinct sub-cluster in the compute farm. Based upon the predicted execution times, the framework relocates compute jobs to the best-suited hardware platforms such that the overall throughput of the compute farm is increased. We utilize the live migration feature of virtual machine monitors to migrate jobs from one sub-cluster to another.
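The remapping decision described above can be sketched as follows. This is an illustrative assumption, not the paper's exact algorithm: the function names, the greedy one-job-at-a-time policy, and the fixed migration-cost parameter are all hypothetical, and stand in for ARRIVE-F's actual cost/benefit analysis over predicted execution times.

```python
# Hypothetical sketch of a cost/benefit remapping decision in the spirit of
# ARRIVE-F: given predicted execution times for a job on each homogeneous
# sub-cluster, decide whether a live migration pays off. All names and the
# greedy policy are illustrative assumptions.

def predicted_benefit(remaining_time, predicted_times, current, candidate,
                      migration_cost):
    """Estimated wall-clock saving from migrating a job between sub-clusters.

    remaining_time   -- seconds of work left on the current sub-cluster
    predicted_times  -- dict: sub-cluster name -> predicted total runtime there
    migration_cost   -- seconds of live-migration overhead charged to the move
    """
    # Scale the remaining work by the ratio of predicted execution times,
    # then charge the live-migration overhead against the saving.
    speedup = predicted_times[current] / predicted_times[candidate]
    return remaining_time - (remaining_time / speedup) - migration_cost


def choose_subcluster(remaining_time, predicted_times, current,
                      migration_cost=30.0):
    """Pick the sub-cluster with the largest positive predicted benefit,
    or stay put if no migration is predicted to pay for itself."""
    best, best_gain = current, 0.0
    for candidate in predicted_times:
        if candidate == current:
            continue
        gain = predicted_benefit(remaining_time, predicted_times,
                                 current, candidate, migration_cost)
        if gain > best_gain:
            best, best_gain = candidate, gain
    return best, best_gain
```

For example, a job with 600 s of work remaining on a sub-cluster where its predicted total runtime is 1000 s, versus 500 s on a faster sub-cluster, would finish in 300 s after migrating; with a 30 s migration cost the predicted saving is 270 s, so the move is taken.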

The prediction accuracy of our performance estimation model is over 80%. The implementation of ARRIVE-F is lightweight, with an overhead of 3%. Experiments on a synthetic workload of scientific benchmarks show that we are able to improve the throughput of a moderately heterogeneous compute farm by up to 25%, with a time saving of up to 33%.


Keywords: Virtual Machine · Migration Decision · Wall Clock Time · Average Wait Time · Heterogeneous Compute
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Muhammad Atif (1)
  • Peter Strazdins (2)
  1. ANU Supercomputer Facility, The Australian National University, Canberra, Australia
  2. Research School of Computer Science, The Australian National University, Canberra, Australia
