
Efficient resource management for virtual desktop cloud computing

Published in The Journal of Supercomputing

Abstract

In virtual desktop cloud computing, user applications are executed in virtual desktops on remote servers. This offers great advantages in terms of usability and resource utilization; however, handling a large number of clients in an efficient manner poses important challenges, in particular deciding how many clients to handle on one server and where to execute the user applications at each moment. Assigning too many users to one server leads to customer dissatisfaction, while assigning too few leads to higher investment costs. We study different aspects of optimizing resource usage and customer satisfaction. The results of the paper indicate that resource utilization can increase by 29% by applying the proposed optimizations. Up to 36.6% of the energy can be saved when the size of the online server pool is adapted to the system load by putting redundant hosts into sleep mode.




Acknowledgements

Lien Deboosere and Bert Vankeirsbilck are funded by a Ph.D. grant from the Institute for the Promotion of Innovation through Science and Technology in Flanders (IWT-Vlaanderen). Part of the research leading to these results was done for the MobiThin Project and has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 216946.


Corresponding author

Correspondence to Lien Deboosere.

Appendix: Impact of overbooking on user satisfaction: analytical use case


For the customer, it is important that the negative impact of overbooking on the number of SLA violations experienced is acceptable. With a given overbooking degree and the advanced resource scheduler, a virtual desktop never encounters an SLA violation as long as its requested resources do not exceed its reserved resources. When the virtual desktop requests more resources than reserved, it can encounter an SLA violation, depending on the resource requests of the other virtual desktops executed on the same host. Only when the total amount of requested resources exceeds the available resources (F) of the host does at least one virtual desktop experience an SLA violation. The probability that this occurs is calculated as

\[P_{\mathrm{viol}} = P\biggl[\sum_{i=1}^{n} \mathit{req}_i > F\biggr].\]
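Under the i.i.d. normal assumption made below, this tail probability has a closed form: the sum of n independent N(μ, σ²) requests is itself normal, N(nμ, nσ²). The following sketch (illustrative code, not part of the paper; the function name and parameter values are our own) evaluates it with the standard normal CDF, and shows that for a fully reserved host (F = nμ) the overflow probability is exactly 0.5:

```python
import math

def overflow_probability(n, mu, sigma, capacity):
    """P[sum of n iid N(mu, sigma^2) resource requests > capacity]."""
    # The sum of n iid normal requests is N(n*mu, n*sigma^2).
    z = (capacity - n * mu) / (sigma * math.sqrt(n))
    # Standard normal CDF, expressed via the error function.
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return 1.0 - cdf

# Fully reserved host: capacity F equals the total reservation n*mu,
# so z = 0 and the overflow probability is exactly 0.5.
p_full = overflow_probability(3, 10000.0, 1500.0, 3 * 10000.0)
```

Adding slack capacity beyond the total reservation drives this probability down quickly, since the standard deviation of the total demand grows only with the square root of n.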

Deducing the number of virtual desktops experiencing an SLA violation when the advanced resource scheduler is applied is complicated. To make an analytical derivation of the probability that a virtual desktop encounters an SLA violation tractable, the following assumptions are made. We assume that the resource consumptions of the virtual desktops are independent and identically distributed normal random variables: N(μ, σ²). The derivation below treats a fully reserved host with three VDs and an overbooking degree of 50% (i.e., μ resources are reserved for each VD).

It is obvious that when all three virtual desktops request fewer resources than reserved, no SLA violations occur on the host. The domain in which all three virtual desktops request fewer resources than reserved is called \(D_0 = \{\mathit{req}_1, \mathit{req}_2, \mathit{req}_3 \le \mu\}\). On the other hand, when all three virtual desktops request more resources than reserved, all three experience an SLA violation. This domain is called \(D_3 = \{\mathit{req}_1, \mathit{req}_2, \mathit{req}_3 > \mu\}\). When some virtual desktops request more resources than reserved and others request less, further elaboration is required to determine the probability that a virtual desktop encounters an SLA violation. Two cases can be distinguished: (i) one VD requesting fewer resources than reserved and two VDs requesting more (domain \(D_1 = \{\mathit{req}_1 \le \mu,\ \mathit{req}_2, \mathit{req}_3 > \mu\}\)), and (ii) two VDs requesting fewer resources than reserved and one VD requesting more (domain \(D_2 = \{\mathit{req}_1, \mathit{req}_2 \le \mu,\ \mathit{req}_3 > \mu\}\)).
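Because a normal request exceeds its mean μ with probability 1/2, the probabilities of landing in domains with 0, 1, 2 or 3 over-reserving desktops follow a Binomial(3, 1/2) law: 1/8, 3/8, 3/8 and 1/8. The following Monte Carlo sketch (illustrative only; the seed and sample counts are arbitrary choices, not from the paper) checks this:

```python
import random
from collections import Counter
from math import comb

def overbooked_count(requests, mu):
    """Number of virtual desktops requesting more than their reservation mu,
    i.e. the index k of the domain D_k the request vector falls into."""
    return sum(1 for r in requests if r > mu)

random.seed(7)  # arbitrary seed, for reproducibility
mu, sigma, trials = 10000.0, 1500.0, 200_000
counts = Counter(
    overbooked_count([random.gauss(mu, sigma) for _ in range(3)], mu)
    for _ in range(trials)
)
estimates = {k: counts[k] / trials for k in range(4)}
# Each request exceeds mu with probability 1/2, so D_k has probability C(3,k)/8.
exact = {k: comb(3, k) / 8 for k in range(4)}
```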

In the first case, i.e., in domain D 1, either 0, 1 or 2 virtual desktops experience an SLA violation. The probability that no SLA violation occurs equals the probability that the additional resources requested by req 2 and req 3 together are smaller than the resources put into the resource pool by req 1. The probability that two SLA violations occur equals the probability that both req 2 and req 3 request more than half of the resources put into the resource pool by req 1. Finally, the probability that exactly one virtual desktop encounters an SLA violation follows from the previous two probabilities.

In the second case, i.e., in domain D 2, either 0 or 1 virtual desktop experiences an SLA violation. As in the first case, the probability that no SLA violation occurs equals the probability that the additional resources requested by req 3 are smaller than the resources put into the resource pool by req 1 and req 2. The probability that one SLA violation occurs follows directly from this probability.

A detailed discussion of the calculation of all the above probabilities would be superfluous; basic statistical methods suffice. As an example, the main steps for the first case (i.e., domain D 1) are presented below.

The probability that no SLA violations occur is calculated as

\[P[0 \text{ violations} \mid D_1] = \int_{-\infty}^{\mu} f_{\mathit{req}_1}(x_1)\, P\bigl[\mathit{req}_2 + \mathit{req}_3 \le 3\mu - x_1\bigr]\, dx_1 \quad (3)\]

with \(f_{\mathit{req}_{1}}(x_{1})\) the density function of the distribution of req 1 in domain D 1.

To calculate the expected value of the sum of req 2 and req 3, the density function of the sum of these resource requests has to be composed first. For independent and identically distributed requests, the density function of y = req 2 + req 3 in domain D 1 (where both requests exceed μ) is the convolution

\[f_y(y) = \int_{\mu}^{y-\mu} f_{\mathit{req}_2}(x)\, f_{\mathit{req}_3}(y - x)\, dx.\]

Substituting the expected value E[y] = E[req 2 + req 3] of the density function f y (y) into (3) makes it possible to calculate the probability that no SLA violations occur in domain D 1.
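As a sanity check on E[y]: a request conditioned on exceeding its reservation, req | req > μ, follows the upper half of N(μ, σ²), whose mean is μ + σ√(2/π), so by independence E[y] = 2(μ + σ√(2/π)). The sketch below (our own illustration, reusing the μ = 10000 and σ = 1500 values from the simulations) verifies this by rejection sampling:

```python
import math
import random

def mean_above_reservation(mu, sigma, n_samples=200_000, seed=1):
    """Monte Carlo estimate of E[req | req > mu] for req ~ N(mu, sigma^2),
    obtained by rejection sampling the upper half of the distribution."""
    rng = random.Random(seed)
    total, kept = 0.0, 0
    while kept < n_samples:
        r = rng.gauss(mu, sigma)
        if r > mu:  # keep only over-reservation requests
            total += r
            kept += 1
    return total / kept

mu, sigma = 10000.0, 1500.0
# Mean of the upper half of N(mu, sigma^2): mu + sigma * sqrt(2/pi).
half_normal_mean = mu + sigma * math.sqrt(2.0 / math.pi)
estimate = mean_above_reservation(mu, sigma)
# By independence, E[y] = E[req2 + req3] = 2 * half_normal_mean in domain D1.
expected_sum = 2.0 * half_normal_mean
```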

Next, the probability that two SLA violations occur is elaborated, under the assumption that the distributions of the resource requests are identical and independent, as

\[P[2 \text{ violations} \mid D_1] = \int_{-\infty}^{\mu} f_{\mathit{req}_1}(x_1)\, \Bigl(P\Bigl[\mathit{req}_2 > \mu + \tfrac{1}{2}(\mu - x_1)\Bigr]\Bigr)^2\, dx_1 \quad (4)\]

\[P[1 \text{ violation} \mid D_1] = 1 - P[0 \text{ violations} \mid D_1] - P[2 \text{ violations} \mid D_1]. \quad (5)\]

The average number of SLA violations in domain D 1 is then calculated as

\[E[\#\text{violations} \mid D_1] = \sum_{k=0}^{2} k \cdot P[k \text{ violations} \mid D_1].\]

In general, the average number of SLA violations on a host with \(n_i\) virtual desktops is calculated as

\[E[\#\text{violations}] = \sum_{j=0}^{n_i} P[D_j] \cdot E[\#\text{violations} \mid D_j].\]

The general approach of the analytical derivation presented above remains applicable when the number of VDs on a host increases; however, it becomes hard to analytically derive the density function of the sum of a large number of resource requests.

Therefore, simulations are used to determine the average number of SLA violations on a host when more VDs are executed on it. The results of the simulations can be found in Fig. 12. In each simulation, the resource requests of a VD are distributed according to a normal distribution N(10000, 1500) and the total amount of FLOPS of the host equals the total amount of reserved resources. Each simulation was run until there was no significant difference between the running averages of two consecutive iterations.
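A simplified version of such a simulation can be sketched as follows (our own toy model, not the paper's CloudSim setup: we assume that when the pooled demand exceeds capacity, exactly the desktops exceeding their own reservation are counted as violated):

```python
import random

def violation_fraction(n, mu=10000.0, sigma=1500.0, trials=100_000, seed=42):
    """Average fraction of virtual desktops on a fully reserved host that
    experience an SLA violation, under a simplified pooled scheduler:
    violations only happen when total demand exceeds host capacity, and
    then only the desktops exceeding their own reservation are affected."""
    rng = random.Random(seed)
    capacity = n * mu  # fully reserved host: capacity equals total reservation
    violated = 0
    for _ in range(trials):
        reqs = [rng.gauss(mu, sigma) for _ in range(n)]
        if sum(reqs) > capacity:
            violated += sum(1 for r in reqs if r > mu)
    return violated / (trials * n)

fractions = {n: violation_fraction(n) for n in (2, 4, 8, 16)}
```

Even in this toy model the estimated violation fractions stay well below the 50% overbooking degree, matching the qualitative message of the appendix.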

Fig. 12

Probability that a virtual desktop experiences an SLA violation for different numbers of simultaneous virtual desktops on a host with an overbooking degree of 50%. In each case, the total amount of available FLOPS of the host is completely reserved by the virtual desktops. The results are compared for the simple scheduler (i.e., the standard scheduler in CloudSim) and the advanced scheduler discussed in Sect. 3.2

The results in Fig. 12 show that applying an overbooking degree of 50% does not necessarily lead to an SLA violation probability of 50% when the advanced resource scheduler is used.


Cite this article

Deboosere, L., Vankeirsbilck, B., Simoens, P. et al. Efficient resource management for virtual desktop cloud computing. J Supercomput 62, 741–767 (2012). https://doi.org/10.1007/s11227-012-0747-0
