
Unified resource management in cloud based data centers

  • S.I.: Cloud Computing for Scientific and Business Needs

CSI Transactions on ICT

Abstract

Maintaining a high and efficient level of resource utilization is highly desirable in a cloud based data center, as it keeps costs low for both the cloud provider and its users. However, managing and allocating resources to the different hosted applications is challenging, because data center resources are diverse (computing, memory, storage, and network) and so are the applications (web services, databases, big data analytics, mail servers, and many more). In fact, finding an optimal allocation of resources to applications in a cloud based data center is an intractable task. Currently available resource management and allocation schemes are heuristics that manage only a subset of the available resource types. Applying such schemes results in resource fragmentation, where some available resources become unusable due to the unavailability of other resources. This wastage of available resources causes inefficiency, degrades the performance of the data center and its applications, and drives up costs. In this paper, we first present the reasons why such resource fragmentation occurs. We then present an approach that avoids this wastage of data center resources. Experiments show that the proposed approach allows up to 60% more applications to be hosted in a data center than current schemes, thereby improving resource utilization efficiency.




Notes

  1. As there is a one-to-one correspondence between a VM and an application component, we use these terms interchangeably in this document.

  2. The terms host, physical machine, and server are used interchangeably in this document.

  3. The terms resource allocation problem, VM placement problem, and application placement problem are used interchangeably in this document.

  4. Bisection bandwidth: the bandwidth across the smallest cut that divides the network into two equal halves (see the short sketch after these notes).
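To make footnote 4 concrete, here is a minimal brute-force sketch in Python; the four-node ring topology and link capacities are invented purely for illustration and do not come from the paper.

    from itertools import combinations

    # Toy illustration of bisection bandwidth: brute-force the minimum
    # capacity crossing any cut that splits the nodes into equal halves.
    # The ring topology and capacities below are hypothetical.
    nodes = ["a", "b", "c", "d"]
    links = {("a", "b"): 10, ("b", "c"): 10, ("c", "d"): 10, ("d", "a"): 10}

    def bisection_bandwidth(nodes, links):
        best = float("inf")
        for half in combinations(nodes, len(nodes) // 2):
            cut = sum(cap for (u, v), cap in links.items()
                      if (u in half) != (v in half))
            best = min(best, cut)
        return best

    print(bisection_bandwidth(nodes, links))  # prints 20 for this ring

For this ring, the balanced cut {a, b} versus {c, d} severs two 10-unit links, so the bisection bandwidth is 20.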

References

  1. Al-Fares M, Radhakrishnan S, Raghavan B, Huang N, Vahdat A (2010) Hedera: dynamic flow scheduling for data center networks. In: Proceedings of the 7th USENIX conference on networked systems design and implementation. USENIX, p 19

  2. Ballani H, Costa P, Karagiannis T, Rowstron A (2011) Towards predictable datacenter networks. In: ACM SIGCOMM computer communication review, vol 41. ACM, pp 242–253

  3. Bin packing problem. http://en.wikipedia.org/wiki/Bin_packing_problem. Accessed 6 June 2015

  4. Bodík P et al (2012) Surviving failures in bandwidth-constrained datacenters. In: Proceedings of the ACM SIGCOMM 2012 conference on applications, technologies, architectures, and protocols for computer communication. ACM, pp 431–442

  5. Clark C, Fraser K, Hand S, Hansen JG, Jul E, Limpach C, Pratt I, Warfield A (2005) Live migration of virtual machines. In: NSDI 2005

  6. Data set for imc 2010 data center measurement. http://pages.cs.wisc.edu/~tbenson/IMC10_Data.html. Accessed 6 June 2015

  7. Farrington N, Andreyev A (2013) Facebook’s data center network architecture. In: Optical interconnects conference. IEEE, pp 49–50

  8. Giurgiu I, Castillo C, Tantawi A, Steinder M (2012) Enabling efficient placement of virtual infrastructures in the cloud. In: Proceedings of the 13th international middleware conference, middleware ’12. Springer, New York, pp 332–353

  9. Gmach D, Rolia J, Cherkasova L, Kemper A (2007) Workload analysis and demand prediction of enterprise data center applications. In: Proceedings of the 2007 IEEE 10th international symposium on workload characterization

  10. Greenberg A et al (2009) Vl2: a scalable and flexible data center network. ACM SIGCOMM Comput Commun Rev 39:51

  11. Guo C et al (2008) Dcell: a scalable and fault-tolerant network structure for data centers. ACM SIGCOMM Comput Commun Rev 38(4):75–86

  12. Guo C et al (2009) Bcube: a high performance, server-centric network architecture for modular data centers. ACM SIGCOMM Comput Commun Rev 39(4):63–74

  13. LaCurts K, Deng S, Goyal A, Balakrishnan H (2013) Choreo: network-aware task placement for cloud applications. In: Proceedings of the 2013 conference on internet measurement conference. ACM, p 191

  14. Lee J et al (2014) Application-driven bandwidth guarantees in datacenters. In: Proceedings of the 2014 ACM conference on SIGCOMM. ACM, pp 467–478

  15. Meng X et al (2010) Efficient resource provisioning in compute clouds via VM multiplexing. In: Proceedings of the 7th international conference on autonomic computing. ACM, pp 11–20

  16. Meng X, Pappas V, Zhang L (2010) Improving the scalability of data center networks with traffic-aware virtual machine placement. In: Proceedings of IEEE INFOCOM. IEEE, pp 1–9

  17. Mishra M, Sahoo A (2011) On theory of VM placement: anomalies in existing methodologies and their mitigation using a novel vector based approach. In: Proceedings of the 4th international conference on cloud computing. IEEE

  18. Mishra M, Bellur U (2014) Whither tightness of packing? The case for stable VM placement. IEEE Trans Cloud Comput 4(4):481–494

  19. Mishra M, Bellur U (2016) De-fragmenting the cloud. In: Proceedings of the 2016 16th IEEE/ACM international symposium on cluster, cloud and grid computing (CCGrid). IEEE/ACM, pp 511–520

  20. Mishra M, Das A, Kulkarni P, Sahoo A (2012) Dynamic resource management using virtual machine migrations. IEEE Commun Mag 50(9):34–40

  21. Mogul JC, Popa L (2012) What we talk about when we talk about cloud network performance. ACM SIGCOMM Comput Commun Rev 42(5):44–48

  22. Nandi B, Banerjee A, Ghosh S, Banerjee N (2012) Stochastic VM multiplexing for datacenter consolidation. In: 2012 IEEE ninth international conference on services computing (SCC). IEEE, pp 114–121

  23. Nathan S, Kulkarni P, Bellur U (2013) Resource availability based performance benchmarking of virtual machine migrations. In: ACM/SPEC ICPE

  24. Papoulis A, Unnikrishna Pillai S (2002) Probability, random variables, and stochastic processes. Tata McGraw-Hill Education, New Delhi

  25. Rodrigues H, Santos JR, Turner Y, Soares P, Guedes D (2011) Gatekeeper: supporting bandwidth guarantees for multi-tenant datacenter networks. In: Proceedings of the 3rd conference on I/O virtualization. USENIX Association, p 6

  26. Shieh A, Kandula S, Greenberg A, Kim C (2010) Seawall: performance isolation for cloud datacenter networks. In: Proceedings of the 2nd USENIX conference on hot topics in cloud computing. USENIX Association, p 1

  27. Singh A, Korupolu M, Mohapatra D (2008) Server-storage virtualization: integration and load balancing in data centers. In: Proceedings of the 2008 ACM/IEEE conference on supercomputing. IEEE, p 53

  28. Singla A, Hong C-Y, Popa L, Godfrey P (2012) Jellyfish: networking data centers randomly. In: Proceedings of the 9th conference on networked systems design and implementation. USENIX, p 17

  29. Singla A, Singh A, Ramachandran K, Xu L, Zhang Y (2010) Proteus: a topology malleable data center network. In: Proceedings of 9th ACM SIGCOMM workshop on hot topics in networks. ACM, p 8

  30. Verma A, Dasgupta G, Nayak T, De P, Kothari R (2009) Server workload analysis for power minimization using consolidation. In: Proceedings of the 2009 USENIX annual technical conference. USENIX Association, p 28

  31. Wood T, Shenoy P, Venkataramani A, Yousif M (2007) Black-box and gray-box strategies for virtual machine migration. In: Proceedings of NSDI. USENIX


Author information


Corresponding author

Correspondence to Mayank Mishra.

Additional information

This work is an extension of previous work by the same authors presented in [18, 19]. Mayank Mishra carried out this work as a Ph.D. student at the Department of Computer Science and Engineering, Indian Institute of Technology Bombay, Mumbai, India.

Appendix: Finding reaches in data center topology

Here is a simple scheme to find reaches in tree based topologies under the following assumptions: (a) each host is connected to exactly one TOR (top-of-rack) switch, (b) each rack holds an even number of hosts, and (c) a switch can have an even number of hosts or switches connected to it at the lower level. A reach consists of a set of hosts together with the set of switches that connect them to the rest of the topology via an oversubscribed network link. It can be seen from Fig. 8 that reaches always lie in the non-oversubscribed zone. For ease of explanation, we call the switches beyond which the network links are oversubscribed "boundary switches". For example, all the TOR switches in a tree topology (Fig. 8a) and all the aggregation switches in a CLOS topology (Fig. 8b) are boundary switches. Let S be the set of all boundary switches. The key intuition behind the reach-finding algorithm is to partition S into subsets such that all boundary switches in a subset share common hosts. We now present the reach-finding algorithm. Let R denote the set of reaches, initialized to the empty set (\(R \leftarrow \varnothing\)).

Procedure to find reaches in a data center:

  1. Let \(S_v\) denote the set of visited switches. Initialize \(S_v \leftarrow \varnothing\).

  2. For every switch \(s \in S\):

     (a) If \(s \in S_v\), skip s and continue with the next switch in S.

     (b) Let H denote the set of hosts that have s in their parent chain. Here the parent chain of a host is the set containing its parent, its parent's parent, and so on up to a root switch.

     (c) Let P denote the switches at the same level as s that lie in the parent chain of some host in H.

     (d) Let r be a new reach: \(r.hosts \leftarrow H\), \(r.switches \leftarrow P\).

     (e) \(S_v \leftarrow S_v \cup P\).

     (f) \(R.add(r)\).

It should be noted that the set R of reaches needs to be computed only once for a given data center; a sketch of the procedure in code follows.
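To make the procedure concrete, here is a minimal sketch in Python. It assumes the topology is represented with parent pointers, each node recording its parent and its level in the tree; the Node class and the names hosts and boundary_switches are illustrative assumptions, not the paper's implementation.

    # Minimal sketch of the reach-finding procedure above, under an
    # assumed parent-pointer representation of the topology.
    class Node:
        def __init__(self, name, parent=None, level=0):
            self.name = name
            self.parent = parent  # parent switch; None for a root switch
            self.level = level    # depth in the topology tree

    def parent_chain(node):
        """Ancestors of a node: its parent, its parent's parent, ..., root."""
        chain, p = set(), node.parent
        while p is not None:
            chain.add(p)
            p = p.parent
        return chain

    def find_reaches(hosts, boundary_switches):
        """Compute the set R of reaches; needed only once per data center."""
        reaches = []       # R
        visited = set()    # S_v
        for s in boundary_switches:          # step 2
            if s in visited:                 # step (a)
                continue
            # Step (b): hosts that have s in their parent chain.
            H = {h for h in hosts if s in parent_chain(h)}
            # Step (c): switches at the same level as s that appear in
            # the parent chain of some host in H.
            P = {sw for h in H
                 for sw in parent_chain(h) if sw.level == s.level}
            reaches.append({"hosts": H, "switches": P})  # steps (d), (f)
            visited |= P                                 # step (e)
        return reaches

In the regular tree and CLOS topologies assumed in the appendix, each host has exactly one ancestor at the boundary-switch level, so each host ends up in exactly one reach.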


About this article


Cite this article

Mishra, M., Bellur, U. Unified resource management in cloud based data centers. CSIT 5, 361–374 (2017). https://doi.org/10.1007/s40012-017-0168-6

