
Labeled von Neumann Architecture for Software-Defined Cloud


Abstract

As cloud computing advances rapidly, cloud providers face three great challenges: long tail latency, low utilization, and high interference. Providers co-locate multiple workloads on a single server to improve resource utilization, but the co-located applications then suffer from severe performance interference and long tail latency, which lead to an unpredictable user experience. To meet these challenges, the software-defined cloud has been proposed to facilitate tighter coordination among applications, operating systems, and hardware: users' quality of service (QoS) requirements can be propagated all the way down to the hardware, which serves them with differentiated management mechanisms. However, there is little hardware support for maintaining and guaranteeing users' QoS requirements. To this end, this paper proposes the Labeled von Neumann Architecture (LvNA), which introduces a labeling mechanism that conveys software semantic information, such as QoS requirements and security levels, to the underlying hardware. LvNA can correlate labels with various entities (e.g., virtual machines, processes, and threads), propagate labels throughout the whole machine, and program differentiated services based on rules. We consider LvNA to be a fundamental hardware support for the software-defined cloud.
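To make the abstract's labeling idea concrete, the sketch below models, in software, how a label attached to an entity might travel with its memory requests and how a rule table could program differentiated service at a shared resource. All names (`Label`, `MemRequest`, `CacheController`, the QoS class strings) are illustrative assumptions for this sketch, not LvNA's actual hardware interface.

```python
# Hypothetical sketch of LvNA-style label propagation and rule-based
# differentiation. Names and quotas are illustrative, not LvNA's real design.
from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    """A label correlated with a software entity (VM, process, or thread)."""
    entity_id: int
    qos_class: str   # e.g. "latency-critical" or "best-effort"

@dataclass
class MemRequest:
    addr: int
    label: Label     # the label travels with the request through the machine

class CacheController:
    """Programs a differentiated service: each QoS class gets a way quota."""
    def __init__(self, rules):
        self.rules = rules       # qos_class -> max cache ways allowed
        self.ways_used = {}      # qos_class -> ways currently occupied

    def admit(self, req: MemRequest) -> bool:
        """Admit the request only if its class is still within its quota."""
        quota = self.rules.get(req.label.qos_class, 1)
        used = self.ways_used.get(req.label.qos_class, 0)
        if used < quota:
            self.ways_used[req.label.qos_class] = used + 1
            return True
        return False             # over-quota traffic is throttled

# Rules favor the latency-critical tenant over the best-effort one.
rules = {"latency-critical": 8, "best-effort": 2}
ctrl = CacheController(rules)
lc = Label(entity_id=1, qos_class="latency-critical")
be = Label(entity_id=2, qos_class="best-effort")
print(ctrl.admit(MemRequest(0x1000, lc)))  # within quota -> True
```

In real hardware the rule table would live in the memory controller or last-level cache rather than in software, but the control flow is the same: identify the request by its label, look up the rule for that label, and differentiate service accordingly.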



Author information

Correspondence to Yun-Gang Bao.




Cite this article

Bao, Y., Wang, S. Labeled von Neumann Architecture for Software-Defined Cloud. J. Comput. Sci. Technol. 32, 219–223 (2017). https://doi.org/10.1007/s11390-017-1716-0


Keywords

  • software-defined cloud
  • von Neumann architecture
  • tail latency
  • performance interference