Client-Side Scheduling Based on Application Characterization on Kubernetes

  • Víctor Medel
  • Carlos Tolón
  • Unai Arronategui
  • Rafael Tolosana-Calasanz
  • José Ángel Bañares
  • Omer F. Rana
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10537)

Abstract

In container management systems such as Kubernetes, the scheduler must place containers on physical machines while remaining aware of the performance degradation caused by co-locating containers that are poorly isolated from one another. We propose that clients provide a characterization of their applications, allowing the scheduler to evaluate the best configuration for the workload at any given moment. The default Kubernetes scheduler considers only the sum of the resources requested on each machine, which is insufficient to address this degradation. In this paper, we show that specifying resource limits is not enough to avoid resource contention, and we propose the architecture of a scheduler, based on client-supplied application characterizations, that avoids it.
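The contrast drawn in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the node/pod model, the profile labels, and the interference penalties are all hypothetical, chosen only to show why a "sum of requested resources" score cannot distinguish placements that a contention-aware score can.

```python
# Hypothetical sketch: default "sum of requests" placement vs. a
# contention-aware score driven by a client-supplied characterization.
from dataclasses import dataclass, field

@dataclass
class Pod:
    name: str
    request: float      # requested resource units (abstract)
    profile: str        # client-supplied characterization, e.g. "memory-bound"

@dataclass
class Node:
    name: str
    capacity: float
    pods: list = field(default_factory=list)

    def requested(self):
        return sum(p.request for p in self.pods)

def default_score(node, pod):
    """Default-style scoring: prefer the node with the most spare requested capacity."""
    return node.capacity - node.requested() - pod.request

# Illustrative interference penalties between co-located profiles (assumed numbers).
INTERFERENCE = {("memory-bound", "memory-bound"): 0.5,
                ("cpu-bound", "cpu-bound"): 0.3}

def contention_aware_score(node, pod):
    """Subtract a penalty for each co-located pod whose profile is known
    to contend with the incoming pod's profile."""
    score = default_score(node, pod)
    for other in node.pods:
        key = tuple(sorted((other.profile, pod.profile)))
        score -= INTERFERENCE.get(key, 0.0)
    return score

def place(pod, nodes, score_fn):
    return max(nodes, key=lambda n: score_fn(n, pod))

nodes = [Node("n1", 4.0, [Pod("a", 1.0, "memory-bound")]),
         Node("n2", 4.0, [Pod("b", 1.0, "cpu-bound")])]
new_pod = Pod("c", 1.0, "memory-bound")

# Both nodes have identical free requested capacity, so the default score
# cannot tell them apart; the contention-aware score steers the memory-bound
# pod away from the node already running another memory-bound pod.
print(place(new_pod, nodes, default_score).name)           # n1 (first of a tie)
print(place(new_pod, nodes, contention_aware_score).name)  # n2
```

The point of the sketch is that both scores see the same requested capacity; only the characterization-based penalty encodes the isolation problem the paper targets.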

Keywords

Containers · Scheduling · Resource contention · Resource management

Acknowledgements

This work was co-financed by the Industry and Innovation department of the Aragonese Government and European Social Funds (COSMOS research group, ref. T93); and by the Spanish Ministry of Economy under the program “Programa de I+D+i Estatal de Investigación, Desarrollo e innovación Orientada a los Retos de la Sociedad”, project id TIN2013-40809-R. V. Medel was the recipient of a fellowship from the Spanish Ministry of Economy.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Víctor Medel (1)
  • Carlos Tolón (1)
  • Unai Arronategui (1)
  • Rafael Tolosana-Calasanz (1)
  • José Ángel Bañares (1)
  • Omer F. Rana (2)
  1. Computer Science and Systems Engineering Department, Aragón Institute of Engineering Research (I3A), University of Zaragoza, Zaragoza, Spain
  2. School of Computer Science and Informatics, Cardiff University, Cardiff, UK