
An intelligent scheduling algorithm for resource management of cloud platform


Abstract

Cloud-computing technologies and their applications are becoming increasingly popular: they improve the working efficiency of both enterprises and individuals while greatly reducing users' costs, and the scale of cloud platforms and their applications is expanding rapidly. Yet it remains challenging to utilize resources effectively while guaranteeing quality of service to users, and the quality of the task scheduling algorithm plays a key role here. On the one hand, traditional rule-based scheduling algorithms such as FCFS and priority-based scheduling focus on the rule itself rather than on the characteristics of the virtual machines (VMs) and tasks, which ultimately leads to poor performance. On the other hand, one may carefully select a set of features from sample data and employ machine-learning algorithms to train a scheduling policy. This approach has several deficiencies: the quality of the manually selected features directly determines the quality of the resulting scheduler; many effective scheduling algorithms require a large number of labeled samples, which are very difficult to acquire in practice; and trained schedulers are often applicable only to specific environments and degrade easily outside them. To address these deficiencies, this paper presents a model-free, end-to-end task scheduling agent based on a deep reinforcement learning (DRL) model. The agent interacts with the cloud environment, taking the raw tasks of the cloud platform as input and outputting the virtual machine that should execute each task. It learns scheduling knowledge from the execution of tasks and continually optimizes its scheduling policy. The proposed algorithm overcomes the low adaptability and flexibility of traditional scheduling algorithms and provides a new, feasible approach to task scheduling in cloud environments.
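The end-to-end pipeline the abstract describes, a DRL agent that observes the raw state of the platform and directly emits the index of the VM that should run each task, can be pictured as a small deep Q-network with experience replay. The sketch below is illustrative only: the state layout (per-VM load levels concatenated with the incoming task's resource demands), the reward shape, the network size, and all names such as NUM_VMS and TASK_DIM are assumptions for the example, not the authors' exact formulation.

```python
# Minimal DQN-style scheduling agent sketch (assumed state/reward design,
# not the paper's exact architecture).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

NUM_VMS = 4        # assumed number of virtual machines
TASK_DIM = 3       # assumed task features, e.g. CPU, memory, expected length
STATE_DIM = NUM_VMS + TASK_DIM


class QNet(nn.Module):
    """Maps a (VM loads, task features) state to one Q-value per VM."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_VMS),
        )

    def forward(self, x):
        return self.net(x)


class SchedulerAgent:
    """Model-free scheduler: epsilon-greedy DQN with experience replay."""

    def __init__(self, gamma=0.99, eps=0.1, lr=1e-3, buffer_size=10_000):
        self.q = QNet()
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.gamma, self.eps = gamma, eps
        self.replay = deque(maxlen=buffer_size)  # (s, a, r, s') transitions

    def act(self, state):
        """Choose the index of the VM that should execute the task."""
        if random.random() < self.eps:
            return random.randrange(NUM_VMS)          # explore
        with torch.no_grad():
            q_values = self.q(torch.tensor(state, dtype=torch.float32))
        return int(q_values.argmax())                 # exploit

    def learn(self, batch_size=32):
        """One TD(0) update on a random minibatch from the replay buffer."""
        if len(self.replay) < batch_size:
            return
        s, a, r, s2 = zip(*random.sample(self.replay, batch_size))
        s = torch.tensor(s, dtype=torch.float32)
        a = torch.tensor(a, dtype=torch.int64)
        r = torch.tensor(r, dtype=torch.float32)
        s2 = torch.tensor(s2, dtype=torch.float32)
        q_sa = self.q(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():                         # bootstrapped target
            target = r + self.gamma * self.q(s2).max(dim=1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
```

In a training loop one would, for each arriving task, call act to pick a VM, execute the task, compute a reward (for instance the negative of the task's completion time), append the transition (state, action, reward, next_state) to agent.replay, and call learn. The epsilon-greedy policy provides the exploration the agent needs to improve its scheduling policy beyond its initial random behavior, which matches the abstract's account of learning scheduling knowledge through task execution.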



Acknowledgments

The authors would like to thank the reviewers for their helpful advice. Support from the National Science and Technology Major Project (Grant No. 2017YFB0803001), the Natural Science Foundation of Hunan Province, China (Grant No. 2018JJ2023), and the Scientific Research Fund of Hunan Provincial Education Department (Grant No. 17C0295) is gratefully acknowledged.

Author information

Corresponding author

Correspondence to Xiaoning Zhu.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Jin, H., Fu, Y., Yang, G. et al. An intelligent scheduling algorithm for resource management of cloud platform. Multimed Tools Appl 79, 5335–5353 (2020). https://doi.org/10.1007/s11042-018-6477-4

