
An energy-aware scheduling algorithm for budget-constrained scientific workflows based on multi-objective reinforcement learning

Published in The Journal of Supercomputing

Abstract

As scientific workflows have become a major contributor to energy consumption in clouds, much attention has been paid to reducing the energy they consume. This paper considers a multi-objective workflow scheduling problem under a budget constraint. Most existing budget-constrained workflow scheduling methods cannot always satisfy the budget constraint or guarantee feasible solutions; instead, they report a success rate in their experiments. Only a few methods always produce feasible solutions, and those are overly complicated. It has also become common to consider more than one objective in workflow scheduling, yet these works usually ignore the choice of objective weights, and inappropriate weights degrade solution quality. In this paper, we propose an energy-aware multi-objective reinforcement learning (EnMORL) algorithm. We design a much simpler method, based on the remaining cheapest budget, to ensure the feasibility of solutions. Reinforcement learning based on the Chebyshev scalarization function is a recent framework that is effective in addressing the weight selection problem, so we build EnMORL on it. Our goal is to minimize both the makespan and the energy consumption of the workflow. Finally, we compare EnMORL with two state-of-the-art multi-objective meta-heuristics on four different workflows. The results show that EnMORL outperforms these existing methods.
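The two ingredients the abstract names, the remaining-cheapest-budget feasibility test and Chebyshev-scalarized action selection, can be sketched as follows. This is a minimal illustration under assumed numbers, not the paper's implementation; the function names (`satisfies_budget`, `chebyshev_scalarize`) and all values are hypothetical.

```python
import numpy as np

def satisfies_budget(spent, cheapest_remaining, candidate_cost, budget):
    """Remaining-cheapest-budget check (illustrative): accept an assignment
    only if, after paying candidate_cost for the current task, every
    remaining task could still run on its cheapest resource without
    exceeding the budget."""
    return spent + candidate_cost + sum(cheapest_remaining) <= budget

def chebyshev_scalarize(q_vector, weights, utopia):
    """Chebyshev scalarization of a multi-objective Q-vector: the weighted
    L-infinity distance from a utopian reference point z*. Lower is
    better, so the greedy action minimizes this value."""
    return np.max(weights * np.abs(q_vector - utopia))

# Hypothetical numbers: 3 candidate actions, 2 objectives
# (makespan, energy), both to be minimized.
q = np.array([[10.0, 5.0],   # action 0: fast but energy-hungry
              [ 6.0, 9.0],   # action 1: slow but frugal
              [ 8.0, 7.0]])  # action 2: balanced
weights = np.array([0.5, 0.5])
utopia = np.array([4.0, 3.0])  # slightly better than the best seen per objective

scores = np.array([chebyshev_scalarize(row, weights, utopia) for row in q])
greedy_action = int(np.argmin(scores))  # the balanced action wins here
```

Unlike a weighted sum, the Chebyshev metric can prefer balanced trade-offs such as action 2 above, which is why this family of scalarizations is less sensitive to the exact weight choice.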



Acknowledgements

We would like to thank the anonymous referees for their helpful suggestions to improve this paper. This work was supported in part by the National Natural Science Foundation of China under Grant NSFC 61672323, in part by the Fundamental Research Funds of Shandong University under Grant 2017JC043, in part by the Key Research and Development Program of Shandong Province under Grant 2017GGX10122 and Grant 2017GGX10142, and in part by the Natural Science Foundation of Shandong Province under Grant ZR2019MF072.

Author information


Corresponding author

Correspondence to Hua Wang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Qin, Y., Wang, H., Yi, S. et al. An energy-aware scheduling algorithm for budget-constrained scientific workflows based on multi-objective reinforcement learning. J Supercomput 76, 455–480 (2020). https://doi.org/10.1007/s11227-019-03033-y

