Abstract
We are given a set of parallel jobs that have to be executed on a set of speed-scalable processors that can vary their speeds dynamically. Running a job at a slower speed is more energy-efficient, but it takes longer and degrades performance. Every job is characterized by its processing volume and by either the number or the specific set of processors it requires. Our objective is to minimize the maximum completion time subject to the constraint that the energy consumption does not exceed a given budget. For various particular cases, we propose polynomial-time approximation algorithms consisting of two stages. At the first stage, we formulate an auxiliary convex program; solving it yields the processing times of jobs and a lower bound on the makespan. At the second stage, we transform our problem into the corresponding scheduling problem with constant processor speeds and construct a feasible schedule. We also obtain an “almost exact” solution for the preemptive setting based on a configuration linear program.
References
Albers, S., Müller, F., Schmelzer, S.: Speed scaling on parallel processors. Algorithmica 68(2), 404–425 (2014)
Angel, E., Bampis, E., Kacem, F., Letsios, D.: Speed scaling on parallel processors with migration. In: Euro-Par 2012 Parallel Processing, pp. 128–140. Springer, Berlin (2012)
Antczak, T.: Optimality and duality for nonsmooth multiobjective programming problems with V-r-invexity. J. Glob. Optim. 45(2), 319–334 (2009)
Bampis, E., Kononov, A., Letsios, D., Lucarelli, G., Sviridenko, M.: Energy efficient scheduling and routing via randomized rounding. J. Sched. 21(1), 35–51 (2018)
Bampis, E., Letsios, D., Lucarelli, G.: A note on multiprocessor speed scaling with precedence constraints. In: 26th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA 2014, pp. 138–142. ACM (2014)
Bampis, E., Letsios, D., Milis, I., Zois, G.: Speed scaling for maximum lateness. Theory Comput. Syst. 58, 304–321 (2016)
Bansal, N., Kimbrel, T., Pruhs, K.: Dynamic speed scaling to manage energy and temperature. In: 45th Annual IEEE Symposium on Foundations of Computer Science, pp. 520–529 (2004)
Bansal, N., Pruhs, K.: Speed scaling to manage temperature. In: STACS 2005, pp. 460–471. Springer, Berlin (2005)
Baptiste, P.: A note on scheduling multiprocessor tasks with identical processing times. Comput. Oper. Res. 30(13), 2071–2078 (2003)
Blazewicz, J., Drabowski, M., Weglarz, J.: Scheduling multiprocessor tasks to minimize schedule length. IEEE Trans. Comput. 35(5), 389–393 (1986)
Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)
Brucker, P., Knust, S., Roper, D., Zinder, Y.: Scheduling UET task systems with concurrency on two parallel identical processors. Math. Methods Oper. Res. 52(3), 369–387 (2000)
Bunde, D.: Power-aware scheduling for makespan and flow. J. Sched. 12, 489–500 (2009)
Coffman, E., Garey, M., Johnson, D., Lapaugh, A.: Scheduling file transfers. SIAM J. Comput. 14(3), 744–780 (1985)
Dempe, S., Zemkoho, A.: On the Karush–Kuhn–Tucker reformulation of the bilevel optimization problem. Nonlinear Anal. Theory Methods Appl. 75(3), 1202–1218 (2012)
Drozdowski, M.: On complexity of multiprocessor tasks scheduling. Bull. Polish Acad. Sci. Tech. Sci. 43(3), 381–392 (1995)
Drozdowski, M.: Scheduling for Parallel Processing. Springer, London (2009)
Du, J., Leung, J.T.: Complexity of scheduling parallel task systems. SIAM J. Discrete Math. 2(4), 472–478 (1989)
Gerards, M.E.T., Hurink, J.L., Hölzenspies, P.K.F.: A survey of offline algorithms for energy minimization under deadline constraints. J. Sched. 19, 3–19 (2016)
Graham, R.L.: Bounds on multiprocessing timing anomalies. SIAM J. Appl. Math. 17(2), 416–429 (1969)
Grötschel, M., Lovász, L., Schrijver, A.: Geometric Algorithms and Combinatorial Optimization, 2nd corrected edn. Springer, Berlin (1993)
Hoogeveen, J., van de Velde, S., Veltman, B.: Complexity of scheduling multiprocessor tasks with prespecified processor allocations. Discrete Appl. Math. 55(3), 259–272 (1994)
Jansen, K., Porkolab, L.: Preemptive parallel task scheduling in O(n) + Poly(m) time. In: Algorithms and Computation, ISAAC 2000. LNCS, pp. 398–409. Springer, Berlin (2000)
Jansen, K., Porkolab, L.: Preemptive scheduling on dedicated processors: applications of fractional graph coloring. J. Sched. 7(1), 35–48 (2004)
Johannes, B.: Scheduling parallel jobs to minimize the makespan. J. Sched. 9(5), 433–452 (2006)
Kononov, A., Kovalenko, Y.: On speed scaling scheduling of parallel jobs with preemption. In: Discrete Optimization and Operations Research, DOOR-2016. LNCS, vol. 9869, pp. 309–321. Springer (2016)
Kononov, A., Kovalenko, Y.: Approximation algorithms for energy-efficient scheduling of parallel jobs. J. Sched. 23(6), 693–709 (2020)
Kononov, A., Kovalenko, Y.: Makespan minimization for parallel jobs with energy constraint. In: Mathematical Optimization Theory and Operations Research, MOTOR-2020. LNCS, vol. 12095, pp. 289–300. Springer (2020)
Kubale, M.: The complexity of scheduling independent two-processor tasks on dedicated processors. Inf. Process. Lett. 24(3), 141–147 (1987)
Kubale, M.: Preemptive versus nonpreemptive scheduling of biprocessor tasks on dedicated processors. Eur. J. Oper. Res. 94(2), 242–251 (1996)
Kuhn, H., Tucker, A.: Nonlinear programming. In: The Second Berkeley Symposium on Mathematical Statistics and Probability, pp. 481–492. University of California Press, Berkeley (1951)
Kunz, K.S.: Numerical Analysis. McGraw-Hill, New York (1957)
Kwon, W.C., Kim, T.: Optimal voltage allocation techniques for dynamically variable voltage processors. ACM Trans. Embed. Comput. Syst. 4(1), 211–230 (2005)
Li, K.: Analysis of the list scheduling algorithm for precedence constrained parallel tasks. J. Comb. Opt. 3, 73–88 (1999)
Li, K.: Energy efficient scheduling of parallel tasks on multiprocessor computers. J. Supercomput. 60(2), 223–247 (2012)
Makarychev, K., Sviridenko, M.: Solving optimization problems with diseconomies of scale via decoupling. J. ACM 65(6), 1–27 (2018)
Naroska, E., Schwiegelshohn, U.: On an on-line scheduling problem for parallel jobs. Inf. Process. Lett. 81(6), 297–304 (2002)
Nesterov, Y.: Lectures on Convex Optimization. Springer, Cham (2018)
Pruhs, K., van Stee, R.: Speed scaling of tasks with precedence constraints. Theory Comput. Syst. 43, 67–80 (2007)
Pruhs, K., Uthaisombut, P., Woeginger, G.: Getting the best response for your erg. ACM Trans. Algorithms 4(3), 1–17 (2008)
Shabtay, D., Kaspi, M.: Parallel machine scheduling with a convex resource consumption function. Eur. J. Oper. Res. 173, 92–107 (2006)
Stefanini, L., Arana-Jiminez, M.: Karush–Kuhn–Tucker conditions for interval and fuzzy optimization in several variables under total and directional generalized differentiability. Fuzzy Sets Syst. 362, 1–34 (2019)
Acknowledgements
The research was supported by RSF grant 21-41-09017.
Appendix
This appendix presents a polynomial-time algorithm for solving the lower bound model (1)–(5). We consider the following two subproblems.
Subproblem (P1)
Subproblem (P2)
Problems (P1) and (P2) are relaxations of problem (1)–(5). Therefore, if an optimal solution of (P1) or (P2) is feasible for (1)–(5), it is also optimal for (1)–(5).
Our algorithm for (1)–(5) consists of three steps. At the first two steps, problems (P1) and (P2) are solved; at the third step, a combination of (P1) and (P2) is formed by equating the values of the objective functions.
1.1 The first step
Before solving problem (P1), we check the following condition
If inequality (46) holds, then the optimal solution of problem (P1) is an optimal solution of problem (1)–(5); otherwise, we go to the second step. Indeed, problem (P1) corresponds to the case where each rigid job j is replaced by \(\textit{size}_j\) single-processor jobs of the same duration \(p_j\), each of which must be scheduled on one processor. Hence, in an optimal solution of (P1) all jobs run at the same speed s, which can be found from the equation
The processing times of jobs are
The condition (46) guarantees that the inequality \(\max _{j\in {\mathcal {J}}}p_j\le \frac{1}{m}\sum _{j\in {\mathcal {J}}}{} \textit{size}_jp_j\) holds for job durations (47).
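Inequality (46) itself is not reproduced above, but since every job in (P1) runs at the common speed s, the durations (47) satisfy \(p_j=W_j/s\), and the inequality \(\max _{j}p_j\le \frac{1}{m}\sum _{j}\textit{size}_jp_j\) can be tested on the works directly because s cancels on both sides. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
def p1_condition_holds(works, sizes, m):
    """Test max_j W_j <= (1/m) * sum_j size_j * W_j, which is equivalent
    to max_j p_j <= (1/m) * sum_j size_j * p_j when p_j = W_j / s for a
    common speed s (the speed cancels on both sides)."""
    return max(works) * m <= sum(sz * w for sz, w in zip(sizes, works))
```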
1.2 The second step
Initially we check the condition
which indicates that the optimal solution of problem (P2) is an optimal solution of problem (1)–(5). Problem (P2) corresponds to simultaneous execution of the jobs. Thus, all jobs should have identical processing times
Condition (48) guarantees that \(\frac{1}{m}\sum _{j\in {\mathcal {J}}}{} \textit{size}_j p_j \le \max _{j\in {\mathcal {J}}}p_j\) for the job durations (49). However, if the total number of processors required by all jobs is more than m, we go to the third step.
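Since all jobs in (P2) share one duration p, the inequality \(\frac{1}{m}\sum _{j}\textit{size}_j p \le p\) reduces to \(\sum _{j}\textit{size}_j\le m\), matching the remark about the total processor requirement. A sketch under that reading (names illustrative):

```python
def p2_condition_holds(sizes, m):
    """With identical durations p, (1/m) * sum_j size_j * p <= p holds
    exactly when the total processor requirement does not exceed m."""
    return sum(sizes) <= m
```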
1.3 The third step
If neither inequality (46) nor inequality (48) is satisfied, then, according to the Karush–Kuhn–Tucker necessary and sufficient conditions (see the last section of the Appendix), the following equality should hold for an optimal solution of problem (1)–(5):
As a result we have problem (P3):
Solving (P3) begins with the optimal solution of (P1), where the processing times of jobs are equal to
As we can see, the job durations \(p_j\) are proportional to the works \(W_j\). We order the jobs by non-increasing works, \(W_1\ge W_2\ge \cdots \ge W_n\). Let l denote the job with the minimal processing volume such that
This condition is equivalent to the condition \(p_l> \frac{1}{m}\sum _{j\in {\mathcal {J}}}{} \textit{size}_j p_j\).
Because the objective of (P3) is a linear function, a necessary condition for the optimal solution of (P3) is
Then we formulate a new problem (P4):
In order to find an optimal solution of problem (P4), we use the Lagrangian method, constructing the Lagrangian function
We calculate the partial derivatives, equate them to zero and find the processing times of jobs
where
Note that \(m>\sum _{i=1}^l \textit{size}_i\) due to condition (50) for choosing index l.
If the following inequality is satisfied for the obtained durations of jobs \(l+1,\dots ,n\)
then we have an optimal solution of problem (1)–(5). Otherwise, we solve problem (P4) again with a new value of l.
Inequality (53) can be rewritten in the following form using the presented durations of jobs (51), (52)
So, instead of solving problem (P4) several times, we only need to find the value of l for which condition (54) is satisfied. This can be done in \(O(\min \{n,m\})\) steps. After that, we use formulas (51) and (52) to calculate the processing times and obtain an optimal solution of the initial problem (1)–(5).
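Since condition (54) is not reproduced above, the scan over candidate values of l can be sketched with the condition as a black-box predicate; only \(\min \{n,m\}\) candidates need to be examined (names illustrative):

```python
def find_l(n, m, cond_54):
    """Return the smallest l in {1, ..., min(n, m)} satisfying cond_54,
    or None if no such l exists.  cond_54 stands in for inequality (54),
    whose closed form depends on the durations (51)-(52)."""
    for l in range(1, min(n, m) + 1):
        if cond_54(l):
            return l
    return None
```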
At each of the three steps, the formulas for the processing times of jobs involve exponentiation and root extraction. Therefore \(p_j,\ j\in {\mathcal {J}},\) may be irrational; they can be computed to any desired accuracy \(\varepsilon '>0\) in \(O\left( n\log \left( \frac{1}{\varepsilon '} n m W_{\max }\right) \right) \) time, where \(W_{\max }=\max _{j\in {\mathcal {J}}}W_j\) (see Newton’s method or the dichotomy method in [32]). If we set \(\varepsilon '=\frac{\varepsilon }{n}\), then we obtain an \(OPT+\varepsilon \) solution of problem (1)–(5), since the objective is the weighted sum of job durations.
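The root extractions can be carried out by the dichotomy (bisection) method: halving the bracketing interval \(O(\log (R/\varepsilon '))\) times reaches accuracy \(\varepsilon '\) for an initial range R. A sketch for a root of the form \(c^{1/(\alpha -1)}\) (the constants \(\alpha \) and c below are illustrative, not from the paper):

```python
def bisect_root(f, lo, hi, eps):
    """Bisection for a non-decreasing f with f(lo) <= 0 <= f(hi):
    each iteration halves the bracketing interval until it is below eps."""
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Example: p = c**(1/(alpha - 1)) as the root of p**(alpha - 1) - c = 0.
alpha, c = 3.0, 8.0
p = bisect_root(lambda x: x ** (alpha - 1) - c, 0.0, max(1.0, c), 1e-9)
```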
1.4 Karush–Kuhn–Tucker conditions
Consider the following convex program:
where all functions \(g_i\) are differentiable, and there is an interior point \(x'\) such that \(g_i(x')<0\) for all \(i=1,\dots ,m\).
Let \(\lambda _i\) be the dual variable associated with the constraint \(g_i(x)\le 0\). The Karush–Kuhn–Tucker conditions are stationarity \(\nabla _x L(x,\lambda )=0\), primal feasibility \(g_i(x)\le 0\), dual feasibility \(\lambda _i\ge 0\), and complementary slackness \(\lambda _i g_i(x)=0\) for all \(i=1,\dots ,m\),
where \(L(x,\lambda )=f(x)+\sum _{i=1}^m \lambda _i g_i(x)\) is the Lagrangian function. The Karush–Kuhn–Tucker conditions are necessary and sufficient for \(x\in R^n\) and \(\lambda \in R^m\) to be primal and dual optimal [11].
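These conditions can be verified numerically on a toy instance; the one below (minimize \(x^2\) subject to \(1-x\le 0\), with optimum \(x^*=1\), \(\lambda ^*=2\)) is our own illustration, not from the paper:

```python
# Toy convex program: f(x) = x^2, g(x) = 1 - x <= 0.
# The optimum x* = 1 with multiplier lambda* = 2 satisfies all four
# KKT conditions: stationarity, primal/dual feasibility, complementarity.
x, lam = 1.0, 2.0
grad_L = 2 * x + lam * (-1.0)       # d/dx [f(x) + lam * g(x)]
assert abs(grad_L) < 1e-12          # stationarity
assert 1 - x <= 0                   # primal feasibility
assert lam >= 0                     # dual feasibility
assert abs(lam * (1 - x)) < 1e-12   # complementary slackness
```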
Kononov, A., Zakharova, Y. Speed scaling scheduling of multiprocessor jobs with energy constraint and makespan criterion. J Glob Optim 83, 539–564 (2022). https://doi.org/10.1007/s10898-021-01115-x