Speed scaling scheduling of multiprocessor jobs with energy constraint and makespan criterion

Published in: Journal of Global Optimization

Abstract

We are given a set of parallel jobs that have to be executed on a set of speed-scalable processors whose speeds can be varied dynamically. Running a job at a slower speed is more energy-efficient, but it takes longer and degrades the performance. Each job is characterized by its processing volume and by the number or the set of processors it requires. Our objective is to minimize the maximum completion time subject to the constraint that the energy consumption does not exceed a given energy budget. For various particular cases, we propose polynomial-time approximation algorithms consisting of two stages. At the first stage, we formulate an auxiliary convex program; solving it yields the processing times of jobs and a lower bound on the makespan. At the second stage, we transform our problem into the corresponding scheduling problem with constant processor speeds and construct a feasible schedule. We also obtain an “almost exact” solution for the preemptive settings based on a configuration linear program.


References

  1. Albers, S., Müller, F., Schmelzer, S.: Speed scaling on parallel processors. Algorithmica 68(2), 404–425 (2014)

  2. Angel, E., Bampis, E., Kacem, F., Letsios, D.: Speed scaling on parallel processors with migration. In: Euro-Par 2012 Parallel Processing, pp. 128–140. Springer, Berlin (2012)

  3. Antczak, T.: Optimality and duality for nonsmooth multiobjective programming problems with V-r-invexity. J. Glob. Optim. 45(2), 319–334 (2009)

  4. Bampis, E., Kononov, A., Letsios, D., Lucarelli, G., Sviridenko, M.: Energy efficient scheduling and routing via randomized rounding. J. Sched. 21(1), 35–51 (2018)

  5. Bampis, E., Letsios, D., Lucarelli, G.: A note on multiprocessor speed scaling with precedence constraints. In: 26th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA 2014, pp. 138–142. ACM (2014)

  6. Bampis, E., Letsios, D., Milis, I., Zois, G.: Speed scaling for maximum lateness. Theory Comput. Syst. 58, 304–321 (2016)

  7. Bansal, N., Kimbrel, T., Pruhs, K.: Dynamic speed scaling to manage energy and temperature. In: 45th Annual IEEE Symposium on Foundations of Computer Science, pp. 520–529 (2004)

  8. Bansal, N., Pruhs, K.: Speed scaling to manage temperature. In: STACS 2005, pp. 460–471. Springer, Berlin (2005)

  9. Baptiste, P.: A note on scheduling multiprocessor tasks with identical processing times. Comput. Oper. Res. 30(13), 2071–2078 (2003)

  10. Blazewicz, J., Drabowski, M., Weglarz, J.: Scheduling multiprocessor tasks to minimize schedule length. IEEE Trans. Comput. 35(5), 389–393 (1986)

  11. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)

  12. Brucker, P., Knust, S., Roper, D., Zinder, Y.: Scheduling UET task systems with concurrency on two parallel identical processors. Math. Methods Oper. Res. 52(3), 369–387 (2000)

  13. Bunde, D.: Power-aware scheduling for makespan and flow. J. Sched. 12, 489–500 (2009)

  14. Coffman, E., Garey, M., Johnson, D., Lapaugh, A.: Scheduling file transfers. SIAM J. Comput. 14(3), 744–780 (1985)

  15. Dempe, S., Zemkoho, A.: On the Karush–Kuhn–Tucker reformulation of the bilevel optimization problem. Nonlinear Anal. Theory Methods Appl. 75(3), 1202–1218 (2012)

  16. Drozdowski, M.: On complexity of multiprocessor tasks scheduling. Bull. Polish Acad. Sci. Tech. Sci. 43(3), 381–392 (1995)

  17. Drozdowski, M.: Scheduling for Parallel Processing. Springer, London (2009)

  18. Du, J., Leung, J.T.: Complexity of scheduling parallel task systems. SIAM J. Discrete Math. 2(4), 472–478 (1989)

  19. Gerards, M.E.T., Hurink, J.L., Hölzenspies, P.K.F.: A survey of offline algorithms for energy minimization under deadline constraints. J. Sched. 19, 3–19 (2016)

  20. Graham, R.L.: Bounds on multiprocessing timing anomalies. SIAM J. Appl. Math. 17(2), 416–429 (1969)

  21. Grötschel, M., Lovász, L., Schrijver, A.: Geometric Algorithms and Combinatorial Optimization, 2nd corrected edn. Springer, Berlin (1993)

  22. Hoogeveen, J., van de Velde, S., Veltman, B.: Complexity of scheduling multiprocessor tasks with prespecified processor allocations. Discrete Appl. Math. 55(3), 259–272 (1994)

  23. Jansen, K., Porkolab, L.: Preemptive parallel task scheduling in O(n) + Poly(m) time. In: Algorithms and Computation, ISAAC 2000. LNCS, pp. 398–409. Springer, Berlin (2000)

  24. Jansen, K., Porkolab, L.: Preemptive scheduling on dedicated processors: applications of fractional graph coloring. J. Sched. 7(1), 35–48 (2004)

  25. Johannes, B.: Scheduling parallel jobs to minimize the makespan. J. Sched. 9(5), 433–452 (2006)

  26. Kononov, A., Kovalenko, Y.: On speed scaling scheduling of parallel jobs with preemption. In: Discrete Optimization and Operations Research, DOOR-2016. LNCS, vol. 9869, pp. 309–321. Springer (2016)

  27. Kononov, A., Kovalenko, Y.: Approximation algorithms for energy-efficient scheduling of parallel jobs. J. Sched. 23(6), 693–709 (2020)

  28. Kononov, A., Kovalenko, Y.: Makespan minimization for parallel jobs with energy constraint. In: Mathematical Optimization Theory and Operations Research, MOTOR-2020. LNCS, vol. 12095, pp. 289–300. Springer (2020)

  29. Kubale, M.: The complexity of scheduling independent two-processor tasks on dedicated processors. Inf. Process. Lett. 24(3), 141–147 (1987)

  30. Kubale, M.: Preemptive versus nonpreemptive scheduling of biprocessor tasks on dedicated processors. Eur. J. Oper. Res. 94(2), 242–251 (1996)

  31. Kuhn, H., Tucker, A.: Nonlinear programming. In: The Second Berkeley Symposium on Mathematical Statistics and Probability, pp. 481–492. University of California Press, Berkeley (1951)

  32. Kunz, K.S.: Numerical Analysis. McGraw-Hill, New York (1957)

  33. Kwon, W.C., Kim, T.: Optimal voltage allocation techniques for dynamically variable voltage processors. ACM Trans. Embed. Comput. Syst. 4(1), 211–230 (2005)

  34. Li, K.: Analysis of the list scheduling algorithm for precedence constrained parallel tasks. J. Comb. Opt. 3, 73–88 (1999)

  35. Li, K.: Energy efficient scheduling of parallel tasks on multiprocessor computers. J. Supercomput. 60, 223–247 (2012)

  36. Makarychev, K., Sviridenko, M.: Solving optimization problems with diseconomies of scale via decoupling. J. ACM 65(6), 1–27 (2018)

  37. Naroska, E., Schwiegelshohn, U.: On an on-line scheduling problem for parallel jobs. Inf. Process. Lett. 81(6), 297–304 (2002)

  38. Nesterov, Y.: Lectures on Convex Optimization. Springer, Cham (2018)

  39. Pruhs, K., van Stee, R.: Speed scaling of tasks with precedence constraints. Theory Comput. Syst. 43, 67–80 (2007)

  40. Pruhs, K., Uthaisombut, P., Woeginger, G.: Getting the best response for your erg. ACM Trans. Algorithms 4(3), 1–17 (2008)

  41. Shabtay, D., Kaspi, M.: Parallel machine scheduling with a convex resource consumption function. Eur. J. Oper. Res. 173, 92–107 (2006)

  42. Stefanini, L., Arana-Jiménez, M.: Karush–Kuhn–Tucker conditions for interval and fuzzy optimization in several variables under total and directional generalized differentiability. Fuzzy Sets Syst. 362, 1–34 (2019)

Acknowledgements

The research was supported by RSF grant 21-41-09017.

Corresponding author

Correspondence to Yulia Zakharova.


Appendix

This appendix presents a polynomial-time algorithm for solving the lower bound model (1)–(5). We consider the following two subproblems.

Subproblem (P1)

$$\begin{aligned}&\frac{1}{m}\sum _{j\in {\mathcal {J}}}{} \textit{size}_jp_j\rightarrow \min , \\&\quad \sum _{j\in {\mathcal {J}}}{} \textit{size}_jW_j^{\alpha }p_j^{1-\alpha }\le E, \\&\quad p_j\ge 0,\ j\in {\mathcal {J}}. \end{aligned}$$

Subproblem (P2)

$$\begin{aligned}&\max _{j\in {\mathcal {J}}}p_j\rightarrow \min , \\&\quad \sum _{j\in {\mathcal {J}}}{} \textit{size}_jW_j^{\alpha }p_j^{1-\alpha }\le E, \\&\quad p_j\ge 0,\ j\in {\mathcal {J}}. \end{aligned}$$

Problems (P1) and (P2) are relaxations of the problem (1)–(5). Therefore, if an optimal solution of (P1) or (P2) is a feasible solution of (1)–(5), it is also an optimal solution of (1)–(5).

Our algorithm for (1)–(5) consists of three steps. At the first two steps, problems (P1) and (P2) are solved, and at the third step, a combination of (P1) and (P2) is formed by equating the values of the objective functions.

1.1 The first step

Before solving problem (P1), we check the following condition

$$\begin{aligned} \max _{j\in {\mathcal {J}}}{W_j}\le \frac{1}{m}\sum _{j\in {\mathcal {J}}}{\textit{size}_jW_j}. \end{aligned}$$
(46)

If inequality (46) holds, then the optimal solution of problem (P1) is an optimal solution of problem (1)–(5); otherwise, go to the second step. Indeed, problem (P1) corresponds to the case when each rigid job j is replaced by \(\textit{size}_j\) single-processor jobs of the same duration \(p_j\), and it is required to schedule them on one processor. Hence, in an optimal solution of (P1) all jobs run at the same speed s, which can be found from the equation

$$\begin{aligned} \sum _{j\in {\mathcal {J}}}{} \textit{size}_jW_js^{\alpha -1}= E. \end{aligned}$$

The processing times of jobs are

$$\begin{aligned} p_i=\frac{W_i}{s}=\frac{W_i\left( \sum _{j\in {\mathcal {J}}}{} \textit{size}_jW_j\right) ^{\frac{1}{\alpha -1}}}{E^{\frac{1}{\alpha -1}}},\ i\in {\mathcal {J}}. \end{aligned}$$
(47)

The condition (46) guarantees that the inequality \(\max _{j\in {\mathcal {J}}}p_j\le \frac{1}{m}\sum _{j\in {\mathcal {J}}}{} \textit{size}_jp_j\) holds for job durations (47).
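The first step can be sketched in code as follows, under hypothetical inputs: the lists size and W, the scalars m, E and alpha > 1; step1_durations is an illustrative helper name, not from the paper.

```python
def step1_durations(size, W, m, E, alpha):
    """If condition (46) holds, return the optimal durations (47); otherwise None."""
    total = sum(s * w for s, w in zip(size, W))  # sum_j size_j * W_j
    if max(W) > total / m:
        return None  # condition (46) fails: proceed to the second step
    # The common speed s solves sum_j size_j W_j s^(alpha-1) = E,
    # so s = (E / total)^(1/(alpha-1)) and p_i = W_i / s, as in (47).
    factor = (total / E) ** (1.0 / (alpha - 1))
    return [w * factor for w in W]
```

For instance, with size = [1, 1], W = [2, 2], m = 2, E = 1 and alpha = 3, condition (46) holds and every job gets duration 4, which spends the energy budget exactly.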

1.2 The second step

Initially we check the condition

$$\begin{aligned} \sum _{j\in {\mathcal {J}}} \textit{size}_j\le m, \end{aligned}$$
(48)

which indicates that the optimal solution of problem (P2) is an optimal solution of problem (1)–(5). Problem (P2) corresponds to simultaneous execution of the jobs. Thus, all jobs should have identical processing times

$$\begin{aligned} p_i=\frac{\left( \sum _{j\in {\mathcal {J}}}{} \textit{size}_jW^{\alpha }_j \right) ^{\frac{1}{\alpha -1}}}{E^{\frac{1}{\alpha -1}}},\ i\in {\mathcal {J}}. \end{aligned}$$
(49)

The condition (48) guarantees that \(\frac{1}{m}\sum _{j\in {\mathcal {J}}}{} \textit{size}_j p_j \le \max _{j\in {\mathcal {J}}}p_j\) for job durations (49). However, if the total number of processors required by all jobs is more than m, we go to the third step.
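The second step admits a similar sketch (same hypothetical inputs as before; step2_durations is an illustrative name):

```python
def step2_durations(size, W, m, E, alpha):
    """If condition (48) holds, return the identical durations (49); otherwise None."""
    if sum(size) > m:
        return None  # condition (48) fails: proceed to the third step
    # All jobs run simultaneously with one duration p, which solves
    # sum_j size_j W_j^alpha p^(1-alpha) = E, giving formula (49).
    num = sum(s * w ** alpha for s, w in zip(size, W))
    p = (num / E) ** (1.0 / (alpha - 1))
    return [p] * len(W)
```

With size = [1, 1], W = [1, 2], m = 2, E = 9 and alpha = 3, both jobs fit on the m processors at once and receive the common duration 1.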

1.3 The third step

If neither inequality (46) nor inequality (48) is satisfied, according to the Karush–Kuhn–Tucker necessary and sufficient conditions (see the last section in “Appendix”), the following equality should hold for an optimal solution of problem (1)–(5):

$$\begin{aligned} \frac{1}{m}\sum _{j\in {\mathcal {J}}}{} \textit{size}_j p_j = \max _{j\in {\mathcal {J}}}p_j. \end{aligned}$$

As a result we have problem (P3):

$$\begin{aligned}&\sum _{j\in {\mathcal {J}}}{} \textit{size}_jp_j\rightarrow \min , \\&\quad \max _{j\in {\mathcal {J}}}p_j - \frac{1}{m}\sum _{j\in {\mathcal {J}}}{} \textit{size}_jp_j=0, \\&\quad \sum _{j\in {\mathcal {J}}}{} \textit{size}_jW_j^{\alpha }p_j^{1-\alpha }\le E, \\&\quad p_j\ge 0,\ j\in {\mathcal {J}}. \end{aligned}$$

Solving (P3) begins with the optimal solution of (P1), where the processing times of jobs are equal to

$$\begin{aligned} p_i=\frac{W_i\left( \sum _{j\in {\mathcal {J}}}{} \textit{size}_jW_j\right) ^{\frac{1}{\alpha -1}}}{E^{\frac{1}{\alpha -1}}},\ i\in {\mathcal {J}}. \end{aligned}$$

As we can see, the job durations \(p_j\) are proportional to the works \(W_j\). We order the jobs by non-increasing works, \(W_1\ge W_2\ge \cdots \ge W_n\). Let l denote the job with the smallest work such that

$$\begin{aligned} W_l> \frac{1}{m}\sum _{j\in {\mathcal {J}}}{} \textit{size}_j W_j. \end{aligned}$$
(50)

This condition is equivalent to the condition \(p_l> \frac{1}{m}\sum _{j\in {\mathcal {J}}}{} \textit{size}_j p_j\).

Since the objective of (P3) is a linear function, a necessary condition for an optimal solution of (P3) is

$$\begin{aligned} p_j= \frac{1}{m}\sum _{i\in {\mathcal {J}}}{} \textit{size}_i p_i,\ j=1,\dots ,l. \end{aligned}$$

Then we formulate a new problem (P4):

$$\begin{aligned}&\sum _{j\in {\mathcal {J}}}{} \textit{size}_jp_j\rightarrow \min , \\&\quad p_j = \frac{1}{m}\sum _{i\in {\mathcal {J}}}{} \textit{size}_ip_i,\ j=1,\dots ,l, \\&\quad \sum _{j\in {\mathcal {J}}}{} \textit{size}_jW_j^{\alpha }p_j^{1-\alpha }\le E, \\&\quad p_j\ge 0,\ j\in {\mathcal {J}}. \end{aligned}$$

To find an optimal solution of problem (P4), we use the Lagrangian method, constructing the Lagrangian function

$$\begin{aligned} L(p_j,\lambda _j)= & {} \sum _{j\in {\mathcal {J}}}{} \textit{size}_jp_j+ \sum _{j=1}^l \lambda _j \left\{ p_j- \sum _{i\in {\mathcal {J}}}\frac{\textit{size}_ip_i}{m} \right\} + \\&\quad \lambda _{l+1}\left( \sum _{j\in {\mathcal {J}}}{} \textit{size}_jW_j^{\alpha }p_j^{1-\alpha }- E\right) \end{aligned}$$

We calculate the partial derivatives, equate them to zero and find the processing times of jobs

$$\begin{aligned} p_j= & {} \frac{\sum _{i=l+1}^{n}{} \textit{size}_iW_i}{(m-\sum _{i=1}^l \textit{size}_i)}C,\ j=1,\dots ,l, \end{aligned}$$
(51)
$$\begin{aligned} p_j= & {} W_jC,\ j=l+1,\dots ,n, \end{aligned}$$
(52)

where

$$\begin{aligned} C=\left( \frac{E}{\sum _{j=1}^l\frac{W_j^{\alpha }{} \textit{size}_j (\sum _{i=l+1}^{n}W_i \textit{size}_i)^{1-\alpha } }{(m-\sum _{i=1}^l \textit{size}_i)^{1-\alpha }} +\sum _{j=l+1}^nW_j \textit{size}_j} \right) ^{\frac{1}{1-\alpha }}. \end{aligned}$$

Note that \(m>\sum _{i=1}^l \textit{size}_i\) due to condition (50) for choosing index l.

If the following inequality is satisfied for the obtained durations of jobs \(l+1,\dots ,n\)

$$\begin{aligned} \max _{j=l+1,\dots ,n}p_j\le \frac{1}{m}\sum _{j\in {\mathcal {J}}}{} \textit{size}_j p_j, \end{aligned}$$
(53)

then we have an optimal solution of problem (1)–(5). Otherwise, we solve problem (P4) again with a new value of l.

Using the durations of jobs (51), (52), inequality (53) can be rewritten in the following form:

$$\begin{aligned} \max _{j=l+1,\dots ,n}W_j=W_{l+1}\le \frac{1}{(m-\sum _{i=1}^l \textit{size}_i)}\sum _{j=l+1}^{n}{} \textit{size}_j W_j. \end{aligned}$$
(54)

So, instead of solving problem (P4) several times, we only need to find the value of l for which condition (54) is satisfied. This can be done in \(O(\min \{n,m\})\) steps. After that, we use formulas (51), (52) to calculate the processing times and obtain an optimal solution of the initial problem (1)–(5).
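The third step can be sketched as follows, assuming the jobs are pre-sorted by non-increasing works and that neither (46) nor (48) holds (step3_durations is an illustrative name; it scans for the l of condition (54) and then applies formulas (51), (52) with the constant C):

```python
def step3_durations(size, W, m, E, alpha):
    """Jobs are assumed sorted so that W[0] >= W[1] >= ... >= W[n-1]."""
    n = len(W)
    l = 1
    # Find the smallest l satisfying condition (54):
    # W_{l+1} * (m - sum_{i<=l} size_i) <= sum_{j>l} size_j * W_j.
    while l < n:
        used = sum(size[:l])
        rest = sum(size[j] * W[j] for j in range(l, n))
        if W[l] * (m - used) <= rest:
            break
        l += 1
    used = sum(size[:l])
    rest = sum(size[j] * W[j] for j in range(l, n))
    # The constant C comes from the energy constraint, as in the closed form above.
    denom = sum(size[j] * W[j] ** alpha * (rest / (m - used)) ** (1 - alpha)
                for j in range(l)) + rest
    C = (E / denom) ** (1.0 / (1 - alpha))
    p_long = rest / (m - used) * C        # equalized durations (51) of jobs 1..l
    return [p_long] * l + [W[j] * C for j in range(l, n)]  # durations (52)
```

In the hypothetical instance size = [1, 1, 1], W = [10, 1, 1], m = 2, alpha = 2, E = 52, neither (46) nor (48) holds; the sketch finds l = 1 and durations [2, 1, 1], for which the makespan bound max p_j = (1/m) Σ size_j p_j = 2 is tight.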

At each of the three steps, the formulas for the processing times of jobs involve exponentiation and root extraction. Therefore \(p_j,\ j\in {\mathcal {J}},\) may be irrational; they can be computed to any desired accuracy \(\varepsilon '>0\) in \(O\left( n\log \left( \frac{1}{\varepsilon '} n m W_{\max }\right) \right) \) time, where \(W_{\max }=\max _{j\in {\mathcal {J}}}W_j\) (see Newton’s method or the dichotomy method in [32]). If we set \(\varepsilon '=\frac{\varepsilon }{n}\), then we obtain an \(OPT+\varepsilon \) solution of problem (1)–(5), since the objective is the weighted sum of job durations.
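As a sketch of the dichotomy (bisection) approach mentioned above, the root \(r=a^{1/(\alpha -1)}\) can be bracketed and halved down to accuracy \(\varepsilon '\) in the stated logarithmic number of iterations; root_bisect is an illustrative helper, not code from [32].

```python
def root_bisect(a, alpha, eps):
    """Find r >= 0 with r^(alpha-1) = a, assuming a >= 0 and alpha >= 2."""
    lo, hi = 0.0, max(a, 1.0)  # r <= max(a, 1) whenever alpha - 1 >= 1
    while hi - lo > eps:       # O(log(max(a, 1) / eps)) halvings
        mid = (lo + hi) / 2
        if mid ** (alpha - 1) < a:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, root_bisect(8.0, 4, 1e-9) converges to 2, since 2^3 = 8.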

1.4 Karush–Kuhn–Tucker conditions

Consider the following convex program:

$$\begin{aligned}&f(x) \rightarrow \min \\&\quad g_i(x)\le 0,\ i=1,\dots ,m, \\&\quad x\in R^n, \end{aligned}$$

where all functions \(g_i\) are differentiable, and there exists a point \(x'\) such that \(g_i(x')<0\) for all \(i=1,\dots ,m\) (Slater’s condition).

Let \(\lambda _i\) be the dual variable associated with the constraint \(g_i(x)\le 0\). The Karush–Kuhn–Tucker conditions are:

$$\begin{aligned} g_i(x)\le 0,\ i= & {} 1,\dots ,m, \\ \lambda _i\ge 0,\ i= & {} 1,\dots ,m, \\ \lambda _i g_i(x)= 0,\ i= & {} 1,\dots ,m, \\ \nabla L(x,\lambda )= & {} 0, \end{aligned}$$

where \(L(x,\lambda )=f(x)+\sum _{i=1}^m \lambda _i g_i(x)\) is the Lagrangian function. The Karush–Kuhn–Tucker conditions are necessary and sufficient for \(x\in R^n\) and \(\lambda \in R^m\) to be primal and dual optimal [11].
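These conditions are easy to verify numerically on a toy convex program; the instance below (minimize (x − 2)^2 subject to x ≤ 1) is purely illustrative and not from the paper.

```python
# Toy convex program: f(x) = (x - 2)^2, g(x) = x - 1 <= 0.
# The minimizer is x* = 1 with multiplier lambda* = 2.
x, lam = 1.0, 2.0
grad_f = 2 * (x - 2)               # f'(x) at x = 1 is -2
grad_g = 1.0                       # g'(x)
assert x - 1 <= 0                  # primal feasibility
assert lam >= 0                    # dual feasibility
assert lam * (x - 1) == 0          # complementary slackness
assert grad_f + lam * grad_g == 0  # stationarity: grad L(x, lambda) = 0
```

Since all four conditions hold and the program is convex with a Slater point (e.g. x' = 0), the pair (x*, λ*) is primal and dual optimal.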

Cite this article

Kononov, A., Zakharova, Y. Speed scaling scheduling of multiprocessor jobs with energy constraint and makespan criterion. J Glob Optim 83, 539–564 (2022). https://doi.org/10.1007/s10898-021-01115-x
