Speed scaling on parallel processors with migration

We study the problem of scheduling a set of jobs with release dates, deadlines and processing requirements (or works) on parallel speed-scalable processors so as to minimize the total energy consumption. We consider that both preemptions and migrations of jobs are allowed. For this problem, there exists an optimal polynomial-time algorithm which uses as a black box an algorithm for linear programming. Here, we formulate the problem as a convex program and we propose a combinatorial polynomial-time algorithm which is based on finding maximum flows. Our algorithm runs in O(nf(n) log U) time, where n is the number of jobs, U is the range of all possible values of the processors' speeds divided by the desired accuracy, and f(n) is the time needed for computing a maximum flow in a layered graph with O(n) vertices.


Introduction
Energy consumption is a major issue in our days. Great efforts are devoted to the reduction of energy dissipation in computing environments ranging from small portable devices to large data centers. From an algorithmic point of view, new challenging optimization problems are studied, in which the energy consumption is taken into account as a constraint or as the optimization goal itself (for recent reviews see [1,2]). This latter approach has been adopted in the seminal paper of Yao et al. [15], where a set of independent jobs with release dates and deadlines have to be scheduled on a single processor so that the total energy is minimized, under the so-called speed-scaling model where the processor may run at variable speeds. Under this model, if the speed of a processor is s then the power consumption is s^α, where α > 1 is a constant, and the energy consumption is the power integrated over time.
Single processor case. Yao et al. proposed in [15] an optimal off-line algorithm, known as the YDS algorithm according to the initials of the authors, for the problem with preemption, i.e. where the execution of a job may be interrupted and resumed later on. In the same work, they initiated the study of online algorithms for the problem, introducing the Average Rate (AVR) and the Optimal Available (OA) algorithms. Bansal et al. [6] proposed a new online algorithm, the BKP algorithm according to the authors' initials, which improves the competitive ratio of OA for large values of α.
Multiprocessor case. There are two variants of the model: the first variant allows the preemption of the jobs but not their migration. We call this variant the non-migratory variant. This means that a job may be interrupted and resumed later on the same processor, but it is not allowed to continue its execution on a different processor. In the second variant, the migratory variant, both the preemption and the migration of the jobs are allowed. In [5], Albers et al. considered the non-migratory problem of minimizing the total energy consumption given that the jobs have release dates and deadlines. For unit-work jobs, they proposed a polynomial-time algorithm when the deadlines of the jobs are agreeable. When the release dates and deadlines of the jobs are arbitrary, they proved that the problem becomes NP-hard even for unit-work jobs and proposed approximation algorithms with constant approximation ratios for the off-line version of the problem. A generic reduction is given by Greiner et al. (see [11]) transforming a β-approximation algorithm for the single-processor problem into a βB_α-approximation algorithm for the multiprocessor non-migratory problem, where B_α is the α-th Bell number. Also, they showed that a β-approximation for multiple processors with migration yields a deterministic βB_α-approximation algorithm for multiple processors without migration.
For the migratory variant, Chen et al., in [10], were the first to study the speed-scaling problem of minimizing the energy consumption on m processors with migration. In fact, they proposed a simple algorithm for the case where jobs have common release dates and deadlines. In [8], Bingham and Greenstreet proposed a polynomial-time algorithm for the general problem where each job has an arbitrary work, a release date and a deadline, and the power function is any convex function. Their algorithm is based on the use of the Ellipsoid method (see [13]). Since the Ellipsoid algorithm is not used in practice, it was an open problem to define a faster combinatorial algorithm. While preparing the current version of this paper, it came to our knowledge that Albers et al. [3] considered the same problem and presented an optimal combinatorial algorithm whose running time is O(n²f(n)), where n is the number of jobs and f(n) the complexity of finding a maximum flow in a layered graph with O(n) vertices. Notice that in [3], nothing is mentioned about the exact complexity of the algorithm, except, of course, its polynomiality. They also extended the analysis of the single-processor OA and AVR online algorithms to the multiprocessor case with migration.
Multicriteria minimization. In general, minimizing the energy consumption conflicts with increasing the performance of many computing devices. Hence, a series of papers addresses this problem in a multicriteria context. In [14], Pruhs et al. were the first to study the problem of optimizing a time-related objective function with a budget of energy. Their objective was to minimize the sum of flow times, and they presented a polynomial-time algorithm for the case of unit-work jobs. To prove that their algorithm is optimal, they formulated the problem as a convex program and applied the well-known Karush-Kuhn-Tucker (KKT) conditions to obtain necessary conditions for optimality. In [4], Albers and Fujiwara studied the problem of minimizing the sum of flow times plus energy instead of having an energy budget, which gives rise to an alternative way of combining the optimization of two conflicting criteria. For unit-work jobs, they proposed online algorithms and an exact polynomial-time algorithm. In [9], Chan et al. proposed an online algorithm to minimize the energy consumption and, among the schedules with the minimum energy, they tried to find the one with the maximum throughput. Assuming that there is an upper bound on the processor's speed, they established constant-factor competitiveness both in terms of energy and throughput.
Our contribution and organization of the paper. We consider the multiprocessor migratory scheduling problem with the objective of minimizing the energy consumption. In Section 3, we give the first convex programming formulation of the problem and in Section 4, we apply, for the first time, the well-known KKT conditions. In this way, we obtain a set of properties that need to be satisfied by any optimal schedule. Then, in Section 5, we propose an optimal algorithm for the case where the jobs have release dates and deadlines and the power function is of the form s^α. The time complexity of our algorithm, which we call BAL, is in O(nf(n) log P), where n is the number of jobs, P is the range of all possible values of the processors' speeds divided by the desired accuracy, and f(n) is the complexity of computing a maximum flow in a layered graph with O(n) vertices. We also give a brief description of the relation of our algorithm to the one of Albers et al. [3], as well as the analysis of their algorithm's complexity. Finally, in Section 6, we extend BAL to obtain an optimal algorithm for the problem of makespan minimization with a budget of energy.

Preliminaries
Let J = {j_1, ..., j_n} be a set of jobs. Each job j_i is specified by a work w_i, a release date r_i and a deadline d_i. We define span_i = [r_i, d_i] and we say that j_i is alive at time t if t ∈ span_i. We also define the density of job j_i as den_i = w_i/(d_i − r_i). We assume a set of m variable-speed homogeneous processors, in the sense that they can all, dynamically, change their speeds and have a common speed-to-power function P(t) = s(t)^α, where P(t) is the power consumption at time t, s(t) is the speed (or frequency) at time t and α > 1 is a constant. Consider any interval of time [a, b] and a given processor. The amount of work processed by this processor and its energy consumption during [a, b] are ∫_a^b s(t)dt and ∫_a^b s(t)^α dt, respectively. Hence, if a job is continuously run at a constant speed s during an interval of length ℓ, then w = s·ℓ units of work are completed and an amount of E = s^α·ℓ units of energy is consumed. In our setting, preemption and migration of jobs are allowed. That is, the processing of a job may be suspended and resumed later on the same processor or on a different one. Nevertheless, we do not allow parallel execution of a job, which means that a job cannot be run simultaneously on two or more processors. We also assume that a continuous spectrum of speeds is available and that there is no upper bound on the speed of any processor. Our objective is to find a feasible schedule that minimizes the total energy consumed by all processors.
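As a quick numeric illustration of the constant-speed formulas above (an example of ours, not part of the paper): by the convexity of s ↦ s^α, completing a fixed amount of work within a fixed time window at a constant speed never costs more energy than varying the speed inside the window.

```python
# Compare a constant speed against splitting the window into two
# halves run at different speeds that complete the same total work.
alpha = 3.0          # any constant alpha > 1
w, ell = 10.0, 2.0   # total work and window length

# Constant speed: s = w / ell, energy E = s**alpha * ell.
E_const = (w / ell) ** alpha * ell

# Varied speeds: s1 on the first half, s2 on the second half,
# with s2 chosen so that the same total work w is completed.
s1 = 3.0
s2 = (w - s1 * ell / 2) / (ell / 2)
E_varied = s1 ** alpha * (ell / 2) + s2 ** alpha * (ell / 2)

assert abs(s1 * ell / 2 + s2 * ell / 2 - w) < 1e-9   # same work done
assert E_const <= E_varied                           # convexity wins
print(E_const, E_varied)
```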
We define T = {t_0, ..., t_L} to be the set of release dates and deadlines taken in nondecreasing order and without duplication. It is clear that t_0 = min_{j_i∈J}{r_i} and t_L = max_{j_i∈J}{d_i}. Let I_j = [t_{j−1}, t_j], for 1 ≤ j ≤ L, and I = {I_1, ..., I_L}. We denote by |I_j| the length of the interval I_j. Also, let A(j) be the set of jobs that are alive during I_j, i.e. all the jobs j_i with I_j ⊆ span_i, and let a_j = |A(j)| be the number of jobs in A(j). Given any schedule S, we denote by t_{i,j} the total units of time that job j_i is processed during the interval I_j by S. As already mentioned in many other works (see [15] for example), one can show, through a simple exchange argument based on the convexity of the power function, that there always exists an optimal schedule in which every job j_i is run at a constant speed s_i.
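The interval structure defined above can be sketched in a few lines (a sketch of ours; variable names are not the paper's):

```python
# Build T (sorted release dates and deadlines without duplication),
# the elementary intervals I_1..I_L, and the alive sets A(j).
jobs = [            # (work w_i, release r_i, deadline d_i)
    (4.0, 0, 2),
    (2.0, 1, 3),
    (6.0, 0, 3),
]

T = sorted({t for _, r, d in jobs for t in (r, d)})
intervals = list(zip(T, T[1:]))    # I_j = [t_{j-1}, t_j]
alive = [                          # A(j): jobs j_i with I_j within span_i
    [i for i, (_, r, d) in enumerate(jobs) if r <= a and b <= d]
    for (a, b) in intervals
]

print(intervals)   # [(0, 1), (1, 2), (2, 3)]
print(alive)       # [[0, 2], [0, 1, 2], [1, 2]]
```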
Next, we state a variation of our problem that we will need throughout our analysis; we call it the Work Assignment Problem (or WAP) and it can be described as follows. Consider a set J of n jobs. Each job can be alive in one or more intervals in I. During each interval I_j there are m_j available processors. Moreover, we are given a value v. Our objective is to decide whether or not there is a feasible schedule that executes all jobs in J with constant speed v. Recall that a schedule is feasible if and only if each job is executed during its alive intervals and is executed by at most one processor at each time t. Preemption and migration of jobs are allowed. Note that the WAP is almost the problem P|r_i, d_i, pmtn|− (see [7]), with the difference that, in the WAP, not all intervals have the same number of available processors. Therefore, the WAP is polynomially solvable by applying a variant of an algorithm for P|r_i, d_i, pmtn|−.

Convex Programming Formulation
Our problem can be formulated as the following convex program:

minimize Σ_{j_i∈J} w_i s_i^{α−1}  (1)
subject to:
s_i · Σ_{I_j⊆span_i} t_{i,j} ≥ w_i, for each j_i ∈ J  (2)
Σ_{j_i∈A(j)} t_{i,j} ≤ m|I_j|, for each I_j ∈ I  (3)
Σ_{j_i∈A(j)} t_{i,j} ≤ a_j|I_j|, for each I_j ∈ I  (4)
t_{i,j} ≤ |I_j|, for each I_j ∈ I and j_i ∈ A(j)  (5)
t_{i,j} ≥ 0, for each I_j ∈ I and j_i ∈ A(j)  (6)
s_i ≥ 0, for each j_i ∈ J  (7)

Note that the total running time and the total energy consumption of each job j_i are w_i/s_i and w_i s_i^{α−1}, respectively. Then, the term (1) is the total energy consumed by all jobs, which is our objective function, and the constraints (2) enforce that w_i units of work must be executed for each job j_i. The constraints (3) enforce that we can use at most m processors for |I_j| units of time during any interval I_j. Also, we can use at most a_j processors operating for |I_j| units of time during any interval I_j, since otherwise we would have parallel execution of a job, and this is expressed by (4). The constraints (5) prevent any job j_i from being executed for more than |I_j| units of time during any interval I_j ⊆ span_i. Note that constraints (4) and (5) are both needed and neither is covered by the other. The constraints (6) and (7) ensure the nonnegativity of the variables t_{i,j} and s_i, respectively.
The above mathematical program is indeed convex because, as mentioned in other works (e.g. [14]), the objective function and the first constraint are convex while all the other constraints are linear. Since our problem can be written as a convex program, it can be solved in polynomial time by applying the Ellipsoid Algorithm [13]. Nevertheless, the Ellipsoid Algorithm is not used in practice, and we would like to construct a faster and less complicated combinatorial algorithm.
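The convexity can also be checked numerically (this is our reformulation, not the paper's program): writing τ_i for the total execution time of job j_i, its speed is s_i = w_i/τ_i, so its energy w_i s_i^{α−1} equals w_i^α τ_i^{1−α}, which is convex in τ_i for any α > 1.

```python
# Midpoint-convexity check of the per-job energy as a function of its
# total execution time tau: E(tau) = w**alpha * tau**(1 - alpha).
alpha, w = 2.5, 7.0
energy = lambda tau: w ** alpha * tau ** (1 - alpha)

for a, b in [(0.5, 4.0), (1.0, 9.0), (2.0, 2.5)]:
    mid = (a + b) / 2
    assert energy(mid) <= (energy(a) + energy(b)) / 2   # convexity holds
```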
At this point, notice that once the speeds of the jobs are computed by solving the convex program, a further step is needed in order to construct a feasible schedule. This is exactly the feasibility problem P|r_i, d_i, pmtn|−.

KKT Conditions
We apply the KKT conditions to the above convex program to obtain necessary conditions for the optimality of a feasible schedule. We also show that these conditions are sufficient for optimality.
Assume that we are given a convex program of the form

minimize f(x)
subject to: g_i(x) ≤ 0, for 1 ≤ i ≤ m

Suppose that the program is strictly feasible, i.e. there is a point x such that g_i(x) < 0 for all 1 ≤ i ≤ m, and that all functions g_i are differentiable. Let λ_i be the dual variable associated with the constraint g_i(x) ≤ 0. The Karush-Kuhn-Tucker (KKT) conditions are:

g_i(x) ≤ 0, for 1 ≤ i ≤ m
λ_i ≥ 0, for 1 ≤ i ≤ m
λ_i g_i(x) = 0, for 1 ≤ i ≤ m
∇f(x) + Σ_{i=1}^m λ_i ∇g_i(x) = 0

The KKT conditions are necessary and sufficient for solutions x ∈ R^n and λ ∈ R^m to be primal and dual optimal. We refer to the above conditions as the primal feasibility, dual feasibility, complementary slackness and stationarity conditions, respectively.
The following lemma is a direct consequence of the KKT conditions for the convex program of our problem.

Lemma 1 A feasible schedule for our problem is optimal if and only if it satisfies the following properties:
1. Each job j_i is executed at a constant speed s_i.
2. If a job j_i is not executed during an interval I_j ⊆ span_i, i.e. t_{i,j} = 0, then s_i ≤ s_k for every job j_k with I_j ⊆ span_k and t_{k,j} > 0.
3. If a job j_i has t_{i,j} = |I_j| for an interval I_j, then s_i ≥ s_k for any job j_k alive during I_j with t_{k,j} < |I_j|.
4. All jobs j_i that are alive during I_j with 0 < t_{i,j} < |I_j| have equal speeds.
5. If a_j ≤ m during an interval I_j, then t_{i,j} = |I_j| for every j_i with I_j ⊆ span_i.
Proof: In order to apply the KKT conditions, we need to associate a dual variable with each constraint. Therefore, with each set of constraints from (2) up to (7) we associate the dual variables β_i, γ_j, δ_j, ε_{i,j}, ζ_{i,j} and η_i, respectively.
By the stationarity conditions, we obtain an equation relating the gradient of the objective function to the gradients of the constraints, which can be rewritten equivalently in terms of the dual variables defined above. Furthermore, the complementary slackness conditions yield the equations referenced below. We can safely assume that there are no jobs with zero work, because we may treat such jobs as if they did not exist. So, for any job j_i it holds that s_i > 0 and Σ_{I_j⊆span_i} t_{i,j} > 0. Then, (14) implies that η_i = 0. We set the coefficients of the partial derivatives ∇s_i and ∇t_{i,j} equal to zero so as to satisfy the stationarity conditions. Thus, (8) gives that β_i = (α − 1)s_i^α for each job j_i ∈ J and each I_j ⊆ span_i. Now, for each interval I_j, we distinguish the following cases:
Case 1: a_j > m. In this case, it is obvious that all processors operate during the whole interval in any optimal schedule. Because of (11), δ_j = 0. We consider the following subcases on the execution time of any job j_i ∈ A(j):
Subcase A: 0 < t_{i,j} < |I_j|. The stationarity conditions (12) and (13) imply that ε_{i,j} = ζ_{i,j} = 0. As a result, (15) can be written as (α − 1)s_i^α = γ_j. The variable γ_j is specific to each interval and, as a result, all jobs of this subcase have the same speed. We denote this speed by v_j for each interval I_j.

Subcase B: t_{i,j} = |I_j|. In this case, by (13) and (15), we get that (α − 1)s_i^α = γ_j + ε_{i,j} ≥ γ_j. Hence, all jobs of this kind have s_i ≥ v_j.
Case 2: a_j < m. In this case, each job in A(j) is executed throughout the whole interval I_j in every optimal schedule. This argument comes from the convexity of the speed-to-power function. Therefore, each job j_i ∈ A(j) has ζ_{i,j} = 0. Moreover, since fewer than m processors are used, we have that γ_j = 0. That is, for each j_i ∈ A(j) we have (α − 1)s_i^α = δ_j + ε_{i,j}. From this set of equations, we cannot establish any strong relation between the speeds of the jobs that are alive during an interval I_j.
Case 3: a_j = m. This case can be handled exactly as the previous one, with the difference that γ_j ≥ 0, and thus we get that (α − 1)s_i^α = γ_j + δ_j + ε_{i,j}.
Given a solution of the convex program that satisfies the KKT conditions, we derived some relations between the primal variables. Based on them, we defined some structural properties of any optimal schedule. These properties are necessary for optimality; we now show that they are also sufficient, i.e. any schedule that satisfies them is optimal.
Assume, for the sake of contradiction, that there is a schedule A which satisfies the properties of Lemma 1 but is not optimal, and let B be an optimal schedule. We denote by E^X, s_i^X and t_{i,j}^X the energy consumption, the speed of job j_i and the total execution time of job j_i during the interval I_j in schedule X, respectively. Then, E^A > E^B. Let S be the set of jobs j_i with s_i^A > s_i^B. Clearly, there is at least one job j_k such that s_k^A > s_k^B, otherwise A would not consume more energy than B; so S ≠ ∅. By the definition of S, each job j_i ∈ S satisfies Σ_{I_j⊆span_i} t_{i,j}^A = w_i/s_i^A < w_i/s_i^B = Σ_{I_j⊆span_i} t_{i,j}^B. Hence, there is at least one interval I_p such that Σ_{j_i∈S} t_{i,p}^A < Σ_{j_i∈S} t_{i,p}^B. This gives that t_{k,p}^A < t_{k,p}^B for some job j_k ∈ S. Thus, t_{k,p}^A < |I_p| and t_{k,p}^B > 0. If we consider any interval I_j, the sum of the processing times of all jobs in I_j is the same for all schedules satisfying Lemma 1. So, there must be a job j_l ∉ S with t_{l,p}^A > t_{l,p}^B, i.e. t_{l,p}^A > 0 and t_{l,p}^B < |I_p|. Applying the properties of Lemma 1 to A (where t_{k,p}^A < |I_p| and t_{l,p}^A > 0) and to B (where t_{k,p}^B > 0 and t_{l,p}^B < |I_p|) yields s_l^A ≥ s_k^A > s_k^B ≥ s_l^B, which implies j_l ∈ S, a contradiction. Notice that Lemma 1 does not explain how to find an optimal schedule. The basic reason is that it does not determine the speed value of each job. Moreover, it does not specify exactly the structure of the optimal schedule; that is, it does not specify which job is executed by each processor at each time t.

An Optimal Combinatorial Algorithm
In this section, we propose a combinatorial algorithm for our problem which always constructs a schedule satisfying the properties stated in the previous section. Our algorithm is based on the notion of critical jobs defined below. The basic idea is to continuously decrease the speeds of the jobs step by step. At each step, we assign a speed to the critical jobs, which we ignore in the subsequent steps, and we continue with the remaining subset of jobs. At the end of the last step, every job has been assigned a speed. In order to recognize the critical jobs, we consider a reduction to the Work Assignment Problem (WAP).
Let us first give some notation and definitions concerning the maximum flow and minimum cut problems. Consider a graph G = (V, E) in which each edge (u, v) has a capacity c(u, v), and two nodes s, t ∈ V. An (s, t)-cut of G is a partition of its nodes into two disjoint subsets X and Y such that if we remove the edges (u, w) with u ∈ X and w ∈ Y, the nodes s and t are disconnected, i.e. there is no path from s to t. A minimum (s, t)-cut (X, Y) is a cut that minimizes the sum of the capacities of the edges (u, w) with u ∈ X and w ∈ Y. In the following, we will identify an (s, t)-cut with this set of edges. Also, given an (s, t)-flow of a graph G = (V, E), we write f(e) for the amount of flow that passes through the edge e ∈ E.
Given a graph G and a flow F, we define the residual graph G_f of G with respect to F as follows: (i) G_f has the same set of nodes as G, (ii) for each edge (u, v) in G with f(u, v) < c(u, v), we include the edge (u, v) with capacity c(u, v) − f(u, v), and (iii) for each edge (u, v) with f(u, v) > 0, we include the edge (v, u) with capacity f(u, v). Next, we define the notion of upstream nodes, which we will need throughout our analysis. A node v is upstream if, for all minimum (s, t)-cuts (X, Y), v belongs to X; that is, v lies on the source side of every minimum cut. Now, for each instance of the WAP, we define a graph so as to reduce our original problem to the maximum flow problem. Given an instance ⟨J, I, v⟩ of the WAP, consider the graph G = (V, E) that contains one node x_i for each job j_i, one node y_j for each interval I_j, a source node s and a destination node t. We introduce an edge (s, x_i) with capacity w_i/v for each j_i ∈ J, an edge (x_i, y_j) with capacity |I_j| for each pair of j_i and I_j such that j_i ∈ A(j), and an edge (y_j, t) with capacity m_j|I_j| for each interval I_j ∈ I. We say that this is the corresponding graph of ⟨J, I, v⟩.
At this point, we are ready to introduce the notion of criticality. Given a feasible instance of the WAP, we say that a job j_i is critical if and only if, for any feasible schedule and for each I_j ⊆ span_i, either t_{i,j} = |I_j| or Σ_{j_i∈A(j)} t_{i,j} = m_j|I_j|. Furthermore, we say that an instance ⟨J, I, v⟩ of the WAP is critical if and only if v is the minimum speed such that the set of jobs J can be feasibly executed over the intervals in I. With respect to the graph G, a job j_i is critical if and only if, for any maximum flow, either the edge (x_i, y_j) or the edge (y_j, t) is saturated for each I_j such that j_i ∈ A(j). Notice that a critical job j_i is also critical for the instance ⟨J, I, v − ε⟩, for any ε > 0.

Properties of the Work Assignment Problem
Next, we prove some lemmas that will guide us to an optimal algorithm. Our algorithm will be based on a reduction of our problem to the maximum flow problem, which is a consequence of the following lemma.
Lemma 2 [7] There exists a feasible schedule for the Work Assignment Problem if and only if the corresponding graph has a maximum (s, t)-flow equal to Σ_{i=1}^n w_i/v.
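Lemma 2 reduces the WAP to a single maximum-flow computation on the corresponding graph. The sketch below is ours, not the paper's: it uses a plain Edmonds-Karp maximum flow rather than a specialized layered-graph algorithm (so it does not attain the f(n) bound), and all names are illustrative.

```python
from collections import deque

def max_flow(n, edges, s, t):
    # Edmonds-Karp on an n-node graph given as (u, v, capacity) triples.
    cap = [[0.0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
    flow = 0.0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        # Bottleneck capacity along the path, then augment.
        b, v = float("inf"), t
        while v != s:
            b = min(b, cap[parent[v]][v]); v = parent[v]
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= b; cap[v][u] += b
            v = u
        flow += b

def wap_feasible(jobs, intervals, alive, m, speed):
    # Lemma 2: <J, I, v> is feasible iff the max (s,t)-flow equals
    # sum_i w_i / v.  Nodes: 0 = source, 1..n = jobs,
    # n+1..n+L = intervals, n+L+1 = sink.
    n, L = len(jobs), len(intervals)
    src, snk = 0, n + L + 1
    edges = []
    for i, (w, _, _) in enumerate(jobs):
        edges.append((src, 1 + i, w / speed))            # (s, x_i): w_i/v
    for j, (a, b) in enumerate(intervals):
        for i in alive[j]:
            edges.append((1 + i, 1 + n + j, b - a))      # (x_i, y_j): |I_j|
        edges.append((1 + n + j, snk, m[j] * (b - a)))   # (y_j, t): m_j|I_j|
    need = sum(w for w, _, _ in jobs) / speed
    return abs(max_flow(snk + 1, edges, src, snk) - need) < 1e-9

# Toy instance: three jobs, three unit intervals, two processors each.
jobs = [(4.0, 0, 2), (2.0, 1, 3), (6.0, 0, 3)]       # (w_i, r_i, d_i)
intervals = [(0, 1), (1, 2), (2, 3)]
alive = [[0, 2], [0, 1, 2], [1, 2]]
print(wap_feasible(jobs, intervals, alive, [2, 2, 2], 4.0))   # feasible
print(wap_feasible(jobs, intervals, alive, [2, 2, 2], 1.0))   # too slow
```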
At this point, we state a claim concerning the upstream nodes that we will need in one of the proofs that follow. For completeness, we present a proof, which can also be found in [12].
Claim 1 [12] The set of upstream nodes is reachable from the source node s in the residual graph of any maximum flow, and therefore they can be found by performing a breadth-first search (BFS) starting from s.

Proof:
Let (X, Y) be the cut found by performing a BFS on the residual graph G_f, starting from the source s, at the end of any maximum flow algorithm. If a node v is upstream, then it must belong to X. Conversely, assume that v ∈ X and v is not an upstream node. This means that there is a minimum cut (X′, Y′) with v ∈ Y′. Given that v ∈ X, there is a path P from s to v in G_f. Since v ∈ Y′, P must contain an edge (u, w) with u ∈ X′ and w ∈ Y′. However, this is a contradiction, since no edge of the residual graph G_f can go from the source side to the sink side of a minimum cut.
The following lemmas, which involve the notions of critical job and critical instance, are important ingredients for the analysis of our algorithm.
Lemma 3 If ⟨J, I, v⟩ is a critical instance of the WAP, then there is at least one critical job j_i ∈ J.

Proof:
Let G be the graph that corresponds to a critical instance ⟨J, I, v⟩, and let G′ be the graph that corresponds to the instance ⟨J, I, v − ε⟩, for a small constant ε > 0 that approaches zero. Since ⟨J, I, v⟩ is critical, there is no feasible (s, t)-flow of value Σ_{j_i∈J} w_i/(v − ε) in G′. Because of the max-flow min-cut theorem, we conclude that any minimum (s, t)-cut of G′ has capacity strictly less than Σ_{j_i∈J} w_i/(v − ε) and, as a result, no minimum (s, t)-cut of G′ includes all the edges (s, x_i). Indeed, if all edges (s, x_i) were included in a minimum (s, t)-cut, then G′ would have an (s, t)-flow in which all these edges are saturated, which implies that there would be a feasible (s, t)-flow for G′ of value Σ_{j_i∈J} w_i/(v − ε). The remainder of the proof is based on the notion of upstream nodes. For that, it suffices to observe that, given any maximum flow, there is always an edge (s, x_i) that is not saturated. Hence, there is always a node x_i in G′ that belongs to the set of upstream nodes: if we apply a breadth-first search on the residual graph G_f, we will reach x_i through the unsaturated edge (s, x_i), which, by Claim 1, implies that x_i is upstream. Thus, for every path x_i, y_j, t of G′, there is always an edge (x_i, y_j) or (y_j, t) that is saturated by any maximum flow; if not, there would be an unsaturated (s, t)-path (a path is saturated if at least one of its edges is saturated), contradicting the maximality of the flow. Hence j_i, the job that corresponds to x_i, is a critical job.
Lemma 4 Let G = (V, E) be the graph that corresponds to the instance ⟨J, I, v⟩ of the WAP. If the edge (y_j, t) ∈ E belongs to a minimum (s, t)-cut of G and there is a maximum (s, t)-flow such that f(x_i, y_j) > 0, then j_i is critical.

Proof:
Suppose that the edge (y_j, t) belongs to a minimum (s, t)-cut C and that there is a maximum (s, t)-flow F such that f(x_i, y_j) > 0. Every edge of C is saturated by any maximum flow. Since f(x_i, y_j) > 0, no path from x_i to t can be left unsaturated by F: if this were the case, then we could reroute part of f(x_i, y_j) through the unsaturated path, and this would contradict the fact that (y_j, t) belongs to a minimum (s, t)-cut. Since F is a maximum (s, t)-flow and saturates all the paths from x_i to t, there is a minimum (s, t)-cut C′ that contains one edge from each such path (the one saturated by F). Hence, j_i is critical.
Our algorithm is based on the following lemma in order to determine critical jobs.
Lemma 5 Assume that ⟨J, I, v⟩ is a critical instance of the WAP and let G′ be the graph that corresponds to the instance ⟨J, I, v − ε⟩. Then, any minimum (s, t)-cut C′ of G′ contains: (i) exactly one edge of every path x_i, y_j, t for each critical job j_i of G, and (ii) all the edges (s, x_i) for each non-critical job j_i of G.

Proof:
Consider any critical job j_i. Assume that there is a path x_i, y_j, t in G′ such that none of its edges belongs to the minimum (s, t)-cut C′. Then there is a maximum (s, t)-flow F that does not saturate the edges (x_i, y_j) and (y_j, t). If the edge (s, x_i) were not saturated, then F would not be a maximum flow. On the other hand, if (s, x_i) were saturated by F, then job j_i would not be critical for ⟨J, I, v⟩. In both cases, we have a contradiction.
Similarly, assume that j_i is not critical for the instance ⟨J, I, v⟩ and suppose that the edge (s, x_i) does not belong to a minimum cut of G′. This means that there is a maximum (s, t)-flow F that does not saturate this edge. If there is at least one path x_i, y_j, t that is not saturated, then F is not maximum, and if all such paths are saturated, then j_i is a critical job for ⟨J, I, v⟩, which is a contradiction.
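Lemmas 3-5, together with Claim 1, suggest a concrete procedure for extracting the critical jobs of a critical instance: build the graph of ⟨J, I, v − ε⟩, compute any maximum flow, and return the jobs whose node x_i lies on the source side of the minimum cut, i.e. is reachable from s in the residual graph. The sketch below is ours (the fixed numeric ε and the Edmonds-Karp subroutine are illustrative choices, not the paper's).

```python
from collections import deque

def _edmonds_karp(cap, s, t):
    # Max flow on a capacity matrix; cap is mutated into the residual.
    n = len(cap)
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return
        b, v = float("inf"), t
        while v != s:
            b = min(b, cap[parent[v]][v]); v = parent[v]
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= b; cap[v][u] += b
            v = u

def critical_jobs(jobs, intervals, alive, m, speed, eps=1e-6):
    # Build the graph of <J, I, v - eps>, run a max flow, and return
    # the jobs whose node x_i is reachable from s in the residual
    # graph (Claim 1 and Lemmas 3-5).  eps must keep v - eps above
    # the next critical speed; the default here is only illustrative.
    n, L = len(jobs), len(intervals)
    src, snk = 0, n + L + 1
    cap = [[0.0] * (snk + 1) for _ in range(snk + 1)]
    for i, (w, _, _) in enumerate(jobs):
        cap[src][1 + i] = w / (speed - eps)
    for j, (a, b) in enumerate(intervals):
        for i in alive[j]:
            cap[1 + i][1 + n + j] = b - a
        cap[1 + n + j][snk] = m[j] * (b - a)
    _edmonds_karp(cap, src, snk)
    seen, q = {src}, deque([src])       # BFS in the residual graph
    while q:
        u = q.popleft()
        for v in range(snk + 1):
            if v not in seen and cap[u][v] > 1e-12:
                seen.add(v); q.append(v)
    return [i for i in range(n) if 1 + i in seen]

# One job of work 4 alive in [0, 2] on one processor: its critical
# speed is 2, and at that speed it is (the only) critical job.
print(critical_jobs([(4.0, 0, 2)], [(0, 2)], [[0]], [1], 2.0))   # [0]
```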

The BAL Algorithm
We are now ready to give a high-level description of our algorithm. Initially, we assume that all jobs are executed with the same speed s_UB; this speed is such that there exists a feasible schedule executing all jobs with this common speed, though its energy consumption may be far from optimal. Then, we decrease the speed of all jobs up to the point where no further reduction is possible if a feasible schedule is to be obtained. At this point, all jobs are still assumed to run at the same speed, which is critical, and there is at least one job that cannot be executed with a speed less than this. The jobs that cannot be executed with a speed less than the critical one form the current set of critical jobs. So, the critical job(s) is (are) assigned the critical speed and is (are) ignored after this point. That is, in what follows, the algorithm considers the subproblem in which the critical jobs are omitted, because they have already been assigned the lowest speed at which they can be feasibly executed (the critical speed), and there are fewer than m processors during some intervals, because these processors are dedicated to the omitted jobs (i.e. we get an instance of the WAP). Our algorithm can be described as follows:

Algorithm 1 BAL
1: Set s_UB = max{max_j{Σ_{j_i∈A(j)} w_i/|I_j|}, max_{j_i}{den_i}} and s_LB = max_{j_i∈J}{den_i}.
2: while J ≠ ∅ do
3: Find the minimum speed s_crit such that the instance ⟨J, I, s_crit⟩ of the WAP is feasible, using binary search in the interval [s_LB, s_UB] through repeated maximum flow computations.
4: Determine the set of critical jobs J_crit.
5: Assign to the critical jobs the speed s_crit and set J = J \ J_crit.
6: Update I, i.e., the number of available processors m_j for each interval I_j.
7: s_UB = s_crit, s_LB = max_{j_i∈J}{den_i}
8: Use the optimal algorithm for P|r_i, d_i, pmtn|− to schedule each job j_i with processing time w_i/s_i.
We denote by s_crit the critical speed and by J_crit the set of critical jobs. We know that each job will be executed with a speed not less than its density. Therefore, given a set of jobs J, there does not exist a feasible schedule that executes all jobs with a speed s < max_{j_i∈J}{den_i}. Also, observe that no job has speed s > max{max_j{Σ_{j_i∈A(j)} w_i/|I_j|}, max_{j_i}{den_i}}. These bounds define the search space of the binary search in the first step of the algorithm, which determines the minimum speed for which there is a feasible schedule executing all jobs in J with the same speed. In each subsequent step, the current speed (i.e. the critical speed of the previous step) is an upper bound on the speed of all remaining jobs, and a lower bound is the maximum density among them. We use these updated bounds to perform a new binary search, and so on. At this point, note that binary search has already been used in other works as part of optimal polynomial-time algorithms for scheduling problems with speed scaling (see [4] and [14]).
In order to complete the description of our algorithm, it remains to explain the way the critical jobs are determined. Because of Lemma 5, this can be done by finding a minimum (s, t)-cut in the graph G′ that corresponds to ⟨J, Ĩ, v − ε⟩, where J and Ĩ correspond to the current instance of the WAP. Note that ε must be such that v − ε is strictly greater than the next critical speed.
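The outer loop of BAL can be sketched as follows, with the WAP feasibility test and the critical-job detection abstracted as oracles (both reduce to maximum flow computations as described above; all names and signatures here are our own abstraction, not the paper's):

```python
def bal_loop(job_ids, densities, s_ub, feasible, find_critical, tol=1e-9):
    # feasible(R, v) answers the WAP for the remaining jobs R at the
    # common speed v; find_critical(R, v) returns the critical jobs
    # of the critical instance <R, I, v>.
    speeds = {}
    remaining = set(job_ids)
    while remaining:
        lo = max(densities[i] for i in remaining)   # s_LB
        hi = s_ub                                   # s_UB
        while hi - lo > tol:                        # binary search
            mid = (lo + hi) / 2
            if feasible(remaining, mid):
                hi = mid
            else:
                lo = mid
        s_crit = hi
        crit = find_critical(remaining, s_crit)
        for i in crit:                              # fix their speed
            speeds[i] = s_crit
        remaining -= set(crit)
        s_ub = s_crit                               # new upper bound
    return speeds

# Toy check: two jobs alive in [0, 1] on a single processor; the only
# critical speed is the total density w_0 + w_1 = 5.
w = {0: 2.0, 1: 3.0}
speeds = bal_loop(
    [0, 1], w, 5.0,
    feasible=lambda R, v: sum(w[i] for i in R) / v <= 1.0,
    find_critical=lambda R, v: sorted(R),
)
print(speeds)   # {0: 5.0, 1: 5.0}
```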
Algorithm BAL produces an optimal schedule; this holds because any schedule constructed by the algorithm satisfies the properties of Lemma 1.
Theorem 1 Algorithm BAL produces an optimal schedule.

Proof:
First of all, it is obvious that the algorithm assigns to every job a constant speed, because each job is assigned exactly one speed in exactly one iteration. Because of Lemma 4, we know that all jobs that have 0 < t_{i,j} < |I_j| have the same speed, because when such a job is critical, all other jobs of the same kind are critical as well and are assigned the same speed. For the same reason, each job with t_{i,j} = |I_j| is assigned the same speed as the jobs that run during I_j, or a greater speed assigned in a previous step. Now, consider the case where t_{i,j} = 0 for a job j_i during an interval I_j ⊆ span_i. When j_i is assigned a speed by the algorithm, it is critical. Hence, in every interval I_j during which j_i is alive, apart from the ones whose processors were already occupied in previous iterations, we know that either t_{i,j} = |I_j| or Σ_{j_i∈A(j)} t_{i,j} = m_j|I_j|, where m_j is the number of available processors. Therefore, if t_{i,j} = 0, then there are two cases: either (i) all the processors of I_j were occupied at an iteration previous to the one at which j_i was assigned a speed, or (ii) this happened at the same iteration and the minimum speed of a job during this interval is not less than that of j_i. Hence, j_i cannot have a greater speed than any job executed during I_j. Finally, because of Lemma 5, BAL correctly identifies the critical jobs at each step of the algorithm. The theorem follows.
We turn now our attention to the complexity of the algorithm. Because of Lemma 3, at least one job (all the critical ones) is scheduled at each step of the algorithm. Therefore, there are at most n steps. Assume that P is the range of all possible values of speeds divided by our desired accuracy. Then, the binary search needs O(log P) values of speed to determine the next critical speed at each step. That is, BAL performs O(log P) maximum flow computations at each step. Thus, the overall complexity of our algorithm is O(nf(n) log P).
Relation of BAL to the algorithm of Albers et al. [3]. The high-level idea of the algorithm in [3] is similar to that of BAL. Both algorithms can be decomposed into a number of steps (phases) and, at each step, a subset of jobs (the critical ones) is scheduled. The difference between the two algorithms lies in the way each step is performed. In [3], a step is as follows: at the beginning, all remaining jobs are conjectured to be critical. Then, the set of (potentially) critical jobs is reduced through repeated maximum flow computations. Once the set of critical jobs of a particular step is determined, their algorithm specifies the way these jobs are executed. In the worst case, their algorithm performs n steps and the i-th step involves n − i maximum flow computations. Therefore, the worst-case running time of their algorithm is O(n²f(n)). In our case, BAL computes the speed of the critical jobs through binary search. Each iteration of the binary search involves a maximum flow computation. Once the critical speed is computed, the set of critical jobs can be found by computing a minimum cut. BAL constructs the schedule once all the critical speeds are determined.

Makespan Minimization with a Budget of Energy
Algorithm BAL can be extended to obtain an optimal algorithm, say MBAL, for the problem of makespan minimization given a fixed budget of energy E. As before, preemption and migration are allowed and the jobs have arbitrary release dates and works. In order to apply MBAL, we need an upper and a lower bound on the makespan of the optimal schedule. Then, the algorithm uses binary search to compute the minimum makespan for which there is a feasible schedule consuming E units of energy. Two such bounds are X_LB = (1/m)(W^α/E)^{1/(α−1)} and X_UB = max_i{r_i} + (W^α/E)^{1/(α−1)}, where W is the total work of all jobs. The high-level description of the algorithm is the following:

Algorithm 2 MBAL
1: Compute X_UB and X_LB.
2: Perform binary search in [X_LB, X_UB] to find the minimum makespan X* for which there is a feasible schedule that consumes E units of energy.
3: Return this schedule.
In order to perform the binary search, given a value X, MBAL examines whether or not there is a feasible schedule of makespan X that consumes E units of energy. To do this, it runs algorithm BAL assuming that all jobs have a common deadline X. Then, it computes the minimum amount of energy E* that a feasible schedule for the particular instance might consume. If E ≥ E*, then there is a feasible schedule of makespan X that uses no more than E units of energy; otherwise, no such schedule exists. The complexity of MBAL is log P times the complexity of BAL, i.e. O(nf(n) log² P).
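The outer bisection of MBAL can be sketched as follows (a sketch of ours: min_energy stands for the minimum energy E* computed by running BAL with common deadline X, and is nonincreasing in X, which is what makes bisection valid):

```python
def mbal(E, x_lb, x_ub, min_energy, tol=1e-9):
    # Binary search for the smallest makespan X in [x_lb, x_ub]
    # such that min_energy(X) <= E.
    while x_ub - x_lb > tol:
        x = (x_lb + x_ub) / 2
        if min_energy(x) <= E:
            x_ub = x
        else:
            x_lb = x
    return x_ub

# Toy check: one processor, one job of work W = 2 released at 0,
# alpha = 2.  Then min_energy(X) = (W/X)**alpha * X = 4/X, and the
# optimal makespan for budget E = 4 is (W**alpha / E)**(1/(alpha-1)) = 1.
X_star = mbal(4.0, 0.5, 10.0, lambda X: 4.0 / X)
print(X_star)
```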

Conclusion
We studied the energy minimization multiprocessor speed-scaling problem with migration. We proposed a combinatorial polynomial-time algorithm based on a reduction to the maximum flow problem. We also extended our result to the case where the objective is makespan minimization given a budget of energy. Since there is not much work on problems with migration, there are many directions and problems to be considered for multicriteria optimization. All these problems seem to be very interesting and might require new algorithmic techniques because of their continuous nature. In this context, we believe that the approach used in our paper may be useful for future work.