Minimizing total weighted completion time on a single machine subject to non-renewable resource constraints

In this paper, we describe new complexity results and approximation algorithms for single-machine scheduling problems with non-renewable resource constraints and the total weighted completion time objective. This problem has hardly been studied in the literature: beyond some complexity results, only a fully polynomial-time approximation scheme (FPTAS) is known for a special case. In this paper, we discuss some polynomially solvable special cases and also show that even under very strong assumptions, such as the processing time, the resource consumption and the weight being the same for each job, minimizing the total weighted completion time remains NP-hard. In addition, we propose a 2-approximation algorithm for this variant and a polynomial-time approximation scheme (PTAS) for the case when the processing time equals the weight for each job, while the resource consumptions are arbitrary.


Introduction
Non-renewable resources, such as raw material, energy or money, are used in all sectors of production, and depending on the stocking policy, they have varying impact on the preparation of daily and weekly production schedules. Consider for instance the preparation of the weekly schedule of a production line, where some of the raw materials built into the products arrive over the week, and the supplies constrain what can be produced and when. Of course, if all the purchased items were in stock right at the beginning of the week, then the supply arriving during the week would not influence the scheduling decisions, but the drawback is that larger stocks should be kept, which incurs additional costs.
In this paper, we consider single-machine scheduling problems with one additional non-renewable resource. The non-renewable resource has an initial stock and some additional supplies in the future with known supply dates and quantities. A job can only be started if the inventory level of the resource is at least as much as the quantity required by the job. When the job is started, the inventory level is decreased by the required quantity. Therefore, when determining the schedule, one must take into account not only the initial stock level, but also the future supplies. This is an extra constraint in addition to, e.g., job release dates or sequence-dependent setup times.
More formally, in all problems studied in this paper, there are a single machine, a non-renewable resource, and a finite set of jobs $\mathcal{J}$. Each job $j \in \mathcal{J}$ has a processing time $p_j > 0$, a weight $w_j \ge 0$, and a resource requirement $a_j \ge 0$. The resource has an initial supply $\tilde b_1$ available at time $u_1 = 0$, and additional supplies $\tilde b_\ell$ at supply dates $u_\ell$ for $\ell = 2,\dots,q$. For convenience, we also define $u_{q+1} = +\infty$. We assume that the supplies are indexed in increasing $u_\ell$ order, i.e., $u_\ell < u_{\ell+1}$ for $\ell = 1,\dots,q-1$. Let $S$ be a schedule specifying a start time $S_j$ for each job $j$. It is feasible if (i) the jobs do not overlap in time, and (ii) for each $\ell = 1,\dots,q$, $\sum_{j : S_j < u_{\ell+1}} a_j \le \sum_{\nu=1}^{\ell} \tilde b_\nu$, i.e., the supply arriving up to $u_\ell$ covers the demands of those jobs starting before $u_{\ell+1}$. The objective function is the weighted sum of job completion times, i.e., a feasible schedule of minimum $\sum_{j\in \mathcal{J}} w_j C_j$ value is sought, where $C_j = S_j + p_j$. We mention that a feasible schedule exists only if $\sum_{j\in \mathcal{J}} a_j \le \sum_{\ell=1}^{q} \tilde b_\ell$, and more resources are not needed. In fact, without loss of generality we may assume that (i) $\sum_{j\in \mathcal{J}} a_j = \sum_{\ell=1}^{q} \tilde b_\ell$, and (ii) $\tilde b_q > 0$, i.e., at least one job must start not before $u_q$.

Table 1 Summary of the main results

q       Constraints                          Objective         Result                               Reference
*       $p_j = \bar p$, $a_j = \bar a$       $\sum w_j C_j$    Polynomial time (decr. $w_j$ ord.)   Thm. 1
*       $a_j = \bar a$, $w_j = \lambda p_j$  $\sum w_j C_j$    Polynomial time (decr. $p_j$ ord.)   Thm. 1
2       $p_j = 1$, $w_j = \lambda a_j$       $\sum w_j C_j$    Weakly NP-hard                       Thm. 2
2       $w_j = p_j = a_j$                    $\sum p_j C_j$    Weakly NP-hard                       Thm. 3
*       $w_j = p_j = a_j$                    $\sum p_j C_j$    Strongly NP-hard                     Thm. 3
*       $w_j = p_j = a_j$                    $\sum p_j C_j$    2-approx algorithm (LPT rule)        Thm. 4
const.  $w_j = p_j$                          $\sum p_j C_j$    PTAS                                 Thm. 5

"*" stands for "arbitrary"; "decr. $w_j$ ord." means decreasing (non-increasing) $w_j$ order; "incr. $a_j$ ord." means increasing (non-decreasing) $a_j$ order; "decr. $p_j$ ord." is equivalent to the LPT rule; "2-approx algorithm" means "polynomial-time approximation algorithm with relative error 2"; $\lambda$ is an arbitrary positive number; "PTAS" stands for "polynomial-time approximation scheme".

In the standard $\alpha|\beta|\gamma$ notation of Graham et al. (1979), we will indicate in the $\beta$ field by $nr = 1$ that the number of non-renewable resources is 1. In addition, we will constrain the number of supply dates to a constant by $q = const$. We will use a number of other constraints, which are standard in the scheduling literature.
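The feasibility conditions (i) and (ii) above can be checked mechanically. The following helper is our own sketch (not from the paper); it tests both the non-overlap condition and the cumulative supply condition.

```python
# A small helper (our own sketch) that checks feasibility of a schedule:
# (i) jobs do not overlap on the single machine, and (ii) for every l, the
# jobs starting before u_{l+1} consume no more than the supply up to u_l.

def is_feasible(starts, p, a, u, b):
    """starts[j], p[j], a[j]: start time, length, requirement of job j;
    u[l], b[l]: supply dates (u[0] == 0) and supplied quantities."""
    jobs = sorted(range(len(p)), key=lambda j: starts[j])
    # (i) no two jobs overlap in time
    for x, y in zip(jobs, jobs[1:]):
        if starts[x] + p[x] > starts[y]:
            return False
    # (ii) cumulative supply covers cumulative demand at every supply date
    q = len(u)
    for l in range(q):
        nxt = u[l + 1] if l + 1 < q else float('inf')
        demand = sum(a[j] for j in range(len(p)) if starts[j] < nxt)
        if demand > sum(b[:l + 1]):
            return False
    return True

# Example with two supplies: job 1 must wait for the second supply.
print(is_feasible([0, 5], p=[2, 1], a=[3, 2], u=[0, 5], b=[3, 2]))  # feasible
print(is_feasible([0, 2], p=[2, 1], a=[3, 2], u=[0, 5], b=[3, 2]))  # infeasible
```

The second call fails because both jobs would start before $u_2$, demanding 5 units while only 3 are available at time 0.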
In this paper, we establish new complexity and approximability results for special cases of $1|nr = 1|\sum w_j C_j$. The special cases are obtained by imposing constraints on the parameters of the jobs. For instance, the constraint $w_j = p_j$ means that for each job $j$, its weight equals its processing time, while $w_j = \lambda p_j$ indicates that $w_j$ is proportional to $p_j$, where $\lambda > 0$ is a common ratio. Furthermore, $p_j = 1$ or $p_j = \bar p$ restricts the processing time of each job to 1 or to some other common constant value $\bar p$. The new results are summarized in Table 1. As we can see, three special cases can be solved in polynomial time by list scheduling; we identify three new NP-hard variants and propose approximation algorithms in two cases. We emphasize that the 2-approximation algorithm is merely list scheduling using the LPT order, but the analysis of the algorithm is tricky. On the other hand, the polynomial-time approximation scheme for $1|nr = 1, w_j = p_j, q = const.|\sum p_j C_j$ is rather involved, and the underlying analysis needs new ideas, which may be used in the analysis of other problems as well.
In Sect. 2, we overview the related literature. In Sect. 3, we generalize list scheduling to our problem and discuss special cases that can be solved optimally with this method. In Sect. 4, we establish the NP-hardness of $1|nr = 1, p_j = 1, a_j = w_j|\sum w_j C_j$. In Sect. 5, we present complexity results and a 2-approximation algorithm for the special case with $p_j = a_j = w_j$. Finally, in Sect. 6 we devise a PTAS for $1|nr = 1, p_j = w_j, q = const.|\sum w_j C_j$.

Literature review
Machine scheduling problems with non-renewable resources were introduced by Carlier (1984) and Slowinski (1984). In Carlier (1984), the computational complexity of several variants with a single machine is established. In particular, it is shown that $1|nr = 1|\sum w_j C_j$ is NP-hard in the strong sense, which is also proved in Gafarov et al. (2011). However, the problem remains NP-hard in the weak sense if $q = 2$ (two supplies), see Kis (2015). In Kis (2015), an FPTAS is devised for the special case $1|nr = 1, q = 2|\sum w_j C_j$. Moreover, Gafarov et al. (2011) study a variant of this problem, where each job has processing time 1, and there are $n$ supplies such that $u_\ell = \ell M$ and $\tilde b_\ell = M$ for $\ell = 1,\dots,n$, where $M = \sum_{j\in\mathcal{J}} a_j / n$ is an integer number, and $n = |\mathcal{J}|$. Without the non-renewable resource constraint, the problem $1||\sum w_j C_j$ can be solved optimally in polynomial time by scheduling the jobs in non-increasing $w_j/p_j$ order, a classical result of Smith (1956).
There are several results about the complexity and approximability of machine scheduling problems with non-renewable resources and the makespan and the maximum lateness objectives, see e.g., Slowinski (1984), Toker et al. (1991), Xie (1997), Grigoriev et al. (2005), Györgyi and Kis (2014, 2015a, b) and Györgyi (2017). For an overview, see Györgyi and Kis (2017). In particular, Slowinski (1984) considers a parallel machine problem with preemptive jobs, and with a single non-renewable resource, which has an initial stock and some additional supplies. It is assumed that the rate of consuming the non-renewable resource is constant during the execution of the jobs. These assumptions led to a polynomial-time algorithm for minimizing the makespan. Toker et al. (1991) prove that the single-machine scheduling problem with a single non-renewable resource and the makespan objective reduces to the 2-machine flow shop problem provided that the single non-renewable resource has a unit supply in every time period. In Grigoriev et al. (2005), 2-approximation algorithms are devised for the makespan and the maximum lateness objectives (under some additional conditions). In a series of papers, Györgyi and Kis (2014, 2015a, b, 2017) and Györgyi (2017) present approximation schemes and inapproximability results for various special cases of single and parallel machine problems with the makespan and the maximum lateness objectives. In Györgyi and Kis (2018), a branch-and-cut algorithm for minimizing the maximum lateness is devised and evaluated.

Fig. 1 The case $C^*_{j_1} < S^*_{j_2}$ of Theorem 1 (c). The schedule $S^*$ is depicted in part (a), where the dashed line indicates two options for the length of $j_2$. The form of the schedule $S$ if $C_{j_2} \ge u_\ell$ and if $C_{j_2} < u_\ell$ is depicted in parts (b) and (c), respectively

List scheduling
In this section, we discuss polynomially solvable special cases of $1|nr = 1|\sum w_j C_j$. All the algorithms presented below are based on the following extension of the well-known list scheduling method:

1. Sort the jobs according to some total ordering relation; let $(j_1,\dots,j_n)$ denote the resulting order.
2. Set $t := 0$ and $r := \tilde b_1$, and schedule the jobs $j_1, j_2, \dots$ one by one: if $r \ge a_{j_i}$, start $j_i$ at time $t$, and set $t := t + p_{j_i}$ and $r := r - a_{j_i}$.
3. Otherwise, postpone $j_i$: move $t$ forward to the earliest supply date by which the arrived supplies raise the available resource level to at least $a_{j_i}$, increase $r$ accordingly, and proceed as in Step 2.
In the above algorithm, t represents the time when the next job may be scheduled, and r the resource level before scheduling it. In Step 3, t and r are reset if the resource available after scheduling the previous jobs is not enough to schedule j i . Notice that in such a case, the supply of more than one period may be needed to increase the available quantity of the resource sufficiently.
The above simple algorithm is a generalization of the well-known algorithm that schedules the jobs in some given order without interruptions.
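The procedure above can be sketched compactly. The following is our own reading of Steps 2-3 (the function name and data layout are ours): jobs are taken in the given order, and when the current stock is too small, the start time jumps forward to the first supply date at which the cumulative supply covers the next job.

```python
# Sketch of the generalized list scheduling method (our own reading of
# Steps 2-3): when the stock r is insufficient, the start time t jumps to
# supply dates, possibly consuming the supply of more than one period.

def list_schedule(order, p, a, u, b):
    starts, t, stock, l = {}, 0, b[0], 1
    for j in order:
        while stock < a[j]:          # Step 3: wait for further supplies
            t = max(t, u[l])
            stock += b[l]
            l += 1
        starts[j] = t                # Step 2: start j and update t and r
        t += p[j]
        stock -= a[j]
    return starts

# LPT order on a small instance with p = a = w = (1, 1, 3):
# supplies 2 at time 0 and 3 at time 9; the long job must wait until u_2.
print(list_schedule([2, 0, 1], p=[1, 1, 3], a=[1, 1, 3], u=[0, 9], b=[2, 3]))
```

Note that the long job (index 2) starts only at time 9, so the two unit jobs follow it; this is exactly the behavior exploited by the tight example of Sect. 5.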
Theorem 1 All of the following special cases can be solved optimally by list scheduling:

(a) Scheduling the jobs in non-increasing $w_j$ order is optimal for $1|nr = 1, p_j = \bar p, a_j = \bar a|\sum w_j C_j$.
(b) Scheduling the jobs in non-decreasing $a_j$ order is optimal for $1|nr = 1, p_j = \bar p, w_j = \bar w|\sum w_j C_j$.
(c) Scheduling the jobs in non-increasing $p_j$ (LPT) order is optimal for $1|nr = 1, a_j = \bar a, w_j = \lambda p_j|\sum w_j C_j$.

Proof The proof of optimality is left to the reader, except in the last case, which we verify as follows. Consider any instance of $1|nr = 1, a_j = \bar a, w_j = \lambda p_j|\sum w_j C_j$, and let $S^*$ be an optimal schedule in which the number of job pairs violating the LPT order is the smallest. Define $C^*_j := S^*_j + p_j$ for each job $j$. Suppose that there are at least two jobs that are not in LPT order. Consider the first two such consecutive jobs, say $j_1$ and $j_2$, where $j_1$ is scheduled before $j_2$, and $p_{j_1} + K = p_{j_2}$ for some $K > 0$. Let $S$ be the schedule where we swap the order of $j_1$ and $j_2$. We distinguish two cases. If $S^*_{j_2} = C^*_{j_1}$, then after the swap the two jobs occupy the same time interval as before, and the objective function does not change. Since $S$ is feasible, as each job has the same resource requirement, we reached a contradiction with the choice of $S^*$. Now suppose $C^*_{j_1} < S^*_{j_2}$. Hence, there is an $\ell$ such that $S^*_{j_2} = u_\ell$. Note that we have $S_{j_2} = S^*_{j_1}$, $S_{j_1} = \max\{C_{j_2}, u_\ell\}$. Further on we have $S_j = S^*_j$ for each job $j$ with $S^*_j < S^*_{j_1}$, and $S_j \le S^*_j$ for each job $j$ with $S^*_j \ge S^*_{j_2}$, see Fig. 1. Notice that only the start time of job $j_1$ increases after swapping job $j_1$ and job $j_2$. To reach a contradiction with the choice of $S^*$, it is enough to prove that $w_{j_1} C_{j_1} + w_{j_2} C_{j_2} \le w_{j_1} C^*_{j_1} + w_{j_2} C^*_{j_2}$, which follows by distinguishing the cases $C_{j_2} \ge u_\ell$ and $C_{j_2} < u_\ell$ depicted in Fig. 1b, c.

Theorem 2 For any $\lambda > 0$, the problem $1|nr = 1, p_j = 1, w_j = \lambda a_j|\sum w_j C_j$ is weakly NP-hard even for $q = 2$.
Proof We reduce the NP-hard PARTITION problem to our scheduling problem. An instance of the former problem is given by a natural number $n$ and the sizes of $n$ items, $s_1,\dots,s_n$, which are nonnegative integer numbers. One has to decide whether the items can be partitioned into two subsets, $Q_1$ and $Q_2$, such that $\sum_{i\in Q_1} s_i = \sum_{i\in Q_2} s_i$. Since all item sizes are integer numbers, the answer is "NO" unless $\sum_{i=1}^{n} s_i = 2A$ for some integer $A$. Therefore, we assume that $\sum_{i=1}^{n} s_i$ is an even integer, and let $A := \sum_{i=1}^{n} s_i / 2$. Let $I$ be an instance of PARTITION; the corresponding instance $I'$ of $1|nr = 1, p_j = 1, w_j = \lambda a_j|\sum w_j C_j$ consists of $n$ jobs, and for each item $j$, the corresponding job has a processing time $p_j = 1$, $a_j := s_j$ and $w_j := \lambda s_j$, where $\lambda > 0$ is fixed arbitrarily. In addition, there is a single resource with an initial stock of $\tilde b_1 := A$, available at time $u_1 := 0$, and with one more supply $\tilde b_2 := A$ at time $u_2 := n^2 A^2$.
We claim that $I$ has a "YES" answer if and only if $I'$ has a feasible schedule of objective function value at most $\lambda(n^2 A^3 + 2nA)$. First suppose that $I$ has a partitioning of the items into $Q_1, Q_2$ of equal size. Schedule the jobs corresponding to the items in $Q_1$ from time 0 on consecutively in decreasing $w_j$ order, and those in $Q_2$ from $u_2$ consecutively in decreasing $w_j$ order. This schedule is clearly feasible. Suppose $Q_1 = \{j_1,\dots,j_k\}$ with $w_{j_i} \ge w_{j_{i+1}}$ for $i = 1,\dots,k-1$, and $Q_2 = \{j_{k+1},\dots,j_n\}$ with $w_{j_i} \ge w_{j_{i+1}}$ for $i = k+1,\dots,n-1$. Then, each job of $Q_1$ completes by time $n$ and each job of $Q_2$ by time $u_2 + n$; since the total weight of each subset is $\lambda A$, the objective function value is at most $\lambda A n + \lambda A(n^2 A^2 + n) = \lambda(n^2 A^3 + 2nA)$. Conversely, suppose the scheduling problem admits a feasible schedule $S$ of objective function value at most $\lambda(n^2 A^3 + 2nA)$. Let $Q_1$ consist of the items whose jobs start before $u_2$ in $S$, and let $Q_2$ contain the remaining items. Since $S$ is feasible, the total resource consumption of the jobs in $Q_1$ is at most $A$. Indirectly, suppose it is less than $A$. Then, the total weight of the jobs in $Q_2$ is at least $\lambda(A + 1)$, and each of these jobs completes after $u_2$; but then the objective function value is more than $\lambda(A+1) n^2 A^2 = \lambda(n^2 A^3 + n^2 A^2) > \lambda(n^2 A^3 + 2nA)$, which is a contradiction. Hence, the total resource consumption of the jobs in $Q_1$ is exactly $A$, i.e., $\sum_{i\in Q_1} s_i = A$, and $(Q_1, Q_2)$ is a solution of $I$.
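As a numerical sanity check of the "yes" direction (our own sketch, with $\lambda = 1$; the helper name is ours), the following evaluates the schedule built from a balanced partition and compares it with the bound $\lambda(n^2A^3 + 2nA)$.

```python
# Sanity check (our own sketch, lambda = 1) of the "yes" direction of the
# reduction in Theorem 2: a balanced partition yields a schedule of value
# at most n^2*A^3 + 2*n*A.

def schedule_value(Q1, Q2, sizes):
    """Schedule Q1's unit jobs from time 0 and Q2's from u_2 = n^2*A^2,
    each block contiguously in non-increasing weight (= size) order."""
    n, A = len(sizes), sum(sizes) // 2
    total = 0
    for start, block in ((0, Q1), (n**2 * A**2, Q2)):
        assert sum(block) <= A        # each block fits the supply of A
        t = start
        for s in sorted(block, reverse=True):
            t += 1                    # unit processing times
            total += s * t            # w_j = lambda * s_j with lambda = 1
    return total

sizes = [3, 1, 1, 2, 2, 1]            # total 10, so A = 5
val = schedule_value([3, 2], [1, 1, 2, 1], sizes)
n, A = len(sizes), sum(sizes) // 2
print(val, n**2 * A**3 + 2 * n * A)   # the value respects the bound
```

Here the second block starts only at $u_2 = n^2A^2 = 900$, which is what makes an unbalanced first block so expensive in the "only if" direction.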
Finally, notice that the transformation is of polynomial time complexity, which shows that there is a polynomial reduction from PARTITION to a decision version of the scheduling problem $1|nr = 1, p_j = 1, w_j = \lambda a_j|\sum w_j C_j$.

5 Problem $1|nr = 1, p_j = a_j = w_j|\sum w_j C_j$

We start this section by providing a non-trivial expression for the objective function value of an optimal schedule under the condition $p_j = w_j$ for every job $j$.
Let $S$ be any feasible schedule for the problem, and let $C_j = S_j + p_j$ be the completion time of job $j$ in $S$. Let $H_\ell$ denote the length of the idle period, if any, in schedule $S$ in the interval $[u_\ell, u_{\ell+1}]$, and let $G_\ell = \sum_{\nu=1}^{\ell} H_\nu$ be the total idle time until $u_{\ell+1}$. Let $P_\ell$ denote the total working time (when the machine is not idle) in $[u_\ell, u_{\ell+1}]$, noting that $u_\ell = \sum_{\nu=1}^{\ell-1} P_\nu + G_{\ell-1}$. See Fig. 2a for an illustration. Using the new notation, we can express the objective function value of $S$ as follows:

Lemma 1 If $p_j = w_j$ for each job $j$, then the objective function value of any feasible schedule $S$ can be expressed as

$\sum_{j\in\mathcal{J}} w_j C_j = \sum_{j \le k} p_j p_k + \sum_{\ell=1}^{q} G_{\ell-1} P_\ell = \sum_{j \le k} p_j p_k + \sum_{\ell=1}^{q-1} H_\ell \sum_{\mu=\ell+1}^{q} P_\mu, \quad (1)$

where $j \le k$ means that $C_j \le C_k$ in $S$.

Proof Consider any working period $B = [u_{\ell'}, t]$ in the schedule $S$, that is, the machine is idle right before $u_{\ell'}$ and right after $t$, and is working contiguously throughout $B$. Suppose $t \in (u_{\ell''}, u_{\ell''+1}]$, where $\ell'' \ge \ell'$. Let $k$ be an arbitrary job that is processed in $B$, see Fig. 2b. We have $C_k = \sum_{j : C_j \le C_k} p_j + G_{\ell'-1}$; thus, the total weighted completion time of the jobs processed in $B$ is

$\sum_{k : C_k \in B} p_k C_k = \sum_{k : C_k \in B} p_k \sum_{j : C_j \le C_k} p_j + G_{\ell'-1} \sum_{\nu=\ell'}^{\ell''} P_\nu = \sum_{k : C_k \in B} p_k \sum_{j : C_j \le C_k} p_j + \sum_{\nu=\ell'}^{\ell''} G_{\nu-1} P_\nu,$

where the first equation follows from $\sum_{k : C_k \in B} p_k = \sum_{\nu=\ell'}^{\ell''} P_\nu$, and the second from $G_{\nu-1} = G_{\ell'-1}$ for each $\ell' \le \nu \le \ell''$, since the machine is not idle in the interval $B$. Since the schedule can be partitioned into working and idle periods, summing over the working periods we derive the first equation of (1). Finally, the second equation of the statement of the lemma can be derived by using the definition of $G_\ell$ and by rearranging terms.
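Lemma 1 can be verified numerically. The following sketch (our own, not from the paper) builds a small schedule whose working periods start at supply dates, and checks that the direct objective value coincides with the pair-sum plus idle-time decomposition of (1).

```python
# Numerical check of Lemma 1 (our own sketch): for a schedule whose working
# periods each begin at a supply date, sum_j p_j*C_j equals
# sum_{j<=k} p_j*p_k + sum_l G_{l-1}*P_l.

p = [2, 1, 3, 2]                 # processing times (= weights)
u = [0, 5, 9]                    # supply dates u_1..u_q
blocks = [[0, 1], [2], [3]]      # blocks[l]: jobs run contiguously from u[l]

C = {}                           # completion times
for ul, block in zip(u, blocks):
    t = ul
    for j in block:
        t += p[j]
        C[j] = t

direct = sum(p[j] * C[j] for j in C)

order = sorted(C, key=lambda j: C[j])
pair_term = sum(p[order[j]] * p[order[k]]
                for k in range(len(order)) for j in range(k + 1))

# P[l]: working time in [u_l, u_{l+1}); H[l]: idle time there
uq = u + [float('inf')]
busy = sorted((C[j] - p[j], C[j]) for j in C)
P, H = [], []
for l in range(len(u)):
    lo, hi = uq[l], min(uq[l + 1], max(C.values()))
    work = sum(max(0, min(e, hi) - max(s, lo)) for s, e in busy)
    P.append(work)
    H.append(max(0, hi - lo - work))

G = [0]                          # G[l-1] pairs with P[l] as in (1)
for h in H[:-1]:
    G.append(G[-1] + h)

decomposed = pair_term + sum(G[l] * P[l] for l in range(len(u)))
assert abs(direct - decomposed) < 1e-9
```

On this instance both sides evaluate to 53; changing the supply dates shifts the idle-time term $\sum_\ell G_{\ell-1} P_\ell$ while leaving the order-independent pair term untouched.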
Theorem 3 The problem $1|nr = 1, q = 2, p_j = a_j = w_j|\sum w_j C_j$ is weakly NP-hard, and $1|nr = 1, p_j = a_j = w_j|\sum w_j C_j$ is strongly NP-hard.
Proof Recall the definition of the PARTITION problem from the proof of Theorem 2. For proving the weak NP-hardness of $1|nr = 1, q = 2, p_j = a_j = w_j|\sum w_j C_j$, we reduce the PARTITION problem to this scheduling problem. For any instance of PARTITION, the corresponding instance of $1|nr = 1, q = 2, p_j = a_j = w_j|\sum w_j C_j$ consists of $n$ jobs, one job for each item, and $p_i = a_i = w_i = s_i$ for each item $i = 1,\dots,n$. There are two supplies: one at $u_1 = 0$ with supplied quantity $A$ from the single resource, and another at $u_2 = A$ with supplied quantity $A$. We claim that the PARTITION problem instance has a solution if and only if the corresponding scheduling problem instance has a feasible solution of value at most $\sum_{j\le k} p_j p_k$. Using Lemma 1, the latter holds if and only if the schedule has no idle time. So, it suffices to prove that the PARTITION problem instance has a solution if and only if the corresponding scheduling problem instance admits a feasible schedule without any idle time. First suppose that the PARTITION problem instance has a "yes" answer, i.e., there is a subset $Q$ of items with $\sum_{i\in Q} s_i = A$. Schedule the corresponding jobs contiguously in any order in the interval $[0, A]$. Since $p_j = a_j$, and the supply at $u_1 = 0$ is $A$, this is feasible. Now, schedule the remaining jobs without idle times from $u_2 = A$. The result is a feasible schedule without idle times. Conversely, suppose there is a feasible schedule without idle times. Then, the machine is working throughout the interval $[0, A]$. Since the supply at $u_1 = 0$ is $A$, the total processing time of the jobs starting before $u_2 = A$ is $A$. Let the set $Q$ consist of the items corresponding to these jobs. This yields a feasible solution for the PARTITION problem instance. For proving the strong NP-hardness of $1|nr = 1, p_j = a_j = w_j|\sum w_j C_j$, we reduce the 3-PARTITION problem to this scheduling problem.
Recall that an instance of 3-PARTITION consists of a positive integer $t$ and $3t$ items, each having a size $s_i$, $i \in \{1,\dots,3t\}$, where the item sizes are bounded by a polynomial in the input length. It is assumed that $\sum_{i=1}^{3t} s_i = tB$ and $B/4 < s_i < B/2$ for each $i$. The question is whether the set of items can be partitioned into $t$ groups $Q_1,\dots,Q_t$ such that $\sum_{i\in Q_\ell} s_i = B$ for $\ell = 1,\dots,t$. The corresponding instance of the scheduling problem $1|nr = 1, p_j = a_j = w_j|\sum w_j C_j$ has $3t$ jobs corresponding to the $3t$ items with $p_i = a_i = w_i = s_i$, and $q = t$ supplies at supply dates $u_\ell = (\ell - 1)B$ with supplied quantities $\tilde b_\ell = B$ for $\ell = 1,\dots,q$. The rest of the proof goes along the same lines as the first part, i.e., we argue that 3-PARTITION has a feasible solution if and only if the corresponding scheduling problem instance has a solution of objective function value $\sum_{j\le k} p_j p_k$, which holds if and only if there is a feasible schedule without any idle times.
Theorem 4 Scheduling the jobs in LPT order is a 2-approximation algorithm for $1|nr = 1, p_j = a_j = w_j|\sum w_j C_j$.
Proof The main idea of the following proof is that first we transform the problem data such that the resource supplies are deferred until they are used in a selected optimal schedule, and then we bound the approximation ratio of the LPT schedule. Finally, we observe that the LPT order yields at least as good a schedule with the original problem data as the same job order for the modified problem data.
Let $I$ be any instance of the scheduling problem and fix an optimal schedule $S^*$ for $I$. Let $J^*_\ell$ be the set of jobs that start in $[u_\ell, u_{\ell+1})$ in $S^*$. Let $I'$ be a new problem instance derived from $I$ by modifying the supplied quantities (the other problem data do not change): $\tilde b'_\ell := \sum_{j\in J^*_\ell} a_j$ for $\ell = 1,\dots,q$.

Claim 1 $I'$ has the following properties: (i) the cumulative supply in $I'$ never exceeds that in $I$, (ii) $S^*$ is feasible for $I'$, (iii) $S^*$ is optimal for $I'$, and (iv) any ordering of the jobs yields at least as good a schedule for $I$ as for $I'$.

Proof
The first two claims are straightforward consequences of the definitions, while (iii) and (iv) both follow from the fact that in I the resource supplies are deferred with respect to I .
From now on we consider $I'$. Let $S^{LPT}$ denote the schedule obtained from the LPT order for problem instance $I'$, and let $C^{LPT}_j$ denote the completion time of job $j$ in this schedule. Let $G^{LPT}_\ell$ denote the total idle time in $S^{LPT}$ in $[0, u_{\ell+1}]$ and $P^{LPT}_\ell$ the total working time (when the machine processes a job) in $[u_\ell, u_{\ell+1}]$. We have $u_\ell = \sum_{\nu=1}^{\ell-1} P^{LPT}_\nu + G^{LPT}_{\ell-1}$. Let us define $\hat P^{LPT}_\ell$ as follows. If the machine is working just before $u_\ell$, or idle just after $u_\ell$ in $S^{LPT}$, then $\hat P^{LPT}_\ell = 0$; otherwise, $\hat P^{LPT}_\ell$ equals the length of the working period starting at $u_\ell$ until the first idle period in $S^{LPT}$, see Fig. 3. Notice that if the machine is working right before and also right after $u_\ell$, then $\hat P^{LPT}_\ell = 0$ by definition.
According to Lemma 1, we can express the total weighted completion time of the LPT schedule as follows: (2). Note that the second equation follows from the fact that if $\hat P^{LPT}_\ell = 0$, then $G^{LPT}_{\ell-1} = G^{LPT}_{\ell'-1}$ for the largest $\ell' < \ell$ with $\hat P^{LPT}_{\ell'} > 0$.
In the next claim, we relate (2) to (1). The notations $P^*_\ell$, $G^*_\ell$ and $H^*_\ell$ refer to $P_\ell$, $G_\ell$ and $H_\ell$ in case of $S^*$. Note that $u_\ell = \sum_{\nu=1}^{\ell-1} P^*_\nu + G^*_{\ell-1}$.

Claim 2 If $\hat P^{LPT}_\ell > 0$, i.e., the machine is idle just before $u_\ell$, and a job $j(\ell)$ is started at $u_\ell$ in $S^{LPT}$, then (i) and (ii) hold.

Fig. 4 Tight example: the optimal schedule (above) and the LPT schedule (below) for the same instance

Using (2) and Claim 2 (ii), we derive the desired bound, where the first inequality follows from Claim 2 (ii), the second from the observation that $p_{j(\ell)}$ is multiplied by the total processing time of job $j(\ell)$ and all those jobs following $j(\ell)$ in the LPT order, and the rest is obtained by rearranging terms.

Note that $H^*_{\ell-1} = 0$ means the machine is not idle just before $u_\ell$ in $S^*$, $\sum_{\mu=\ell}^{q} P^{LPT}_\mu$ equals the total amount of work after $u_\ell$ in $S^{LPT}$, while $\sum_{\mu=\ell}^{q} P^*_\mu$ is the same in the optimal schedule $S^*$.

Proof (of Claim 3)
First we prove the claim for each $\ell$ such that $\hat P^{LPT}_\ell \neq 0$. Consider such an $\ell$. If $\sum_{\mu=\ell}^{q} P^*_\mu$ were less than $p_{j(\ell)}$, then each job with a processing time at least $p_{j(\ell)}$ would be scheduled before $u_\ell$ in $S^*$; thus, $\sum_{\nu=1}^{\ell-1} \tilde b'_\nu$ would be at least the total processing time of these jobs. However, this would mean that $j(\ell)$ could be scheduled earlier (recall that the machine is idle just before $u_\ell$ in $S^{LPT}$); thus, we have $\sum_{\mu=\ell}^{q} P^*_\mu \ge p_{j(\ell)}$. Since $\hat P^{LPT}_\ell \neq 0$, we can use Claim 2 (i). Now consider an $\ell$ with $\hat P^{LPT}_\ell = 0$. If $\sum_{\mu=\ell}^{q} P^{LPT}_\mu = 0$, then the claim is trivial. Otherwise, let $\ell' > \ell$ be the smallest index such that $\hat P^{LPT}_{\ell'} \neq 0$. Since we know that the claim is true for $\ell'$, the claim follows for $\ell$ as well.

Finally, as we have already noted, the LPT ordering of the jobs yields at least as good a schedule for $I$ as the same job order for $I'$, and the theorem is proved.

Tight example For any integer $n \ge 3$, consider the scheduling problem with $n$ jobs: the first $n-1$ jobs are of unit processing time, while the last job has processing time $n$. That is, $p_j = a_j = w_j = 1$ for $j = 1,\dots,n-1$, and $p_n = a_n = w_n = n$ for job $n$. There are two supplies: one at $u_1 = 0$ with supplied quantity $n-1$, and another at $u_2 = n^2$ with supplied quantity $n$. In the optimal schedule, the first $n-1$ jobs are scheduled from time 0, and the last job is scheduled at time $n^2$ (at $u_2$), see Fig. 4. That is, $C^*_j = j$ for $j = 1,\dots,n-1$, and $C^*_n = n^2 + n$. The optimal objective function value is $\sum_{j=1}^{n} p_j C^*_j = n(n-1)/2 + (n^3 + n^2)$.
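The tight example can be evaluated in closed form. The following sketch (our own; function names are ours) computes the optimal and the LPT objective values and shows the ratio approaching 2 as $n$ grows.

```python
# The tight example of Theorem 4 in numbers (our own sketch): n-1 unit jobs
# plus one job of length n; supplies n-1 at time 0 and n at time n^2.
# In LPT order the long job goes first, so it must wait for u_2 = n^2.

def opt_value(n):
    # small jobs from time 0 (C_j = j), the long job starts at u_2 = n^2
    return sum(range(1, n)) + n * (n**2 + n)

def lpt_value(n):
    # the long job starts at n^2; the n-1 unit jobs follow it
    big = n * (n**2 + n)
    start = n**2 + n
    return big + sum(start + j for j in range(1, n))

for n in (10, 100, 1000):
    print(round(lpt_value(n) / opt_value(n), 4))   # tends to 2
```

The LPT schedule pays for the long job exactly as the optimum does, but additionally pushes all unit jobs past $n^2$, roughly doubling the objective.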
6 PTAS for $1|nr = 1, p_j = w_j, q = const|\sum w_j C_j$

Now we consider the special case when the number of supply dates is a constant (not part of the input), and at least 3 (for $q = 2$, there is an FPTAS for the general problem $1|nr = 1, q = 2|\sum w_j C_j$, Kis 2015), and $p_j = w_j$ for each job $j$. Theorem 3 implies that this version is still NP-hard. However, below we describe a PTAS for it. Let $P_{sum} := \sum_j p_j$ be the total processing time of the jobs. Let $\Delta := 1 + (\varepsilon/q^2)$. We will guess the total processing time of those jobs starting after $u_\ell$ for $\ell = 2,\dots,q$, where a guess is a $(q-1)$-dimensional vector of non-increasing numbers $(P^g_2,\dots,P^g_q)$, i.e., $P^g_\ell \ge P^g_{\ell+1} \ge 1$ for $\ell = 2,\dots,q-1$, and each $P^g_\ell$ is of the form $\Delta^t$ for some integer $t \ge 0$ with $\Delta^t \le P_{sum}$. Also, fix $P^g_1 := P_{sum}$. For any guess, define the set of large size jobs $M_\ell := \{ j \mid p_j \ge (\Delta - 1)P^g_\ell \}$. Note that $M_q \supseteq M_{q-1} \supseteq \dots \supseteq M_1$, since $P^g_q \le P^g_{q-1} \le \dots \le P^g_1$. Let $S_\ell$ be the complement of $M_\ell$, i.e., $S_\ell := \{ j \mid p_j < (\Delta - 1)P^g_\ell \}$. Clearly, $S_q \subseteq S_{q-1} \subseteq \dots \subseteq S_1$.
After these preliminaries, the PTAS for $1|nr = 1, p_j = w_j, q = const|\sum w_j C_j$ consists of the following steps:

1. Consider each possible guess $(P^g_2,\dots,P^g_q)$ of the total processing time of those jobs starting after the supply dates $u_2,\dots,u_q$, respectively. For each possible guess, define the sets of jobs $M_\ell$ and $S_\ell$ (see above), and perform Steps 2-5. After processing all the guesses, go to Step 6.
2. For each $\ell = 1,\dots,q$, choose at most $1/(\Delta - 1)$ large size jobs from $M_\ell$ (since the sets $M_\ell$ are not disjoint, care must be taken to choose each job at most once). For each possible choice $(T_1,\dots,T_q)$ of the large size jobs (where $T_\ell \subseteq M_\ell$), perform Steps 3-5. After evaluating all choices, continue with the next guess in Step 1.
3. Determine a schedule of the large jobs. That is, for $\ell = 1,\dots,q$, schedule the jobs in $T_\ell$ in any order contiguously after $u_\ell$, and after all the previously scheduled jobs.
4. Let $J^u_0$ be the set of unscheduled jobs. For $\ell = q, q-1,\dots,1$, repeat the following. In a general step with $\ell \ge 2$, pick jobs from $J^u_{q-\ell} \cap S_\ell$ in non-increasing $a_j/p_j$ order until the selected subset $K_\ell$ satisfies $p(K_\ell) + p(T_\ell) \ge P^g_\ell - (1/\Delta)P^g_{\ell+1}$, or no more jobs are left, i.e., $K_\ell = J^u_{q-\ell} \cap S_\ell$. In either case, insert the jobs of $K_\ell$ in any order after $u_\ell$ and after all the jobs in $T_1 \cup \dots \cup T_{\ell-1}$, and before all the jobs in $T_\ell \cup \bigcup_{\nu=\ell+1}^{q} (K_\nu \cup T_\nu)$ (pushing some of them to the right if necessary). Let $J^u_{q-\ell+1} := J^u_{q-\ell} \setminus K_\ell$ and continue with $\ell - 1$ until $\ell = 1$ or no more unscheduled jobs are left. For $\ell = 1$, just schedule all the remaining jobs from time $u_1 = 0$ on (pushing the already scheduled jobs to the right, if necessary). If the complete schedule obtained satisfies the resource constraints, then continue with Step 5, otherwise with the next choice of large size jobs in Step 2. See Fig. 5 for illustration.
5. Compute the objective function value of the complete schedule obtained in Step 4 and store this schedule as the best schedule if it is the first feasible schedule or if it is better than the best feasible schedule found so far. Continue with the next choice of large size jobs in Step 2.
6. Output the best schedule found in the previous steps.
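The guess enumeration of Step 1 is polynomial for fixed $q$: each coordinate is a power of $\Delta$ not exceeding $P_{sum}$, so there are $O((\log_\Delta P_{sum})^{q-1})$ guesses. The following sketch (our own illustration; `guesses` is not from the paper) generates them.

```python
# Sketch of the guess enumeration of Step 1 (our own illustration):
# each guess is a non-increasing (q-1)-tuple of powers of
# Delta = 1 + eps/q^2 that do not exceed P_sum.

from itertools import combinations_with_replacement
from math import floor, log

def guesses(P_sum, q, eps):
    delta = 1 + eps / q**2
    tmax = floor(log(P_sum, delta))          # largest t with delta^t <= P_sum
    powers = [delta**t for t in range(tmax + 1)]
    # non-increasing (q-1)-tuples of powers of delta
    for combo in combinations_with_replacement(range(tmax + 1), q - 1):
        yield tuple(powers[t] for t in reversed(combo))

gs = list(guesses(P_sum=100, q=3, eps=0.5))
print(len(gs))                               # polynomially many for fixed q
```

For $q = 3$ and $\varepsilon = 0.5$ there are a few thousand guesses; the exponent $q - 1$ is what confines this scheme to constant $q$.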

Theorem 5
The above algorithm is a PTAS for $1|nr = 1, p_j = w_j, q = const|\sum w_j C_j$.
Proof Let $I$ be any instance of the scheduling problem and $S^*$ an optimal solution for $I$. Let $\hat P^*_\ell$ be the total processing time of those jobs starting after $u_\ell$ in $S^*$. Clearly, $\hat P^*_\ell \ge \hat P^*_{\ell+1}$ for $\ell = 1,\dots,q-1$. Consider the guess $(P^g_2,\dots,P^g_q)$ in Step 1 of our algorithm such that $\hat P^*_\ell \le P^g_\ell < \Delta \hat P^*_\ell$ for each $\ell = 2,\dots,q$. Such a guess must exist by the definition of guesses.
For each $\ell = 1,\dots,q$, let us partition the set of jobs that start in the interval $[u_\ell, u_{\ell+1})$ in the schedule $S^*$ into subsets $T^*_\ell \subseteq M_\ell$ and $K^*_\ell \subseteq S_\ell$. Clearly, the sets $T^*_\ell$ are disjoint, and the cardinality of each $T^*_\ell$ is at most $1/(\Delta - 1)$, since $P^g_\ell \ge \hat P^*_\ell$, and thus each job in $T^*_\ell$ is of size at least $(\Delta - 1)P^g_\ell \ge (\Delta - 1)\hat P^*_\ell$, while the total size of all the jobs starting after $u_\ell$ in $S^*$ is $\hat P^*_\ell$ by definition. Therefore, the algorithm will enumerate and process the choice $(T^*_1,\dots,T^*_q)$ in Step 2. In the rest of the proof, we fix this choice of large jobs. After scheduling them in Step 3, the resulting schedule is like $S^*$, except that some jobs may be yet unscheduled. Thus, we perform Step 4, and let $S^A$ be the resulting schedule. In Step 4, the algorithm will find sets of jobs $K_1,\dots,K_q$, and it may well be the case that $K^*_\ell \neq K_\ell$ for some $\ell$, but we know that $\bigcup_{\ell=1}^{q} K_\ell = \bigcup_{\ell=1}^{q} K^*_\ell$, since in $S^*$ and $S^A$, for each $\ell$, the same subset $T^*_\ell$ of $M_\ell$ is chosen. We will prove that $S^A$ is a feasible schedule and that its objective function value is at most $1 + O(\varepsilon)$ times the optimum.
Step 4 of the algorithm at iteration $\ell$. Then, we deduce that in $S^A$, all the small jobs in $S_\ell \setminus \bigcup_{\nu=\ell+1}^{q} T^*_\nu$ are scheduled after $u_\ell$ in the iterations $\ell,\dots,q$, while in $S^*$, some jobs of $S_\ell \setminus \bigcup_{\nu=\ell+1}^{q} T^*_\nu$ may be started before $u_\ell$. Therefore, the first inequality in (3) holds in this case as well. To verify the second inequality in (3), note that since $p(K_\ell) + p(T^*_\ell) < P^g_\ell - (1/\Delta)P^g_{\ell+1}$ by assumption, (4) follows immediately. Then, using the induction hypothesis, we obtain (6), and then the same argument applies as above.
In order to prove (resource) feasibility, we need some further technical results. To simplify notation, suppose $S_1 \setminus \bigcup_{\ell=2}^{q} T^*_\ell = \{1,\dots,n_1\}$ and $a_j/p_j \ge a_{j+1}/p_{j+1}$ for $1 \le j < n_1$, i.e., job $j$ is the $j$th job in the ordered sequence. Let $X_t := \{1,\dots,t\}$ be the index set of the first $t \le n_1$ jobs with the largest $a_j/p_j$ ratio.
Claim 5 There exists a unique $t \in \{0,\dots,n_1\}$ such that $\bigcup_{\ell=2}^{q} K_\ell = X_t$.

Proof If $\bigcup_{\ell=2}^{q} K_\ell$ is the empty set, then $t = 0$ will do. Otherwise, let $t$ be the maximum element in $\bigcup_{\ell=2}^{q} K_\ell$. Indirectly, suppose there exists some $t' < t$ such that $t' \notin \bigcup_{\ell=2}^{q} K_\ell$, although $t'$ belongs to $S_\ell \setminus \bigcup_{\nu=\ell+1}^{q} T^*_\nu$ for some $\ell \ge 2$. Then, at some iteration in Step 4 of the algorithm, $t'$ would be chosen in place of $t$, which is a contradiction.
Corollary 1 For the job index $t$ defined in Claim 5, (7) holds.

Claim 6 For each $t = 1,\dots,n_1$ and $2 \le \ell \le q$, we have (8).

Proof We proceed by induction, the base case being $\ell = q$. Then, $K_q, K^*_q \subseteq S_q$. If $K_q$ is a proper subset of $S_q$, then we have $p(K^*_q) \le p(K_q)$ by (5). Otherwise, $K_q = S_q \supseteq K^*_q$, and we have $p(K^*_q) \le p(K_q)$ in this case, too. For the sake of a contradiction, suppose there exists $1 \le t \le n_1$ such that (8) does not hold. Let $t$ be the smallest such job index. Then, job $t \in K^*_q \setminus K_q$; otherwise, $t$ could be decreased. Then, $K_q$ does not contain any job $v$ with $v > t$; otherwise, before picking $v$, the algorithm would have picked $t$. But then we obtain (8) for this $t$, which is a contradiction. Now assume by induction that (8) holds for $\ell = k + 1$, with $k \ge 2$, and for all $1 \le t \le n_1$, and we check it for $\ell = k$. We distinguish two cases.
- $K_\ell \subset S_\ell \setminus \bigcup_{\nu=\ell+1}^{q} (T^*_\nu \cup K_\nu)$. Then, we have $p(K^*_\ell) \le p(K_\ell)$ by (5). For the sake of a contradiction, suppose there exists $1 \le t \le n_1$ such that (8) does not hold. Let $t$ be the smallest such job index. Then, it must be the case that $t \in (K^*_\ell \cup \dots \cup K^*_q) \setminus (K_\ell \cup \dots \cup K_q)$; otherwise, $t$ could be decreased. So suppose $t \in K^*_{\ell'}$ for some $\ell \le \ell' \le q$. Then, $\{t,\dots,n_1\} \cap K_\ell = \emptyset$, because if not, then, since $t \in K^*_{\ell'} \subseteq S_{\ell'} \subseteq S_\ell$, the algorithm would have chosen $t$ before picking some $v \in \{t+1,\dots,n_1\} \cap K_\ell$. Consequently, $K_\ell \subseteq X_{t-1}$. Now we use the induction hypothesis, where the first equation follows from $K_\ell \subseteq X_{t-1} \subset X_t$, the first inequality from the induction hypothesis, and the last inequality from the fact that $p(K_\ell) \ge p(K^*_\ell)$.
However, the derived inequality is just (8) for $\ell$ and $t$, a contradiction.

- $K_\ell = S_\ell \setminus \bigcup_{\nu=\ell+1}^{q} (T^*_\nu \cup K_\nu)$. Since $\bigcup_{\nu=\ell}^{q} K^*_\nu \subseteq S_\ell \setminus \bigcup_{\nu=\ell+1}^{q} T^*_\nu$, we can observe that each $t \in S_\ell \setminus \bigcup_{\nu=\ell+1}^{q} T^*_\nu$ belongs to one of the sets $K_{\ell'}$ with $\ell \le \ell' \le q$, but may not belong to any of the sets $K^*_{\ell'}$ with $\ell \le \ell' \le q$. Hence, the claim follows in this case, too.
Corollary 2 For each $\ell = 2,\dots,q$, we have (9).

Now we verify resource feasibility by showing that for each $\ell = 2,\dots,q$, $\sum_{j : S^A_j \ge u_\ell} a_j \ge \sum_{j : S^*_j \ge u_\ell} a_j$. This suffices to prove the feasibility of $S^A$, because then for each $\ell = 2,\dots,q$, the total resource consumption of those jobs that start after $u_\ell$ in $S^A$ is at least as much as that in $S^*$. Therefore, the total resource consumption of those jobs that start not later than $u_\ell$ in $S^A$ cannot be more than that in $S^*$. Hence, $S^A$ is a feasible schedule. Let $t$ be the job index defined in Claim 5. Now we compute the required inequality in a chain of steps (a)-(d), where (a), (b) and (d) are obvious, and (c) follows from three observations: (i) the first terms of the two expressions are the same by Corollary 1; (ii) the inequality between the second terms follows since the jobs are indexed in non-increasing $a_j/p_j$ order and $\bigcup_{\nu=\ell}^{q} K_\nu \subseteq X_t$; and (iii) the remaining inequality can be derived from the inequality of Corollary 2 by subtracting the same equation from both sides. Now we bound the objective function value of $S^A$. Again, we need a technical result. Let $H^A_\ell$ denote the idle time in $[u_\ell, u_{\ell+1})$ in the schedule $S^A$, and $G^A_\ell$ the total idle time before $u_{\ell+1}$.