Polynomial Time Approximation Scheme for Two Parallel Machines Scheduling with a Common Due Date to Maximize Early Work

We study the scheduling problem with a common due date on two parallel identical machines and the total early work criterion. The problem is known to be NP-hard. We prove a few dominance properties of optimal solutions of this problem, inspired by the results of auxiliary computational experiments. Tests were performed with a dynamic programming algorithm and list algorithms. Then, we propose a polynomial time approximation scheme based on structuring the problem input. Moreover, we discuss the relationship between the early work criterion and the related late work criterion. We compare the computational complexity and approximability of scheduling problems with both mentioned objective functions.


Introduction
Scheduling theory provides models and methods helpful in solving practical problems (cf., e.g., [1,2]). To cover different application fields, several performance measures have been proposed in the literature, which allow for modeling various optimization criteria. Criteria involving due dates seem to be especially interesting, because time restrictions are very often met in practice.
Within this paper, we analyze the early work criterion, which maximizes the amount of work executed before the due date (i.e. the total size of early parts of jobs) and the late work criterion, strictly related to this objective function, minimizing late parts of jobs [3]. These performance measures find many practical applications. They can be met, e.g., in control systems when collecting data from sensors [3], in agriculture in the process of harvesting crops or spreading fertilizers/pesticides on fields (cf., e.g., [4,5]), in manufacturing systems in planning technological processes or scheduling customer orders (cf., e.g., [6,7]), or in software engineering in the process of software testing or debugging (cf., [8]).
The considered criteria have been mainly studied for the single machine and for dedicated machines problems. There are relatively few results for parallel machines. For all mentioned environments, a lot of attention has been paid to the common due date cases. Such basic scheduling models are often met in practice too. They can appear in optimizing the execution of production tasks or any other activities within a given planning horizon (e.g., in constructing a factory production plan for a month). Similarly, we can optimize work assignment to employees within a shift (e.g., manufacturing or assembling parts, packing items ordered by customers in Internet shops). Representing the shift length with a common due date, we can use early/late work models for optimizing labor cost. The employer is interested in minimizing the working time of an employee exceeding the contracted period (i.e., the shift length), because it causes additional costs. From this point of view, the sequence of jobs is less important than their assignment before and after the due date, since the payment rate depends on whether work is done within normal working time or in overtime.
Within the presented research, we studied the two parallel identical machines scheduling problem with a common due date to maximize the total early work, which is known to be NP-hard [9]. We proved a few dominance properties of optimal solutions of this problem, which allowed us to propose a polynomial time approximation scheme. These properties were discovered based on the results of auxiliary computational experiments performed for dynamic programming [9] and list algorithms. Moreover, we studied the relationship between the late and early work, which has not been addressed formally before. We showed that these two criteria are equivalent when optimal solutions are considered, but they have a different nature when the existence of approximation algorithms with a bounded approximation ratio is taken into account.
The remainder of this paper is organized as follows. In Sect. 2 we shortly discuss the related results presented in the literature so far. In Sect. 3 we give the formal definition of the problem considered in this paper, and study the relationship between the late work and early work criteria, focusing on approximability issues. Section 4 presents dominance properties proved for the early work problem, inspired by computational experiments results, which are briefly summarized. The polynomial time approximation scheme is proposed in Sect. 5. The paper is concluded in Sect. 6.
Related Work

As we have mentioned in Sect. 1, there are relatively few results concerning the parallel identical machine environment studied in this paper, which corresponds to those applications where machines, factory units, workers, teams, etc. have identical capabilities and speeds. Polynomial time algorithms based on network flows were presented for preemptive problems with an arbitrary number of machines and job release times with the late work criterion (P|r_j, pmtn|Y) [18], and with the weighted late work criterion (P|r_j, pmtn|Y_w) [3,29]. Similar non-preemptive problems with unit processing times (Pm|r_j, p_j = 1|Y_w and P|r_j, p_j = 1|Y_w) are also polynomially solvable [8]. For non-preemptive cases, Blazewicz [3] proved the unary NP-hardness of problem P||Y, while Sterna [8] mentioned the intractability of problem P2|p_j = 1, chains|Y.
The late work minimization problem is closely related to the early work maximization problem, but these two problems are not fully equivalent, as we will show in the next section. The relationship between these two criteria has not been deeply studied yet. Obviously, a schedule that is optimal for late work minimization is also optimal for early work maximization. This observation has already been used in the literature while constructing optimal algorithms for the late work criterion (cf., e.g., [13,15]). However, polynomial time approximation schemes have been proposed only for the late work criterion for single machine problems (cf., [11,30,31]). They have not been studied for multiple machine problems with early/late work criteria yet.
The presented research strictly relates to the results obtained by Chen et al. [9]. Chen et al. [9] studied the two-machine problem with a common due date in offline and online modes, taking into account the total late work as well as the total early work criteria. They proved the binary NP-hardness of the offline version of P2|d_j = d|Y, where the set of jobs is known in advance, by showing a transformation from the partition problem and proposing a pseudopolynomial time dynamic programming algorithm. They also mentioned the unary NP-hardness of problem P|d_j = d|Y. In the online version, the set of jobs is unknown in advance, and a new job may appear in the system after scheduling the previous one. For the online model with the total early work criterion, Chen et al. [9] proposed an online algorithm for an arbitrary number of machines, called Extended First Fit (EFF), proving its approximation ratio dependent on the number of machines. Moreover, they showed that EFF is an optimal online algorithm for two machines, since its approximation ratio equals the lower bound of this ratio. Chen et al. [9] did not study the approximability of the original offline formulation of problem P2|d_j = d|Y. Their work also inspired Xu et al. [32], who reported computational experiments for heuristic approaches proposed for the weighted late work problem, i.e., P2|d_j = d|Y_w; they tested a few list scheduling algorithms and metaheuristics (an ant colony system, a genetic algorithm, and simulated annealing).
It is worth mentioning that a different concept of early work can also be found in the literature. Ben Yehoshua and Mosheiov [33] studied a single machine scheduling problem with the minimum total early work criterion. They minimize, instead of maximizing, the duration of the parts of jobs completed prior to their due dates, analyzing different scheduling models with application fields other than the ones considered within this paper. Moreover, late/early work scheduling is strictly related to the imprecise computation model (cf., [18] for a survey) and to scheduling models with variable processing times (cf., [34] or Chapter 6 in [35] for surveys).

Problem Formulation
In this paper we study problem P2|d_j = d|X, which requires scheduling a set of jobs J = {J_1, ..., J_j, ..., J_n} on two identical parallel machines M_1 and M_2 in a non-overlapping and non-preemptive way. Each job J_j is described by its processing time p_j. The quality of a solution, which requires assigning particular jobs to machines, is estimated with regard to the job completion times c_j and the common due date d, which represents the preferred completion time for all jobs (d_j = d). We maximize the size of the early parts of all jobs, X_j = min{p_j, max{0, d − (c_j − p_j)}}, that is, the total early work X = Σ_{j=1}^{n} X_j. As stressed before, the early work parameter, illustrated in Fig. 1, is complementary to the late work, which was originally proposed in the literature. The late work is defined as Y_j = min{p_j, max{0, c_j − d}}, and the total late work as Y = Σ_{j=1}^{n} Y_j. The late work criterion is closely related to the makespan and tardiness criteria [36], but unlike those two performance measures, it is upper bounded by the job processing time.
Within the remainder of the paper, we denote by p_sum = Σ_{j=1}^{n} p_j the total processing time of all jobs, by p_max = max_{j=1,...,n} {p_j} the processing time of the longest job, and by C_k the completion time (i.e., the workload) on machine M_k for k = 1, 2. Obviously, the schedule makespan equals C_max = max{C_1, C_2}. In the two-machine case considered in this research, the total early work equals X = min{d, C_1} + min{d, C_2}.
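These definitions can be illustrated with a small sketch. The helper name and the instance below are ours, not part of the formal model; since the job order on a machine does not affect the totals, X and Y depend only on the machine workloads:

```python
def early_late_work(jobs_on_m1, jobs_on_m2, d):
    """Total early work X and late work Y for a two-machine schedule.

    Job order on a machine does not matter for the totals: on each
    machine the early work equals min(d, workload)."""
    c1, c2 = sum(jobs_on_m1), sum(jobs_on_m2)
    x = min(d, c1) + min(d, c2)   # X = min{d, C1} + min{d, C2}
    y = (c1 + c2) - x             # Y = p_sum - X
    return x, y
```

For example, with d = 6 and machines loaded with jobs of lengths {4, 3} and {5, 2} (so C_1 = C_2 = 7), the sketch returns X = 12 and Y = 2, consistent with Y = p_sum − X.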
Obviously, determining the optimal total early work X* is equivalent to determining the optimal total late work Y*, since Y* = p_sum − X*, and in any feasible schedule Y_j = p_j − X_j for all jobs. Thus, problems P2|d_j = d|Y and P2|d_j = d|X are equivalent if optimal solutions are considered, but they have different natures if the quality of approximation is discussed.
Since the problem of late work minimization on two machines, P2|d_j = d|Y, is NP-hard [9], the corresponding problem of early work maximization, P2|d_j = d|X, is also NP-hard. Moreover, the pseudopolynomial time dynamic programming method proposed for P2|d_j = d|Y by Chen et al. [9] can be used for solving P2|d_j = d|X optimally, and we can classify the latter problem as binary NP-hard. The problems for an arbitrary number of machines, P|d_j = d|X and P|d_j = d|Y, are already unary NP-hard [9].

(Fig. 1: The early work parameter for early (J_i), partially early (J_k), and late (J_r) jobs.)
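Although we do not reproduce the recursion of Chen et al. [9] here, the observation that X* depends only on the achievable workload of one machine yields an equivalent subset-sum style dynamic program. The sketch below is our formulation, assuming integer processing times, and runs in pseudopolynomial O(n · p_sum) time:

```python
def optimal_early_work(p, d):
    """Optimal total early work X* on two identical machines with a
    common due date d (integer processing times assumed).

    A subset-sum style sketch in the spirit of the pseudopolynomial
    dynamic programming of Chen et al. [9], not their exact recursion:
    reachable[c] marks workloads c achievable on machine M1, and
    X* = max over reachable c of min{d, c} + min{d, p_sum - c}."""
    p_sum = sum(p)
    reachable = [False] * (p_sum + 1)
    reachable[0] = True
    for pj in p:
        for c in range(p_sum, pj - 1, -1):  # classic 0/1 knapsack sweep
            if reachable[c - pj]:
                reachable[c] = True
    return max(min(d, c) + min(d, p_sum - c)
               for c in range(p_sum + 1) if reachable[c])
```

For instance, for jobs {4, 3, 5, 2} and d = 7 the balanced split 7/7 gives X* = 14 = p_sum, and for jobs {5, 5, 5} and d = 5 the best value is X* = 2d = 10.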
On the other hand, the problem with the total late work, P2|d_j = d|Y, is non-approximable, due to the nature of this objective function, which takes the value zero if all jobs are early. Based on the results by Alon et al. [37], we conclude that there exists no polynomial time approximation algorithm with a finite performance guarantee for this problem; in particular, no polynomial time approximation scheme (PTAS) exists, unless P = NP. Taking into account that the NP-complete partition problem [38] transforms to the decision version of P2|d_j = d|Y [9], a hypothetical PTAS for this scheduling problem would solve the partition problem in polynomial time [37]. On the contrary, the problem with the total early work, P2|d_j = d|X, is approximable, which means that there exist approximation algorithms with a finite approximation ratio. Chen et al. [9] have already proposed a list algorithm with guaranteed performance: the Extended First Fit heuristic with an approximation ratio r_m dependent on the number of machines m (the method is able to solve the more general case with m machines too), and pseudopolynomial complexity O(nd^2). Within this paper, in Sect. 5, we propose a polynomial time approximation scheme, which constructs schedules with guaranteed (1 − 3ε) quality in O(max{n, (1/ε)2^{3/ε}}) time for any 0 < ε < 1. The proposal of a PTAS for P2|d_j = d|X allows us to classify this problem as approximable and underlines its different nature in comparison with the non-approximable problem P2|d_j = d|Y. Both problems are binary NP-hard, but problem P2|d_j = d|X belongs to the class of problems possessing approximation algorithms, class APX (cf., e.g., [37,38,40]), while P2|d_j = d|Y does not.

Dominance Properties
The theoretical studies on the properties of problem P2|d_j = d|X, crucial for proposing the polynomial time approximation scheme, were preceded by auxiliary computational experiments, which gave some insight into the structure of feasible and optimal solutions of this problem. We implemented the dynamic programming algorithm (DP) proposed by Chen et al. [9], which provided optimal solutions, and four list algorithms. Three of these methods assign jobs to the machine with the minimum workload: in the input order (MW), in the longest processing time (LPT) order, and in the shortest processing time (SPT) order, respectively. The fourth one, the Extended First Fit algorithm (EFF) proposed by Chen et al. [9], assigns jobs to the first suitable machine, or to the machine with the minimum workload. A machine is suitable if its workload after assigning the current job will not exceed the bound r_2 d, where r_2 = √5 − 1 denotes the approximation ratio r_m for m = 2 machines. All methods were implemented in Visual C# and tested for randomly generated instances on a PC with an AMD Athlon II X2 245 2.90 GHz and 3 GB RAM. To analyze the influence of the common due date on the solution process, we performed computational experiments for small instances, changing the value of d. For n ∈ {5, 6, ..., 20}, we generated 10 instances for each instance size with a uniform distribution of job processing times p_j ∈ [1, 20]. For each instance, several tests were performed with various due date values d = q·p_sum, where 0 < q ≤ 1, namely q ∈ {0.1, 0.15, 0.2, ..., 1}. Then, we performed similar experiments for larger instances with n ∈ {10, 20, ..., 150} for the fixed due date d = 0.5 p_sum (the value chosen after the first stage of experiments).
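The four list strategies can be sketched as follows; this is our reading of the rules described above (in particular of EFF's suitability test), not the original Visual C# implementation:

```python
import math

def list_schedule(p, d, rule):
    """Criterion value X produced by the list algorithms from the
    experiments: 'MW' (input order), 'LPT' and 'SPT' assign each job
    to the machine with the minimum workload; 'EFF' (Chen et al. [9])
    uses the first machine whose new workload would not exceed r2*d,
    with r2 = sqrt(5) - 1, falling back to the less loaded machine."""
    order = {'MW': list(p), 'EFF': list(p),
             'LPT': sorted(p, reverse=True),
             'SPT': sorted(p)}[rule]
    r2 = math.sqrt(5) - 1
    c = [0, 0]
    for pj in order:
        if rule == 'EFF':
            suitable = [k for k in (0, 1) if c[k] + pj <= r2 * d]
            k = suitable[0] if suitable else c.index(min(c))
        else:
            k = c.index(min(c))
        c[k] += pj
    return min(d, c[0]) + min(d, c[1])
```

On the toy instance {4, 3, 5, 2} with d = 6 all four strategies happen to reach the optimum X = 12, which matches the observation below that most small instances are easy.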
The analysis of optimal solutions obtained for various due dates showed that most instances are easy. Figure 2 presents the ratio (in percent) between the value of the optimal early work, determined by dynamic programming, and the upper bound of the criterion value, determined as min{ p sum , 2d}.
For small due dates, there are late jobs on both machines, and the criterion reaches its maximum value, equal to 2d. Similarly, for large due dates, idle time appears before d on both machines, and again the criterion reaches its maximum value, equal to p_sum in this case. From the optimization point of view, only instances with due dates between ca. 40% and ca. 55% of the total processing time are interesting, because playing with the job assignment might improve the criterion value. Thus, the experiments with large instances were performed for d = 0.5 p_sum, for which the difference between X* and its upper bound was the biggest. The influence of the due date value and the total (as well as the maximum) processing time on the instance difficulty inspired the theoretical studies on dominance properties presented in the remainder of this section.
The computational experiments showed a very high efficiency of the list algorithms for small instances, except for the EFF algorithm. Figure 3 presents the error made by particular algorithms (called the average approximation ratio), i.e., the ratio between the optimal early work and the criterion value obtained by these heuristics. EFF's behavior differs from the behavior of the remaining methods, because it is the only approach which does not assign jobs to the machine with the smaller current workload. EFF keeps assigning jobs to the first machine till its workload reaches r_2 d, which means that this workload always exceeds the due date, instead of keeping the balance between both machines as the other methods do. Therefore, a strategy for which the theoretical approximation ratio can be easily determined [9] might not be efficient in practice.
Within the experiments, we wanted to check not only the influence of the due date, but also of the number of jobs, on the instance difficulty and the efficiency of particular list algorithms (cf., Fig. 4, which presents the average approximation ratio for list algorithms for large instances of different sizes). If the number of jobs is large, then the way in which these jobs are assigned to the machines (i.e., the strategy used within a list algorithm) is less crucial than in the case of a small number of jobs. The differences in the criterion value between different schedules for the same instance are relatively less visible than for instances with a small number of jobs.
The same effect, a decrease of the average approximation ratio with the number of jobs for all algorithms except EFF, is also visible while comparing the aggregated results for all small and large instances (cf., Fig. 5). This means that for large instances, simple list heuristics (especially LPT) are very efficient and sufficient. There is no need to construct more sophisticated algorithms, such as metaheuristics, because list algorithms are able to solve the problem optimally in most cases.
The computational experiments showed that many instances of the early work maximization problem are trivial, i.e., any schedule is optimal or an optimal schedule can be determined in polynomial time. If the common due date is very small or very big, then an optimal schedule for these special instances can be constructed without exploring the solution space, despite the fact that the problem is NP-hard. The analysis of those instances resulted in the following dominance properties, useful for constructing an approximation scheme.

Property 4.1 If p_max ≥ ½ p_sum, then X* = min{d, p_max} + min{d, p_sum − p_max}.

Proof Let us assume that the longest job is scheduled on M_1 and the remaining jobs are scheduled on M_2; then C_1 = p_max ≥ ½ p_sum and C_2 = p_sum − p_max ≤ ½ p_sum ≤ C_1. Because jobs are non-preemptive, no unit of work can be shifted from M_1 to M_2, possibly before the common due date d, so the total early work cannot increase, and the schedule is optimal.
From Property 4.1, we can assume that for non-trivial instances of the problem:

p_max < ½ p_sum. (1)

Property 4.2 If p_max ≥ d, then X* = d + min{d, p_sum − p_max}.
Proof Let us assume that the longest job is scheduled on M_1 and the remaining jobs are scheduled on M_2; then C_1 = p_max and C_2 = p_sum − p_max. The workload on M_1 exceeds the due date, and the early work on this machine cannot increase, since it reaches its maximal value equal to d. The remaining jobs are assigned to M_2, so the early work cannot be increased on this machine either, and the schedule is optimal.
From Property 4.2, we can assume that for non-trivial instances of the problem:

p_max < d (2)

and, from (1) and (2), that:

p_max < min{½ p_sum, d}. (3)

Property 4.3
If p sum ≤ d, then X * = p sum .
Proof Any schedule without idle time is optimal, because the makespan cannot exceed the common due date, and all jobs are early.
From Property 4.3, we can assume that for non-trivial instances of the problem:

p_sum > d. (4)

Property 4.4 For any instance of P2|d_j = d|X, min{d, p_sum} ≤ X* ≤ min{2d, p_sum}.
Proof The upper bound of the total early work is determined by the total processing time p sum (if all jobs are early), or by the doubled due date value 2d (if the intervals till the due date on both machines are completely filled with jobs). Taking into account Property 4.3, if p sum ≤ d, then the lower bound of the total early work is determined by the total processing time (all jobs are early). If p sum > d, then by executing all jobs on one machine, we can construct a schedule with the total early work equal to d, but a better schedule might be obtained by dividing jobs between machines.
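The bounds of Property 4.4 can be checked by exhaustive search on tiny instances. The brute-force helper below is ours, exponential in n, and serves only as a sanity check:

```python
from itertools import product

def brute_force_x_star(p, d):
    """Exact X* obtained by trying every assignment of jobs to the two
    machines (practical only for very small instances)."""
    best = 0
    for mask in product((0, 1), repeat=len(p)):
        c = [0, 0]
        for pj, k in zip(p, mask):
            c[k] += pj
        best = max(best, min(d, c[0]) + min(d, c[1]))
    return best

# The bounds min{d, p_sum} <= X* <= min{2d, p_sum} hold on a small sample.
for p, d in [([4, 3, 5, 2], 6), ([9, 1, 1], 4), ([2, 2], 10)]:
    x_star = brute_force_x_star(p, d)
    assert min(d, sum(p)) <= x_star <= min(2 * d, sum(p))
```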
Taking into account Property 4.4 and (4), we can assume that for non-trivial instances of the problem:

d ≤ X* ≤ min{2d, p_sum}. (5)

Property 4.5 If p_sum ≥ 3d, then X* = 2d.
Proof Consider the schedule obtained with the longest processing time rule LPT, i.e., by assigning the longest unscheduled job to the machine with the smaller workload. We claim that this schedule is optimal. If min{C_1, C_2} ≥ d, then X* = 2d, because the workload exceeds the due date on both machines. Let us assume that min{C_1, C_2} < d. We can assume without loss of generality that C_1 < C_2, which implies C_1 < d. Note that in the LPT schedule, the difference between the workloads on both machines is bounded by the maximum processing time at any stage of this algorithm, i.e., |C_2 − C_1| ≤ p_max. Indeed, if before assigning job J_j the workloads on both machines are the same, C_1 = C_2, then after assigning this job |C_2 − C_1| = p_j ≤ p_max. If before assigning job J_j the workloads on both machines are different, we can assume without loss of generality that C_1 < C_2 (i.e., C_2 − C_1 > 0); then after assigning this job to machine M_1, the difference between the workloads equals |C_2 − C_1 − p_j| ≤ max{C_2 − C_1, p_j} ≤ p_max. Based on Property 4.2 and (2), we can assume that p_max < d. Hence, in the LPT schedule C_2 − C_1 ≤ p_max < d, so C_2 < C_1 + d < 2d. On the other hand, p_sum ≥ 3d and C_1 < d imply C_2 = p_sum − C_1 > 3d − d = 2d. The assumption that C_1 < d leads to the contradiction C_2 < 2d and C_2 > 2d. Hence, min{C_1, C_2} < d is impossible and the property holds.
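The LPT argument above can be illustrated numerically. In the sketch below, the instance (chosen by us) satisfies p_sum ≥ 3d and p_max < d, and LPT indeed loads both machines up to the due date, giving X = 2d:

```python
def lpt_workloads(p):
    """Workloads produced by the LPT rule: repeatedly assign the longest
    remaining job to the machine with the smaller workload."""
    c = [0, 0]
    for pj in sorted(p, reverse=True):
        c[c.index(min(c))] += pj
    return c

# An instance with p_sum = 16 >= 3d and p_max = 4 < d for d = 5:
# as Property 4.5 predicts, LPT loads both machines past the due date.
d = 5
c1, c2 = lpt_workloads([4, 4, 4, 2, 2])
assert min(c1, c2) >= d
assert min(d, c1) + min(d, c2) == 2 * d
```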
From Property 4.5 and (4), we can assume that for non-trivial instances of the problem:

d < p_sum < 3d. (6)

Polynomial Time Approximation Scheme
We propose an approximation scheme (cf., e.g., [39]) constructing a feasible solution for P2|d_j = d|X based on a simplified instance of this problem. We adopt the idea of the polynomial time approximation scheme given by Schuurman and Woeginger [40] for the two-machine problem with the makespan criterion, P2||C_max, based on structuring the problem input, which was originally proposed by Sahni [41].
Algorithm A_ε works in three phases. In Phase 1, for the original instance I of problem P2|d_j = d|X, with a set of jobs J, we construct the simplified instance Ī, with a set of jobs J̄, with regard to 0 < ε < 1. We copy the long jobs, with p_j > εd and the total length L, from I into Ī, and replace the short jobs, with p_j ≤ εd and the total length S in I, by ⌈S/(εd)⌉ jobs with processing time εd in Ī. In the simplified instance Ī, we have long jobs of the total length L̄ = L and short jobs of the total length S̄ = ⌈S/(εd)⌉εd. In Phase 2, we construct an optimal schedule for the simplified instance Ī with the optimal total early work X̄*. The workload C̄*_k on machine M_k for k = 1, 2 contains L̄*_k units of long jobs and S̄*_k units of short jobs, i.e., C̄*_k = L̄*_k + S̄*_k, L̄ = L̄*_1 + L̄*_2, S̄ = S̄*_1 + S̄*_2. In Phase 3, presented without loss of generality for C̄*_1 ≥ C̄*_2 (the case C̄*_1 < C̄*_2 is solved analogously), we transform the optimal schedule for the simplified instance Ī into a feasible schedule for the original instance I with the total early work X. In the schedule for I, we assign the long jobs with p_j > εd to the same machines as in the optimal schedule for Ī, at most S̄*_1 + 2εd units of short jobs with p_j ≤ εd to machine M_1, and the remaining jobs to machine M_2. The workload C_k on machine M_k for k = 1, 2 in the transformed schedule for instance I contains L_k = L̄*_k units of long jobs and S_k units of short jobs, i.e., C_k = L_k + S_k.
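The three phases can be sketched as follows. The enumeration in Phase 2 and the greedy packing of short jobs in Phase 3 (filling M_1 until its short-job load reaches the load it had in the simplified optimum, which keeps it below the S̄*_1 + 2εd cap) are our concrete reading of the description above, not the authors' exact implementation:

```python
from itertools import combinations
from math import ceil

def ptas_early_work(p, d, eps):
    """A sketch of Algorithm A_eps (Phases 1-3) for P2|dj = d|X; the
    Phase 3 packing rule is one reading of the 'at most S1* + 2*eps*d
    units of short jobs' description in the text."""
    unit = eps * d
    long_jobs = [pj for pj in p if pj > unit]
    short_jobs = [pj for pj in p if pj <= unit]
    # Phase 1: replace short jobs of total length S by ceil(S/(eps*d))
    # jobs of length eps*d.
    bar = long_jobs + [unit] * ceil(sum(short_jobs) / unit)

    # Phase 2: optimal schedule for the simplified instance by
    # enumerating all subsets (the instance has O(1/eps) jobs).
    best_x, best_m1 = -1.0, set()
    idx = range(len(bar))
    for r in range(len(bar) + 1):
        for sub in combinations(idx, r):
            c1 = sum(bar[i] for i in sub)
            x = min(d, c1) + min(d, sum(bar) - c1)
            if x > best_x:
                best_x, best_m1 = x, set(sub)

    # Phase 3: copy the long-job assignment; the machine that was more
    # loaded in the simplified optimum receives short jobs until its
    # short-job load reaches the simplified one, the rest go elsewhere.
    n_long = len(long_jobs)
    sides = []
    for side in (best_m1, set(idx) - best_m1):
        lk = sum(bar[i] for i in side if i < n_long)
        sk = sum(bar[i] for i in side if i >= n_long)
        sides.append((lk, sk))
    sides.sort(key=lambda t: t[0] + t[1], reverse=True)
    (l1, s1_bar), (l2, _) = sides
    c, used = [l1, l2], 0.0
    for pj in short_jobs:
        if used < s1_bar:
            c[0] += pj
            used += pj
        else:
            c[1] += pj
    return min(d, c[0]) + min(d, c[1])
```

For example, for jobs {4, 3, 5, 2} with d = 6 and ε = 0.25 all jobs are long and the sketch returns the optimum X = 12; for jobs {4, 1, 1, 1} with d = 4 and ε = 0.3 the three unit jobs are rounded to jobs of length εd, and the transformed schedule still attains X* = 7.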
In Theorem 5.1 we show that the above proposed method solves the considered problem in polynomial time. Then, in Theorem 5.2, we prove that it constructs schedules with guaranteed quality.

Theorem 5.1 Algorithm A_ε constructs a feasible schedule for problem P2|d_j = d|X in O(max{n, (1/ε)2^{3/ε}}) time for any given constant 0 < ε < 1.

Proof In Algorithm A_ε, simplifying instance I to instance Ī (Phase 1) and constructing the feasible solution for the original instance I based on the optimal schedule for Ī (Phase 3) take O(n) time. In Phase 2, we can use an enumeration algorithm to construct the optimal solution for the simplified instance Ī, which explores all subsets of jobs, assigning them to M_1 and assigning the remaining jobs to M_2. Such an approach runs in O((1/ε)2^{3/ε}) time, since the simplified instance Ī contains at most ⌈p_sum/(εd)⌉ jobs, and due to Property 4.5 and (6), we know that in non-trivial instances of the problem p_sum < 3d, so ⌈p_sum/(εd)⌉ ≤ ⌈3/ε⌉. Thus, the number of jobs in the simplified instance Ī is bounded by the constant 3/ε, and it is independent of the original problem size. The enumeration method, checking all 2^{3/ε} subsets of jobs, each in time O(3/ε) necessary to determine the criterion value, is an O((1/ε)2^{3/ε})-time algorithm, which is polynomial for the original instance I (although it is obviously exponential with regard to 1/ε). Thus, the overall time complexity of the method (Phases 1-3) is bounded by O(max{n, (1/ε)2^{3/ε}}) for any given constant 0 < ε < 1.
To prove, in Theorem 5.2, that Algorithm A ε is an approximation algorithm for the considered problem, we will compare in Lemma 5.1 optimal schedules for the original and simplified instances.

Lemma 5.1
The optimal early work X̄* for the simplified instance Ī is at least (1 − ε) times the optimal early work X* for the original instance I, i.e., X̄* > (1 − ε)X*.

Proof Let us consider the optimal schedule for the original instance I of the problem with the optimal early work X* = min{d, C*_1} + min{d, C*_2}, where the workload on machine M_k is the sum of the lengths of long and short jobs, i.e., C*_k = L*_k + S*_k for k = 1, 2. We construct a feasible schedule for the simplified instance Ī by keeping on machine M_k the assignment of long jobs (L̄_k = L*_k) and replacing the short jobs with ⌈S*_k/(εd)⌉ jobs of length εd. In this schedule for instance Ī, the early work equals X̄ = min{d, C̄_1} + min{d, C̄_2} and C̄_k = L̄_k + S̄_k = L*_k + S̄_k, where S̄_k = ⌈S*_k/(εd)⌉εd.

According to Phase 1 of Algorithm A_ε, we should have in the simplified instance Ī exactly ⌈S/(εd)⌉ = ⌈(S*_1 + S*_2)/(εd)⌉ short jobs of length εd. Note that:

S*_1 + S*_2 = ⌊(S*_1 + S*_2)/(εd)⌋εd + r, where 0 ≤ r < εd,
S*_1 = ⌊S*_1/(εd)⌋εd + r_1, where 0 ≤ r_1 < εd,
S*_2 = ⌊S*_2/(εd)⌋εd + r_2, where 0 ≤ r_2 < εd. (7)

Consider now the transformation performed in Phase 3 of Algorithm A_ε. Machine M_1 receives S_1 units of short jobs from instance I, where S̄*_1 ≤ S_1 ≤ S̄*_1 + 2εd. The remaining short jobs from instance I are assigned to machine M_2 within the interval S̄*_2; the duration of these jobs equals S_2 = S − S_1. Taking into account (7), the time reserved for short jobs on machine M_2 (S̄*_2) is big enough to schedule within it short jobs of the total length S_2 (S̄*_2 > S − S̄*_1 − εd > S_2, i.e., S̄*_2 > S_2). Moreover, because S̄*_1 + S̄*_2 = S̄ < S + εd, we have

S̄*_2 − 2εd ≤ S_2 < S̄*_2. (8)

The change of workloads on machine M_k for k = 1, 2 between the schedules for Ī and I is determined by the change of the duration of short jobs, because the assignment of long jobs is identical in both schedules. We have C_k = L_k + S_k = L̄*_k + S_k and C̄*_k = L̄*_k + S̄*_k, which implies that C_k − C̄*_k = S_k − S̄*_k for k = 1, 2. On machine M_1, due to (7), we have

0 < C_1 − C̄*_1 ≤ 2εd, (9)

and on machine M_2, due to (8), we have

−2εd ≤ C_2 − C̄*_2 < 0. (10)

The workload on M_1 increases after the transformation of the optimal schedule for the simplified instance Ī to the feasible schedule for the original instance I, causing a possible increase of the early work on M_1, while the workload on M_2 decreases, causing a possible decrease of the early work on M_2. According to Phase 3 of Algorithm A_ε, we know that C̄*_1 ≥ C̄*_2. Due to (9) and (10), we have C_1 > C̄*_1 and C_2 < C̄*_2, and in the constructed schedule C_1 > C_2. We have to consider two cases.
If in the optimal schedule for the simplified instance Ī there is C̄*_1 ≥ d, then in the final schedule for the original instance I we have C_1 > d. This means that the change in the workload on M_1 does not influence the early work, because it concerns jobs executed after the common due date. Consequently, the total early work may only decrease, X̄* ≥ X, due to the workload decrease on machine M_2 (if C_2 ≥ d, then the criterion value X does not change with regard to X̄*, because the workload on M_2 exceeds the due date in both schedules). The decrease in the total early work is bounded by C̄*_2 − C_2, i.e., X̄* − X ≤ C̄*_2 − C_2 ≤ 2εd and, finally, X ≥ X̄* − 2εd.
If in the optimal schedule for the simplified instance Ī there is C̄*_1 < d, then we have C_2 < d. In both schedules, the workloads on M_2 are strictly smaller than d, and the workload decrease on M_2 (C_2 < C̄*_2) reduces the early work on this machine by −2εd ≤ ΔX_2 < 0 according to (10). On machine M_1, the workload increases (C_1 > C̄*_1), possibly increasing the early work by at most 2εd units according to (9).
However, if C_1 > d, then the increase in the early work is smaller than C_1 − C̄*_1, because a part of the short jobs from the original instance I assigned to this machine by the algorithm is executed after d, and their processing times do not contribute to the criterion value. The increase of the early work is then bounded by d − C̄*_1 > 0 instead of C_1 − C̄*_1. Summing up, the workload increase on M_1 increases the early work on this machine by 0 < ΔX_1 ≤ 2εd. Finally, as a result of the instance transformation, the total early work changes by −2εd < ΔX_1 + ΔX_2 < 2εd, and the total early work of the constructed feasible schedule for the original instance I cannot be smaller than the optimal early work for the simplified instance Ī by more than 2εd, i.e., X > X̄* − 2εd. Combining this bound with Lemma 5.1 and with X* ≥ d from (5), we obtain X > X̄* − 2εd > (1 − ε)X* − 2εd ≥ (1 − ε)X* − 2εX* = (1 − 3ε)X*.
Based on Theorems 5.1 and 5.2, the family of algorithms A_ε is a polynomial time approximation scheme (PTAS) for problem P2|d_j = d|X.

Conclusions
Within this paper, we returned to the studies on late work minimization/early work maximization on parallel identical machines, which is a classical scheduling model finding many practical applications. We collected results on early/late work scheduling problems which have been published in the literature since the last survey paper on this subject appeared. We discussed formally the relationship between the late work and the early work. We pointed out that these two criteria are equivalent when optimal solutions are considered, but they have a different nature when approximate solutions are taken into account. The total late work problem, P2|d_j = d|Y, is non-approximable, while the total early work problem, P2|d_j = d|X, is approximable: for the first problem no polynomial time approximation scheme exists (unless P = NP), while for the latter one we proposed a PTAS.
The computational experiments reported in this paper, performed for a few list scheduling methods, showed that developing more advanced optimization algorithms for the analyzed problem, such as metaheuristics, cannot be considered a real challenge. The quality of list scheduling solutions is close to optimal for the considered case. Only very small instances, with very specific due date values, are difficult to solve. Proposing metaheuristics was and still is a very popular research direction, not only in the field of late work scheduling (cf., e.g., [16,19,24,32,42-44]). The results presented in this paper showed that simple strategies, such as list algorithms, should not be a priori rejected from the analysis by assuming their low efficiency.
In contrast to metaheuristics, approximation algorithms are still worth looking for, because they give a guarantee of the solution quality, which might be crucial for some practical applications. The analysis of computational results inspired theoretical studies on problem P2|d_j = d|X, which resulted in proving a few dominance properties helpful in constructing such approximation methods. Within the paper, we proposed the polynomial time approximation scheme for the considered problem, constructing schedules with guaranteed (1 − 3ε) quality in O(max{n, (1/ε)2^{3/ε}}) time, for any given constant 0 < ε < 1.