Makespan Minimization with OR-Precedence Constraints

We consider a variant of the NP-hard problem of assigning jobs to machines to minimize the completion time of the last job. Usually, precedence constraints are given by a partial order on the set of jobs, and each job requires all its predecessors to be completed before it can start. In his seminal paper, Graham (1966) presented a simple 2-approximation algorithm, and, more than 40 years later, Svensson (2010) proved that 2 is essentially the best approximation ratio one can hope for in general. In this paper, we consider a different type of precedence relation that has not been discussed as extensively and is called OR-precedence. In order for a job to start, we require that at least one of its predecessors is completed - in contrast to all its predecessors. Additionally, we assume that each job has a release date before which it must not start. We prove that Graham's algorithm has an approximation guarantee of 2 also in this setting, and present a polynomial-time algorithm that solves the problem to optimality, if preemptions are allowed. The latter result is in contrast to classical precedence constraints, for which Ullman (1975) showed that the preemptive variant is already NP-hard. Our algorithm generalizes a result of Johannes (2005) who gave a polynomial-time algorithm for unit processing time jobs subject to OR-precedence constraints, but without release dates. The performance guarantees presented here match the best-known ones for special cases where classical precedence constraints and OR-precedence constraints coincide.


Introduction
In this paper, we consider the problem of scheduling jobs with OR-precedence constraints on identical parallel machines to minimize the total length of the project. Let [n] := {1, . . . , n} be the set of jobs and m be the number of machines. Each job j ∈ [n] is associated with a processing time p j ≥ 0 and a release date r j ≥ 0. The precedence constraints are given by a directed graph G = ([n], E). The set of predecessors of a job j ∈ [n] is P(j) = {i ∈ [n] | (i, j) ∈ E}.
A schedule is an assignment of the jobs in [n] to the machines such that (i) each job j is processed by a machine for p j units of time, and (ii) each machine processes only one job at a time. Depending on the problem definition, jobs may be allowed to preempt and continue on a different machine (preemptive scheduling) or not (non-preemptive scheduling). The start time and completion time of job j ∈ [n] are denoted by S j and C j , respectively. Note that C j ≥ S j + p j and equality holds if job j ∈ [n] is not preempted.
A schedule is called feasible if (i) S j ≥ min{C i | i ∈ P(j)} for all jobs j ∈ [n] with P(j) ≠ ∅, and (ii) S j ≥ r j for all jobs j ∈ [n]. A job without predecessors may start at any point in time t ≥ r j . In other words, every job with predecessors requires that at least one of its predecessors is completed before it can start, and no job may start before it is released. A job j is called available at time t ≥ 0 if t ≥ r j and, unless P(j) = ∅, there is i ∈ P(j) with C i ≤ t. Our goal is to determine a feasible schedule that minimizes the makespan, which is defined as C max := max j∈[n] C j . Extending the notation of [16] and the three-field notation of Graham et al. [11], the preemptive and non-preemptive variants of this problem are denoted by P | r j , or-prec, pmtn | C max and P | r j , or-prec | C max , respectively.
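As a concrete illustration, conditions (i) and (ii) can be checked mechanically. The following is a minimal sketch of our own (all names are illustrative, not from the paper) that tests given start and completion times against OR-precedence constraints and release dates:

```python
def is_feasible(S, C, pred, r):
    """Check feasibility conditions (i) and (ii).  S, C map jobs to start and
    completion times, pred maps each job to its set of OR-predecessors,
    r to its release date.  (Illustrative helper, not from the paper.)"""
    for j in S:
        if S[j] < r[j]:                  # (ii) no job starts before its release date
            return False
        if pred[j] and S[j] < min(C[i] for i in pred[j]):
            return False                 # (i) no predecessor completed early enough
    return True
```

Note that condition (i) uses the minimum over the predecessors' completion times: a job may start as soon as the first of its OR-predecessors finishes.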
From now on we assume w.l.o.g. that all processing times and release dates of jobs in [n] are positive and non-negative integers, respectively. Note that this can be done by suitable scaling and that any job with zero processing time may be disregarded. As discussed below, the non-preemptive problem is NP-hard, which is why we are interested in approximation algorithms. Let Π be a minimization problem, and ρ ≥ 1. Recall that a ρ-approximation algorithm for Π is a polynomial-time algorithm that returns a feasible solution with objective value at most ρ times the optimal objective value.
Non-Preemptive Scheduling. Garey and Johnson [6] proved that the non-preemptive variant is already strongly NP-hard in the absence of precedence constraints and release dates. It remains NP-hard, even if the number of machines is fixed to m = 2 [19]. In his seminal paper, Graham [9] showed that a simple algorithm called List Scheduling achieves an approximation guarantee of 2: Consider the jobs in arbitrary order. Whenever a machine is idle, execute the next available job in the order on this machine. If there is no available job, then wait until a job completes.
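A minimal sketch of List Scheduling adapted to OR-precedence constraints and release dates (the data layout and all names are our own): whenever a machine falls idle, it executes the first available job in the given order; if no job is available, it waits until the next release date or completion time.

```python
def list_scheduling(order, p, r, pred, m):
    """Non-preemptive List Scheduling (a sketch).  order: job list; p, r:
    processing times and release dates; pred: OR-predecessor sets; m: machines.
    Returns the completion time of every job."""
    C = {}                     # completion times of scheduled jobs
    free = [0] * m             # times at which the machines become idle
    remaining = list(order)
    while remaining:
        free.sort()
        t = free[0]            # earliest idle machine
        # available: released, and some OR-predecessor already completed
        avail = [j for j in remaining
                 if r[j] <= t and (not pred[j]
                     or any(i in C and C[i] <= t for i in pred[j]))]
        if avail:
            j = avail[0]       # first available job in the order
            remaining.remove(j)
            C[j] = t + p[j]
            free[0] = C[j]
        else:
            # no available job: wait for the next release or completion
            events = [r[j] for j in remaining if r[j] > t]
            events += [c for c in C.values() if c > t]
            free[0] = min(events)
    return C
```

On a feasible instance the `events` list is never empty when jobs remain, since every unscheduled job eventually becomes available after some release or completion.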
If the jobs are sorted in order of non-increasing processing times, then List Scheduling is a 4/3-approximation [10]. Hochbaum and Shmoys [13] presented a (1 + ε)-approximation for P | | C max , whose running time was later improved; the currently best-known bound is due to Jansen [15]. Mnich and Wiese [21] showed that P | | C max is fixed-parameter tractable with parameter max j∈[n] p j . If we add non-trivial release dates, then List Scheduling with an arbitrary job order is a 2-approximation [12], and it is 3/2-approximate if the jobs are sorted in order of non-increasing processing times [3]. Hall and Shmoys [12] provided a (1 + ε)-approximation for P | r j | C max .
In contrast to OR-precedence constraints that are considered in this paper, the standard precedence constraints, where each job requires that all its predecessors are completed, will be called AND-precedence constraints. Minimizing the makespan with AND-precedence constraints is strongly NP-hard, even if the number of machines is fixed to m = 2 and the precedence graph consists of disjoint paths [4]. List Scheduling is still 2-approximate in the presence of AND-precedence constraints if the order of the jobs is consistent with the precedence constraints [9,10]. The approximation factor can also be preserved for non-trivial release dates [12]. Assuming a variant of the Unique Games Conjecture [17] together with a result of Bansal and Khot [1], Svensson [24] proved that this is essentially best possible.
If the precedence constraints are of AND/OR-structure and the precedence graph is acyclic, then the problem without release dates still admits a 2-approximation algorithm [7]. Erlebach, Kääb and Möhring [5] showed that the assumption on the precedence graph is not necessary. Both results first transform the instance into an AND-precedence constrained instance by fixing one predecessor for each OR-precedence constraint. Then they solve the resulting instance with AND-precedence constraints using List Scheduling. Our first result shows that the makespan of every feasible schedule without unnecessary idle time on the machines is at most twice the optimal makespan, even if non-trivial release dates are involved.

Theorem 1. List Scheduling is a (2 − 1/m)-approximation for P | r j , or-prec | C max .
The proof of Theorem 1 is contained in Section 3. The key ingredient for proving the performance guarantee is the concept of minimal chains that we introduce in Section 2. Informally, the length of the minimal chain of a job j ∈ [n] is the minimum amount of time needed to complete j, and the minimal chain of j is a set of jobs that have to be processed in order to complete j in that time.
Preemptive Scheduling. If preemptions are allowed, the algorithm of McNaughton [20] computes an optimal schedule in the absence of release dates and precedence constraints. Ullman [25] showed that the problem with AND-precedence constraints is NP-hard, even if all jobs have unit processing time. Note that if p j = 1 for all jobs j, then there is no benefit in preemption. This implies that the preemptive problem with AND-precedence constraints is also NP-hard. However, the preemptive variant becomes solvable in polynomial time for certain restricted precedence graphs. Precedence graphs that consist of outtrees are of special interest to us, since then AND- and OR-precedence constraints coincide.
A number of polynomial-time algorithms were proposed for AND-precedence constraints in the form of an outtree. Hu [14] proposed the first such algorithm for unit processing time jobs, and Brucker, Garey and Johnson [2] presented an algorithm that can also deal with non-trivial release dates. Muntz and Coffman [23] gave a polynomial-time algorithm, if preemptions are allowed. The algorithm of Gonzalez and Johnson [8] has an asymptotically better running time and uses fewer preemptions than the one in [23]. Finally, Lawler [18] proposed a polynomial-time algorithm for the preemptive variant that can deal with non-trivial release dates, if the precedence graph consists of outtrees. For general OR-precedence constrained unit processing time jobs, Johannes [16] presented a polynomial-time algorithm that is similar to Hu's algorithm [14]. We improve on this result by analyzing the structure of an optimal solution of P | r j , or-prec, pmtn | C max . More precisely, we show that there is an optimal preemptive schedule in which each job is preceded by its minimal chain. We then exploit this structure to transform the instance into an equivalent AND-precedence constrained instance, to which we can apply known algorithms, e.g. those of [14,23,2,8,18]. Thereby we obtain our second result; the proof is contained in Section 4.

Theorem 2. P | r j , or-prec, pmtn | C max can be solved to optimality in polynomial time.
Since there is no need to preempt if p j = 1 for all j ∈ [n], we immediately obtain the following corollary. This generalizes the aforementioned result of [16].
Corollary 3. P | r j , or-prec, p j = 1 | C max can be solved to optimality in polynomial time.

Preliminaries and Minimal Chains
In order to simplify some arguments, we introduce a dummy job s with p s = r s = 0 that shall precede all jobs. That is, we assume that the set of jobs is N = [n] ∪ {s}, and introduce an arc (s, j) for all j ∈ [n] with P(j) = ∅ in the precedence graph G. Note that there is a feasible schedule, if and only if every job j ∈ [n] is reachable from s in G = (N, E). In particular, we can decide in linear time, e.g. via breadth-first-search, whether there exists a feasible schedule. Henceforth, we will assume that the instances we consider admit a feasible schedule.
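The reachability test can be sketched as follows (a hypothetical helper of our own): jobs without predecessors are exactly the successors of the dummy source s, so a breadth-first search from those roots decides whether a feasible schedule exists.

```python
from collections import deque

def has_feasible_schedule(jobs, pred):
    """A feasible schedule exists iff every job is reachable from the dummy
    source s, i.e. from the jobs without predecessors (sketch, names ours)."""
    succ = {j: [] for j in jobs}
    for j in jobs:
        for i in pred[j]:
            succ[i].append(j)
    roots = [j for j in jobs if not pred[j]]   # successors of the dummy job s
    seen = set(roots)
    queue = deque(roots)
    while queue:
        i = queue.popleft()
        for j in succ[i]:
            if j not in seen:
                seen.add(j)
                queue.append(j)
    return len(seen) == len(jobs)
```

This runs in linear time in the size of the precedence graph, matching the remark above.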
Note that P | or-prec | C max is a generalization of P | | C max , which is already strongly NP-hard [6]. If G is an outtree rooted at s, then OR- and AND-precedence constraints are equivalent. The NP-hardness result of Du, Leung and Young [4] implies that the problem remains strongly NP-hard, even if the number of machines is fixed.
In order to analyze the performance of our algorithms, we use the concept of so-called minimal chains. Informally, a minimal chain of a job k is a set of jobs that need to be scheduled so that k can complete as early as possible. To define minimal chains properly, we use the notion of an earliest start schedule [5,22,16]. Although these schedules are well-defined for general AND/OR-scheduling, we only need and define them in the OR-scheduling context.
The earliest start schedule is defined as a schedule on an infinite number of machines such that (i) a job j without predecessors starts at time r j , and (ii) a job j with P(j) ≠ ∅ starts at time max{r j , min{C i | i ∈ P(j)}}. Clearly, an earliest start schedule respects the OR-precedence constraints of the instance, since every job is preceded by at least one of its predecessors according to (ii). Also, the completion time of a job in any feasible schedule on m machines is bounded from below by its completion time in the earliest start schedule. That is, if C j denotes the completion time of job j in the earliest start schedule, the optimum makespan satisfies C * max ≥ max{C j | j ∈ N }. Note that an earliest start schedule is not necessarily unique, but the start and completion times of all jobs are fixed. Earliest start schedules can be constructed in polynomial time by iteratively scheduling every job as early as possible [5].
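An earliest start schedule can be computed, for instance, by the following fixed-point iteration (our own sketch of "iteratively scheduling every job as early as possible"): repeatedly relax every job's start time according to rules (i) and (ii) until nothing changes.

```python
def earliest_start_schedule(jobs, p, r, pred):
    """Compute start/completion times of an earliest start schedule on
    infinitely many machines by fixed-point iteration (sketch, names ours)."""
    INF = float('inf')
    C = {j: INF for j in jobs}
    changed = True
    while changed:
        changed = False
        for j in jobs:
            if not pred[j]:
                s = r[j]                              # rule (i)
            else:
                s = max(r[j], min(C[i] for i in pred[j]))  # rule (ii)
            if s + p[j] < C[j]:
                C[j] = s + p[j]
                changed = True
    S = {j: C[j] - p[j] for j in jobs}
    return S, C
```

The completion times only decrease during the iteration and are bounded from below, so the loop terminates; on a feasible instance all times end up finite.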
Let k ∈ N and let C j be the completion time of j ∈ N in the earliest start schedule. A set L ⊆ N is called a minimal chain of k ∈ N if L is inclusion-minimal such that k ∈ L and, in the earliest start schedule of the instance restricted to the jobs in L, job k still completes at time C k . The set of minimal chains of k is denoted by MC(k), and the length of the minimal chain of k is mc(k) := C k .
We can construct a minimal chain of k by iteratively tracing back predecessors that delay job k in the earliest start schedule. That is, starting at k, we mark one of its predecessors j with C j = S k , and then proceed with j in the same manner, i.e., we mark a predecessor i of j with C i = S j , and so on, until we reach a job i ′ that starts at its release date. If i ′ has no predecessors, we are done. If P(i ′ ) ≠ ∅, we mark a predecessor j ′ of i ′ with C j ′ ≤ S i ′ , and continue with j ′ as described above. The marked jobs now correspond to a minimal chain of k. That is, a minimal chain L = {j 1 , . . . , j ℓ } ∈ MC(k) is a path in G with P(j 1 ) = ∅, j q ∈ P(j q+1 ) for all q ∈ [ℓ − 1] and j ℓ = k such that S j1 = r j1 and S jq = max{r jq , C jq−1 } for all 2 ≤ q ≤ ℓ. We call j q the predecessor of j q+1 in L for q ∈ [ℓ − 1] and denote this by P L (j q+1 ) = j q .
In the following, we denote the completion times in an optimal schedule by C * j (for j ∈ [n]) and its makespan by C * max . Also, we will sometimes denote an optimal schedule by C * and the schedule with completion times C j (for j ∈ [n]) by C. There are two trivial lower bounds on the optimal makespan. First, any feasible schedule cannot do better than splitting the total processing load equally among all machines, so C * max ≥ (1/m) Σ j∈N p j . Second, every job requires at least one of its predecessors to be completed before it can start. If we start with an empty schedule, the earliest completion time of job j is by definition equal to the length of its minimal chain. Thus, C * max ≥ max j∈N mc(j).

Figure 1: An instance on nine jobs with processing times p j1 = p j6 = p k = 1, p j2 = p j3 = p j5 = p j7 = 2, p j4 = 3, p j8 = 4 and release dates r j1 = 2, r j2 = 1, r j6 = 4, r j = 0 for all other jobs j (left) and an earliest start schedule (right). The set of minimal chains of k is MC(k) = {{j 2 , j 6 , j 7 , k}, {j 3 , j 5 , j 6 , j 7 , k}} with mc(k) = 8.
The chain {j 2 , j 6 , j 7 , k} is dominated by j 6 , and {j 3 , j 5 , j 6 , j 7 , k} is dominated by jobs j 3 and j 6 . The paths in G that correspond to the minimal chains in MC(k) are depicted dashed and thick, respectively. Jobs in minimal chains are highlighted in gray.
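The trace-back procedure above can be sketched as follows, assuming start and completion times of an earliest start schedule are given (the small instance in the test is our own example, not the one from Figure 1):

```python
def minimal_chain(k, S, C, pred, r):
    """Trace back one minimal chain of job k from an earliest start schedule
    with start times S and completion times C (sketch, names ours)."""
    chain = [k]
    j = k
    while pred[j]:
        if S[j] == r[j]:
            # j starts at its release date: any predecessor finishing by S[j] works
            i = next(i for i in pred[j] if C[i] <= S[j])
        else:
            # otherwise some predecessor delays j, i.e. completes exactly at S[j]
            i = next(i for i in pred[j] if C[i] == S[j])
        chain.append(i)
        j = i
    chain.reverse()            # return the chain as a path ending in k
    return chain
```

In an earliest start schedule every job either starts at its release date or at the minimum completion time of its predecessors, so one of the two branches always finds a job to mark.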

List Scheduling Without Preemptions
Erlebach et al. [5] presented a 2-approximation algorithm for minimizing the makespan with AND/OR-precedence constraints. The algorithm transforms the instance to an AND-instance by fixing an OR-predecessor for each job, and then applies List Scheduling. We show that List Scheduling without transforming the instance is already 2-approximate for OR-precedence constraints, even with non-trivial release dates. The proof is similar to [12]. Since we consider OR-precedence constraints, we need the notion of minimal chains to bound the amount of idle time on the machines. For completeness, we restate Theorem 1 here.
Theorem. List Scheduling is a (2 − 1/m)-approximation for P | r j , or-prec | C max .
Proof. Consider the schedule returned by List Scheduling, and let S j and C j be the start and completion time of job j ∈ [n]. Let k ∈ [n] be the job that completes last, i.e. C k = C max . Let I ⊆ [0, S k ] be the union of all time intervals I 1 , . . . , I b before S k in which some machine is idle. If I = ∅, then all machines are busy before time S k with jobs in N \ {k}. Hence

C max = S k + p k ≤ (1/m) Σ j∈N\{k} p j + p k ≤ (1/m) Σ j∈N p j + (1 − 1/m) p k ≤ (2 − 1/m) C * max .

So suppose there is idle time, i.e. I ≠ ∅. Recall that every minimal chain of k is a path in the precedence graph from the source s to k. At every point in time t ∈ I, a job on such a path is either not yet released or currently running on some machine; otherwise, there would be an unscheduled available job on the path that could be processed at time t.
Let L ′ ∈ MC(k) be a minimal chain of k and enumerate the jobs L ′ = {j 1 , . . . , j ℓ } such that j ℓ = k, P(j 1 ) = ∅, and j q ∈ P(j q+1 ) for all q ∈ [ℓ − 1]. Recall that L ′ is a path in G, so, at every idle point in time, some job of L ′ is either being processed or not yet released. That is, the total idle time is |I| ≤ mc(k), but we can get an even stronger bound.
Let h ∈ [ℓ] be maximal such that j h dominates the minimal chain L ′ , i.e., mc(k) = r jh + Σ ℓ q=h p jq . Let L := {j h , . . . , j ℓ }, and consider the points in time I L := [0; r jh ] ∪ ⋃ j∈L [S j ; C j ] when j h is not yet released or some job in L is being processed. Note that the intervals [S j ; C j ] for j ∈ L are not necessarily disjoint, because jobs in L might run in parallel, i.e., |I L | ≤ r jh + Σ j∈L p j = mc(k). W.l.o.g., we can assume that at least one machine is running during [0; r jh ]. Otherwise, if all machines were idle at some point t ∈ [0; r jh ], then all jobs j with r j ≤ t would already be completed at time t. Thus, also in the optimum solution, no machine is running at time t, so we can disregard these time slots where no machine is running at all. During I B := [0; C max ] \ I L , all machines are busy with jobs in N \ L. Since a processing load of at least r jh of jobs in N \ L is completed during [0; r jh ] ⊆ I L , the total processing load of jobs that are running in I B is at most Σ j∈N\L p j − r jh , and we obtain

C max ≤ |I L | + |I B | ≤ mc(k) + (1/m) (Σ j∈N\L p j − r jh ) = (1 − 1/m) mc(k) + (1/m) Σ j∈N p j ≤ (1 − 1/m) C * max + C * max = (2 − 1/m) C * max .

This proves the claim.

A Polynomial-Time Algorithm for Special Cases
In this section, we consider the preemptive problem P | r j , or-prec, pmtn | C max and prove Theorem 2. Recall that all processing times and release dates of jobs in [n] are positive and non-negative integers, respectively. So preemptive and non-preemptive scheduling of unit processing time jobs are equivalent, since there is no need to preempt, which proves Corollary 3.
In contrast to the non-preemptive setting, an optimal preemptive schedule never has idle time while there are available jobs. Without preemption, it can make sense to wait for some job j to finish (i.e. have idle time) although there is an available job k, because we want to process a successor i of j right away. With preemption, however, we can simply schedule a fraction of k and, once j completes, preempt k and process i.
We first derive some necessary notation, and then present a polynomial-time algorithm that computes an optimal preemptive schedule. Fix L j ∈ MC(j) for all j ∈ N . The collection of minimal chains {L j | j ∈ N } is called closed, if i ∈ L j implies L i ⊆ L j for all j ∈ N . Note that we can always choose L i ⊆ L j for all i ∈ L j , since (informally) subpaths of shortest paths are shortest paths. Hence, if we compute minimal chains L 1 , . . . , L n using the procedure described in Section 2, we may assume that {L 1 , . . . , L n } is closed. We say an arc (i, j) ∈ E is in line with the minimal chain L j if i ∈ L j . Recall that all processing times are strictly positive and L j ∈ MC(j). So if (i, j) ∈ E is in line with L j , then i is the predecessor of j in L j .
Our algorithm, which we refer to as AlgoPmtn, works as follows. First, compute a closed collection of minimal chains {L j | j ∈ N }. Then, transform the instance into an instance with AND-precedence constraints by deleting all arcs that are not in line with L 1 , . . . , L n . (Note that the resulting graph G ′ is an outtree.) Now, apply a polynomial-time algorithm for the resulting AND-instance to compute an optimal preemptive schedule. (Recall that we can compute optimal preemptive schedules for these special cases in polynomial time, see e.g. [14,23,2,8,18]. We use the algorithm of Lawler [18], but, depending on the setting, we could also use any of the other algorithms.) We prove that AlgoPmtn works correctly by analyzing the structure of an optimal preemptive schedule. More precisely, we show that, for any closed collection of minimal chains, there is an optimal preemptive schedule that is feasible for the transformed graph G ′ . Before we are able to prove Theorem 2, we need some additional notation.
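The arc-deletion step of AlgoPmtn can be sketched as follows (the representation is our own: chains[j] holds the chosen minimal chain L j as an ordered path ending in j). For each job we keep only the arc from its chain predecessor, so every job retains at most one predecessor and the result is an outtree/outforest:

```python
def transform_to_and_instance(jobs, edges, chains):
    """Keep only the arcs that are in line with the chosen minimal chains
    (sketch of AlgoPmtn's transformation step; names and layout are ours)."""
    keep = set()
    for j in jobs:
        path = chains[j]
        if len(path) >= 2:
            # the arc entering j that lies on j's own minimal chain
            keep.add((path[-2], path[-1]))
    return [(i, j) for (i, j) in edges if (i, j) in keep]
```

Since the collection of chains is closed, the kept arcs of different jobs agree along shared subpaths, which is what makes the resulting graph an outtree rather than an arbitrary branching.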
If jobs are allowed to preempt, we need to keep track of how much of the minimal chain of a job is already processed at every point in time. To formalize this, we split every job j ∈ [n] into p j jobs j 1 , . . . , j pj of unit processing time. The predecessors of these jobs are P(j 1 ) = {i pi | (i, j) ∈ E} and P(j u ) = {j u−1 } for all 2 ≤ u ≤ p j . The release dates are r ju = r j for all j ∈ N and u ∈ [p j ]. As before, we add a dummy job s with p s = r s = 0 and set P(j 1 ) = {s} if P(j) = ∅ for j ∈ [n]. We refer to this instance as the preemptive instance and denote its set of jobs by N (p) .
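The splitting into the preemptive instance can be sketched as follows (pairs (j, u) play the role of j u ; the dummy job s is omitted for brevity, and all names are ours):

```python
def split_into_units(jobs, p, r, pred):
    """Build the preemptive instance N^(p): job j becomes p_j unit jobs
    (j, 1), ..., (j, p_j).  The first unit inherits the OR-predecessors
    (their last units); unit u > 1 has the single predecessor (j, u-1)."""
    units, upred, ur = [], {}, {}
    for j in jobs:
        for u in range(1, p[j] + 1):
            ju = (j, u)
            units.append(ju)
            ur[ju] = r[j]                    # all units inherit j's release date
            if u == 1:
                upred[ju] = {(i, p[i]) for i in pred[j]}
            else:
                upred[ju] = {(j, u - 1)}
    return units, upred, ur
```

Note that the number of unit jobs is Σ j p j , i.e. pseudo-polynomial in the encoding size of the original instance; the splitting serves the analysis, not the algorithm itself.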
Note that, if all jobs have unit processing time, then N (p) = N . We informally extend the definition of mc(k) to fractions of jobs via the original definition on the preemptive instance. Note that (the lengths of) all minimal chains coincide with those of the non-preemptive instance. In particular, all lower bounds on the makespan are still valid, and i ∈ L j implies i 1 , . . . , i pi ∈ L ju for all u ∈ [p j ]. For a schedule C, we call a pair of jobs j ∈ N (p) and i ∈ L j \ {j} with C i ≥ C j an inversion, and denote the number of inversions of C by I C . Lemma 6 describes a procedure that swaps two jobs k, l ∈ N (p) that are scheduled consecutively. We will apply this procedure to show that there always exists an optimal solution without inversions (see Lemma 7). For the notation of Lemma 6, we forget about release dates, i.e. consider schedules for P | or-prec, pmtn | C max . We describe how to incorporate release dates in the proof of Lemma 7, which is the key lemma for the correctness of AlgoPmtn.

Lemma 6. Let {L j | j ∈ N (p) } be a closed collection of minimal chains and C * be a feasible preemptive schedule. Let i ∈ N (p) with C * i ≥ 2, and let S i = {j ∈ N (p) | C * j = C * i − 1} be the jobs scheduled directly before i. Assume that |S i | = m and C * j ≤ C * i − 2 for j ∈ P L i (i). Then there is k ∈ S i such that swapping i and k, i.e., setting C ′ i := C * i − 1, C ′ k := C * k + 1 and C ′ j := C * j for all j ∈ N (p) \ {i, k}, yields a feasible schedule with C ′ max = C * max and I C ′ ≤ I C * .

Proof. To shorten notation, set t := C * i − 1. Note that the makespan does not change if we swap two unit processing time jobs. Let J i = {j ∈ N (p) \ {i} | C * j = t + 1} be the jobs running in parallel to i on the other machines. Note that |J i | ≤ m − 1, and recall that there are |S i | = m jobs that are being processed directly before i. For j ∈ N (p) , let A * j := {j ′ ∈ N (p) | C * j ′ < C * j } be the set of jobs that complete before j starts.
Let J ′ := {j ∈ J i | P L j (j) ∈ S i or P(j) ∩ A * j ⊆ S i } be the set of jobs that are scheduled parallel to i and that are processed directly after their predecessor in the minimal chain, or whose only predecessors preceding them in the schedule lie in S i . Let S ′ ⊆ S i be the set of these predecessors of jobs in J ′ . We do not want to swap i with a job in S ′ , since this would cause an inversion or yield an infeasible schedule. Note that |S ′ | ≤ |J ′ |, and set S := S i \ S ′ and J := J i \ J ′ . Since |S ′ | ≤ |J ′ | ≤ |J i | ≤ m − 1 = |S i | − 1, we have S ≠ ∅, and any k ∈ S satisfies the claim. Figure 2 illustrates the sets and the corresponding schedules before and after the swap.

Figure 2: The arrows indicate that the respective job in S ′ is the predecessor of the corresponding job in J ′ . In this example, J = ∅, so J i = J ′ .

Let k ∈ S be arbitrary and consider the resulting schedule after swapping k and i. Feasibility of the initial schedule implies that at most m jobs are running at any point in time. It remains to be shown that the precedence constraints are satisfied and that no additional inversions are created. Let A ′ j := {j ′ ∈ N (p) | C ′ j ′ < C ′ j } for all j ∈ N (p) be the set of jobs that complete before j starts in the new schedule.
As for feasibility, recall that the precedence constraints for i are not violated if we schedule i in [t − 1; t] by assumption. In the following, let j ∈ N (p) \ {i} be a job with predecessors, i.e., P(j) = ∅. Note that P(j) ∩ A * j = ∅, since the initial schedule was feasible. If j ∈ N (p) \ (J i ∪ {i}), we get P(j) ∩ A ′ j = ∅ because A * j ⊆ A ′ j . (Note that strict inclusion only holds for j = k.) For j ∈ J ′ , its predecessor in S ′ is still contained in A ′ j since k / ∈ S ′ . So P(j) ∩ A ′ j = ∅. Finally, any job j ∈ J i \ J ′ has a predecessor j ′ ∈ P(j) that completes before time t − 1 by definition of J ′ , so j ′ ∈ A ′ j . In total, each job with predecessors is still preceded by one of them, and the schedule is feasible.
As for the number of inversions, note that the schedule is not altered in the intervals [0; t − 1] ∪ [t + 1; C * max ]. All jobs that could cause an inversion are contained in {i} ∪ S i . Scheduling i one time slot earlier does not cause an inversion by assumption. The only jobs in S i that could cause an inversion, if we schedule them in [t; t + 1], are contained in S ′ . Since we swap i with a job k ∈ S = S i \ S ′ , the swap does not cause an additional inverted pair. Hence I C ′ ≤ I C * .

Lemma 7. Let {L j | j ∈ N (p) } be a closed collection of minimal chains L j ∈ MC(j) for all j ∈ N (p) . There exists an optimal preemptive schedule C * such that C * i < C * j for all j ∈ N (p) and i ∈ L j \ {j}.
Proof. Recall that all processing times of jobs in N (p) are equal to 1. Consider an optimal schedule with completion times C * j for all j ∈ N (p) such that I C * is minimal among all optimal solutions. Suppose, by contradiction, that I C * ≥ 1. We show how to construct an optimal schedule with C ′ max = C * max and I C ′ < I C * using Lemma 6.
Let j ∈ N (p) and i ∈ L j \ {j} be an inverted pair, i.e. C * i ≥ C * j , and enumerate the jobs of the minimal chain L j = {j 1 , . . . , j ℓ , j ℓ+1 } with j ℓ+1 = j. Using Lemma 6, we move the jobs j 1 , . . . , j ℓ successively (in this order) to the front such that they complete at times mc(j 1 ), . . . , mc(j ℓ ), respectively. For all k ∈ N (p) with non-trivial release date, it holds

r k ≤ mc(k) − 1.     (1)

So we can swap those jobs k ∈ {j 1 , . . . , j ℓ } that do not complete at time mc(k) to the front without violating the respective release dates. Thereby, we obtain a schedule that satisfies

C ′ jq = mc(j q ) for all q ∈ [ℓ], and hence C ′ i < C ′ j .     (2)

Since we first move job j 1 to the front, then j 2 , and so on, we ensure that, when we apply Lemma 6 for i = j q (in the notation of Lemma 6), its predecessor j q−1 completes at time mc(j q−1 ) < mc(j q ). So the assumptions of Lemma 6 are satisfied. The procedure of Lemma 6 does not violate any release dates, since k ∈ S (in the notation of Lemma 6) is scheduled later, and it is feasible to schedule j q earlier due to (1), for all q ∈ [ℓ]. Figure 3 illustrates the current completion times and the time slots in which we move the jobs in the minimal chain L j . Note that it is not necessary to move the job j = j ℓ+1 . However, by applying Lemma 6, it might happen that k = j (in the notation of Lemma 6) is chosen, i.e., j is "passively moved". Similarly, a job j h might be "passively moved" when we swap j q with q < h to the front. This is not a problem, since we deal with j h in a later iteration.
Repeated application of Lemma 6 ensures that the resulting schedule is feasible and has no more inversions than the initial schedule. Further, Lemma 6 implies C ′ max = C * max , and I C ′ < I C * because i and j are not inverted anymore, see (2). This contradicts the choice of the initial schedule as an optimal solution with fewest inversions. So there exists an optimal solution without inversions, which proves the claim.
The following lemma shows correctness of AlgoPmtn, and thus proves Theorem 2.
Lemma 8. AlgoPmtn solves P | r j , or-prec, pmtn | C max to optimality in polynomial time.
Proof. First, observe that the graph G ′ constructed by AlgoPmtn is a subgraph of the initial precedence graph G. Since the schedule returned by the algorithm is feasible for the AND-instance on G ′ (this follows from the correctness of Lawler's algorithm [18]), it certainly is feasible for the OR-instance on G. Construction of the earliest start schedule and Lawler's algorithm run in polynomial time [5,18]. Also, we can compute the closed collection of minimal chains and construct G ′ in polynomial time. So AlgoPmtn runs in polynomial time and returns a feasible schedule. As for optimality of the schedule returned by AlgoPmtn, let {L j | j ∈ N } be the closed collection of minimal chains that is computed in the first step, and let G ′ be the corresponding subgraph of G. Since {L j | j ∈ N } is closed, G ′ is an outforest. Thus, OR- and AND-precedence constraints on G ′ are equivalent.
Consider the schedule returned by AlgoPmtn, i.e., by Lawler's algorithm [18] on G ′ , and let C max be its makespan. Since the schedule is feasible for the OR-instance with precedence graph G ′ , it is also feasible for the initial precedence graph G. By Lemma 7, there exists an optimal solution with makespan C * max for the instance on G that is also feasible for the instance on G ′ . Since the schedule returned by AlgoPmtn is optimal for the instance on G ′ , it holds C max ≤ C * max . This proves the claim.

Concluding Remarks
In this paper, we discuss the problem of minimizing the makespan on identical parallel machines with OR-precedence constraints. We introduce the concept of minimal chains, which is crucial to prove that the List Scheduling algorithm of Graham [9] achieves an approximation guarantee of 2. Using minimal chains, we show that there exists an optimal preemptive schedule of a certain structure, and we exploit this structure to obtain a polynomial-time algorithm for the preemptive variant.
This matches the complexity and best-known approximation guarantees of makespan minimization, if the precedence graph is an outtree, which is a special case where AND-and OR-precedence constraints coincide. Clearly any improvement on OR-precedence constraints directly transfers to AND-precedence constraints on outtrees. On the other hand, due to the close connection with minimal chains, any progress on the approximation factor of AND-precedence constraints on outtrees might also be applicable to OR-precedence constraints.
We would like to remark that Corollary 3 (unit processing times) without release dates was already proven by Johannes [16]. However, the size of the preemptive instance N (p) is, in general, not polynomial in the encoding size of the initial instance. Thus the analysis of [16] cannot directly be extended to the preemptive case.