On the Fine-grained Parameterized Complexity of Partial Scheduling to Minimize the Makespan

We study a natural variant of scheduling that we call partial scheduling: in this variant an instance of a scheduling problem along with an integer k is given, and one seeks an optimal schedule in which not all, but only k, jobs have to be processed. Specifically, we aim to determine the fine-grained parameterized complexity of partial scheduling problems parameterized by k for all variants of scheduling problems that minimize the makespan and involve unit/arbitrary processing times, identical/unrelated parallel machines, release/due dates, and precedence constraints. That is, we investigate whether algorithms with runtimes of the type $f(k)n^{\mathcal{O}(1)}$ or $n^{\mathcal{O}(f(k))}$ exist for a function f that is as small as possible.
Our contribution is two-fold: First, we categorize each variant to be either in $\mathsf{P}$, $\mathsf{NP}$-complete and fixed-parameter tractable in k, or $\mathsf{W}[1]$-hard parameterized by k. Second, for many interesting cases we further investigate the runtime on a finer scale and obtain runtimes that are (almost) optimal assuming the Exponential Time Hypothesis. As one of our main technical contributions, we give an $\mathcal{O}(8^k k(|V|+|E|))$ time algorithm to solve instances of partial scheduling problems minimizing the makespan with unit-length jobs, precedence constraints and release dates, where $G=(V,E)$ is the graph of precedence constraints.


Introduction
Scheduling is one of the most central application domains of combinatorial optimization. In recent decades, the combined effort of many researchers has led to major progress on understanding the worst-case computational complexity of almost all natural variants of scheduling: by now, for most of these variants it is known whether they are NP-complete or not. Scheduling problems provide the context of some of the most classic approximation algorithms. For example, in the standard textbook on approximation algorithms by Williamson and Shmoys [29], a wide variety of techniques are illustrated by applications to scheduling problems. See also the standard textbook on scheduling by Pinedo [24] for more background.
Instead of studying approximation algorithms, another natural way to deal with NP-completeness is Parameterized Complexity (PC).
While the application of general PC theory to the area of scheduling has still received considerably less attention than the approximation point of view, recently its study has seen explosive growth, as witnessed by a plethora of publications (e.g. [2,13,18,22,27,28]). Additionally, many recent results and open problems can be found in a survey by Mnich and van Bevern [21], and even an entire workshop on the subject was recently held [20].
In this paper we advance this vibrant research direction with a complete mapping of how several standard scheduling parameters influence the parameterized complexity of minimizing the makespan in a natural variant of scheduling problems that we call partial scheduling. Next to studying the classical question of whether parameterized problems are in P, in FPT parameterized by k, or W[1]-hard parameterized by k, we also follow the well-established modern perspective of 'fine-grained' PC and aim at runtimes of the type $f(k)n^{\mathcal{O}(1)}$ or $n^{f(k)}$ for the smallest function f of the parameter k.

Partial Scheduling

In many scheduling problems arising in practice, the set of jobs to be scheduled is not predetermined. We refer to this as partial scheduling. Partial scheduling is well-motivated from practice, as it arises naturally, for example, in the following scenarios:

1. Due to uncertainties, a close-horizon approach may be employed and only few jobs out of a big set of jobs will be scheduled in a short but fixed time window,
2. In freelance markets, typically a large database of jobs is available and a freelancer is interested in selecting only a few of the jobs to work on,
3. The selection of the jobs to process may resemble other choices the scheduler should make, such as outsourcing non-processed jobs to various external parties.
Partial scheduling has been previously studied in the equivalent forms of maximum throughput scheduling [25] (motivated by the first example setting above), job rejection [26], scheduling with outliers [12], job selection [8,16,30] and its special case interval selection [5].
In this paper, we conduct a rigorous study of the parameterized complexity of partial scheduling, parameterized by the number of jobs to be scheduled. We denote this number by k. While several isolated results concerning the parameterized complexity of partial scheduling do exist, this parameterization has (somewhat surprisingly) not been rigorously studied yet.[1] We address this and study the parameterized complexity of the (arguably) most natural variants of the problem. We fix as objective to minimize the makespan while scheduling at least k jobs, for a given integer k, and study all variants with the following characteristics:

- 1 machine, identical parallel machines, or unrelated parallel machines,
- release dates and/or deadlines,
- unit/arbitrary processing times, and
- precedence constraints.

Our Results
We give a classification of the parameterized complexity of these 48 variants. Additionally, for each variant that is not in P, we give algorithms solving it and lower bounds under the ETH. To easily refer to a variant of the scheduling problem, we use the standard three-field notation by Graham et al. [11]; see Sect. 2 for an explanation of this notation. To accommodate our study of partial scheduling, we extend the α|β|γ notation as follows:

Definition 1 We let k-sched in the γ-field indicate that we only schedule k out of n jobs.
We study the fine-grained parameterized complexity of all problems α|β|γ, where α ∈ {1, P, R}, the options for β are all combinations of r_j, d_j, p_j = 1 and prec, and γ is fixed to γ = k-sched, C_max. Our results are explicitly enumerated in Table 1.
The rows of Table 1 are lexicographically sorted on (i) precedence relations/no precedence relations, (ii) a single machine, identical machines or unrelated machines, and (iii) release dates and/or deadlines. Because their presence has a major influence on the character of the problem, we stress the distinction between variants with and without precedence constraints.[2] On a high abstraction level, our contribution is two-fold:

1. We present a classification of the complexity of all aforementioned variants of partial scheduling with the objective of minimizing the makespan. Specifically, we classify all variants to be either solvable in polynomial time, fixed-parameter tractable in k and NP-hard, or W[1]-hard.
2. For most of the studied variants we present both an algorithm and a lower bound showing that our algorithm cannot be significantly improved unless the Exponential Time Hypothesis (ETH) fails.
Thus, while we completely answer a classical type of question in the field of Parameterized Complexity, we pursue in our second contribution a more modern and fine-grained understanding of the best possible runtime with respect to the parameter k. For several of the studied variants, the lower bounds and algorithms listed in Table 1 follow relatively quickly. However, for many other cases we need substantial new insights to obtain (almost) matching upper and lower bounds on the runtime of the algorithms solving them. We have grouped the rows into result types [A]-[G] depending on our methods for determining their complexity.

[1] We compare the previous works and other relevant studied parameterizations at the end of this section.
[2] A precedence constraint a ≺ b enforces that job a needs to be finished before job b can start.
[Table 1 caption: Since p_j = 1 implies that the machines are identical, the mentioned number of 48 combinations reduces to 40 different scheduling problems. The O* notation omits factors polynomial in the input size. The highlighted table entries are new results from this paper.]

Our New Methods
We now describe some of our most significant technical contributions for obtaining the various types (listed as [A]-[G] in Table 1) of results. Note that we skip some less interesting cases in this introduction; for a complete argumentation of all results from Table 1 we refer to Sect. 6. The main building blocks and logical implications used to obtain the results from Table 1 are depicted in Fig. 1. We now discuss these building blocks of Fig. 1 in detail.

[Fig. 1 caption: The main building blocks used to obtain the results from Table 1. Arrows indicate how a problem is generalized by another problem.]

Precedence Constraints
Our main technical contribution concerns result type [C]. The simpler of the two cases, P|prec, p_j = 1|k-sched, C_max, cannot be solved in O*(2^{o(√k log k)}) time assuming the Exponential Time Hypothesis, and not in 2^{o(k)} time unless sub-exponential time algorithms for the Biclique problem exist, due to reductions by Jansen et al. [14]. Our contribution lies in the following theorem, which gives an upper bound for the more general of the two problems that matches the latter lower bound:

Theorem 1 P|r_j, prec, p_j = 1|k-sched, C_max can be solved in O(8^k k(|V| + |E|)) time, where G = (V, E) is the precedence graph given as input.
Theorem 1 will be proved in Sect. 3. The first idea behind the proof is a natural dynamic programming algorithm indexed by antichains of the partial order naturally associated with the precedence constraints. However, evaluating this dynamic program naïvely would lead to an n^{O(k)} time algorithm, where n is the number of jobs.
Our key idea is to compute only a subset of the table entries of this dynamic programming algorithm, guided by a new parameter of an antichain called its depth. Intuitively, the depth of an antichain A measures the number of jobs that can be scheduled up to and immediately after A in a feasible schedule without violating the precedence constraints.
We prove Theorem 1 by showing we may restrict attention in the dynamic programming algorithm to antichains of depth at most k, and by bounding the number of antichains of depth at most k indirectly by bounding the number of maximal antichains of depth at most k. We believe this methodology should have more applications for scheduling problems with precedence constraints.
Surprisingly, the positive result of Theorem 1 is in stark contrast with the seemingly symmetric case where only deadlines are present. Our next result, indicated as [B] in Fig. 1, shows that this case is much harder:

Theorem 2 The problem P|d_j, prec, p_j = 1|k-sched, C_max is W[1]-hard, and it cannot be solved in n^{o(k/log k)} time assuming the ETH.
Theorem 2 is a consequence of a reduction outlined in Sect. 4. Note that the W[1]-hardness follows from a natural reduction from the k-Clique problem (presented originally by Fellows and McCartin [9]), but this reduction increases the parameter k to Θ(k²) and would therefore only exclude n^{o(√k)} time algorithms assuming the ETH. To obtain the tighter bound from Theorem 2, we instead provide a non-trivial reduction from the 3-Coloring problem based on a new selection gadget.
For result type [D], we give a lower bound by a (relatively simple) reduction from Partitioned Subgraph Isomorphism in Theorem 6 and Corollary 4. Since it is conjectured that Partitioned Subgraph Isomorphism cannot be solved in n^{o(k)} time assuming the ETH, our reduction is a strong indication that the simple n^{O(k)} time algorithm (see Sect. 6) cannot be improved significantly in this case.

No Precedence Constraints
The second half of our classification concerns scheduling problems without precedence constraints, and is easier to obtain than the first half. Results [E] and [F] are consequences of a greedy algorithm and Moore's algorithm [23], which solves the problem 1||∑_j U_j in O(n log n) time. Notice that this also solves the problem 1|r_j|k-sched, C_max, by reversing the schedule and viewing the release dates as deadlines. For result type [G] we show that a standard technique in parameterized complexity, the color coding method, can be used to get a 2^{O(k)} time algorithm for the most general problem of the class, namely R|r_j, d_j|k-sched, C_max. All lower bounds on the runtime of algorithms for problems of type [G] are by a reduction from Subset Sum, but for 1|r_j, d_j|k-sched, C_max this reduction is slightly different.
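As an illustration, the Moore–Hodgson rule behind Moore's algorithm can be sketched as follows. This is a textbook implementation, not code from the paper; the job representation as (processing time, deadline) pairs is our own.

```python
import heapq

def moore_hodgson(jobs):
    """Maximum number of on-time jobs on a single machine.
    jobs: list of (processing_time, deadline) pairs.
    Process jobs in order of increasing deadline; whenever the current
    job would finish late, drop the longest job scheduled so far."""
    kept = []   # max-heap over processing times (stored negated)
    t = 0       # completion time of the currently kept jobs
    for p, d in sorted(jobs, key=lambda job: job[1]):
        heapq.heappush(kept, -p)
        t += p
        if t > d:                      # deadline violated:
            t += heapq.heappop(kept)   # drop the longest job (adds -p_max)
    return len(kept)
```

By the observation above, the same routine can be used for 1|r_j|k-sched, C_max after reversing the time axis, i.e. treating release dates as deadlines with respect to a guessed makespan.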

Related Work
The interest in parameterized complexity of scheduling problems recently witnessed an explosive growth, resulting in e.g. a workshop [20] and a survey by Mnich and van Bevern [21] with a wide variety of open problems.
The parameterized complexity of partial scheduling parameterized by the number of processed jobs, or equivalently, the number of jobs 'on time', was studied before: Fellows and McCartin [9] studied a problem called k-Tasks On Time that is equivalent to 1|d_j, prec, p_j = 1|k-sched, C_max and showed that it is W[1]-hard parameterized by k, and in FPT parameterized by the combination of k and the width of the partially ordered set induced by the precedence constraints. Van Bevern et al. [27] showed that the Job Interval Selection problem, where each job is given a set of possible intervals in which it can be processed, is in FPT parameterized by k. Bessy et al. [2] consider partial scheduling with a restriction on the jobs called 'Coupled-Task', and also remark that the current parameterization is relatively understudied.
Another related parameter is the number of jobs that are not scheduled, which has also been studied in several previous works [4,9,22]. For example, Mnich and Wiese [22] studied the parameterized complexity of scheduling problems with respect to the number of rejected jobs, in combination with other variables, as parameter. If n denotes the number of given jobs, this parameter equals n − k. The two parameters are somewhat incomparable in terms of applications: in some settings only few jobs out of many alternatives need to be scheduled, but in other settings rejecting a job is very costly and thus will happen rarely. However, a strong advantage of using k as the parameter lies in computational complexity: if the version of the problem with all jobs mandatory is NP-complete, then the problem is trivially NP-complete for n − k = 0, but it may still be in FPT parameterized by k.

Organization of this Paper
This paper is organized as follows: we start with some preliminaries in Sect. 2, after which the subsequent sections establish the results from Table 1. Finally, in Sect. 7 we present a conclusion.

The Three-Field Notation by Graham et al.
Throughout this paper we denote scheduling problems using the three-field notation by Graham et al. [11]. Problems are classified by three fields α|β|γ. The α-field describes the machine environment. We use α ∈ {1, P, R}, indicating whether there is one machine (1), or whether identical (P) or unrelated (R) parallel machines are available. Here identical refers to the fact that every job takes a fixed amount of time to process, independent of the machine, and unrelated means a job may take a different time to process on each machine. The β-field describes the job characteristics, which in this paper can be a combination of the following values: prec (precedence constraints), r_j (release dates), d_j (deadlines) and p_j = 1 (all processing times are 1). We assume without loss of generality that all release dates and deadlines are integers.
The γ-field concerns the optimization criteria. A given schedule determines C_j, the completion time of job j, and U_j, the unit penalty, which is 1 if C_j > d_j and 0 if C_j ≤ d_j. In this paper we use the following optimization criteria:

- C_max: minimize the makespan (i.e. the maximum completion time C_j of any job),
- ∑_j U_j: minimize the number of jobs that finish after their deadline,
- k-sched: maximize the number of processed jobs; in particular, process at least k jobs.
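For concreteness, the two numeric criteria can be computed from a schedule's completion times as follows. This is a minimal sketch of ours; the dictionary representation of a schedule is an assumption, not notation from the paper.

```python
def makespan(completion):
    """C_max: the maximum completion time of any scheduled job.
    completion: dict mapping job -> completion time C_j."""
    return max(completion.values())

def num_late(completion, deadline):
    """Sum of unit penalties U_j: jobs finishing strictly after d_j."""
    return sum(completion[j] > deadline[j] for j in completion)
```

For example, with completion times {a: 3, b: 5} and deadlines {a: 4, b: 4}, the makespan is 5 and exactly one job (b) is late.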
A schedule is said to be feasible if no constraints (deadlines, release dates, precedence constraints) are violated.

Notation for Posets
Any precedence graph G is a directed acyclic graph and therefore induces a partial order ≺ on the jobs: j ≺ j′ if there is a directed path from j to j′ in G. For a set of jobs S, we write pred(S) for the set of all predecessors of jobs in S together with S itself, comp(S) for the set of all jobs comparable to some job in S (including S itself), and max(G) (respectively min(G)) for the set of maximal (respectively minimal) elements of G. Notice that max(G) is exactly the antichain A such that pred(A) = V(G). We denote the subgraph of G induced by S with G[S]. We may assume that r_j < r_{j′} whenever j ≺ j′, since job j′ will be processed later than r_j in any schedule anyway. To handle release dates we use the following:

Definition 2 Let G be a precedence graph. Then G_t is the precedence graph restricted to all jobs that can be scheduled on or before time t, i.e. all jobs with release date at most t.
We assume G = G_{C_max}, since all jobs with release date greater than C_max can be ignored.

Parameterized Complexity
We say a problem is fixed-parameter tractable (and in the complexity class FPT) parameterized by a parameter k if there exists an algorithm with runtime O(f(k) · n^c), where n denotes the size of the instance, f is a computable function and c some constant. There also exist problems for which inclusion in FPT for some parameter is unlikely, such as k-Clique. This is because k-Clique is complete for the complexity class W[1] and it is conjectured that FPT ≠ W[1]. One can view FPT as the parameterized analogue of P and W[1] as the parameterized analogue of NP. To prove a problem P to be W[1]-hard, one can use a parameterized reduction from another problem P′ that is W[1]-hard, where such a reduction satisfies the following two restrictions: (1) the parameter k of P is bounded by g(k′) for some computable function g, where k′ is the parameter of P′, and (2) the runtime of the reduction is bounded by f(k′) · n^c for some computable function f, where n is the size of the instance of P′ and c a constant.
We exclude fixed-parameter tractable algorithms for problems that are W[1]-hard. To exclude runtimes in a more fine-grained manner, we use the Exponential Time Hypothesis (ETH). Roughly speaking, the ETH conjectures that no 2^{o(n)} time algorithm for 3-SAT exists, where n is the number of variables of the instance. As a consequence we can, for example, exclude algorithms with runtime 2^{o(n)} for Subset Sum, where n is the number of input integers, and algorithms with runtime n^{o(k)} for k-Clique, where n is the number of vertices of the input graph and k the size of the clique that we are after. The function g(k) bounding the parameter in parameterized reductions plays an important role in these types of proofs, as for example a reduction from k-Clique with parameter blow-up g(k) yields a lower bound under the ETH of n^{o(g^{-1}(k))}.
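The last claim can be verified with a short calculation, under the simplifying assumptions (ours, for illustration) that the reduction runs in polynomial time, that g is nondecreasing, and that a k-Clique instance of size n is mapped to an instance of size n′ ≤ n^c with parameter k′ ≤ g(k). If the target problem had an algorithm running in time n′^{o(g^{-1}(k′))}, then k-Clique would be solvable in time

```latex
n'^{\,o\left(g^{-1}(k')\right)}
  \;\le\; \left(n^{c}\right)^{o\left(g^{-1}(g(k))\right)}
  \;\le\; n^{\,c\cdot o(k)}
  \;=\; n^{o(k)},
```

contradicting the ETH-based lower bound for k-Clique stated above.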

Result Type C: Precedence Constraints, Release Dates and Unit Processing Times
In this section we provide a fast algorithm for partial scheduling with release dates and unit processing times, parameterized by the number k of scheduled jobs (Theorem 1). There exists a simple, but slow, algorithm with runtime O*(2^{k²}) that already proves that this problem is in FPT parameterized by k: this algorithm branches k times on the jobs that can be processed next. If more than k jobs are available at a step, then processing these jobs greedily is optimal. Otherwise, we can recursively try all nonempty subsets of the available jobs to schedule next, and an O*(2^{k²}) time algorithm is obtained via a standard (bounded search-tree) analysis. To improve on this algorithm, we present a dynamic programming algorithm with table entries indexed by antichains in the precedence graph G describing the precedence relations. Such an antichain describes the maximal jobs already scheduled in a partial schedule. Our key idea is that, to find an optimal solution, it suffices to restrict our attention to a subset of all antichains. This subset will be defined in terms of the depth of an antichain. With this algorithm we improve the runtime to O(8^k k(|V| + |E|)).
By binary search, we can restrict attention to a variant of the problem that asks whether there is a feasible schedule with makespan at most C max , for a fixed universal deadline C max .
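This standard binary-search step can be sketched as follows; `feasible` is a hypothetical oracle (such as a feasibility test run for a fixed C_max) that is monotone in its argument.

```python
def min_makespan(feasible, lo, hi):
    """Smallest C in [lo, hi] with feasible(C) true, assuming feasible
    is monotone (false up to some threshold, true afterwards).
    Returns None if even feasible(hi) fails."""
    if not feasible(hi):
        return None
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid        # the optimum is at mid or below
        else:
            lo = mid + 1    # the optimum is strictly above mid
    return lo
```

Since the feasibility test is called O(log(hi − lo)) times, the overhead over a single feasibility check is only logarithmic.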

The Algorithm
We start by introducing our dynamic programming algorithm for P|r_j, prec, p_j = 1|k-sched, C_max. Let m be the number of machines available. We start with defining the table entries. For a given antichain A ⊆ V(G) and integer t we define S(A, t) = 1 if there exists a feasible schedule of makespan t that processes exactly the jobs pred(A), and S(A, t) = 0 otherwise.
Computing the values of S(A, t) can be done by trying all combinations of scheduling at most m jobs of A at time t and then checking whether all remaining jobs of pred(A) can be scheduled with makespan t − 1. To do so, we also verify that all jobs scheduled at time t actually have a release date at or before t. Formally, we have the following recurrence: S(A, t) = 1 if and only if there exists a set X ⊆ A with |X| ≤ m and r_j ≤ t for all j ∈ X such that S(A′, t − 1) = 1, where A′ = max(pred(A)\X) is the unique antichain such that pred(A)\X = pred(A′). To see that this recurrence is correct, note that any X ⊆ A is a set of maximal elements with respect to G[pred(A)], and it consists of pairwise incomparable jobs since A is an antichain. So we can schedule all jobs from X at time t without violating any precedence constraints. If S(A′, t − 1) = 1 and |X| ≤ m, we can extend the schedule witnessing S(A′, t − 1) = 1 by scheduling all of X at time t. In this way we get a feasible schedule processing all jobs of pred(A) at or before time t. So if we find such an X with |X| ≤ m and S(A′, t − 1) = 1, we must have S(A, t) = 1.
For the other direction, if S(A′, t − 1) = 0 for all X ⊆ A with |X| ≤ m, then no matter which set X ⊆ A we try to schedule at time t, the remaining jobs cannot be scheduled with makespan t − 1. Note that only jobs from A can be scheduled at time t, since those are the maximal jobs of pred(A). Hence, there is no feasible schedule and S(A, t) = 0.
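The recurrence can be evaluated directly, without the depth restriction introduced below, by a brute-force dynamic program over downward-closed job sets (equivalently, over antichains A via the bijection A ↦ pred(A)). The following sketch is our own and is exponential in n; it is meant only to make the recurrence concrete, not to match the paper's runtime.

```python
from functools import lru_cache
from itertools import combinations

def min_partial_makespan(n, prec, r, m, k):
    """Smallest makespan scheduling at least k of n unit jobs on m
    identical machines; prec contains pairs (a, b) meaning a ≺ b,
    r[j] is the first time slot job j may occupy (slots 1, 2, ...)."""
    direct = {j: {a for a, b in prec if b == j} for j in range(n)}

    def strict_preds(j, seen):
        for a in direct[j]:
            if a not in seen:
                seen.add(a)
                strict_preds(a, seen)
        return seen
    pred = {j: strict_preds(j, set()) for j in range(n)}

    @lru_cache(maxsize=None)
    def feasible(P, t):
        """Can the downward-closed set P be completed by slot t?"""
        if not P:
            return True
        if t == 0:
            return False
        # jobs of P with no successor inside P may occupy slot t
        max_p = [j for j in P if not any(j in pred[i] for i in P)]
        for size in range(0, min(m, len(max_p)) + 1):  # size 0 = idle slot
            for X in combinations(max_p, size):
                if all(r[j] <= t for j in X):
                    if feasible(P - frozenset(X), t - 1):
                        return True
        return False

    best = None
    horizon = n + max(r, default=0)
    for S in combinations(range(n), k):
        closed = set(S)
        for j in S:
            closed |= pred[j]          # predecessors must be scheduled too
        closed = frozenset(closed)
        for t in range(1, horizon + 1):
            if feasible(closed, t):
                best = t if best is None else min(best, t)
                break
    return best
```

For example, with jobs {0, 1, 2}, the single constraint 0 ≺ 1 and all release dates 1, scheduling k = 2 jobs needs makespan 2 on one machine but only 1 on two machines.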
The above recurrence cannot be directly evaluated, since the number of different antichains of a graph can be large: there can be as many as roughly n^k different antichains A with |pred(A)| ≤ k, for example in the extreme case of an independent set. Even when we restrict our precedence graph to have out-degree k, there can be k^k different antichains, for example in k-ary trees. To circumvent this issue, we restrict our dynamic programming algorithm to a specific subset of antichains. To do this, we use the following new notion of the depth of an antichain.

Definition 3 Let A be an antichain. Define the depth (with respect to t) of A as d_t(A) = |D_t(A)|, where D_t(A) = pred(A) ∪ min(G_t − comp(A)).

The intuition behind this definition is that it quantifies the number of jobs that can be scheduled up to and including A, together with the jobs that can be scheduled immediately after A, without violating precedence constraints. See Fig. 2 for an example of an antichain and its depth. We restrict the dynamic programming algorithm to only compute S(A, t) for A satisfying d_t(A) ≤ k. This ensures that we do not go 'too deep' into the precedence graph unnecessarily at the cost of a slow runtime.

[Fig. 2 caption: Note that for instances with k = 2, a feasible schedule may exist. If so, we will find that R({r}, 1) = 1, which will be defined later. In this way, we can still find the antichain A as a solution.]
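Assuming the depth takes the form d_t(A) = |pred(A) ∪ min(G_t − comp(A))| (the form consistent with how it is used in the proofs that follow; the definition in the source is garbled), it can be computed as follows. The representation of the precedence graph is our own.

```python
def depth(A, n, pred, r, t):
    """d_t(A) = |pred(A) ∪ min(G_t − comp(A))| for an antichain A over
    jobs 0..n-1.  pred[j]: strict predecessors of j; r[j]: release date;
    G_t keeps only jobs with r[j] <= t.  (Reconstructed definition.)"""
    pred_a = set(A)
    for j in A:
        pred_a |= pred[j]                       # pred(A), including A itself
    comp_a = set(pred_a)
    for j in range(n):
        if any(a in pred[j] for a in A):        # j is a successor of A
            comp_a.add(j)
    rest = {j for j in range(n) if j not in comp_a and r[j] <= t}
    minimal = {j for j in rest if not (pred[j] & rest)}
    return len(pred_a | minimal)
```

For instance, in a chain 0 ≺ 1 ≺ 2 with an isolated job 3 and all release dates 0, the antichain {0} has depth 2: pred({0}) = {0} and only job 3 is incomparable and minimal.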
Because of this restriction on the depth, it could happen that we check no antichain with k or more predecessors even though corresponding feasible schedules exist. That is, it is possible that for some antichain A with d_t(A) > k there is a feasible schedule for all of the at least k jobs in pred(A) before time C_max, but the value S(A, C_max) will not be computed. To make sure we still find an optimal schedule, we also compute the following value R(A, t) for all t ≤ C_max and antichains A with d_t(A) ≤ k: R(A, t) = 1 if there exists a feasible schedule with makespan at most C_max that processes pred(A) on or before t and processes only jobs from min(G − pred(A)) after t, with a total of k jobs processed; R(A, t) = 0 otherwise.
By definition of R(A, t), if R(A, t) = 1 for some A and t ≤ C_max, then there is a feasible schedule that processes k jobs on time. We show that there is an algorithm, namely fill(A, t), that quickly computes R(A, t). The algorithm fill(A, t) does the following: first it checks whether S(A, t) = 1, and if so, it greedily schedules jobs from min(G − pred(A)) after t in order of smallest release date. If k − |pred(A)| jobs can be scheduled before C_max, it returns 'true' (R(A, t) = 1). Otherwise, it returns 'false' (R(A, t) = 0).
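The greedy filling phase of fill(A, t) can be sketched as follows. This is our own sketch of the second phase only: it assumes S(A, t) = 1 has already been verified and receives just the release dates of the jobs in min(G − pred(A)).

```python
def fill_tail(release_dates, m, t, cmax, need):
    """Greedily place unit jobs (given by their release dates) into
    slots t+1, ..., cmax on m machines, earliest release date first.
    Returns True iff at least `need` jobs fit (then R(A, t) = 1)."""
    load = {s: 0 for s in range(t + 1, cmax + 1)}   # jobs per slot
    placed = 0
    for rd in sorted(release_dates):
        s = max(rd, t + 1)                # earliest slot allowed for this job
        while s <= cmax and load[s] == m: # skip full slots
            s += 1
        if s <= cmax:
            load[s] += 1
            placed += 1
    return placed >= need
```

Because all placed jobs share the common deadline C_max, taking candidates in order of increasing release date and placing each as early as possible maximizes the number of jobs that fit, matching the exchange argument in the proof of Lemma 2.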

Lemma 2 There is an O(|V|k + |E|) time algorithm that, given an antichain A, an integer t, and the value S(A, t), computes R(A, t).
Proof We show that fill(A, t), defined above, fulfills all requirements. First we prove that if fill(A, t) returns 'true', it follows that R(A, t) = 1. Since S(A, t) = 1, all jobs from pred(A) can be finished at time t. Take that feasible schedule and process k−|pred(A)| jobs from min(G−pred(A)) between t and C max . This is possible because fill(A, t) is true. All predecessors of jobs in min(G − pred(A)) are in pred(A) and therefore processed before t. Hence, no precedence constraints are violated and we find a feasible schedule with the requirements, i.e. R(A, t) = 1.
For the other direction, assume that R(A, t) = 1, i.e. there is a feasible schedule σ in which exactly the jobs from pred(A) are processed on or before t and only jobs from min(G − pred(A)) are processed after t. Thus S(A, t) = 1. Define M as the set of jobs processed after t in σ. If M equals the set of k − |pred(A)| jobs of min(G − pred(A)) with the smallest release dates, we can also process the jobs of M in order of increasing release dates, so fill(A, t) will return 'true'. If M is not that set, we can replace a job of M that does not have one of the k − |pred(A)| smallest release dates by one that does and is not yet in M. This new set can still be processed between t + 1 and C_max, because smaller release dates impose weaker constraints. We keep replacing until M is exactly the set of jobs with the smallest release dates, which is therefore schedulable between t and C_max. Hence, fill(A, t) will return 'true'.
Combining all steps gives us the algorithm described in Algorithm 1. It remains to bound its runtime and argue its correctness. For any (maximal) antichain A with d(A) ≤ k we have |A| ≤ k, so each maximal antichain of depth at most k has at most 2^k subsets. By Lemma 3, each antichain is a subset of a maximal antichain with the same depth. It therefore remains to bound the number of maximal antichains of depth at most k.

Lemma 4 For any precedence graph G = (V, E), there are at most 2^k maximal antichains of depth at most k, and they can be enumerated in O(2^k(|V| + |E|)) time.

Proof Let A_k(G) be the set of maximal antichains in G with depth at most k. We prove that |A_k(G)| ≤ 2^k for any graph G by induction on k. Clearly, |A_0(G)| ≤ 1 for any graph G. Let k > 0 and assume |A_j(G)| ≤ 2^j for all j < k and any graph G. Consider a precedence graph G with minimal elements s_1, ..., s_ℓ. We partition A_k(G) into ℓ + 1 sets B_1, B_2, ..., B_{ℓ+1}. For i = 1, ..., ℓ, the set B_i is defined as the set of maximal antichains A of depth at most k with {s_{i′} : i′ < i} ⊆ A but s_i ∉ A (and no restrictions on the elements {s_{i′} : i′ > i}). If s_i ∉ A, then s_i ∈ pred(A) since A is maximal, so any such maximal antichain contains a successor of s_i. Define S_j as the set of all successors of s_j (including s_j itself). The elements s_1, ..., s_{i−1} and their successors are comparable to any antichain in B_i, so we can remove them from the graph; moreover, we can also remove s_i (but not its successors), since it is in pred(A). Thus B_i corresponds exactly to the set of maximal antichains whose depth is i less in the remaining graph G_i. The set B_{ℓ+1} consists of all antichains of A_k(G) not in some B_i, i.e. all maximal antichains A of depth at most k with {s_1, ..., s_ℓ} ⊆ A; note that B_{ℓ+1} = {{s_1, ..., s_ℓ}}, so |B_{ℓ+1}| = 1. We obtain the recurrence

|A_k(G)| ≤ ∑_{i=1}^{ℓ} |A_{k−i}(G_i)| + 1.   (1)

Notice that we may assume ℓ ≤ k, because otherwise the depth of any maximal antichain exceeds k. Using the induction hypothesis |A_j(G)| ≤ 2^j for j < k, we see by (1) that |A_k(G)| ≤ ∑_{i=1}^{ℓ} 2^{k−i} + 1 ≤ 2^k. The lemma follows since the above procedure can easily be turned into a recursive algorithm that enumerates the antichains, and by using a breadth-first search we can compute each remaining graph G_i in O(|V| + |E|) time.

Returning to (non-maximal) antichains, we can enumerate all maximal antichains of depth at most k with Lemma 4, and by Corollary 1 we can find all antichains of depth at most k by taking all subsets of the found maximal antichains.

Corollary 2 There are at most 4^k antichains A with d(A) ≤ k in any precedence graph G = (V, E), and they can be enumerated within O(4^k(|V| + |E|)) time.

Notice that this runtime is indeed correct, as it dominates both the time needed for the construction of the set A_k(G) and the time needed for taking all subsets of the maximal antichains in A_k(G) (which is 2^k · |A_k(G)|).

We now bound the number of antichains A in G_t with d_t(A) ≤ k. Take G_t to be the graph in Corollary 2 and notice that d_t(A) = d(A) for any antichain A in G_t. By Corollary 2 we obtain Lemma 5.

Lemma 5 For any t, there are at most 4^k antichains A with d_t(A) ≤ k in any precedence graph G = (V, E), and they can be enumerated within O(4^k(|V| + |E|)) time.
To compute each S(A, t), we consider at most 2^k different sets X ⊆ A, and for each antichain A we compute R(A, t) using the algorithm of Lemma 2. Since we may assume C_max ≤ k, we therefore have a total runtime of O(4^k k(2^k(|V| + |E|) + (|V|k + |E|))). Hence, Algorithm 1 runs in time O(8^k k(|V| + |E|)).

Correctness of Algorithm
To show that the algorithm described in Algorithm 1 indeed returns the correct answer, the following lemma is clearly sufficient:

Lemma 6 A feasible schedule for k jobs with makespan at most C_max exists if and only if R(A, t) = 1 for some t ≤ C_max and some antichain A with d_t(A) ≤ k.
Before we are able to prove Lemma 6, we need one more definition.

Definition 4 Let σ be a feasible schedule. Then A(σ) is the antichain such that pred(A(σ)) is exactly the set of jobs scheduled in σ. Equivalently, if X is the set of jobs processed by σ, then A(σ) = max(G[X]).

Proof of Lemma 6 The 'if' direction is immediate from the definition of R(A, t). For the 'only if' direction, among all feasible schedules that process k jobs with makespan at most C_max, let σ* be a schedule for which A(σ*) has minimal depth (with respect to C_max). We now define t and B such that R(B, t) = 1. Let t be the last time at which σ* processes a job not in max(G[pred(A(σ*))]), let M be the set of jobs processed after t in σ*, and let B be the antichain with pred(B) equal to the set of jobs processed on or before t in σ*. We distinguish two cases.

First suppose d_t(B) ≤ k. Then σ* witnesses that all jobs of pred(B) are processed on or before t and that only jobs from min(G − pred(B)) are processed after t. So, by definition, R(B, t) = 1.

Now suppose d_t(B) > k. In this case we prove that there is a feasible schedule σ with d(A(σ)) < d(A(σ*)), i.e. we find a contradiction to the fact that d(A(σ*)) is minimal. This σ can be found as follows: take the schedule σ* only up until time t. Let C be a subset of min(G_t − comp(B)) with |C| = k − |pred(B)|; such a C exists since d_t(B) > k. Process the jobs of C after time t in σ. These can all be processed without violating precedence constraints or release dates, since their predecessors were already scheduled and C ⊆ G_t. So we find a feasible schedule σ that processes k jobs. In the last step we prove d(B) < d(A(σ*)), which gives us d(A(σ)) < d(A(σ*)).

We are left to show that d(B) < d(A(σ*)). Remember that t was chosen such that a job processed at time t is not in max(G[pred(A(σ*))]); in other words, there is a job x ∈ B processed in σ* at time t and a job y ∈ M with x ≺ y. Note that y ∉ D(B): since y ∈ M, y is not in pred(B), and y is comparable to x ∈ B, so y ∉ min(G_t − comp(B)). However, y ∈ D(A(σ*)), so with D(B) ⊆ D(A(σ*)) we find that d(A(σ)) = d(B) < d(A(σ*)). Hence we found a feasible schedule with smaller depth d(A(σ)), which is the desired contradiction.

Result Types B and D: One Machine and Precedence Constraints
In this section we show that Algorithm 1 cannot be generalized even slightly: if we allow job-dependent deadlines or non-unit processing times, the problem becomes W[1]-hard parameterized by k and cannot be solved in n^{o(k/log k)} time unless the ETH fails. In the following reductions we reduce to the variant of the scheduling problem that asks whether there is a feasible schedule with makespan at most C_max, where C_max is given as part of the input. If such a schedule exists, we call the instance a yes instance, and otherwise a no instance. We may restrict ourselves to this decision variant by a standard binary search over C_max.

Job-Dependent Deadlines
The fact that combining precedence constraints with job-dependent deadlines makes the problem W[1]-hard is a direct consequence of the fact that 1|prec, p_j = 1|Σ_j U_j is W[1]-hard parameterized by k = n − Σ_j U_j, where n is the number of jobs [9]. It is important to notice that the notation of this problem implies that each job can have its own deadline. Hence, we conclude that 1|d_j, prec, p_j = 1|k-sched, C_max is W[1]-hard parameterized by k. The underlying reduction from k-Clique yields a quadratic blow-up of the parameter, and thus only excludes algorithms with running time n^{o(√k)}. Based on the Exponential Time Hypothesis, we now sharpen this lower bound with a reduction from 3-Coloring:

Theorem 3 1|d_j, prec, p_j = 1|k-sched, C_max is W[1]-hard parameterized by k. Furthermore, assuming the ETH, there is no algorithm solving 1|d_j, prec, p_j = 1|k-sched, C_max in 2^{o(n)} time, where n is the number of jobs.
Proof The proof will be a reduction from 3-Coloring, for which no 2^{o(|V|+|E|)} time algorithm exists under the Exponential Time Hypothesis [7, pages 471–473]. Let the graph G = (V, E) be the instance of 3-Coloring with |V| = n′ and |E| = m′. We label the vertices v_1, ..., v_{n′} and the edges e_1, ..., e_{m′}. We then create the following instance of 1|d_j, prec, p_j = 1|k-sched, C_max.
• For each vertex v_i ∈ V, create 6 jobs v^a_i and w^a_i for each color a ∈ {1, 2, 3}. These jobs represent which color each vertex is assigned (for instance, if v^1_i and w^1_i are processed, vertex i gets color 1).
• For each edge e_j ∈ E, create 12 jobs e^{ab}_j and f^{ab}_j for each ordered pair of distinct colors a, b ∈ {1, 2, 3}.

We now prove that the created instance is a yes instance if and only if the original 3-Coloring instance is a yes instance. Assume that there is a 3-coloring of the graph G = (V, E). Then there is also a feasible schedule: For each vertex v_i with color a, process the jobs v^a_i and w^a_i at their respective deadlines. For each edge e_j = {u, v} with u colored a and v colored b, process the jobs e^{ab}_j and f^{ab}_j exactly at their respective deadlines. Notice that because we started from a proper 3-coloring, each edge has endpoints of different colors, so these jobs exist. Also note that no two jobs are processed at the same time. Exactly 2n′ + 2m′ jobs are processed before time 2n′ + 2m′. Furthermore, no precedence constraints are violated.
For the other direction, assume that we have a feasible schedule in our created instance of 1|d_j, prec, p_j = 1|k-sched, C_max. Let V_i = {v^1_i, v^2_i, v^3_i} and W_i = {w^1_i, w^2_i, w^3_i} for i = 1, ..., n′, and let E_j = {e^{12}_j, e^{13}_j, e^{21}_j, e^{23}_j, e^{31}_j, e^{32}_j} and F_j = {f^{12}_j, f^{13}_j, f^{21}_j, f^{23}_j, f^{31}_j, f^{32}_j}. We show by induction on i that out of each of the sets V_i, W_i, E_j and F_j, exactly one job is scheduled, at its respective deadline.
Since we have a feasible schedule, at time 2n′ + 2m′ one of the jobs of W_1 must be scheduled, since these are the only jobs with a deadline greater than 2n′ + 2m′ − 1. If w^a_1 is scheduled at time 2n′ + 2m′, then the job v^a_1 must be processed at time 1 because of the precedence constraints and since its deadline is 1. No other jobs from V_1 and W_1 can be processed, due to their deadlines and precedence constraints. Now assume that each of the sets V_1, ..., V_{i−1}, W_1, ..., W_{i−1} has exactly one job scheduled at its respective deadline, and that no further jobs from these sets can be processed. Since we have a feasible schedule, some job must be scheduled at time 2n′ + 2m′ − (i − 1). Since no more jobs from W_1, ..., W_{i−1} can be scheduled, the only candidates are the jobs from W_i, as they are the only other jobs with a deadline greater than 2n′ + 2m′ − i. If w^a_i is scheduled at time 2n′ + 2m′ − (i − 1), then the job v^a_i must be processed at time i because of the precedence constraints, because its deadline is i, and because at times 1, ..., i − 1 other jobs had to be processed. Also, no other job from V_i can be processed in the schedule, since they all have deadline i. As a consequence, no other job from W_i can be processed either, because of their precedence constraints. So the statement holds for all sets V_i and W_i. In the exact same way, one concludes the same for all sets E_j and F_j.
Because of this, we see that each vertex has received a color from the schedule. These colors must form a proper 3-coloring, because a job from E_j can only be processed if the two endpoints of e_j received two different colors. Hence, the 3-Coloring instance is a yes instance.
As k = 2n′ + 2m′ and the total number of jobs is linear in n′ + m′, we therefore conclude that there is no 2^{o(n)} time algorithm under the ETH.
Note that this bound significantly improves the old lower bound of 2^{o(√k log n)} implied by the reduction from k-Clique. Since k ≤ n and n/log n is an increasing function, Theorem 3 implies the following.

Corollary 3 Assuming ETH, there is no algorithm solving 1|d_j, prec, p_j = 1|k-sched, C_max in n^{o(k/log k)} time, where n is the number of jobs.

Non-unit Processing Times
We show that non-unit processing times combined with precedence constraints make the problem W[1]-hard, even on one machine. The proof of Theorem 4 heavily builds on the reduction from k-Clique to k-Tasks On Time by Fellows and McCartin [9].

Theorem 4 1|prec|k-sched, C_max is W[1]-hard parameterized by k, and cannot be solved in n^{o(√k)} time assuming the ETH.
Proof The proof is a reduction from k-Clique. We start with G = (V, E), an instance of k-Clique. For each vertex v ∈ V, create a job J_v with processing time p(J_v) = 2. For each edge e ∈ E, create a job J_e with processing time p(J_e) = 1. Now for each edge e = {u, v}, add the following two precedence relations: J_u ≺ J_e and J_v ≺ J_e, so before one can process a job associated with an edge, both jobs associated with the endpoints of that edge need to be finished. Now let k′ = k + k(k − 1)/2 and C_max = 2k + k(k − 1)/2. We will now prove that 1|prec|k′-sched, C_max is a yes instance if and only if the k-Clique instance is a yes instance.
Assume that the k-Clique instance is a yes instance. Then process first the k jobs associated with the vertices of the k-clique, and next the k(k − 1)/2 jobs associated with the edges of the k-clique. In total, k + k(k − 1)/2 = k′ jobs are processed with a makespan of 2k + k(k − 1)/2. Hence, the instance of 1|prec|k′-sched, C_max is a yes instance.
For the other direction, assume 1|prec|k′-sched, C_max to be a yes instance, so there exists a feasible schedule. In any feasible schedule, if one schedules ℓ jobs associated with vertices, then at most ℓ(ℓ − 1)/2 jobs associated with edges can be processed, because of the precedence constraints. However, because k′ = k + k(k − 1)/2 jobs are completed in the feasible schedule before C_max = 2k + k(k − 1)/2, at most k jobs associated with vertices can be processed, because they have processing time 2. Hence, we can conclude that exactly k vertex jobs and k(k − 1)/2 edge jobs are processed. Hence, there are k vertices connected through k(k − 1)/2 edges, which is a k-clique.
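For concreteness, the construction in this proof can be sketched as follows (the function name, instance representation, and job encoding are our own, not from the paper):

```python
def clique_to_scheduling(vertices, edges, k):
    """Build the 1|prec|k'-sched, C_max instance of the reduction above.

    Returns processing times per job, precedence pairs (a before b),
    the number k' of jobs to schedule, and the makespan bound C_max.
    """
    proc, prec = {}, []
    for v in vertices:                           # vertex jobs: processing time 2
        proc[("v", v)] = 2
    for (u, v) in edges:                         # edge jobs: processing time 1
        proc[("e", (u, v))] = 1
        prec.append((("v", u), ("e", (u, v))))   # both endpoint jobs must
        prec.append((("v", v), ("e", (u, v))))   # finish before the edge job
    k_prime = k + k * (k - 1) // 2               # k' = k + k(k-1)/2
    c_max = 2 * k + k * (k - 1) // 2             # C_max = 2k + k(k-1)/2
    return proc, prec, k_prime, c_max
```

For a triangle and k = 3, for instance, scheduling the three vertex jobs followed by the three edge jobs takes time 3·2 + 3·1 = 9, meeting C_max exactly.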
The proofs of Theorem 6 and Corollary 4 are reductions from Partitioned Subgraph Isomorphism. Let P = (V′, E′) be a 'pattern' graph, G = (V, E) be a 'target' graph, and χ : V → V′ a 'coloring' of the vertices of G with vertices of P. A χ-colorful P-subgraph of G is a mapping ϕ : V′ → V such that (1) for each {u, v} ∈ E′ it holds that {ϕ(u), ϕ(v)} ∈ E, and (2) for each u ∈ V′ it holds that χ(ϕ(u)) = u. If χ and G are clear from the context, they may be omitted in this definition.
Theorem 5 (Marx [19]) Partitioned Subgraph Isomorphism cannot be solved in n^{o(|E′|/log |E′|)} time assuming the Exponential Time Hypothesis (ETH), where n is the size of the input.
We will now reduce Partitioned Subgraph Isomorphism to 1|prec, r_j|k-sched, C_max in order to prove the following.

Theorem 6 1|prec, r_j|k-sched, C_max cannot be solved in n^{o(k/log k)} time assuming the Exponential Time Hypothesis (ETH).
Proof Let s = |V′| and define t_i = Σ_{j=1}^{i} 3^{s+1−j} for i = 0, ..., s (so t_0 = 0). Construct the following jobs for the instance of the 1|prec, r_j|k-sched, C_max problem:

• For each vertex v ∈ V, create a vertex job J_v with processing time 3^{s+1−χ(v)} and release date t_{χ(v)−1}.
• For each edge {v, w} ∈ E with {χ(v), χ(w)} ∈ E′, create an edge job J_{{v,w}} with processing time 1, release date t_s, and precedence constraints J_v ≺ J_{{v,w}} and J_w ≺ J_{{v,w}}.

Then ask whether there exists a solution to the scheduling problem for k = s + |E′| with makespan C_max = t_s + |E′|.

Let the Partitioned Subgraph Isomorphism instance be a yes instance and let ϕ : V′ → V be a colorful P-subgraph. We claim the following schedule is feasible: for i = 1, ..., s, process the vertex job J_{ϕ(i)} at its release date t_{i−1}, and after time t_s process the edge jobs J_{{ϕ(i),ϕ(i′)}} for all {i, i′} ∈ E′. Notice that all jobs are indeed processed after their release date and that in total there are k = s + |E′| jobs processed before C_max = t_s + |E′|. Furthermore, all precedence constraints are respected, as any edge job is processed after both its predecessors. Also, the edge jobs J_{{ϕ(i),ϕ(i′)}} must exist, as ϕ is a properly colored P-subgraph. Therefore, we can conclude that this schedule is indeed feasible.

For the other direction, assume that there is a solution to the created instance of 1|prec, r_j|k-sched, C_max. Define J_i = {J_v : χ(v) = i}. We will first prove that at most one job from each set J_i can be processed before time t_s. Any job in J_i has release date t_{i−1} = Σ_{j=1}^{i−1} 3^{s+1−j}. Therefore, there is only t_s − t_{i−1} = Σ_{j=i}^{s} 3^{s+1−j} time left to process jobs from J_i before time t_s. However, the processing time of any job in J_i is 3^{s+1−i}, and since 2 · 3^{s+1−i} > Σ_{j=i}^{s} 3^{s+1−j}, at most one job from J_i can be processed before t_s. Since all jobs not in some J_i have their release date at t_s, at most s jobs are processed before time t_s. Thus, at time t_s there are |E′| time units left to process |E′| jobs, because of the choice of k and the makespan. Hence, the only way to get a feasible schedule is to process exactly one job from each set J_i at its respective release date and to process exactly |E′| edge jobs after time t_s.
Let v_i be the vertex such that J_{v_i} was processed in the feasible schedule with color i. We will show that ϕ : V′ → V, defined as ϕ(i) = v_i, is a properly colored P-subgraph of G. Hence, we are left to prove that for each {i, i′} ∈ E′ the edge {ϕ(i), ϕ(i′)} is in E, i.e. that for each {i, i′} ∈ E′ the job J_{{ϕ(i),ϕ(i′)}} was processed. Because only the vertex jobs J_{ϕ(1)}, J_{ϕ(2)}, ..., J_{ϕ(s)} were processed, the precedence constraints only allow edge jobs of the form J_{{ϕ(i),ϕ(i′)}} to be processed. We created edge job J_{{v,w}} if and only if {v, w} ∈ E and {χ(v), χ(w)} ∈ E′; hence the |E′| processed edge jobs must be exactly the edge jobs J_{{ϕ(i),ϕ(i′)}} for {i, i′} ∈ E′. Therefore, we proved that ϕ is indeed a colorful P-subgraph of G.
Notice that k = s + |E′| ≤ 3|E′|, as we may assume the number of vertices in P is at most 2|E′|. The given bound follows.
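The counting argument in the proof above rests on the geometric-sum inequality 2 · 3^{s+1−i} > Σ_{j=i}^{s} 3^{s+1−j}. A quick numeric sanity check (the concrete values of s below are arbitrary choices for illustration):

```python
def t(i, s):
    # t_i = sum of 3^{s+1-j} for j = 1, ..., i  (so t_0 = 0)
    return sum(3 ** (s + 1 - j) for j in range(1, i + 1))

def two_jobs_never_fit(s):
    # Two jobs of color class J_i have total length 2 * 3^{s+1-i}, which must
    # exceed the window t_s - t_{i-1} available before t_s, for every i.
    return all(2 * 3 ** (s + 1 - i) > t(s, s) - t(i - 1, s)
               for i in range(1, s + 1))
```

This holds because Σ_{j=i}^{s} 3^{s+1−j} = (3^{s+2−i} − 3)/2 < 2 · 3^{s+1−i}.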

Corollary 4 2|prec|k-sched, C_max cannot be solved in n^{o(k/log k)} time assuming the Exponential Time Hypothesis (ETH).
Proof We can use the same reduction from Partitioned Subgraph Isomorphism as in the proof of Theorem 6, except for the release dates, as they are not allowed in this type of scheduling problem. To simulate the release dates, we use the second machine as a 'release date machine', meaning that we create a job for each upcoming release date and require all these new jobs to be processed. More formally: for i = 1, ..., s, create a job J_{r_i} with processing time 3^{s+1−i} and precedence constraints J_{r_i} ≺ J for any job J that had release date t_i in the original reduction; furthermore, let J_{r_i} ≺ J_{r_{i+1}}. Then we add |E′| jobs with processing time 1, each preceded by J_{r_s}. We then ask whether there exists a feasible schedule with k = 2s + 2|E′| and with makespan t_s + |E′|. All newly added jobs are required in any feasible schedule, and therefore all other arguments from the previous reduction still hold. Finally, note that k is again linear in |E′|.

Result Type G: k-scheduling without Precedence Constraints
The problem P||k-sched, C_max cannot be solved in O*(2^{o(k)}) time assuming the ETH, since there is a reduction from Subset Sum, for which 2^{o(n)} time algorithms were excluded by Jansen et al. [15].
We show that the problem is fixed-parameter tractable in k with a matching runtime, even in the case of unrelated machines, release dates and deadlines, denoted by R|r_j, d_j|k-sched, C_max.

Theorem 7 R|r_j, d_j|k-sched, C_max is fixed-parameter tractable in k and can be solved in O*((2e)^k k^{O(log k)}) time.

Proof We give an algorithm that solves any instance of R|r_j, d_j|k-sched, C_max within O*((2e)^k k^{O(log k)}) time. The algorithm is a randomized algorithm that uses the color coding method; it can be derandomized as described by Alon et al. [1]. The algorithm first (randomly) picks a coloring c : {1, ..., n} → {1, ..., k}, so each job is given one of the k available colors. We then compute whether there is a feasible colorful schedule, i.e. a feasible schedule that processes exactly one job of each color. If such a colorful schedule can be found, then it is possible to schedule at least k jobs before C_max.
Given a coloring c, we compute whether there exists a colorful schedule in the following way. Define, for 1 ≤ i ≤ m and X ⊆ {1, ..., k}, B_i(X) to be the minimum makespan over all feasible schedules on machine i that process |X| jobs, one of each color in X.
Clearly B_i(∅) = 0, and all values B_i(X) can be computed in O(2^k n) time using the following recurrence:

Lemma 7 For every machine i and nonempty X ⊆ {1, ..., k}, B_i(X) = min over all jobs j with c(j) ∈ X and max(B_i(X \ {c(j)}), r_j) + p_{ij} ≤ d_j of the value max(B_i(X \ {c(j)}), r_j) + p_{ij} (and B_i(X) = ∞ if no such job exists).

Proof In a schedule on one machine with |X| jobs using all colors from X, one job is scheduled last, and it determines the makespan. So for all possible jobs j, we compute what the minimal end time would be if j was scheduled at the end of the schedule. This j cannot start before its release date or before all other colors are scheduled.
Next, define for 1 ≤ i ≤ m and X ⊆ [k] the indicator A_i(X) to be 1 if B_i(X) ≤ C_max, and 0 otherwise. So A_i(X) = 1 if and only if |X| jobs, each from a different color of X, can be scheduled on machine i before C_max. A colorful feasible schedule exists if and only if there is some partition X_1, ..., X_m of {1, ..., k} such that Π_{i=1}^{m} A_i(X_i) = 1, which in turn holds if and only if (A_1 * ··· * A_m)({1, ..., k}) > 0, where * denotes the subset convolution. The value (A_1 * ··· * A_m)({1, ..., k}) can be computed in 2^k k^{O(1)} time using fast subset convolution [3].
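The dynamic program over B_i(X) and the machine-combination step can be sketched as follows. This is our own illustration, not the paper's implementation: the recurrence follows the sketch of Lemma 7 above, and the machines are combined by a naive O(3^k · m) subset DP standing in for fast subset convolution.

```python
def colorful_feasible(jobs, m, k, coloring, c_max):
    """Decide whether a colorful schedule exists for one fixed coloring.

    jobs[j] = (r_j, d_j, p_j) where p_j[i] is job j's length on machine i;
    coloring[j] is a color in {0, ..., k-1}, encoded as a bit position.
    """
    INF = float("inf")
    full = (1 << k) - 1
    A = []
    for i in range(m):
        B = [INF] * (1 << k)                     # B[X]: min makespan, colors X
        B[0] = 0
        for X in range(1, 1 << k):
            for j, (r, d, p) in enumerate(jobs):
                c = coloring[j]
                if X >> c & 1 and B[X ^ (1 << c)] < INF:
                    finish = max(B[X ^ (1 << c)], r) + p[i]
                    if finish <= d:              # job j's deadline must hold
                        B[X] = min(B[X], finish)
        A.append([1 if B[X] <= c_max else 0 for X in range(1 << k)])
    # reach[X] = 1 iff color set X can be split among the machines seen so far
    reach = [1] + [0] * full
    for i in range(m):
        new = [0] * (1 << k)
        for X in range(1 << k):
            if reach[X]:
                rest = full ^ X
                Y = rest
                while True:                      # enumerate all subsets Y of rest
                    if A[i][Y]:
                        new[X | Y] = 1
                    if Y == 0:
                        break
                    Y = (Y - 1) & rest
        reach = new
    return bool(reach[full])
```

Replacing the final subset DP by fast subset convolution lowers the combination step from O(3^k) to 2^k k^{O(1)} per convolution, as used in the proof.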
An overview of the randomized algorithm is given in Algorithm 2: (1) pick a coloring c : {1, ..., n} → {1, ..., k} uniformly at random; (2) compute all values B_i(X) using Lemma 7; (3) set A_i(X) = 1 if B_i(X) ≤ C_max, and A_i(X) = 0 otherwise; (4) compute (A_1 * ··· * A_m)({1, ..., k}) using fast subset convolution [3] and return true if it is positive. If the k jobs that are processed in an optimal solution are all assigned different colors, the algorithm outputs true. By a standard analysis, k fixed jobs are all assigned different colors with probability at least 1/e^k, and thus e^k independent trials suffice to reduce the error probability of the algorithm to at most 1/2. By using the standard methods of Alon et al. [1], Algorithm 2 can be derandomized.

For completeness and the reader's convenience, we explain in this section for each row of Table 1 how the upper and lower bounds are obtained.
First notice that the most general variant R|r_j, d_j, prec|k-sched, C_max can be solved in n^{O(k)} time as follows: Guess for each machine the set of jobs that are scheduled on it, and guess how they are ordered in an optimal solution, to get sequences σ_1, ..., σ_m with a joint length equal to k. For each such (σ_1, ..., σ_m), run the following simple greedy algorithm to determine the minimum makespan achieved by a feasible schedule that schedules for each machine i the jobs as described in σ_i: Iterate t = 1, ..., n and schedule the job σ_i(t) at machine i as early as possible without violating release dates, deadlines and precedence constraints (if this is not possible, return NO). Since each optimal schedule can be assumed to be normalized in the sense that no single job can be executed earlier, it is easy to see that this algorithm returns an optimal schedule for some choice of σ_1, ..., σ_m. Since there are only n^{O(k)} different sequences σ_1, ..., σ_m of combined length k, the runtime follows.
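A sketch of this greedy feasibility check for a fixed guess of sequences (the round-robin interleaving and the data layout are our own simplification; the paper iterates over a single global index t):

```python
def greedy_makespan(seqs, release, deadline, proc, preds):
    """Given guessed per-machine job sequences, place each next job as early
    as possible subject to release dates, deadlines and precedence
    constraints; return the makespan, or None if the guess is infeasible."""
    finish = {}                        # completion time of scheduled jobs
    free = [0] * len(seqs)             # earliest idle time per machine
    pos = [0] * len(seqs)              # next position in each sequence
    total = sum(len(s) for s in seqs)
    while len(finish) < total:
        progressed = False
        for i, seq in enumerate(seqs):
            if pos[i] == len(seq):
                continue
            j = seq[pos[i]]
            if any(p not in finish for p in preds.get(j, ())):
                continue               # wait for predecessors on other machines
            start = max([free[i], release.get(j, 0)]
                        + [finish[p] for p in preds.get(j, ())])
            end = start + proc[j]
            if end > deadline.get(j, float("inf")):
                return None            # job j would miss its deadline
            finish[j] = end
            free[i] = end
            pos[i] += 1
            progressed = True
        if not progressed:
            return None                # circular wait: the guess is infeasible
    return max(finish.values(), default=0)
```

Running this for all n^{O(k)} sequence guesses and taking the minimum returned makespan yields the claimed algorithm.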

Cases 1–2 The polynomial time algorithms behind result [A] are obtained by a straightforward greedy algorithm: For 1|r_j, prec, p_j = 1|k-sched, C_max, build the schedule from beginning to end, and schedule an arbitrary available job at each step; if no job is available, wait until one becomes available.

Cases 3–4, 7–8 The given lower bound is by Corollary 3.
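This greedy rule for 1|r_j, prec, p_j = 1|k-sched, C_max can be sketched as a feasibility check (the interface below is our own):

```python
def greedy_unit(n_jobs, release, preds, k, c_max):
    """Schedule unit jobs on one machine greedily: in each time slot, run an
    arbitrary available job (released and with all predecessors done).
    Returns True iff at least k jobs complete by time c_max."""
    done = set()
    for t in range(c_max):             # the job in slot t completes at t + 1
        for j in range(n_jobs):
            if (j not in done
                    and release.get(j, 0) <= t
                    and all(p in done for p in preds.get(j, ()))):
                done.add(j)            # one machine: at most one job per slot
                break
        if len(done) >= k:
            return True
    return len(done) >= k
```

With unit jobs any available job is as good as any other, which is why the arbitrary choice suffices here.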
Cases 5–6 The upper bound is by the algorithm of Theorem 1. The lower bound is due to a reduction by Jansen et al. [14]: if no subexponential time algorithm for the Biclique problem exists, then there is no n^{o(k)} time algorithm for these problems.

Case 9 The lower bound is by Theorem 4, which is a reduction from k-Clique that heavily builds on the reduction from k-Clique to k-Tasks On Time by Fellows and McCartin [9]. This reduction increases the parameter k to Θ(k²), hence the lower bound of n^{o(√k)}.

Cases 10–20 The given lower bound is by Theorem 6, which is a reduction from Partitioned Subgraph Isomorphism. It is conjectured that no algorithm solves Partitioned Subgraph Isomorphism in n^{o(k)} time assuming the ETH, which would imply that the n^{O(k)} algorithm for these problems cannot be improved significantly.

Cases 21–28 Result [E] is established by a simple greedy algorithm that always schedules an available job with the earliest deadline.

Cases 29–31 Result [F] is a consequence of Moore's algorithm [23], which solves the problem 1||Σ_j U_j in O(n log n) time. The algorithm creates a sequence j_1, ..., j_n of all jobs in earliest due date order. It then repeats the following steps: It tries to process the sequence (in the given order) on one machine. Let j_i be the first job in the sequence that is late; then a job from j_1, ..., j_i with maximal processing time is removed from the sequence. If all jobs are on time, it returns the sequence followed by the jobs that have been removed from it. Notice that this also solves the problem 1|r_j|k-sched, C_max, by reversing the schedule and viewing the release dates as deadlines.

Case 32 The lower bound for this problem is a direct consequence of the reduction from Knapsack to 1|r_j|Σ_j U_j by Lenstra et al. [17], which is a linear reduction. Jansen et al. [15] showed that Subset Sum (and thus also Knapsack) cannot be solved in 2^{o(n)} time assuming the ETH.

Cases 33–40 The problem 2||C_max is equivalent to Subset Sum and can therefore not be solved in 2^{o(n)} time assuming the ETH, as shown by Jansen et al. [15]. Its generalizations, in particular those mentioned in cases 33–40, inherit the same lower bound. The upper bound is by the algorithm of Theorem 7.
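Moore's algorithm as described above admits a compact implementation with a max-heap (a standard Moore–Hodgson sketch; the interface is our own):

```python
import heapq

def moore_hodgson(jobs):
    """jobs is a list of (processing_time, due_date) pairs.  Returns the
    maximum number of jobs that can be completed on time on one machine."""
    accepted = []                           # max-heap via negated lengths
    time = 0
    for p, d in sorted(jobs, key=lambda jd: jd[1]):   # earliest due date first
        heapq.heappush(accepted, -p)
        time += p
        if time > d:                        # sequence is late: drop the longest
            time += heapq.heappop(accepted) # pop returns -max_p, so time shrinks
    return len(accepted)
```

Dropping the longest accepted job whenever the current job is late is exactly the removal rule of the algorithm; the heap makes each removal O(log n).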

Concluding Remarks
We classified all studied variants of partial scheduling parameterized by the number k of jobs to be scheduled as either in P, NP-complete and fixed-parameter tractable in k, or W[1]-hard parameterized by k. Our main technical contribution is an O(8^k k(|V| + |E|)) time algorithm for P|r_j, prec, p_j = 1|k-sched, C_max. In a fine-grained sense, the cases we left open are cases 3–20 from Table 1. We believe that the algorithms in rows 5–6 and 10–20 are in fact optimal: an n^{o(k)} time algorithm for any case of result type [C] or [D] would imply either a 2^{o(n)} time algorithm for Biclique or an n^{o(k)} time algorithm for Partitioned Subgraph Isomorphism, both of which would be surprising. It would be interesting to see whether a 'subexponential' time algorithm exists for any of the remaining cases with precedence constraints and unit processing times.
A related case is P3|prec, p_j = 1|C_max (where P3 denotes three machines). It is a famously hard open question (see e.g. [10]) whether this problem can be solved in polynomial time, but it may already be worthwhile to ask whether it can be solved in subexponential time, e.g. 2^{o(n)}.