In this section we show that for a fixed number m of machines and a fixed ordering \(\sigma \) of the jobs, we can construct a polynomial-time dynamic program that finds a permutation schedule which is optimal among all schedules obeying the job ordering \(\sigma \). The algorithm works for an arbitrary regular sum or bottleneck objective function. Combined with Theorem 3, this shows that the makespan and the total completion time in a PFB can be minimized in polynomial time for fixed m.
The dynamic program is based on the important observation that, for a given machine \(M_i\), the set of possible job completion times on \(M_i\) is not too large. In order to formalize this statement, we introduce the notion of a schedule being \(\varGamma \)-active. For ease of notation, we use the shorthand \([k]=\{1,2,\ldots ,k\}\).
Definition 5
Given an instance of PFB with n jobs and m machines, let
$$\begin{aligned} \varGamma _i=\left\{ r_{j'} + \sum _{i'=1}^{i}\lambda _{i'}p_{i'}\mathrel {}\Bigg \vert \mathrel {}j'\in [n],\ \lambda _{i'}\in [n]\text { for } i'\in [i] \right\} \end{aligned}$$
for all \(i \in \left[ m\right] \). We say that a schedule S is \(\varGamma \)-active, if \(c_{ij}(S) \in \varGamma _i\), for any job \(J_j\), \(j \in \left[ n\right] \), and any machine \(M_i\), \(i \in \left[ m\right] \).
Note that on machine \(M_i\), there are only \(|\varGamma _i|\le n^{i+1}\le n^{m+1}\) possible job completion times to consider for any \(\varGamma \)-active schedule S.
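As a concrete illustration (not part of the original text), the sets \(\varGamma _i\) satisfy the recursion \(\varGamma _i=\{x+\lambda p_i \mid x\in \varGamma _{i-1},\ \lambda \in [n]\}\) with \(\varGamma _0=\{r_1,\dots ,r_n\}\), which makes them easy to enumerate. The following Python sketch does so for an invented toy instance; the function name and the data are ours.

```python
# Enumerate the sets Gamma_i of Definition 5 for a toy PFB instance.
# The instance data (r, p) is illustrative, not from the text.

def gamma_sets(r, p):
    """Gamma_i = { r_j + sum_{i'=1}^{i} lam_i' * p_i' : lam_i' in {1,...,n} },
    built via the recursion Gamma_i = { x + lam * p_i : x in Gamma_{i-1} }."""
    n = len(r)
    gammas, prev = [], set(r)        # prev plays the role of Gamma_0
    for p_i in p:
        prev = {x + lam * p_i for x in prev for lam in range(1, n + 1)}
        gammas.append(prev)
    return gammas

r = [0, 1, 2]    # release dates, n = 3 jobs
p = [2, 3]       # batch processing times, m = 2 machines
for i, G in enumerate(gamma_sets(r, p), start=1):
    assert len(G) <= len(r) ** (i + 1)   # |Gamma_i| <= n^(i+1), as noted above
    print(i, sorted(G))
```

Each step multiplies the number of candidate values by at most n, which is exactly the counting behind the bound \(|\varGamma _i|\le n^{i+1}\).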
Now we show that for a PFB problem with a regular objective function, any schedule can be transformed into a \(\varGamma \)-active schedule, without increasing the objective value. To do so, we prove that this is true even for a slightly stronger concept than \(\varGamma \)-active.
Definition 6
A schedule is called batch-active if no batch can be started earlier without violating the feasibility of the schedule or changing the order of the batches.
Clearly, given a schedule S that is not batch-active, by successively removing unnecessary idle times we can obtain a new, batch-active schedule \(S^\prime \). Furthermore, for regular objective functions, \(S^\prime \) has objective value no higher than the original schedule S. These two observations immediately yield the following lemma.
Lemma 7
A schedule for a PFB can always be transformed into a batch-active schedule such that any regular objective function is not increased. This transformation does not change the order in which the jobs are processed on the machines. \(\square \)
Now we show that, indeed, being batch-active is stronger than being \(\varGamma \)-active, in other words that every batch-active schedule is also \(\varGamma \)-active. This result generalizes an observation made by Baptiste (2000, Section 2.1) from a single machine to the flow shop setting.
Lemma 8
In a PFB, any batch-active schedule is also \(\varGamma \)-active.
Proof
Fix some \(i\in [m]\) and \(j\in [n]\). Let \(B^{(i)}_\ell \) be the batch which contains job \(J_j\) on machine \(M_i\). Since the schedule is batch-active, \(B^{(i)}_\ell \) is either started at the completion time of the previous batch \(B^{(i)}_{\ell -1}\) on the same machine or as soon as all jobs of \(B^{(i)}_\ell \) are available on machine \(M_i\). In the former case, all jobs \(J_{j'}\in B^{(i)}_{\ell -1}\) satisfy \(c_{ij} = c_{ij'}+p_i\). In the latter case, there is a job \(J_{j''}\in B^{(i)}_\ell \) such that \(c_{ij}=c_{(i-1)j''}+p_i\), where we write \(c_{0j''}=r_{j''}\) for convenience. The claim follows inductively by observing that the former case can happen at most \(n-1\) times in a row, since there are at most n batches on each machine. \(\square \)
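The statement of Lemma 8 can also be checked numerically. The following sketch (function names and the instance are ours) builds a batch-active permutation schedule greedily, starting each batch as early as feasibility permits, and verifies that every completion time lies in the corresponding set \(\varGamma _i\).

```python
# Numerical check of Lemma 8 on a toy instance: a batch-active permutation
# schedule only produces completion times from the sets Gamma_i.

def gamma_sets(r, p):
    n, gammas, prev = len(r), [], set(r)
    for p_i in p:
        prev = {x + lam * p_i for x in prev for lam in range(1, n + 1)}
        gammas.append(prev)
    return gammas

def batch_active_completions(r, p, batchings):
    """batchings[i]: consecutive batch sizes on machine M_{i+1}, jobs in index order."""
    all_c, c_prev = [], list(r)            # c_{0j} = r_j, as in the proof
    for p_i, sizes in zip(p, batchings):
        c, machine_free, first = [0] * len(r), 0, 0
        for size in sizes:
            jobs = range(first, first + size)
            # batch-active: start as early as the previous batch and the
            # availability of all batch members on the previous machine allow
            start = max([machine_free] + [c_prev[j] for j in jobs])
            for j in jobs:
                c[j] = start + p_i
            machine_free, first = start + p_i, first + size
        all_c.append(c)
        c_prev = c
    return all_c

r, p = [0, 1, 2], [2, 3]
c = batch_active_completions(r, p, [(1, 2), (1, 2)])   # batch J_2, J_3 together
for G, row in zip(gamma_sets(r, p), c):
    assert all(t in G for t in row)        # Lemma 8 holds on this instance
print(c)   # → [[2, 4, 4], [5, 8, 8]]
```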
Together, Lemmas 7 and 8 imply the following desired property.
Lemma 9
Let S be a schedule for a PFB problem with regular objective function f. Then there exists a schedule \(S^\prime \) for the same PFB problem, such that
1. On each machine, jobs appear in the same order in \(S^\prime \) as they do in S,
2. \(S^\prime \) has objective value no larger than S, and
3. \(S'\) is \(\varGamma \)-active. \(\square \)
In particular, Lemma 9 shows that, if for a PFB problem an optimal job ordering \(\sigma ^*\) is given, then there exists an optimal schedule which is a permutation schedule with jobs ordered by \(\sigma ^*\) and in addition \(\varGamma \)-active.
From this point on, we assume that the objective function is a regular sum or bottleneck function, that is, \(f(C) = \bigoplus _{j=1}^n f_j(C_j)\), where \(\bigoplus \in \{\sum ,\max \}\) and \(f_j\) is nondecreasing for all \(j\in [n]\). We also use the symbol \(\oplus \) as a binary operator. In what follows, we present a dynamic program which, given a job ordering \(\sigma \), finds a schedule that is optimal among all \(\varGamma \)-active permutation schedules obeying job ordering \(\sigma \). For simplicity, we assume that jobs are already indexed by \(\sigma \). The dynamic program schedules the jobs one after the other, until all jobs are scheduled. Due to Lemma 9, if \(\sigma \) is an optimal ordering, then the resulting schedule is optimal.
Given an instance I of a PFB and a job index \(j\in [n]\), let \(I_j\) be the modified instance which contains only the jobs \(J_1,\dots ,J_j\). We write \(\varGamma =\varGamma _1 \times \varGamma _2 \times \ldots \times \varGamma _m\) and \({\mathcal {B}}=\left[ b_1\right] \times \left[ b_2\right] \times \ldots \times \left[ b_m\right] \), where \(\times \) is the standard Cartesian product. For a vector
$$\begin{aligned} t = (t_1, t_2, \ldots , t_i, \ldots , t_{m-1}, t_m) \in \varGamma \end{aligned}$$
of m possible completion times and a vector
$$\begin{aligned} k = (k_1, k_2, \ldots , k_i, \ldots , k_{m-1}, k_m) \in {\mathcal {B}} \end{aligned}$$
of m possible batch sizes, we say that a schedule S for instance \(I_j\) corresponds to t and k if, for all \(i\in [m]\), \(c_{ij}=t_i\) and job \(J_j\) is contained in a batch with exactly \(k_i\) jobs (including \(J_j\)) on \(M_i\).
Next, we define the variables g of the dynamic program. Let \({\mathcal {S}}(j,t,k)\) be the set of feasible permutation schedules S for instance \(I_j\) satisfying the following properties:
1. Jobs in S are ordered by their indices,
2. S corresponds to t and k, and
3. S is \(\varGamma \)-active.
Then, for \(j\in [n]\), \(t\in \varGamma \), and \(k\in {\mathcal {B}}\), we define
$$\begin{aligned} g(j,t,k)=g(j,t_1,t_2,\dots ,t_m,k_1,k_2,\dots ,k_m) \end{aligned}$$
to be the minimum objective value of a schedule in \({\mathcal {S}}(j,t,k)\) for instance \(I_j\). If no such schedule exists, the value of g is defined to be \(+\infty \).
Using the above definitions as well as Lemma 9, the objective value of the best permutation schedule ordered by the job indices is given by \(\min _{t,k} g(n,t,k)\), \(t\in \varGamma \), \(k\in {\mathcal {B}}\). In Lemmas 10 and 11 (below) we provide formulas to recursively compute the values g(j, t, k), \(j=1,2,\ldots ,n\), \(t\in \varGamma \), \(k \in {\mathcal {B}}\). In total, we then obtain Algorithm 1 as our dynamic program.
The remainder of this section deals with proving the correctness and running time bound of Algorithm 1. As mentioned above, in Lemmas 10 and 11 we define and prove the correctness of a recursive formula to compute all values of function g. Lemma 10 deals with the starting values g(1, t, k) while Lemma 11 deals with the recurrence relation. Finally, in Theorem 12 we connect the results of this section to prove the correctness of Algorithm 1.
Lemma 10
For \(j=1\), \(t_i\in \varGamma _i\), and \(k_i\in [b_i]\), \(i\in [m]\), the starting values of g are given by
$$\begin{aligned} g(1,t,k) = \left\{ \begin{array}{ll} f_1(t_m), &{} \text {if conditions (i)--(iii) hold},\\ +\infty , &{} \text {otherwise,} \end{array} \right. \end{aligned}$$
where
(i) \(k_i=1\) for all \(i\in [m]\),
(ii) \(t_1\ge r_1 + p_1\), and
(iii) \(t_{i+1}\ge t_{i}+p_{i+1}\) for all \(i\in [m-1]\).
Proof
Conditions (i)–(iii) are necessary for the existence of a schedule in \({\mathcal {S}}(1,t,k)\) because \(I_1\) consists of only one job and there must be enough time to process this job on each machine. Conversely, if (i)–(iii) are satisfied, then the vector t of completion times uniquely defines a feasible schedule \(S \in {\mathcal {S}}(1,t,k)\) by \(c_{i1}(S)=t_i\). Since S is uniquely defined by t and thus the only schedule in \({\mathcal {S}}(1,t,k)\), its objective value \(f_1(C_1)=f_1(c_{m1})=f_1(t_m)\) is optimal. \(\square \)
Now we turn to the recurrence formula to calculate g for \(j>1\) from the values of g for \(j-1\).
Lemma 11
For \(j>1\), \(t_i\in \varGamma _i\), and \(k_i\in [b_i]\), \(i\in [m]\), the values of g are determined by
$$\begin{aligned}&g(j,t,k) \\&\quad = \left\{ \begin{array}{ll} \min \{f_j(t_m)\oplus g(j-1,t',k') \mid (**) \}, &{} \text {if }(*),\\ +\infty , &{} \text {otherwise,} \end{array} \right. \end{aligned}$$
where the minimum over the empty set is defined to be \(+\infty \). Here, \((*)\) is given by conditions
(i) \(t_1\ge r_j + p_1\) and
(ii) \(t_{i+1}\ge t_{i}+p_{i+1}\) for all \(i\in [m-1]\),
and \((**)\) is given by conditions
(iii) \(t'_i\in \varGamma _i\),
(iv) \(k'_i\in [b_i]\),
(v) if \(k_i=1\), then \(t'_i\le t_i-p_i\), and
(vi) if \(k_i>1\), then \(t'_i=t_i\) and \(k'_i=k_i-1\),
for all \(i\in [m]\).
Proof
Fix values \(t_i\in \varGamma _i\) and \(k_i\in [b_i]\), \(i\in [m]\). The conditions of \((*)\) are necessary for the existence of a schedule in \({\mathcal {S}}(j,t,k)\) because there must be enough time to process job \(J_j\) on each machine. Therefore, g takes the value \(+\infty \), if \((*)\) is violated. For the remainder of the proof, assume that \((*)\) is satisfied. Hence, we have to show that
$$\begin{aligned} g(j,t,k)=\min \left\{ f_j(t_m)\oplus g(j-1,t',k') \mathrel {}\Bigg \vert \mathrel {}(**)\right\} . \end{aligned}$$
(3)
We first prove “\(\ge \)”. If the left-hand side of (3) equals infinity, then this direction follows immediately. Otherwise, by the definition of g, there must be a schedule \(S\in {\mathcal {S}}(j,t,k)\) with objective value g(j, t, k). Schedule S naturally defines a feasible schedule \(S'\) for instance \(I_{j-1}\) by ignoring job \(J_j\).
Observe that, because S belongs to \({\mathcal {S}}(j,t,k)\) and therefore is \(\varGamma \)-active, \(S'\) is also \(\varGamma \)-active and job \(J_{j-1}\) finishes processing on machine \(M_i\) at some time \(t'_i \in \varGamma _i\). Also, since S is feasible, on each machine \(M_i\) job \(J_{j-1}\) is scheduled in some batch of size \(k'_i \le b_i\). Thus, \(S'\) corresponds to two unique vectors \(t' = (t'_1, t'_2, \ldots , t'_m)\) and \(k' = (k'_1, k'_2, \ldots , k'_m)\), which satisfy (iii) and (iv) from \((**)\). Note that, in particular, \(S' \in {\mathcal {S}}(j-1,t',k')\).
Furthermore, \(t'\) and \(k'\) also satisfy (v) and (vi) from \((**)\). Indeed, due to the fixed job permutation, one of the following two things happens on each machine \(M_i\) in S: either jobs \(J_j\) and \(J_{j-1}\) are batched together, or job \(J_j\) is batched in a singleton batch. In the former case, it follows that \(1<k_i=k'_i+1\) and \(t'_i=t_i\), while the latter case requires \(k_i=1\) and \(t_i\ge t'_i + p_i\), since the machine is occupied by the previous batch with job \(J_{j-1}\) until then.
Thus, \(t'\) and \(k'\) satisfy \((**)\) and we obtain
$$\begin{aligned} g(j,t,k)&= \bigoplus _{j'=1}^j f_{j'}(C_{j'}(S)) \\&= f_j(t_m)\oplus \bigoplus _{j'=1}^{j-1} f_{j'}(C_{j'}(S'))\\&\ge f_j(t_m)\oplus g(j-1,t',k'), \end{aligned}$$
where the last inequality follows due to the definition of g and \(S'\in {\mathcal {S}}(j-1,t',k')\). Hence, the “\(\ge \)” direction in (3) follows because \(t'\) and \(k'\) satisfy \((**)\).
For the “\(\le \)” direction, if the right-hand side of (3) equals infinity, then this direction follows immediately. Otherwise, let \(t'\) and \(k'\) be minimizers of the right-hand side. By the definition of g there must be a schedule \(S'\in {\mathcal {S}}(j-1,t',k')\) for \(I_{j-1}\) with objective value \(g(j-1,t',k')\).
We now show that \(S'\) can be extended to a feasible schedule \(S\in {\mathcal {S}}(j,t,k)\). Construct S from \(S'\) by adding job \(J_j\) in the following way:
- if in \(S'\) there is a batch on machine \(M_i\) which ends at time \(t_i\), add \(J_j\) to that batch (we show later that this does not cause the batch to exceed the maximum batch size \(b_i\) of \(M_i\));
- otherwise, add a new batch on machine \(M_i\) finishing at time \(t_i\) and containing only job \(J_j\) (we show later that this does not create overlap with any other batch on machine \(M_i\)).
First note that \(c_{ij}(S) = t_i\) for all \(i \in \left[ m\right] \), and thus S corresponds to t by definition. Furthermore, for each machine \(M_i\) consider the two cases (a) \(k_i = 1\) and (b) \(k_i > 1\).
In case (a), due to (v) it follows that \(t'_i \le t_i - p_i\). Since job \(J_{j-1}\) finishing at time \(t'_i\) is the last job completed on machine \(M_i\) in schedule \(S'\), by construction in schedule S job \(J_j\) is in a singleton batch on machine \(M_i\) that starts at time \(t_i - p_i\) and ends at time \(t_i\). Therefore, in this case, S corresponds to k, because \(k_i = 1\) and job \(J_j\) is in a singleton batch. Also, since the last batch on machine \(M_i\) in \(S'\) ends at time \(t'_i \le t_i - p_i\) and the batch with job \(J_j\) starts at time \(t_i-p_i\), no overlapping happens between the new batch for job \(J_j\) and any other batch on machine \(M_i\).
In case (b), due to (vi) it follows that \(t'_i = t_i\) and by construction job \(J_j\) is scheduled in the same batch as job \(J_{j-1}\) on machine \(M_i\) in S. Since \(S'\) corresponds to \(k'\) and again due to (vi), this means that \(J_j\) is scheduled in a batch with \(k_i\) jobs on machine \(M_i\) and S corresponds to k in this case. Also, since \(k_i \in \left[ b_i\right] \) by definition, no batch in S exceeds its permissible size.
Combining the considerations for cases (a) and (b), together with the feasibility of schedule \(S'\), it follows that
(1) S corresponds to k,
(2) no overlapping happens between batches in S, and
(3) all batches in S are of permissible size.
In order to show feasibility of S it is only left to show that no job starts before its release date on machine \(M_1\), and no job starts on machine \(M_i\) before it is completed on machine \(M_{i-1}\). As \(S'\) is feasible, this is clear for all jobs other than \(J_j\). On the other hand, since S corresponds to t, and t fulfills \((*)\), it also follows for job \(J_j\).
Thus, S is indeed feasible. In total, we have shown all conditions for \(S\in {\mathcal {S}}(j,t,k)\). Therefore, we obtain
$$\begin{aligned} g(j,t,k)&\le \bigoplus _{j'=1}^j f_{j'}(C_{j'}(S))\\&= f_j(t_m)\oplus \bigoplus _{j'=1}^{j-1} f_{j'}(C_{j'}(S'))\\&= f_j(t_m)\oplus g(j-1,t',k'), \end{aligned}$$
where the last equality is due to the choice of \(S'\) as a schedule with objective value \(g(j-1, t', k')\). Hence, the “\(\le \)” direction in (3) follows due to the choice of \(t'\) and \(k'\) as minimizers of the right-hand side. \(\square \)
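To make the recursion tangible, here is a compact Python sketch of the dynamic program given by Lemmas 10 and 11, specialised to the makespan objective (\(f_j(C_j)=C_j\), \(\oplus =\max \)). All names and the toy instance are ours; the state space \(\varGamma \times {\mathcal {B}}\) is enumerated naively rather than optimised, which suffices for small n and m.

```python
# Sketch of the dynamic program of Lemmas 10 and 11, specialised to makespan.
# Names and the toy instance are ours; polynomial in n for fixed m.
import math
from itertools import product

def gamma_sets(r, p):
    n, gammas, prev = len(r), [], set(r)
    for p_i in p:
        prev = {x + lam * p_i for x in prev for lam in range(1, n + 1)}
        gammas.append(prev)
    return gammas

def dp_makespan(r, p, b):
    """min over (t, k) of g(n, t, k) with f_j(C) = C and 'oplus' = max."""
    n, m = len(r), len(p)
    gam = gamma_sets(r, p)
    # per-machine state components (t_i, k_i) with t_i in Gamma_i, k_i in [b_i]
    comp = [[(t, k) for t in sorted(gam[i]) for k in range(1, b[i] + 1)]
            for i in range(m)]

    def star(j, t):
        # conditions (*): enough room to process job J_{j+1} on every machine
        return (t[0] >= r[j] + p[0] and
                all(t[i + 1] >= t[i] + p[i + 1] for i in range(m - 1)))

    # starting values g(1, t, k) from Lemma 10: J_1 alone in all its batches
    g = {s: s[-1][0] for s in product(*comp)
         if all(k == 1 for _, k in s) and star(0, [t for t, _ in s])}

    for j in range(1, n):                     # recurrence from Lemma 11
        nxt = {}
        for s in product(*comp):
            t = [ti for ti, _ in s]
            if not star(j, t):
                continue
            best = math.inf
            for s_prev, val in g.items():
                # conditions (v) and (vi) of (**), machine by machine
                if all((k == 1 and tp <= ti - p[i]) or
                       (k > 1 and tp == ti and kp == k - 1)
                       for i, ((ti, k), (tp, kp)) in enumerate(zip(s, s_prev))):
                    best = min(best, max(t[-1], val))   # f_j(t_m) oplus g(j-1,...)
            if best < math.inf:
                nxt[s] = best
        g = nxt
    return min(g.values())

# Toy instance: n = 3 jobs in index order, m = 2 machines
print(dp_makespan(r=[0, 1, 2], p=[2, 3], b=[2, 2]))
```

On this instance the sketch returns the same optimum as exhaustive enumeration of all batchings in index order, which is an easy way to test such an implementation.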
Using Lemmas 9, 10, and 11 we can now prove correctness of Algorithm 1 and show that its runtime is bounded polynomially (if m is fixed).
Theorem 12
Consider a PFB instance with a constant number m of machines and a regular sum or bottleneck objective function. Then, for a given ordering of the jobs, Algorithm 1 finds the best permutation schedule in time \({\mathcal {O}}(n^{m^2+4m+1})\).
Proof
Without loss of generality, assume that the desired job ordering is given by the job indices. By Lemmas 10 and 11, Algorithm 1 correctly computes the values of function g. Lemma 9 assures that the resulting schedule is indeed optimal among all permutation schedules ordered by the job indices.
Concerning the time complexity, note that there are \(\left|\varGamma \right|\) many possible choices for vector t and \(\left|{\mathcal {B}} \right|\) many possible choices for vector k, yielding a total of \(\left|\varGamma \right|\cdot \left|{\mathcal {B}} \right|\) possible combinations for vectors t and k. The number can be bounded from above as follows:
$$\begin{aligned} \begin{array}{ccccc} \left|\varGamma \right|\cdot \left|{\mathcal {B}} \right|&{} =\ &{} |\varGamma _1|\cdot |\varGamma _2|\cdot \ldots \cdot |\varGamma _m| &{}\cdot &{} b_1 \cdot b_2 \cdot \ldots \cdot b_m\\ &{}\le \ &{} n^2 \cdot n^3 \cdot \ldots \cdot n^{m+1} &{} \cdot &{} n^m\\ &{}=\ &{} n^{\frac{(m+1)(m+2)}{2}-1+m}&{} &{}\\ &{}=\ &{} n^{\frac{m^2}{2}+\frac{5m}{2}}.&{} &{} \end{array} \end{aligned}$$
Here, the first m factors in the second line are due to \(|\varGamma _i| \le n^{i+1}\) and the last factor \(n^m\) is due to the inequalities \(b_i \le n\) for all \(i \in \left[ m\right] \).
Therefore, Step 1 of Algorithm 1 involves at most \(n^{\frac{m^2}{2}+\frac{5m}{2}}\) iterations, each of which can be performed in constant time because m is fixed.
For Step 2, first observe that conditions \((*)\) and \((**)\) can be checked in constant time for fixed m. In a single iteration, that is, for fixed j, t, and k, in order to compute the minimum in the recurrence of Lemma 11, we may have to consider all possible choices for \(t'\) and \(k'\). The number of these choices is \(|\{(t',k')\in \varGamma \times {\mathcal {B}}\mid (**)\}|\). Aggregating over all t and k, while keeping j fixed, yields
$$\begin{aligned}&\sum _{(t,k)\in \varGamma \times {\mathcal {B}}}|\{(t',k')\in \varGamma \times {\mathcal {B}}\mid (**)\}|\\&\quad = |\{(t,t',k,k')\in \varGamma \times \varGamma \times {\mathcal {B}} \times {\mathcal {B}}\mid (**)\}|\\&\quad = \prod _{i=1}^m\ |\{(t_i,t'_i,k_i,k'_i) \in \varGamma _i \times \varGamma _i \times [b_i] \times [b_i] \\&\qquad \mid (v)\text { and }(vi)\}|. \end{aligned}$$
The number of quadruples \((t,t',k,k')\) satisfying (v) and (vi) can be bounded by \(b_i|\varGamma _i|^2+b_i|\varGamma _i|\). Here, the first term corresponds to the quadruples with \(k_i=1\), while the second term corresponds to the quadruples with \(k_i>1\), in which case \(t'_i\) and \(k'_i\) are uniquely determined by \(t_i\) and \(k_i\). Further, note that due to \(|\varGamma _i| \ge 1\), it holds that \(b_i|\varGamma _i|^2+b_i|\varGamma _i| \le 2 b_i |\varGamma _i|^2\). Hence, the total running time of Step 2 (also aggregating over all choices of j), is at most
$$\begin{aligned} \begin{aligned}&{\mathcal {O}}\left( n\prod _{i=1}^m 2 b_i |\varGamma _i|^2\right) \\&\quad ={\mathcal {O}}\left( n \prod _{i=1}^m b_i |\varGamma _i|^2\right) \\&\quad \subseteq {\mathcal {O}}\left( n\prod _{i=1}^m n\cdot n^{2i+2}\right) \\&\quad ={\mathcal {O}}(n^{m^2+4m+1}). \end{aligned} \end{aligned}$$
(4)
The first equality is due to the factor \(2^m\) being constant for constant m and the set inclusion is due to \(b_i \le n\) and \(|\varGamma _i| \le n^{i+1}\).
Finally, in Step 3, we take the minimum over at most \(n^{\frac{m^2}{2}+\frac{5m}{2}}\) many values. The backtracking and reconstruction of the final schedule can be done in time \({\mathcal {O}}(n)\) for constant m.
Hence, in total, Step 2 is the bottleneck and the total time complexity is \({\mathcal {O}}(n^{m^2+4m+1})\). \(\square \)
Combining Theorems 3 and 12, we obtain:
Corollary 13
For a PFB with release dates and a fixed number m of machines, the makespan can be minimized in \({\mathcal {O}}(n^{m^2+4m+1})\) time. The same holds for the total completion time. \(\square \)
In particular, for any fixed number m of machines, the problems
$$\begin{aligned} Fm \mid r_j, p_{ij} = p_i, p\text {-batch}, b_i \mid f \end{aligned}$$
with \(f\in \{C_{\max }, \sum C_j \}\) can be solved in polynomial time.
Improved running time for makespan minimization
Sung et al. (2000, Theorem 6) showed that for minimizing the makespan in a PFB, it is optimal to use the so-called first-only-empty batching on the last machine \(M_m\), that is, to completely fill all batches on the last machine, except for the first batch, which may contain fewer jobs if n is not a multiple of \(b_m\). In a permutation schedule ordered by the job indices with first-only-empty batching on \(M_m\), a makespan of c can be achieved if and only if for each job \(J_j\) there is enough time to process the \(n-j+1\) jobs \(J_j, J_{j+1},\dots ,J_n\) on \(M_m\) after \(J_j\) has been completed on \(M_{m-1}\). Since these jobs occupy \(\left\lceil \frac{n-j+1}{b_m}\right\rceil \) batches on \(M_m\), each taking \(p_m\) time, a makespan of c can be achieved if and only if \(c\ge c_{(m-1)j}+\left\lceil \frac{n-j+1}{b_m}\right\rceil p_m\) for every \(j\in [n]\). Hence, the optimal makespan is given by \(\max _{j=1,2,\ldots ,n} c_{(m-1)j}+\left\lceil \frac{n-j+1}{b_m}\right\rceil p_m\). Therefore, the problem to minimize the makespan on an m-machine PFB can be solved in the following way: We first find a schedule for the first \(m-1\) machines that minimizes the objective function \(\bigoplus _{j=1}^n f_j(c_{(m-1)j})\) with \(\bigoplus =\max \) and \(f_j(c_{(m-1)j})=c_{(m-1)j}+\left\lceil \frac{n-j+1}{b_m}\right\rceil p_m\). By Theorem 12, this can be done in time
$$\begin{aligned} {\mathcal {O}}(n^{(m-1)^2+4(m-1)+1})={\mathcal {O}}(n^{m^2+2m-2}) \end{aligned}$$
for fixed m. Afterwards, this schedule is extended by a first-only-empty batching on \(M_m\), which does not further increase the asymptotic runtime. Thus, for the makespan objective, we can strengthen Corollary 13:
Corollary 14
For a PFB with release dates and a fixed number m of machines, the makespan can be minimized in \({\mathcal {O}}(n^{m^2+2m-2})\) time. \(\square \)
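As a sketch of this shortcut (function name and input values are ours), given the completion times on \(M_{m-1}\) in index order, the optimal makespan under first-only-empty batching on \(M_m\) is a single maximisation.

```python
# Makespan shortcut: with first-only-empty batching on the last machine,
# the optimum is max_j c_{(m-1)j} + ceil((n-j+1)/b_m) * p_m.
from math import ceil

def makespan_first_only_empty(c_prev, p_m, b_m):
    """c_prev: completion times on M_{m-1} in index order (0-indexed here,
    so the n-j+1 of the text becomes n-j)."""
    n = len(c_prev)
    return max(c + ceil((n - j) / b_m) * p_m for j, c in enumerate(c_prev))

# Illustrative: completion times on M_{m-1} for three jobs in index order
print(makespan_first_only_empty([2, 4, 4], p_m=3, b_m=2))  # → 8
```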