A fully polynomial time approximation scheme (FPTAS) is introduced for \({\mathcal {P}_{\textit{agreeable}}}\) in this section.
An FPTAS is an algorithm that, given a problem’s input and any approximation factor \(\varepsilon \in (0,1]\), runs in time polynomial in the input length and in \(1/\varepsilon \) and returns a solution with objective value \(\phi ^\varepsilon \le (1+\varepsilon )\cdot \phi ^*\), where \(\phi ^*\) denotes the minimum objective value.
The following FPTAS for \({\mathcal {P}_{\textit{agreeable}}}\) builds on Algorithm 1, combining the trimming-the-state-space idea of Ibarra and Kim (1975) with the interval partition technique of Woeginger (2000). The latter technique defines
$$\begin{aligned} \varDelta&=1+\frac{\varepsilon }{2n}\, \text {, and }\, h(x)=\varDelta ^{\lceil \log _\varDelta x\rceil } \,\text { for any real }x>0\,\text {,}{} \end{aligned}$$
where h satisfies \(x\le h(x)\le x\cdot \varDelta \) by construction.
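As a small illustration, \(\varDelta \) and h can be sketched in Python (a sketch only; the helper name `make_h` is hypothetical, and floating-point rounding at exact powers of \(\varDelta \) is ignored):

```python
import math

def make_h(eps: float, n: int):
    """Interval partition rounding: Delta = 1 + eps/(2n),
    h(x) = Delta ** ceil(log_Delta(x)) for real x > 0."""
    delta = 1.0 + eps / (2 * n)

    def h(x: float) -> float:
        assert x > 0
        # ceil of the base-Delta logarithm; exact powers of Delta may
        # round either way in floating point -- acceptable for a sketch
        return delta ** math.ceil(math.log(x, delta))

    return delta, h

delta, h = make_h(eps=0.5, n=4)
x = 3.7
# by construction x <= h(x) <= x * Delta
assert x <= h(x) <= x * delta
```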
For an approximation factor \(\varepsilon \in (0,1]\) and the corresponding \(\varDelta \) and h, let us define, analogously to Algorithm 1, under the same preconditions and for a given straddler job \(\chi \):
Algorithm 2
(FPTAS for \({\mathcal {P}_{\textit{agreeable}}}\) with straddler job \(\chi \) and \(\varepsilon \))
Initialize
$$\begin{aligned} {V}^\#_0={}&\left\{ [t_{\min },\,1,\,0]\right\} . \end{aligned}$$
(14a)
For job \(j=1,\dots ,n\), generate state set
$$\begin{aligned} \tilde{{V}}^\#_j={}&\left\{ [C_j(x),\,y,\,z] \,{\big |}\, [x,\,y,\,z]\in {V}^\#_{j-1},\,{ C_j(x){<} \tau }\right\} \end{aligned}$$
(14b)
$$\begin{aligned}&\cup \left\{ \left[ x,\,\left( {1+b_j}\right) y,\,z+y\,\ell _j\right] {\big |}\, [x,\,y,\,z]\in {V}^\#_{j-1}\right\} \end{aligned}$$
(14c)
and trimmed state set
$$\begin{aligned} {V}^\#_j={}&\left\{ \left[ \tilde{x},\tilde{y},\tilde{z}\right] \in \tilde{{V}}^\#_j \,{\big |}\,\tilde{x}=x_j^{\min }\!\left( \tilde{y},\tilde{z}\right) \right\} \nonumber \\&\text {with }x_j^{\min }\!\left( \tilde{y},\tilde{z}\right) =\min \left\{ x \,{\big |}\, [x,\,y,\,z]\in \tilde{{V}}^\#_j,\, h(y)\le h(\tilde{y}),\, h(z)=h(\tilde{z})\right\} . \end{aligned}$$
(14d)
Return
$$\begin{aligned} C_{\max }^{\chi \varepsilon }={}&\min \!\left\{ \tau +y\,\max \!\left\{ C_\chi (x)-\tau ,0\right\} +z \,{\big |}\, [x,\,y,\,z]\in {V}^\#_{n}\right\} \!. \end{aligned}$$
(14e)
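The recursion (14a)–(14e) can be sketched as follows. This is a minimal Python sketch, not the authors' implementation: the completion-time callables `C[j]` and `C_chi` (and all parameter names) are assumptions supplied by the caller, since their concrete form is fixed by Algorithm 1 and the problem data, which are not reproduced here.

```python
import math
from typing import Callable, List, Tuple

State = Tuple[float, float, float]  # (x, y, z) as in (14a)-(14e)

def fptas_straddler(t_min: float, tau: float,
                    b: List[float], ell: List[float],
                    C: List[Callable[[float], float]],
                    C_chi: Callable[[float], float],
                    eps: float) -> float:
    """Sketch of Algorithm 2; C[j] and C_chi are assumed inputs."""
    n = len(ell)
    delta = 1.0 + eps / (2 * n)

    def bucket(v: float):
        # h(v) = delta ** bucket(v); z = 0 gets its own bucket
        return None if v <= 0 else math.ceil(math.log(v, delta))

    V: List[State] = [(t_min, 1.0, 0.0)]                     # (14a)
    for j in range(n):
        # (14b): job j completes before tau; (14c): job j after tau
        tilde = [(C[j](x), y, z) for (x, y, z) in V if C[j](x) < tau]
        tilde += [(x, (1.0 + b[j]) * y, z + y * ell[j]) for (x, y, z) in V]
        # (14d): keep a state iff its x is minimal among all states with
        # the same h(z) bucket and an h(y) bucket that is not larger
        by_z = {}
        for s in tilde:
            by_z.setdefault(bucket(s[2]), []).append(s)
        V = []
        for group in by_z.values():
            min_x = {}                     # minimal x per h(y) bucket
            for (x, y, z) in group:
                k = bucket(y)              # y >= 1, so k is an integer
                min_x[k] = min(min_x.get(k, math.inf), x)
            running, pref = math.inf, {}   # prefix-minimum over buckets
            for k in sorted(min_x):
                running = min(running, min_x[k])
                pref[k] = running
            V += [s for s in group if s[0] == pref[bucket(s[1])]]
    return min(tau + y * max(C_chi(x) - tau, 0.0) + z
               for (x, y, z) in V)                           # (14e)
```

For instance, with hypothetical additive completion times \(C_j(x)=x+\ell _j\) and \(C_\chi (x)=x+1\), \(t_{\min }=0\), \(\tau =10\), \(b=(0.5,0.5)\), \(\ell =(2,3)\), and \(\varepsilon =0.5\), the call returns 10.0.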
The remainder of this section establishes the algorithm’s approximation guarantee and its worst-case runtime.
Lemma 4
For \(j=0,\dots ,n\) and all vectors \([x,y,z]\in {V}_j\) of Algorithm 1, there exists a vector \([x^\#,y^\#,z^\#]\in {V}^\#_j\) in Algorithm 2 with
$$\begin{aligned} x^\#&\le x \text {,} \end{aligned}$$
(15a)
$$\begin{aligned} y^\#&\le y\cdot \varDelta ^j \text {, and} \end{aligned}$$
(15b)
$$\begin{aligned} z^\#&\le z\cdot \varDelta ^j. \end{aligned}$$
(15c)
Proof
Let us show the hypothesis by induction on \(j=0,\dots ,n\).
For \(j=0\), the trimmed set equals the original set: \({V}_0={V}^\#_0=\{[t_{\min },1,0]\}\), by (13a) and (14a). As \(\varDelta ^0=1\), the hypothesis holds.
For \(j=1,\dots ,n\), there are two cases, corresponding to (14b) and (14c):
-
The first case applies if \(x<\tau \), i.e., job j is appended to \(S_1\) such that it completes before \(\tau \). Consider the vector \([x',y',z']\in {V}_{j-1}\) with \(C_j(x')=x\), \(y'=y\), and \(z'=z\) from which \([x,y,z]\) was generated.
Then, by induction, there exists a corresponding vector \([x'^\#,y'^\#,z'^\#]\in {V}^\#_{j-1}\) with \(x'^\#\le x'\), \(y'^\#\le y'\cdot \varDelta ^{j-1}\), and \(z'^\#\le z'\cdot \varDelta ^{j-1}\). Since \(C_j\) is nondecreasing, \(C_j(x'^\#)\le C_j(x')=x<\tau \), so the condition in (14b) is satisfied and the algorithm created \([\tilde{x}',\tilde{y}',\tilde{z}']=[C_j(x'^\#), y'^\#,z'^\#]\in \tilde{{V}}^\#_j\). Although this vector may not be in \({V}^\#_j\) after the trimming operation in (14d), there exists a vector \([x^\#,y^\#,z^\#]\in {V}^\#_j\) with \(x^\#\le \tilde{x}'\), \(h(y^\#)\le h(\tilde{y}')\), and \(h(z^\#)=h(\tilde{z}')\) (thus, \(y^\#\le \tilde{y}' \cdot \varDelta \) and \(z^\#\le \tilde{z}' \cdot \varDelta \)).
Let us show the induction hypothesis for this vector. Remember that \(C_j(t)\) is a nondecreasing function. Thus, \( x^\#\le \tilde{x}' = C_j(x'^\#)\le C_j(x')=x\) for (15a). As \(y^\#\le \tilde{y}'\cdot \varDelta = y'^\#\cdot \varDelta \le y'\cdot \varDelta ^j=y\cdot \varDelta ^j\), (15b) is satisfied. The inequality \(z^\#\le \tilde{z}' \cdot \varDelta = z'^\# \cdot \varDelta \le z' \cdot \varDelta ^{j}=z \cdot \varDelta ^j\) shows (15c).
-
In the second case, corresponding to (14c), consider the vector \([x',y',z']\in {V}_{j-1}\) with \(x'=x\), \(\left( {1+b_{j}}\right) y'=y\), and \(z'+y'\,\ell _j= z\).
By the induction hypothesis, there exists a corresponding vector \([x'^\#,y'^\#,z'^\#]\in {V}^\#_{j-1}\) with \(x'^\#\le x'\), \(y'^\#\le y'\cdot \varDelta ^{j-1}\), and \(z'^\#\le z'\cdot \varDelta ^{j-1}\). Then, the algorithm created a corresponding vector \([\tilde{x}',\tilde{y}',\tilde{z}']=[x'^\#,\,\left( {1+b_{j}}\right) y'^\#,\,z'^\#+y'^\#\,\ell _j]\in \tilde{{V}}^\#_j\) in (14c). Even though this vector may not be in \({V}^\#_j\) after trimming, there must exist some vector \([x^\#,y^\#,z^\#]\in {V}^\#_j\) with \(x^\#\le \tilde{x}'\), \(h(y^\#)\le h(\tilde{y}')\), and \(h(z^\#)=h(\tilde{z}')\) (thus \(y^\#\le \tilde{y}'\cdot \varDelta \) and \(z^\#\le \tilde{z}'\cdot \varDelta \)).
Let us show the induction hypothesis. For (15a): \(x^\#\le \tilde{x}' = x'^\# \le x' = x\). As \(y^\#\le \tilde{y}' \cdot \varDelta = \left( {1+b_{j}}\right) \cdot y'^\# \cdot \varDelta \le \left( {1+b_{j}}\right) \cdot \left( y' \cdot \varDelta ^{j-1} \right) \cdot \varDelta =y \cdot \varDelta ^j\), (15b) is satisfied. Lastly, (15c) is satisfied because
$$\begin{aligned} z^\#&\le \tilde{z}' \cdot \varDelta = \left( z'^\#+y'^\#\,\ell _j\right) \cdot \varDelta \\&\le \left( z' \cdot \varDelta ^{j-1} +y'\,\ell _j\cdot \varDelta ^{j-1} \right) \cdot \varDelta = \left( z' +y'\,\ell _j \right) \cdot \varDelta ^{j} = z\cdot \varDelta ^{j}. \qquad \square \end{aligned}$$
Lemma 5
For \(0<\varepsilon \le 1\) and a \({\mathcal {P}_{\textit{agreeable}}}\) instance with straddler job \(\chi \) and minimum makespan \(\phi ^*\), Algorithm 2 yields \(C_{\max }^{\chi \varepsilon }\) such that \(\phi ^{\varepsilon }=C_{\max }^{\chi \varepsilon }-t_{\min }\le \left( {1+\varepsilon }\right) \phi ^*\).
Proof
Let \(\chi \) be the straddler job and \([x,y,z]\in {V}_n\) be the vector that corresponds to \(\phi ^*=C_{\max }^{{\chi }}-t_{\min }\) in Algorithm 1. Then, \(C_{\max }^{{\chi }}=\tau +y\left( {C_\chi (x)-\tau }\right) +z\), with \(C_\chi (x)\ge \tau \). By Lemma 4, there exists a vector \([x^\#,y^\#,z^\#]\in {V}^\#_n\) with \(x^\#\le x\), \(y^\#\le y\cdot \varDelta ^n\), and \(z^\#\le z\cdot \varDelta ^n\). Then,
$$\begin{aligned} C_{\max }^{\chi \varepsilon }&= \tau +y^\#\cdot \max \left\{ C_\chi (x^\#)-\tau ,\,0\right\} +z^\#\\&\le \tau +y\cdot \varDelta ^n\cdot \max \left\{ C_\chi (x)-\tau ,\,0\right\} +z\cdot \varDelta ^n\\&= \tau +\left( y\cdot \max \left\{ C_\chi (x)-\tau ,\,0\right\} +z\right) \cdot \varDelta ^n\\&= \tau +\left( C_{\max }^{{\chi }}-\tau \right) \cdot \varDelta ^n\\&=C_{\max }^{{\chi }} {\cdot }\varDelta ^n + \tau \cdot \left( {1- \varDelta ^n}\right) \!. \end{aligned}$$
Because \(1- \varDelta ^n\le 0\) and \(t_{\min }\le \tau \), it holds that \(t_{\min }\cdot \left( {1- \varDelta ^n}\right) \ge \tau \cdot \left( {1- \varDelta ^n}\right) \), thus
$$\begin{aligned} C_{\max }^{\chi \varepsilon }-t_{\min }&\le C_{\max }^{{\chi }}{\cdot }\varDelta ^n + \tau \cdot \left( {1- \varDelta ^n}\right) - t_{\min }\\&\le C_{\max }^{{\chi }}{\cdot }\varDelta ^n + t_{\min }\cdot \left( {1- \varDelta ^n}\right) - t_{\min }\\&=\left( {C_{\max }^{{\chi }} - t_{\min }}\right) \cdot \varDelta ^n . \end{aligned}$$
Thus, together with Proposition 4, \(\phi ^\varepsilon \le \phi ^*{\cdot }\varDelta ^n\). A known inequality is \((1+\delta /n)^n \le 1+2\delta \) for \(0 \le \delta \le 1\) (Woeginger 2000, Proposition 3.1). Setting \(\delta =\varepsilon /2\), it follows \(\varDelta ^n\le 1+\varepsilon \). Thus, \(\phi ^\varepsilon \le \left( {1+\varepsilon }\right) \phi ^*\). \(\square \)
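The final bound \(\varDelta ^n=(1+\varepsilon /(2n))^n\le 1+\varepsilon \) is easy to spot-check numerically, e.g. with this short Python sketch:

```python
# Spot-check of Delta**n <= 1 + eps with Delta = 1 + eps/(2n),
# i.e. (1 + eps/(2n))**n <= 1 + eps for 0 < eps <= 1.
for n in (1, 5, 50, 1000):
    for eps in (0.01, 0.1, 0.5, 1.0):
        delta = 1.0 + eps / (2 * n)
        assert delta ** n <= 1.0 + eps
```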
For a worst-case runtime analysis, let us bound the number of states in each stage by a polynomial. The bounds use the logarithms of
$$\begin{aligned} \ell _{ratio }&= \frac{\max {\left\{ \ell _j \,{\big |}\, j=1,\dots ,n\right\} }}{\min \left\{ \ell _j>0\,{\big |}\, j=1,\dots ,n\right\} }, \\ b_{\max }&= \max \left\{ b_j\,{\big |}\, j=1,\dots ,n\right\} . \end{aligned}$$
Lemma 6
For \(0<\varepsilon \le 1\) and stage \(j=0,\dots ,n\), the number \(|{V}^\#_j|\) of states is in \(\mathcal {O}({n^3}{\cdot \log ^{}\!\left( {1+b_{\max }}\right) }\cdot (\log {}\max \{{\ell _{ratio }},\,1/{ b_{\max }}\}+n\log ^{}\!\left( {1+b_{\max }}\right) )/{\varepsilon ^2}).\)
Proof
Starting with \(|{V}_0^\#|=1\), let us analyze state set \({V}^\#_j\) for \(j=1,\dots ,n\) in the following. Consider a vector \([x,y,z]\in {V}^\#_j\). For each pair of y and z values, there is at most one x value. Thus, \({|}{V}^\#_j{|}\) is bounded by the product of the number of possible y and z values, which is bounded in the following.
Let \(\ell _{\max }^{(j)}=\max {\{\ell _k\,{\big |}\, k=1,\dots ,j\}}\), \(\ell _{\min }^{(j)}=\min \{\ell _k>0\mid k=1,\dots ,j\}\), and \(b_{\max }^{(j)}=\max {\{b_k\,{\big |}\, k=1,\dots ,j\}}\). Then, the y value is bounded by
$$\begin{aligned} {1\le }\, y\le \prod _{k=1,\dots ,j}\left( {1+b_k}\right) \le \left( {1+b_{\max }^{(j)}}\right) ^j=:Y_j. \end{aligned}$$
The z value represents the makespan of the sequence that starts at the ideal start time. For \(z>0\), it is bounded by
$$\begin{aligned} {\ell _{\min }^{(j)}\le }\,\,z\,&\le \sum _{j'=1,\dots ,j} \ell _{j'}\prod _{k=j'+1,\dots ,j}\left( {1+b_k}\right) \\&\le \ell _{\max }^{(j)} \frac{\left( {1+b_{\max }^{(j)}}\right) ^j-1}{b_{\max }^{(j)}} =:Z_j. \end{aligned}$$
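The geometric-series bound on z can be spot-checked on a random instance; the instance data below is arbitrary and only serves as a sketch:

```python
import math
import random

# Spot-check of sum_{j'} l_{j'} * prod_{k > j'} (1 + b_k)
#   <= l_max * ((1 + b_max)**j - 1) / b_max
random.seed(1)
j = 8
ell = [random.uniform(0.5, 3.0) for _ in range(j)]
b = [random.uniform(0.1, 0.9) for _ in range(j)]
z = sum(ell[i] * math.prod(1.0 + b[k] for k in range(i + 1, j))
        for i in range(j))
Z = max(ell) * ((1.0 + max(b)) ** j - 1.0) / max(b)
assert z <= Z + 1e-9
```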
The trimming step in (14d) ensures that there is at most a single y and z value for the same \(h(y)\) and \(h(z)\) value, respectively. Moreover, the rounded values are bounded by \(1\le h(y)\le h(Y_j)\) and by \(h(\ell _{\min }^{(j)})\le h(z)\le h(Z_j)\) for \(z>0\).
It follows from the definition of \(h\) that the number of distinct \(h(y)\) values is at most
$$\begin{aligned} \log _\varDelta Y_j - \log _\varDelta 1&={\log Y_j}\cdot {\ln 2}\,/\,{\ln \varDelta }\\&\le \left( {1+\frac{2n}{\varepsilon }}\right) \cdot {\log Y_j}\\&=\left( 1+\frac{2n}{\varepsilon }\right) \cdot j\cdot \log \!\left( {1+b_{\max }^{(j)}}\right) \!, \end{aligned}$$
which uses the inequality \(\ln \varDelta \ge {(\varDelta -1)}/{\varDelta }\) (Woeginger 2000, Proposition 3.1). Similarly, the number of distinct \(h(z)\) values for \(z>0\) is at most
$$\begin{aligned}&\log _\varDelta Z_j - \log _\varDelta \ell _{\min }^{(j)} \,{=}\, {\log \!\left( Z_j\Big /\ell _{\min }^{(j)}\right) }\cdot \ln 2\,/\,{\ln \varDelta }\\&{<} \left( {1+\frac{2n}{\varepsilon }}\right) \left( \log \frac{\ell _{\max }^{(j)}}{\ell _{\min }^{(j)}} +\log \frac{1}{b_{\max }^{(j)}}+j\cdot \log \!\left( {1+b_{\max }^{(j)}}\right) \right) \!. \end{aligned}$$
In summary, there are at most \(\mathcal {O}\!\left( n^2\log {(1+b_{\max })}/\varepsilon \right) \) distinct y values, and there are at most \(\mathcal {O}(n\cdot (\log {\ell _{ratio }}+\log {(}{}1/ b_{\max }{)}{}+n\log {(1+b_{\max })})/\varepsilon )\) distinct z values. Both upper bounds are polynomial in input length in a binary encoding. This includes rational numbers because they can be encoded as a ratio of two integers. The product of both bounds yields \(\mathcal {O}({n^3}\cdot \log \left( {1+b_{\max }}\right) \cdot (\log {\ell _{ratio }}+\log (1/{b_{\max }})+n\log (1+b_{\max }))/{\varepsilon ^2})\), which is not more than the upper bound stated above. \(\square \)
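The counting argument itself can also be spot-checked: the number of distinct \(h(y)\) values in \([1,Y_j]\) is about \(\log _\varDelta Y_j\), which the proof bounds via \(\ln \varDelta \ge (\varDelta -1)/\varDelta \). A sketch with an assumed \(b_{\max }\) and \(j=n\):

```python
import math

# Distinct h(y) values in [1, Y] versus the proof's bound
# (1 + 2n/eps) * log2(Y); b_max = 0.7 is an arbitrary assumption.
n, eps, b_max = 10, 0.25, 0.7
delta = 1.0 + eps / (2 * n)
Y = (1.0 + b_max) ** n
distinct = math.ceil(math.log(Y, delta))   # log_Delta(Y) - log_Delta(1)
assert distinct <= (1.0 + 2 * n / eps) * math.log2(Y)
```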
Algorithm 2 is run once for each possible straddler job, hence \(n+1\) times, and has at most n stages, each with a polynomial number of states (Lemma 6). Furthermore, as of Lemma 5, the resulting makespan is guaranteed to be at most \((1+\varepsilon )\) times the minimum makespan \(\phi ^*\). This leads to the following conclusion.
Theorem 2
For a \({\mathcal {P}_{\textit{agreeable}}}\) instance of n jobs with minimum makespan \(\phi ^*\) and an approximation factor \(0<\varepsilon \le 1\), the \(n+1\) times repeated call of Algorithm 2 returns a solution with makespan \(\phi ^\varepsilon \le \left( {1+\varepsilon }\right) \cdot \phi ^*\) in \(\mathcal {O}(n^5{\cdot \log ^{}\!\left( {1+b_{\max }}\right) }\cdot (\log \max \{{\ell _{ratio }},1/ b_{\max }\}+n\log ^{}\!\left( {1+b_{\max }}\right) )/\varepsilon ^2)\) total time.
This runtime is polynomial, hence Algorithm 2 is an FPTAS for \({\mathcal {P}_{\textit{agreeable}}}\). With a common slope \(b_j=b\), the y component of a state can attain at most \(j+1\) different values in each stage \(j{\in \{1,\dots ,n\}}\). Then, there are only \(\mathcal {O}(n^2\cdot (\log {\ell _{ratio }}{}{+\log (1/b)}{}+n\log {(1+{b})})/\varepsilon )\) states in each of the \(\mathcal {O}\!\left( n^2\right) \) stages.
Corollary 9
For instances with a common slope \(b_j=b\) for each given job j, the runtime given in Theorem 2 is reduced to \(\mathcal {O}(n^4\cdot (\log \max \{\ell _{ratio },1/{b}\}+{}n\log {(1+{b})})/\varepsilon )\).
In the monotonic case \(b{_j}=0\), the y component equals 1 at all times, and z is bounded by the sum of basic processing times, so \(z/\ell _{\min }\le n\cdot \ell _{ratio }\). Then, the number of distinct \(h(z)\) values is bounded by \(\mathcal {O}\!\left( n\cdot \log ({\ell _{ratio }}n)/\varepsilon \right) \), and there are only that many states per stage.
Corollary 10
For instances with \(b{_j}{}=0\) for each given job j, the runtime in Theorem 2 is reduced to \(\mathcal {O}\!\left( n^3\cdot \log ({\ell _{ratio }}n)/\varepsilon \right) \).
If each job j has the same \(\ell _j=\ell \), then \(\ell _{ratio }=1\). If in addition \(\max \{b_j,1/b_j\}\) is bounded by a constant, then the FPTAS finishes in strongly polynomial \(\mathcal {O}\!\left( n^5/\varepsilon ^2\right) \) time. For \(b_j=0\), it takes only \(\mathcal {O}\!\left( n^3\log n/\varepsilon \right) \) time, although this particular case is even easier to solve optimally by sorting the jobs with respect to nondecreasing \(a_j\) values (Cheng et al. 2003).
Remark 5
(Implications for \({\mathcal {P}_{\textit{agreeable}}}\)’s computational complexity) Garey and Johnson (1978, Theorem 1) state that if, for all instances, the optimal objective value of a minimization problem is upper bounded by a polynomial in input length and input values, then the existence of an FPTAS implies that a pseudopolynomial algorithm exists. In \({\mathcal {P}_{\textit{agreeable}}}\), the makespan can be exponential in input length and input values: e.g., for \(t_{\min }=\tau \), \(b_j=b\), and \(\ell _j\ge 1\) for \(j=1,\dots ,n\), the makespan is not less than \(\left( 1+b\right) ^{n-1}\). Therefore, the existence of an FPTAS for \({\mathcal {P}_{\textit{agreeable}}}\) does not, in this particular case, imply the existence of a pseudopolynomial algorithm. Hence, it remains open whether a pseudopolynomial algorithm exists for \({\mathcal {P}_{\textit{agreeable}}}\), and whether \({\mathcal {P}_{\textit{agreeable}}}\) is NP-hard in the strong sense.