Abstract
In this paper we study a single batch-processing machine scheduling model. In our model, a set of jobs with different release dates needs to be scheduled on a single machine that can process a batch of jobs simultaneously. Each batch incurs a fixed setup time and a fixed setup cost. The decision maker may reject some of the jobs by paying a penalty cost so as to achieve a tighter makespan, but the total rejection penalty cost is required to be no greater than a given value. Our model extends several existing batch-processing machine scheduling models in the literature. We present efficient approximation algorithms with near-linear time complexities.
References
Allahverdi, A., Gupta, J. N. D., & Aldowaisan, T. (1999). A review of scheduling research involving setup considerations. OMEGA, 27, 219–239.
Allahverdi, A., Ng, C. T., Cheng, T. C. E., & Kovalyov, M. Y. (2008). A survey of scheduling problems with setup times or costs. European Journal of Operational Research, 187, 985–1032.
Brucker, P., Gladky, A., Hoogeveen, H., Kovalyov, M. Y., Potts, C. N., & van de Velde, S. L. (1998). Scheduling a batching machine. Journal of Scheduling, 1(1), 31–54.
Cao, Z., & Yang, X. G. (2009). A PTAS for parallel batch scheduling with rejection and dynamic job arrivals. Theoretical Computer Science, 410, 2732–2745.
Cheng, T. C. E., Liu, Z. H., & Yu, W. C. (2001). Scheduling jobs with release dates and deadlines on a batching processing machine. IIE Transactions, 33, 685–690.
Croce, F. D., Koulamas, C., & Vincent, T. (2017). A constraint generation approach for two-machine shop problems with jobs selection. European Journal of Operational Research, 259(3), 898–905.
Havil, J. (2009). Gamma: Exploring Euler's Constant. Princeton: Princeton University Press.
He, C., Leung, J. Y.-T., Lee, K., & Pinedo, M. L. (2016a). Scheduling a single machine with parallel batching to minimize makespan and total rejection cost. Discrete Applied Mathematics, 204, 150–163.
He, C., Leung, J. Y.-T., Lee, K., & Pinedo, M. L. (2016b). Improved algorithms for single machine scheduling with release dates and rejections. 4OR, 14, 41–55.
Kellerer, H., Pferschy, U., & Pisinger, D. (2004). Knapsack problems. Berlin: Springer.
Lee, C.-Y., & Uzsoy, R. (1999). Minimizing makespan on a single batch processing machine with dynamic job arrivals. International Journal of Production Research, 37(1), 219–236.
Lee, C.-Y., Uzsoy, R., & Martin-Vega, L. A. (1992). Efficient algorithms for scheduling batch processing machines. Operations Research, 40(4), 764–775.
Liu, Z. H., & Yu, W. C. (2000). Scheduling one batch processor subject to job release dates. Discrete Applied Mathematics, 105, 129–136.
Liu, Z. H., Yuan, J. J., & Cheng, T. C. E. (2003). On scheduling an unbounded parallel batch machine. Operations Research Letters, 31, 42–48.
Lu, L. F., Cheng, T. C. E., Yuan, J. J., & Zhang, L. Q. (2009). Bounded single-machine parallel-batch scheduling with release dates and rejection. Computers and Operations Research, 36, 2748–2751.
Lu, L. F., Zhang, L. Q., & Yuan, J. J. (2010). The unbounded parallel batch machine scheduling with release dates and rejection to minimize makespan. Theoretical Computer Science, 396, 283–289.
Ou, J. W., & Zhong, X. L. (2017). Bicriteria order acceptance and scheduling with consideration of fill rate. European Journal of Operational Research, 263(3), 904–907.
Ou, J. W., Li, C.-L., & Zhong, X. L. (2016). Faster algorithms for single machine scheduling with release dates and rejection. Information Processing Letters, 116, 503–507.
Potts, C., & Kovalyov, M. (2000). Scheduling with batching: A review. European Journal of Operational Research, 120(2), 228–249.
Shabtay, D. (2014). The single machine serial batch scheduling problem with rejection to minimize total completion time and total rejection cost. European Journal of Operational Research, 233(1), 64–74.
Shabtay, D., Gaspar, N., & Kaspi, M. (2013). A survey on offline scheduling with rejection. Journal of Scheduling, 16(1), 3–28.
Slotnick, S. A. (2011). Order acceptance and scheduling: A taxonomy and review. European Journal of Operational Research, 212(1), 1–11.
Webster, S., & Baker, K. R. (1995). Scheduling groups of jobs on a single machine. Operations Research, 43, 692–703.
Zhang, L. Q., Lu, L. F., & Yuan, J. J. (2009). Single machine scheduling with release dates and rejection. European Journal of Operational Research, 198(3), 975–978.
Zhang, L. Q., Lu, L. F., & Ng, C. T. (2012). The unbounded parallel-batch scheduling with rejection. Journal of the Operational Research Society, 63(3), 293–298.
Acknowledgements
The author thanks two anonymous referees for their excellent suggestions and comments on how to improve the presentation of the paper. This research was supported in part by NSFC 71101064.
Appendix
Proof of Theorem 1
Consider \(\mathcal{A}(\sigma ^*)\), the set of jobs that are accepted in the optimal solution \(\sigma ^*\). If \(\mathcal{A}(\sigma ^*)=\emptyset \), then \(z^*=(1-\alpha )w(\mathcal{J})=z_0\), which indicates \(\mathbf{H}_1\) generates an optimal solution in this case (note that we must have \(Q_0=R_0-w(\mathcal{J})\ge 0\) in this case). We only need to consider the case when \(\mathcal{A}(\sigma ^*)\ne \emptyset \) in the following analysis.
Let \(\beta \in \{1,2,\ldots ,n\}\) be the largest job index among all jobs in \(\mathcal{A}(\sigma ^*)\). Then, all jobs in set \(\mathcal{J}\setminus \{J_{1},\ldots ,J_\beta \}\) are rejected in \(\sigma ^*\), and inequality \(w(\mathcal{J})-\sum _{j=1}^\beta w_j\le R_0\) holds. Note that \(W_i=w(\mathcal{J})-\sum _{j=1}^i w_j\) for \(i=1,2,\ldots ,n\). Hence, we must have \(Q_\beta =R_0- W_\beta =R_0-w(\mathcal{J})+\sum _{j=1}^\beta w_j\ge 0\). Observe that in \(\sigma ^*\) the makespan of the accepted jobs is no less than \(r_\beta +p_\beta \), and the total rejection penalty of the rejected jobs is no less than \(W_\beta \). This indicates that
Let \(\sigma _\eta \) denote the final solution determined in Step 3. By assumption, we have \(1\le \eta \le n\). According to the definition of \(\eta \), on the one hand, we must have \(Q_\eta =R_0-W_\eta \ge 0\), which indicates that it is feasible if all jobs in \(\mathcal{J}\setminus \{J_{1},\ldots ,J_\eta \}\) are rejected. On the other hand, we have
Recall that \(r_1+p_1\le r_2+p_2 \le \cdots \le r_n+p_n\). We thus have
and
By the construction of \(\sigma _\eta \), the unique job batch starts to be processed at time \(\max _{1\le i \le \eta }\{r_i\}\), and thus, its value
By (16), (15), (14), (13) and (12), we have
Hence, \(\mathbf{H}_1\) is a 2-approximation to problem \(\mathcal{P}\).
Consider the time complexity of \(\mathbf{H}_1\). Step 1 takes \(O(n\log n)\) time. Each of Step 2 and Step 3 only takes O(n) time. Hence, the time complexity of \(\mathbf{H}_1\) is \(O(n\log n)\).
Finally, we show that the bound is tight. Consider the following instance with 2 jobs: \((r_1, p_1,w_1) = (0, 2, 1)\), \((r_2, p_2,w_2) = (2, 0, 2)\), where \(R_0=1\), \(\alpha =0.5\) and \(c=0\). Then, \(r_1+p_1=r_2+p_2=2\), \(w_1+w_2 = 3\), \(Q_0=-2\), \(Q_1=-1\) and \(Q_2=1\). Hence, \(\eta =2\), and the solution determined by \(\mathbf{H}_1\) is as follows: Both \(J_1\) and \(J_2\) are accepted and processed as a batch starting from time 2. The value of such a solution is equal to \(0.5\times (2+2)=2\). However, the optimal schedule is as follows: both \(J_1\) and \(J_2\) are accepted, \(J_1\) starts its processing at time 0, and \(J_2\) starts its processing at time 2 (i.e., there are two batches in the optimal solution, each batch contains exactly one job). The value of the optimal solution equals \(0.5\times 2=1\). Hence, the bound is tight. This completes the proof. \(\square \)
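For readers who want to experiment with \(\mathbf{H}_1\), the following is a minimal Python sketch reconstructed from the proof above: sort the jobs by \(r_j+p_j\) (Step 1), compute the quantities \(W_i\), \(Q_i\) and \(z_i\) (Step 2), pick \(\eta =\arg \min \{z_i\mid Q_i\ge 0\}\) (Step 3), and schedule the accepted jobs as a single batch started as early as possible. The function name, the handling of the reject-all option, and the setup-time parameter `s` (taken as 0 here, since the tight instance above specifies only the setup cost c) are assumptions rather than the paper's exact pseudocode.

```python
import math

def h1(jobs, R0, alpha, c, s=0.0):
    """Sketch of H1, reconstructed from the proof of Theorem 1.

    jobs: list of (r_j, p_j, w_j); R0: rejection budget; alpha: makespan
    weight; c: batch setup cost; s: batch setup time (assumed 0 here).
    Returns (solution value, indices of accepted jobs in sorted order).
    """
    # Step 1: index jobs so that r_1 + p_1 <= ... <= r_n + p_n.
    jobs = sorted(jobs, key=lambda j: j[0] + j[1])
    w_total = sum(w for _, _, w in jobs)

    # Candidate 0: reject every job (feasible only when Q_0 = R0 - w(J) >= 0).
    best_z, best_eta = math.inf, None
    if R0 - w_total >= 0:
        best_z, best_eta = (1 - alpha) * w_total, 0

    # Steps 2-3: eta = argmin{ z_i : Q_i >= 0 } over i = 1..n.
    prefix_w = 0.0
    for i, (r, p, w) in enumerate(jobs, start=1):
        prefix_w += w
        W_i = w_total - prefix_w          # total penalty of rejected jobs
        if R0 - W_i < 0:                  # Q_i < 0: rejection budget violated
            continue
        z_i = alpha * (r + p) + (1 - alpha) * (c + W_i)
        if z_i < best_z:
            best_z, best_eta = z_i, i

    if best_eta in (None, 0):
        return best_z, []

    # Single batch containing jobs 1..eta, started as early as possible.
    batch = jobs[:best_eta]
    start = max(r for r, _, _ in batch)
    cmax = start + s + max(p for _, p, _ in batch)
    W_eta = w_total - sum(w for _, _, w in batch)
    return alpha * cmax + (1 - alpha) * (c + W_eta), list(range(best_eta))
```

On the tight instance above, `h1([(0, 2, 1), (2, 0, 2)], 1, 0.5, 0)` returns the value 2, twice the optimum of 1, as the proof predicts.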
Proof of Theorem 2
Consider \(\mathcal{A}(\sigma ^*)\), the set of accepted jobs in \(\sigma ^*\). If \(\mathcal{A}(\sigma ^*)=\emptyset \), then the optimal solution value \(z^*=z_0\), which can be obtained by rejecting all the jobs. We only need to consider the case when \(\mathcal{A}(\sigma ^*)\ne \emptyset \). According to Step 1 of \(\mathbf{H}_1\), we have \(r_1\le r_2 \le \cdots \le r_n\) if \(p_1=p_2=\cdots =p_n\), and \(p_1\le p_2 \le \cdots \le p_n\) if \(r_1=r_2=\cdots =r_n\). In either case, we let \(\tau \) denote the largest job index among all jobs in \(\mathcal{A}(\sigma ^*)\). Then, the optimal solution is as follows: All jobs in \(\{J_j\in \mathcal{J}\mid j\le \tau \}\) are accepted and processed as a batch starting from time \(r_\tau \), while all other jobs are rejected. The value of the optimal solution is equal to \(\alpha (r_\tau +p_\tau )+(1-\alpha )(c+W_{\tau })= z_\tau \). Observe that in \(\mathbf{H}_1\), we have \(z_i=\alpha (r_i+p_i)+(1-\alpha )(c+W_{i})\) for \(i=1,2,\ldots ,n\). As a result, \(\eta =\arg \min _{i=1,2,\ldots ,n}\{z_i\mid Q_i\ge 0\} =\tau \) (due to optimality). Hence, \(\mathbf{H}_1\) generates an optimal solution to \(\mathcal{P}\) when either \(p_1=p_2=\cdots =p_n\) or \(r_1=r_2=\cdots =r_n\). \(\square \)
Proof of Lemma 1
Note that the LPT-rule holds for the classical unbounded parallel-batch single-machine scheduling problem \(B\mid b= +\infty , r_j\mid C_{\max }\) (see Lee and Uzsoy 1999). Consider \(\mathcal{A}(\sigma ^*)\), the set of accepted jobs in the optimal solution \(\sigma ^*\). Clearly, in the schedule for the jobs in \(\mathcal{A}(\sigma ^*)\), the fixed batch setup cost c does not affect the optimality of the LPT-rule. Hence, the result holds. \(\square \)
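The LPT property cited here also suggests a simple exact method for the underlying batching subproblem: if some optimal schedule batches jobs consecutively in longest-processing-time order, a dynamic program over the batch boundaries suffices. The sketch below is a plausible \(O(n^2)\) implementation built only on that property; it is not the algorithm of Lee and Uzsoy (1999) itself, and the zero setup time is an assumption.

```python
def lpt_batch_makespan(jobs, s=0.0):
    """Minimize Cmax for an unbounded parallel-batch machine with release
    dates, assuming some optimal schedule partitions the LPT order into
    consecutive batches (the property cited in Lemma 1).

    jobs: list of (r_j, p_j); s: batch setup time (assumed 0 here).
    """
    jobs = sorted(jobs, key=lambda j: -j[1])   # LPT (longest first) order
    n = len(jobs)
    INF = float("inf")
    dp = [INF] * (n + 1)                       # dp[i]: best Cmax of first i jobs
    dp[0] = 0.0
    for j in range(1, n + 1):
        r_max = 0.0
        # Last batch = jobs i..j-1 (0-based); scan i downward so that
        # r_max tracks the largest release date inside the batch.
        for i in range(j - 1, -1, -1):
            r_max = max(r_max, jobs[i][0])
            p_batch = jobs[i][1]               # longest job in the batch
            dp[j] = min(dp[j], max(dp[i], r_max) + s + p_batch)
    return dp[n]
```

On the two-job tight instance from the proof of Theorem 1, it finds the optimal makespan 2 by splitting the jobs into two batches.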
Proof of Lemma 2
Note that \(s^*_1\le t_1\Delta \) and \(s^*_2-s^*_1\le (t_2-t_1+1)\Delta \). As a result, we have
We only need to show that \(\cup _{\tau =1}^i H_\tau \supseteq \cup _{\tau =1}^i B^*_\tau \) for \(i=2,\ldots ,\rho \) when \(\rho \ge 2\).
Suppose that the inclusion \(\cup _{\tau =1}^{\rho '-1} H_\tau \supseteq \cup _{\tau =1}^{\rho '-1} B^*_\tau \) has been shown to hold for some \(\rho '\in \{2,\ldots ,\rho \}\); we now prove that \(\cup _{\tau =1}^{\rho '} H_\tau \supseteq \cup _{\tau =1}^{\rho '} B^*_\tau \). By the induction hypothesis, it suffices to prove \(\cup _{\tau =1}^{\rho '} H_\tau \supseteq B^*_{\rho '}\). By definition, we have
and
Note that \(s^*_{\rho '}\le t_{\rho '}\Delta \) and \(s^*_{\rho '+1}-s^*_{\rho '}\le (t_{\rho '+1}-t_{\rho '}+1)\Delta \). As a result,
We thus have \(\cup _{\tau =1}^{\rho '} H_\tau \supseteq B^*_{\rho '}\). The result holds. \(\square \)
Proof of Lemma 3
By the construction of \(\sigma _\rho \), the starting time of the first batch \(H_1\) is no later than \(t_1\Delta \), and the setup time plus the processing time of batch \(H_1\) is no greater than \((t_2-t_1+1)\Delta \). As a result, the completion time of \(H_1\) is no greater than \(t_1\Delta +(t_2-t_1+1)\Delta =(t_2+1)\Delta \). Hence, the result holds when \(\rho =1\). We consider the case when \(\rho \ge 2\) in the following.
Suppose that for some i with \(2\le i\le \rho \), the completion time of the \((i-1)\)-th batch \(H_{i-1}\) is no greater than \((t_{i}+i-1)\Delta \); we now prove that the completion time of the i-th batch \(H_i\) is no greater than \((t_{i+1}+i)\Delta \). Note that by (5), we have \(r_j\le t_{i}\Delta \le (t_{i}+i-1)\Delta \) for any \(J_j\in H_i\), which indicates that the starting time of \(H_{i}\) is no later than \((t_{i}+i-1)\Delta \). By (4), the setup time plus the processing time of batch \(H_{i}\) is no greater than \((t_{i+1}-t_{i}+1)\Delta \). As a result, the completion time of \(H_{i}\) is no greater than \((t_{i}+i-1)\Delta +(t_{i+1}-t_{i}+1)\Delta =(t_{i+1}+i)\Delta \). This completes the proof. \(\square \)
Proof of Lemma 5
By definition, we have \(H'_1=H_1\),
and
As \(t_3\le t_{\rho +1}\), we have \((\cup _{\tau =1}^{2}H'_\tau )\supseteq H_{2}\). By Lemma 2, we have \((\cup _{\tau =1}^{2} H'_\tau )\supseteq (\cup _{\tau =1}^{2} H_\tau ) \supseteq (\cup _{\tau =1}^{2} B^*_\tau ).\) Note that \(\mathcal{A}(\sigma ^*)=\cup _{i=1}^\rho B_i^*\). Hence, to prove the result, we only need to show \((\cup _{\tau =1}^{3}H'_\tau )\supseteq \cup _{i=3}^\rho B_i^*\). By definition again,
One important observation is that for any \(J_j\in \cup _{i=3}^\rho B_i^*\), we must have
and
(otherwise, according to the LPT-rule, the total processing time of the first 3 batches in \(\sigma ^*\) would exceed \(C_{\max }(\sigma ^*)\), a contradiction). This indicates that
The result holds. \(\square \)
Proof of Lemma 6
Note that \(H'_1=H_1\). By Lemma 3, the completion time of \(H'_1\) is bounded by \((t_{2}+1)\Delta \). By definition of \(H'_{2}\), on the one hand, \(r_j\le t_{2}\Delta \le (t_{2}+1)\Delta \) for any \(J_j\in H'_{2}\), which indicates that the starting time of \(H'_{2}\) is no later than \((t_{2}+1)\Delta \). On the other hand, the processing time of \(H'_{2}\) is no greater than \((t_{\rho +1}-t_{2}+1)\Delta \). Hence, the completion time of \(H'_{2}\) is no greater than \((t_{2}+1)\Delta +(t_{\rho +1}-t_{2}+1)\Delta =(t_{\rho +1}+2)\Delta \).
By definition of \(H'_{3}\), on the one hand, \(r_j\le t_{\rho +1}\Delta \le (t_{\rho +1}+2)\Delta \) for any \(J_j\in H'_{3}\), which indicates that the starting time of \(H'_{3}\) is no later than \((t_{\rho +1}+2)\Delta \). On the other hand, the processing time of batch \(H'_{3}\) is no greater than \(\frac{1}{3}t_{\rho +1}\Delta \). As a result, \(C_{\max }({\hat{\sigma }}_2)\), the completion time of \(H'_{3}\), is no greater than \((t_{\rho +1}+2)\Delta +\frac{1}{3}t_{\rho +1}\Delta =\frac{4}{3}t_{\rho +1}\Delta +2\Delta \). Note that
and
Hence,
The result holds. \(\square \)
Proof of Lemma 8
After our method to combine jobs is applied, there are no more than \(T-2\) jobs remaining in \(\mathcal{S}\), and there are no more than \((T-2)\cdot \zeta \) jobs remaining in \(\mathcal{L}\). Hence, \(|\mathcal{J}'|\le (T-2)\cdot \zeta +(T-2)\). To prove the result, we only need to show that \(\zeta =O(T\log T)\). By definition,
According to the partial sums of the harmonic series, we have
where \(\gamma \approx 0.5772156649\) is the Euler–Mascheroni constant, which is less than 1 (see Havil 2009). As a result,
The result holds. \(\square \)
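The harmonic-series step can be checked numerically. Below is a small sketch; since the defining equation of \(\zeta \) is elided in this excerpt, the function `zeta_guess` assumes \(\zeta =\sum _{i=1}^{T-2}\left\lceil T/i\right\rceil \) (the total number of subsets \(L_i^\kappa \), consistent with the index ranges in the proof of Lemma 10), and that assumption should be verified against the full paper.

```python
import math

def harmonic(n):
    # Partial sum H_n = 1 + 1/2 + ... + 1/n of the harmonic series.
    return sum(1.0 / i for i in range(1, n + 1))

def zeta_guess(T):
    # Hypothetical reconstruction of zeta as the number of subsets
    # L_i^kappa: sum_{i=1}^{T-2} ceil(T/i). The exact definition is
    # elided in this appendix, so this is an assumption.
    return sum(math.ceil(T / i) for i in range(1, T - 1))

# H_n = ln n + gamma + O(1/n) with gamma < 1, hence H_n <= ln n + 1:
for n in (2, 10, 100, 10**5):
    assert harmonic(n) <= math.log(n) + 1.0

# Since ceil(T/i) < T/i + 1, zeta < T * H_{T-2} + (T-2) = O(T log T):
for T in (5, 50, 500, 5000):
    assert zeta_guess(T) < T * (math.log(T) + 1.0) + T
```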
Proof of Lemma 10
Let \(J_\eta \) be any job in \(B'_\mu \setminus \{J_{t_\mu }\}\). By definition, \(p_{t_\mu }\ge p_\eta \). If \(J_{t_\mu }\in \mathcal{S}\), then \({\tilde{p}}_{t_\mu }={\tilde{p}}_\eta =0\), the result holds. We only need to consider the case when \(J_{t_\mu }\in \mathcal{L}\) in the following analysis. Assume that \(J_{t_\mu }\in L_{i}^\kappa \) for some integers \(i\in \{1,2,\ldots ,T-2\}\) and \(\kappa \in \{1,2,\ldots ,\left\lceil T/i\right\rceil \}\). If job \(J_\eta \) is also within set \(L_{i}^\kappa \), then \({\tilde{p}}_{t_\mu }=\tilde{p}_\eta =i\Delta _1+(\kappa -1)\frac{i}{T}\Delta _1\), the result holds. If job \(J_\eta \) is within set \(L_{i}^{\kappa '}\) with \(1\le \kappa '<\kappa \), then \(\tilde{p}_\eta =i\Delta _1+(\kappa '-1)\frac{i}{T}\Delta _1< i\Delta _1+(\kappa -1)\frac{i}{T}\Delta _1={\tilde{p}}_{t_\mu }\), the result holds. If job \(J_\eta \) is within set \(L_{i'}\) with \(1\le i'<i\), then \({\tilde{p}}_\eta \le p_\eta < (i'+1)\Delta _1 \le i\Delta _1\le {\tilde{p}}_{t_\mu }\), the result still holds. This completes the proof. \(\square \)
Proof of Lemma 11
According to our job partition method, for any \(J_j\in \mathcal{J}\), we always have
Hence, the result holds for the case when \(\ell =1\). We only need to consider the case when \(\ell >1\) in the following analysis.
Remember that \(\{J_{t_1},\ldots ,J_{t_{\ell -1}}\}\subseteq \mathcal{L}\), and that \(L_i=\{J_j\in \mathcal{L}\mid i\Delta _1< p_j\le (i+1)\Delta _1\}\) for \(i=1,2,\ldots ,T-2\). Let \(\Phi =\{\tilde{p}_{t_1},{\tilde{p}}_{t_2},\ldots ,{\tilde{p}}_{t_{\ell -1}}\}\). For \(i=1,2,\ldots ,T-2\), let \(\varphi _i\) be the set of distinct scaled processing times of the jobs in \(L_i\), and let \(v_i=|\varphi _i\cap \Phi |\) be the number of distinct scaled processing times of the jobs in \(L_i\) that are included within \(\Phi \). According to our job partition method, we must have
and
Furthermore, for any \(J_j\in L_i\), we have
As a result, by (18), (19) and (20),
By Lemma 9 and equality \(C_{\max }({\tilde{\sigma }}) =C_{\max }(\pi ^*)\), we have
Hence, by (17), (21) and (22), we have
This completes the proof. \(\square \)
Proof of Lemma 12
Based on the schedule of \({\tilde{\sigma }}\), we first replace \({\tilde{p}}_j\) by \(p_j\) for \(j=1,2,\ldots ,n\), and let batches \(B'_1,\ldots ,B'_\ell \) start processing as early as possible following this order. After doing so, by Lemma 11, the makespan of the accepted jobs increases by no more than \(\frac{3\epsilon z^*}{5\alpha }\). Then, we further replace \(\tilde{r}_j\) by \(r_j\) for \(j=1,2,\ldots ,n\), and let batches \(B'_1,\ldots ,B'_\ell \) start processing as early as possible following this order. After doing so, the makespan of the accepted jobs further increases by no more than \(\Delta _1=\frac{\epsilon Z_0}{5\alpha }\le \frac{2\epsilon z^*}{5\alpha }\). Hence, the total increment of the makespan of the accepted jobs in \(\sigma _f\) is bounded by \(\frac{3\epsilon z^*}{5\alpha }+ \frac{2\epsilon z^*}{5\alpha } =\frac{\epsilon z^*}{\alpha }\). This completes the proof. \(\square \)
Ou, J. Near-linear-time approximation algorithms for scheduling a batch-processing machine with setups and job rejection. J Sched 23, 525–538 (2020). https://doi.org/10.1007/s10951-020-00657-4