# Decomposition algorithms for submodular optimization with applications to parallel machine scheduling with controllable processing times

## Abstract

In this paper we present a decomposition algorithm for maximizing a linear function over a submodular polyhedron intersected with a box. Apart from this contribution to submodular optimization, our results extend the toolkit available in deterministic machine scheduling with controllable processing times. We demonstrate how this method can be applied to developing fast algorithms for minimizing total compression cost for preemptive schedules on parallel machines with respect to given release dates and a common deadline. The resulting scheduling algorithms are faster and easier to justify than those previously known in the scheduling literature.

### Keywords

Submodular optimization · Parallel machine scheduling · Controllable processing times · Decomposition

### Mathematics Subject Classification

90C27 · 90B35 · 90C05

## 1 Introduction

In scheduling with controllable processing times, the actual durations of the jobs are not fixed in advance, but have to be chosen from a given interval. This area of scheduling has been active since the 1980s; see the surveys [16] and [22].

Normally, for a scheduling model with controllable processing times two types of decisions are required: (1) each job has to be assigned its actual processing time, and (2) a schedule has to be found that provides a required level of quality. There is a penalty for assigning shorter actual processing times, since the reduction in processing time is usually associated with an additional effort, e.g., allocation of additional resources or improving processing conditions. The quality of the resulting schedule is measured with respect to the cost of assigning the actual processing times that guarantee a certain scheduling performance.

As established in [23, 24], there is a close link between scheduling with controllable processing times and linear programming problems with submodular constraints. This allows us to use the achievements of submodular optimization [4, 21] for design and justification of scheduling algorithms. On the other hand, formulation of scheduling problems in terms of submodular optimization leads to the necessity of studying novel models with submodular constraints. Our papers [25, 27] can be viewed as convincing examples of such a positive mutual influence of scheduling and submodular optimization.

This paper, which builds on [26], makes another contribution towards the development of solution procedures for problems of submodular optimization and their applications to scheduling models. We present a decomposition algorithm for maximizing a linear function over a submodular polyhedron intersected with a box. Apart from this contribution to submodular optimization, our results extend the toolkit available in deterministic machine scheduling. We demonstrate how this method can be applied to several scheduling problems, in which it is required to minimize the total penalty for choosing actual processing times, also known as total compression cost. The jobs have to be processed with preemption on several parallel machines, so that no job is processed after a common deadline. The jobs may have different release dates.

The paper is organized as follows. Section 2 gives a survey of the relevant results on scheduling with controllable processing times. In Sect. 3 we reformulate three scheduling problems in terms of linear programming problems over a submodular polyhedron intersected with a box. Section 4 outlines a recursive decomposition algorithm for solving maximization linear programming problems with submodular constraints. The applications of the developed decomposition algorithm to scheduling with controllable processing times are presented in Sect. 5. The concluding remarks are contained in Sect. 6.

## 2 Scheduling with controllable processing times: a review

In this section, we give a brief overview of the known results on the preemptive scheduling problems with controllable processing times to minimize the total compression cost for schedules that are feasible with respect to given release dates and a common deadline.

Formally, in the model under consideration the jobs of set \(N=\{1,2,\ldots ,n\}\) have to be processed on parallel machines \(M_{1},M_{2},\ldots ,M_{m}\), where \(m\ge 2\). For each job \(j\in N\), its processing time \(p(j)\) is not given in advance but has to be chosen by the decision-maker from a given interval \(\left[ \underline{p}(j),\overline{p}(j)\right] \). That selection process can be seen as either *compressing* (also known as *crashing*) the longest processing time \(\overline{p}(j)\) down to \(p(j)\), or *decompressing* the shortest processing time \(\underline{p}(j)\) up to \(p(j)\). In the former case, the value \(x(j)=\overline{p}(j)-p(j)\) is called the *compression amount* of job \(j\), while in the latter case \(z(j)=p(j)-\underline{p}(j)\) is called the *decompression amount* of job \(j\). Compression may decrease the completion time of each job \(j\) but incurs additional cost \(w(j)x(j)\), where \(w(j)\) is a given non-negative unit compression cost. The total cost associated with a choice of the actual processing times is represented by the linear function \(W=\sum _{j\in N}w(j)x(j)\).
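The bookkeeping above can be illustrated with a tiny numeric sketch (all data values and the helper name `total_compression_cost` are ours, for illustration only):

```python
# Illustration of the compression bookkeeping (hypothetical data).
# For each job j, p(j) is chosen in [p_lo[j], p_hi[j]];
# x(j) = p_hi[j] - p[j] is the compression amount,
# z(j) = p[j] - p_lo[j] the decompression amount, and
# W = sum of w(j) * x(j) is the total compression cost.

p_lo = {1: 2.0, 2: 1.0, 3: 4.0}   # shortest processing times
p_hi = {1: 5.0, 2: 3.0, 3: 6.0}   # longest processing times
w    = {1: 3.0, 2: 1.0, 3: 2.0}   # unit compression costs

def total_compression_cost(p):
    # p must respect the box constraints p_lo <= p <= p_hi
    assert all(p_lo[j] <= p[j] <= p_hi[j] for j in p)
    return sum(w[j] * (p_hi[j] - p[j]) for j in p)

print(total_compression_cost({1: 4.0, 2: 3.0, 3: 5.0}))  # 3*1 + 1*0 + 2*1 = 5.0
```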

Each job \(j\in N\) is given a *release date* \(r(j)\), before which it is not available, and a common *deadline* \(d\), by which its processing must be completed. In the processing of any job, *preemption* is allowed, so that the processing can be interrupted on any machine at any time and resumed later, possibly on another machine. It is not allowed to process a job on more than one machine at a time, and a machine processes at most one job at a time.

Given a schedule, let \(C(j)\) denote the completion time of job \(j\), i.e., the time at which the last portion of job \(j\) is finished on the corresponding machine. A schedule is called *feasible* if the processing of a job \(j\in N\) takes place in the time interval \(\left[ r(j),d \right] \).

We distinguish between *identical* parallel machines and *uniform* parallel machines. In the former case, the machines have the same speed, so that for a job \(j\) with an actual processing time \(p(j)\) the total length of the time intervals in which this job is processed in a feasible schedule is equal to \(p(j)\). If the machines are uniform, then it is assumed that machine \(M_{h}\) has speed \(s_{h},\,1\le h\le m\). Without loss of generality, throughout this paper we assume that the machines are numbered in non-increasing order of their speeds, i.e., \(s_{1}\ge s_{2}\ge \cdots \ge s_{m}\).

If job \(j\) is processed on a uniform machine \(M_{h}\) for a total of \(t\) time units, then \(s_{h}t\) is the *processing amount* of job \(j\) on machine \(M_{h}\). It follows that in a feasible schedule the processing amounts of job \(j\) over all machines must sum to its actual processing time \(p(j)\).

If the processing times \(p(j),\,j\in N\), are fixed then the corresponding counterpart of problem \(\alpha |r(j),p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W\) is denoted by \(\alpha |r(j),pmtn|C_{\max }\). In the latter problem it is required to find a preemptive schedule that for the corresponding settings minimizes the makespan \(C_{\max }=\max \left\{ C(j)|j\in N\right\} \).

In the scheduling literature, there are several interpretations and formulations of scheduling models that are related to those with controllable processing times. Below we give a short overview of them, indicating the points of distinction and similarity with our definition of the model.

In a related stream of research, the processing times are *resource-dependent*, so that the more units of a single additional resource are given to a job, the more it can be compressed. In that model, a job \(j\in N\) has a ‘normal’ processing time \(b(j)\) (no resource given), and its actual processing time becomes \(p(j)=b(j)-a(j)u(j)\), provided that \(u(j)\) units of the resource are allocated to the job, where \(a(j)\) is interpreted as a compression rate. The amount of the resource to be allocated to a job is limited by \(0\le u(j)\le \tau (j)\), where \(\tau (j)\) is a known job-dependent upper bound. The cost of using one unit of the resource for compressing job \(j\) is denoted by \(v(j)\), and it is required to minimize the total cost of resource consumption. This interpretation of the controllable processing times is essentially equivalent to that adopted in this paper, which can be seen by setting \(\overline{p}(j)=b(j)\), \(\underline{p}(j)=b(j)-a(j)\tau (j)\), \(x(j)=a(j)u(j)\) and \(w(j)=v(j)/a(j)\), so that the resource cost \(v(j)u(j)\) coincides with the compression cost \(w(j)x(j)\).

Scheduling problems with controllable processing times can serve as mathematical models in make-or-buy decision-making; see, e.g., Shakhlevich et al. [25]. In manufacturing, it is often the case that either the existing production capabilities are insufficient to fulfill all orders internally in time or the cost of work-in-process of an order exceeds a desirable amount. Such an order can be partly subcontracted. Subcontracting incurs additional cost but that can be either compensated by quoting realistic deadlines for all jobs or balanced by a reduction in internal production expenses. The make-or-buy decisions should be taken to determine which part of each order is manufactured internally and which is subcontracted. Under this interpretation, the orders are the jobs and for each order \(j\in N\), the value of \(\overline{p}(j)\) is interpreted as the processing requirement, provided that the order is manufactured internally in full, while \(\underline{p}(j)\) is a given mandatory limit on the internal production. Further, \(p(j)=\overline{p}(j)-x(j)\) is the chosen actual time for internal manufacturing, where \(x(j)\) shows how much of the order is subcontracted and \(w(j)x(j)\) is the cost of this subcontracting. Thus, the problem is to minimize the total subcontracting cost and find a deadline-feasible schedule for internally manufactured orders.

It is obvious that for scheduling problems with controllable processing times, minimizing the total compression cost \(W\) is equivalent to maximizing either the total decompression cost \(\sum w(j)z(j)\) or the total weighted processing time \(\sum w(j)p(j)\). Most of the problems relevant to this study have been solved using a greedy approach. One way of implementing this approach is to start with a (possibly, infeasible) schedule in which all jobs are fully decompressed to their longest processing times \(\overline{p}(j)\), scan the jobs in non-decreasing order of their weights \(w(j)\) and compress each job by the smallest possible amount that guarantees a feasible processing of a job. Another approach, which is in some sense dual to the one described above, is to start with a feasible schedule in which all jobs are fully compressed to their smallest processing times \(\underline{p}(j)\), scan the jobs in non-increasing order of their weights \(w(j)\) and decompress each job by the largest possible amount.
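For the simplest setting of identical machines, zero release dates and a common deadline \(d\), feasibility of a vector of processing times is characterized by McNaughton's wrap-around rule: \(\max _{j}p(j)\le d\) and \(\sum _{j}p(j)\le md\). Under that characterization, the second ("dual") greedy above can be sketched as follows (an illustrative sketch in our own notation, not the paper's algorithm; data layout and the function name are ours):

```python
# "Dual" greedy for identical machines, zero release dates, common deadline d.
# Feasibility (McNaughton): every p(j) <= d and sum of p(j) <= m*d.
# Start fully compressed (p = p_lo, assumed feasible) and decompress jobs
# in non-increasing order of weight w(j), each by the largest feasible amount.

def greedy_decompress(p_lo, p_hi, w, m, d):
    assert max(p_lo.values()) <= d and sum(p_lo.values()) <= m * d, "infeasible"
    p = dict(p_lo)                       # start from the fully compressed vector
    slack = m * d - sum(p.values())      # remaining total machine capacity
    for j in sorted(w, key=w.get, reverse=True):   # heaviest jobs first
        delta = min(p_hi[j] - p[j],      # cannot exceed the upper bound
                    d - p[j],            # no job may exceed the deadline
                    slack)               # total capacity m*d
        p[j] += delta
        slack -= delta
    return p
```

Decompressing the heaviest jobs first maximizes \(\sum w(j)p(j)\) here because the feasible region is a polymatroid-like structure with one total-capacity constraint and per-job caps.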

Despite the similarity of these approaches, in early papers on this topic each problem is considered separately and a justification of the greedy approach is often lengthy and developed from the first principles. However, as established by later studies, the greedy nature of the solution approaches is due to the fact that many scheduling problems with controllable processing times can be reformulated in terms of linear programming problems over special regions such as submodular polyhedra, (generalized) polymatroids, base polyhedra, etc. See Sect. 3 for definitions and main concepts of submodular optimization.

Nemhauser and Wolsey [15] were among the first who noticed that scheduling with controllable processing times could be handled by methods of submodular optimization; see, e.g., Example 6 (Sect. 6 of Chapter III.3) of the book [15]. A systematic development of a general framework for solving scheduling problems with controllable processing times via submodular methods has been initiated by Shakhlevich and Strusevich [23, 24] and further advanced by Shakhlevich et al. [25]. This paper makes another contribution in this direction.

Below we review the known results on the problems to be considered in this paper. Two aspects of the resulting algorithms are important: (1) finding the actual processing times and therefore the optimal value of the function, and (2) finding the corresponding optimal schedule. The second aspect is related to traditional scheduling to minimize the makespan with fixed processing times.

*Zero release dates, common deadline.* The results for the models under these conditions are summarized in the second and third columns of Table 1. If the machines are identical, then problem \(P|pmtn|C_{\max }\) with fixed processing times can be solved by a linear-time algorithm due to McNaughton [14]. As shown by Jansen and Mastrolilli [9], problem \(P|p(j)=\overline{p}(j)-x(j),pmtn,C(j)\le d|W\) reduces to a continuous generalized knapsack problem and can be solved in \(O(n)\) time. Shakhlevich and Strusevich [23] consider the bicriteria problem \(P|p(j)=\overline{p}(j)-x(j),pmtn|\left( C_{\max },W\right) \), in which the makespan \(C_{\max }\) and the total compression cost \(W=\sum w(j)x(j)\) have to be minimized simultaneously, in the Pareto sense; the running time of their algorithm is \(O(n\log n)\).

Table 1 Summary of the results

| Problem | \(r(j)=0\), \(\alpha =P\) | \(r(j)=0\), \(\alpha =Q\) | Arbitrary \(r(j)\), \(\alpha =P\) | Arbitrary \(r(j)\), \(\alpha =Q\) |
|---|---|---|---|---|
| \(\alpha \vert r(j),pmtn \vert C_{\max }\) | \(O(n)\) [14] | \(O(m \log m+n)\) [5] | \(O(n \log n)\) [18] | \(O(nm + n \log n)\) [19] |
| \(\alpha \vert r(j), p(j)=\overline{p}(j)-x(j), pmtn,C(j)\le d \vert W\), previously known | \(O(n)\) [9] | \(O(nm+n \log n)\) [17, 24] | \(O(n^{2}\log m)\) [27] | \(O(n^{2}m)\) [27] |
| \(\alpha \vert r(j), p(j)=\overline{p}(j)-x(j), pmtn,C(j)\le d \vert W\), this paper | – | \(O(\min \{n \log n, n+m\log m \log n \})\) (Sect. 5.1) | \(O(n \log n \log m)\) (Sect. 5.2) | \(O(nm\log n)\) (Sect. 5.3) |
| \(\alpha \vert r(j), p(j)=\overline{p}(j)-x(j),pmtn \vert \left( C_{\max },W\right) \) | \(O(n \log n)\) [23] | \(O(nm \log m)\) [27] | \(O(n^{2}\log m)\) [27] | \(O(n^{2}m)\) [27] |

In the case of uniform machines, the best known algorithm for solving problem \(Q|pmtn|C_{\max }\) with fixed processing times is due to Gonzalez and Sahni [5]. For problem \(Q|p(j)=\overline{p}(j)-x(j),pmtn,C(j) \le d|W\) Nowicki and Zdrzałka [17] show how to find the actual processing times in \(O(nm+n\log n)\) time. Shakhlevich and Strusevich [24] reduce the problem to maximizing a linear function over a generalized polymatroid; they give an algorithm that requires the same running time as that by Nowicki and Zdrzałka [17], but can be extended to solving a bicriteria problem \(Q|p(j)=\overline{p}(j)-x(j),pmtn|\left( C_{\max },W\right) \). The best running time for the bicriteria problem is \(O(nm\log m)\), which is achieved in [27] by submodular optimization techniques.

*Arbitrary release dates, common deadline.* The results for the models under these conditions are summarized in the fourth and fifth columns of Table 1. These models are symmetric to those with a common zero release date and arbitrary deadlines. Problem \(P|r(j),pmtn|C_{\max }\) with fixed processing times on \(m\) identical parallel machines can be solved in \(O(n\log n)\) time (or in \(O(n\log m)\) time if the jobs are pre-sorted) as proved by Sahni [18]. For the uniform machines, Sahni and Cho [19] give an algorithm for problem \(Q|r(j),pmtn|C_{\max }\) that requires \(O(mn+n\log n)\) time (or \(O(mn)\) time if the jobs are pre-sorted).

Prior to our work on the links between submodular optimization and scheduling with controllable processing times [27], no purpose-built algorithms had been known for problems \(\alpha |r(j),p(j)= \overline{p}(j)-x(j),pmtn,C(j)\le d|W\) with \(\alpha \in \left\{ P,Q\right\} \). It is shown in [27] that the bicriteria problems \(\alpha m|r(j),p(j)=\overline{p}(j)-x(j),pmtn|\left( C_{\max },W\right) \) can be solved in \(O\left( n^{2}\log m\right) \) time and in \(O(n^{2}m)\) time for \( \alpha =P\) and \(\alpha =Q\), respectively. Since a solution to a single criterion problem \(\alpha m|r(j),p(j)=\overline{p}(j)-x(j),pmtn,C(j)\le d|W\) is contained among the Pareto optimal solutions for the corresponding bicriteria problem \(\alpha m|r(j),p(j)=\overline{p}(j)-x(j),pmtn|\left( C_{\max },W\right) \), the algorithms from [27] are quoted in Table 1 as the best previously known for the single criterion problems with controllable processing times.

The main purpose of this paper is to demonstrate that the single criterion scheduling problems with controllable processing times to minimize the total compression cost can be solved by faster algorithms that are based on reformulation of these problems in terms of a linear programming problem over a submodular polyhedron intersected with a box. For the latter generic problem, we develop a recursive decomposition algorithm and show that for the scheduling applications it can be implemented in a very efficient way.

## 3 Scheduling with controllable processing times: submodular reformulations

For completeness, we start this section with definitions related to submodular optimization. Unless stated otherwise, we follow a comprehensive monograph on this topic by Fujishige [4], see also [10, 21]. In Sect. 3.1, we introduce a linear programming problem for which the set of constraints is a submodular polyhedron intersected with a box. Being quite general, the problem represents a range of scheduling models with controllable processing times. In Sect. 3.2 we give the details of the corresponding reformulations.

### 3.1 Preliminaries on submodular polyhedra

For a positive integer \(n\), let \(N=\{1,2,\ldots ,n\}\) be a ground set, and let \(2^{N}\) denote the family of all subsets of \(N\). For a subset \( X\subseteq N\), let \(\mathbb {R}^{X}\) denote the set of all vectors \({\mathbf {p}}\) with real components \(p(j)\), where \(j\in X\). For two vectors \({\mathbf {p}}=(p(1),p(2),\ldots ,p(n))\in \mathbb {R}^{N}\) and \({\mathbf {q}} =(q(1),q(2),\ldots ,q(n))\in \mathbb {R}^{N}\), we write \({\mathbf {p}}\le {\mathbf {q}}\) if \(p(j)\le q(j)\) for each \(j\in N\). Given a set \(X\subseteq \mathbb {R}^{N}\), a vector \({\mathbf {p}}\in X\) is called *maximal* in \( X\) if there exists no vector \({\mathbf {q}}\in X\) such that \({\mathbf {p}}\le {\mathbf {q}}\) and \({\mathbf {p}}\ne {\mathbf {q}}\). For a vector \({\mathbf {p}} \in \mathbb {R}^{N}\), define \(p(X)=\sum _{j\in X}p(j)\) for every set \(X\in 2^{N}\).

A set function \(\varphi :2^{N}\rightarrow \mathbb {R}\) with \(\varphi (\emptyset )=0\) is called *submodular* if the inequality
$$\begin{aligned} \varphi (X)+\varphi (Y)\ge \varphi (X\cup Y)+\varphi (X\cap Y) \end{aligned}$$
holds for all sets \(X,Y\in 2^{N}\). The pair \((2^{N},\varphi )\) is called a *submodular system* on \(N\), while \(\varphi \) is referred to as the *rank function* of that system.

For a submodular system \((2^{N},\varphi )\), define the polyhedra
$$\begin{aligned} P(\varphi )=\left\{ {\mathbf {p}}\in \mathbb {R}^{N}\mid p(X)\le \varphi (X),\ X\in 2^{N}\right\} \end{aligned}$$
and
$$\begin{aligned} B(\varphi )=\left\{ {\mathbf {p}}\in P(\varphi )\mid p(N)=\varphi (N)\right\} , \end{aligned}$$
called the *submodular polyhedron* and the *base polyhedron*, respectively, associated with the submodular system. Notice that \(B(\varphi ) \) represents the set of all maximal vectors in \(P(\varphi )\).

The generic problem studied in this paper is that of maximizing a linear function over the submodular polyhedron \(P(\varphi )\) intersected with a box:
$$\begin{aligned} \hbox {(LP):}\quad \max \sum _{j\in N}w(j)p(j) \quad \hbox {s.t.}\quad p(X)\le \varphi (X),\ X\in 2^{N};\qquad \underline{p}(j)\le p(j)\le \overline{p}(j),\ j\in N. \end{aligned}$$(4)
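For small ground sets, the submodular inequality can be verified by brute-force enumeration of all pairs of subsets (an illustrative sketch; the helper names are ours):

```python
# Brute-force check of phi(X) + phi(Y) >= phi(X | Y) + phi(X & Y)
# over all pairs of subsets of a small ground set N.
from itertools import combinations

def subsets(N):
    # All subsets of N as frozensets, by increasing cardinality.
    return [frozenset(c) for r in range(len(N) + 1)
            for c in combinations(sorted(N), r)]

def is_submodular(phi, N):
    S = subsets(N)
    return all(phi(X) + phi(Y) >= phi(X | Y) + phi(X & Y)
               for X in S for Y in S)

# The rank function of a uniform matroid, phi(X) = min(|X|, k), is submodular.
print(is_submodular(lambda X: min(len(X), 2), {1, 2, 3, 4}))  # True
```

By contrast, a supermodular function such as \(|X|^{2}\) fails the check.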

In our previous work [25], we have demonstrated that Problem (LP) can be reduced to optimization over a simpler structure, namely, over a base polyhedron. In fact, we have shown that a problem of maximizing a linear function over the intersection of a submodular polyhedron and a box is equivalent to maximizing the same objective function over a base polyhedron associated with another rank function.

**Theorem 1**

- (i)
Problem (LP) has a feasible solution if and only if \(\underline{\mathbf {p}}\in P(\varphi )\) and \(\underline{\mathbf {p}}\le \overline{\mathbf {p}}\).

- (ii)If Problem (LP) has a feasible solution, then the set of maximal feasible solutions of Problem (LP) is a base polyhedron \( B(\tilde{\varphi })\) associated with the submodular system \((2^{N},\tilde{ \varphi })\), where the rank function \(\tilde{\varphi }:2^{N}\rightarrow \mathbb {R}\) is given by$$\begin{aligned} \tilde{\varphi }(X)= \min _{Y\in 2^{N}}\left\{ \varphi (Y)+\overline{p}(X{\setminus } Y)-\underline{p}(Y{\setminus } X)\right\} . \end{aligned}$$(5)

Notice that the computation of the value \(\tilde{\varphi }(X)\) for a given \( X\in 2^{N}\) reduces to the minimization of a submodular function, which can be performed in polynomial time by any of the available algorithms for submodular function minimization [7, 20]. However, the running time of the known algorithms is fairly large. In many special cases of Problem (LP), including its applications to scheduling problems with controllable processing times, the value \(\tilde{\varphi }(X)\) can be computed more efficiently without using submodular function minimization, as shown later.
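For tiny instances, the value \(\tilde{\varphi }(X)\) in (5) can be evaluated by direct enumeration of all sets \(Y\), which makes the definition concrete (exponential time, for illustration only; all names are ours):

```python
# Direct evaluation of (5):
#   phi_tilde(X) = min over Y of [ phi(Y) + p_hi(X \ Y) - p_lo(Y \ X) ],
# where p_hi(A) and p_lo(A) denote sums over the set A.
from itertools import combinations

def phi_tilde(phi, p_lo, p_hi, N, X):
    X = frozenset(X)
    N = sorted(N)
    best = float("inf")
    for r in range(len(N) + 1):
        for Y in map(frozenset, combinations(N, r)):
            val = (phi(Y)
                   + sum(p_hi[j] for j in X - Y)
                   - sum(p_lo[j] for j in Y - X))
            best = min(best, val)
    return best
```

The scheduling applications in Sect. 5 replace this enumeration with fast, problem-specific closed-form minimizers.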

An advantage of the reduction of Problem (LP) to a problem of the form (6) is that the solution vector can be obtained essentially in a closed form, as stated in the theorem below.

**Theorem 2**

*Let the elements of \(N\) be numbered so that \(w(j_{1})\ge w(j_{2})\ge \cdots \ge w(j_{n})\). Then the vector \(\mathbf {p^{*}}\in \mathbb {R}^{N}\) defined by*
$$\begin{aligned} p^{*}(j_{h})=\tilde{\varphi }(\{j_{1},j_{2},\ldots ,j_{h}\})-\tilde{\varphi }(\{j_{1},j_{2},\ldots ,j_{h-1}\}),\quad h=1,2,\ldots ,n, \end{aligned}$$
*is an optimal solution of Problem (LP).*

This theorem immediately implies a simple algorithm for Problem (LP), which computes an optimal solution \(\mathbf {p^{*}}\) by determining the value \( \tilde{\varphi }(\{j_{1},j_{2},\ldots , j_{h}\})\) for each \(h = 1, 2, \ldots , n \). In this paper, instead, we use a different algorithm based on a decomposition approach to achieve better running times for special cases of Problem (LP), as explained in Sect. 4.
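The simple algorithm implied by Theorem 2 can be sketched as follows, with \(\tilde{\varphi }\) supplied as an oracle (an illustrative sketch; names are ours, and ties among equal weights are broken arbitrarily):

```python
# Greedy algorithm from Theorem 2: sort elements by non-increasing weight
# and set p*(j_h) = phi_tilde(H_h) - phi_tilde(H_{h-1}),
# where H_h = {j_1, ..., j_h} is the prefix of the sorted order.

def greedy_lp(phi_tilde, w):
    order = sorted(w, key=w.get, reverse=True)   # non-increasing weights
    p, prev, H = {}, 0, []
    for j in order:
        H.append(j)
        cur = phi_tilde(frozenset(H))            # oracle call for the prefix
        p[j] = cur - prev                        # marginal value of element j
        prev = cur
    return p
```

With a fast routine for \(\tilde{\varphi }\), this yields the running times discussed for the scheduling applications; the decomposition algorithm of Sect. 4 avoids the \(n\) separate oracle calls.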

### 3.2 Rank functions for scheduling applications

For each of the problems \(Q|p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W\), \(P|r(j),p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W\) and \(Q|r(j),p(j)= \overline{p}(j)-x(j),C(j)\le d,pmtn|W\), we need to find the actual processing times \(p(j)=\overline{p}(j)-x(j),\,j\in N\), such that all jobs can be completed by the common deadline \(d\) and the total compression cost \( W=\sum _{j\in N}w(j)x(j)\) is minimized. In what follows, we present LP formulations of these problems with \(p(j),\,j\in N\), being decision variables, and the objective function to be maximized being \(\sum _{j\in N}w(j)p(j)=\sum _{j\in N}w(j)\left( \overline{p}(j)-x(j)\right) \). Since each decision variable \(p(j)\) has a lower bound \(\underline{p}(j)\) and an upper bound \(\overline{p}(j),\) an LP formulation includes the box constraints of the form \(\underline{p}(j)\le p(j)\le \overline{p}(j),\,j\in N\).

It is known (see, e.g., [5]) that on uniform machines all jobs can be completed by time \(d\) if and only if:

- (i)
for each \(k,\,1\le k\le m-1\), the \(k\) longest jobs can be processed on the \(k\) fastest machines by time \(d\), and

- (ii)
all \(n\) jobs can be completed on all \(m\) machines by time \(d\).
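Conditions (i) and (ii) suggest that, on uniform machines with no release dates, the total amount of processing a set \(X\) of jobs can receive by time \(d\) is \(d\,S_{\min \{|X|,m\}}\), where \(S_{v}=s_{1}+\cdots +s_{v}\) are the partial sums of the machine speeds. The following sketch implements a rank function of that form in our own notation (a sketch consistent with the conditions above; the paper's exact formulas may differ in details):

```python
# Rank function suggested by conditions (i)-(ii) for uniform machines with a
# common deadline d and zero release dates: a set X of jobs can receive at
# most phi(X) = d * S_min(|X|, m) units of processing, where the speeds are
# sorted in non-increasing order and S_v = s_1 + ... + s_v.

def make_rank_function(speeds, d):
    s = sorted(speeds, reverse=True)   # non-increasing order of speeds
    S = [0.0]
    for v in s:
        S.append(S[-1] + v)            # S[v] = s_1 + ... + s_v
    m = len(s)
    return lambda X: d * S[min(len(X), m)]

phi = make_rank_function([3.0, 1.0], d=2.0)
print(phi(frozenset({1})), phi(frozenset({1, 2, 3})))  # 6.0 8.0
```

One can check by enumeration that such a function is submodular, so the feasible processing-time vectors indeed form a submodular polyhedron intersected with the box.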

## 4 Decomposition of LP problems with submodular constraints

In this section, we describe a decomposition algorithm for solving LP problems defined over a submodular polyhedron intersected with a box. In Sect. 4.1, we demonstrate that the linear programming problem under study can be recursively decomposed into subproblems of a smaller dimension, with some components of a solution vector fixed to one of their bounds. We provide an outline of an efficient recursive decomposition procedure in Sect. 4.2 and analyze its time complexity in Sect. 4.3. In Sect. 5 we present implementation details of the recursive decomposition procedure for the relevant scheduling models with controllable processing times.

### 4.1 Fundamental idea for decomposition

In this section, we show an important property, which makes the foundation of our decomposition algorithm for Problem (LP) of the form (4).

A subset \(\hat{N}\subseteq N\) is called a *heavy-element subset of* \(N\) *with respect to the weight vector* \(\mathbf {w}\) if it satisfies the condition
$$\begin{aligned} \min _{j\in \hat{N}}w(j)\ge \max _{j\in N{\setminus } \hat{N}}w(j), \end{aligned}$$
i.e., \(\hat{N}\) consists of elements of the largest weight. A set \(Y_{*}\in 2^{N}\) that attains the minimum in the right-hand side of (5) for a given set \(X\) is called an *instrumental* set for set \(X\).

**Lemma 1**

*Proof*

For a set \(A\in 2^{N}\), define a set function \(\varphi ^{A}:2^{A}\rightarrow \mathbb {R}\) by \(\varphi ^{A}(X)=\varphi (X),\ X\in 2^{A}\). The pair \((2^{A},\varphi ^{A})\) is a submodular system on \(A\), called the *restriction of* \((2^{N},\varphi )\) *to* \(A\). On the other hand, for a set \(A\in 2^{N}\) define a set function \(\varphi _{A}:2^{N{\setminus } A}\rightarrow \mathbb {R}\) by \(\varphi _{A}(X)=\varphi (X\cup A)-\varphi (A),\ X\in 2^{N{\setminus } A}\). The pair \((2^{N{\setminus } A},\varphi _{A})\) is a submodular system on \(N{\setminus } A\), called the *contraction of* \((2^{N},\varphi )\) *by* \(A\).

For vectors \({\mathbf {p}_\mathbf{1}}\in \mathbb {R}^{A}\) and \({\mathbf {p}}_{{\mathbf {2}}}\in \mathbb {R}^{N{\setminus } A}\), the *direct sum* \({\mathbf {p}_\mathbf{1}}\oplus {\mathbf {p}}_{{\mathbf {2}}}\in \mathbb {R}^{N}\) of \( \mathbf {p_{1}}\) and \({\mathbf {p}}_{{\mathbf {2}}}\) is defined by
$$\begin{aligned} ({\mathbf {p}_\mathbf{1}}\oplus {\mathbf {p}}_{{\mathbf {2}}})(j)={\left\{ \begin{array}{ll} p_{1}(j), &{} j\in A,\\ p_{2}(j), &{} j\in N{\setminus } A. \end{array}\right. } \end{aligned}$$

**Lemma 2**

- (i)
Each of problems (LP1) and (LP2) has a feasible solution.

- (ii)
If a vector \(\mathbf {p}_\mathbf{1}\in \mathbb {R}^{A}\) is an optimal solution of Problem (LP1) and a vector \(\mathbf {p}_\mathbf{2}\in \mathbb {R} ^{N{\setminus } A}\) is an optimal solution of Problem (LP2), then the direct sum \({\mathbf {p}}^{*}=\mathbf {p}_\mathbf{1}\oplus \mathbf {p}_\mathbf{2}\in \mathbb {R} ^{N}\) of \(\mathbf {p}_\mathbf{1}\) and \(\mathbf {p}_\mathbf{2}\) is an optimal solution of Problem (LP).

*Proof*

From Lemmas 1 and 2, we obtain the following property, which is used recursively in our decomposition algorithm.

**Theorem 3**

Notice that Problem (LPR) is obtained from Problem (LP) as a result of restriction to \(Y_{*}\) and the values of components \(p(j), j\in Y_{*}{\setminus } \hat{N}\), are fixed to their lower bounds in accordance with Property (c) of Lemma 1. Similarly, Problem (LPC) is obtained from Problem (LP) as a result of contraction by \(Y_{*}\) and the values of components \(p(j), j\in \hat{N}{\setminus } Y_{*}\), are fixed to their upper bounds in accordance with Property (b) of Lemma 1.

### 4.2 Recursive decomposition procedure

In this subsection, we describe how the original Problem (LP) can be decomposed recursively based on Theorem 3, until we obtain a collection of trivially solvable problems with no non-fixed variables. In each stage of this process, the current LP problem is decomposed into two subproblems, each with a reduced set of variables, while some of the original variables receive fixed values and stay fixed until the end.

*Remark 1*

In the subproblems arising in the recursion, a subset \(\hat{H}\subseteq H{\setminus } F\) is called a *heavy-element subset with respect to the weight vector* \(\mathbf {w}\) if it satisfies the condition \(\min _{j\in \hat{H}}w(j)\ge \max _{j\in (H{\setminus } F){\setminus } \hat{H}}w(j)\).

Each subproblem in the recursion is of the form Problem LP\((H,F,K,\mathbf {l},\mathbf {u})\), where:

- \(H\subseteq N\) is the index set of components of vector \({\mathbf {p}}\);

- \(F\subseteq H\) is the index set of fixed components, i.e., \(l(j)=u(j)\) holds for each \(j\in F\);

- \(K\subseteq N{\setminus } H\) is the set that defines the rank function \( \varphi _{K}^H :2^{H}\rightarrow \mathbb {R}\) such that$$\begin{aligned} \varphi _{K}^H (X)=\varphi (X\cup K)-\varphi (K), \qquad X\in 2^{H}; \end{aligned}$$

- \(\mathbf {l}=(l(j)\mid j\in H)\) and \(\mathbf {u}=(u(j)\mid j\in H)\) are respectively the vectors of the lower and upper bounds on the variables \(p(j), j\in H\). For \(j\in H\), each of \(l(j)\) and \(u(j)\) takes either the value \(\underline{p}(j)\) or the value \(\overline{p}(j)\) from the original Problem (LP). Notice that \(l(j)=u\left( j\right) \) for each \(j\in F\).

The original Problem (LP) is represented as Problem LP\((N,\emptyset ,\emptyset ,\underline{{\mathbf {p}}},\overline{{\mathbf {p}}})\). For \(j\in H\), we say that the variable \(p(j)\) is a *non-fixed variable* if \( l(j)<u(j)\) holds, and a *fixed variable* if \(l(j)=u(j)\) holds. If all the variables in Problem LP\((H,F,K,\mathbf {l},\mathbf {u})\) are fixed, i.e., \( l(j)=u(j)\) holds for all \(j\in H\), then an optimal solution is uniquely determined by the vector \(\mathbf {u}\in \mathbb {R}^{H}\).

Problem LP\((H{\setminus } Y_{*},F_{2},K\cup Y_{*},\mathbf {l}_\mathbf{2}, \mathbf {u}_\mathbf{2})\) inherits the set of fixed variables \(\left( H{\setminus } Y_{*}\right) \cap F\) from the problem of a higher level, and additionally the variables of set \(\hat{H}{\setminus } Y_{*}\) become fixed. These two sets are disjoint. Thus, the complete description of the set \( F_{2} \) of fixed variables in Problem LP\((H{\setminus } Y_{*},F_{2},K\cup Y_{*}, \mathbf {l}_\mathbf{2},\mathbf {u}_\mathbf{2})\) is given by \((\hat{H}\cup (H\cap F)){\setminus } Y_{*}\).

Recall that the original Problem (LP) is solved by calling Procedure Decomp\((N,\emptyset ,\emptyset ,\underline{{\mathbf {p}}},\overline{{\mathbf {p}}})\). Its actual running time depends on the choice of a heavy-element subset \(\hat{H}\) in Step 2 and on the time complexity of finding an instrumental set \(Y_{*}\).
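A single decomposition step of Procedure Decomp can be sketched as follows. Here the instrumental set \(Y_{*}\) is found by brute-force enumeration over \(2^{H}\), whereas the scheduling applications of Sect. 5 replace this enumeration with fast problem-specific routines; all function and variable names are ours:

```python
# One decomposition step (sketch). Given the subproblem LP(H, F, K, l, u) and
# a heavy-element subset Hhat, find an instrumental set Y_* minimizing
#   phi_K(Y) + u(Hhat \ Y) - l(Y \ Hhat)   over Y in 2^H,
# then emit the restriction to Y_* (p(j) fixed at l(j) for j in Y_* \ Hhat)
# and the contraction by Y_* (p(j) fixed at u(j) for j in Hhat \ Y_*).
from itertools import combinations

def decompose_step(phi, H, F, K, l, u, Hhat):
    H, F, K, Hhat = map(frozenset, (H, F, K, Hhat))
    phi_K = lambda Y: phi(Y | K) - phi(K)        # rank function of contraction by K
    def value(Y):
        return (phi_K(Y) + sum(u[j] for j in Hhat - Y)
                         - sum(l[j] for j in Y - Hhat))
    candidates = [frozenset(c) for r in range(len(H) + 1)
                  for c in combinations(sorted(H), r)]
    Y_star = min(candidates, key=value)
    # Problem (LPR): restriction to Y_*, variables of Y_* \ Hhat fixed at l.
    F1 = (Y_star - Hhat) | (F & Y_star)
    u1 = {j: (l[j] if j in Y_star - Hhat else u[j]) for j in Y_star}
    lp1 = (Y_star, F1, K, {j: l[j] for j in Y_star}, u1)
    # Problem (LPC): contraction by Y_*, variables of Hhat \ Y_* fixed at u.
    H2 = H - Y_star
    F2 = (Hhat | F) - Y_star                      # = (Hhat u (H n F)) \ Y_*
    l2 = {j: (u[j] if j in Hhat - Y_star else l[j]) for j in H2}
    lp2 = (H2, F2, K | Y_star, l2, {j: u[j] for j in H2})
    return Y_star, lp1, lp2
```

The sketch returns the two subproblems without recursing; a full solver would apply the same step to each until all variables are fixed.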

### 4.3 Analysis of time complexity

We analyze the time complexity of Procedure Decomp. To reduce the depth of recursion of the procedure, it makes sense to perform decomposition in such a way that the number of non-fixed variables in each of the two emerging subproblems is roughly a half of the number of non-fixed variables in the current Problem LP\((H,F,K,\mathbf {l},\mathbf {u})\).

**Lemma 3**

If at each level of recursion of Procedure Decomp for Problem LP\((H,F,K,\mathbf {l},\mathbf {u})\) with \(|H{\setminus } F|>1\) a heavy-element subset \(\hat{H}\subseteq H{\setminus } F\) in Step 2 is chosen to contain \(\lceil |H{\setminus } F|/2\rceil \) non-fixed variables, then the number of non-fixed variables in each of the two subproblems that emerge as a result of decomposition is either \(\left\lceil |H{\setminus } F|/2\right\rceil \) or \(\lfloor |H{\setminus } F|/2\rfloor \).

*Proof*

For Problem LP\((H,F,K,\mathbf {l},\mathbf {u})\), let \(g=|H{\setminus } F|\) denote the number of the non-fixed variables. In Step 2, Procedure Decomp\((H,F,K,\mathbf {l},\mathbf {u})\) selects a heavy-element subset \(\hat{H}\subseteq H{\setminus } F\) that contains \(\lceil g/2\rceil \) non-fixed variables, i.e., \(|\hat{H}|=\left\lceil {g}/{2} \right\rceil \). Then, the number of the non-fixed variables in Problem LP\( (Y_{*},F_{1},K,\mathbf {l}_\mathbf{1},\mathbf {u}_\mathbf{1})\) considered in Step 3 satisfies \(|Y_{*}\cap \hat{H}|\le \left\lceil {g}/{2}\right\rceil \).

This lemma implies that the overall depth of recursion of Procedure Decomp applied to Problem LP\((N,\emptyset ,\emptyset ,\underline{{\mathbf {p}}} ,\overline{{\mathbf {p}}})\) is \(O(\log n)\).
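The halving argument behind this bound can be checked numerically: the recursion depth satisfies \(T(g)=1+T(\lceil g/2\rceil )\) with \(T(1)=0\), which gives \(\lceil \log _{2}n\rceil \) levels (a toy illustration; names are ours):

```python
# Recursion-depth recurrence implied by Lemma 3:
# with g non-fixed variables, each subproblem has at most ceil(g/2) of them.
import math

def depth(g):
    return 0 if g <= 1 else 1 + depth(math.ceil(g / 2))

print(depth(10**6))  # 20, i.e., ceil(log2(10^6))
```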

**Theorem 4**

Problem (LP) can be solved by Procedure Decomp in \( O((T_{Y_{*}}(n)+T_{\mathrm{Split}}(n))\log n)\) time.

In the forthcoming discussion of three scheduling applications of the results of this section, we pay special attention to designing fast algorithms that find the required set \(Y_{*}\) at all levels of the recursive Procedure Decomp. We develop fast algorithms that compute the value \(\tilde{\varphi }(\hat{H})\) and find a set \(Y_{*}\) in accordance with its definition; see Sect. 5.

### 4.4 Comparison with decomposition algorithm for maximizing a concave separable function

In this subsection, we refer to our decomposition algorithm for Problem (LP) defined over a submodular polyhedron intersected with a box as Algorithm SSS-Decomp. Below, we compare that algorithm with a known decomposition algorithm that is applicable for maximizing a separable concave function over a submodular polyhedron; see [3], [4, Sect. 8.2] and [6].

The decomposition algorithm for Problem (SCFM) was first proposed by Fujishige [3] for the special case where each \(f_{j}\) is quadratic and \(\varphi \) is a polymatroid rank function. Groenevelt [6] then generalized the decomposition algorithm for the case where each \(f_{j}\) is a general concave function and \(\varphi \) is a polymatroid rank function. Later, it was pointed out by Fujishige [4, Sect. 8.2] that the decomposition algorithm in [6] can be further generalized to the case where \(\varphi \) is a general submodular function. We refer to that algorithm as Algorithm FG-Decomp.

Notice that for the set \(Y_{*}\) chosen in Step 3, there exists some optimal solution \(\mathbf {p}^{*}\) of Problem (SCFM) such that \(\varphi (Y_{*})=p^{*}(Y_{*})\); see [4, Sect. 8.2], [6].

For Problem (LP), Algorithm FG-Decomp is quite similar to Algorithm SSS-Decomp. Indeed, both algorithms recursively find a set \( Y_{*}\) and decompose a problem into two subproblems by using restriction to \(Y_{*}\) and contraction by \(Y_{*}\).

The difference between the two decomposition algorithms lies in the rule for selecting the set \(Y_{*}\). In fact, a numerical example can be provided that demonstrates that for the same instance of Problem (LP) the two decomposition algorithms may find different sets \(Y_{*}\) in the same iteration.

In addition, Algorithm SSS-Decomp fixes some variables in the subproblems so that the number of non-fixed variables in each subproblem is at most half of the number of non-fixed variables in the original problem; this is an important feature of our algorithm which is not enjoyed by Algorithm FG-Decomp. This difference affects the efficiency of the two decomposition algorithms; indeed, for Problem (LP) the height of the decomposition tree can be \(\varTheta (n)\) if Algorithm FG-Decomp is used, while it is \(O(\log n)\) in our Algorithm SSS-Decomp.

Thus, despite a certain similarity between the two decomposition algorithms, our algorithm cannot be seen as a straightforward adaptation of Algorithm FG-Decomp, designed for solving problems of non-linear optimization with submodular constraints, to a less general problem of linear programming.

On the other hand, assume that the feasible region of Problem (SCFM) is additionally restricted by imposing box constraints similar to those used in Problem (LP). Theorem 1 can be used to reduce the resulting problem to Problem (SCFM) whose feasible region is a base polyhedron with a modified rank function. Although the obtained problem can be solved by Algorithm FG-Decomp, this approach is computationally inefficient, since it requires multiple calls to a procedure for minimizing a submodular function. It is more efficient not to rely on Theorem 1, but to handle the additional box constraints by adapting the objective function, similarly to (27), and then to use Algorithm FG-Decomp.

## 5 Application to parallel machine scheduling problems

In this section, we show how the decomposition algorithm based on Procedure Decomp can be adapted to solving problems with parallel machines efficiently. Before considering the implementation details that are specific to each scheduling problem under consideration, we start with a discussion of the matters common to all three problems.

Recall that each scheduling problem we study in this paper can be formulated as Problem (LP) of the form (4) with an appropriate rank function. Thus, each of these problems can be solved by the decomposition algorithm described in Sect. 4.2 applied to Problem LP\((N,\emptyset ,\emptyset ,\mathbf {l},\mathbf {u})\), where \(\mathbf {l}=\underline{{\mathbf {p}}}\) and \(\mathbf {u}=\overline{{\mathbf {p}}}\). Before the algorithm is applied, the following preprocessing is performed:

- 1.
If required, the jobs are numbered in non-decreasing order of their release dates in accordance with (9).

- 2.
If required, the machines are numbered in non-increasing order of their speeds in accordance with (1), and the partial sums \( S_{v}\) are computed for all \(v,\,0\le v\le m\), by (10).

- 3.
The lists \(\left( l(j)\mid j\in N\right) \) and \(\left( u(j)\mid j\in N\right) \) are formed and their elements are sorted in non-decreasing order.
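The preprocessing steps above can be sketched as follows (a hypothetical sketch; the input encoding and function name are our assumptions, not the paper's notation):

```python
from itertools import accumulate

def preprocess(release, speeds, lower, upper):
    """Sort jobs in non-decreasing order of release dates, machines in
    non-increasing order of speeds, compute the partial sums S_v of the
    v fastest speeds, and build sorted lists of the lower and upper
    bounds on the processing times.  Runs in O(n log n + m log m)."""
    job_order = sorted(range(len(release)), key=lambda j: release[j])
    speeds_sorted = sorted(speeds, reverse=True)      # s_1 >= s_2 >= ...
    S = [0] + list(accumulate(speeds_sorted))         # S_v = s_1 + ... + s_v
    l_sorted = sorted(lower)                          # list (l(j) | j in N)
    u_sorted = sorted(upper)                          # list (u(j) | j in N)
    return job_order, speeds_sorted, S, l_sorted, u_sorted
```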

To adapt the generic Procedure Decomp to solving a particular scheduling problem, we only need to provide the implementation details for Procedure Decomp\((H,F,K,{\mathbf {l}},\mathbf {u})\) that emerges at a certain level of recursion. To be precise, we need to explain how to compute for each particular problem the function \(\widetilde{\varphi }_{K}^{H}(X)\) for a chosen set \(X\in 2^{H}\) and how to find for a current heavy-element set an instrumental set \(Y_{*}\) defined by (22), which determines the pair of problems into which the current problem is decomposed.

### 5.1 Uniform machines, equal release dates

In this subsection, we show that problem \(Q|p(j)=\overline{p} (j)-x(j),C(j)\le d,pmtn|W\) can be solved in \(O(n\log n)\) time by the decomposition algorithm. To achieve this, we consider Problem LP\((H,F,K, \mathbf {l},\mathbf {u})\) that arises at some level of recursion of Procedure Decomp and present a procedure for computing the function \( \widetilde{\varphi }_{K}^{H}:2^{H}\rightarrow \mathbb {R}\) given by (22). We show that for an arbitrary set \(X\subseteq H\) the value \( \widetilde{\varphi }_{K}^{H}(X)\) can be computed in \(O(h)\) time. For a heavy-element set \(\hat{H}\subseteq H{\setminus } F\), finding a set \(Y_{*}\) that is instrumental for set \(\hat{H}\) also requires \(O(h)\) time.

Let us analyze the time complexity of Procedure CompQr0. In Step 2, the values \(\lambda _{1},\lambda _{2},\ldots ,\lambda _{\hat{h}}\) can be found in \(O(h)\) time by using the list \((\lambda (j)\mid j\in H)\), so that the value \(\varPhi ^{\prime }\) and set \(Y^{\prime }\) can be computed in \(O(h)\) time. It is easy to see that \(\varPhi ^{\prime \prime }\) and \(Y^{\prime \prime } \) can be obtained in \(O(h)\) time as well. Hence, the value \(\widetilde{ \varphi }_{K}^{H}(X)\) and set \(Y_{*}\) can be found in \(O(h)\) time.

**Theorem 5**

Problem \(Q|p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W\) can be solved either in \(O(n\log n)\) time or in \(O(n+m\log m\log n)\) time.

*Proof*

Here, we only present the proof of the running time \(O(n\log n)\), which is obtained if at each level of recursion of Procedure Decomp we use Procedure CompQr0; the proof of the running time \(O(n+m\log m\log n)\) is given in the “Appendix”.

As proved above, Procedure CompQr0 applied to Problem LP\((H,F,K,\mathbf {l},\mathbf {u})\) takes \(O(h)\) time. In terms of Theorem 4 on the running time of Procedure Decomp, this implies that \(T_{Y_{*}}(h)=O(h)\).

In the analysis of the time complexity of Procedure CompQr0, we assume that certain information is given as part of the input. This assumption can be satisfied by an appropriate preprocessing. In particular, when we decompose a problem with a set of jobs \(H\) at a certain level of recursion into two subproblems, we may create the sorted lists \((u(j)\mid j\in H)\) and \((l(j)\mid j\in H)\). This can be done in \(O(h)\) time, since the sorted lists \((u(j)\mid j\in N)\) and \((l(j)\mid j\in N)\) are available as a result of the initial preprocessing. Thus, we have \(T_{\mathrm{Split}}(h)=O(h)\). Hence, the theorem follows from Theorem 4. \(\square \)
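In summary, the argument behind the \(O(n\log n)\) bound can be sketched as the following recurrence (our paraphrase of how Theorem 4 is applied, with \(h_{1},h_{2}\) denoting the sizes of the two subproblems):

```latex
T(h) \;\le\; T(h_{1}) + T(h_{2})
      + \underbrace{T_{Y_{*}}(h) + T_{\mathrm{Split}}(h)}_{O(h)},
\qquad h_{1} + h_{2} \le h .
```

Since the number of non-fixed variables at least halves from a problem to each of its subproblems, the decomposition tree has height \(O(\log n)\); the subproblem sizes at each level of the tree sum to at most \(n\), so the work per level is \(O(n)\) and the total is \(O(n\log n)\).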

### 5.2 Identical machines, different release dates

In this subsection, we show that problem \(P|r(j),p(j)=\overline{p} (j)-x(j),C(j)\le d,pmtn|W\) can be solved in \(O(n\log m\log n)\) time by the decomposition algorithm. To achieve this, we consider Problem LP\((H,F,K, \mathbf {l},\mathbf {u})\) that arises at some level of recursion of Procedure Decomp and present a procedure for computing the function \( \widetilde{\varphi }_{K}^{H}:2^{H}\rightarrow \mathbb {R}\) given by (22). We show that for an arbitrary set \(X\subseteq H\) the value \( \widetilde{\varphi }_{K}^{H}(X)\) can be computed in \(O(h\log m)\) time. For a heavy-element set \(\hat{H}\subseteq H{\setminus } F\), finding a set \(Y_{*}\) that is instrumental for set \(\hat{H}\) also requires \(O(h\log m)\) time.

For a set \(X\), let \(r_{i}(X)\) denote the \(i\)-th smallest release date among the jobs of set \(X\).

The following lemma is useful for computing the value \(\varPhi ^{\prime \prime }\) efficiently.

**Lemma 4**

*Proof*

First, notice that set \(Y^{\prime \prime }\cup K\) contains at least \(\hat{h}+1+k\ge m\) jobs, so that job \(t_{*}\) exists and \(m\le t_{*}\le h+k\). Notice that job \(t_{*}\) might belong to set \(H{\setminus } Y^{\prime \prime }\), and that job \(t_{*}\) is not necessarily unique. Indeed, if, e.g., job \(t_{*}+1\in H{\setminus } Y^{\prime \prime }\), then \(\{j\in Y^{\prime \prime }\cup K\mid j\le t_{*}\}=\{j\in Y^{\prime \prime }\cup K\mid j\le t_{*}+1\}\).

We need to show that there exists a \(t_{*}\) satisfying \(t_{*}\le \bar{t}\). To prove this, we only need to consider the case \(k\ge m\), since otherwise \(\bar{t}=h+k\) by definition. For \(k\ge m\), let \(t_{*}\) be the smallest value of \(t\) for which the equality \(|\{j\in Y^{\prime \prime }\cup K\mid j\le t\}|=m\) holds. Since \(|\{j\in K\mid j\le t_{*}\}|\le m\), we have \(t_{*}\le \bar{t}\) by the definition of \(\bar{t}\).

Since \(\lambda (j)\ge 0\) for \(j\in H\), we should include all jobs \(j\in H\) with \(j>t_{*}\) into set \(Y_{2}^{\prime \prime }\) to achieve the maximum in (45), i.e., property (iii) holds. \(\square \)

**Lemma 5**

- (i)Given the values \(\rho [t-1]\) and \(\eta _{2}[t-1],\, \rho [t]\) and \(\eta _{2}[t]\) can be obtained as$$\begin{aligned} \rho [t]=\left\{ \begin{array}{l@{\quad }l} \rho [t-1], &{}\mathrm{if}\ \,t\in H, \\ \rho [t-1]+r(t), &{}\mathrm{if}\ \,t\in K, \end{array}\right. \quad \eta _{2}[t]=\left\{ \begin{array}{l@{\quad }l} \eta _{2}[t-1]-\lambda (t), &{}\mathrm{if}\ \,t\in H, \\ \eta _{2}[t-1], &{}\mathrm{if}\ \,t\in K. \end{array}\right. \end{aligned}$$(50)
- (ii)Given a set \(Q\in \mathcal {H}^{m}[t-1]\) with \(\eta _{1}[t-1]= \widetilde{\lambda }(Q)\), the value \(\eta _{1}[t-1]\) and job \(z\in Q\) such that \(\widetilde{\lambda }(z)=\min _{j\in Q}\widetilde{\lambda }(j)\), the value \(\eta _{1}[t]\) can be obtained as$$\begin{aligned} \eta _{1}[t]=\left\{ \begin{array}{l@{\quad }l} \eta _{1}[t-1], &{}\displaystyle \mathrm{if}\ \,t\in H,\ \widetilde{\lambda } (z)\ge \widetilde{\lambda }(t), \\ \displaystyle \eta _{1}[t-1]-\widetilde{\lambda }(z)+\widetilde{\lambda }(t), &{}\displaystyle \mathrm{if}\ \,t\in H,\ \widetilde{\lambda }(z)<\widetilde{ \lambda }(t), \\ \displaystyle \eta _{1}[t-1]-\widetilde{\lambda }(z), &{}\mathrm{if}\ \,t\in K. \end{array}\right. \end{aligned}$$(51)

*Proof*

We have \(K[t]=K[t-1]\) if \(t\in H\) and \(K[t]=K[t-1]\cup \{t\}\) if \(t\in K\). Hence, the first equation in (50) follows. The second equation in (50) is immediate from the definition of \(\eta _{2}\). Equation (51) follows from the observation that \(\eta _{1}[t]\) is equal to the sum of the \(m-|K[t]|\) largest numbers in the list \(\left( \widetilde{\lambda }(j)\mid j\in H,\ j\le t\right) \). \(\square \)
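As an illustration, the incremental computation of \(\eta _{1}[t]\) described by (51) can be maintained with a min-heap of the currently selected \(\widetilde{\lambda }\)-values (a hypothetical sketch; the input encoding and function name are our assumptions, and we assume \(m-|K[t]|\) stays non-negative, as in the text):

```python
import heapq

def eta1_sequence(events, m):
    """events is a list of ('H', lam) or ('K', None) entries in index
    order.  Returns eta_1[t] for t = 1..len(events), where eta_1[t] is
    the sum of the m - |K[t]| largest lambda-values among the H-entries
    with index <= t.  Each step costs O(log m), matching the
    O((h + k) log m) bound."""
    heap, total, budget, out = [], 0, m, []
    for kind, lam in events:
        if kind == 'H':
            if len(heap) < budget:                 # room left: insert t
                heapq.heappush(heap, lam)
                total += lam
            elif budget > 0 and lam > heap[0]:     # replace the minimum z
                total += lam - heap[0]
                heapq.heapreplace(heap, lam)
        else:                                      # t in K: budget shrinks
            budget -= 1
            if heap and len(heap) > budget:        # drop the minimum z
                total -= heapq.heappop(heap)
        out.append(total)
    return out
```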

Now we analyze the running time of this procedure. In Steps 1 and 2 we compute the value \(\varPhi ^{\prime }\) and find set \(Y^{\prime }\). Step 1 can be done in constant time. Step 2-1 can be done by selecting the \(\hat{h}\) largest numbers in the list \((\widetilde{\lambda }(j)\mid j\in H)\) in \(O(h)\) time and then sorting them in \(O(\hat{h}\log \hat{h})\) time. Since Step 2-2 can be done in \(O(k+\hat{h})\) time, Step 2 requires \(O(k+h+\hat{h}\log \hat{h})=O(k+h\log \hat{h})=O(k+h\log m)\) time in total.

In Steps 3 and 4 we compute the value \(\varPhi ^{\prime \prime }\) and find set \(Y^{\prime \prime }\). Step 3 can also be done in constant time. We assume that both \(\left( r(j)\mid j\in H\right) \) and \(\left( r(j)\mid j\in K\right) \) are given as sorted lists; this can easily be ensured by appropriate preprocessing. Then, Step 4-1 can be done in \(O(h+k)\) time by merging the two sorted lists. Step 4-2 can be done in \(O(h+k)\) time. In Step 4-3, we implement \(Q\) as a heap for computational efficiency. Initially \(Q=Q_{m}\) consists of at most \(m\) elements, and initializing the heap \(Q\) takes \(O(h+m\log m)\) time. The number of elements in the heap does not increase, so that each iteration in Step 4-3 can be done in \(O(\log m)\) time, which implies that Step 4-3 requires \(O((h+k)\log m)\) time. Step 4-4 can be done in \(O(h+k)\) time. Step 4-5 is needed for finding the set \(Y^{\prime \prime }\) and is implemented as a partial rerun of Step 4-3 in \(O((h+k)\log m)\) time.
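The linear-time merging of two sorted lists used in Step 4-1 can be sketched as follows (an illustrative sketch, not the paper's code; the function name is ours):

```python
def merge_sorted(a, b):
    """Merge two sorted lists in O(len(a) + len(b)) time, as used for
    combining the sorted release-date lists of H and K."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]      # append whichever tail remains
```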

Finally, we compute the value \(\widetilde{\varphi }_{K}^{H}(X)\) in Step 5. We may assume that the value \(u(X)\) in Step 5 is given in advance. The value \(\sum _{i=1}^{\min \{m,k\}}r_{i}(K)\) can be computed in \(O(k)\) time, since a sorted list \(\left( r(j)\mid j\in K\right) \) is available. Hence, Step 5 can be done in \(O(k)\) time. In total, Procedure CompPrj requires \(O((h+k)\log m)\) time. In particular, the procedure runs in \(O(h\log m)\) time if \(h\ge k\).

In the rest of this subsection, we show that a slightly modified version of Procedure CompPrj can also be run in \(O(h\log m)\) time for \(h<k\).

We are now ready to prove the main statement regarding problem \(P|r(j),\,p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W\).

**Theorem 6**

Problem \(P|r(j),p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W\) can be solved in \(O(n\log m\log n)\) time.

*Proof*

As proved above, Procedure CompPrj applied to Problem LP\((H,F,K,\mathbf {l},\mathbf {u})\) takes \(O(h\log m)\) time. In terms of Theorem 4 on the running time of Procedure Decomp, we have proved that \(T_{Y_{*}}(h)=O(h\log m)\).

In the analysis of the time complexity of Procedure CompPrj, we assume that certain information is given as part of the input. This assumption can be satisfied by an appropriate preprocessing, when we decompose a problem at a certain level of recursion into two subproblems, based on the found set \( Y_{*}\). It is not hard to see that this can be done in \(O(h\log m)\) time, i.e., we have \(T_{\mathrm{Split}}(h)=O(h\log m)\). Hence, the theorem follows from Theorem 4. \(\square \)

### 5.3 Uniform machines, different release dates

In this subsection, we show that problem \(Q|r(j),p(j)=\overline{p} (j)-x(j),C(j)\le d,pmtn|W\) can be solved in \(O(nm\log n)\) time by the decomposition algorithm. To achieve this, we consider Problem LP\((H,F,K, \mathbf {l},\mathbf {u})\) that arises at some level of recursion of Procedure Decomp and present a procedure for computing the function \( \widetilde{\varphi }_{K}^{H}:2^{H}\rightarrow \mathbb {R}\) given by (22). We show that for an arbitrary set \(X\subseteq H\) the value \( \widetilde{\varphi }_{K}^{H}(X)\) can be computed in \(O(hm)\) time. For a heavy-element set \(\hat{H}\subseteq H{\setminus } F\), finding a set \(Y_{*}\) that is instrumental for set \(\hat{H}\) also requires \(O(hm)\) time.

Applying (60) for \(t,\,1\le t\le h+k\), and \(v,\,1\le v\le m-k\) with the initial condition (58), we may find all values \(\xi _{v}[t]\) needed for computing \(\varPhi ^{\prime }\) by (57).

The most time-consuming parts of the procedure are the double loops in Steps 6 and 10, which require \(O\left( \hat{h}\left( h+k\right) \right) \) time and \(O(m(h+k))\) time, respectively. Thus, the overall time complexity of Procedure CompQrj is \(O(m(h+k))\).

For \(h\ge k\), the time complexity becomes \(O(mh)\). We can show that the bound \(O(mh)\) also applies to the case that \(h<k\); this can be done by an approach similar to that used in Sect. 5.2. Hence, the next theorem follows from Theorem 4.

**Theorem 7**

Problem \(Q|r(j),p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W\) can be solved in \(O(nm\log n)\) time.

## 6 Conclusions

In this paper, we develop a recursive decomposition algorithm for maximizing a linear function over a submodular polyhedron intersected with a box. We illustrate the power of our approach by adapting the algorithm to solving three scheduling problems with controllable processing times. In these problems, it is required to find a preemptive schedule that is feasible with respect to a given deadline and minimizes total compression cost. The resulting algorithms run faster than those previously known.

We intend to extend this approach to other scheduling models with controllable processing times, e.g., to a single machine with distinct release dates and deadlines. It will be interesting to identify problems, including those outside the area of scheduling, for which an adaptation of our approach is beneficial.

Although throughout the paper we assume that the processing times are real numbers from intervals \(\left[ \underline{p}(j),\overline{p}(j)\right] \), the formulated approach is applicable to the case where the processing times may only take integer values in the interval. Indeed, if all the input numbers, except for costs \(w(j)\), are given by integers, then the submodular rank function takes integer values, and the optimal solution \(p(j),\,j\in N\), found by Procedure Decomp is integral.
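The integrality observation above can be illustrated by the classical greedy algorithm over a polymatroid (an illustrative sketch, not the paper's Procedure Decomp): each value assigned is a difference of two rank values, hence integral whenever the rank function is integer-valued.

```python
def greedy_polymatroid(weights, rank):
    """Maximize sum w(j) p(j) over the polymatroid of `rank` for
    non-negative weights: process elements in non-increasing weight
    order and set p(j) to the marginal rank value."""
    order = sorted(range(len(weights)), key=lambda j: -weights[j])
    p, prefix = [0] * len(weights), []
    for j in order:
        before = rank(frozenset(prefix))
        prefix.append(j)
        p[j] = rank(frozenset(prefix)) - before   # integral if rank is
    return p

# Example with the integer-valued submodular rank(X) = min(2|X|, 3)
# and weights (4, 1, 3): the greedy solution is p = (2, 0, 1).
```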

## Notes

### Acknowledgments

This research was supported by the EPSRC funded project EP/J019755/1 “Submodular Optimisation Techniques for Scheduling with Controllable Parameters”. The first author was partially supported by the Humboldt Research Fellowship of the Alexander von Humboldt Foundation and by Grant-in-Aid of the Ministry of Education, Culture, Sports, Science and Technology of Japan, grants 24500002 and 25106503.

### References

- 1. Brucker, P.: Scheduling Algorithms, 5th edn. Springer, Berlin (2007)
- 2. Chen, Y.L.: Scheduling jobs to minimize total cost. Eur. J. Oper. Res. **74**, 111–119 (1994)
- 3. Fujishige, S.: Lexicographically optimal base of a polymatroid with respect to a weight factor. Math. Oper. Res. **5**, 186–196 (1980)
- 4. Fujishige, S.: Submodular Functions and Optimization. Annals of Discrete Mathematics, vol. 58, 2nd edn. Elsevier, Amsterdam (2005)
- 5. Gonzalez, T.F., Sahni, S.: Preemptive scheduling of uniform processor systems. J. ACM **25**, 92–101 (1978)
- 6. Groenevelt, H.: Two algorithms for maximizing a separable concave function over a polymatroid feasible region. Eur. J. Oper. Res. **54**, 227–236 (1991)
- 7. Iwata, S., Fleischer, L., Fujishige, S.: A combinatorial, strongly polynomial-time algorithm for minimizing submodular functions. J. ACM **48**, 761–777 (2001)
- 8. Janiak, A., Kovalyov, M.Y.: Single machine scheduling with deadlines and resource dependent processing times. Eur. J. Oper. Res. **94**, 284–291 (1996)
- 9. Jansen, K., Mastrolilli, M.: Approximation schemes for parallel machine scheduling problems with controllable processing times. Comput. Oper. Res. **31**, 1565–1581 (2004)
- 10. Katoh, N., Ibaraki, T.: Resource allocation problems. In: Du, D.-Z., Pardalos, P.M. (eds.) Handbook of Combinatorial Optimization, vol. 2, pp. 159–260. Kluwer, Dordrecht (1998)
- 11. Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.H.G., Shmoys, D.B.: Sequencing and scheduling: algorithms and complexity. In: Graves, S.C., Rinnooy Kan, A.H.G., Zipkin, P.H. (eds.) Handbooks in Operations Research and Management Science. Logistics of Production and Inventory, vol. 4, pp. 445–522. Elsevier, Amsterdam (1993)
- 12. Leung, J.Y.-T.: Minimizing total weighted error for imprecise computation tasks. In: Leung, J.Y.-T. (ed.) Handbook of Scheduling: Algorithms, Models and Performance Analysis, pp. 34-1–34-16. Chapman & Hall/CRC, London (2004)
- 13. McCormick, S.T.: Fast algorithms for parametric scheduling come from extensions to parametric maximum flow. Oper. Res. **47**, 744–756 (1999)
- 14. McNaughton, R.: Scheduling with deadlines and loss functions. Manag. Sci. **12**, 1–12 (1959)
- 15. Nemhauser, G.L., Wolsey, L.A.: Integer and Combinatorial Optimization. Wiley, New York (1988)
- 16. Nowicki, E., Zdrzałka, S.: A survey of results for sequencing problems with controllable processing times. Discrete Appl. Math. **26**, 271–287 (1990)
- 17. Nowicki, E., Zdrzałka, S.: A bicriterion approach to preemptive scheduling of parallel machines with controllable job processing times. Discrete Appl. Math. **63**, 237–256 (1995)
- 18. Sahni, S.: Preemptive scheduling with due dates. Oper. Res. **27**, 925–934 (1979)
- 19. Sahni, S., Cho, Y.: Scheduling independent tasks with due times on a uniform processor system. J. ACM **27**, 550–563 (1980)
- 20. Schrijver, A.: A combinatorial algorithm minimizing submodular functions in strongly polynomial time. J. Comb. Theory B **80**, 346–355 (2000)
- 21. Schrijver, A.: Combinatorial Optimization: Polyhedra and Efficiency. Springer, Berlin (2003)
- 22. Shabtay, D., Steiner, G.: A survey of scheduling with controllable processing times. Discrete Appl. Math. **155**, 1643–1666 (2007)
- 23. Shakhlevich, N.V., Strusevich, V.A.: Pre-emptive scheduling problems with controllable processing times. J. Sched. **8**, 233–253 (2005)
- 24. Shakhlevich, N.V., Strusevich, V.A.: Preemptive scheduling on uniform parallel machines with controllable job processing times. Algorithmica **51**, 451–473 (2008)
- 25. Shakhlevich, N.V., Shioura, A., Strusevich, V.A.: Single machine scheduling with controllable processing times by submodular optimization. Int. J. Found. Comput. Sci. **20**, 247–269 (2009)
- 26. Shakhlevich, N.V., Shioura, A., Strusevich, V.A.: Fast divide-and-conquer algorithms for preemptive scheduling problems with controllable processing times—a polymatroidal approach. In: Halperin, D., Mehlhorn, K. (eds.) Algorithms—ESA 2008. Lecture Notes in Computer Science, vol. 5193, pp. 756–767. Springer, Berlin (2008)
- 27. Shioura, A., Shakhlevich, N.V., Strusevich, V.A.: A submodular optimization approach to bicriteria scheduling problems with controllable processing times on parallel machines. SIAM J. Discrete Math. **27**, 186–204 (2013)

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.