How Unsplittable-Flow-Covering Helps Scheduling with Job-Dependent Cost Functions
Abstract
Generalizing many well-known and natural scheduling problems, scheduling with job-specific cost functions has gained a lot of attention recently. In this setting, each job incurs a cost depending on its completion time, given by a private cost function, and one seeks to schedule the jobs to minimize the total sum of these costs. The framework captures many important scheduling objectives such as weighted flow time or weighted tardiness. Still, the general case as well as the mentioned special cases are far from being very well understood yet, even for only one machine. Aiming for a better general understanding of this problem, in this paper we focus on the case of uniform job release dates on one machine, for which the state of the art is a 4-approximation algorithm. This is true even for a special case that is equivalent to the covering version of the well-studied and prominent unsplittable flow on a path problem, which is interesting in its own right. For that covering problem, we present a quasi-polynomial time \((1+\varepsilon )\)-approximation algorithm that yields an \((e+\varepsilon )\)-approximation for the above scheduling problem. Moreover, for the latter we devise the best possible resource augmentation result regarding speed: a polynomial time algorithm which computes a solution with optimal cost at \(1+\varepsilon \) speedup. Finally, we present an elegant QPTAS for the special case where the cost functions of the jobs fall into at most \(\log n\) many classes. This algorithm allows the jobs even to have up to \(\log n\) many distinct release dates. All proposed quasi-polynomial time algorithms require the input data to be quasi-polynomially bounded.
Keywords
Approximation algorithms · Scheduling · Job-dependent cost functions · Unsplittable flow
Mathematics Subject Classification
Approximation algorithms (68W25)
1 Introduction
In scheduling, a natural way to evaluate the quality of a solution is to assign a cost to each job which depends on its completion time. The goal is then to minimize the sum of these costs. The function describing this dependence may be completely different for each job.
There are many well-studied and important scheduling objectives which can be cast in this framework. Some of them are already very well understood, for instance the weighted sum of completion times \(\sum _{j}w_{j}C_{j}\), for which there are polynomial time approximation schemes (PTASs) [1], even for multiple machines and very general machine models. On the other hand, for natural and important objectives such as weighted flow time or weighted tardiness, not even a constant factor polynomial time approximation algorithm is known, even on a single machine. In a recent breakthrough result, Bansal and Pruhs presented an \(O(\log \log P)\)-approximation algorithm [6] for the single machine case where every job has its private cost function, denoting by P the range of the processing times. Formally, they study the General Scheduling Problem (GSP) where the input consists of a set of jobs J where each job \(j\in J\) is specified by a processing time \(p_{j}\), a release date \(r_{j}\), and a non-decreasing cost function \(f_{j}\). The goal is to compute a preemptive schedule on one machine which minimizes \(\sum _{j}f_{j}(C_{j})\) where \(C_{j}\) denotes the completion time of job j in the schedule. Interestingly, even though this problem is very general, subsuming all the objectives listed above, the best known complexity result for it is only strong \(\mathsf {NP}\)-hardness. Thus, there might even be a polynomial time \((1+\varepsilon )\)-approximation.
Aiming to better understand GSP, in this paper we investigate the special case in which all jobs are released at time 0. This version is still strongly \(\mathsf {NP}\)-hard [19], even in the restricted case where the individual cost functions are scaled versions of an underlying common function [17]. The currently best known approximation algorithm for GSP without release dates is a \((4+\varepsilon )\)-approximation algorithm [16]. As observed by Bansal and Verschae [7], this problem is a generalization of the covering version of the well-studied Unsplittable Flow on a Path problem (UFP) [2, 3, 5, 9, 12, 15], which we refer to as the UFP cover problem. The input of this problem consists of a path, each edge e having a demand \(u_{e}\), and a set of tasks T. Each task i is specified by a start vertex \(s_{i}\) and an end vertex \(t_{i}\) on the path, defining a subpath \(P_i\), a size \(p_{i}\), and a cost \(c_{i}\). In the UFP cover problem, the goal is to select a subset of the tasks \(T'\subseteq T\) which covers the demand profile, i.e., \(\sum _{i\in T'\cap T_{e}}p_{i}\ge u_{e}\) for each edge e, where \(T_{e}\) denotes the set of all tasks \(i\in T\) such that \(e \in P_i\). The objective is to minimize the total cost \(\sum _{i\in T'}c_{i}\).
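To make the problem definition concrete, the covering condition and the objective can be sketched in a few lines of Python. This is only an illustration of the definitions above; all identifiers are our own, and a task is encoded as a tuple \((s_i, t_i, p_i, c_i)\) of first edge, last edge, size, and cost.

```python
def is_feasible_cover(selected, tasks, demands):
    """selected: set of task ids; tasks: id -> (s_i, t_i, p_i, c_i), where
    s_i and t_i are the first and last edge of the subpath P_i, p_i the
    size, c_i the cost; demands: list of u_e indexed by edge."""
    for e, u_e in enumerate(demands):
        # aggregate size of selected tasks whose subpath contains edge e
        coverage = sum(tasks[i][2] for i in selected
                       if tasks[i][0] <= e <= tasks[i][1])
        if coverage < u_e:
            return False
    return True

def cover_cost(selected, tasks):
    return sum(tasks[i][3] for i in selected)

# a toy instance: three tasks on a path with four edges
tasks = {0: (0, 2, 3, 5), 1: (1, 3, 2, 2), 2: (2, 2, 4, 1)}
demands = [3, 5, 6, 2]
```

For this toy instance, the full task set is a feasible cover, while dropping task 2 leaves edge 2 uncovered.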
The UFP cover problem is a generalization of the knapsack cover problem [10] and corresponds to instances of GSP without release dates where the cost function of each job attains only the values 0, some job-dependent value \(c_i\), and \(\infty \). The UFP cover problem has applications to resource allocation settings such as workforce and energy management, making it an interesting problem in its own right. For example, one can think of the tasks as representing time intervals when employees are available, and one aims at providing a certain service level that changes over the day. The best known approximation algorithm for UFP cover is a 4-approximation [8, 11]. This essentially matches the best known result for GSP without release dates.
1.1 Our Contribution
In this paper we present several new approximation results for GSP without release dates and some of its special cases. Since these results are based on approximations for the UFP cover problem, we state these auxiliary related results first.
First, we give a \((1+\varepsilon )\)-approximation algorithm for the UFP cover problem with quasi-polynomial running time. Our algorithm uses some ideas from the QPTAS for UFP (packing) of Bansal et al. [3]. In UFP (packing), each edge has a capacity value analogous to the demand value in UFP cover; the goal is to select a maximum profit subset of tasks such that on every edge the aggregate size of the tasks using that edge does not exceed its capacity. The high-level idea behind the QPTAS of Bansal et al. [3] is to start with an edge in the middle of the path and to consider the tasks using it. One divides these tasks into groups, all tasks in a group having roughly the same size and cost. For each group, one guesses an approximation of the capacity profile used by an optimal solution using the tasks in that group. In UFP (packing), one can show that by slightly underestimating the true profile one still obtains almost the same profit as the optimum.
A natural adaptation of this idea to the UFP cover problem would be to guess an approximate coverage profile that overestimates the profile covered by an optimal solution. Unfortunately, it might happen that the tasks in a group do not suffice to cover certain approximate profiles. When considering only a polynomial number of approximate profiles, this could lead to a situation where the coverable approximate profiles are much more expensive to cover than the optimal solution.
We remedy this problem in a maybe counterintuitive fashion. Instead of guessing an approximate upper bound of the true profile, we first guess a lower bound of it. Then we select tasks that cover this lower bound, and finally add a small number of “maximally long” additional tasks. Using this procedure, we cannot guarantee by how much our selected tasks exceed the guessed profile on each edge. However, we can guarantee that for the correctly guessed profile, we cover at least as much as the optimum and pay only slightly more. Together with the recursive framework from [3], we obtain a QPTAS. As an application, we use this algorithm to get a quasi-polynomial time \((e +\varepsilon )\)-approximation algorithm for GSP with uniform release dates, improving on the approximation ratio of the best known polynomial time 4-approximation algorithm [16]. This algorithm, as well as the QPTAS mentioned below, requires the input data to be quasi-polynomially bounded.
In addition, we consider a different way to relax the problem. Rather than sacrificing a \(1+\varepsilon \) factor in the objective value, we present a polynomial time algorithm that computes a solution with optimal cost but requires a speedup of \(1+\varepsilon \). Such a result can be easily obtained for job-independent, scalable cost functions, i.e., functions f for which there exists a function \(\phi \) satisfying \(f(c\cdot t)=\phi (c)\cdot f(t)\) for any \(c,t\ge 0\). In this case, the result is immediately implied by the PTAS in [21] and the observation that for scalable cost functions s-speed c-approximate algorithms translate into \((s\cdot c)\)-speed optimal ones. In our case, however, the cost functions of the jobs can be much more complicated and, even worse, they can be different for each job. Our algorithm first imposes some simplifications on the solutions under consideration, at the cost of a \((1+\varepsilon )\)-speedup. Then, we use a technique recently introduced by Sviridenko and Wiese [23]. They first guess a set of discrete intervals representing slots for large jobs and the placement of large jobs into these intervals. Then they use a linear program to simultaneously assign large jobs into these slots and small jobs into the remaining idle times. Like in the latter paper, for the case that the processing times of the jobs are not polynomially bounded, we employ a technically involved dynamic program which moves on the time axis from left to right and considers groups of \(O(\log n)\) intervals at a time.
An interesting open question is to design a (Q)PTAS for GSP without release dates. As a first step towards this goal, recently Megow and Verschae [21] presented a PTAS for minimizing the objective function \(\sum _{j}w_{j}g(C_{j})\) where each job j has a private weight \(w_{j}\) but the function g is identical for all jobs. In Sect. 4 we present a QPTAS for a generalization of this setting. Instead of only one function g for all jobs, we allow up to \((\log n)^{O(1)}\) such functions, each job using one of them, and we even allow the jobs to have up to \((\log n)^{O(1)}\) distinct release dates. We note that our algorithm requires the weights of the jobs to be in a quasipolynomial range. Despite the fact that this setting is much more general, our algorithm is very clean and easy to analyze.
1.2 Related Work
As mentioned above, Bansal and Pruhs present an \(O(\log \log P)\)-approximation algorithm for GSP [6]. Even for some well-studied special cases, this is now the best known polynomial time approximation result. For instance, for the important weighted flow time objective, previously the best known approximation factors were \(O(\log ^{2}P)\), \(O(\log W)\) and \(O(\log nP)\) [4, 14], where P and W denote the ranges of the job processing times and weights, respectively. A QPTAS with running time \(n^{O_{\varepsilon }(\log P\log W)}\) is also known [13]. For the objective of minimizing the weighted sum of completion times, PTASs are known, even for an arbitrary number of identical machines and a constant number of unrelated machines [1].
For the case of GSP with identical release dates, Bansal and Pruhs [6] give a 16-approximation algorithm. Later, Cheung et al. [16] gave a pseudo-polynomial primal-dual 4-approximation, which can be adapted to run in polynomial time at the expense of increasing the approximation factor to \((4+\varepsilon )\).
As mentioned above, GSP with uniform release dates generalizes the UFP cover problem; for the latter special case, a 4-approximation algorithm is known [8, 11]. The packing version is very well studied. After a series of papers on the problem and its special cases [5, 9, 12, 15], the currently best known approximation results are a QPTAS [3] and a \((2+\varepsilon )\)-approximation in polynomial time [2].
2 Quasi-PTAS for UFP Cover
In this section, we present a quasi-polynomial time \((1+\varepsilon )\)-approximation algorithm for the UFP cover problem. Subsequently, we show how it can be used to obtain an approximation algorithm with approximation ratio \(e + \varepsilon \approx 2.718 + \varepsilon \) and quasi-polynomial running time for GSP with uniform release dates. Throughout this section, we assume that the sizes of the tasks are quasi-polynomially bounded. Our algorithm follows the structure of the QPTAS for the packing version of the problem (UFP) due to Bansal et al. [3]. First, we describe a recursive exact algorithm with exponential running time. Subsequently, we describe how to turn this routine into an algorithm with only quasi-polynomial running time and an approximation ratio of \(1+\varepsilon \).
To compute the exact solution (in exponential time) one can use the following recursive algorithm: Given the path \(G=(V,E)\), denote by \(e_{M}\) the edge in the middle of G and let \(T_{M}\) denote the set of tasks that use \(e_{M}\), i.e., the set of all tasks i such that \(e_M \in P_i\). Our strategy is to “guess” which tasks in \(T_{M}\) are contained in \({{\mathrm{OPT}}}\), an (unknown) optimal solution.
Throughout this paper, whenever we use the notion of guessing a set of tasks (or some other entity), we mean that we enumerate all possibilities for this set of tasks (or the entity) and continue the algorithm for each enumerated option. One of them will correspond to the respective choice in an optimal solution or in a suitably chosen nearoptimal solution. In order to analyze the resulting algorithm, we can therefore assume that we know the corresponding choice in the mentioned (near)optimal solution. This motivates the notion of guessing. Note that if we enumerate K possibilities for the set of tasks (or the entity) then this increases the running time of the remaining algorithm by a factor of K.
Once we have chosen the tasks from \(T_M\) that we want to include in our solution, the remaining problem splits into the two independent subproblems given by the edges on the left and on the right of \(e_{M}\), respectively, and the tasks whose paths are fully contained in them. Therefore, we enumerate all subsets \(T'_{M}\subseteq T_{M}\). Denote by \(\mathcal {T}_{M}\) the resulting set of sets. For each set \(T'_{M}\in \mathcal {T}_{M}\) we recursively compute the optimal solution for the subpaths \(\{e_{1},\ldots ,e_{M-1}\}\) and \(\{e_{M+1},\ldots ,e_{|E|}\}\), subject to the tasks in \(T'_{M}\) being already chosen and no more tasks from \(T_{M}\) being allowed to be chosen. The leaf subproblems are reached when the path in the recursive call has only one edge. Since \(|E|=O(n)\), this procedure has a recursion depth of \(O(\log n)\), which is helpful when aiming at quasi-polynomial running time. However, since in each recursive step we try each set \(T'_{M}\in \mathcal {T}_{M}\), the running time is exponential (even in a single step of the recursion). To remedy this issue, we will show that there is a set of task sets \(\bar{\mathcal {T}}_{M}\subseteq \mathcal {T}_{M}\) which is of small size and which approximates \(\mathcal {T}_{M}\) well. More precisely, we can compute \(\bar{\mathcal {T}}_{M}\) in quasi-polynomial time (and it thus has only quasi-polynomial size) and there is a set \(T_{M}^{*}\in \bar{\mathcal {T}}_{M}\) such that \(c(T_{M}^{*})\le (1+\varepsilon )\cdot c(T_{M}\cap {{\mathrm{OPT}}})\) and \(T_{M}^{*}\) dominates \(T_{M}\cap {{\mathrm{OPT}}}\). In this context, for any set of tasks T, its cost is denoted by \(c(T):=\sum _{i\in T}\,c_i\). We modify the above procedure such that we recurse on each set in \(\bar{\mathcal {T}}_{M}\) instead of on each set in \(\mathcal {T}_{M}\).
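The exponential-time exact recursion just described can be sketched as follows. This is an illustration under our own naming conventions (a task is a tuple \((s_i, t_i, p_i, c_i)\) of first edge, last edge, size, and cost), not the paper's formal pseudocode; it enumerates all subsets of tasks using the middle edge and recurses on the two subpaths.

```python
from itertools import combinations

def ufp_cover_exact(edges, tasks, demands, chosen, forbidden):
    """edges: contiguous list of edge indices; chosen: tasks already fixed
    in the solution; forbidden: tasks that may no longer be selected.
    Returns the minimum extra cost of extending `chosen` to a feasible
    cover of the given subpath, or infinity if impossible."""
    if not edges:
        return 0
    m = edges[len(edges) // 2]                        # middle edge e_M
    T_M = [i for i in tasks
           if tasks[i][0] <= m <= tasks[i][1]
           and i not in chosen and i not in forbidden]
    best = float('inf')
    for r in range(len(T_M) + 1):
        for sub in combinations(T_M, r):              # guess T'_M
            new_chosen = chosen | set(sub)
            # demand on the middle edge must already be covered
            cov = sum(tasks[i][2] for i in new_chosen
                      if tasks[i][0] <= m <= tasks[i][1])
            if cov < demands[m]:
                continue
            pos = edges.index(m)
            left = ufp_cover_exact(edges[:pos], tasks, demands,
                                   new_chosen, forbidden | set(T_M))
            right = ufp_cover_exact(edges[pos + 1:], tasks, demands,
                                    new_chosen, forbidden | set(T_M))
            best = min(best, sum(tasks[i][3] for i in sub) + left + right)
    return best

# toy instance: path with three edges, demands [2, 4, 2]
tasks = {0: (0, 1, 2, 1), 1: (1, 2, 2, 1), 2: (0, 2, 5, 3)}
demands = [2, 4, 2]
```

On the toy instance, the cheapest cover picks tasks 0 and 1 at total cost 2 rather than the single long task 2 at cost 3.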
When we continue in the same manner, the recursion depth becomes \(O(\log n)\) and the resulting algorithm is a QPTAS. In the sequel, we describe the above algorithm in detail and show in particular how to obtain the set of task sets \(\bar{\mathcal {T}}_{M}\).
2.1 Formal Description of the Algorithm
We assume that we know the value of the optimal objective, which we denote by B; if we do not know the value of B, we can use binary search and the algorithm to estimate it within a \(1+\varepsilon \) factor. In a preprocessing step we reject all tasks i whose cost is larger than B and select all tasks i whose cost is at most \(\varepsilon B/n\). The latter cost at most \(n\cdot \varepsilon B/n\le \varepsilon B\) in total, and thus we lose only a factor \(1+\varepsilon \) in the approximation ratio. We update the demand profile accordingly.
In order to prove the lemma, we formally introduce the notion of a profile. A profile \(Q:E'\rightarrow \mathbb {R}_{\ge 0}\) assigns a height \(Q(e)\) to each edge \(e\in E'\), and a profile \(Q\) dominates a profile \(Q'\) if \(Q(e)\ge Q'(e)\) holds for all \(e\in E'\). The profile \(Q_{\tilde{T}}\) induced by a set of tasks \(\tilde{T}\) is defined by the heights \(Q_{\tilde{T}}(e)~:=~\sum _{i\in T_e \cap \tilde{T}} p_i\) (recall that \(T_e\) denotes all tasks in T whose path \(P_i\) contains the edge e). Finally, a set of tasks \(\tilde{T}\) dominates a set of tasks \(\tilde{T}'\) if \(Q_{\tilde{T}}\) dominates \(Q_{\tilde{T}'}\).
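These two definitions translate directly into code. The following small helper is only illustrative (identifiers are our own; tasks are tuples \((s_i, t_i, p_i, c_i)\) as before):

```python
def induced_profile(task_set, tasks, num_edges):
    """Q_T(e) := sum of sizes p_i over tasks i in task_set whose
    subpath P_i contains edge e."""
    return [sum(tasks[i][2] for i in task_set
                if tasks[i][0] <= e <= tasks[i][1])
            for e in range(num_edges)]

def dominates(Q, Q_prime):
    """Profile Q dominates Q' iff Q(e) >= Q'(e) on every edge."""
    return all(q >= qp for q, qp in zip(Q, Q_prime))

# two tasks on a path with three edges
tasks = {0: (0, 1, 3, 1), 1: (1, 2, 2, 1)}
```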
Lemma 1
Proof
First, we guess the number of tasks in \({{\mathrm{OPT}}}_{(k,\ell )}\). If \(\big |{{\mathrm{OPT}}}_{(k,\ell )}\big |\) is smaller than \(\tfrac{1}{\varepsilon ^2}\) then we can guess an optimal set \({{\mathrm{OPT}}}_{(k,\ell )}\). Otherwise, we will consider a polynomial number of approximate profiles, one of which underestimates the unknown true profile induced by \({{\mathrm{OPT}}}_{(k,\ell )}\) by at most \(O(\varepsilon )\cdot \big |{{\mathrm{OPT}}}_{(k,\ell )}\big |\) units on each edge. For each approximate profile we will compute a cover of cost no more than \(1+O(\varepsilon )\) times the optimum cost of covering this profile, and if the profile is close to the true profile induced by \({{\mathrm{OPT}}}_{(k,\ell )}\), we can extend this solution to dominate the latter profile by adding only \(O(\varepsilon )\cdot \big |{{\mathrm{OPT}}}_{(k,\ell )}\big |\) more tasks.
Several arguments in the remaining proof are based on the structure of \(T_{(k,\ell )}\) and the structure of the true profile \(Q_{{{\mathrm{OPT}}}_{(k,\ell )}}\). Since all tasks in \(T_{(k,\ell )}\) contain the edge \(e_M\) and span a subpath of \(E'\), the height of the profile \(Q_{{{\mathrm{OPT}}}_{(k,\ell )}}\) is unimodal: it is non-decreasing until \(e_M\) and non-increasing after that; see Fig. 1. In particular, a task that covers a certain edge e covers all edges between e and \(e_M\) as well.
After covering some \(Q\in \mathcal {Q}\) with tasks \(T^{*}\) in the first step, in the second step we extend this cover by additional tasks \(A^{*}\subseteq T_{(k,\ell )}\setminus T^{*}\). We define the set \(A^{*}\) to contain the \(\varepsilon \,(1+\varepsilon )\cdot \big |{{\mathrm{OPT}}}_{(k,\ell )}\big |\) tasks in \(T_{(k,\ell )}\setminus T^{*}\) with the leftmost start vertices and the \(\varepsilon \,(1+\varepsilon )\cdot \big |{{\mathrm{OPT}}}_{(k,\ell )}\big |\) tasks in \(T_{(k,\ell )}\setminus T^{*}\) with the rightmost end vertices. We add \(T^{*}\cup A^{*}\) to the set of task sets \(\bar{\mathcal {T}}_{(k,\ell )}\).
Assume that \(Q=Q^{*}\). Then the above LP has a feasible solution and in particular the resulting set \(T^{*}\cup A^{*}\) was added to \(\bar{\mathcal {T}}_{(k,\ell )}\). We claim that the computed tasks \(T^{*}\cup A^{*}\) dominate \({{\mathrm{OPT}}}_{(k,\ell )}\). Firstly, observe that any set of \(\varepsilon \,(1+\varepsilon )\cdot \big |{{\mathrm{OPT}}}_{(k,\ell )}\big |\) tasks from \(T_{(k,\ell )}\) has a total size of at least the gap between two height steps from \(\mathcal {H}\). Hence, if an edge e is covered by that many tasks from \(A^{*}\) and \(Q=Q^{*}\), then we know that \(Q_{T^{*}\cup A^{*}}(e)\ge Q_{{{\mathrm{OPT}}}_{(k,\ell )}}(e)\).
On the other hand, if an edge e is covered by fewer than \(\varepsilon \,(1+\varepsilon )\cdot \big |{{\mathrm{OPT}}}_{(k,\ell )}\big |\) tasks from \(A^{*}\), we know that there exists no further task in \(T_{(k,\ell )}\setminus (T^{*}\cup A^{*})\) whose path contains e. Otherwise, this would be a contradiction to the choice of the tasks in \(A^{*}\) being the \(\varepsilon \,(1+\varepsilon )\cdot \big |{{\mathrm{OPT}}}_{(k,\ell )}\big |\) ones with the leftmost start and rightmost end vertices, respectively. Thus, since in this second case \(T^{*}\cup A^{*}\) contains all tasks that cover e, we have that \(Q_{T^{*}\cup A^{*}}(e)\ge Q_{{{\mathrm{OPT}}}_{(k,\ell )}}(e)\).
We define the set of task sets \(\bar{\mathcal {T}}_{M}\) as follows: we consider all combinations of taking exactly one set \(\bar{T}_{(k,\ell )} \in \bar{\mathcal {T}}_{(k,\ell )}\) from each set of task sets \(\bar{\mathcal {T}}_{(k,\ell )}\) (there is one such set for each group \(T_{(k,\ell )}\)). For each such combination we take the union of the respective sets \(\bar{T}_{(k,\ell )}\) and add the resulting union to \(\bar{\mathcal {T}}_{M}\). Since there are \((\log n)^{O(1)}\) groups, by Lemma 1 the set \(\bar{\mathcal {T}}_{M}\) contains only a quasi-polynomial number of task sets, and it contains one set \(T_{M}^{*}\) which is a good approximation to \(T_{M}\cap {{\mathrm{OPT}}}\), i.e., the set \(T_{M}^{*}\) dominates \(T_{M}\cap {{\mathrm{OPT}}}\) and is at most a factor \(1+O(\varepsilon )\) more expensive. Now each node in the recursion tree has at most \(n^{(\log n)^{O(1)}}\) children and, as argued above, the recursion depth is \(O(\log n)\). Thus, a call to \(\mathrm {UFPcover}(E,\emptyset )\) has quasi-polynomial running time and yields a \((1+O(\varepsilon ))\)-approximation for the overall problem.
Theorem 1
For any \(\varepsilon >0\) there is a quasi-polynomial time \((1+\varepsilon )\)-approximation algorithm for UFP cover if the sizes of the tasks are in a quasi-polynomial range.
2.2 \((e + \varepsilon )\)-Approximation for GSP with Uniform Release Dates
Bansal and Pruhs [6] give a 4-approximation-preserving reduction from GSP with uniform release dates to UFP cover using geometric rounding. Here we observe that if instead we use randomized geometric rounding [18], then one can obtain an e-approximation-preserving reduction. Together with our QPTAS for UFP cover, we get the following result.
Theorem 2
For any \(\varepsilon >0\) there is a quasi-polynomial time \((e+\varepsilon )\)-approximation algorithm for GSP with uniform release dates.
Proof
The heart of the proof is an e-approximation-preserving reduction from GSP with uniform release dates to UFP cover. Although here we develop a randomized algorithm, we note that the reduction can be derandomized using standard techniques.
Given an instance of the scheduling problem we construct an instance of UFP cover as follows. For ease of presentation, we take our path \(G=(V,E)\) to have vertices \(0, 1, \ldots , P\); towards the end, we explain how to obtain an equivalent and more succinct instance. For each \(i = 1, \ldots , P\), edge \(e =(i-1,i)\) has demand \(u_e = P - i\). Finally, we assume that for each job j and time slot t we have \(f_j(t) = 0\) or \(f_j(t) \ge 1\); otherwise, we can always scale the functions so that this property holds.
The reduction has two parameters, \(\gamma > 1\) and \(\alpha \in [0,1]\), which will be chosen later to minimize the approximation guarantee. For each job j, we define a sequence of times \(t_0^j, t_1^j, t_2^j, \ldots , t_k^j\) starting from 0 and ending with \(P+1\) such that the cost of finishing a job between two consecutive times differs by at most a factor of \(\gamma \). Formally, \(t_0^j = 0\), \(t_k^j = P+1\) and \(t_i^j\) is the first time step such that \(f_j(t_i^j) > \gamma ^{i-1+\alpha }\). For each \(i> 0\) such that \(t_{i-1}^j < t_i^j\), we create a task covering the interval \(\left[ t_{i-1}^j, t_i^j - 1\right] \) having demand \(p_j\) and costing \(f_j(t_i^j - 1)\).
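The breakpoint construction above can be sketched as follows. This is an illustrative sketch only, assuming (as in the reduction) a non-decreasing cost function given as a table indexed by time and a parameter \(\gamma > 1\); all names are our own.

```python
def breakpoints(f, P, gamma, alpha):
    """f: list with f[t] = cost of finishing at time t, non-decreasing,
    defined for t = 0..P.  Returns [t_0, t_1, ..., t_k] with t_0 = 0,
    t_k = P + 1, and t_i the first time step with f[t_i] > gamma^(i-1+alpha)."""
    ts = [0]
    i = 1
    while True:
        threshold = gamma ** (i - 1 + alpha)
        # since f is non-decreasing, searching from the last breakpoint
        # finds the globally first time step exceeding the threshold
        t = next((t for t in range(ts[-1], P + 1) if f[t] > threshold), None)
        if t is None:
            break
        ts.append(t)
        i += 1
    ts.append(P + 1)
    return ts
```

For instance, with \(f = (0,1,2,4,8,16)\), \(P=5\), \(\gamma =2\), and \(\alpha =0.5\), the thresholds are \(2^{0.5}, 2^{1.5}, \ldots \), and the sequence of breakpoints is \(0, 2, 3, 4, 5, 6\).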
Given a feasible solution of the UFP cover instance, we claim that we can construct a feasible schedule of no greater cost. For each job j, we consider the rightmost task chosen in the UFP cover solution (to be feasible, at least one task from each job must be picked) and assign to j a due date equal to the right endpoint of that task. Notice that the cost of finishing the jobs by their due dates equals the total cost of these rightmost tasks. By the feasibility of the UFP cover solution, it must be the case that for each time t, the total processing volume of jobs with a due date of t or greater is at least \(P-t+1\). Therefore, scheduling the jobs according to earliest due date first yields a schedule that meets all the due dates. Therefore, the cost of the schedule is at most the cost of the UFP cover solution.
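The earliest-due-date-first step in this argument admits a simple sketch: with all jobs released at time 0 on one machine, EDF meets every due date exactly when, for each due date, the total volume of jobs due by then fits before it. The code below is an illustration with hypothetical names, not part of the reduction itself.

```python
def edf_feasible(jobs):
    """jobs: list of (processing_time, due_date) pairs, all released at
    time 0.  Returns True iff scheduling in earliest-due-date-first
    order completes every job by its due date on a single machine."""
    t = 0
    for p, d in sorted(jobs, key=lambda jd: jd[1]):  # EDF order
        t += p                                        # job completes at t
        if t > d:
            return False
    return True
```

As a quick check, jobs \((2,2), (3,5), (1,6)\) are EDF-feasible, while \((2,2), (4,5)\) are not.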
To derandomize the reduction, at the expense of adding another \(\varepsilon '\) to the approximation factor, one can discretize the random variable \(\alpha \), solve several instances, and return the one producing the best solution. Finally, we mention that it is not necessary to construct the full path from 0 to P. It is enough to keep the vertices where tasks start or end. Stretches where no task begins or ends can be summarized by an edge having demand equal to the largest demand in that stretch.
Applying the e-approximation-preserving reduction and then running the \((1+ \varepsilon )\)-approximation algorithm of Theorem 1 finishes the proof. \(\square \)
3 General Cost Functions Under Speedup
We present a polynomial time algorithm that computes a solution for an instance of GSP with uniform release dates whose cost is optimal and that is feasible if the machine runs with speed \(1+\varepsilon \) (rather than unit speed).
Let \(0< \varepsilon <1\) be a constant and assume for simplicity that \(\tfrac{1}{\varepsilon }\in \mathbb {N}\). For our algorithm, we first prove different properties that we can assume “at \(1+\varepsilon \) speedup”; by this, we mean that there is a schedule whose cost is at most the optimal cost (without enforcing these restricting properties) and that is feasible if we increase the speed of the machine by a factor \(1+\varepsilon \). Many statements are similar to properties that are used in [1] for constructing PTASs for the problem of minimizing the weighted sum of completion times.
Lemma 2
 1.
The objective function is \(\sum _{j}f_{j}\big (C_{j}^{(1+\varepsilon )}\big )\), instead of \(\sum _{j}f_{j}(C_{j})\).
 2.
For each job j it holds \(S_{j}\,\ge \,{(1+\varepsilon )}^{\left\lfloor \log _{1+\varepsilon } \left( \tfrac{\varepsilon }{1+\varepsilon } \cdot p_j\right) \right\rfloor }\,=:\,r(j).\)
 3.
Any small job starting during an interval \(I_{t}\) finishes in \(I_{t}\).
 4.
Each large job starts at some point in time \(R_{t,k}\) and every interval \(I_{t,k}\) is used by either only small jobs or by one large job or it is empty.
 5.
For each interval \(I_{t}\) there is a time interval \(I_{t,k,\ell }:=[R_{t,k},R_{t,\ell })\) with \(0 \le k\le \ell \le 6\,\frac{1+\varepsilon }{\varepsilon ^{3}}\) during which no large jobs are scheduled, and no small jobs are scheduled during \(I_{t}\setminus I_{t,k,\ell }\).
Proof
Each property of the lemma will require us to increase the speed of the machine by a factor of \(1+\varepsilon \), apart from the last property. Compared to the initial unit speed, the final speed will be some power of \(1+\varepsilon \). Technically, we consolidate the resulting polynomial in \(\varepsilon \) to some \(\varepsilon ' = O(\varepsilon )\), achieving all properties at speed \(1+\varepsilon '\).
Throughout the proof we make use of the observation that at speedup \(1+\varepsilon \), a processing time p reduces to \(\tfrac{1}{1+\varepsilon }\cdot p\), and hence, we gain idle time of length \(\tfrac{\varepsilon }{1+\varepsilon }\cdot p\).
Regarding the second point of the lemma, the above observation implies that at speedup \(1+\varepsilon \) a job j of processing time \(p_j\) allows for an additional idle time of length \({\varepsilon /(1+\varepsilon )\cdot p_j}\). Hence, if \(S_j < {(1+\varepsilon )}^{\left\lfloor \log _{1+\varepsilon } (\varepsilon \cdot p_j/(1+\varepsilon )) \right\rfloor }=r(j)\) we can set its start time to r(j) without exceeding its unit speed completion time.
For showing the third point, consider a small job that starts in \(I_t\) and finishes in some later interval. By definition, its length is at most \(\varepsilon ^{3}\cdot R_{t+1}\). From the above observations and the interval length \(|I_{t}|=\varepsilon \cdot R_{t}\), it follows that at speed \(1+\varepsilon \), the interval \(I_t\) provides an additional idle time of length \(\varepsilon ^2/(1+\varepsilon )\cdot R_t\), and the length of the small job reduces to at most \(\varepsilon ^{3}\cdot R_{t}\). Since for sufficiently small \(\varepsilon \) it holds that \(\tfrac{\varepsilon ^2}{1+\varepsilon } \ge \varepsilon ^{3}\), the small job can be scheduled during the new idle time, and hence it finishes in \(I_t\).
Note that by starting jobs from \(I_t\) earlier in the same interval we do not violate the release dates assumed by Lemma 2.2 since jobs are only released at the beginning of an interval \(I_t\). This completes the proof of the fourth part of the lemma.
The proof of the last point of the lemma is a straightforward implication of its fourth point. By this we can assume that all small jobs are contained in intervals \(I_{t,k}\) that contain no large jobs. We change the order of the subintervals which either belong to small jobs or to large jobs that are fully scheduled in \(I_t\). We proceed in such a way that all intervals of small jobs occur consecutively. Since all jobs still finish in \(I_t\) this modification does not increase the cost of the schedule.
Overall, we can make the assumptions of the lemma at a total speedup of \((1+\varepsilon )^4\), which is \(1+O(\varepsilon )\) under our assumption that \(\varepsilon <1\), so the lemma follows. \(\square \)
3.1 Special Case of Polynomial Processing Times
For the moment, let us assume that the processing times of the instance are polynomially bounded. We will give a generalization to arbitrary instances later.
Our strategy is the following: Since the processing times are bounded, the whole schedule finishes within \(\log _{1+\varepsilon }(\sum _{j}p_{j})\le O(\frac{1}{\varepsilon }\log n)\) intervals (i.e., \(O(\log n)\) for constant \(\varepsilon \)). Ideally, we would like to guess the placement of all large jobs in the schedule and then use a linear program to fill in the remaining small jobs. However, this would result in \(n^{O(\frac{1}{\varepsilon }\log n)}\) possibilities for the large jobs, which is quasi-polynomial but not polynomial. Instead, we only guess the pattern of large-job usage for each interval. A pattern P for an interval is a set of \(O(\tfrac{1}{\varepsilon ^{3}})\) integers which defines the start and end times of the slots during which large jobs are executed in \(I_{t}\).
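To illustrate why the number of patterns per interval depends only on \(\varepsilon \), one can enumerate patterns as sets of disjoint slots whose boundaries lie on the \(O(\tfrac{1}{\varepsilon ^{3}})\) grid points of the interval. The sketch below is our own simplified model (adjacent slots sharing a boundary are not represented separately), not the paper's formal definition.

```python
from itertools import combinations

def enumerate_patterns(K):
    """All ways to reserve disjoint slots [a, b) with boundaries among the
    K + 1 grid points 0..K.  A pattern is an even-size subset of grid
    points, read as consecutive (start, end) pairs."""
    points = range(K + 1)
    patterns = [()]                       # the empty pattern: no large jobs
    for size in range(2, K + 2, 2):
        for pts in combinations(points, size):
            patterns.append(tuple(zip(pts[::2], pts[1::2])))
    return patterns
```

The count is a function of K alone (hence of \(\varepsilon \) alone), independent of n: e.g. 4 patterns for K = 2 and 8 for K = 3.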
Proposition 1
For each interval \(I_{t}\) there are only \(N\in O_{\varepsilon }(1)\) many possible patterns, i.e., constantly many for constant \(\varepsilon \). The value N is independent of t.

- assigns large jobs to the slots specified by the pattern,

- assigns small jobs into the remaining idle times on the intervals.
Lemma 3
Proof
The proof follows the general idea of [22]. Given some fractional solution (x, y) to the sLP (2)–(7), we construct a fractional matching M in a bipartite graph \(G=(V\cup W, E)\). For each job \(j\in J\) and for each large slot \(s\in Q\), we introduce vertices \(v_j\in V\) and \(w_s\in W\), respectively. Moreover, for each slot of small jobs \(t\in I\), we add \(k_t:=\big \lceil \sum _{j\in J}y_{t,j}\big \rceil \) vertices \(w_{t,1},\dots ,w_{t,k_t}\in W\). We introduce an edge \((v_j, w_s)\in E\) with cost \(f_j(R_{t(s)+1})\) for all job-slot pairs for which \(x_{s,j}>0\), and we choose it to an extent of \(x_{s,j}\) for M. Regarding the vertices \(w_{t,1},\dots ,w_{t,k_t}\), we add edges in the following way. We first sort all jobs j with \(y_{t,j}>0\) in non-increasing order of their length \(p_j\), and we assign them greedily to \(w_{t,1},\dots ,w_{t,k_t}\); that is, we choose the first vertex \(w_{t,\ell }\) which has not yet been assigned one unit of fractional jobs, we assign as much as possible of \(y_{t,j}\) to it, and if necessary, we assign the remaining part to the next vertex \(w_{t,\ell +1}\). Analogously to the above edges, we define the cost of an edge \((v_j, w_{t,\ell })\) to be \(f_j(R_{t+1})\), and we add it fractionally to M according to the fraction \(y_{t, \ell ,j}\) of \(y_{t,j}\) that the job was assigned to \(w_{t,\ell }\) by the greedy assignment. Note that \(p_{t,\ell }^{\min }\ge p_{t,\ell +1}^{\max }\) for \(\ell =1,\dots ,k_t-1\), where \(p_{t,\ell }^{\min }\) and \(p_{t,\ell }^{\max }\) are the minimum and maximum length of all jobs (fractionally) assigned to \(w_{t,\ell }\), respectively.
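The greedy fractional assignment used in this construction can be sketched as follows. This is an illustration with our own identifiers, assuming the fractions of each small slot sum to at most \(k_t\); jobs are poured into unit-capacity buckets in non-increasing order of length, splitting a job across two consecutive buckets when needed.

```python
import math

def greedy_fill(y):
    """y: list of (p_j, y_j) pairs, i.e. job length and its fraction in
    this small slot.  Returns k_t buckets (one per vertex w_{t,l}), each a
    list of (p_j, assigned fraction) with bucket totals at most 1."""
    k = math.ceil(sum(frac for _, frac in y))
    buckets = [[] for _ in range(k)]
    idx, room = 0, 1.0
    for p, frac in sorted(y, key=lambda jf: -jf[0]):  # non-increasing p_j
        while frac > 1e-12:
            take = min(frac, room)
            buckets[idx].append((p, take))
            frac -= take
            room -= take
            if room <= 1e-12 and idx + 1 < k:         # bucket full: advance
                idx, room = idx + 1, 1.0
    return buckets
```

By construction the minimum job length in one bucket is at least the maximum in the next, which is exactly the property \(p_{t,\ell }^{\min }\ge p_{t,\ell +1}^{\max }\) noted above.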
In particular, the cost of the computed solution is no more than the cost of the integral optimum and it is feasible under \(1+O(\varepsilon )\) speedup according to Lemma 2. We remark that the technique of guessing patterns and filling them in by a linear program was first used in [23].
3.2 General Processing Times
For the general case, i.e., for arbitrary processing times, we first show that at \(1+\varepsilon \) speedup, we can assume that for each job j there are only \(O(\log n)\) intervals between r(j) (the artificial release date of j) and \(C_{j}\). Then we devise a dynamic program which moves from left to right on the time axis and considers sets of \(O(\log n)\) intervals at a time, using the technique from Sect. 3.1.
Lemma 4
Proof
Throughout the remainder of this section let \(K:=\big \lceil \log _{1+\varepsilon }(q(n))\big \rceil \in O_{\varepsilon }(\log n)\) where q(n) is the polynomial from Lemma 4. Thus, K is an upper bound on the number of intervals between the release date r(j) and the completion time \(C_{j}\) of each job j.
If, after applying Lemma 4, there is a point in time \(\tau \) at which no job can be scheduled, i.e., there is no job j with \(\tau \in [r(j),r(j)\cdot q(n))\), then we divide the instance into two independent pieces.
Proposition 2
Without loss of generality we can assume that the union of all intervals \(\bigcup _{j}[r(j),r(j)\cdot q(n))\) is a (connected) interval.
For our dynamic program (DP) we subdivide the time axis into blocks. Each block \(B_{i}\) consists of the intervals \(I_{i\cdot K},\ldots ,I_{(i+1)\cdot K1}\). The idea is that in each iteration the DP schedules the jobs released during a block \(B_{i}\) in the intervals corresponding to the blocks \(B_{i}\) and \(B_{i+1}\). So in the end, the intervals of each block \(B_{i+1}\) contain jobs released during \(B_{i}\) and \(B_{i+1}\). To separate the jobs from both blocks we prove the following lemma.
Lemma 5

during \([a_t,b_t)\) only small jobs from block \(B_{i}\) are scheduled and during \(I_{t}\setminus [a_t,b_t)\) no small jobs from block \(B_{i}\) are scheduled,

during \([b_t,c_t)\) only small jobs from block \(B_{i+1}\) are scheduled and during \(I_{t}\setminus [b_t,c_t)\) no small jobs from block \(B_{i+1}\) are scheduled,

the interval boundaries \(a_t,b_t,c_t\) are of the form \(\big (1+z\cdot \tfrac{\varepsilon ^{4}}{4\,(1+\varepsilon )^2}\big )\cdot R_{t}\) for \(z\in \big \{0,1,\ldots ,\frac{4\,(1+\varepsilon )^2}{\varepsilon ^{3}}\big \}\) (so possibly \([a_t,b_t)=\emptyset \) or \([b_t,c_t)=\emptyset \)).
Proof
Based on Lemma 2.3 we can assume that all small jobs that are started within \(I_t\) also finish in \(I_t\); moreover, they are processed in some interval \(I_{t,k,\ell }\subseteq I_t\) which contains no large jobs (see Lemma 2.5 for the notation). By Lemma 4, the interval \(I_t\) can be assumed to contain only small jobs with release date in \(B_i\) and \(B_{i+1}\), and by Lemma 2.1 we know that we can rearrange the jobs in \(I_t\) without changing the cost. Hence, for proving the lemma it is sufficient to show that we can split \(I_{t,k,\ell }\) at some of the discrete points given in the lemma, such that the small jobs released in \(B_i\) and \(B_{i+1}\) are scheduled before and after this point, respectively.
The interval \(I_{t,k,\ell }\) starts at \((1+\tfrac{1}{4}\, k\cdot \varepsilon ^{4}/(1+\varepsilon ))\cdot R_{t}\) and its length is some integral multiple of \(\tfrac{1}{4}\, \varepsilon ^{4}/(1+\varepsilon )\cdot R_{t}\). At a speedup of \(1+\varepsilon \), the interval \(I_{t,k,\ell }\) provides additional idle time of length at least \(\tfrac{1}{4}\, \varepsilon ^{4}/(1+\varepsilon )^2\cdot R_{t}\) (if \(I_{t,k,\ell }\) is not empty), which equals the step width of the discrete interval end points required in the lemma. Hence, by scheduling all small jobs released in \(B_i\) and \(B_{i+1}\) at the very beginning and very end of \(I_{t,k,\ell }\), there must be a point in time \(\tau :=(1+z\cdot \varepsilon ^{4}/(4\,(1+\varepsilon )^2))\cdot R_{t}\) with \(z\in \{0,1,\ldots , 4\,(1+\varepsilon )^2/\varepsilon ^{3}\}\) which lies in the idle interval between the two groups of small jobs. Finally, if setting \(a_t\) and \(c_t\) to the start and end of \(I_{t,k,\ell }\), respectively, and if choosing \(b_t:=\tau \), we obtain intervals as claimed in the lemma. \(\square \)
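The discrete candidate boundaries used in this proof are easy to enumerate explicitly (a sketch; the function name is ours):

```python
def candidate_boundaries(R_t, eps):
    """Enumerate the candidate points (1 + z * eps^4 / (4*(1+eps)^2)) * R_t
    for z = 0, ..., 4*(1+eps)^2 / eps^3, i.e. the discrete positions at
    which a_t, b_t, c_t from Lemma 5 may lie."""
    step = eps**4 / (4 * (1 + eps)**2)
    zmax = int(4 * (1 + eps)**2 / eps**3)
    return [(1 + z * step) * R_t for z in range(zmax + 1)]
```

For instance, with \(\varepsilon =1\) the candidates subdivide \([R_{t},2R_{t}]\) into 16 equal steps, so only constantly many split points per interval have to be guessed.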
For the dynamic program, a pattern for an interval \(I_{t}\) of a block \(B_{i}\) now specifies:
the start and end times of the large jobs from \(B_{i-1}\) which are executed during \(I_{t}\),

the start and end times of the large jobs from \(B_{i}\) which are executed during \(I_{t}\),

\(a_t,b_t,c_t\) according to Lemma 5, implying slots for small jobs.
Each dynamic programming cell is characterized by a tuple \((B_{i},P_{i})\) where \(B_{i}\) is a block during which at least one job is released, or the block immediately following such a block, and \(P_i\) denotes a pattern for all intervals of block \(B_{i}\). For a pattern \(P_i\), we denote by \(Q_{i-1}(P_i)\) and \(Q_{i}(P_i)\) the sets of slots in \(B_i\) which are reserved for large jobs released in \(B_{i-1}\) and \(B_i\), respectively. Moreover, for some interval \(I_t\) in \(B_i\) let \(D_{i-1,t}(P_i)\) and \(D_{i,t}(P_{i})\) be the two slots for small jobs from \(B_{i-1}\) and \(B_i\), respectively. The number of DP-cells is polynomially bounded as there are only n blocks during which at least one job is released and, as in Sect. 3.1, the number of patterns for a block is bounded by \(\bar{N}^{O_{\varepsilon }(\log n)}\in n^{O_{\varepsilon }(1)}\).
The subproblem encoded in a cell \((B_{i},P_i)\) is to schedule all jobs j with \(r(j)\ge I_{i\cdot K}\) during \([R_{i\cdot K},\infty )\) while obeying the pattern \(P_i\) for the intervals \(I_{i\cdot K},\ldots ,I_{(i+1)\cdot K1}\). To solve this subproblem we first enumerate all possible patterns \(P_{i+1}\) for all intervals of block \(B_{i+1}\). Suppose that we guessed the pattern \(P_{i+1}\) corresponding to the optimal solution of the subproblem given by the cell \((B_{i},P_{i})\). Like in Sect. 3.1 we solve the problem of scheduling the jobs of block \(B_{i}\) according to the patterns \(P_i\) and \(P_{i+1}\) by solving and rounding a linear program of the same type as \(\mathrm {sLP}\). Denote by \({{\mathrm{opt}}}(B_{i},P_i,P_{i+1})\) the optimal solution to this subproblem.
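The resulting recursion can be summarized as follows (a sketch with hypothetical interfaces: `patterns[i]` stands for the enumeration of patterns of block \(B_i\), and `subproblem_cost(i, P, Q)` stands in for solving and rounding the sLP, i.e., for \({{\mathrm{opt}}}(B_{i},P_i,P_{i+1})\)):

```python
def block_dp(blocks, patterns, subproblem_cost):
    """Right-to-left DP over blocks.

    A cell is a pair (block index i, pattern P for block B_i); its value
    is the cheapest cost of scheduling everything released from B_i
    onwards while obeying P, obtained by guessing the pattern Q of the
    next block and adding the corresponding subproblem cost.
    """
    n = len(blocks)
    table = {}
    for i in range(n - 1, -1, -1):
        for P in patterns[i]:
            if i == n - 1:
                table[(i, P)] = subproblem_cost(i, P, None)
            else:
                table[(i, P)] = min(
                    subproblem_cost(i, P, Q) + table[(i + 1, Q)]
                    for Q in patterns[i + 1]
                )
    return min(table[(0, P)] for P in patterns[0])
```

The table has polynomially many cells, and each transition enumerates the polynomially many patterns of the next block, matching the counting argument above.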
Lemma 6

the cost is bounded by \({{\mathrm{opt}}}(B_{i},P_{i},P_{i+1})\) and

the schedule is feasible if during \(B_{i}\) and \(B_{i+1}\) the speed of the machine is increased by a factor of \(1+\varepsilon \).
Proof
Overall, in the argument above we needed to increase the speed of the machine by a factor \(1+O(\varepsilon )\le 1+\alpha \cdot \varepsilon \) for some constant \(\alpha \), and we obtained a polynomial time algorithm, assuming that \(\varepsilon \) is constant. Therefore, for any given constant \(\varepsilon '>0\) we can define \(\varepsilon := \varepsilon '/\alpha \) and construct our algorithm above for this value of \(\varepsilon \). This yields a polynomial time algorithm that needs to increase the speed of the machine only by a factor \(1+\varepsilon '\). The main theorem of this section follows.
Theorem 3
Let \(\varepsilon >0\). There is a polynomial time algorithm for GSP with uniform release dates which computes a solution with optimal cost and which is feasible if the machine runs with speed \(1+\varepsilon \).
4 Few Classes of Cost Functions
In this section, we study the following special case of GSP with release dates. We assume that each cost function \(f_{j}\) can be expressed as \(f_{j}=w_{j}\cdot g_{u(j)}\) for a job-dependent weight \(w_{j} \in \mathbb {N}\), k global functions \(g_{1},\ldots ,g_{k}\), and an assignment \(u:J\rightarrow [k]\) of cost functions to jobs. We present a QPTAS for this problem, assuming that \(k=(\log n)^{O(1)}\) and that the jobs have at most \((\log n)^{O(1)}\) distinct release dates. We assume that the job weights are in a quasi-polynomial range, i.e., we assume that there is an upper bound \(W=2^{(\log n)^{O(1)}}\) for the job weights.
In our algorithm, we first round the values of the functions \(g_{i}\) so that they attain only few values, \((\log n)^{O(1)}\) many. Then we guess the \((\log n)^{O(1)}/\varepsilon \) most expensive jobs and their costs. For the remaining problem, we use a linear program. Since we rounded the functions \(g_{i}\), our LP is sparse, and by rounding an extreme point solution we increase the cost by at most an \(\varepsilon \)-fraction of the cost of the previously guessed jobs, which yields a \((1+\varepsilon )\)-approximation overall.
Formally, we use a binary search framework to estimate the optimal value B. Having this estimate, we adjust the functions \(g_{i}\) such that each of them is a step function with at most \((\log n)^{O(1)}\) steps, all being powers of \(1+\varepsilon \) or 0.
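The rounding of a single function value can be sketched as follows (the function name and interface are ours; B is the current binary-search estimate of the optimum and W the weight bound):

```python
import math

def round_cost_value(v, B, W, n, eps):
    """Round one cost value g_i(t) = v in the spirit of Lemma 7:
    values below eps*B/(n*W) are rounded down to 0, and all other
    values are rounded up to the next power of (1+eps).

    Rounding up loses at most a factor (1+eps) per value, and the
    truncation to 0 loses at most eps*B in total over all n jobs.
    """
    lo = eps * B / (n * W)
    if v < lo:
        return 0.0
    k = math.ceil(math.log(v, 1 + eps))
    return (1 + eps) ** k
```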
Lemma 7
At \(1+\varepsilon \) loss we can assume that for each \(i\in [k]\) and each t it holds that \(g_{i}(t)\) is either 0 or a power of \(1+\varepsilon \) in \(\big [\frac{\varepsilon }{n}\cdot \frac{B}{W},B\big )\).
Proof
Our problem is in fact equivalent to assigning a due date \(d_{j}\) to each job such that the due dates are feasible, meaning that there is a preemptive schedule where every job finishes no later than its due date, and the objective being \(\sum _{j}f_{j}(d_{j})\) (see also [6]). The following lemma characterizes when a set of due dates is feasible.
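One standard way to test whether a due-date assignment is feasible, sketched here for concreteness (it is not necessarily the formulation used in Lemma 8), is to simulate the preemptive earliest-due-date-first rule, which is exact on a single machine with release dates:

```python
import heapq

def edf_feasible(jobs):
    """Check feasibility of a due-date assignment by simulating
    preemptive EDF (earliest due date first).

    `jobs` is a list of (release, processing, due) triples; the
    assignment is feasible iff EDF completes every job by its due date.
    """
    jobs = sorted(jobs)              # process releases in time order
    t, i, heap = 0.0, 0, []          # heap entries: [due, remaining]
    while i < len(jobs) or heap:
        if not heap:                 # machine idle: jump to next release
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(heap, [jobs[i][2], jobs[i][1]])
            i += 1
        due, rem = heapq.heappop(heap)
        horizon = jobs[i][0] if i < len(jobs) else float('inf')
        run = min(rem, horizon - t)  # run until done or next release
        t += run
        if rem - run > 1e-12:
            heapq.heappush(heap, [due, rem - run])
        elif t > due + 1e-12:        # job completes after its due date
            return False
    return True
```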
Lemma 8
Denote by D all points in time where at least one cost function \(g_{i}^{(1+\varepsilon )}\) increases. It suffices to consider only those values as possible due dates.
Proposition 3
There is an optimal due date assignment such that \(d_{j}\in D\) for each job j.
Denote by R the set of all release dates of the jobs. Recall that \(|R|\le (\log n)^{O(1)}\). Now, we guess the \(|D|\cdot |R|/\varepsilon \) most expensive jobs of the optimal solution and their respective costs. Due to the rounding in Lemma 7 we have that \(|D|\le k\cdot \log _{1+\varepsilon }(W\cdot n/\varepsilon )=(\log n)^{O(1)}\) and thus there are only \(O(n^{|D|\cdot |R|/\varepsilon })=n^{(\log n)^{O(1)}/\varepsilon }\) many guesses.
Suppose we guess this information correctly. Let \(J_{E}\) denote the guessed jobs and for each job \(j\in J_{E}\) denote by \(d_{j}\) the latest time where it attains the guessed cost, i.e., its due date. Denote by \(c_{\mathrm {thres}}\) the minimum cost of a job in \(J_{E}\), according to the guessed costs. The remaining problem consists of assigning a due date \(d_{j}\in D\) to each job in \(J\setminus J_{E}\) such that none of these jobs costs more than \(c_{\mathrm {thres}}\), all due dates together are feasible, and the overall cost is minimized. We express this as a linear program.
Lemma 9
Proof
Since \(c(x^{*}) + \sum _{j\in J_{E}}f_{j}(d_{j})\) is a lower bound on the optimum, we obtain a \((1+\varepsilon )\)-approximation. As there are quasi-polynomially many guesses for the expensive jobs and the remainder can be done in polynomial time, we obtain a QPTAS.
Theorem 4
There is a QPTAS for GSP, assuming that each cost function \(f_{j}\) can be expressed as \(f_{j}=w_{j}\cdot g_{u(j)}\) for some job-dependent weight \(w_{j}\) and at most \(k=(\log n)^{O(1)}\) global functions \(g_{1},\ldots ,g_{k}\), and that the jobs have at most \((\log n)^{O(1)}\) distinct release dates.
Footnotes
 1.
We write \(O_\varepsilon (f(n))\) for an expression which is O(f(n)) for constant \(\varepsilon \).
Notes
Acknowledgements
Open access funding provided by Max Planck Society. We would like to thank the anonymous reviewers for many helpful comments.
References
1. Afrati, F., Bampis, E., Chekuri, C., Karger, D., Kenyon, C., Khanna, S., Milis, I., Queyranne, M., Skutella, M., Stein, C., Sviridenko, M.: Approximation schemes for minimizing average weighted completion time with release dates. In: Proceedings of the 40th Annual Symposium on Foundations of Computer Science (FOCS '99), pp. 32–44 (1999)
2. Anagnostopoulos, A., Grandoni, F., Leonardi, S., Wiese, A.: A mazing 2+\(\epsilon \) approximation for unsplittable flow on a path. In: Proceedings of the 25th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '14), pp. 26–41 (2014)
3. Bansal, N., Chakrabarti, A., Epstein, A., Schieber, B.: A quasi-PTAS for unsplittable flow on line graphs. In: Proceedings of the 38th Annual ACM Symposium on Theory of Computing (STOC '06), pp. 721–729 (2006)
4. Bansal, N., Dhamdhere, K.: Minimizing weighted flow time. ACM Trans. Algorithms 3(4), 39 (2007)
5. Bansal, N., Friggstad, Z., Khandekar, R., Salavatipour, R.: A logarithmic approximation for unsplittable flow on line graphs. In: Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '09), pp. 702–709 (2009)
6. Bansal, N., Pruhs, K.: The geometry of scheduling. SIAM J. Comput. 43(5), 1684–1698 (2014)
7. Bansal, N., Verschae, J.: Personal communication
8. Bar-Noy, A., Bar-Yehuda, R., Freund, A., Naor, J., Schieber, B.: A unified approach to approximating resource allocation and scheduling. J. ACM 48(5), 1069–1090 (2001)
9. Bonsma, P., Schulz, J., Wiese, A.: A constant factor approximation algorithm for unsplittable flow on paths. In: Proceedings of the 52nd Annual IEEE Symposium on Foundations of Computer Science (FOCS '11), pp. 47–56 (2011)
10. Carr, R.D., Fleischer, L.K., Leung, V.J., Phillips, C.A.: Strengthening integrality gaps for capacitated network design and covering problems. In: Proceedings of the 11th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '00), pp. 106–115 (2000)
11. Chakaravarthy, V.T., Kumar, A., Roy, S., Sabharwal, Y.: Resource allocation for covering time varying demands. In: Proceedings of the 19th European Symposium on Algorithms (ESA '11), volume 6942 of LNCS, pp. 543–554. Springer (2011)
12. Chakrabarti, A., Chekuri, C., Gupta, A., Kumar, A.: Approximation algorithms for the unsplittable flow problem. In: Proceedings of the 5th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX '02), volume 2462 of LNCS, pp. 51–66. Springer (2002)
13. Chekuri, C., Khanna, S.: Approximation schemes for preemptive weighted flow time. In: Proceedings of the 34th Annual ACM Symposium on Theory of Computing (STOC '02), pp. 297–305 (2002)
14. Chekuri, C., Khanna, S., Zhu, A.: Algorithms for minimizing weighted flow time. In: Proceedings of the 33rd Annual ACM Symposium on Theory of Computing (STOC '01), pp. 84–93 (2001)
15. Chekuri, C., Mydlarz, M., Shepherd, F.: Multicommodity demand flow in a tree and packing integer programs. ACM Trans. Algorithms 3(3), 27 (2007)
16. Cheung, M., Mestre, J., Shmoys, D.B., Verschae, J.: A primal-dual approximation algorithm for min-sum single-machine scheduling problems. arXiv:1612.03339
17. Höhn, W., Jacobs, T.: On the performance of Smith's rule in single-machine scheduling with nonlinear cost. ACM Trans. Algorithms 11(4), 25 (2015)
18. Kao, M.-Y., Reif, J.H., Tate, S.R.: Searching in an unknown environment: an optimal randomized algorithm for the cow-path problem. Inf. Comput. 131(1), 63–79 (1996)
19. Lawler, E.L.: A "pseudopolynomial" algorithm for sequencing jobs to minimize total tardiness. In: Studies in Integer Programming, volume 1 of Annals of Discrete Mathematics, pp. 331–342. North-Holland, Amsterdam (1977)
20. Lovász, L., Plummer, M.: Matching Theory, volume 29 of Annals of Discrete Mathematics. North-Holland, Amsterdam (1986)
21. Megow, N., Verschae, J.: Dual techniques for scheduling on a machine with varying speed. In: Proceedings of the 40th International Colloquium on Automata, Languages and Programming (ICALP '13), volume 7965 of LNCS, pp. 745–756. Springer (2013)
22. Shmoys, D.B., Tardos, É.: An approximation algorithm for the generalized assignment problem. Math. Program. 62(1–3), 461–474 (1993)
23. Sviridenko, M., Wiese, A.: Approximating the configuration-LP for minimizing weighted sum of completion times on unrelated machines. In: Proceedings of the 16th Conference on Integer Programming and Combinatorial Optimization (IPCO '13), volume 7801 of LNCS, pp. 387–398. Springer (2013)
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.