Theory of Computing Systems, Volume 50, Issue 4, pp 641–674

Structure of Polynomial-Time Approximation


Abstract

Approximation schemes are commonly classified as being either a polynomial-time approximation scheme (ptas) or a fully polynomial-time approximation scheme (fptas). To properly differentiate between approximation schemes for concrete problems, several subclasses have been identified: (optimum-)asymptotic schemes (ptas^∞, fptas^∞), efficient schemes (eptas), and size-asymptotic schemes. We explore the structure of these subclasses, their mutual relationships, and their connection to the classic approximation classes. We prove that several of the classes are in fact equivalent. Furthermore, we prove the equivalence of eptas to so-called convergent polynomial-time approximation schemes. The results are used to refine the hierarchy of polynomial-time approximation schemes considerably and demonstrate the central position of eptas among approximation schemes.

We also present two ways to bridge the hardness gap between asymptotic approximation schemes and classic approximation schemes. First, using notions from fixed-parameter complexity theory, we provide new characterizations of when problems have a ptas or fptas. Simultaneously, we prove that a large class of problems (including all MAX-SNP-complete problems) cannot have an optimum-asymptotic approximation scheme unless P=NP, thus strengthening results of Arora et al. (J. ACM 45(3):501–555, 1998). Second, we distinguish a new property exhibited by many optimization problems: pumpability. With this notion, we considerably generalize several problem-specific approaches to improve the effectiveness of approximation schemes with asymptotic behavior.

Keywords

Efficient computation · NP-optimization problems · Polynomial-time approximation schemes · EPTAS · Asymptotic polynomial-time approximation schemes · Approximation-preserving reductions · Structure of complexity classes

1 Introduction

In the theory and practice of hard NP-optimization problems, approximation schemes are widely used for efficiently finding solutions to within any specified relative error ϵ from the optimum. Paz and Moran [36] classified these schemes into polynomial-time approximation schemes (ptas) and fully polynomial-time approximation schemes (fptas). However, the theory of approximation algorithms has led to several other useful classes of schemes, including optimum-asymptotic (ptas^∞, fptas^∞), efficient (eptas), and size-asymptotic (ptas^ω, fptas^ω) approximation schemes. It is the goal of this paper to expose the surprising connections between these seemingly unrelated notions and to study their deeper structural properties.

The foremost conclusion that follows from the results of this paper is that efficient polynomial-time approximation schemes (eptas) hold a central position in the landscape of polynomial-time approximation schemes. An eptas has a running time of the form f(1/ϵ)⋅n^{O(1)}, where n denotes the instance size and f is some computable function. The class of optimization problems admitting an eptas is called EPTAS. We show that EPTAS is closely related to the classes of problems admitting ‘asymptotic’ approximation schemes, where the relative error ϵ is attained only asymptotically, i.e. for instances of large size or with a large optimum. Optimum-asymptotic approximation schemes are well known, for instance from the study of approximation algorithms for bin packing problems (see e.g. [11, 12, 27]).

Concretely, we prove that all commonly distinguished classes of problems with an asymptotic polynomial-time approximation scheme are superclasses of EPTAS. Two of these classes, FPTAS^ω and FIPTAS^ω, both corresponding to size-asymptotic schemes, even coincide with the class EPTAS (see Sect. 3). This settles one of the main questions that motivated this paper: recent research [40, 41, 42] had shown that natural problems having an fptas^ω exist, but their position in the hierarchy of approximable problems was hitherto unclear.

Moreover, we distinguish the notion of convergent polynomial-time approximation schemes, in which the approximation ratio improves by some function of the instance size as the instance size grows. We show that the corresponding class of optimization problems is equivalent to EPTAS as well (see Sect. 4). This strengthens the assertion that EPTAS is central in the landscape of problems admitting polynomial-time approximation schemes and deepens the understanding of this class.

We also consider the characteristics of asymptotic approximation schemes. In general, fully polynomial-time approximation schemes have a running time depending (polynomially) on both the instance size and 1/ϵ. If the running time depends only on the instance size, a scheme is called a fully input-polynomial-time approximation scheme (fiptas). We show that if a problem admits an asymptotic fptas (an fptas^∞ or an fptas^ω), then it admits an asymptotic fiptas of the same kind (an fiptas^∞ or an fiptas^ω, respectively). Hence the corresponding classes coincide, demonstrating an important property of the notions of size- and optimum-asymptotic approximation schemes.

Figure 1 shows the hierarchy of problem classes that follows from this paper. A proper definition of all classes in the figure is given in the next sections.
Fig. 1

The arrows represent the ‘is contained in’-relation. The existence of any inclusion relation not in the above graph or the collapse of one of the arrows implies that either P=NP or FPT=W[1]

In the second part of the paper, we discuss several ways to overcome the hardness gap between asymptotic approximation schemes and classic approximation schemes. Section 6 employs ideas from fixed-parameter complexity theory. The results of this section lead to a new characterization of problems having a ptas or fptas by means of fixed-parameter tractability and optimum-asymptotic approximation schemes. This characterization is subsequently used to prove that a large number of problems cannot have an optimum-asymptotic polynomial-time approximation scheme (ptas^∞) unless P=NP. This includes all MAX-SNP-complete problems. From Fig. 1, we can see that this strengthens a result of Arora et al. [1], who only proved that such problems cannot have a ptas.

Additionally, we study reductions that preserve approximability by optimum-asymptotic approximation schemes. We show that several results on the nonexistence of optimum-asymptotic polynomial-time approximation schemes in the literature implicitly use such a reduction and thus follow from the general approach presented here. Furthermore, we prove that Minimum Bin Packing cannot have such a reduction from Maximum Satisfiability unless P=NP. This augments results by Crescenzi et al. [13], who showed that no approximation-preserving reduction exists in this case unless the polynomial hierarchy collapses.

Finally, we propose the notion of pumpability in Sect. 7. Problems having asymptotic polynomial-time approximation schemes can sometimes be ‘pumped’ to a form that admits a ptas, eptas, or even an fptas if the optimization problem under consideration is pumpable. This is useful for completing Fig. 1, but also for improving the effectiveness of asymptotic approximation schemes. Furthermore, we provide insight into which problems are pumpable and show for instance that all problems in MAX-SNP are pumpable.

2 Preliminaries

To make formal statements about equivalences among classes of approximation schemes, we have to be precise about the machine model we use, the type of problems that are considered, and the definitions of the studied classes. Throughout the paper, we assume the basic random access machine model with logarithmic costs and representations in bits, which implies that within cost (time) t, the machine can output at most t bits. This machine model is polynomially equivalent to the classic Turing machine and thus defines the classic complexity classes up to polynomial time factors. Furthermore, all numbers are assumed to be rationals, unless otherwise specified.

Using this model, we study optimization problems following the definitions as can be found for instance in Ausiello et al. [2].

Definition 2.1

An optimization problem P is characterized by four properties:
  • a set of instances (bitstrings) I_P;

  • a function S_P that maps instances of P to (nonempty) sets of feasible solutions (bitstrings) for these instances;

  • an objective function m_P that gives for each pair (x,y), consisting of an instance x ∈ I_P and a solution y ∈ S_P(x), a positive integer m_P(x,y), the objective value;

  • a goal goal_P ∈ {min, max}, depending on whether P is a minimization or a maximization problem.

We denote by \(S^{*}_{P}(x) \subseteq S_{P}(x)\) the set of optimal solutions for an instance x ∈ I_P, i.e. \(S^{*}_{P}(x)\) consists of all y* ∈ S_P(x) for which
$$m_{P}(x, y^{*}) = \mathrm{goal}_{P} \{ m_{P}(x,y) \mid y \in S_{P}(x) \}.$$
The objective function value attained by the optimal solutions for an instance x is denoted \(m^{*}_{P}(x)\).

Definition 2.2

An optimization problem P is in the class NPO if
  • the set of instances I_P can be recognized in polynomial time;

  • there is a (monotone nondecreasing) polynomial q_P such that |y| ≤ q_P(|x|) for any instance x ∈ I_P and any feasible solution y ∈ S_P(x);

  • for any instance x ∈ I_P and any y with |y| ≤ q_P(|x|), one can decide in polynomial time whether y ∈ S_P(x);

  • there is a (monotone nondecreasing) polynomial r_P such that the objective function m_P is computable in r_P(|x|,|y|) time for any x ∈ I_P and y ∈ S_P(x).

Note that for any problem P ∈ NPO and any n ∈ ℕ, the maximum objective value over instances of size n, i.e. max{m_P(x,y) ∣ x ∈ I_P, |x|=n, y ∈ S_P(x)}, is bounded by \(2^{r_{P}(n, q_{P}(n))}\), as the objective function value of any x ∈ I_P and y ∈ S_P(x) can be represented by at most r_P(|x|,|y|) ≤ r_P(|x|,q_P(|x|)) bits. Let \(M_{P}(n) =2^{r_{P}(n, q_{P}(n))}\).

Lemma 2.3

For any NPO-problem P, for any x ∈ I_P, and for any n ∈ ℕ, if \(m^{*}_{P}(x) > M_{P}(n)\), then |x|>n.

All problems considered below will be in NPO and all considered classes will be subclasses of NPO. From now on, we drop the subscript P if P is clear from the context.

If one equates NPO to NP, then PO is the equivalent of P. PO is the class of problems in NPO for which an optimal solution y* ∈ S*(x) can be computed in time polynomial in |x| for any x ∈ I. Paz and Moran [36] proved that P=NP implies PO=NPO and vice versa. Because it is not expected that all problems in NPO also fall in PO, several classes have been defined that contain NPO-problems for which an approximate solution can be found in polynomial time. Approximation algorithms are classified by two properties: their running time and their approximation ratio.

Definition 2.4

[2, 21]

For an optimization problem P ∈ NPO, any x ∈ I_P, and any y ∈ S(x), the approximation ratio achieved by y for x is
$$R(x,y) = \max\biggl\{\frac{m(x,y)}{m^{*}(x)}, \frac {m^{*}(x)}{m(x,y)}\biggr\}.$$
We say that y is within (a factor) r of m*(x) if R(x,y) ≤ r. The approximation ratio of an algorithm \(\mathcal {A}\) is defined as
$$R_{\mathcal {A}} = \max\{ R(x,\mathcal {A}(x)) \mid x \in I_{P} \}.$$
Any textbook on approximation algorithms covers at least the classes of Table 1. The table should be interpreted as follows: PTAS, for instance, is the class of optimization problems P in NPO having a ptas, i.e. having an algorithm \(\mathcal {A}\) such that for any instance x ∈ I_P and any ϵ>0, \(\mathcal {A}(x,\epsilon)\) runs in time polynomial in |x| for every fixed ϵ and the solution output by \(\mathcal {A}(x,\epsilon)\) has approximation ratio (1+ϵ). We use lower-case letters for a scheme name and upper-case letters for the name of the corresponding class (i.e. ptas and PTAS).
Table 1  Problem classes and the distinguishing properties of the approximation algorithms admitted by problems in a particular class

Problem class    Running time                              Approx. ratio
APX              Polynomial in |x|                         c
PTAS             Polynomial in |x| (for every fixed ϵ)     (1+ϵ)
FPTAS            Polynomial in |x| and 1/ϵ                 (1+ϵ)
FIPTAS           Polynomial in |x|                         (1+ϵ)
PO               Polynomial in |x|                         1

The class FIPTAS (Fully Input-Polynomial-Time Approximation Scheme) in Table 1 is a new class. Clearly, FIPTAS=PO (use ϵ=1/M(|x|)), but the reason for defining this class will become apparent later.
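The parenthetical argument can be spelled out with a short calculation (a sketch; recall that objective values are positive integers and that a value representable in r_P bits is at most \(2^{r_P}-1 < M(|x|)\)). For a minimization problem, a solution y within (1+1/M(|x|)) of the optimum satisfies
$$m(x,y) \leq \biggl(1 + \frac{1}{M(|x|)}\biggr) m^{*}(x) < m^{*}(x) + 1.$$
As both sides are integers, m(x,y)=m*(x), so running a fiptas with ϵ=1/M(|x|) returns an optimal solution in time polynomial in |x|; the maximization case is symmetric.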

A relatively new class of increasing interest is EPTAS [3, 7].

Definition 2.5

Algorithm \(\mathcal {A}\) is an efficient polynomial-time approximation scheme (eptas) for problem P ∈ NPO if there is a computable function f:ℚ≥1→ℕ such that for any x ∈ I_P and any ϵ>0, \(\mathcal {A}(x, \epsilon)\) runs in time f(1/ϵ) times a fixed polynomial in |x| and the solution output by \(\mathcal {A}(x, \epsilon)\) has approximation ratio (1+ϵ). An NPO-problem is in the class EPTAS if and only if it has an eptas.

The popularity of eptas is not only due to the separate dependence on 1/ϵ and the instance size in the running time, but also to the beautiful relation to the widely researched class FPT: any problem admitting an eptas is also in FPT in its standard parameterization [3, 7]. An interesting exploration of the type of problems that admit an eptas may be found in Cai et al. [5].

It is well-known that PO ⊆ FPTAS ⊆ EPTAS ⊆ PTAS ⊆ APX ⊆ NPO. Each of these inclusions is strict unless P=NP, except for EPTAS ⊆ PTAS, which is strict unless FPT=W[1] [3, 7]. The question whether FPT=W[1] is an open problem in fixed-parameter complexity theory akin to the question whether P=NP in classic complexity theory (see e.g. Downey and Fellows [17]).

3 Asymptotic Approximation Schemes

Informally, an approximation scheme is asymptotic if it gives a (1+ϵ)-approximation under a condition that is asymptotically true. We study two types of asymptotic approximation schemes. We first consider approximation schemes where the size of the instance needs to be large enough. The other type is treated in Sect. 5.

Definition 3.1

An approximation scheme \(\mathcal {A}\) for P ∈ NPO is size-asymptotic if there is a computable function a:ℚ≥1→ℕ (the threshold function) such that for any ϵ>0 and any x ∈ I_P, it returns a y ∈ S(x), and if |x| ≥ a(1/ϵ), then y is within (1+ϵ) of m*(x).

This definition leads to the following classes of size-asymptotic approximation schemes.

Problem class    Running time                              Approx. ratio
PTAS^ω           Polynomial in |x| (for every fixed ϵ)     (1+ϵ) if |x| ≥ a(1/ϵ)
FPTAS^ω          Polynomial in |x| and 1/ϵ                 (1+ϵ) if |x| ≥ a(1/ϵ)
FIPTAS^ω         Polynomial in |x|                         (1+ϵ) if |x| ≥ a(1/ϵ)

Example 3.2

Maximum Independent Set has a fiptas^ω on bounded-ply disk graphs [41, 42]. Disk graphs are intersection graphs of disks in the plane, i.e. given a set of disks, each vertex of the graph corresponds to a disk and there is an edge between two vertices if the corresponding disks intersect. A set of disks has ply γ if γ is the smallest integer such that any point of the plane is overlapped by at most γ disks. One can find in O(|x|^{10} log^{4} |x|) time an independent set of an instance x of this problem. If an odd integer k can be chosen such that max{5, 4(1+ϵ)/ϵ} ≤ k ≤ c_1 log|x| / log(c_2 γ) (where c_1, c_2 are fixed constants), then this independent set will be within (1+ϵ) of the optimum. If γ = γ(|x|) = O(|x|^{o(1)}), such an integer exists if |x| ≥ a(1/ϵ) for some function a.

We start with some easy observations about the size-asymptotic classes.

Proposition 3.3

The following relations hold:
  • FIPTAS^ω ⊆ FPTAS^ω ⊆ PTAS^ω and

  • FIPTAS ⊆ FIPTAS^ω, FPTAS ⊆ FPTAS^ω, PTAS ⊆ PTAS^ω.

The relations given by this proposition are straightforward and one might expect that the inclusions are strict under some hardness condition. However, this turns out not to be true for all of them. We can prove some very interesting equivalences and tie these new classes to existing approximation classes, in particular to EPTAS.

Theorem 3.4

EPTAS = FPTAS^ω = FIPTAS^ω.

Proof

We first show that EPTAS ⊆ FIPTAS^ω. Let P ∈ EPTAS and let \(\mathcal{A}\) be an eptas for P with running time at most p(|x|)⋅f(1/ϵ) for some computable function f and polynomial p. Construct a fiptas^ω for P as follows. Given an arbitrary instance x ∈ I_P and an arbitrary ϵ>0, run \(\mathcal{A}(x, \epsilon)\) for p(|x|)⋅|x| time steps. If \(\mathcal{A}(x, \epsilon)\) finishes, return the solution given by \(\mathcal{A}(x, \epsilon)\). Otherwise, return \(\mathcal{A}(x, 1/2)\). This algorithm clearly runs in time polynomial in |x| and always returns a feasible solution. Furthermore, if |x| ≥ f(1/ϵ), then \(\mathcal {A}(x, \epsilon)\) always finishes and returns a feasible solution with approximation ratio (1+ϵ). Hence we constructed a fiptas^ω for P with a=f.
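The timed-simulation wrapper in this direction can be sketched in Python as a toy illustration (not part of the formal development). Everything concrete here is an assumption for the sketch: the stand-in eptas with f(1/ϵ) = round(1/ϵ) and p(n) = n, and the generator-based step counting.

```python
# Sketch of the EPTAS ⊆ FIPTAS^ω direction of Theorem 3.4: run the eptas
# under a step budget p(|x|)·|x|; if it does not finish (the instance is
# small), fall back to the constant-error run A(x, 1/2).

def eptas(x, eps):
    """Stand-in eptas as a generator: yields once per 'time step', then
    returns a (1+eps)-quality answer. Runs for |x| * f(1/eps) steps."""
    f = max(1, round(1 / eps))
    for _ in range(len(x) * f):
        yield
    return ("solution", eps)

def run_with_budget(gen, budget):
    """Advance `gen` for at most `budget` yielded steps; return its result
    if it finishes, else None."""
    steps = 0
    while True:
        try:
            next(gen)
        except StopIteration as done:
            return done.value
        steps += 1
        if steps > budget:
            return None

def fiptas_omega(x, eps):
    n = len(x)
    y = run_with_budget(eptas(x, eps), n * n)          # p(n)·n steps, p(n) = n
    if y is None:                                      # |x| < f(1/eps): fall back
        y = run_with_budget(eptas(x, 0.5), 2 * n + 1)  # f(2) = 2: always finishes
    return y
```

With a real eptas, the budget p(|x|)⋅|x| dominates the running time p(|x|)⋅f(1/ϵ) exactly when |x| ≥ f(1/ϵ), which is why the threshold function is a = f.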

We next prove that FPTAS^ω ⊆ EPTAS. Let P ∈ FPTAS^ω and let \(\mathcal{A}\) be an fptas^ω for P with threshold function a. Construct an eptas as follows. Given an arbitrary instance x ∈ I_P and an arbitrary ϵ>0, compute a(1/ϵ). By assumption, a(1/ϵ) is computable. The amount of time it takes to compute a(1/ϵ) is some computable function depending on 1/ϵ. If |x| ≥ a(1/ϵ), simply compute and return \(\mathcal {A}(x, \epsilon)\) in time polynomial in |x| and 1/ϵ. If |x| < a(1/ϵ), proceed as follows. As FPTAS^ω ⊆ NPO, any feasible solution for x has size at most q(|x|) for some fixed polynomial q. Furthermore, given any y with |y| ≤ q(|x|), one can determine in polynomial time whether y ∈ S_P(x). The objective value of a feasible solution can also be computed in polynomial time. Hence by employing exhaustive search, one can find a \(y^{*} \in S_{P}^{*}(x)\) in time \(\mathrm{poly}(|x|) \cdot 2^{q(|x|)} \cdot r_{P}(|x|, q(|x|)) \leq 2^{q(a(1/\epsilon))} \cdot \mathrm{poly}(a(1/\epsilon))\). The result is an eptas for P with appropriately defined function f.
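The small-instance branch is plain brute force over candidate solution strings. A minimal sketch follows; the toy problem (independent sets on a path, encoded as bitstrings) is an assumption chosen purely for illustration.

```python
from itertools import product

def exhaustive_optimum(x, is_feasible, objective, q):
    """Enumerate every bitstring y with |y| <= q(|x|), keep the feasible
    ones, and return a best one (maximization). This examines O(2^{q(|x|)})
    candidates, matching the poly(|x|) * 2^{q(|x|)} bound in the proof."""
    best, best_val = None, None
    for length in range(q(len(x)) + 1):
        for bits in product("01", repeat=length):
            y = "".join(bits)
            if is_feasible(x, y):
                v = objective(x, y)
                if best_val is None or v > best_val:
                    best, best_val = y, v
    return best

# Toy problem: independent sets on a path with |x| vertices, encoded as
# bitstrings without two adjacent 1s; objective = chosen vertices + 1 (so
# that objective values are positive integers, as Definition 2.1 requires).
is_ind = lambda x, y: len(y) == len(x) and "11" not in y
size = lambda x, y: y.count("1") + 1
```

Since the branch is only taken when |x| < a(1/ϵ), the exponential cost is a function of 1/ϵ alone, which is exactly what the eptas running-time format allows.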

As FIPTAS^ω ⊆ FPTAS^ω, we have EPTAS ⊆ FIPTAS^ω ⊆ FPTAS^ω ⊆ EPTAS, and hence the classes must be equal. □

The exponential increase in running time in the reduction from an fptas^ω to an eptas might be reduced by using an exact or fixed-parameter algorithm specific to the problem. As we show in Sect. 7, one can avoid such an increase altogether for many problems.

The equivalence of F(I)PTAS^ω and EPTAS allows an indirect proof of the existence of an eptas for a problem, where a direct proof seems more difficult.

Example 3.5

Maximum Independent Set on disk graphs of bounded ply has a fiptas^ω (Example 3.2) and thus, as a consequence of Theorem 3.4, an eptas.

We now show that PTAS^ω and PTAS are in fact equivalent as well.

Theorem 3.6

PTAS = PTAS^ω.

Proof

By Proposition 3.3, it suffices to prove that PTAS^ω ⊆ PTAS. Let P ∈ PTAS^ω and let \(\mathcal {A}\) be a ptas^ω for P with threshold function a. For an arbitrary instance x ∈ I_P and an arbitrary ϵ>0, compute a(1/ϵ). If |x| ≥ a(1/ϵ), compute and return \(\mathcal {A}(x, \epsilon)\). Otherwise, apply the same exhaustive search technique as in the proof of Theorem 3.4. The result is a ptas for P. □

4 Convergent Approximation Schemes

Size-asymptotic approximation schemes have a threshold function, depending on 1/ϵ, such that a good approximate solution is guaranteed if the size of the instance is larger than the threshold. It seems then that the quality of the computed solution can be arbitrarily bad for small instances, while from a certain instance size onward, the quality suddenly becomes very good. Practical examples of size-asymptotic approximation schemes show however that the approximation ratio can improve steadily as the instance size increases and eventually converges (to 1).

Surprisingly, this also holds in general. In this section, we define and study these convergent approximation schemes more precisely. The main result is that a problem has an fptas^ω if and only if it also has a convergent approximation scheme.

In the following, we use \(\mathcal {F}^{*}\) to denote the family of all monotone nondecreasing computable functions f:ℕ→ℚ≥1 with liminf n→∞ f(n)=∞. Let \(\mathcal {P}\) denote the family of those functions in \(\mathcal {F}^{*}\) that are bounded by a (monotone) polynomial.

Definition 4.1

Let \(f \in \mathcal {F}^{*}\). An approximation scheme \(\mathcal {A}\) for P ∈ NPO is said to be ϵ-convergent w.r.t. f if for any ϵ>0 and any x ∈ I_P, \(\mathcal {A}(x,\epsilon)\) returns a y ∈ S(x) within (1+ϵ/f(|x|)) of m*(x).

This definition gives rise to schemes ptas[f], fptas[f], and fiptas[f] and the classes PTAS[f], FPTAS[f], and FIPTAS[f] in the natural way. We also define a special subclass for the case when ϵ=1.

Definition 4.2

Let \(f \in \mathcal {F}^{*}\). Algorithm \(\mathcal {A}\) is a convergent polynomial-time approximation scheme w.r.t. f (denoted as pconv[f]) if for any x ∈ I, \(\mathcal {A}(x)\) runs in time polynomial in |x| and returns a y ∈ S(x) within (1+1/f(|x|)) of m*(x). The corresponding class is PCONV[f].

Example 4.3

Chiba et al. [10] give a pconv[\(O(\sqrt{\log\log|x|})\)] for Maximum Independent Set on planar graphs. This follows from a general O(|x| log |x|) algorithm giving a \((1 + 1/O(\sqrt{\log\log|x|}))\)-approximation for several hereditary problems on planar graphs. Demaine et al. [16] give a pconv[log |x|] for Maximum Independent Set on single-crossing-minor-free graphs (a generalization of planar graphs).

Definition 4.4

For any family of functions \(\mathcal {F} \subseteq \mathcal {F}^{*}\), let \(\mathrm{PCONV}[\mathcal {F}] = \bigcup_{f \in \mathcal {F}}\mathrm{PCONV}[f]\). We similarly define \(\mathrm{PTAS}[\mathcal {F}]\), \(\mathrm{FPTAS}[\mathcal {F}]\), and \(\mathrm{FIPTAS}[\mathcal {F}]\).

We first state some straightforward relations.

Proposition 4.5

The following relations hold:
  • \(\mathrm{FIPTAS}[\mathcal {F}^{*}] = \mathrm{PO}\), \(\mathrm{FPTAS}[\mathcal {F}^{*}] \subseteq \mathrm{FPTAS}\), \(\mathrm{PTAS}[\mathcal {F}^{*}] \subseteq \mathrm{PTAS}\),

  • for any \(f \in \mathcal {F}^{*}\), FIPTAS[f]⊆FPTAS[f]⊆PTAS[f]⊆PCONV[f], and

  • for any \(f,f' \in \mathcal {F}^{*}\) with f(n)≤f′(n) for any n∈ℕ, FIPTAS[f′]⊆FIPTAS[f], FPTAS[f′]⊆FPTAS[f], PTAS[f′]⊆PTAS[f], and PCONV[f′]⊆PCONV[f].

Looking closely at the papers cited in Example 4.3, one can observe that each of the described algorithms is in fact both a pconv[f] (for a certain f) and an eptas. This is not a coincidence!

Lemma 4.6

If P ∈ FPTAS^ω, then P ∈ PCONV[f] for some \(f \in \mathcal {P}\).

Proof

Let P ∈ FPTAS^ω and let \(\mathcal {A}\) be an fptas^ω for P delivering a (1+ϵ)-approximate solution if |x| ≥ a(1/ϵ), for some computable function a. We use \(\mathcal {A}\) to construct a pconv[f] for a suitably chosen function f.

Fix a monotone, polynomially-bounded function p(n) (for instance p(n)=n). Because a is computable, we can compute
$$c_{1} := 0,\qquad c_{2} := a(2),\qquad c_{3} := a(3),\qquad c_{4} := a(4),\qquad \ldots $$
Compute the values of this series (starting at c_1, then c_2, c_3, …) for at most p(n) time steps in total. Let f(n) be the highest index k that satisfies n ≥ c_k among the fully computed values c_k. Because c_1=0, this function is properly defined. Furthermore, f is a monotone, computable function with liminf_{n→∞} f(n)=∞ and f(n) ≤ p(n).

Observe that because FPTAS^ω ⊆ PTAS, P must have a 2-approximation algorithm \(\mathcal {B}\) running in time polynomial in the size of the input. Now consider the following algorithm \(\mathcal {A}'(x)\) for instances x ∈ I_P: if f(|x|)=1, return \(\mathcal {B}(x)\); otherwise, return \(\mathcal {A}(x, \epsilon)\) with ϵ=1/f(|x|). We claim that \(\mathcal {A}'(x)\) is a pconv[f] for P with f as defined above.

Let y be the solution output by \(\mathcal {A}'(x)\) for some x ∈ I_P. Clearly, y ∈ S(x). If f(|x|)=1, \(\mathcal {A}'(x)\) trivially returns a (1+1/f(|x|))-approximation. If f(|x|)>1, then because |x| ≥ c_{f(|x|)} = a(f(|x|)), y is a (1+1/f(|x|))-approximate solution. Furthermore, the running time is polynomial in |x|: f(|x|) is computable in at most p(|x|) time steps by construction, and both \(\mathcal {B}(x)\) and \(\mathcal {A}(x, 1/f(|x|))\) run in time polynomial in |x| and 1/ϵ = f(|x|) ≤ p(|x|). Hence we achieve a (1+1/f(|x|))-approximation in time polynomial in |x| and we thus have that P ∈ PCONV[f]. □
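The budgeted construction of f can be sketched as follows; the threshold function a(k)=2^k, the budget p(n)=n, and the unit cost per evaluation of a are toy assumptions (in general, the actual time to compute each a(k) must be charged against the budget).

```python
def make_f(a, p):
    """f from the proof of Lemma 4.6: spend at most p(n) 'time steps'
    computing c_1 = 0, c_2 = a(2), c_3 = a(3), ..., then return the
    largest index k with n >= c_k among the fully computed values.
    Here each evaluation of `a` is charged one toy unit of time."""
    def f(n):
        budget = p(n)
        c = [0]                 # c_1 = 0, so f(n) is defined for every n
        k = 2
        while budget >= 1:      # one unit of the budget per evaluation of a(k)
            budget -= 1
            c.append(a(k))
            k += 1
        return max(i + 1 for i, ck in enumerate(c) if n >= ck)
    return f
```

The pconv[f] then simply returns \(\mathcal{B}(x)\) when f(|x|)=1 and \(\mathcal{A}(x, 1/f(|x|))\) otherwise, as in the proof.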

The lemma implies that \(\mathrm{FPTAS}^{\omega}\subseteq \mathrm{PCONV}[\mathcal {F}^{*}]\). By Theorem 3.4, this in turn implies that \(\mathrm{EPTAS} \subseteq \mathrm{PCONV}[\mathcal {F}^{*}]\). We can give a direct proof of this consequence, which has the additional advantage that the function of 1/ϵ in the running time of the eptas is only needed for the analysis and not for the algorithmic part of the reduction.

Lemma 4.7

If P∈EPTAS, then P∈PCONV[f] for some \(f \in \mathcal {P}\).

Proof

Let \(\mathcal {A}\) be an eptas for P, which runs in time g(1/ϵ)⋅p(|x|) for any x ∈ I_P and any ϵ>0, for some function g and polynomial p. We assume that g and p are computable and that p is known (g is not necessarily known). Now run \(y_{1} = \mathcal {A}(x, 1)\) to completion, run \(y_{2} = \mathcal {A}(x, 1/2)\), …, \(y_{|x|+1} = \mathcal {A}(x, 1/(|x|+1))\) for at most |x|⋅p(|x|) time steps each, and return the best solution (w.r.t. goal_P) among the y_k for which \(\mathcal {A}\) finished within the allotted time. This algorithm clearly always outputs a feasible solution in polynomial time. We claim that it yields a (1+1/f(|x|))-approximation for a suitably chosen monotone computable function f with liminf_{n→∞} f(n)=∞.

We construct the function f as a piecewise constant function on a sequence of intervals [n 1,n 2), [n 2,n 3),… . Define n 1=0 and for any i≥2,
$$n_{i} = \max\{ n_{i-1}+1, g(i) \}.$$
Define
$$f(n) = i \quad\mbox{if\ } n \in[n_{i}, n_{i+1}) \mbox{\ for some\ } i\geq1.$$
Clearly, n_i ≥ n_{i−1}+1 and n_i is finite for any i. Hence f is a monotone nondecreasing function with liminf_{n→∞} f(n)=∞ and f(n) ≤ n+1 for any n ∈ ℕ. Hence \(f \in \mathcal {P}\). Furthermore, f(n) is computable, as we only need to compute n_i for a finite number of values of i and g is computable.

If f(|x|)=1 then, since \(\mathcal {A}(x,1)\) runs to completion, the algorithm delivers at least a (1+1/f(|x|))-approximation. If f(|x|) is equal to i for some i≥2, then by the construction of f and n_i, we have |x| ≥ n_i ≥ g(i) and i ≤ |x|+1. Hence certainly \(\mathcal {A}(x, 1/i)\) runs to completion within the time limit set for it and the algorithm returns at least a (1+1/i)-approximation of the optimum. But then the algorithm returns at least a (1+1/f(|x|))-approximation of the optimum. □
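Note that the algorithm in this proof never evaluates g; it only needs p and a time budget. A toy sketch follows, where the cost model g(1/ϵ)=2^{1/ϵ}, the budget p(n)=n, and the convention that a run simply reports the error bound it guarantees are all illustrative assumptions (a real implementation would compare objective values with goal_P).

```python
def best_budgeted_run(x, timed_eptas, p):
    """Run A(x,1), A(x,1/2), ..., A(x,1/(|x|+1)), allotting |x|*p(|x|)
    steps to each, and keep the best guarantee among the runs that finish.
    `timed_eptas(x, eps)` returns (steps_needed, error_bound)."""
    n = len(x)
    budget = n * p(n)
    best = None
    for k in range(1, n + 2):
        steps, err = timed_eptas(x, 1.0 / k)
        if steps <= budget and (best is None or err < best):
            best = err          # this run finished within its time limit
    return best

# Toy cost model: g(1/eps) = 2^(1/eps) and p(n) = n, so the run with
# eps = 1/k finishes within its budget exactly when 2^k <= |x|.
toy = lambda x, eps: (2 ** round(1 / eps) * len(x), eps)
```

For |x|=16 the runs with k ≤ 4 finish, so the best guaranteed error is 1/4, matching f(|x|) ≈ g^{−1}(|x|) = log_2 |x| in this cost model.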

We now prove the converse relation.

Lemma 4.8

If \(P \in \mathrm{PCONV}[\mathcal {F}^{*}]\), then P ∈ FIPTAS^ω.

Proof

Let P ∈ PCONV[f] for some \(f\in \mathcal {F}^{*}\) and let \(\mathcal {A}\) be a pconv[f] for P. We claim that running \(\mathcal {A}\) yields a fiptas^ω for P with threshold function
$$a(1/\epsilon) := \left\{\begin{array}{l@{\quad}l}0 & \mbox{if $1/\epsilon\leq f(0)$} \\[3pt]\max\{ n \in\mathbb{N} \mid1/\epsilon> f(n) \} & \mbox{otherwise.}\end{array}\right.$$
Note that a is indeed properly defined, as f is monotone and liminf n→∞ f(n)=∞.

Clearly, a is a computable function and algorithm \(\mathcal {A}(x)\) runs in time polynomial in |x| for any x ∈ I_P and any ϵ>0. Furthermore, if |x| ≥ a(1/ϵ), then ϵ ≥ 1/f(|x|) by the definition of a. Hence (1+1/f(|x|)) ≤ (1+ϵ) and thus \(\mathcal {A}(x)\) returns a (1+ϵ)-approximate solution. □
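The threshold function a is computable by direct search, since f is monotone nondecreasing and tends to infinity. A minimal sketch (the naive linear search is purely illustrative; any procedure computing the same value would do):

```python
def threshold(f, inv_eps):
    """a(1/eps) from the proof of Lemma 4.8: 0 if 1/eps <= f(0),
    otherwise max{ n : 1/eps > f(n) }. The search terminates because f
    is monotone nondecreasing with f(n) -> infinity."""
    if inv_eps <= f(0):
        return 0
    n = 0
    while f(n + 1) < inv_eps:   # advance while f is still below 1/eps
        n += 1
    return n
```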

Note that the function a in the proof of Lemma 4.8 is essentially the inverse of f (it is not precisely the inverse, as f might not be invertible). Similarly, the function f constructed in the proof of Lemma 4.7 is the ‘inverse’ of g.

Using Theorem 3.4 and Lemmas 4.7 and 4.8, we obtain the following key theorem.

Theorem 4.9

\(\mathrm{EPTAS} = \mathrm{FPTAS}^{\omega}= \mathrm{FIPTAS}^{\omega}=\mathrm{PCONV}[\mathcal {F}^{*}]\).

To state this theorem informally: for polynomial-time approximation schemes, having a single factor depending (only) on 1/ϵ in the running time is equivalent to having such a function as a size threshold for yielding a (1+ϵ)-approximation, which in turn is equivalent to having the attained approximation ratio improve to 1 as the instance size increases.

4.1 Detailed Relations for Specific Functions

If we have specific knowledge of the function f that appears in the approximation ratio of a convergent approximation scheme, we can prove some more detailed relations. First we consider the family of classes where this function is bounded by a polynomial.

Theorem 4.10

FPTAS=FPTAS[f] for any \(f \in \mathcal {P}\).

Proof

By Proposition 4.5, it suffices to show that FPTAS ⊆ FPTAS[f] for any \(f \in \mathcal {P}\). Let P ∈ FPTAS and let \(\mathcal {A}\) be an fptas for P. Let f be upper bounded by the (monotone) polynomial p, i.e. f(n) ≤ p(n) for all n>0. Consider an x ∈ I_P and an ϵ>0. Compute p(|x|) and run \(\mathcal {A}(x, \epsilon/p(|x|))\). This algorithm runs in time polynomial in |x| and 1/ϵ and returns a (1+ϵ/p(|x|)) ≤ (1+ϵ/f(|x|))-approximate solution. Hence P ∈ FPTAS[f]. □
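The reduction is just a rescaling of ϵ. A minimal sketch, where the stand-in fptas (which simply reports the error bound it was asked for) and the bound p(n)=n² are assumptions for illustration:

```python
def fptas_with_convergent_ratio(fptas, p):
    """Theorem 4.10: from an fptas A, build an fptas[f] for any f bounded
    by the monotone polynomial p, by calling A(x, eps / p(|x|)). The time
    stays polynomial in |x| and 1/eps, since 1/(eps/p(|x|)) = p(|x|)/eps,
    and the (1 + eps/p(|x|)) guarantee implies (1 + eps/f(|x|))."""
    def scheme(x, eps):
        return fptas(x, eps / p(len(x)))
    return scheme

# Stand-in fptas that returns the error bound it guarantees.
scheme = fptas_with_convergent_ratio(lambda x, e: e, lambda n: n ** 2)
```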

Corollary 4.11

\(\mathrm{FPTAS}[\mathcal {P}] = \bigcup_{f \in \mathcal {P}} \mathrm{FPTAS}[f] = \bigcap_{f \in \mathcal {P}} \mathrm{FPTAS}[f]\).

A problem P ∈ NPO is polynomially-bounded if there is a polynomial p:ℕ→ℕ such that m(x,y) ≤ p(|x|) for all x ∈ I_P and y ∈ S_P(x) [2, 20].

Lemma 4.12

Let P be an NP-hard, polynomially-bounded optimization problem and let p be its bounding (monotone) polynomial plus 1. Then P ∉ PCONV[p], unless P=NP.

Proof

Assume that P is a minimization problem (the case when P is a maximization problem is similar). Suppose that P∈ PCONV[p]. Then there exists an algorithm \(\mathcal {A}\) such that for any instance x of P, \(\mathcal {A}(x)\) runs in time polynomial in |x| and delivers a feasible solution y with approximation ratio 1+1/p(|x|). But then
$$m(x, y) \leq(1 + 1/p(|x|)) \cdot m^{*}(x) < m^{*}(x) + 1.$$
But this is only possible if m(x,y)=m*(x). This means that we found an optimal solution in polynomial time. This is impossible unless P=NP. □

Using this lemma and the easy fact that FPTAS ⊆ PCONV[f] for any \(f \in \mathcal {P}\), we can show the following corollary, which is well known and easy [20].

Corollary 4.13

No NP-hard, polynomially-bounded optimization problem admits an fptas, unless P=NP.

Proof

Let P be an NP-hard, polynomially-bounded optimization problem and let p be its bounding polynomial plus 1. Then \(P \notin \mathrm{PCONV}[p]\) and thus \(P \notin \mathrm{FPTAS}\), unless P=NP. □

For functions in the approximation ratio of convergent schemes that are not polynomial, we can also prove some interesting relations. As noted before, the function f constructed in the proof of Lemma 4.7 is the ‘inverse’ of the function g in the running time of the eptas. This leads to the following general result.

Theorem 4.14

If g is an invertible function, then a problem that has an eptas with running time g(1/ϵ)⋅|x|^{O(1)} also has a pconv[g^{−1}(|x|)].

For instance, if \(g(1/\epsilon) = 2^{2^{1/\epsilon}}\), then the problem has a pconv[log log |x|]. The statement of this theorem is not ‘if and only if’, because Lemma 4.8 only gives a fiptas^ω. Transforming it to an eptas using Theorem 3.4 increases the running time exponentially in g.

For some functions, one can even improve on Theorem 4.14.

Theorem 4.15

For any polynomial p of degree s, a problem that has an eptas with running time bounded by 2^{p(1/ϵ)}⋅|x|^{O(1)} also has a ptas[log^{1/s} |x|].

It seems unlikely that an equivalence as in Theorem 4.15 will also hold if g is doubly-exponential.

5 Optimum-Asymptotic Approximation Schemes

Approximation schemes that give a (1+ϵ)-approximation if the optimum is large enough are quite common. They are particularly well known for Minimum Bin Packing (see e.g. [11, 12, 27]), but similar schemes exist for other problems, e.g. for Minimum Degree Spanning Tree [19] and Chromatic Index [43]. There are also several ways to define what constitutes an optimum-asymptotic approximation scheme [2, 12, 21, 26]. We prove that these definitions are actually equivalent. More interestingly, we revisit the relation of optimum-asymptotic approximation schemes to nonasymptotic approximation schemes and show that EPTAS is a subclass of the optimum-asymptotic classes. Finally, we investigate the relations between various types of optimum-asymptotic approximation schemes.

We first define optimum-asymptotic approximation schemes. The definition we use is both in line with previous definitions of optimum-asymptotic approximation schemes (see e.g. [21]) and with the definition of size-asymptotic approximation schemes (see Definition 3.1).

Definition 5.1

An approximation scheme \(\mathcal {A}\) for P∈ NPO is optimum-asymptotic if there is a computable function b:ℚ_{≥1}→ℕ (the threshold function) and an associated constant ϵ_b with the property that b(1/ϵ)≤1 for each ϵ≥ϵ_b, such that for any ϵ>0 and any x∈I_P, it returns a y∈S(x), and if m*(x)≥b(1/ϵ), then y is within (1+ϵ) of m*(x).

This leads to the definition of ptas^∞, fptas^∞, and fiptas^∞ schemes and to classes PTAS^∞, FPTAS^∞, and FIPTAS^∞, all defined as expected.

Example 5.2

Karmarkar and Karp [27] give a fiptas^∞ for Minimum Bin Packing. For an instance x, the returned solution has objective value at most m*(x)+log² m*(x) and is found in \(\widetilde{O}(|x|^{8})\) time, where the \(\widetilde{O}\) hides certain polylogarithmic terms.

Note that optimum-asymptotic schemes are defined analogously to size-asymptotic schemes, except for the extra requirement on b(1/ϵ). This technicality seems to be indispensable when trying to prove that optimum-asymptotic approximation scheme classes are a subclass of APX and behave as the known asymptotic approximation classes. In particular, it facilitates the following crucial property.

Lemma 5.3

Let \(\mathcal {A}\) be a ptas^∞ for a problem P and let ϵ_b be the constant associated with \(\mathcal {A}\)’s threshold function b. Then P has a polynomial-time (1+ϵ_b)-approximation algorithm.

This follows immediately from the fact that m*(x)≥1≥b(1/ϵ_b) for any x∈I_P.

Corollary 5.4

PTAS^∞⊆APX.

It follows from Queyranne [38] that this inclusion is strict unless P=NP (see also Theorem 7.7).

5.1 Equivalence of Definitions

Lemma 5.3 can be used to prove that the definition of optimum-asymptotic approximation schemes yields classes that are polynomially equivalent to the asymptotic approximation classes defined in the literature [2, 12].

Definition 5.5

Algorithm \(\mathcal {A}\) is an asymptotic polynomial-time approximation scheme (aptas) for P if for any ϵ>0 there is a computable constant c_ϵ such that for any x∈I_P, \(\mathcal {A}(x,\epsilon)\) runs in time polynomial in |x| for every fixed ϵ and the solution y output by \(\mathcal {A}(x, \epsilon)\) is feasible and within (1+ϵ)+c_ϵ/m*(x) of m*(x). A problem is in the class APTAS if and only if it has an aptas.

One similarly defines afptas and afiptas and corresponding classes AFPTAS and AFIPTAS in the natural way. These classes are sometimes also referred to as PTAAS, FPTAAS, and FIPTAAS, for (Fully-)(Input-)Polynomial-Time Asymptotic Approximation Scheme [12]. The class APTAS has also been called ASY-PTAS [44].

Example 5.6

Coffman and Lueker [12] present an AFPTAS (or FPTAAS) for Extensible Bin Packing with c_ϵ=O((1/ϵ)log(1/ϵ)).

Theorem 5.7

APTAS=PTAS^∞, AFPTAS=FPTAS^∞, and AFIPTAS=FIPTAS^∞.

Proof

Consider a problem P∈ APTAS and an aptas \(\mathcal {A}\) for P. It is well known (and easily proved) that APTAS ⊆ APX [2], so let \(\mathcal {B}\) be a polynomial-time c-approximation algorithm for P for some constant c>1. We claim that for any x∈I_P and any ϵ>0, the algorithm returning the solution attaining \(\mathrm {goal}_{P}\{m(x,\mathcal {A}(x, \epsilon/2)), m(x,\mathcal {B}(x))\}\) is a ptas^∞ with threshold function
$$b(1/\epsilon) = \left\{\begin{array}{l@{\quad}l} 1 & \mbox{if\ } \epsilon\geq c-1 \\[3pt]c_{\epsilon/2} \cdot2/\epsilon& \mbox{otherwise}\end{array}\right.$$
and associated constant ϵ_b :=c−1.

The function b is obviously computable, since c_ϵ is computable. As \(\mathcal {A}\) and \(\mathcal {B}\) run in time polynomial in |x| for any instance x∈I_P and every fixed ϵ>0, it remains to show that the returned solution is a (1+ϵ)-approximate solution on instances x∈I_P with m*(x)≥b(1/ϵ). If ϵ≥ϵ_b, then \(\mathcal {B}(x)\) ensures a feasible solution within c and thus within (1+ϵ) of m*(x). For ϵ<ϵ_b, a feasible solution is returned and if m*(x) ≥ c_{ϵ/2}⋅2/ϵ, then (1+ϵ/2)+c_{ϵ/2}/m*(x)≤(1+ϵ), assuring that \(\mathcal {A}(x, \epsilon/2)\) delivers a (1+ϵ)-approximation. This implies that P∈ PTAS^∞.

Next we consider a problem P∈ PTAS^∞ and a ptas^∞ \(\mathcal {A}\) for P with threshold function b and associated constant ϵ_b. By Lemma 5.3, P also has a polynomial-time (1+ϵ_b)-approximation algorithm \(\mathcal {B}\). For any instance x∈I_P and any ϵ>0, we claim that the algorithm returning the solution attaining \(\mathrm {goal}_{P}\{m(x,\mathcal {A}(x, \epsilon)), m(x,\mathcal {B}(x))\}\) is an aptas with c_ϵ=ϵ_b⋅b(1/ϵ). If m*(x)≥b(1/ϵ), then \(\mathcal {A}(x, \epsilon)\) guarantees a (1+ϵ)-approximate solution. If m*(x)≤b(1/ϵ), then \(\mathcal {B}(x)\) guarantees a (1+ϵ_b)-approximate solution. Note that
$$1+\epsilon_{b} \leq(1+\epsilon) + \epsilon_{b} \cdot b(1/\epsilon )/m^{*}(x) = (1+\epsilon) + c_{\epsilon}/m^{*}(x).$$
Hence P∈APTAS.

Similar proofs can be used to show that AFPTAS=FPTAS^∞ and AFIPTAS=FIPTAS^∞. □
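The first half of the proof can be illustrated with a small simulation. In the script below (a toy model, not from the paper: a minimization instance is represented only by its optimum value, and the constants C and c_ϵ are hypothetical), taking the better of the aptas run at ϵ/2 and the c-approximation is checked to be (1+ϵ)-accurate whenever the optimum exceeds the threshold b(1/ϵ) from the proof:

```python
# Toy simulation of the APTAS ⊆ PTAS^infty direction of Theorem 5.7:
# combine an aptas A (worst case (1+eps)*OPT + c_eps on a minimization
# problem) with a c-approximation B.

C = 3.0                             # B is a 3-approximation (assumed)
def c_eps(eps): return 10.0 / eps   # hypothetical additive constant of A

def aptas(opt, eps):                # worst objective value allowed for A
    return (1 + eps) * opt + c_eps(eps)

def b(inv_eps):                     # threshold function from the proof
    eps = 1.0 / inv_eps
    return 1.0 if eps >= C - 1 else c_eps(eps / 2) * 2 / eps

def combined(opt, eps):             # goal_P = min: better of A(x, eps/2) and B(x)
    return min(aptas(opt, eps / 2), C * opt)

for eps in [0.5, 0.1, 0.01]:
    opt = b(1 / eps) + 1            # any optimum above the threshold
    assert combined(opt, eps) <= (1 + eps) * opt
```

At opt = b(1/ϵ) the proof's inequality is tight; any larger optimum leaves slack ϵ/2 per unit, which the loop confirms.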

Because of these equivalences, all complexity results proved below for PTAS^∞, FPTAS^∞, and FIPTAS^∞ also hold for the classes APTAS, AFPTAS, and AFIPTAS respectively.

We can also make an interesting observation about the class of problems that can be approximated within a constant absolute error.

Definition 5.8

A problem P can be approximated within a constant absolute error if there exists an algorithm \(\mathcal {A}\) and constant c≥0 such that for any x∈I_P, \(\mathcal {A}(x)\) runs in time polynomial in |x| and the solution y output by \(\mathcal {A}\) is feasible and satisfies |m(x,y)−m*(x)|≤c.

Example 5.9

Fürer and Raghavachari [19] give a polynomial-time algorithm that approximates Minimum Degree Spanning Tree within constant absolute error 1.

We now prove that all problems admitting an algorithm with constant absolute error must have a fiptas^∞ with a threshold function that is bounded by a linear function, and vice versa.

Theorem 5.10

A problem P can be approximated in polynomial time within a constant absolute error if and only if it has a fiptas^∞ with a threshold function b that is bounded by a linear function.

Proof

Suppose that P can be approximated within a constant absolute error c≥0. Hence it has a (c+1)-approximation algorithm. It then follows from the proof of Theorem 5.7 that P has a fiptas^∞ with b(1/ϵ)=2c/ϵ and associated constant ϵ_b=c.

For the converse, let \(\mathcal {A}\) be a fiptas^∞ for P with a threshold function b that is bounded by a linear function. Without loss of generality, we may assume that
$$b(1/\epsilon) = \left\{\begin{array}{l@{\quad}l}1 & \mbox{if\ } \epsilon\geq\epsilon_{b} \\[3pt]c/\epsilon& \mbox{otherwise}\end{array}\right.$$
for constants ϵ_b ≥0 and c≥1. Then we can also assume that ϵ_b =c−1 by adjusting ϵ_b or c. By Lemma 5.3, P has a polynomial-time (1+ϵ_b)-approximation algorithm \(\mathcal {B}\). For any instance x∈I_P, compute \(y' = \mathcal {B}(x)\) in polynomial time and let s:=m(x,y′).
Consider the case in which goal_P =min. Then m*(x)≤s≤c⋅m*(x). Choose ϵ=c²/s and compute \(y = \mathcal {A}(x, \epsilon)\) in polynomial time. Note that
$$m^{*}(x) \geq\frac{s}{c} = \frac{c}{c^{2}/s}$$
and thus m*(x)≥b(1/ϵ). Hence y is within a factor (1+ϵ) of m*(x). But then
$$m(x,y) \leq(1 + c^{2}/s) \cdot m^{*}(x) = m^{*}(x) + c^{2} \cdot\frac {m^{*}(x)}{s} \leq m^{*}(x) + c^{2}.$$
We thus have an algorithm that approximates P within a constant absolute error of c².
For the case in which goal_P =max the proof is similar. Then s≤m*(x)≤c⋅s. Choose ϵ=c/s and compute \(y =\mathcal {A}(x, \epsilon)\) in polynomial time. Note that
$$m^{*}(x) \geq s = \frac{c}{c/s}$$
and thus m*(x)≥b(1/ϵ). Hence y is within a factor (1+ϵ) of m*(x). But then
$$m(x,y) \geq\frac{m^{*}(x)}{1 + c/s} \geq(1 - c/s) \cdot m^{*}(x) = m^{*}(x) - c \cdot\frac{m^{*}(x)}{s} \geq m^{*}(x) - c^{2}.$$
This gives an algorithm that approximates P again within a constant absolute error of c². □
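The inequality chain of the minimization case can also be verified numerically. In this sketch (an illustration under assumed values, not the paper's algorithm), s plays the role of m(x,B(x)) and we check that a worst-case (1+ϵ)-approximation with ϵ = c²/s stays within absolute error c²:

```python
# Numeric check of the minimization case in the proof of Theorem 5.10:
# with OPT <= s <= c*OPT and eps = c^2/s, a (1+eps)-approximate solution
# has value at most OPT + c^2, a constant absolute error.
TOL = 1e-9  # slack for floating-point rounding

def absolute_error_bound(opt, c):
    for s in [opt, 1.5 * opt, c * opt]:   # possible values of m(x, B(x))
        eps = c * c / s
        assert opt + TOL >= c / eps       # so OPT >= b(1/eps) = c/eps
        worst = (1 + eps) * opt           # worst (1+eps)-approximation
        if worst > opt + c * c + TOL:
            return False
    return True

assert absolute_error_bound(opt=100.0, c=2.0)
assert absolute_error_bound(opt=7.0, c=3.0)
```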

5.2 Equivalence and Containment of Optimum-Asymptotic Classes

Consider now the following natural relations.

Proposition 5.11

The following relations hold:
  • FIPTAS^∞⊆FPTAS^∞⊆PTAS^∞ and

  • FIPTAS⊆FIPTAS^∞, FPTAS⊆FPTAS^∞, PTAS⊆PTAS^∞.

One might hope or expect that for optimum-asymptotic approximation classes relations hold analogous to those for size-asymptotic classes. We show that this is only partially true. First, we investigate the relation between PTAS and PTAS^∞. We know from Theorem 3.6 that PTAS^ω = PTAS, but for optimum-asymptotic classes the equivalent result does not hold (unless P=NP). Actually, we prove a stronger result.

Theorem 5.12

\(\mathrm{FIPTAS}^{\infty}\not{\subseteq}\mathrm{PTAS}\), unless P=NP.

Proof

The Minimum Degree Spanning Tree problem admits a fiptas^∞ (see Example 5.9), but cannot have a ptas unless P=NP [21]. Hence FIPTAS^∞ \(\not{\subseteq}\) PTAS. □

As described in Sects. 6 and 7 however, for many problems the existence of a (f(i))ptas^∞ does imply the existence of a ptas (or better).

Interestingly, there is a close relation between FPTAS^∞ and FIPTAS^∞. In fact, we can prove that the classes are equal.

Theorem 5.13

FPTAS^∞=FIPTAS^∞.

Proof

By Proposition 5.11, it suffices to show that FPTAS^∞ ⊆ FIPTAS^∞. Let P∈ FPTAS^∞ and let \(\mathcal {A}\) be an fptas^∞ for P, such that for some computable function b and for any ϵ>0 and any x∈I_P, \(\mathcal {A}(x, \epsilon)\) runs in at most γ⋅(1/ϵ)^s⋅|x|^t time (for constants γ,s,t>0) and yields a (1+ϵ)-approximate solution if m*(x)≥b(1/ϵ). Because FPTAS^∞ ⊆ APX, P has a polynomial-time c-approximation algorithm \(\mathcal {B}\) for some constant c>1.

Consider an arbitrary instance x∈I_P and let ϵ>0 be given. Run \(\mathcal {A}(x, \epsilon)\) for at most γ⋅|x|^{2t} time steps. If it finished and thus returned a solution y, return the solution attaining \(\mathrm {goal}_{P}\{m(x,y),m(x,\mathcal {B}(x))\}\). Otherwise, just return \(\mathcal {B}(x)\). We claim that this algorithm combined with threshold function b′ defined as
$$b'(1/\epsilon) = \left\{\begin{array}{l@{\quad}l}1 & \mbox{if\ } \epsilon\geq c-1; \\[3pt]\max\{b(1/\epsilon), 1+ M_{P}((1/\epsilon)^{s/t})\} & \mbox{otherwise},\end{array}\right.$$
and associated constant ϵ_{b′}:=c−1 is a fiptas^∞ for P.

Clearly, b′ is a computable function, the running time of the new algorithm is bounded by a polynomial in |x|, and the algorithm returns a feasible solution. Assume that m*(x)≥b′(1/ϵ). If ϵ≥ϵ_{b′}, then \(\mathcal {B}(x)\) returns a feasible solution within c and thus within (1+ϵ) of m*(x). Suppose that ϵ<ϵ_{b′}. Since m*(x)≥b′(1/ϵ)≥b(1/ϵ), \(\mathcal {A}(x,\epsilon)\) delivers a (1+ϵ)-approximation if it runs to completion. So it remains to show that this is indeed the case, i.e. that γ⋅(1/ϵ)^s⋅|x|^t ≤ γ⋅|x|^{2t}. But this follows from the fact that m*(x)≥b′(1/ϵ)>M_P((1/ϵ)^{s/t}) and thus |x|≥(1/ϵ)^{s/t} by Lemma 2.3, or, equivalently, (1/ϵ)^s ≤|x|^t. □
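The budgeting trick in this proof can be sketched as follows (a toy model with mock algorithms, not the paper's implementation; the instance is reduced to its size n and its optimum, and all constants are hypothetical):

```python
# Sketch of the proof of Theorem 5.13: run the fptas A for at most
# gamma*|x|^(2t) steps; if it would exceed the budget, fall back to the
# c-approximation B (minimization problem).

def run_with_budget(steps_needed, budget, result):
    """Return result if A would finish within the budget, else None."""
    return result if steps_needed <= budget else None

def combined(n, opt, eps, s=2, t=1, gamma=1.0, c=2.0):
    steps_a = gamma * (1 / eps) ** s * n ** t   # A's running-time bound
    budget = gamma * n ** (2 * t)
    a_val = run_with_budget(steps_a, budget, (1 + eps) * opt)
    b_val = c * opt                              # B's guarantee
    return min(a_val, b_val) if a_val is not None else b_val

# When (1/eps)^s <= n^t, A fits in the budget and we get (1+eps)*OPT;
# otherwise the instance is small relative to 1/eps and B's answer is used.
assert combined(n=100, opt=50.0, eps=0.5) == 75.0
assert combined(n=4, opt=50.0, eps=0.01) == 100.0
```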

A similar idea can be used to tie EPTAS to the optimum-asymptotic approximation classes.

Theorem 5.14

FIPTAS^ω ⊆FIPTAS^∞.

Proof

Let P∈ FIPTAS^ω and let \(\mathcal {A}\) be a fiptas^ω for P, such that \(\mathcal {A}\) delivers a (1+ϵ)-approximation on instances x∈I_P if |x|≥a(1/ϵ). As FIPTAS^ω ⊆ APX, P also has a polynomial-time c-approximation algorithm \(\mathcal {B}\) for some constant c>1. Let x∈I_P and some ϵ>0 be given. We claim that the algorithm returning the solution attaining \(\mathrm {goal}_{P} \{m(x,\mathcal {A}(x, \epsilon)), m(x,\mathcal {B}(x))\}\) combined with threshold function b defined as
$$b(1/\epsilon) = \left\{\begin{array}{l@{\quad}l}1 & \mbox{if\ } \epsilon\geq c-1; \\[3pt]1+M_{P}(a(1/\epsilon)) & \mbox{otherwise},\end{array}\right.$$
and ϵ_b :=c−1 is a fiptas^∞ for P.

Clearly, the function b is computable, because a and M_P are computable. As \(\mathcal {A}\) and \(\mathcal {B}\) run in time polynomial in |x| for any instance x∈I_P, it remains to show that the returned solution is a (1+ϵ)-approximate solution on instances x∈I_P with m*(x)≥b(1/ϵ). If ϵ≥ϵ_b, then \(\mathcal {B}(x)\) ensures a feasible solution within c and thus within (1+ϵ) of m*(x). For ϵ<ϵ_b, a feasible solution is returned and if m*(x)≥b(1/ϵ)>M_P(a(1/ϵ)), then |x|≥a(1/ϵ) by Lemma 2.3, assuring that \(\mathcal {A}\) delivers a (1+ϵ)-approximation. □

Note that one can similarly prove that PTAS^ω ⊆ PTAS^∞ and FPTAS^ω ⊆ FPTAS^∞. However, we can also derive this from Proposition 5.11 and Theorem 3.6, respectively from Theorems 5.13, 5.14, and 3.4. Together with Theorem 5.12, this yields the following corollary.

Corollary 5.15

EPTAS⊆FIPTAS^∞. The containment is strict, unless P=NP.

This implies that the hierarchy of optimum-asymptotic approximation classes starts not only above FPTAS, but even above EPTAS (see Fig. 1). It is an intriguing question whether this corollary can be strengthened to PTAS ⊆ FPTAS^∞, or whether PTAS \(\not{\subseteq}\) FPTAS^∞. We answer this question in Sect. 7 (Theorem 7.9).

6 Optimum-Asymptotic Schemes and Classic Classes

Asymptotic approximation schemes clearly play an important part in the hierarchy of approximation schemes. In the previous sections, we established inclusion and equivalence relations among the classes of problems admitting such schemes and more classic classes such as PTAS, EPTAS, and FPTAS. All inclusion relations are strict under some hardness condition. In some cases however, the hardness gap can be bridged. The next two sections build several of these bridges.

In this section, we give a new characterization of classic classes by means of optimum-asymptotic approximation schemes and concepts from fixed-parameter tractability. In this way, we can also prove that large classes of problems do not possess optimum-asymptotic schemes. Section 7 deals with asymptotic schemes in another way, in the sense that we try to increase the size or optimum of a problem instance to get around the threshold function of asymptotic schemes.

6.1 New Characterizations of Classic Classes

When we view optimum-asymptotic approximation schemes from the perspective of the theory of fixed-parameter tractability, we can obtain new characterizations of the classic classes of approximation schemes defined in Table 1. We first define some notions from fixed-parameter tractability, as found for instance in Downey and Fellows [17] and Flum and Grohe [18].

Definition 6.1

In the standard parameterization (or decision variant) of a problem P, one is asked, given x∈I_P and a positive integer k, to decide whether m*(x)≥k if goal_P =max or m*(x)≤k if goal_P =min.

Definition 6.2

[36]

A problem P is simple if its standard parameterization can be decided in time polynomial in |x| for every instance x∈I_P and every fixed k. It is p-simple if its standard parameterization can be decided in time polynomial in |x| and k for every x∈I_P and every k.

Proposition 6.3

The standard parameterization of a problem P belongs to the class XP if and only if P is simple. It belongs to the class PFPT (Polynomial FPT) if and only if P is p-simple.

A precise definition of the classes PFPT and XP may be found in [8, 17, 18]. Here we only need an understanding of the restriction of these classes to standard parameterizations of optimization problems.

Definition 6.4

[4, 6]

An algorithm \(\mathcal {A}\) decides the standard parameterization of a problem P with witness if \(\mathcal {A}\) decides the standard parameterization of P and, whenever it decides Yes, it also returns a y∈S(x) such that m(x,y)≥k if goal_P =max or m(x,y)≤k if goal_P =min.

Using this definition, we can consider problems that are (p-)simple with witness and define classes XP_w and PFPT_w as expected. As in Proposition 6.3, this means that a problem belongs to XP_w if and only if it is simple with witness, and to PFPT_w if and only if it is p-simple with witness.

We now give a new characterization of the classes PTAS and FPTAS.

Theorem 6.5

A problem is
  • in PTAS if and only if it has a ptas^∞ and its standard parameterization is in XP_w.

  • in FPTAS if and only if it has an fptas^∞ with a polynomially-bounded threshold function and its standard parameterization is in PFPT_w.

Proof

Consider a problem P and suppose that P∈ PTAS. Then P is in PTAS^∞ by Proposition 5.11. It follows from a proof of Paz and Moran [36] that the standard parameterization of P is simple with witness (run the ptas with ϵ=1/(k+1)), and thus in XP_w.

For the converse, suppose that P is in PTAS^∞ and in XP_w. Let \(\mathcal {A}\) be a ptas^∞ for P with computable threshold function b and let \(\mathcal {B}\) be an algorithm that decides the standard parameterization of P with witness in time polynomial in |x| for every fixed k. Assume w.l.o.g. that goal_P =min. The case when goal_P =max is similar.

Given an instance x∈I_P and some ϵ>0, compute b(1/ϵ). For each integer k∈[1,…,b(1/ϵ)], call \(\mathcal {B}(x,k)\). If any of these calls returns a Yes-answer, then m*(x) equals the smallest value of k for which \(\mathcal {B}(x,k)\) gives a Yes-answer. The witness solution y∈S(x) returned by \(\mathcal {B}\) in this case has m(x,y)=m*(x) and thus trivially is a (1+ϵ)-approximation. If no call returns a Yes-answer, then m*(x)≥b(1/ϵ) and \(\mathcal {A}(x,\epsilon)\) returns a (1+ϵ)-approximation to m*(x). In either case, we get a (1+ϵ)-approximation.

The running time of this scheme is polynomial in |x| for every fixed ϵ>0. For a fixed value of ϵ, b(1/ϵ) can be computed in constant time. Furthermore, b(1/ϵ) itself is a constant and hence \(\mathcal {B}\) is called a constant number of times. Each call takes polynomial time. If none of these calls returns a Yes-answer, we run \(\mathcal {A}(x, \epsilon)\), which also takes polynomial time.

The proof of the characterization of FPTAS is similar. Since the threshold function is polynomially bounded, we may assume it is a polynomial. Since a polynomial can be evaluated in polynomial time, the theorem follows. □
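The minimization case of the algorithm in this proof can be sketched directly (a toy model: the instance is represented only by its optimum, and the decision algorithm B, threshold b, and ptas^∞ A are mocks standing in for the objects of the proof):

```python
import math

# Sketch of the converse direction of Theorem 6.5, goal = min: query the
# decision algorithm B for k = 1..b(1/eps); the first Yes gives an exact
# witness, and if all answers are No then OPT >= b(1/eps), so the
# asymptotic scheme A is (1+eps)-accurate.
def ptas_from_asymptotic(x_opt, eps, b, decide_with_witness, asymptotic_ptas):
    for k in range(1, math.ceil(b(1 / eps)) + 1):
        witness = decide_with_witness(x_opt, k)   # "is OPT <= k?" with witness
        if witness is not None:
            return witness                        # exact: m(x,y) = OPT
    return asymptotic_ptas(x_opt, eps)            # here OPT >= b(1/eps)

# Mock components (assumed for the demo only).
b = lambda inv_eps: 10 * inv_eps
decide = lambda opt, k: opt if opt <= k else None
a_ptas = lambda opt, eps: (1 + eps) * opt         # worst allowed answer

assert ptas_from_asymptotic(7, 0.5, b, decide, a_ptas) == 7          # exact branch
assert ptas_from_asymptotic(1000, 0.5, b, decide, a_ptas) == 1500.0  # A's branch
```

For fixed ϵ the loop bound b(1/ϵ) is a constant, mirroring the running-time argument in the proof.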

The characterizations seem different from those given by Paz and Moran [36] and Chen et al. [8].

Example 6.6

Jansen and Zhang [25] prove that the standard parameterization of Maximum Rectangle Packing (maximizing the number of given rectangles that can be packed into a given rectangle) is in XP_w. They also give an fptas^∞ for this problem, implying by Theorem 6.5 that it is in PTAS.

Example 6.7

Minimum Bin Packing is in FIPTAS^∞ (see Example 5.2), but has no ptas unless P=NP [21]. Hence its standard parameterization is not in XP_w unless P=NP.

A similar characterization can be given for the class EPTAS. Let EPTAS^∞ denote the class of problems admitting an eptas^∞, i.e. a ptas^∞ with running time poly(|x|)⋅f(1/ϵ) for some computable function f. Call a problem e-simple if its standard parameterization can be decided in time poly(|x|)⋅f(k) for some computable function f. The standard parameterization of a problem belongs to FPT if and only if the problem is e-simple. Using Definition 6.4, we can define (similar to XP_w and PFPT_w) the class FPT_w.

Theorem 6.8

A problem is in EPTAS if and only if it has an eptas^∞ and its standard parameterization is in FPT_w.

The proof is similar to the proof of Theorem 6.5.

6.2 Existence of Optimum-Asymptotic Approximation Schemes

Theorem 6.5 has interesting consequences. In particular, it gives the tools to improve on a theorem of Arora et al. [1]. They showed that MAX-3SAT has no ptas, unless P=NP. As a consequence, no MAX-SNP-complete problem can have a ptas, unless P=NP. We prove that this extends to ptas^∞.

Definition 6.9

[35]

An NPO-problem P is in MAX-NP if it can be expressed as
$$\max_{S} |\{\bar{a} \mid\exists\bar{b}\, \psi(\bar{a}, \bar{b}, G, S)\}|,$$
where G is an instance of P described as a finite structure, S ranges over all admissible structures, \(\bar{a}\) and \(\bar{b}\) are tuples of variables, and ψ is a quantifier-free boolean formula. Problem P is in MAX-SNP if it can be expressed as
$$\max_{S} |\{\bar{a} \mid\psi(\bar{a}, G, S)\}|,$$
with G, S, ψ, and \(\bar{a}\) as before.

Kolaitis and Thakur [29] proved that MAX-SNP ⊂ MAX-NP, as Maximum Satisfiability is in MAX-NP, but not in MAX-SNP.

We first need the following theorem, a weaker form of which was proved by Cai and Chen [4].

Theorem 6.10

If P is in MAX-NP, then its standard parameterization is in FPT w .

Proof

Suppose that we are given an instance of P, described as a finite structure G, and a positive integer k. Because P is in MAX-NP, its instances can be expressed as
$$\max_{S} |\{\bar{a} \mid\exists\bar{b}\, \psi(\bar{a}, \bar{b}, G, S)\}|.$$
Since this expression is fixed irrespective of the instance, Papadimitriou and Yannakakis [35] showed that for a particular instance one only needs to consider a polynomial number of different values \(\bar{a}\). Also, for every fixed value of \(\bar{a}\), it again suffices to consider only a polynomial number of different values \(\bar{b}\). Finally, for any fixed values of \(\bar{a}\) and \(\bar {b}\), let \(\phi_{\bar{a}, \bar{b}}(S) = \psi(\bar{a}, \bar{b}, G, S)\) be the resulting boolean formula after substituting \(\bar{a}\), \(\bar {b}\), and G. It consists of a constant number, say at most c, of variables of the form \(Q(\bar{d})\), where Q is any predicate of S.
For every \(\bar{a}\), let \(B(\bar{a})\) denote the set of values \(\bar {b}\) such that \(\phi_{\bar{a}, \bar{b}}(S)\) is satisfiable for some structure S. Let A denote the set of values \(\bar{a}\) for which \(B(\bar{a}) \not{=} \emptyset\). Then we can express P as:
$$\max_{S} \bigg|\biggl\{\bar{a} \in A \mid\phi_{\bar{a}}(S) := \bigvee _{\bar{b} \in B(\bar{a})}\ \phi_{\bar{a}, \bar{b}}(S)\biggr\}\bigg|.$$
Furthermore, let n=|A| and \(l = \max_{\bar{a} \in A} |B(\bar{a})|\). As both n and l are polynomial in the size of the input, l ≤ n^{O(1)}.

Suppose that k≤n/2^c. Papadimitriou and Yannakakis [35] proved that by fixing a particular \(\bar{b}(\bar {a}) \in B(\bar{a})\) for each \(\bar{a} \in A\), one can find in polynomial time a structure S satisfying at least n/2^c clauses \(\phi_{\bar{a},\bar{b}(\bar{a})}(S)\). Hence if k≤n/2^c, the answer is trivially Yes. Moreover, a witness to this can be found in polynomial time.

So suppose that k>n/2^c. Then we enumerate all assignments of variables of the form \(Q(\bar{d})\) that occur (i.e. all relevant structures S) to verify whether the maximum number of satisfiable \(\phi_{\bar{a}}(S)\) is at least k. There are at most n clauses \(\phi _{\bar{a}}(S)\) which can be satisfied. For each such clause, if it is to be satisfied, there are at most l clauses \(\phi_{\bar{a}, \bar {b}}(S)\), from which we should choose one that must be satisfied. As each such clause \(\phi_{\bar{a}, \bar{b}}(S)\) consists of at most c variables of the form \(Q(\bar{d})\), where Q is any predicate in S, we can enumerate all relevant structures S in O(n⋅(l+1)^n⋅2^{cn}) time. Since n<2^c⋅k and l ≤ n^{O(1)}, this is O(k^{O(k)}) time. Therefore we can check in O(k^{O(k)}) time whether the instance of P has answer Yes and, if so, return a witness structure S for this. □

Observe that for problems in MAX-SNP we can apply a similar proof as above, but with l=1. Then the running time of the given algorithm improves to O(k⋅2^{O(k)}) plus a polynomial in the input size. Kratsch [31] recently showed that problems in MAX-NP admit a polynomial kernel, strengthening the above result.

Combining Theorem 6.10 with Theorems 6.5 and 6.8, we obtain the following result.

Theorem 6.11

If a problem P is in MAX-NP and in PTAS^∞ (EPTAS^∞), then it is in PTAS (EPTAS).

In the given form, the theorem gives a way to construct a ptas (eptas) for a problem in MAX-NP if the problem has a ptas^∞ (eptas^∞). Phrased differently however, it gives a powerful tool to prove that for some problems a ptas^∞ (eptas^∞) cannot exist.

Corollary 6.12

If a problem P in MAX-NP cannot have a ptas (eptas) under some hardness condition, then it cannot have a ptas^∞ (eptas^∞) under the same hardness condition.

This already proves the nonexistence of a ptas^∞ for many problems, for instance for Maximum Satisfiability. However, a more general statement is possible. Arora et al. [1] showed that no MAX-SNP-complete problem can have a ptas, unless P=NP. We now strengthen this result as follows.

Theorem 6.13

If a problem P is MAX-SNP-complete (under the L-reduction), then it cannot have a ptas^∞, unless P=NP.

This implies for instance that problems such as MAX-3SAT, Maximum Independent Set on bounded-degree graphs, and Maximum Cut do not have a ptas^∞, unless P=NP. In fact, using the result of Arora et al., one can even prove that for each MAX-SNP-complete problem P there is a fixed constant c>1 such that P cannot be approximated (optimum-)asymptotically within c, unless P=NP.

It should be noted here that similar results can be proved for a syntactically defined class of minimization problems, called MIN F^+Π_1 [30], which includes Minimum Vertex Cover and many vertex-deletion and edge-deletion problems in graphs such as Minimum Feedback Arc Set. Cai and Chen [4] proved that the standard parameterizations of all problems in this class are in FPT_w. Hence we obtain the following theorem.

Theorem 6.14

If a problem P is in MIN F^+Π_1 and in PTAS^∞, then it is in PTAS.

Similar to Corollary 6.12, one can use this theorem to prove negative results. For instance, Theorem 6.14 implies that Minimum Vertex Cover, which cannot have a ptas unless P=NP [1, 35], also cannot have a ptas^∞ unless P=NP.

6.3 Approximation-Preserving Reductions

Due to results by Khanna et al. [28], we know that no APX-complete problem can have a ptas unless P=NP. Phrased differently, if for a problem P in APX there exists an approximation-preserving reduction from Maximum Satisfiability (or a specific bounded case of it) to P, then P cannot have a ptas unless P=NP. We prove that a similar statement can be made about ptas^∞ by using a different type of approximation-preserving reduction.

The result of Khanna et al. holds under the PTAS-reduction, defined by Crescenzi and Trevisan [15].

Definition 6.15

There is a PTAS-reduction from a problem P to a problem P′ if there exist computable functions t_1, t_2, and c:ℚ^+→ℚ^+ such that for any x∈I_P and any ϵ>0,
  1. t_1(x,ϵ)∈I_{P′} and t_1(x,ϵ) is computable in time polynomial in |x| for any fixed value of ϵ;
  2. for any y∈S_{P′}(t_1(x,ϵ)), t_2(x,y,ϵ)∈S_P(x) and t_2(x,y,ϵ) is computable in time polynomial in |x| and |y| for any fixed value of ϵ;
  3. for any y∈S_{P′}(t_1(x,ϵ)), if y is within 1+c(ϵ) of \(m^{*}_{P'}(t_{1}(x,\epsilon))\), then t_2(x,y,ϵ) is within 1+ϵ of \(m^{*}_{P}(x)\).

Several well-known reductions are a special case of PTAS-reductions, such as P-reductions [34], L-reductions [35], E-reductions [28], and AP-reductions [13]. The most important property of all these reductions is that they preserve membership of PTAS.

Lemma 6.16

[15]

If there is a PTAS-reduction from P to P′ and P′ has a ptas, then P also has a ptas.

Proof

Let \(\mathcal {A'}\) be a ptas for P′ and (t_1,t_2,c) a PTAS-reduction from P to P′. It can be easily seen that given x∈I_P and some ϵ>0, computing \(t_{2}(x,\mathcal {A'}(t_{1}(x,\epsilon ),c(\epsilon)),\epsilon)\) yields a ptas for P. □

Most PTAS-reductions given in the literature actually also preserve membership of PTAS^∞. This is due to the following property.

Lemma 6.17

Suppose there is a PTAS-reduction (t_1,t_2,c) from P to P′ and a monotone computable function f:ℕ→ℕ with liminf_{n→∞} f(n)=∞ such that for any ϵ>0 and any x∈I_P, \(m^{*}_{P'}(t_{1}(x,\epsilon )) \geq f(m^{*}_{P}(x))\). Then if P′ has a ptas^∞, P also has a ptas^∞.

Proof

Suppose that P′ has a ptas^∞ \(\mathcal {A'}\) such that any instance x′∈I_{P′} can be approximated within (1+ϵ) if m*(x′)≥b′(1/ϵ) for some computable function b′. Let x∈I_P and some ϵ>0 be given. We claim that computing \(\mathcal {A}(x,\epsilon) = t_{2}(x,\mathcal {A'}(t_{1}(x,\epsilon),c(\epsilon )),\epsilon)\) yields a ptas^∞ for P with some suitably chosen threshold function b.

It follows from the proof of Lemma 6.16 that \(\mathcal {A}(x,\epsilon)\) always returns a feasible solution y and that y is within (1+ϵ) of \(m^{*}_{P}(x)\) if \(m^{*}_{P'}(t_{1}(x,\epsilon)) \geq b'(1/c(\epsilon))\). Hence to prove that \(\mathcal {A}\) is a ptas^∞ we need a computable function b such that \(m^{*}_{P}(x) \geq b(1/\epsilon)\) implies that \(m^{*}_{P'}(t_{1}(x,\epsilon)) \geq b'(1/c(\epsilon))\). Choosing
$$b(1/\epsilon) = \min\{ n \mid f(n) \geq b'(1/c(\epsilon))\},$$
which is a computable function, we get
$$m^{*}_{P}(x) \,\geq\, b(1/\epsilon) \ \ \Rightarrow\ \ f(m^{*}_{P}(x))\,\geq\, f(b(1/\epsilon)) \ \ \Rightarrow\ \ m^{*}_{P'}(t_{1}(x,\epsilon)) \,\geq\, b'(1/c(\epsilon)),$$
proving the lemma. □
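The threshold function constructed in this proof is effectively computable because f is monotone and unbounded. A minimal sketch (with hypothetical f, b′, and c, not tied to any concrete problem):

```python
# Compute b(1/eps) = min{ n : f(n) >= b'(1/c(eps)) } from the proof of
# Lemma 6.17. Monotonicity of f makes a linear scan (or binary search) valid.
def threshold(f, b_prime, c, eps, n_max=10**6):
    target = b_prime(1 / c(eps))
    for n in range(1, n_max + 1):       # f monotone and f(n) -> infinity
        if f(n) >= target:
            return n
    raise ValueError("n_max too small")

# Hypothetical components of a reduction (assumed for the demo).
f = lambda n: n * n                       # growth guarantee of the reduction
b_prime = lambda inv_eps: 100 * inv_eps   # threshold of the scheme for P'
c = lambda eps: eps / 2                   # error mapping of the reduction

# For eps = 1/2: b'(1/c(eps)) = b'(4) = 400, so b(2) = min{n : n^2 >= 400} = 20.
assert threshold(f, b_prime, c, 0.5) == 20
```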

The lemma proves the usefulness of the following notion.

Definition 6.18

There is a PTAS^∞-reduction from a problem P to a problem P′ if there is a PTAS-reduction (t_1,t_2,c) from P to P′ and a monotone computable function f:ℕ→ℕ with liminf_{n→∞} f(n)=∞ such that for any ϵ>0 and any x∈I_P, \(m^{*}_{P'}(t_{1}(x,\epsilon)) \geq f(m^{*}_{P}(x))\).

Observe that the PTAS^∞-reduction is transitive. Moreover, by Lemma 6.17, PTAS^∞-reductions preserve membership of PTAS^∞.

Some reductions appearing in the literature are a special case of PTAS^∞-reductions, such as asymptotic continuous reductions [39] and (polynomial-time) ratio-preserving reductions [36]. However, they are rather restrictive and not many have been shown to exist. Here we will rely on Lemma 6.17 instead.

A first question we need to answer with respect to PTAS^∞-reductions is whether there exist any (natural) problems that are APX-complete under the PTAS^∞-reduction. Such a problem indeed exists. Let Maximum Bounded Weighted Satisfiability be the variant of Maximum Satisfiability in which each variable x_i has a weight w_i such that W ≤ ∑_i w_i ≤ 2W for some given weight W.

Theorem 6.19

Maximum Bounded Weighted Satisfiability (MBWS) is APX-complete under the PTAS^∞-reduction.

Proof

Crescenzi and Panconesi [14] proved that MBWS is APX-complete under the P-reduction. Upon closer inspection and using Definition 6.18, it can be seen that the given reductions are also PTAS^∞-reductions. □

Using this theorem, we can in fact prove that there is a problem in MAX-SNP that is APX-complete under the PTAS^∞-reduction.

Theorem 6.20

Maximum 3-Satisfiability is APX-complete under the PTAS^∞-reduction.

Proof

Crescenzi and Trevisan [15] presented a PTAS-reduction from MBWS to a polynomially-bounded variant of it, Maximum Polynomially-Bounded Weighted Satisfiability. Khanna et al. [28] showed that any polynomially-bounded problem in APX has an E-reduction to Maximum 3-Satisfiability. Upon closer inspection of these reductions, one can see that they are in fact PTAS^∞-reductions. □

Observe that any problem that is APX-complete under the PTAS^∞-reduction cannot have a ptas^∞, unless P=NP.

Lemma 6.21

If a problem P is APX-complete under the PTAS^∞-reduction, then it cannot have a ptas^∞, unless P=NP.

Proof

We established in Theorem 6.13 that no MAX-SNP-complete problem can have a ptas^∞ unless P=NP. In particular, Maximum 3-Satisfiability, which is APX-complete, has no ptas^∞. Let P be an APX-complete problem under the PTAS^∞-reduction. If P had a ptas^∞, then by the APX-completeness of P and Lemma 6.17, Maximum 3-Satisfiability would also have a ptas^∞. This is a contradiction, unless P=NP. □

It is interesting to note that several reductions proving that no ptas^∞ can exist for a certain problem actually use a PTAS^∞-reduction implicitly. For example, a result of Woeginger [44] showing that Minimum 2-Dimensional Vector Packing cannot have a ptas^∞ unless P=NP can be explained this way. In Minimum 2-Dimensional Vector Packing, we want to partition a given set of vectors in [0,1]×[0,1] into a minimum number of subsets, such that in every subset the sum of all vectors is at most 1 in every coordinate.

Theorem 6.22

Minimum 2-D Vector Packing is APX-complete under the PTAS^∞-reduction. Hence it cannot have a ptas^∞ unless P=NP.

Proof

Consider the following problems. In Maximum 3-Dimensional Matching, we are given three sets X, Y, and Z, each of size q, and a set T⊆X×Y×Z of triples, and we are asked to find a maximum number of triples that pairwise do not agree on any coordinate. In Maximum Bounded 3-Dimensional Matching, we additionally impose that each element occurs in at least one, but at most three triples. Kann [26] gives an L-reduction to Maximum Bounded 3-Dimensional Matching from Maximum 3-Satisfiability-B, which itself has an L-reduction from Maximum 3-Satisfiability [35]. Both L-reductions are actually also PTAS^∞-reductions.

The construction of Woeginger [44] reduces instances x of Maximum Bounded 3-Dimensional Matching to instances x′ of Minimum 2-Dimensional Vector Packing. From the construction, x has a solution of value at least k if and only if x′ has a solution of value at most \(q + \lceil\frac{1}{3} (|T| - k) \rceil\). This leads to a PTAS-reduction as follows. For any ϵ>0, for any y′ ∈ S(x′) with m(x′,y′) ≤ (1+ϵ)⋅m*(x′), and for y = t_2(x′,y′,ϵ),
$$q + \frac{1}{3} (|T| - m(x, y))= m(x',y') \leq (1 + \epsilon) \cdot\biggl(q + \bigg\lceil {\frac{1}{3}} (|T| - m^{*}(x))\bigg\rceil\biggr).$$
Then
$$m(x,y) \geq m^{*}(x) -3\epsilon\cdot\biggl(q +\bigg\lceil{\frac{1}{3}} (|T| - m^{*}(x))\bigg \rceil \biggr)\,\geq\, (1 - O(\epsilon)) \cdot m^{*}(x),$$
since q/7 ≤ m*(x) ≤ 3q from the definition of Maximum Bounded 3-Dimensional Matching. Moreover, m*(x′) ≥ q, and thus m*(x′) ≥ m*(x)/3. This yields the desired PTAS-reduction. □

One can also ask whether certain PTAS-reductions exist at all. Crescenzi et al. [13] showed that Minimum Bin Packing is not APX-complete under the AP-reduction (and thus also not under the PTAS-reduction), unless the polynomial hierarchy collapses. Furthermore, they remark that this result “does not seem to be obtainable” under the condition that P=NP. The reason is that Crescenzi et al. also show that if NP=co-NP, then there is an AP-reduction from Maximum Satisfiability to Minimum Bin Packing. Hence, proving their result under the condition that P=NP would show that NP=co-NP implies P=NP, which is considered unlikely. If however we consider PTAS-reductions, then the result of Crescenzi et al. can be obtained under the condition that P=NP.

Theorem 6.23

Minimum Bin Packing is not APX-complete under the PTAS-reduction, unless P=NP.

Proof

Recall that Minimum Bin Packing has a ptas^∞ (see Example 5.2). Hence if Minimum Bin Packing were APX-complete under the PTAS-reduction, then this would contradict Lemma 6.21, unless P=NP. □

In particular, the theorem implies that no PTAS-reduction can exist from Maximum Satisfiability to Minimum Bin Packing, unless P=NP. It should be noted here that the AP-reduction given by Crescenzi et al. under the condition that NP=co-NP is not a PTAS-reduction, since it has m*(t_1(x,ϵ)) = O(1) for any instance x of Maximum Satisfiability and any ϵ>0. Therefore Theorem 6.23 does not contradict, but augments the results of Crescenzi et al. [13]. Furthermore, it can easily be seen that Theorem 6.23 extends to several other problems, including Minimum Degree Spanning Tree and Chromatic Index.

7 Pumpable Problems

Looking back at the previous sections, one can notice that in the equivalence proofs we do not always get the equivalence we hope for. For instance, one expects an fptas_ω with a(1/ϵ) = 2^{1/ϵ} to be equivalent to an eptas with f(1/ϵ) = 2^{1/ϵ}. However, this does not seem to hold in general, as the proof of Theorem 3.4 only gives f(1/ϵ) = 2^{poly(a(1/ϵ))}. Hence we are interested in properties of problems for which the equivalences are tight. Similarly, we want to know whether for certain types of problems the hierarchy developed in the previous sections collapses. A promising property of problems seems to be pumpability.

Definition 7.1

An optimization problem P is k-pumpable if there exist functions g_1 and g_2 such that for any instance x ∈ I_P:
  • g_1(x) ∈ I_P and \(S_{P}(g_{1}(x)) \not{=} \emptyset\) if \(S_{P}(x) \not{=} \emptyset\);

  • for every r ≥ 1 and for every y ∈ S_P(g_1(x)) within a factor r of m*(g_1(x)), g_2(y) ∈ S_P(x) and g_2(y) is within a factor r of m*(x);

  • g_1 and g_2 are computable in time polynomial in |x| and log k;

  • some pumpability condition holds.

Note that in the logarithmic cost model we use, it makes sense that the running times of g_1 and g_2 depend polynomially on log k.

We distinguish two pumpability conditions, one related to the size of g_1(x) and one related to the optimal objective value of g_1(x).

Definition 7.2

An optimization problem P is k-size-pumpable if P is k-pumpable with the condition that |g_1(x)| ≥ |x| + k. An optimization problem P is k-opt-pumpable if P is k-pumpable with the condition that m*(g_1(x)) ≥ m*(x) + k.

It appears that many problems possess one or even both of these properties, as can be seen in the following example.

Example 7.3

Minimum Vertex Cover is 1-size-pumpable, because one can take for g_1 the function that makes two disjoint copies of an instance. The desired property of the translation back follows by the pigeonhole principle. Using the same idea, Minimum Vertex Cover is also 1-opt-pumpable. Minimum Makespan Scheduling is k-opt-pumpable for any k by multiplying all job lengths by k+1. Using a similar idea, it is also 1-size-pumpable.
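The doubling construction for Minimum Vertex Cover can be made fully explicit. The sketch below is our illustration (instances are a vertex count n and an edge list over vertices 0,…,n−1): g1 builds two disjoint copies, and g2 restricts a cover of the doubled graph to the copy on which it is smaller, which by the pigeonhole principle is within the same factor of the optimum, since m*(g1(x)) = 2·m*(x).

```python
def g1(n, edges):
    """Size/opt-pump: two disjoint copies of the graph (n vertices, edge
    list).  The optimum doubles: m*(g1(x)) = 2 * m*(x)."""
    shifted = [(u + n, v + n) for (u, v) in edges]
    return 2 * n, edges + shifted

def g2(n, cover):
    """Translate a vertex cover of the doubled graph back: restrict it to
    each copy and keep the smaller part.  By the pigeonhole principle its
    size is at most |cover| / 2, hence within the same factor of m*(x)."""
    first = [v for v in cover if v < n]
    second = [v - n for v in cover if v >= n]
    return first if len(first) <= len(second) else second
```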

Observe that problems that are k-size-pumpable for arbitrary values of k cannot exist. Otherwise one could take k = 2^{poly(|x|)} and thus add an exponential number of bits to an instance x of P in time polynomial in |x|.

We consider the question of which problems are pumpable in more detail in Sect. 7.3. First, we show how the property of being pumpable helps to prove some new equivalences among the classes we defined previously. In the following, when we talk about a size- or opt-pumpable problem, we assume that the functions g_1 and g_2 are known.

7.1 Optimum-Asymptotic Schemes and Pumpability

It follows from Theorem 5.12 that PTAS ⊂ PTAS^∞, unless P=NP. For 1-opt-pumpable problems however, the two classes are equivalent.

Lemma 7.4

Let P be 1-opt-pumpable and in PTAS^∞. Then P∈PTAS.

Proof

Assume we have a ptas^∞ \(\mathcal {A}\) for P with computable threshold function b. Given x ∈ I_P and some ϵ>0, compute b(1/ϵ), which takes constant time for any fixed ϵ. Then pump x b(1/ϵ) times to get an instance x′. Note that the size of the output of g_1 is polynomial in the size of the input. Hence pumping b(1/ϵ) times means that x′ has size at most \(|x|^{O(1)^{b(1/\epsilon)}}\) and thus the pumping steps can be done in time polynomial in |x| for every fixed ϵ.

As x has been pumped b(1/ϵ) times, m*(x′) ≥ b(1/ϵ) + m*(x) ≥ b(1/ϵ). Hence we can compute \(y'= \mathcal {A}(x', \epsilon)\) and by the definition of a ptas^∞, y′ is within 1+ϵ of m*(x′). Furthermore, y′ can be computed in time polynomial in |x| for every fixed ϵ, as |x′| is polynomial in |x| for every fixed ϵ. Iteratively applying g_2 to y′, we get a solution y for x within 1+ϵ of m*(x). As we need to apply g_2 only b(1/ϵ) times, this again takes time polynomial in |x| for every fixed ϵ. □
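The construction in this proof can be phrased as a generic wrapper. The sketch below is our illustration, with the ptas^∞ scheme, its threshold function b, and the pumping maps g1 and g2 passed in as callables; for fixed ϵ the number of pumping steps is a constant, matching the running-time analysis above.

```python
import math

def boosted_ptas(x, eps, scheme, b, g1, g2):
    """Turn `scheme`, a ptas-infinity that is (1+eps)-accurate whenever
    the optimum is at least b(1/eps), into a plain ptas for a
    1-opt-pumpable problem; g1 pumps an instance (raising the optimum by
    at least 1), g2 translates a solution back one pumping level."""
    times = b(math.ceil(1.0 / eps))
    inst = x
    for _ in range(times):   # pump b(1/eps) times, so m*(inst) >= b(1/eps)
        inst = g1(inst)
    y = scheme(inst, eps)    # now the asymptotic guarantee applies
    for _ in range(times):   # translate back; each g2 step preserves the factor
        y = g2(y)
    return y
```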

There are several ways in which one could use this lemma. First of all, it provides a condition under which problems are not 1-opt-pumpable.

Theorem 7.5

Any problem that is in PTAS^∞, but not in PTAS unless P=NP, is not 1-opt-pumpable, unless P=NP.

This is an immediate consequence of Lemma 7.4. There are several examples of problems that fit the theorem.

Corollary 7.6

Minimum Bin Packing, Chromatic Index, and Minimum Degree Spanning Tree are not 1-opt-pumpable, unless P=NP.

Secondly, Lemma 7.4 shows that some problems cannot have a ptas^∞. Consider for instance the variant of Minimum Bin Packing with precedence constraints. (The precedence constraints state that for certain items i and j, item i has to appear in a bin with a lower number than item j.) It cannot have a ptas, as Minimum Bin Packing itself cannot have a ptas unless P=NP [20]. Queyranne [38] already proved the following result, but as Minimum Bin Packing with Precedence is 1-opt-pumpable (make two copies of the instance and use precedence constraints to ensure that items of the first copy have to come before items of the second), it now follows as a corollary of Lemma 7.4.

Theorem 7.7

Minimum Bin Packing with Precedence has no ptas^∞, unless P=NP.

Actually, Queyranne applied a similar form of pumping to the problem in order to obtain his result.

We now consider the effect of pumpability on problems in FPTAS^∞.

Lemma 7.8

Let P be 1-opt-pumpable and in FPTAS^∞ and let c>0 be a constant. If |g_1(x)| ≤ c⋅|x| for any x ∈ I_P, then P∈EPTAS.

Proof

Following the proof of Lemma 7.4, the condition on g_1 now implies that after pumping b(1/ϵ) times we obtain an instance of size O(c^{b(1/ϵ)}⋅|x|). Then the construction of Lemma 7.4 actually takes time bounded by a polynomial in |x| times some (computable) function of 1/ϵ. □

Using this lemma, one can prove interesting results about the relation of FPTAS^∞ to PTAS and PTAS^∞.

Theorem 7.9

\(\mathrm{PTAS} \not{\subseteq}\mathrm{FPTAS}^{\infty}\), unless FPT=W[1].

Proof

Let P denote the minimum dominating set problem in unit disk graphs. Hunt et al. [24] showed that P∈ PTAS. Suppose that P∈ FPTAS^∞. Because P is easily 1-opt-pumpable with a linear-size output (let g_1 just take two disjoint copies of the graph of the instance), it has an eptas by Lemma 7.8. But then P is in FPT (with respect to its standard parameterization) by results of Bazgan [3] and Cesati and Trevisan [7]. However, as P is W[1]-hard [32], this is not possible, unless FPT=W[1]. □

This leads to the following corollary, which we could not derive yet in Sect. 5.

Corollary 7.10

\(\mathrm{PTAS}^{\infty} \not{=} \mathrm{FPTAS}^{\infty}\), unless FPT=W[1].

Although Lemma 7.8 gives a way to go from an fptas^∞ to an eptas, it only applies if the output of g_1 has linear size. If that is not the case, the following lemma can be useful.

Lemma 7.11

Let P be k-opt-pumpable for every k and in FPTAS^∞. Then P∈EPTAS.

Proof

In this case it suffices to pump once with k=b(1/ϵ) to make the construction of Lemma 7.4 work. □

If additionally b(1/ϵ) is bounded by 2^{poly(1/ϵ)}, then P∈ FPTAS, as both g_1 and g_2 can be computed in time polynomial in |x| and log k = log b(1/ϵ) = poly(1/ϵ). This last observation has interesting consequences.

Example 7.12

Extensible Bin Packing (where the bin size is part of the input) and Minimum Makespan Scheduling are strongly NP-hard, polynomially-bounded problems and thus have no fptas unless P=NP (see Corollary 4.13). However, both problems are k-opt-pumpable for any k (multiply all numbers of the instance by k). Hence by Lemma 7.11 they cannot have an fptas^∞ where b(1/ϵ) is bounded by 2^{poly(1/ϵ)}, unless P=NP.
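The scaling pump used here is easy to write down. The sketch below is our illustration for Minimum Makespan Scheduling (an instance is a list of job lengths, a solution an assignment of jobs to machines): g1 multiplies all job lengths by k, which scales the optimal makespan by exactly k, and g2 maps a solution back unchanged, so the approximation factor is preserved exactly.

```python
def g1(jobs, k):
    """Opt-pump: multiply every job length by k.  The optimal makespan
    scales by exactly k, so it grows by at least k whenever m*(x) >= 2."""
    return [k * p for p in jobs]

def g2(assignment):
    """Solutions are assignments of jobs to machines.  Scaling the job
    lengths does not change which assignments are good, so a solution of
    the pumped instance maps back unchanged, with the same ratio."""
    return assignment

def makespan(jobs, assignment, machines):
    """Objective value: the load of the most heavily loaded machine."""
    loads = [0] * machines
    for length, machine in zip(jobs, assignment):
        loads[machine] += length
    return max(loads)
```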

For these two problems, the above facts were already known by results of Coffman and Lueker [12] and (in a weaker form) of Hochbaum and Shmoys [22], but here they are just a consequence of the general statement in Lemma 7.11. Moreover, the result of Hochbaum and Shmoys for Minimum Makespan Scheduling is strengthened by it.

7.2 Size-Asymptotic and Convergent Schemes and Pumpability

For size-asymptotic schemes, the situation is slightly different. Recall that PTAS_ω and PTAS are equivalent for all problems, not just for pumpable problems (Theorem 3.6). The classes FPTAS_ω and EPTAS are also equivalent (Theorem 3.4), but turning an fptas_ω with threshold function a into an eptas currently increases the time complexity by at least a factor 2^{poly(a(1/ϵ))}. This is rather unfortunate. If a problem is 1-size-pumpable however, this exponential increase is not necessary.

Lemma 7.13

Let P be 1-size-pumpable and in FPTAS_ω with computable threshold function a. Then P∈EPTAS with running time polynomial in |x|, 1/ϵ, a(1/ϵ), and the time needed to compute a(1/ϵ).

Proof

Assume that P has an fptas_ω \(\mathcal {A}\) with threshold function a. Given some x ∈ I_P and ϵ>0, compute a(1/ϵ). Pump x the smallest number of times needed to get an instance x′ with |x′| ≥ a(1/ϵ). Note that x′ has size at most polynomial in a(1/ϵ) and |x|. Hence computing \(y' =\mathcal {A}(x',\epsilon)\) takes time polynomial in 1/ϵ, a(1/ϵ), and |x|. Repeatedly applying g_2 to y′ also takes time polynomial in a(1/ϵ) and |x| and yields a solution to x within 1+ϵ of m*(x). □

If a(1/ϵ) is computable in time polynomial in 1/ϵ and a(1/ϵ), then the exponential increase is avoided. In particular, if the threshold function a of the fptas_ω is a polynomial, then P∈ FPTAS.

We can also show that for 1-size-pumpable problems the classes PCONV[f] and PTAS[f] coincide for many functions f, such as the logarithm.

Lemma 7.14

Let \(f \in \mathcal {F}^{*}\) be such that f(nm)≥f(n)+f(m). If P∈PCONV[f] is 1-size-pumpable, then P∈PTAS[f].

Proof

Suppose that P∈ PCONV[f] and let \(\mathcal {A}\) be a pconv[f] for P. Consider an arbitrary x ∈ I_P and some (fixed) ϵ>0. Let α≥1 be some integer to be chosen later. Pump x the smallest number of times needed to get an instance x′ with |x′| ≥ |x|^{α} and run \(\mathcal {A}(x')\). This yields a (1+1/f(|x′|))-approximation. We claim that α can be chosen such that 1+1/f(|x′|) ≤ 1+ϵ/f(|x|), and that we have thus bootstrapped \(\mathcal {A}\) to a ptas[f]. The claim holds if
$$\frac{1}{f(|x'|)} \leq\frac{1}{f(|x|^{\alpha})} \leq\frac{\epsilon}{f(|x|)}.$$
The first inequality is true by the monotonicity of f and the definition of x′. For the second inequality, note that since f(nm)≥f(n)+f(m),
$$\frac{1}{f(|x|^{\alpha})} \leq\frac{1}{\alpha\cdot f(|x|)}.$$
As ϵ is fixed, we can choose any integer α ≥ 1/ϵ to ensure that 1/f(|x|^{α}) ≤ ϵ/f(|x|). □
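For a concrete f, the choice of α can be checked numerically. The snippet below is our illustration with f = log₂, which satisfies f(nm) ≥ f(n) + f(m) (with equality), and verifies that pumping to size |x|^α with α = ⌈1/ϵ⌉ gives 1/f(|x|^α) ≤ ϵ/f(|x|).

```python
import math

def alpha_works(n, eps):
    """Check the bootstrapping step of the lemma for f = log2: pumping an
    instance of size n up to size n**alpha with alpha = ceil(1/eps) turns
    a (1 + 1/f(size))-guarantee into a (1 + eps/f(n))-guarantee."""
    alpha = math.ceil(1.0 / eps)
    f = math.log2
    return 1.0 / f(n ** alpha) <= eps / f(n)
```

Since f(n^α) = α·f(n) for f = log₂, any α ≥ 1/ϵ suffices, matching the last step of the proof.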

In Example 4.3, we noted that Maximum Independent Set on single-crossing-minor-free graphs has a pconv[log|x|]. By the above lemma, we conclude that it also has a ptas[log|x|].

Pumpability also leads to several negative results.

Lemma 7.15

Let P be an NP-hard, polynomially-bounded optimization problem with p the corresponding bounding polynomial plus 1. If P is 1-size-pumpable, then for any constant α>0, P ∉ PCONV[p(|x|)^{α}], unless P=NP.

Proof

We may assume that p is a monotone polynomial. Suppose by way of contradiction that P∈ PCONV[p(|x|)^{α}]. Then there exists an algorithm \(\mathcal {A}\) such that for any instance x of P, \(\mathcal {A}(x)\) runs in time polynomial in |x| and delivers a solution with approximation ratio 1 + 1/p(|x|)^{α}. Consider an arbitrary x ∈ I_P. Pump x the smallest number of times (say k times) needed to get an instance x′ with |x′| ≥ p(|x|)^{1/α}. As k is bounded by a polynomial, pumping takes polynomial time. Let \(y'= \mathcal {A}(x')\) and let y ∈ S_P(x) be the resulting solution after iteratively applying g_2 to y′ for k times. Observe that if P is a minimization problem, then
$$m(x, y) \leq \biggl(1 + \frac{1}{p(|x'|)^{\alpha}}\biggr) \cdot m^{*}(x),$$
and thus, as p(|x′|)^{α} ≥ p(|x|),
$$m(x, y) \leq(1 + 1/p(|x|)) \cdot m^{*}(x).$$
If P is a maximization problem, we similarly obtain that m*(x) ≤ (1+1/p(|x|))⋅m(x,y). This means that we have found a pconv[p], which is not possible by Lemma 4.12, unless P=NP. □

This lemma allows a simple subdivision of the PCONV-hierarchy.

Lemma 7.16

PCONV[|x|^{α}] ⊂ PCONV[log|x|] for any (fixed) α>0, unless P=NP.

Proof

Maximum Independent Set on single-crossing-minor-free graphs is in PCONV[log|x|] (see Example 4.3), but it is also NP-hard, polynomially-bounded, and 1-size-pumpable, and hence not in PCONV[|x|^{α}] by Lemma 7.15, unless P=NP. □

This result can be strengthened significantly though. Huang [23] showed that Minimum Vertex Cover on planar graphs cannot have an eptas with \(f(1/\epsilon) = 2^{o(\sqrt{1/\epsilon})}\), unless FPT=W[1]. Marx [33] proved that even an eptas with \(f(1/\epsilon) = 2^{O((1/\epsilon)^{1-\delta})}\) for some δ>0 cannot exist, unless n-variable 3-SAT can be solved in 2^{o(n)} time.

Lemma 7.17

PCONV[log^{2}|x|] ⊂ PCONV[log|x|], unless FPT=W[1]. PCONV[log^{1/(1−δ)}|x|] ⊂ PCONV[log|x|] for any δ>0, unless n-variable 3-SAT can be solved in 2^{o(n)} time.

7.3 Which Problems Are Pumpable?

We already showed that several problems are pumpable (e.g. Minimum Vertex Cover and Minimum Makespan Scheduling). In Theorem 7.5, we proved that several other problems (such as Minimum Bin Packing) are not 1-opt-pumpable, and it seems unlikely that they are 1-size-pumpable. In fact, many problems seem to be either both 1-size-pumpable and 1-opt-pumpable, or neither. We give some evidence why this might not be a coincidence. At the moment, we do not know of any problem that provably possesses only one of the two properties.

To prove the pumpability of several classes of problems, we consider problems that are m*-opt-pumpable. Essentially, this means that we can pump to (at least) double the objective value of the optimum. For any problem P and any x ∈ I_P, note that because m*(x) ≤ M_P(|x|), we have log m*(x) ≤ log M_P(|x|) ≤ poly(|x|). Hence for any m*-opt-pumpable problem the functions g_1 and g_2 are computable in time polynomial in |x|.

Lemma 7.18

Let P be m*-opt-pumpable. Then P is 1-size-pumpable.

Proof

Given an instance x ∈ I_P, repeatedly opt-pump x to an instance x′ until |x′| > |x|. We claim that it takes at most time polynomial in |x| until such an x′ is found. As a first step, we prove that we only need to pump a polynomial number of times. By Lemma 2.3, if \(m^{*}(x') > M_{P}(|x|) = 2^{r_{P}(|x|,q_{P}(|x|))}\), then |x′| > |x|. As m*-opt-pumping (at least) doubles the objective value of the optimum, pumping 1 + r_P(|x|, q_P(|x|)) times gives an x′ with |x′| > |x|. Second, note that before any pumping step, the size of the instance is at most |x|. Hence any pumping step costs only time polynomial in |x|. Therefore all needed pumping steps can be done in time polynomial in |x| and thus P is 1-size-pumpable. □

Many graph optimization problems, such as Minimum Vertex Cover and Maximum Independent Set, are m*-opt-pumpable and thus 1-size-pumpable. But when is a problem m*-opt-pumpable?

Consider the following property of optimization problems.

Definition 7.19

[28]

A problem P is additive if there exists an operator + and a polynomial-time computable function f such that + maps any pair of instances x_1, x_2 ∈ I_P to an instance x_1 + x_2 ∈ I_P with m*(x_1 + x_2) = m*(x_1) + m*(x_2), and f maps a (feasible) solution y to x_1 + x_2 to a pair of (feasible) solutions y_1, y_2 to x_1 and x_2 respectively such that m(x_1 + x_2, y) = m(x_1, y_1) + m(x_2, y_2).

This notion is similar to the notion of paddable optimization problems [9, 37].
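For instance, Maximum Independent Set is additive: + is disjoint union and f splits a solution between the two summands. The sketch below is our illustration, with graphs encoded as a pair (n, edge list).

```python
def plus(x1, x2):
    """The operator +: disjoint union of two graphs, each given as
    (number of vertices, edge list).  The optimum of the union is the sum
    of the optima: m*(x1 + x2) = m*(x1) + m*(x2)."""
    n1, e1 = x1
    n2, e2 = x2
    return n1 + n2, e1 + [(u + n1, v + n1) for (u, v) in e2]

def split(x1, y):
    """The function f: split an independent set y of x1 + x2 into one
    independent set per summand, so that
    m(x1 + x2, y) = m(x1, y1) + m(x2, y2)."""
    n1, _ = x1
    y1 = [v for v in y if v < n1]
    y2 = [v - n1 for v in y if v >= n1]
    return y1, y2
```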

From the definitions of additive and pumpable and Lemma 7.18, one can easily prove the following theorem.

Theorem 7.20

Any additive problem is m*-opt-pumpable and hence 1-opt-pumpable and 1-size-pumpable.

Khanna et al. [28] remark that many problems are additive, such as Maximum Clique, Chromatic Number, Minimum Set Cover, and all problems in the class MAX-SNP.

Corollary 7.21

Any problem in MAX-SNP is both 1-opt-pumpable and 1-size-pumpable.

However, there are also problems that are not (easily seen to be) additive, but that are m*-opt-pumpable, such as Maximum Knapsack and Longest Path.

When we combine Lemma 7.4 with Corollary 7.21, we obtain the following weaker version of Theorem 6.11: if a problem P is in MAX-SNP and in PTAS^∞, then P is in PTAS. In this way, most results from Sect. 6.2 also follow by using pumpability.

8 Conclusion and Open Problems

In this paper, we defined several new types of approximation schemes and uncovered many interesting new relationships between classes of problems that can be approximated using these schemes and existing approximation classes. In particular, we have shown that EPTAS is a central class in the landscape of approximation classes. We also mapped the entire hierarchy of these classes, shown in Fig. 1.

There are several intriguing questions left. The notion of pumpability, introduced in Sect. 7, offers a way to bridge the gap between optimum-asymptotic schemes and nonasymptotic schemes, but we have few tools to decide whether a problem is pumpable. Can size- or opt-pumpable problems be characterized? Another interesting question is whether every opt-pumpable problem is also size-pumpable and vice versa. We gave some evidence in Lemma 7.18 why one direction might be true, but it would go too far to conjecture that it holds both ways.

Convergent approximation schemes also pose new challenges. For the class \(\mathrm{PCONV}[\mathcal {F}^{*}]\), we know that it is equivalent to EPTAS. However, the general classes \(\mathrm{PTAS}[\mathcal {F}^{*}]\) and \(\mathrm{FPTAS}[\mathcal {F}^{*}]\) remain mysterious. We know that some problems in EPTAS also lie in PTAS[log n] (using Theorem 4.15) and that \(\mathrm{FPTAS}=\mathrm{FPTAS}[\mathcal {P}]\) (Theorem 4.10), but beyond this it seems hard to make conjectures about these classes.

Footnotes

  1. Formally, this should be ⌊log|x|⌋. Throughout the paper we ignore these technicalities to improve legibility.


Acknowledgements

The authors would like to thank Lex Schrijver for several helpful suggestions.

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

References

  1. Arora, S., Lund, C., Motwani, R., Sudan, M., Szegedy, M.: Proof verification and the hardness of approximation problems. J. ACM 45(3), 501–555 (1998)
  2. Ausiello, G., Crescenzi, P., Gambosi, G., Kann, V., Marchetti-Spaccamela, A., Protasi, M.: Complexity and Approximation—Combinatorial Optimization Problems and Their Approximation Properties. Springer, Berlin (1999)
  3. Bazgan, C.: Schémas d'approximation et complexité paramétrée. Rapport de stage de DEA d'Informatique, Université Paris-Sud, Orsay (1995)
  4. Cai, L., Chen, J.: On fixed-parameter complexity and approximability of NP optimization problems. J. Comput. Syst. Sci. 54(3), 465–474 (1997)
  5. Cai, L., Fellows, M., Juedes, D., Rosamond, F.: The complexity of polynomial-time approximation. Theory Comput. Syst. 41(3), 459–477 (2007)
  6. Cai, L., Juedes, D.: On the existence of subexponential parameterized algorithms. J. Comput. Syst. Sci. 67(4), 789–807 (2003)
  7. Cesati, M., Trevisan, L.: On the efficiency of polynomial time approximation schemes. Inf. Process. Lett. 64(4), 165–171 (1997)
  8. Chen, J., Huang, X., Kanj, I.A., Xia, G.: Polynomial time approximation schemes and parameterized complexity. Discrete Appl. Math. 155(2), 180–193 (2007)
  9. Chen, Z.-Z., Toda, S.: On the complexity of computing optimal solutions. Int. J. Found. Comput. Sci. 2(3), 207–220 (1991)
  10. Chiba, N., Nishizeki, T., Saito, N.: Applications of the Lipton and Tarjan's planar separator theorem. J. Inf. Process. 4(4), 203–207 (1981)
  11. Coffman, E.G. Jr., Garey, M.R., Johnson, D.S.: Approximation algorithms for bin packing: a survey. In: Hochbaum, D.S. (ed.) Approximation Algorithms for NP-Hard Problems, pp. 46–93. PWS Publishing Company, Boston (1997)
  12. Coffman, E.G. Jr., Lueker, G.S.: Approximation algorithms for extensible bin packing. J. Sched. 9(1), 63–69 (2006)
  13. Crescenzi, P., Kann, V., Silvestri, R., Trevisan, L.: Structure in approximation classes. SIAM J. Comput. 28(5), 1759–1782 (1999)
  14. Crescenzi, P., Panconesi, A.: Completeness in approximation classes. Inf. Comput. 93(2), 241–262 (1991)
  15. Crescenzi, P., Trevisan, L.: On approximation scheme preserving reducibility and its applications. Theory Comput. Syst. 33(1), 1–16 (2000)
  16. Demaine, E.D., Hajiaghayi, M., Nishimura, N., Ragde, P., Thilikos, D.M.: Approximation algorithms for classes of graphs excluding single-crossing graphs as minors. J. Comput. Syst. Sci. 69(2), 166–195 (2004)
  17. Downey, R.G., Fellows, M.R.: Parameterized Complexity. Springer, New York (1999)
  18. Flum, J., Grohe, M.: Parameterized Complexity Theory. Springer, Berlin (2006)
  19. Fürer, M., Raghavachari, B.: Approximating the minimum-degree Steiner tree to within one of optimal. J. Algorithms 17(3), 409–423 (1994)
  20. Garey, M.R., Johnson, D.S.: 'Strong' NP-completeness results: motivation, examples, and implications. J. ACM 25(3), 499–508 (1978)
  21. Garey, M.R., Johnson, D.S.: Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco (1979)
  22. Hochbaum, D.S., Shmoys, D.B.: Using dual approximation algorithms for scheduling problems: theoretical and practical results. J. ACM 34(1), 144–162 (1987)
  23. Huang, X.: Parameterized complexity and polynomial-time approximation schemes. Ph.D. Thesis, Texas A&M University (2004)
  24. Hunt, H.B. III, Marathe, M.V., Radhakrishnan, V., Ravi, S.S., Rosenkrantz, D.J., Stearns, R.E.: NC-approximation schemes for NP- and PSPACE-hard problems for geometric graphs. J. Algorithms 26(2), 238–274 (1998)
  25. Jansen, K., Zhang, G.: Maximizing the number of packed rectangles. In: Hagerup, T., Katajainen, J. (eds.) Algorithm Theory—SWAT 2004, Proc. 9th Scandinavian Workshop. Lecture Notes in Computer Science, vol. 3111, pp. 362–371. Springer, Berlin (2004)
  26. Kann, V.: On the approximability of NP-complete optimization problems. Dissertation, Department of Numerical Analysis and Computing Science, Royal Institute of Technology, Stockholm, Sweden (1992)
  27. Karmarkar, N., Karp, R.M.: An efficient approximation scheme for the one-dimensional bin packing problem. In: Proceedings of the 23rd Annual IEEE Symposium on Foundations of Computer Science (FOCS), pp. 312–320. IEEE Comput. Soc., Los Alamitos (1982)
  28. Khanna, S., Motwani, R., Sudan, M., Vazirani, U.V.: On syntactic versus computational views of approximability. SIAM J. Comput. 28(1), 164–191 (1998)
  29. Kolaitis, P.G., Thakur, M.N.: Logical definability of NP optimization problems. Inf. Comput. 115(2), 321–353 (1994)
  30. Kolaitis, P.G., Thakur, M.N.: Approximation properties of NP minimization classes. J. Comput. Syst. Sci. 50(3), 391–411 (1995)
  31. Kratsch, S.: Polynomial kernelizations for MIN F+Π1 and MAX NP. In: Albers, S., Marion, J.-Y. (eds.) 26th International Symposium on Theoretical Aspects of Computer Science (STACS 2009), pp. 601–612. Schloss Dagstuhl—Leibniz-Zentrum fuer Informatik, Germany (2009)
  32. Marx, D.: Parameterized complexity of independence and domination on geometric graphs. In: Bodlaender, H.L., Langston, M.A. (eds.) Parameterized and Exact Computation—IWPEC 2006, Proceedings of the Second International Workshop. Lecture Notes in Computer Science, vol. 4169, pp. 154–165. Springer, Berlin (2006)
  33. Marx, D.: On the optimality of planar and geometric approximation schemes. In: Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pp. 338–348. IEEE Comput. Soc., Los Alamitos (2007)
  34. Orponen, P., Mannila, H.: On approximation preserving reductions: complete problems and robust measures. Technical Report C-1987-28, Department of Computer Science, University of Helsinki (1987)
  35. Papadimitriou, C., Yannakakis, M.: Optimization, approximation and complexity classes. J. Comput. Syst. Sci. 43(3), 425–440 (1991)
  36. Paz, A., Moran, S.: Non deterministic polynomial optimization problems and their approximations. Theor. Comput. Sci. 15(3), 251–277 (1981)
  37. Petrank, E.: The hardness of approximation: gap location. Comput. Complex. 4(2), 133–157 (1994)
  38. Queyranne, M.: Bounds for assembly line balancing heuristics. Oper. Res. 33(6), 1353–1359 (1985)
  39. Simon, H.U.: On approximate solutions for combinatorial optimization problems. SIAM J. Discrete Math. 3(2), 294–310 (1990)
  40. van Leeuwen, E.J.: Approximation algorithms for unit disk graphs. In: Kratsch, D. (ed.) Graph-Theoretic Concepts in Computer Science—WG 2005, Proceedings of the 31st International Workshop. Lecture Notes in Computer Science, vol. 3787, pp. 351–361. Springer, Berlin (2005)
  41. van Leeuwen, E.J.: Better approximation schemes for disk graphs. In: Arge, L., Freivalds, R. (eds.) Algorithm Theory—SWAT 2006, Proceedings of the 10th Scandinavian Workshop. Lecture Notes in Computer Science, vol. 4059, pp. 316–327. Springer, Berlin (2006)
  42. van Leeuwen, E.J.: Optimization and approximation on systems of geometric objects. Ph.D. Thesis, University of Amsterdam (2009)
  43. Vizing, V.G.: Ob otsenke khromaticheskogo klassa p-grafa (Russian: On an estimate of the chromatic class of a p-graph). Diskretn. Anal. 3, 25–30 (1964)
  44. Woeginger, G.: There is no asymptotic PTAS for two-dimensional vector packing. Inf. Process. Lett. 64(6), 293–297 (1997)

Copyright information

© The Author(s) 2011

Authors and Affiliations

  1. Department of Informatics, University of Bergen, Bergen, Norway
  2. Department of Information and Computing Sciences, Utrecht University, Utrecht, The Netherlands
