Structure of Polynomial-Time Approximation
Abstract
Approximation schemes are commonly classified as being either a polynomial-time approximation scheme (ptas) or a fully polynomial-time approximation scheme (fptas). To properly differentiate between approximation schemes for concrete problems, several subclasses have been identified: (optimum-)asymptotic schemes (ptas^{∞}, fptas^{∞}), efficient schemes (eptas), and size-asymptotic schemes. We explore the structure of these subclasses, their mutual relationships, and their connection to the classic approximation classes. We prove that several of the classes are in fact equivalent. Furthermore, we prove the equivalence of eptas to so-called convergent polynomial-time approximation schemes. The results are used to refine the hierarchy of polynomial-time approximation schemes considerably and demonstrate the central position of eptas among approximation schemes.
We also present two ways to bridge the hardness gap between asymptotic approximation schemes and classic approximation schemes. First, using notions from fixed-parameter complexity theory, we provide new characterizations of when problems have a ptas or fptas. Simultaneously, we prove that a large class of problems (including all MAX SNP-complete problems) cannot have an optimum-asymptotic approximation scheme unless P=NP, thus strengthening results of Arora et al. (J. ACM 45(3):501–555, 1998). Second, we identify a new property exhibited by many optimization problems: pumpability. With this notion, we considerably generalize several problem-specific approaches to improve the effectiveness of approximation schemes with asymptotic behavior.
Keywords
Efficient computation · NP-optimization problems · Polynomial-time approximation schemes · EPTAS · Asymptotic polynomial-time approximation schemes · Approximation-preserving reductions · Structure of complexity classes

1 Introduction
In the theory and practice of hard NP-optimization problems, approximation schemes are widely used for efficiently finding solutions within any specified relative error ϵ of the optimum. Paz and Moran [36] classified these schemes into polynomial-time approximation schemes (ptas) and fully polynomial-time approximation schemes (fptas). However, the theory of approximation algorithms has led to several other useful classes of schemes, including optimum-asymptotic (ptas^{∞}, fptas^{∞}), efficient (eptas), and size-asymptotic (ptas^{ω}, fptas^{ω}) approximation schemes. It is the goal of this paper to expose the surprising connections between these seemingly unrelated notions and to study their deeper structural properties.
The foremost conclusion that follows from the results of this paper is that efficient polynomial-time approximation schemes (eptas) hold a central position in the landscape of polynomial-time approximation schemes. An eptas has a running time of the form f(1/ϵ)⋅n^{O(1)}, where n denotes the instance size and f is some computable function. The class of optimization problems admitting an eptas is called EPTAS. We show that EPTAS is closely related to the classes of problems admitting ‘asymptotic’ approximation schemes, where the relative error ϵ is attained only asymptotically, i.e. for instances of large size or with a large optimum. Optimum-asymptotic approximation schemes are well known, for instance from the study of approximation algorithms for bin packing problems (see e.g. [11, 12, 27]).
Concretely, we prove that all commonly distinguished classes of problems with an asymptotic polynomial-time approximation scheme are superclasses of EPTAS. Two of these classes, FPTAS^{ω} and FIPTAS^{ω}, both corresponding to size-asymptotic schemes, even coincide with the class EPTAS (see Sect. 3). This settles one of the main questions that motivated this paper: recent research [40, 41, 42] had shown that natural problems having an fptas^{ω} exist, but their position in the hierarchy of approximable problems was hitherto unclear.
Moreover, we distinguish the notion of convergent polynomial-time approximation schemes, in which the approximation ratio improves by some function of the instance size as instances grow. We show that the corresponding class of optimization problems is also equivalent to EPTAS (see Sect. 4). This strengthens the assertion that EPTAS is central in the landscape of problems admitting polynomial-time approximation schemes and deepens the understanding of this class.
We also consider the characteristics of asymptotic approximation schemes. In general, fully polynomial-time approximation schemes have a running time depending (polynomially) on both the instance size and 1/ϵ. If the running time depends only on the instance size, a scheme is called a fully input-polynomial-time approximation scheme (fiptas). We show that if a problem admits an asymptotic fptas (an fptas^{∞} or an fptas^{ω}), then it admits an asymptotic fiptas of the same kind (a fiptas^{∞} or a fiptas^{ω}, respectively). Hence the corresponding classes coincide, demonstrating an important property of the notions of size- and optimum-asymptotic approximation schemes.
In the second part of the paper, we discuss several ways to overcome the hardness gap between asymptotic approximation schemes and classic approximation schemes. Section 6 employs ideas from fixed-parameter complexity theory. The results of this section lead to a new characterization of problems having a ptas or fptas by means of fixed-parameter tractability and optimum-asymptotic approximation schemes. This characterization is subsequently used to prove that a large number of problems cannot have an optimum-asymptotic polynomial-time approximation scheme (ptas^{∞}) unless P=NP. This includes all MAX SNP-complete problems. From Fig. 1, we can see that this strengthens a result of Arora et al. [1], who only proved that such problems cannot have a ptas.
Additionally, we study reductions that preserve approximability by optimum-asymptotic approximation schemes. We show that several results in the literature on the non-existence of optimum-asymptotic polynomial-time approximation schemes implicitly use such a reduction and thus follow from the general approach presented here. Furthermore, we prove that Minimum Bin Packing cannot have such a reduction from Maximum Satisfiability unless P=NP. This augments results by Crescenzi et al. [13], who showed that no approximation-preserving reduction exists in this case unless the polynomial hierarchy collapses.
Finally, we propose the notion of pumpability in Sect. 7. Problems having asymptotic polynomial-time approximation schemes can sometimes be ‘pumped’ to a form that admits a ptas, eptas, or even an fptas if the optimization problem under consideration is pumpable. This is useful for completing Fig. 1, but also for improving the effectiveness of asymptotic approximation schemes. Furthermore, we provide insight into which problems are pumpable and show, for instance, that all problems in MAX SNP are pumpable.
2 Preliminaries
To make formal statements about equivalences among classes of approximation schemes, we have to be precise about the machine model we use, the type of problems that are considered, and the definitions of the studied classes. Throughout the paper, we assume the basic random access machine model with logarithmic costs and representations in bits, which implies that within cost (time) t, the machine can output at most t bits. This machine model is polynomially equivalent to the classic Turing machine and thus defines the classic complexity classes up to polynomial time factors. Furthermore, all numbers are assumed to be rationals, unless otherwise specified.
Using this model, we study optimization problems following the definitions as can be found for instance in Ausiello et al. [2].
Definition 2.1
An optimization problem P is a four-tuple (I_{P}, S_{P}, m_{P}, goal_{P}), consisting of:
- a set of instances (bitstrings) I_{P};
- a function S_{P} that maps instances of P to (non-empty) sets of feasible solutions (bitstrings) for these instances;
- an objective function m_{P} that gives for each pair (x,y), consisting of an instance x∈I_{P} and a solution y∈S_{P}(x), a positive integer m_{P}(x,y), the objective value;
- a goal goal_{P}∈{min,max}, depending on whether P is a minimization or a maximization problem.
Definition 2.2
An optimization problem P belongs to the class NPO if:
- the set of instances I_{P} can be recognized in polynomial time;
- there is a (monotone nondecreasing) polynomial q_{P} such that |y| ≤ q_{P}(|x|) for any instance x∈I_{P} and any feasible solution y∈S_{P}(x);
- for any instance x∈I_{P} and any y with |y| ≤ q_{P}(|x|), one can decide in polynomial time whether y∈S_{P}(x);
- there is a (monotone nondecreasing) polynomial r_{P} such that the objective function m_{P} is computable in r_{P}(|x|,|y|) time for any x∈I_{P} and y∈S_{P}(x).
Note that for any problem P∈NPO and any n∈ℕ, the maximum objective value over instances of size n, i.e. max{m_{P}(x,y) ∣ x∈I_{P}, |x|=n, y∈S_{P}(x)}, is bounded by \(2^{r_{P}(n, q_{P}(n))}\), as the objective value of any x∈I_{P} and y∈S_{P}(x) can be represented by at most r_{P}(|x|,|y|) ≤ r_{P}(n, q_{P}(n)) bits. Let \(M_{P}(n) = 2^{r_{P}(n, q_{P}(n))}\).
Lemma 2.3
For any NPO-problem P, for any x∈I_{P}, and for any n∈ℕ, if \(m^{*}_{P}(x) > M_{P}(n)\), then |x| > n.
All problems considered below will be in NPO and all considered classes will be subclasses of NPO. From now on, we drop the subscript P if P is clear from the context.
If one equates NPO to NP, then PO is the equivalent of P: PO is the class of problems in NPO for which an optimal solution y^{∗}∈S^{∗}(x) can be computed in time polynomial in |x| for any x∈I. Paz and Moran [36] proved that P=NP if and only if PO=NPO. Because it is not expected that all problems in NPO also fall in PO, several classes have been defined that contain NPO-problems for which an approximate solution can be found in polynomial time. Approximation algorithms are classified by two properties: their running time and their approximation ratio.
Definition 2.4
An NPO-problem belongs to one of the classes of Table 1 if and only if it admits an approximation algorithm with the running time and approximation ratio given there (for some constant c in the case of APX, and for every ϵ>0 in the other cases).

Table 1 Problem classes and the distinguishing properties of the approximation algorithms admitted by problems in a particular class

Problem class  Running time  Approx. ratio

APX  Polynomial in |x|  c
PTAS  Polynomial in |x| (for every fixed ϵ)  (1+ϵ)
FPTAS  Polynomial in |x| and 1/ϵ  (1+ϵ)
FIPTAS  Polynomial in |x|  (1+ϵ)
PO  Polynomial in |x|  1
The class FIPTAS (Fully Input-Polynomial-Time Approximation Scheme) in Table 1 is a new class. Clearly, FIPTAS=PO (use any ϵ < 1/M(|x|)), but the reason for defining this class will become apparent later.
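The observation behind FIPTAS = PO can be made concrete: calling a scheme with ϵ < 1/M(|x|) leaves no room between distinct integer objective values. A minimal sketch, where `toy_scheme` and the bound `M` are hypothetical stand-ins for an fiptas and for M(|x|):

```python
import math

def exact_from_scheme(scheme, x, M):
    """If every objective value is a positive integer bounded by M, calling a
    (1+eps)-approximation scheme with eps < 1/M forces an exact answer for a
    minimization problem: m(y) <= (1+eps)*OPT < OPT + 1 implies m(y) = OPT."""
    eps = 1.0 / (M + 1)
    return scheme(x, eps)

# hypothetical scheme for a toy minimization problem with OPT = x: it returns
# the worst integer objective value still within the (1+eps) guarantee
def toy_scheme(x, eps):
    return math.floor((1 + eps) * x)

M = 1000
assert all(exact_from_scheme(toy_scheme, opt, M) == opt for opt in range(1, M + 1))
```

Running the fiptas at this accuracy remains polynomial in |x| because its running time does not depend on 1/ϵ at all.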
A relatively new class of increasing interest is EPTAS [3, 7].
Definition 2.5
Algorithm \(\mathcal {A}\) is an efficient polynomial-time approximation scheme (eptas) for problem P∈NPO if there is a computable function f:ℚ_{≥1}→ℕ such that for any x∈I_{P} and any ϵ>0, \(\mathcal {A}(x, \epsilon)\) runs in time f(1/ϵ) times a fixed polynomial in |x| and the solution output by \(\mathcal {A}(x, \epsilon)\) has approximation ratio (1+ϵ). An NPO-problem is in the class EPTAS if and only if it has an eptas.
The popularity of eptas is due not only to the separate dependence on 1/ϵ and the instance size in the running time, but also to the beautiful relation to the widely researched class FPT: any problem admitting an eptas is also in FPT under its standard parameterization [3, 7]. An interesting exploration of the type of problems that admit an eptas may be found in Cai et al. [5].
It is well known that PO ⊆ FPTAS ⊆ EPTAS ⊆ PTAS ⊆ APX ⊆ NPO. In most cases, the inclusion is strict unless P=NP; the exception is EPTAS ⊆ PTAS, which is strict unless FPT=W[1] [3, 7]. The question whether FPT=W[1] is an open problem in fixed-parameter complexity theory akin to the question whether P=NP in classic complexity theory (see e.g. Downey and Fellows [17]).
3 Asymptotic Approximation Schemes
Informally, an approximation scheme is asymptotic if it gives a (1+ϵ)approximation under a condition that is asymptotically true. We study two types of asymptotic approximation schemes. We first consider approximation schemes where the size of the instance needs to be large enough. The other type is treated in Sect. 5.
Definition 3.1
An approximation scheme \(\mathcal {A}\) for P∈NPO is size-asymptotic if there is a computable function a:ℚ_{≥1}→ℕ (the threshold function) such that for any ϵ>0 and any x∈I_{P}, it returns a y∈S(x), and if |x| ≥ a(1/ϵ), then y is within (1+ϵ) of m^{∗}(x).
This definition leads to the following classes of sizeasymptotic approximation schemes.
Problem class  Running time  Approx. ratio

PTAS^{ω}  Polynomial in |x| (for every fixed ϵ)  (1+ϵ) if |x| ≥ a(1/ϵ)
FPTAS^{ω}  Polynomial in |x| and 1/ϵ  (1+ϵ) if |x| ≥ a(1/ϵ)
FIPTAS^{ω}  Polynomial in |x|  (1+ϵ) if |x| ≥ a(1/ϵ)
Example 3.2
Maximum Independent Set has a fiptas^{ω} on bounded-ply disk graphs [41, 42]. Disk graphs are intersection graphs of disks in the plane: given a set of disks, each vertex of the graph corresponds to a disk and there is an edge between two vertices if the corresponding disks intersect. A set of disks has ply γ if γ is the smallest integer such that any point of the plane is overlapped by at most γ disks. One can find in O(|x|^{10} log^{4}|x|) time an independent set of an instance x of this problem. If an odd integer k can be chosen such that max{5, 4(1+ϵ)/ϵ} ≤ k ≤ c_{1} log|x|/log(c_{2}γ) (where c_{1}, c_{2} are fixed constants), then this independent set will be within (1+ϵ) of the optimum. If γ=γ(x)=O(|x|^{o(1)}), such an integer exists if |x| ≥ a(1/ϵ) for some function a.
We start with some easy observations about the sizeasymptotic classes.
Proposition 3.3
- FIPTAS^{ω} ⊆ FPTAS^{ω} ⊆ PTAS^{ω} and
- FIPTAS ⊆ FIPTAS^{ω}, FPTAS ⊆ FPTAS^{ω}, PTAS ⊆ PTAS^{ω}.
The relations given by this proposition are straightforward, and one might expect the inclusions to be strict under some hardness condition. However, this turns out not to be true for all of them: we can prove some very interesting equivalences and tie these new classes to existing approximation classes, in particular to EPTAS.
Theorem 3.4
EPTAS=FPTAS^{ ω }=FIPTAS^{ ω }.
Proof
We first show that EPTAS ⊆ FIPTAS^{ω}. Let P∈EPTAS and let \(\mathcal{A}\) be an eptas for P with running time at most p(|x|)⋅f(1/ϵ) for some computable function f and polynomial p. Construct a fiptas^{ω} for P as follows. Given an arbitrary instance x∈I_{P} and an arbitrary ϵ>0, run \(\mathcal{A}(x, \epsilon)\) for p(|x|)⋅|x| time steps. If \(\mathcal{A}(x, \epsilon)\) finishes, return its solution. Otherwise, return \(\mathcal{A}(x, 1/2)\), which runs in time p(|x|)⋅f(2), polynomial in |x|. This algorithm clearly runs in time polynomial in |x| and always returns a feasible solution. Furthermore, if |x| ≥ f(1/ϵ), then p(|x|)⋅f(1/ϵ) ≤ p(|x|)⋅|x|, so \(\mathcal {A}(x, \epsilon)\) always finishes and returns a feasible solution with approximation ratio (1+ϵ). Hence we constructed a fiptas^{ω} for P with threshold function a=f.
We next prove that FPTAS^{ω} ⊆ EPTAS. Let P∈FPTAS^{ω} and let \(\mathcal{A}\) be an fptas^{ω} for P with threshold function a. Construct an eptas as follows. Given an arbitrary instance x∈I_{P} and an arbitrary ϵ>0, compute a(1/ϵ); by assumption, a is computable, and the time this takes is some computable function of 1/ϵ alone. If |x| ≥ a(1/ϵ), simply compute and return \(\mathcal {A}(x, \epsilon)\) in time polynomial in |x| and 1/ϵ. If |x| < a(1/ϵ), proceed as follows. As FPTAS^{ω} ⊆ NPO, any feasible solution for x has size at most q(|x|) for some fixed polynomial q. Furthermore, given any y with |y| ≤ q(|x|), one can determine in polynomial time whether y∈S_{P}(x), and the objective value of a feasible solution can also be computed in polynomial time. Hence, by exhaustive search, one can find a \(y^{*} \in S_{P}^{*}(x)\) in time poly(|x|)⋅2^{q(|x|)}⋅r_{P}(|x|, q(|x|)) ≤ 2^{q(a(1/ϵ))}⋅poly(a(1/ϵ)). The result is an eptas for P with an appropriately defined function f.
As FIPTAS^{ ω } ⊆ FPTAS^{ ω }, we have EPTAS ⊆ FIPTAS^{ ω } ⊆ FPTAS^{ ω } ⊆ EPTAS, and hence the classes must be equal. □
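The first half of the proof — run the eptas under a budget that is polynomial in |x| alone and fall back on a fixed-accuracy run when the budget is exhausted — can be sketched as follows. The step-yielding generator is only a toy model of "running \(\mathcal{A}\) for p(|x|)⋅|x| time steps", and the running-time factor `f` is an illustrative assumption:

```python
def run_budgeted(gen, budget):
    """Advance a step-yielding computation at most `budget` steps; return its
    result if it finishes in time, None otherwise."""
    try:
        for _ in range(budget):
            next(gen)
    except StopIteration as done:
        return done.value
    return None

def run_full(gen):
    """Run a step-yielding computation to completion."""
    try:
        while True:
            next(gen)
    except StopIteration as done:
        return done.value

def eptas(x, eps, f):
    """Hypothetical eptas modeled as a generator: f(1/eps) * |x| 'steps',
    then a (1+eps)-approximate answer."""
    for _ in range(f(1.0 / eps) * len(x)):
        yield
    return ("approx-ratio", 1.0 + eps)

def fiptas_omega(x, eps, f, p=lambda n: n):
    budget = p(len(x)) * len(x)               # polynomial in |x| alone
    result = run_budgeted(eptas(x, eps, f), budget)
    if result is not None:
        return result                         # guaranteed whenever |x| >= f(1/eps)
    return run_full(eptas(x, 0.5, f))         # fixed-accuracy fallback, still polynomial

f = lambda r: int(r) ** 2                     # assumed running-time factor
assert fiptas_omega("#" * 10, 0.25, f) == ("approx-ratio", 1.5)   # budget exceeded: fallback
assert fiptas_omega("#" * 20, 0.25, f) == ("approx-ratio", 1.25)  # |x| >= f(4) = 16
```

The small instance falls back on the fixed-accuracy run, while the larger instance is big enough for the budget to cover the requested accuracy, exactly mirroring the threshold a=f of the proof.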
The exponential increase in running time in the reduction from an fptas^{ω} to an eptas can sometimes be reduced by using an exact or fixed-parameter algorithm specific to the problem. As we show in Sect. 7, one can avoid such an increase altogether for many problems.
The equivalence of F(I)PTAS^{ ω } and EPTAS allows an indirect proof of the existence of an eptas for a problem, where a direct proof seems more difficult.
Example 3.5
Maximum Independent Set on disk graphs of bounded ply has a fiptas^{ ω } (Example 3.2) and thus, as a consequence of Theorem 3.4, an eptas.
We now show that PTAS^{ ω } and PTAS are in fact equivalent as well.
Theorem 3.6
PTAS=PTAS^{ ω }.
Proof
By Proposition 3.3 it suffices to prove that PTAS^{ω} ⊆ PTAS. Let P∈PTAS^{ω} and let \(\mathcal {A}\) be a ptas^{ω} for P with threshold function a. For an arbitrary instance x∈I_{P} and an arbitrary ϵ>0, compute a(1/ϵ). If |x| ≥ a(1/ϵ), compute and return \(\mathcal {A}(x, \epsilon)\). Otherwise, apply the same exhaustive-search technique as in the proof of Theorem 3.4. The result is a ptas for P. □
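The exhaustive-search step used in both proofs — enumerate every candidate bitstring y with |y| ≤ q(|x|), keep the feasible ones, and return the best — can be sketched as follows; the toy problem, its feasibility test, and its objective are illustrative assumptions:

```python
from itertools import product

def exhaustive_optimum(x, q_x, is_feasible, m, goal):
    """Brute force over all bitstrings y with 1 <= |y| <= q_x.  Exponential in
    q_x, but when invoked only for |x| < a(1/eps) this is bounded by a
    function of 1/eps alone, as in the proofs above."""
    best = None
    for length in range(1, q_x + 1):
        for bits in product("01", repeat=length):
            y = "".join(bits)
            if is_feasible(x, y) and (best is None or
                                      goal(m(x, y), m(x, best)) == m(x, y)):
                best = y
    return best

# toy maximization problem: feasible solutions are all nonempty bitstrings of
# length <= 3, the objective is the number of 1-bits plus one
y = exhaustive_optimum("instance", 3,
                       lambda x, y: True,
                       lambda x, y: y.count("1") + 1,
                       max)
assert y == "111"
```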
4 Convergent Approximation Schemes
Size-asymptotic approximation schemes have a threshold function, depending on 1/ϵ, such that a good approximate solution is guaranteed if the size of the instance exceeds the threshold. It might seem that the quality of the computed solution can be arbitrarily bad for small instances, while from a certain instance size onward, the quality suddenly becomes very good. Practical examples of size-asymptotic approximation schemes show however that the approximation ratio can improve steadily as the instance size increases, eventually converging to 1.
Surprisingly, this also holds in general. In this section, we define and study these convergent approximation schemes more precisely. The main result is that a problem has an fptas^{ω} if and only if it has a convergent polynomial-time approximation scheme.
In the following, we use \(\mathcal {F}^{*}\) to denote the family of all monotone nondecreasing computable functions f:ℕ→ℚ_{≥1} with liminf_{ n→∞} f(n)=∞. Let \(\mathcal {P}\) denote the family of those functions in \(\mathcal {F}^{*}\) that are bounded by a (monotone) polynomial.
Definition 4.1
Let \(f \in \mathcal {F}^{*}\). An approximation scheme \(\mathcal {A}\) for P∈NPO is said to be ϵ-convergent w.r.t. f if for any ϵ>0 and any x∈I_{P}, \(\mathcal {A}(x,\epsilon)\) returns a y∈S(x) within (1+ϵ/f(|x|)) of m^{∗}(x).
This definition gives rise to schemes ptas[f], fptas[f], and fiptas[f] and the classes PTAS[f], FPTAS[f], and FIPTAS[f] in the natural way. We also define a special subclass for the case when ϵ=1.
Definition 4.2
Let \(f \in \mathcal {F}^{*}\). Algorithm \(\mathcal {A}\) is a convergent polynomial-time approximation scheme w.r.t. f (denoted as pconv[f]) if for any x∈I, \(\mathcal {A}(x)\) runs in time polynomial in |x| and returns a y∈S(x) within (1+1/f(|x|)) of m^{∗}(x). The corresponding class is PCONV[f].
Example 4.3
Chiba et al. [10] give a pconv[\(O(\sqrt{\log\log|x|})\)] for Maximum Independent Set on planar graphs. This follows from a general O(|x| log|x|) algorithm giving a \(1 + 1/O(\sqrt{\log\log|x|})\)-approximation for several hereditary problems on planar graphs. Demaine et al. [16] give a pconv[log|x|] for Maximum Independent Set on single-crossing-minor-free graphs (a generalization of planar graphs).
Definition 4.4
For any family of functions \(\mathcal {F} \subseteq \mathcal {F}^{*}\), let \(\mathrm{PCONV}[\mathcal {F}] = \bigcup_{f \in \mathcal {F}}\mathrm{PCONV}[f]\). We similarly define \(\mathrm{PTAS}[\mathcal {F}]\), \(\mathrm{FPTAS}[\mathcal {F}]\), and \(\mathrm{FIPTAS}[\mathcal {F}]\).
We first state some straightforward relations.
Proposition 4.5
- \(\mathrm{FIPTAS}[\mathcal {F}^{*}] = \mathrm{PO}\), \(\mathrm{FPTAS}[\mathcal {F}^{*}] \subseteq \mathrm{FPTAS}\), \(\mathrm{PTAS}[\mathcal {F}^{*}] \subseteq \mathrm{PTAS}\),
- for any \(f \in \mathcal {F}^{*}\), FIPTAS[f] ⊆ FPTAS[f] ⊆ PTAS[f] ⊆ PCONV[f], and
- for any \(f,f' \in \mathcal {F}^{*}\) with f(n) ≤ f′(n) for all n∈ℕ, FIPTAS[f′] ⊆ FIPTAS[f], FPTAS[f′] ⊆ FPTAS[f], PTAS[f′] ⊆ PTAS[f], and PCONV[f′] ⊆ PCONV[f].
Looking closely at the papers cited in Example 4.3, one can observe that the algorithms they describe are actually both a pconv[f] (for certain f) and an eptas. This is not a coincidence!
Lemma 4.6
If P∈FPTAS^{ ω }, then P∈PCONV[f] for some \(f \in \mathcal {P}\).
Proof
Let P∈FPTAS^{ω} and let \(\mathcal {A}\) be an fptas^{ω} for P delivering a (1+ϵ)-approximate solution if |x| ≥ a(1/ϵ), for some computable function a. We use \(\mathcal {A}\) to construct a pconv[f] for a suitably chosen function f: let f(n) = max({1} ∪ {i ≤ n ∣ a(j) ≤ n for all j ≤ i}). Then f is monotone nondecreasing, computable, bounded by a polynomial, and liminf_{n→∞} f(n)=∞, so \(f \in \mathcal {P}\).
Observe that because FPTAS^{ω} ⊆ PTAS, P must have a 2-approximation algorithm \(\mathcal {B}\) running in time polynomial in the size of the input. Now consider the following algorithm \(\mathcal {A'}(x)\) for instances x∈I_{P}: if f(|x|)=1, return \(\mathcal {B}(x)\); otherwise return \(\mathcal {A}(x, \epsilon)\) with ϵ=1/f(|x|). We claim that \(\mathcal {A'}(x)\) is a pconv[f] for P with f as defined above. Indeed, if f(|x|)=1, then \(\mathcal {B}(x)\) delivers a 2 = (1+1/f(|x|))-approximation, and if f(|x|)=i ≥ 2, then a(i) ≤ |x|, so \(\mathcal {A}(x, 1/i)\) delivers a (1+1/f(|x|))-approximation in time polynomial in |x| and f(|x|) ≤ |x|, hence polynomial in |x|. □
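The resulting algorithm \(\mathcal {A'}\) is a simple dispatch. In this sketch, `fptas_omega`, `two_approx`, and `f` are hypothetical stand-ins for \(\mathcal {A}\), \(\mathcal {B}\), and the function constructed in the proof:

```python
def pconv(x, fptas_omega, two_approx, f):
    """Lemma 4.6 sketch: a pconv[f] built from an fptas^omega.  f is chosen so
    that a(f(|x|)) <= |x| whenever f(|x|) >= 2, which makes the (1+1/f(|x|))
    guarantee of the fptas^omega kick in; below that, a 2-approximation
    already equals a (1+1/f(|x|))-approximation, since f(|x|) = 1."""
    fx = f(len(x))
    if fx == 1:
        return two_approx(x)
    # polynomial in |x| and 1/eps = f(|x|) <= |x|, hence polynomial in |x|
    return fptas_omega(x, 1.0 / fx)

# toy stand-ins: the fptas^omega reports the ratio it would guarantee,
# and we assume a threshold a(1/eps) = 1/eps, so f(n) = max(n, 1)
result = pconv("ab" * 4,
               lambda x, eps: ("ratio", 1.0 + eps),
               lambda x: ("ratio", 2.0),
               lambda n: max(n, 1))
assert result == ("ratio", 1.125)
```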
The lemma implies that \(\mathrm{FPTAS}^{\omega}\subseteq \mathrm{PCONV}[\mathcal {F}^{*}]\). By Theorem 3.4, this in turn implies that \(\mathrm{EPTAS} \subseteq \mathrm{PCONV}[\mathcal {F}^{*}]\). We can give a direct proof of this consequence, which has the additional advantage that the function of 1/ϵ in the running time of the eptas is only needed for the analysis and not for the algorithmic part of the reduction.
Lemma 4.7
If P∈EPTAS, then P∈PCONV[f] for some \(f \in \mathcal {P}\).
Proof
Let \(\mathcal {A}\) be an eptas for P, which runs in time g(1/ϵ)⋅p(|x|) for any x∈I_{P} and any ϵ>0, for some function g and polynomial p. We assume that g and p are computable and that p is known (g is not necessarily known). For i≥1, let n_{i} be the smallest integer with n_{i} ≥ g(i) and n_{i} > n_{i−1} (where n_{0}=0), and let f(n) = max({1} ∪ {i ∣ n_{i} ≤ n}); then f is monotone nondecreasing, computable, and satisfies liminf_{n→∞} f(n)=∞ and f(n) ≤ n (as n_{i} ≥ i). Now run \(y_{1} = \mathcal {A}(x, 1)\) to completion and \(y_{2} = \mathcal {A}(x, 1/2)\), …, \(y_{|x|+1} = \mathcal {A}(x, 1/(|x|+1))\) for at most |x|⋅p(|x|) time steps each, and return the best y_{k} (with respect to goal_{P}) over all k for which \(\mathcal {A}\) finished within the allotted time. This algorithm clearly always outputs a feasible solution in polynomial time. We claim that it yields a (1+1/f(|x|))-approximation.
If f(|x|)=1 then, since \(\mathcal {A}(x,1)\) runs to completion, the algorithm delivers at least a (1+1/f(|x|))-approximation. If f(|x|) is equal to i for some i≥2, then by the construction of f and n_{i}, |x| ≥ n_{i} ≥ g(i) and i ≤ |x|+1. Hence certainly \(\mathcal {A}(x, 1/i)\) runs to completion within the time limit set for it, and the algorithm returns at least a (1+1/i)-approximation of the optimum. But then the algorithm returns at least a (1+1/f(|x|))-approximation of the optimum. □
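The algorithmic part of this proof indeed uses no knowledge of g. A sketch, with the eptas again modeled as a hypothetical step-yielding generator and g a toy running-time factor:

```python
def run_budgeted(gen, budget):
    """Advance a step-yielding computation at most `budget` steps; return its
    result if it finishes in time, None otherwise."""
    try:
        for _ in range(budget):
            next(gen)
    except StopIteration as done:
        return done.value
    return None

def eptas(x, eps, g):
    """Hypothetical eptas as a generator: g(1/eps) * p(|x|) 'steps' with
    p(n) = n, then a (1+eps)-approximate objective value (OPT = 1 here)."""
    for _ in range(g(1.0 / eps) * len(x)):
        yield
    return 1.0 + eps

def pconv_from_eptas(x, g):
    n = len(x)
    finished = [run_budgeted(eptas(x, 1.0 / k, g), n * n)  # budget |x| * p(|x|)
                for k in range(1, n + 2)]
    return min(v for v in finished if v is not None)       # goal = min here

# with g(1/eps) = (1/eps)^2 and |x| = 9, the runs for k = 1, 2 finish within
# the budget, so the best finished ratio is 1 + 1/2; for larger |x|, larger k
# finish as well and the attained ratio converges to 1
assert pconv_from_eptas("#" * 9, lambda r: int(round(r)) ** 2) == 1.5
```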
We now prove the converse relation.
Lemma 4.8
If \(P \in \mathrm{PCONV}[\mathcal {F}^{*}]\), then P∈FIPTAS^{ ω }.
Proof
Let \(\mathcal {B}\) be a pconv[f] for P for some \(f \in \mathcal {F}^{*}\). Define \(\mathcal {A}(x, \epsilon) = \mathcal {B}(x)\) (ignoring ϵ) and let a(1/ϵ) be the smallest integer n such that f(n) ≥ 1/ϵ; such an n exists because f tends to infinity. Clearly, a is a computable function and algorithm \(\mathcal {A}\) runs in time polynomial in |x| for any x∈I_{P} and any ϵ>0. Furthermore, if |x| ≥ a(1/ϵ), then ϵ ≥ 1/f(|x|) by the definition of a and the monotonicity of f. Hence (1+1/f(|x|)) ≤ (1+ϵ) and thus \(\mathcal {A}\) returns a (1+ϵ)-approximate solution. □
Note that the function a in the proof of Lemma 4.8 is essentially the inverse of f (not precisely the inverse, as f need not be invertible). Similarly, the function f constructed in the proof of Lemma 4.7 is the ‘inverse’ of g.
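This ‘inverse’ can be computed by linear search, which terminates precisely because f is computable, nondecreasing, and tends to infinity; the f below is an arbitrary illustrative member of \(\mathcal {F}^{*}\):

```python
def threshold(f, inv_eps):
    """a(1/eps): the smallest n with f(n) >= 1/eps.  Termination relies on f
    tending to infinity; monotonicity of f then gives f(|x|) >= 1/eps for
    every |x| >= a(1/eps)."""
    n = 1
    while f(n) < inv_eps:
        n += 1
    return n

f = lambda n: n // 3 + 1           # nondecreasing and unbounded
assert threshold(f, 4) == 9        # smallest n with n//3 + 1 >= 4
assert all(f(n) >= 4 for n in range(9, 100))
```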
Using Theorem 3.4 and Lemmas 4.7 and 4.8, we obtain the following key theorem.
Theorem 4.9
\(\mathrm{EPTAS} = \mathrm{FPTAS}^{\omega}= \mathrm{FIPTAS}^{\omega}=\mathrm{PCONV}[\mathcal {F}^{*}]\).
To state this theorem informally: for polynomial-time approximation schemes, having a single factor depending only on 1/ϵ in the running time is equivalent to having such a function as a threshold for yielding a (1+ϵ)-approximation, which in turn is equivalent to having the attained approximation ratio improve to 1 as the instance size increases.
4.1 Detailed Relations for Specific Functions
If we have specific knowledge of the function f that appears in the approximation ratio of a convergent approximation scheme, we can prove some more detailed relations. First we consider the family of classes where this function is bounded by a polynomial.
Theorem 4.10
FPTAS=FPTAS[f] for any \(f \in \mathcal {P}\).
Proof
By Proposition 4.5, it suffices to show that FPTAS ⊆ FPTAS[f] for any \(f \in \mathcal {P}\). Let P∈FPTAS and let \(\mathcal {A}\) be an fptas for P. Let f be upper bounded by the (monotone) polynomial p, i.e. f(n) ≤ p(n) for all n>0. Consider an x∈I_{P} and an ϵ>0. Compute p(|x|) and run \(\mathcal {A}(x, \epsilon/p(|x|))\). This algorithm runs in time polynomial in |x| and 1/ϵ and returns a (1+ϵ/p(|x|)) ≤ (1+ϵ/f(|x|))-approximate solution. Hence P∈FPTAS[f]. □
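The proof's scaling trick — absorb the convergence factor into ϵ — in a minimal sketch, where the fptas is a hypothetical stand-in returning the worst value allowed by its guarantee:

```python
def fptas_f(x, eps, fptas, p):
    """Theorem 4.10 sketch: a (1 + eps/f(|x|))-approximation from a plain
    fptas, for any f bounded by the monotone polynomial p, by calling the
    fptas at accuracy eps/p(|x|).  Still polynomial in |x| and 1/eps."""
    return fptas(x, eps / p(len(x)))

# toy check with OPT = |x|, f(n) = n, and bounding polynomial p(n) = n
x = "a" * 20
val = fptas_f(x, 0.5, lambda x, e: (1 + e) * len(x), lambda n: n)
assert val <= (1 + 0.5 / len(x)) * len(x)
```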
Corollary 4.11
\(\mathrm{FPTAS}[\mathcal {P}] = \bigcup_{f \in \mathcal {P}} \mathrm{FPTAS}[f] = \bigcap_{f \in \mathcal {P}} \mathrm{FPTAS}[f]\).
A problem P∈NPO is polynomially-bounded if there is a polynomial p:ℕ→ℕ such that m(x,y) ≤ p(|x|) for all x∈I_{P} and y∈S_{P}(x) [2, 20].
Lemma 4.12
Let P be an NP-hard, polynomially-bounded optimization problem and let p be the corresponding bounding (monotone) polynomial plus 1. Then P∉PCONV[p], unless P=NP.
Proof
Suppose, to the contrary, that P∈PCONV[p] and let \(\mathcal {A}\) be a pconv[p] for P. Consider a minimization problem P (the maximization case is analogous) and any x∈I_{P}. The solution y returned by \(\mathcal {A}(x)\) satisfies m(x,y) ≤ (1+1/p(|x|))⋅m^{∗}(x) = m^{∗}(x) + m^{∗}(x)/p(|x|) < m^{∗}(x)+1, because m^{∗}(x) ≤ p(|x|)−1 < p(|x|). As objective values are integers, y is optimal. Hence the NP-hard problem P can be solved exactly in polynomial time, and P=NP. □
Using this lemma and the easy fact that FPTAS ⊆ PCONV[f] for any \(f \in \mathcal {P}\), we can show the following corollary, which is well known and easy [20].
Corollary 4.13
No NP-hard, polynomially-bounded optimization problem admits an fptas, unless P=NP.
Proof
Let P be an NP-hard, polynomially-bounded optimization problem and p the corresponding bounding polynomial plus 1. Then P∉PCONV[p] and thus P∉FPTAS, unless P=NP. □
For functions in the approximation ratio of convergent schemes that are not polynomial, we can also prove some interesting relations. As noted before, the function f constructed in the proof of Lemma 4.7 is the ‘inverse’ of the function g in the running time of the eptas. This leads to the following general result.
Theorem 4.14
If g is an invertible function, then a problem that has an eptas with running time g(1/ϵ)⋅|x|^{O(1)} also has a pconv[g^{−1}(|x|)].
For instance, if \(g(1/\epsilon) = 2^{2^{1/\epsilon}}\), then the problem has a pconv[log log|x|]. The statement of this theorem is not ‘if and only if’, because Lemma 4.8 only gives a fiptas^{ω}; transforming it to an eptas using Theorem 3.4 increases the running time exponentially in g.
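For the doubly exponential example, the inverse relation is immediate to check numerically:

```python
import math

def g(inv_eps):
    """Running-time factor g(1/eps) = 2^(2^(1/eps))."""
    return 2.0 ** (2.0 ** inv_eps)

def g_inv(n):
    """Its inverse, the convergence function log2 log2 n."""
    return math.log2(math.log2(n))

assert all(abs(g_inv(g(k)) - k) < 1e-9 for k in [1.0, 2.0, 3.0, 4.0])
```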
For some functions, one can even improve on Theorem 4.14.
Theorem 4.15
For any polynomial p of degree s, a problem that has an eptas with running time bounded by 2^{p(1/ϵ)}⋅|x|^{O(1)} also has a ptas[log^{1/s}|x|].
It seems unlikely that an equivalence as in Theorem 4.15 will also hold if g is doublyexponential.
5 OptimumAsymptotic Approximation Schemes
Approximation schemes that give a (1+ϵ)-approximation if the optimum is large enough are quite common. They are particularly well known for Minimum Bin Packing (see e.g. [11, 12, 27]), but similar schemes exist for other problems, e.g. for Minimum Degree Spanning Tree [19] and Chromatic Index [43]. There are also several ways to define what constitutes an optimum-asymptotic approximation scheme [2, 12, 21, 26]. We prove that these definitions are actually equivalent. More interestingly, we revisit the relation of optimum-asymptotic approximation schemes to non-asymptotic approximation schemes and show that EPTAS is a subclass of the optimum-asymptotic classes. Finally, we investigate the relations between various types of optimum-asymptotic approximation schemes.
We first define optimum-asymptotic approximation schemes. The definition we use is both in line with previous definitions of optimum-asymptotic approximation schemes (see e.g. [21]) and with the definition of size-asymptotic approximation schemes (see Definition 3.1).
Definition 5.1
An approximation scheme \(\mathcal {A}\) for P∈NPO is optimum-asymptotic if there is a computable function b:ℚ_{≥1}→ℕ (the threshold function) and an associated constant ϵ_{b} with the property that b(1/ϵ) ≤ 1 for each ϵ ≥ ϵ_{b}, such that for any ϵ>0 and any x∈I_{P}, it returns a y∈S(x), and if m^{∗}(x) ≥ b(1/ϵ), then y is within (1+ϵ) of m^{∗}(x).
This leads to the definition of ptas^{∞}, fptas^{∞}, and fiptas^{∞} schemes and to classes PTAS^{∞}, FPTAS^{∞}, and FIPTAS^{∞}, all defined as expected.
Example 5.2
Karmarkar and Karp [27] give a fiptas^{∞} for Minimum Bin Packing. For an instance x, the returned solution has objective value at most m^{∗}(x)+log^{2} m^{∗}(x) and is found in \(\widetilde{O}(|x|^{8})\) time, where the \(\widetilde{O}\) hides certain polylogarithmic terms.
Note that optimum-asymptotic schemes are defined analogously to size-asymptotic schemes, except for the extra requirement on b(1/ϵ). This technicality seems to be indispensable when trying to prove that the optimum-asymptotic approximation scheme classes are subclasses of APX and behave as the known asymptotic approximation classes. In particular, it facilitates the following crucial property.
Lemma 5.3
Let \(\mathcal {A}\) be a ptas^{∞} for a problem P and let ϵ _{ b } be the constant associated with \(\mathcal {A}\) ’s threshold function b. Then P has a polynomialtime (1+ϵ _{ b })approximation algorithm.
This follows immediately from the fact that m ^{∗}(x)≥1≥b(1/ϵ _{ b }) for any x∈I _{ P }.
Corollary 5.4
PTAS^{∞}⊆APX.
It follows from Queyranne [38] that this inclusion is strict unless P=NP (see also Theorem 7.7).
5.1 Equivalence of Definitions
Lemma 5.3 can be used to prove that the definition of optimum-asymptotic approximation schemes yields classes that are polynomially equivalent to the asymptotic approximation classes defined in the literature [2, 12].
Definition 5.5
Algorithm \(\mathcal {A}\) is an asymptotic polynomial-time approximation scheme (aptas) for P if for any ϵ>0 there is a computable constant c_{ϵ} such that for any x∈I_{P}, \(\mathcal {A}(x,\epsilon)\) runs in time polynomial in |x| (for every fixed ϵ) and the solution y output by \(\mathcal {A}(x, \epsilon)\) is feasible and within (1+ϵ)+c_{ϵ}/m^{∗}(x) of m^{∗}(x). A problem is in the class APTAS if and only if it has an aptas.
One similarly defines afptas and afiptas and the corresponding classes AFPTAS and AFIPTAS in the natural way. These classes are sometimes also referred to as PTAAS, FPTAAS, and FIPTAAS, for (Fully) (Input-)Polynomial-Time Asymptotic Approximation Scheme [12]. The class APTAS has also been called ASYPTAS [44].
Example 5.6
Coffman and Lueker [12] present an afptas (or fptaas) for Extensible Bin Packing with c_{ϵ}=O((1/ϵ)log(1/ϵ)).
Theorem 5.7
APTAS=PTAS^{∞}, AFPTAS=FPTAS^{∞}, and AFIPTAS=FIPTAS^{∞}.
Proof
We first prove that APTAS ⊆ PTAS^{∞}. Let P∈APTAS and let \(\mathcal {A}\) be an aptas for P with constants c_{ϵ}. Running \(\mathcal {A}(x, 1)\) yields a feasible solution within 2+c_{1} of m^{∗}(x); call this algorithm \(\mathcal {B}\), let c=2+c_{1}, and let ϵ_{b}=c−1. Define b(1/ϵ)=⌈2c_{ϵ/2}/ϵ⌉ for ϵ<ϵ_{b} and b(1/ϵ)=1 otherwise, and on input (x,ϵ) return \(\mathcal {B}(x)\) if ϵ≥ϵ_{b} and \(\mathcal {A}(x, \epsilon/2)\) otherwise. The function b is obviously computable, since c_{ϵ} is computable. As \(\mathcal {A}\) and \(\mathcal {B}\) run in time polynomial in |x| for any instance x∈I_{P} and every fixed ϵ>0, it remains to show that the returned solution is a (1+ϵ)-approximate solution on instances x∈I_{P} with m^{∗}(x) ≥ b(1/ϵ). If ϵ≥ϵ_{b}, then \(\mathcal {B}(x)\) ensures a feasible solution within c=1+ϵ_{b} and thus within (1+ϵ) of m^{∗}(x). For ϵ<ϵ_{b}, a feasible solution is returned and if \(m^{*}(x) \geq c_{\epsilon\scriptscriptstyle/2} \cdot 2/\epsilon\), then (1+ϵ/2)+c_{ϵ/2}/m^{∗}(x) ≤ (1+ϵ), assuring that \(\mathcal {A}(x, \epsilon/2)\) delivers a (1+ϵ)-approximation. This implies that P∈PTAS^{∞}. Conversely, a ptas^{∞} with threshold function b, combined (by taking the better of the two solutions) with the (1+ϵ_{b})-approximation algorithm of Lemma 5.3, is an aptas with c_{ϵ}=ϵ_{b}⋅b(1/ϵ): if m^{∗}(x) ≥ b(1/ϵ), the ratio is (1+ϵ), and otherwise it is at most 1+ϵ_{b} ≤ (1+ϵ)+ϵ_{b}⋅b(1/ϵ)/m^{∗}(x). Hence APTAS=PTAS^{∞}.
Similar proofs can be used to show that AFPTAS=FPTAS^{∞} and AFIPTAS=FIPTAS^{∞}. □
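The choice of threshold in the proof above comes from the following inequality, which holds exactly when \(m^{*}(x) \geq 2c_{\epsilon/2}/\epsilon\):

```latex
\Bigl(1+\frac{\epsilon}{2}\Bigr) + \frac{c_{\epsilon/2}}{m^{*}(x)}
  \;\le\; \Bigl(1+\frac{\epsilon}{2}\Bigr) + \frac{c_{\epsilon/2}\,\epsilon}{2\,c_{\epsilon/2}}
  \;=\; 1+\epsilon .
```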
Because of these equivalences, all complexity results proved below for PTAS^{∞}, FPTAS^{∞}, and FIPTAS^{∞} also hold for the classes APTAS, AFPTAS, and AFIPTAS respectively.
We can also make an interesting observation about the class of problems that can be approximated within a constant absolute error.
Definition 5.8
A problem P can be approximated within a constant absolute error if there exists an algorithm \(\mathcal {A}\) and a constant c≥0 such that for any x∈I_{P}, \(\mathcal {A}(x)\) runs in time polynomial in |x| and the solution y output by \(\mathcal {A}\) is feasible and satisfies |m(x,y)−m^{∗}(x)| ≤ c.
Example 5.9
Fürer and Raghavachari [19] give a polynomialtime algorithm that approximates Minimum Degree Spanning Tree within constant absolute error 1.
We now prove that all problems admitting an algorithm with constant absolute error must have a fiptas^{∞} with a threshold function that is bounded by a linear function, and vice versa.
Theorem 5.10
A problem P can be approximated in polynomial time within a constant absolute error if and only if it has a fiptas^{∞} with a threshold function b that is bounded by a linear function.
Proof
Suppose that P can be approximated within a constant absolute error c≥0. Hence it has a (c+1)-approximation algorithm. It then follows from the proof of Theorem 5.7 that P has a fiptas^{∞} with b(1/ϵ)=2c/ϵ and associated constant ϵ _{ b }=c.
5.2 Equivalence and Containment of Optimum-Asymptotic Classes
Consider now the following natural relations.
Proposition 5.11

FIPTAS^{∞}⊆FPTAS^{∞}⊆PTAS^{∞} and

FIPTAS⊆FIPTAS^{∞}, FPTAS⊆FPTAS^{∞}, PTAS⊆PTAS^{∞}.
One might hope or expect that for optimum-asymptotic approximation classes relations analogous to those for size-asymptotic classes hold. We show that this is only partially true. First, we investigate the relation between PTAS^{∞} and PTAS. We know from Theorem 3.6 that PTAS^{ ω } = PTAS, but for optimum-asymptotic classes the analogous result does not hold (unless P=NP). In fact, we prove a stronger result.
Theorem 5.12
\(\mathrm{FIPTAS}^{\infty}\not{\subseteq}\mathrm{PTAS}\), unless P=NP.
Proof
The Minimum Degree Spanning Tree problem admits a fiptas^{∞} (by Example 5.9 and Theorem 5.10), but cannot have a ptas unless P=NP [21]. Hence FIPTAS^{∞} \(\not{\subseteq}\) PTAS. □
As described in Sects. 6 and 7 however, for many problems the existence of a (f(i))ptas^{∞} does imply the existence of a ptas (or better).
Interestingly, there is a close relation between FPTAS^{∞} and FIPTAS^{∞}. In fact, we can prove that the classes are equal.
Theorem 5.13
FPTAS^{∞}=FIPTAS^{∞}.
Proof
By Proposition 5.11, it suffices to show that FPTAS^{∞} ⊆ FIPTAS^{∞}. Let P∈ FPTAS^{∞} and let \(\mathcal {A}\) be an fptas^{∞} for P, such that for some computable function b and for any ϵ>0 and any x∈I _{ P }, \(\mathcal {A}(x, \epsilon)\) runs in at most γ⋅(1/ϵ)^{ s }⋅|x|^{ t } time (for constants γ,s,t>0) and yields a (1+ϵ)-approximate solution if m ^{∗}(x)≥b(1/ϵ). Because FPTAS^{∞} ⊆ APX, P has a polynomial-time c-approximation algorithm \(\mathcal {B}\) for some constant c>1.
Define b′(1/ϵ)=max{b(1/ϵ),M _{ P }((1/ϵ)^{ s/t })+1} and ϵ _{ b }=c−1. On input (x,ϵ), the new algorithm runs \(\mathcal {B}(x)\) if ϵ≥ϵ _{ b }; otherwise, it simulates \(\mathcal {A}(x,\epsilon)\) for at most γ⋅|x|^{2t } steps and returns the solution of \(\mathcal {B}(x)\) if the simulation does not finish. Clearly, b′ is a computable function, the running time of the new algorithm is bounded by a polynomial in |x|, and the algorithm returns a feasible solution. Assume that m ^{∗}(x)≥b′(1/ϵ). If ϵ≥ϵ _{ b }, then \(\mathcal {B}(x)\) returns a feasible solution within a factor c, and thus within (1+ϵ), of m ^{∗}(x). Suppose that ϵ<ϵ _{ b }. Since m ^{∗}(x)≥b′(1/ϵ)≥b(1/ϵ), \(\mathcal {A}(x,\epsilon)\) delivers a (1+ϵ)-approximation if it runs to completion. So it remains to show that this is indeed the case, i.e. that γ⋅(1/ϵ)^{ s }⋅|x|^{ t }≤γ⋅|x|^{2t }. But this follows from the fact that m ^{∗}(x)≥b′(1/ϵ)>M _{ P }((1/ϵ)^{ s/t }) and thus |x|≥(1/ϵ)^{ s/t } by Lemma 2.3, or (1/ϵ)^{ s }≤|x|^{ t }. □
A similar idea can be used to tie EPTAS to the optimumasymptotic approximation classes.
Theorem 5.14
FIPTAS^{ ω }⊆FIPTAS^{∞}.
Proof
Let \(\mathcal {A}\) be a fiptas^{ ω } for P with computable threshold function a, and let \(\mathcal {B}\) be a polynomial-time c-approximation algorithm for P for some constant c>1. Set ϵ _{ b }=c−1 and b(1/ϵ)=M _{ P }(a(1/ϵ))+1; on input (x,ϵ), run \(\mathcal {B}(x)\) if ϵ≥ϵ _{ b } and \(\mathcal {A}(x,\epsilon)\) otherwise. Clearly, the function b is computable, because a and M _{ P } are computable. As \(\mathcal {A}\) and \(\mathcal {B}\) run in time polynomial in |x| for any instance x∈I _{ P }, it remains to show that the returned solution is a (1+ϵ)-approximate solution on instances x∈I _{ P } if m ^{∗}(x)≥b(1/ϵ). If ϵ≥ϵ _{ b }, then \(\mathcal {B}(x)\) ensures a feasible solution within a factor c, and thus within (1+ϵ), of m ^{∗}(x). For ϵ<ϵ _{ b }, a feasible solution is returned and if m ^{∗}(x)≥b(1/ϵ)>M _{ P }(a(1/ϵ)), then |x|≥a(1/ϵ) by Lemma 2.3, assuring that \(\mathcal {A}\) delivers a (1+ϵ)-approximation. □
Note that one can similarly prove that PTAS^{ ω } ⊆ PTAS^{∞} and FPTAS^{ ω } ⊆ FPTAS^{∞}. However, we can also derive this from Proposition 5.11 and Theorem 3.6, respectively from Theorems 5.13, 5.14, and 3.4. Together with Theorem 5.12, this yields the following corollary.
Corollary 5.15
EPTAS⊆FIPTAS^{∞}. The containment is strict, unless P=NP.
This implies that the hierarchy of optimumasymptotic approximation classes starts not only above FPTAS, but even above EPTAS (see Fig. 1). It is an intriguing question whether this corollary can be strengthened to PTAS ⊆ FPTAS^{∞}, or whether PTAS \(\not{\subseteq}\) FPTAS^{∞}. We answer this question in Sect. 7 (Theorem 7.9).
6 Optimum-Asymptotic Schemes and Classic Classes
Asymptotic approximation schemes clearly play an important part in the hierarchy of approximation schemes. In the previous sections, we established inclusion and equivalence relations among the classes of problems admitting such schemes and more classic classes such as PTAS, EPTAS, and FPTAS. All inclusion relations are strict under some hardness condition. In some cases however, the hardness gap can be bridged. The next two sections build several of these bridges.
In this section, we give a new characterization of classic classes by means of optimum-asymptotic approximation schemes and concepts from fixed-parameter tractability. In this way, we can also prove that large classes of problems do not possess optimum-asymptotic schemes. Section 7 deals with asymptotic schemes in another way, in the sense that we try to increase the size or optimum of a problem instance to get around the threshold function of asymptotic schemes.
6.1 New Characterizations of Classic Classes
When we view optimum-asymptotic approximation schemes from the perspective of the theory of fixed-parameter tractability, we can obtain new characterizations of the classic classes of approximation schemes defined in Table 1. We first define some notions from fixed-parameter tractability, as found for instance in Downey and Fellows [17] and Flum and Grohe [18].
Definition 6.1
In the standard parameterization (or decision variant) of a problem P, one is asked, given x∈I _{ P } and a positive integer k, to decide whether m ^{∗}(x)≥k if goal_{ P }=max or m ^{∗}(x)≤k if goal_{ P }=min.
Definition 6.2
[36]
A problem P is simple if its standard parameterization can be decided in time polynomial in |x| for every instance x∈I _{ P } and every fixed k. It is p-simple if its standard parameterization can be decided in time polynomial in |x| and k for every x∈I _{ P } and every k.
Proposition 6.3
The standard parameterization of a problem P belongs to the class XP if and only if P is simple. It belongs to the class PFPT (Polynomial FPT) if and only if P is p-simple.
A precise definition of the classes PFPT and XP may be found in [8, 17, 18]. Here we only need an understanding of the restriction of these classes to standard parameterizations of optimization problems.
Definition 6.4
An algorithm \(\mathcal {A}\) decides the standard parameterization of a problem P with witness if \(\mathcal {A}\) decides the standard parameterization of P and, whenever it decides Yes, it also returns a y∈S(x) such that m(x,y)≥k if goal_{ P }=max or m(x,y)≤k if goal_{ P }=min.
Using this definition, we can consider problems that are (p-)simple with witness and define classes XP^{ w } and PFPT^{ w } as expected. As in Proposition 6.3, this means that a problem belongs to XP^{ w } if and only if it is simple with witness, and to PFPT^{ w } if and only if it is p-simple with witness.
We now give a new characterization of the classes PTAS and FPTAS.
Theorem 6.5

A problem is in PTAS if and only if it has a ptas^{∞} and its standard parameterization is in XP^{ w }.

A problem is in FPTAS if and only if it has an fptas^{∞} with a polynomially-bounded threshold function and its standard parameterization is in PFPT^{ w }.
Proof
Consider a problem P and suppose that P∈ PTAS. Then P is in PTAS^{∞} by Proposition 5.11. It follows from a proof of Paz and Moran [36] that the standard parameterization of P is simple with witness (run the ptas with ϵ=1/(k+1)), and thus in XP^{ w }.
For the converse, suppose that P is in PTAS^{∞} and in XP^{ w }. Let \(\mathcal {A}\) be a ptas^{∞} for P with computable threshold function b and let \(\mathcal {B}\) be an algorithm that decides the standard parameterization of P with witness in time polynomial in x for every fixed k. Assume w.l.o.g. that goal_{ P }=min. The case when goal_{ P }=max is similar.
Given an instance x∈I _{ P } and some ϵ>0, compute b(1/ϵ). For each integer k∈{1,…,b(1/ϵ)}, call \(\mathcal {B}(x,k)\). If any of these calls returns a Yes-answer, then m ^{∗}(x) equals the smallest value of k for which \(\mathcal {B}(x,k)\) gives a Yes-answer. The witness solution y∈S(x) returned by \(\mathcal {B}\) in this case has m(x,y)=m ^{∗}(x) and thus trivially is a (1+ϵ)-approximation. If no call returns a Yes-answer, then m ^{∗}(x)≥b(1/ϵ) and \(\mathcal {A}(x,\epsilon)\) returns a (1+ϵ)-approximation to m ^{∗}(x). In either case, we get a (1+ϵ)-approximation.
The running time of this scheme is polynomial in |x| for every fixed ϵ>0. For a fixed value of ϵ, b(1/ϵ) can be computed in constant time. Furthermore, b(1/ϵ) itself is a constant, and hence \(\mathcal {B}\) is called a constant number of times. Each call takes polynomial time. If none of these calls returns a Yes-answer, we run \(\mathcal {A}(x, \epsilon)\), which also takes polynomial time.
The proof of the characterization of FPTAS is similar. Since the threshold function is polynomially bounded, we may assume it is a polynomial. Since a polynomial can be evaluated in polynomial time, the theorem follows. □
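The scheme in the proof of Theorem 6.5 (minimization case) can be sketched as follows; `decide_w`, `b`, and `ptas_inf` are hypothetical callables standing in for the XP^{ w } algorithm \(\mathcal{B}\), the threshold function, and the ptas^{∞} \(\mathcal{A}\).

```python
def ptas_from_parts(ptas_inf, b, decide_w):
    """Sketch of Theorem 6.5 (goal = min): combine a ptas^infinity with an
    XP^w decision algorithm to obtain a ptas.

    decide_w(x, k) -- hypothetical: a witness y with m(x,y) <= k,
                      or None when m*(x) > k
    b              -- computable threshold function of the ptas^infinity
    """
    def scheme(x, eps):
        for k in range(1, int(b(1.0 / eps)) + 1):
            y = decide_w(x, k)
            if y is not None:
                return y          # smallest feasible k: an optimal witness
        # no Yes-answer, so m*(x) >= b(1/eps) and the ptas^infinity applies
        return ptas_inf(x, eps)
    return scheme
```

The toy test models an instance by its optimum value, so the exact search succeeds below the threshold and the asymptotic scheme takes over above it.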
The characterizations seem different from those given by Paz and Moran [36] and Chen et al. [8].
Example 6.6
Jansen and Zhang [25] prove that the standard parameterization of Maximum Rectangle Packing (maximizing the number of given rectangles that can be packed into a given rectangle) is in XP^{ w }. They also give an fptas^{∞} for this problem, which implies by Theorem 6.5 that it is in PTAS.
Example 6.7
Minimum Bin Packing is in FIPTAS^{∞} (see Example 5.2), but has no ptas unless P=NP [21]. Hence its standard parameterization is not in XP^{ w } unless P=NP.
A similar characterization can be given for the class EPTAS. Let EPTAS^{∞} denote the class of problems admitting an eptas^{∞}, i.e. a ptas^{∞} with running time poly(|x|)⋅f(1/ϵ) for some computable function f. Call a problem e-simple if its standard parameterization can be decided in time poly(|x|)⋅f(k) for some computable function f. The standard parameterization of a problem belongs to FPT if and only if it is e-simple. Using Definition 6.4, we can define (similar to XP^{ w } and PFPT^{ w }) the class FPT^{ w }.
Theorem 6.8
A problem is in EPTAS if and only if it has an eptas^{∞} and its standard parameterization is in FPT^{ w }.
The proof is similar to the proof of Theorem 6.5.
6.2 Existence of Optimum-Asymptotic Approximation Schemes
Theorem 6.5 has interesting consequences. In particular, it gives the tools to improve on a theorem of Arora et al. [1]. They showed that MAX-3SAT has no ptas, unless P=NP. As a consequence, no MAXSNP-complete problem can have a ptas, unless P=NP. We prove that this extends to ptas^{∞}.
Definition 6.9
[35]
Kolaitis and Thakur [29] proved that MAXSNP ⊂ MAXNP, as Maximum Satisfiability is in MAXNP, but not in MAXSNP.
We first need the following theorem, a weaker form of which was proved by Cai and Chen [4].
Theorem 6.10
If P is in MAXNP, then its standard parameterization is in FPT^{ w }.
Proof
Suppose that k≤n/2^{ c }. Papadimitriou and Yannakakis [35] proved that by fixing a particular \(\bar{b}(\bar {a}) \in B(\bar{a})\) for each \(\bar{a} \in A\), one can find in polynomial time a structure S satisfying at least n/2^{ c } clauses \(\phi_{\bar{a},\bar{b}(\bar{a})}(S)\). Hence if k≤n/2^{ c }, the answer is trivially Yes. Moreover, a witness to this can be found in polynomial time.
So suppose that k>n/2^{ c }. Then we enumerate all assignments of variables of the form \(Q(\bar{d})\) that occur (i.e. all relevant structures S) to verify whether the maximum number of satisfiable \(\phi_{\bar{a}}(S)\) is at least k. There are at most n clauses \(\phi _{\bar{a}}(S)\) which can be satisfied. For each such clause, if it is to be satisfied, there are at most l clauses \(\phi_{\bar{a}, \bar {b}}(S)\), from which we should choose one that must be satisfied. As each such clause \(\phi_{\bar{a}, \bar{b}}(S)\) consists of at most c variables of the form \(Q(\bar{d})\), where Q is any predicate in S, we can enumerate all relevant structures S in O(n⋅(l+1)^{ n }⋅2^{ cn }) time. Since n<2^{ c } k and l≤n ^{ O(1)}, this is O(k ^{ O(k)}) time. Therefore we can check in O(k ^{ O(k)}) time whether the instance of P has answer Yes and, if so, return a witness structure S for this. □
Observe that for problems in MAXSNP we can apply a similar proof as above, but with l=1. Then the running time of the given algorithm improves to O(k⋅2^{ O(k)}) plus a polynomial in the input size. Kratsch [31] recently showed that problems in MAXNP admit a polynomial kernel, strengthening the above result.
Combining Theorem 6.10 with Theorems 6.5 and 6.8, we obtain the following result.
Theorem 6.11
If a problem P is in MAXNP and PTAS^{∞} (EPTAS^{∞}), it is in PTAS (EPTAS).
In the given form, the theorem gives a way to construct a ptas (eptas) for a problem in MAXNP if the problem has a ptas^{∞} (eptas^{∞}). Phrased differently however, it gives a powerful tool to prove that for some problems a ptas^{∞} (eptas^{∞}) cannot exist.
Corollary 6.12
If a problem P in MAXNP cannot have a ptas (eptas) under some hardness condition, then it cannot have a ptas^{∞} (eptas^{∞}) under the same hardness condition.
This already proves the nonexistence of a ptas^{∞} for many problems, for instance for Maximum Satisfiability. However, a more general statement is possible. Arora et al. [1] showed that no MAXSNPcomplete problem can have a ptas, unless P=NP. We now strengthen this result as follows.
Theorem 6.13
If a problem P is MAXSNP-complete (under the L-reduction), then it cannot have a ptas^{∞}, unless P=NP.
This implies for instance that problems such as MAX-3SAT, Maximum Independent Set on bounded-degree graphs, and Maximum Cut do not have a ptas^{∞}, unless P=NP. In fact, using the result of Arora et al., one can even prove that for each MAXSNP-complete problem P there is a fixed constant c>1 such that P cannot be approximated (optimum-)asymptotically within c, unless P=NP.
It should be noted here that similar results can be proved for a syntactically defined class of minimization problems, called MIN F^{+}Π_{1} [30], which includes Minimum Vertex Cover and many vertex-deletion and edge-deletion problems in graphs such as Minimum Feedback Arc Set. Cai and Chen [4] proved that the standard parameterizations of all problems in this class are in FPT^{ w }. Hence we obtain the following theorem.
Theorem 6.14
If a problem P is in MIN F^{+}Π_{1} and in PTAS^{∞}, then it is in PTAS.
Similar to Corollary 6.12, one can use this theorem to prove negative results. For instance, Theorem 6.14 implies that Minimum Vertex Cover, which cannot have a ptas unless P=NP [1, 35], also cannot have a ptas^{∞} unless P=NP.
6.3 Approximation-Preserving Reductions
Due to results by Khanna et al. [28], we know that no APX-complete problem can have a ptas unless P=NP. Phrased differently, if for a problem P in APX there exists an approximation-preserving reduction from Maximum Satisfiability (or a specific bounded case of it) to P, then P cannot have a ptas unless P=NP. We prove that a similar statement can be made about ptas^{∞} by using a different type of approximation-preserving reduction.
The result of Khanna et al. holds under the PTAS-reduction, defined by Crescenzi and Trevisan [15].
Definition 6.15
There is a PTAS-reduction (t _{1},t _{2},c) from a problem P to a problem P′ if, for any x∈I _{ P } and any ϵ>0:
 1.
t _{1}(x,ϵ)∈I _{ P′} and t _{1}(x,ϵ) is computable in time polynomial in |x| for any fixed value of ϵ;
 2.
for any y∈S _{ P′}(t _{1}(x,ϵ)), t _{2}(x,y,ϵ)∈S _{ P }(x) and t _{2}(x,y,ϵ) is computable in time polynomial in |x| and |y| for any fixed value of ϵ;
 3.
for any y∈S _{ P′}(t _{1}(x,ϵ)), if y is within 1+c(ϵ) of \(m^{*}_{P'}(t_{1}(x,\epsilon))\), then t _{2}(x,y,ϵ) is within 1+ϵ of \(m^{*}_{P}(x)\).
Several well-known reductions are a special case of PTAS-reductions, such as P-reductions [34], L-reductions [35], E-reductions [28], and AP-reductions [13]. The most important property of all these reductions is that they preserve membership of PTAS.
Proposition 6.16
If there is a PTAS-reduction from a problem P to a problem P′ and P′∈PTAS, then P∈PTAS.
Proof
Let \(\mathcal {A'}\) be a ptas for P′ and (t _{1},t _{2},c) a PTAS-reduction from P to P′. It can easily be seen that, given x∈I _{ P } and some ϵ>0, computing \(t_{2}(x,\mathcal {A'}(t_{1}(x,\epsilon ),c(\epsilon)),\epsilon)\) yields a ptas for P. □
Most PTAS-reductions given in the literature actually also preserve membership of PTAS^{∞}. This is due to the following property.
Lemma 6.17
Suppose there is a PTAS-reduction (t _{1},t _{2},c) from P to P′ and a monotone computable function f:ℕ→ℕ with liminf_{ n→∞} f(n)=∞ such that for any ϵ>0 and any x∈I _{ P }, \(m^{*}_{P'}(t_{1}(x,\epsilon )) \geq f(m^{*}_{P}(x))\). Then if P′ has a ptas^{∞}, P also has a ptas^{∞}.
Proof
Suppose that P′ has a ptas^{∞} \(\mathcal {A'}\) such that any instance x′∈I _{ P′} can be approximated within (1+ϵ) if m ^{∗}(x′)≥b′(1/ϵ) for some computable function b′. Let x∈I _{ P } and some ϵ>0 be given. We claim that computing \(\mathcal {A}(x,\epsilon) = t_{2}(x,\mathcal {A'}(t_{1}(x,\epsilon),c(\epsilon )),\epsilon)\) yields a ptas^{∞} for P with some suitably chosen threshold function b.
The lemma proves the usefulness of the following notion.
Definition 6.18
There is a PTAS^{∞}-reduction from a problem P to a problem P′ if there is a PTAS-reduction (t _{1},t _{2},c) from P to P′ and a monotone computable function f:ℕ→ℕ with liminf_{ n→∞} f(n)=∞ such that for any ϵ>0 and any x∈I _{ P }, \(m^{*}_{P'}(t_{1}(x,\epsilon)) \geq f(m^{*}_{P}(x))\).
Observe that the PTAS^{∞}-reduction is transitive. Moreover, by Lemma 6.17, PTAS^{∞}-reductions preserve membership of PTAS^{∞}.
Some reductions appearing in the literature are a special case of PTAS^{∞}-reductions, such as asymptotic continuous reductions [39] and (polynomial-time) ratio-preserving reductions [36]. However, they are rather restrictive and not many have been shown to exist. Here we will rely on Lemma 6.17 instead.
A first question we need to answer with respect to PTAS^{∞}-reductions is whether there exist any (natural) problems that are APX-complete under the PTAS^{∞}-reduction. Such a problem indeed exists. Let Maximum Bounded Weighted Satisfiability be the variant of Maximum Satisfiability in which each variable x _{ i } has weight w _{ i } such that W≤∑_{ i } w _{ i }≤2W for some given weight W.
Theorem 6.19
Maximum Bounded Weighted Satisfiability (MBWS) is APX-complete under the PTAS^{∞}-reduction.
Proof
Crescenzi and Panconesi [14] proved that MBWS is APX-complete under the P-reduction. Upon closer inspection and using Definition 6.18, it can be seen that the given reductions are also PTAS^{∞}-reductions. □
Using this theorem, we can in fact prove that there is a problem in MAXSNP that is APX-complete under the PTAS^{∞}-reduction.
Theorem 6.20
Maximum 3-Satisfiability is APX-complete under the PTAS^{∞}-reduction.
Proof
Crescenzi and Trevisan [15] presented a PTAS-reduction from MBWS to a polynomially-bounded variant of it, Maximum Polynomially-Bounded Weighted Satisfiability. Khanna et al. [28] showed that any polynomially-bounded problem in APX has an E-reduction to Maximum 3-Satisfiability. Upon closer inspection of these reductions, one can see that they are in fact PTAS^{∞}-reductions. □
Observe that any problem that is APX-complete under the PTAS^{∞}-reduction cannot have a ptas^{∞}, unless P=NP.
Lemma 6.21
If a problem P is APX-complete under the PTAS^{∞}-reduction, then it cannot have a ptas^{∞}, unless P=NP.
Proof
We established in Theorem 6.13 that no MAXSNP-complete problem can have a ptas^{∞} unless P=NP. In particular, Maximum 3-Satisfiability has no ptas^{∞} unless P=NP. Now let P be a problem that is APX-complete under the PTAS^{∞}-reduction. If P had a ptas^{∞}, then, since Maximum 3-Satisfiability PTAS^{∞}-reduces to P by the APX-completeness of P, Lemma 6.17 would yield a ptas^{∞} for Maximum 3-Satisfiability as well. This is a contradiction, unless P=NP. □
It is interesting to note that several reductions proving that no ptas^{∞} can exist for a certain problem actually use a PTAS^{∞}-reduction implicitly. For example, a result of Woeginger [44] showing that Minimum 2-Dimensional Vector Packing cannot have a ptas^{∞} unless P=NP can be explained this way. In Minimum 2-Dimensional Vector Packing, we want to partition a given set of vectors in [0,1]×[0,1] into a minimum number of subsets, such that in every subset the sum of all vectors is at most 1 in every coordinate.
Theorem 6.22
Minimum 2-Dimensional Vector Packing is APX-complete under the PTAS^{∞}-reduction. Hence it cannot have a ptas^{∞} unless P=NP.
Proof
Consider the following problems. In Maximum 3-Dimensional Matching, we are given three sets X, Y, and Z, each of size q, and a set T⊆X×Y×Z of triples, and we are asked to find a maximum number of triples no two of which agree on any coordinate. In Maximum Bounded 3-Dimensional Matching, we additionally impose that each element occurs in at least one, but at most three triples. Kann [26] gives an L-reduction to Maximum Bounded 3-Dimensional Matching from Maximum 3-Satisfiability-B, which itself has an L-reduction from Maximum 3-Satisfiability [35]. Both L-reductions are actually also PTAS^{∞}-reductions.
We can also consider the non-existence of PTAS^{∞}-reductions. Crescenzi et al. [13] showed that Minimum Bin Packing is not APX-complete under the AP-reduction (and thus also under the PTAS-reduction), unless the polynomial hierarchy collapses. Furthermore, they remark that this result “does not seem to be obtainable” under the condition that P≠NP. The reason for this is that Crescenzi et al. show that if NP=coNP, then there is an AP-reduction from Maximum Satisfiability to Minimum Bin Packing; hence a proof of the result under the condition that P≠NP would show that NP=coNP implies P=NP, which is highly unlikely. If however we consider PTAS^{∞}-reductions, then the result of Crescenzi et al. can be obtained under the condition that P≠NP.
Theorem 6.23
Minimum Bin Packing is not APX-complete under the PTAS^{∞}-reduction, unless P=NP.
Proof
Recall that Minimum Bin Packing has a ptas^{∞} (see Example 5.2). Hence if Minimum Bin Packing were APX-complete under the PTAS^{∞}-reduction, then this would contradict Lemma 6.21, unless P=NP. □
In particular, the theorem implies that no PTAS^{∞}-reduction can exist from Maximum Satisfiability to Minimum Bin Packing, unless P=NP. It should be noted here that the AP-reduction given by Crescenzi et al. under the condition that NP=coNP is not a PTAS^{∞}-reduction, since it has m ^{∗}(t _{1}(x,ϵ))=O(1) for any instance x of Maximum Satisfiability and any ϵ>0. Therefore Theorem 6.23 does not contradict, but augments, the results of Crescenzi et al. [13]. Furthermore, it can easily be seen that Theorem 6.23 extends to several other problems, including Minimum Degree Spanning Tree and Chromatic Index.
7 Pumpable Problems
Looking back at the previous sections, one can notice that in the equivalence proofs we do not always get the equivalence we hope for. For instance, one expects an fptas^{ ω } with a(1/ϵ)=2^{1/ϵ } to be equivalent to an eptas with f(1/ϵ)=2^{1/ϵ }. However, this does not seem to hold in general, as the proof of Theorem 3.4 only gives f(1/ϵ)=2^{poly(a(1/ϵ))}. Hence we are interested in properties of problems for which the equivalences are tight. Similarly, we want to know whether for certain types of problems the hierarchy developed in the previous sections collapses. A promising property of problems seems to be pumpability.
Definition 7.1

An optimization problem P is k-pumpable if there are two functions g _{1} and g _{2} such that for any x∈I _{ P } the following conditions hold:

g _{1}(x)∈I _{ P } and \(S_{P}(g_{1}(x)) \not{=} \emptyset\) if \(S_{P}(x) \not{=} \emptyset\);

for every r≥1 and for every y∈S _{ P }(g _{1}(x)) within a factor r of m ^{∗}(g _{1}(x)), g _{2}(y)∈S _{ P }(x) and g _{2}(y) is within a factor r of m ^{∗}(x);

g _{1} and g _{2} are computable in time polynomial in |x| and log k;

one of the pumpability conditions defined below holds.
Note that in the logarithmic cost model we use, it makes sense that the running times of g _{1} and g _{2} depend polynomially on log k.
We distinguish two pumpability conditions, one related to the size of g _{1}(x) and one related to the optimal objective value of g _{1}(x).
Definition 7.2
An optimization problem P is k-size-pumpable if P is k-pumpable with the condition that |g _{1}(x)|≥|x|+k. An optimization problem P is k-opt-pumpable if P is k-pumpable with the condition that m ^{∗}(g _{1}(x))≥m ^{∗}(x)+k.
It appears that many problems possess either one of these properties or even both, as can be seen in the following example.
Example 7.3
Minimum Vertex Cover is 1-size-pumpable, because one can take for g _{1} the function that makes two disjoint copies of an instance. The desired property of the translation back follows by the pigeonhole principle. Using the same idea, Minimum Vertex Cover is also 1-opt-pumpable. Minimum Makespan Scheduling is k-opt-pumpable for any k by multiplying all job lengths by k+1. Using a similar idea it is also 1-size-pumpable.
Observe that problems that are k-size-pumpable for arbitrary values of k cannot exist. Otherwise, one could take k=2^{poly(|x|)} and thus add an exponential number of bits to an instance x of P in time polynomial in |x|.
We consider the question of which problems are pumpable in more detail in Sect. 7.3. First, we show how the property of being pumpable helps to prove some new equivalences among the classes we defined previously. In the following, when we talk about a size- or opt-pumpable problem, we assume that the functions g _{1} and g _{2} are known.
7.1 Optimum-Asymptotic Schemes and Pumpability
It follows from Theorem 5.12 that PTAS ⊂ PTAS^{∞}, unless P=NP. For 1-opt-pumpable problems however, the two classes are equivalent.
Lemma 7.4
Let P be 1-opt-pumpable and in PTAS^{∞}. Then P∈PTAS.
Proof
Assume we have a ptas^{∞} \(\mathcal {A}\) for P with computable threshold function b. Given x∈I _{ P } and some ϵ>0, compute b(1/ϵ), which takes constant time for any fixed ϵ. Then pump x b(1/ϵ) times to get an instance x′. Note that the size of the output of g _{1} is polynomial in the size of the input. Hence pumping b(1/ϵ) times means that x′ has size at most \(|x|^{O(1)^{b(1/\epsilon)}}\) and thus the pumping steps can be done in time polynomial in |x| for every fixed ϵ.
As x has been pumped b(1/ϵ) times, m ^{∗}(x′)≥b(1/ϵ)+m ^{∗}(x)≥b(1/ϵ). Hence we can compute \(y'= \mathcal {A}(x', \epsilon)\) and by the definition of ptas^{∞}, y′ is within 1+ϵ of m ^{∗}(x′). Furthermore, y′ can be computed in time polynomial in |x| for every fixed ϵ, as |x′| is polynomial in |x| for every fixed ϵ. Iteratively applying g _{2} to y′, we get a solution y for x within 1+ϵ of m ^{∗}(x). As we need to apply g _{2} only b(1/ϵ) times, this again takes time polynomial in |x| for every fixed ϵ. □
There are several ways in which one could use this lemma. First of all, it provides a condition under which problems are not 1optpumpable.
Theorem 7.5
Any problem that is in PTAS^{∞} but not in PTAS (unless P=NP) is not 1-opt-pumpable (unless P=NP).
This is an immediate consequence of Lemma 7.4. There are several examples of problems that fit the theorem.
Corollary 7.6
Minimum Bin Packing, Chromatic Index, and Minimum Degree Spanning Tree are not 1-opt-pumpable, unless P=NP.
Secondly, Lemma 7.4 shows that some problems cannot have a ptas^{∞}. Consider for instance the variation of Minimum Bin Packing with precedence constraints. (The precedence constraints state that for certain items i and j, item i has to appear in a bin with a lower number than item j.) It cannot have a ptas, as Minimum Bin Packing itself cannot have a ptas unless P=NP [20]. Queyranne [38] already proved the following result, but as Minimum Bin Packing with Precedence is 1-opt-pumpable (make two copies of the instance and use precedence constraints to ensure that the items of the first copy must come before the items of the second), it now also follows as a corollary of Lemma 7.4.
Theorem 7.7
Minimum Bin Packing with Precedence has no ptas^{∞}, unless P=NP.
Actually, Queyranne applied a similar form of pumping to the problem in order to obtain his result.
We now consider the effect of pumpability on problems in FPTAS^{∞}.
Lemma 7.8
Let P be 1-opt-pumpable and in FPTAS^{∞}, and let c>0 be a constant. If |g _{1}(x)|≤c⋅|x| for any x∈I _{ P }, then P∈EPTAS.
Proof
Following the proof of Lemma 7.4, the condition on g _{1} now implies that after pumping b(1/ϵ) times we obtain an instance of size O(c ^{ b(1/ϵ)}⋅|x|). Then the construction of Lemma 7.4 actually takes time bounded by a polynomial in |x| times some (computable) function of 1/ϵ. □
Using this lemma, one can prove interesting results about the relation of FPTAS^{∞} to PTAS^{∞} and PTAS.
Theorem 7.9
\(\mathrm{PTAS} \not{\subseteq}\mathrm{FPTAS}^{\infty}\), unless FPT=W[1].
Proof
Let P denote the Minimum Dominating Set problem on unit disk graphs. Hunt et al. [24] showed that P∈ PTAS. Suppose that P∈ FPTAS^{∞}. Because P is easily 1-opt-pumpable with a linear-size output (let g _{1} just take two disjoint copies of the graph of the instance), it is in EPTAS by Lemma 7.8. But then P is in FPT (with respect to its standard parameterization) by results of Bazgan [3] and Cesati and Trevisan [7]. However, as P is W[1]-hard [32], this is not possible, unless FPT=W[1]. □
This leads to the following corollary, which we could not derive yet in Sect. 5.
Corollary 7.10
\(\mathrm{PTAS}^{\infty} \not{=} \mathrm{FPTAS}^{\infty}\), unless FPT=W[1].
Although Lemma 7.8 gives a way to go from an fptas^{∞} to an eptas, it only holds if the output of g _{1} has linear size. If that is not the case, the following lemma can be useful.
Lemma 7.11
Let P be k-opt-pumpable for any k and in FPTAS^{∞}. Then P∈EPTAS.
Proof
In this case it suffices to pump once with k=b(1/ϵ) to make the construction of Lemma 7.4 work. □
If additionally b(1/ϵ) is bounded by 2^{poly(1/ϵ)}, then P∈ FPTAS, as both g _{1} and g _{2} can be computed in time polynomial in |x| and log k = log b(1/ϵ) = poly(1/ϵ). This last observation has interesting consequences.
Example 7.12
Extensible Bin Packing (where the bin size is part of the input) and Minimum Makespan Scheduling are strongly NP-hard, polynomially-bounded problems and thus have no fptas unless P=NP (see Corollary 4.13). However, both problems are k-opt-pumpable for any k (multiply all numbers of the instance by k). Hence by Lemma 7.11 they cannot have an fptas^{∞} where b(1/ϵ) is bounded by 2^{poly(1/ϵ)}, unless P=NP.
For these two problems, the above facts were already known by results of Coffman and Lueker [12] and (in a weaker form) of Hochbaum and Shmoys [22], but here they are just a consequence of the general statement in Lemma 7.11. Moreover, the result of Hochbaum and Shmoys for Minimum Makespan Scheduling is strengthened by it.
7.2 Size-Asymptotic and Convergent Schemes and Pumpability
For size-asymptotic problems, the situation is slightly different. Recall that PTAS^{ ω } and PTAS are equivalent for all problems, not just for pumpable problems (Theorem 3.6). The classes FPTAS^{ ω } and EPTAS are also equivalent (Theorem 3.4), but turning an fptas^{ ω } with threshold function a into an eptas currently increases the time complexity by at least a factor 2^{poly(a(1/ϵ))}. This is rather unfortunate. If a problem is 1-size-pumpable however, this exponential increase is not necessary.
Lemma 7.13
Let P be 1-size-pumpable and in FPTAS^{ ω } with computable threshold function a. Then P∈EPTAS with running time polynomial in |x|, 1/ϵ, a(1/ϵ), and the time needed to compute a(1/ϵ).
Proof
Assume that P has an fptas^{ ω } \(\mathcal {A}\) with threshold function a. Given some x∈I _{ P } and ϵ>0, compute a(1/ϵ). Pump x the smallest number of times needed to get an instance x′ with |x′|≥a(1/ϵ). Note that x′ has size at most polynomial in a(1/ϵ) and |x|. Hence computing \(y' =\mathcal {A}(x',\epsilon)\) takes time polynomial in 1/ϵ, a(1/ϵ), and |x|. Repeatedly applying g _{2} to y′ also takes time polynomial in a(1/ϵ) and |x| and yields a solution to x within (1+ϵ) of m ^{∗}(x). □
If a(1/ϵ) is computable in time polynomial in 1/ϵ and a(1/ϵ), then the exponential increase is avoided. In particular, if the threshold function a of the fptas^{ω} is a polynomial, then P∈FPTAS.
We can also show that, for 1-size-pumpable problems, the classes PCONV[f] and PTAS[f] coincide for many functions f, such as the logarithm.
Lemma 7.14
Let \(f \in \mathcal {F}^{*}\) be such that f(n⋅m)≥f(n)+f(m). If P∈PCONV[f] is 1-size-pumpable, then P∈PTAS[f].
Proof
In Example 4.3, we noted that Maximum Independent Set on single-crossing-minor-free graphs has a pconv[log x]. By the above lemma, we conclude that it also has a ptas[log x].
Pumpability also leads to several negative results.
Lemma 7.15
Let P be an NP-hard, polynomially-bounded optimization problem, with p the corresponding bounding polynomial plus 1. If P is 1-size-pumpable, then for any constant α>0, P∉PCONV[p(x)^{α}], unless P=NP.
Proof
This lemma allows a simple subdivision of the PCONV hierarchy.
Lemma 7.16
PCONV[x^{α}]⊂PCONV[log x] for any (fixed) α>0, unless P=NP.
Proof
Maximum Independent Set on single-crossing-minor-free graphs is in PCONV[log x] (see Example 4.3), but it is also polynomially-bounded and 1-size-pumpable. Hence, by Lemma 7.15, it is not in PCONV[x^{α}] for any α>0, unless P=NP. □
This result can be strengthened significantly, though. Huang [23] showed that Minimum Vertex Cover on planar graphs cannot have an eptas with \(f(1/\epsilon) = 2^{o(\sqrt{1/\epsilon})}\), unless FPT=W[1]. Marx [33] proved that even an eptas with \(f(1/\epsilon) = 2^{O((1/\epsilon)^{1-\delta})}\) for some δ>0 cannot exist, unless n-variable 3SAT can be solved in 2^{o(n)} time.
Lemma 7.17
PCONV[log^{2} x]⊂PCONV[log x], unless FPT=W[1]. PCONV[log^{1/(1−δ)} x]⊂PCONV[log x] for any δ>0, unless n-variable 3SAT can be solved in 2^{o(n)} time.
7.3 Which Problems Are Pumpable?
We already showed that several problems are pumpable (e.g. Minimum Vertex Cover and Minimum Makespan Scheduling). In Theorem 7.5, we proved that several other problems (such as Minimum Bin Packing) are not 1-opt-pumpable, and it seems unlikely that they are 1-size-pumpable. In fact, many problems seem to be either both 1-size-pumpable and 1-opt-pumpable, or neither. We give some evidence why this might not be a coincidence. At the moment, we do not know of any problem that provably possesses only one of the two properties.
To prove the pumpability of several classes of problems, we consider problems that are m^{∗}-opt-pumpable. Essentially, this means that we can pump to (at least) double the objective value of the optimum. For any problem P and any x∈I_P, note that because m^{∗}(x)≤M_P(x), we have logm^{∗}(x)≤logM_P(x)≤poly(x). Hence for any m^{∗}-opt-pumpable problem the functions g_1 and g_2 are computable in time polynomial in x.
Lemma 7.18
Let P be m^{∗}-opt-pumpable. Then P is 1-size-pumpable.
Proof
Given an instance x∈I_P, repeatedly opt-pump x to an instance x′ until x′>x. We claim that such an x′ is found in time at most polynomial in x. As a first step, we show that only polynomially many pumping steps are needed. By Lemma 2.3, if \(m^{*}(x') > M_{P}(x) = 2^{r_{P}(x,q_{P}(x))}\), then x′>x. As m^{∗}-opt-pumping (at least) doubles the objective value of the optimum, pumping 1+r_P(x,q_P(x)) times gives an x′ with x′>x. Second, note that before any pumping step, the size of the instance is at most x. Hence each pumping step takes time polynomial in x. Therefore all required pumping steps can be done in time polynomial in x, and thus P is 1-size-pumpable. □
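The counting step in this proof can be spelled out. Writing x_t for the instance after t pumping steps and assuming m^{∗}(x)≥1 (an assumption we add for the illustration), the doubling property gives

```latex
% Bound on the number of pumping steps (assuming m^*(x) \ge 1):
m^*(x_t) \;\ge\; 2^t \, m^*(x) \;\ge\; 2^t,
```

so taking t = 1+r_P(x,q_P(x)) yields m^{∗}(x_t) > 2^{r_P(x,q_P(x))} = M_P(x), and hence, by Lemma 2.3, x_t>x.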
Many graph optimization problems, such as Minimum Vertex Cover and Maximum Independent Set, are m^{∗}-opt-pumpable and thus 1-size-pumpable. But when is a problem m^{∗}-opt-pumpable?
Consider the following property of optimization problems.
Definition 7.19
[28]
A problem P is additive if there exist an operator + and a polynomial-time computable function f such that + maps any pair of instances x_1, x_2∈I_P to an instance x_1+x_2∈I_P with m^{∗}(x_1+x_2)=m^{∗}(x_1)+m^{∗}(x_2), and f maps a (feasible) solution y to x_1+x_2 to a pair of (feasible) solutions y_1, y_2 to x_1 and x_2 respectively, such that m(x_1+x_2,y)=m(x_1,y_1)+m(x_2,y_2).
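As an illustration (the choice of problem is ours, not made in the definition): for Maximum Independent Set, the disjoint union of graphs witnesses additivity.

```latex
% Additivity of Maximum Independent Set via disjoint union (illustration):
x_1 + x_2 \;=\; G_1 \uplus G_2, \qquad
m^*(G_1 \uplus G_2) \;=\; m^*(G_1) + m^*(G_2),
% and f splits an independent set S of G_1 \uplus G_2 componentwise:
f(S) \;=\; \bigl(S \cap V(G_1),\; S \cap V(G_2)\bigr).
```

Both the union and the splitting function are clearly polynomial-time computable, as the definition requires.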
This notion is similar to the notion of paddable optimization problems [9, 37].
From the definitions of additivity and pumpability and from Lemma 7.18, one easily proves the following theorem.
Theorem 7.20
Any additive problem is m^{∗}-opt-pumpable, and hence 1-opt-pumpable and 1-size-pumpable.
Khanna et al. [28] remark that many problems are additive, such as Maximum Clique, Chromatic Number, Minimum Set Cover, and all problems in the class MAXSNP.
Corollary 7.21
Any problem in MAXSNP is both 1-opt-pumpable and 1-size-pumpable.
However, there are also problems that are not (easily seen to be) additive, but that are m^{∗}-opt-pumpable, such as Maximum Knapsack and Longest Path.
When we combine Lemma 7.4 with Corollary 7.21, we obtain the following weaker version of Theorem 6.11: If a problem P is in MAXSNP and in PTAS^{∞}, then P is in PTAS. In this way, most results from Sect. 6.2 also follow by using pumpability.
8 Conclusion and Open Problems
In this paper, we defined several new types of approximation schemes and uncovered many interesting new relationships between classes of problems that can be approximated using these schemes and existing approximation classes. In particular, we have shown that EPTAS is a central class in the landscape of approximation classes. We also mapped the entire hierarchy of these classes, shown in Fig. 1.
There are several intriguing questions left. The notion of pumpability, introduced in Sect. 7, offers a way to bridge the gap between optimum-asymptotic schemes and non-asymptotic schemes. However, we have few tools to decide whether a given problem is pumpable. Can size- or opt-pumpable problems be characterized? Another interesting question is whether every opt-pumpable problem is also size-pumpable and vice versa. We gave some evidence in Lemma 7.18 why one direction might be true, but it would go too far to conjecture that it holds both ways.
Convergent approximation schemes also pose new challenges. We know that the class \(\mathrm{PCONV}[\mathcal {F}^{*}]\) is equivalent to EPTAS. However, the general classes \(\mathrm{PTAS}[\mathcal {F}^{*}]\) and \(\mathrm{FPTAS}[\mathcal {F}^{*}]\) remain mysterious. We know that some problems in EPTAS also lie in PTAS[log n] (using Theorem 4.15) and that \(\mathrm{FPTAS}=\mathrm{FPTAS}[\mathcal {P}]\) (Theorem 4.10), but beyond this it seems hard to make conjectures about these classes.
Footnotes
1. Formally, this should be ⌊log x⌋. Throughout the paper we ignore these technicalities to improve legibility.
Acknowledgements
The authors would like to thank Lex Schrijver for several helpful suggestions.
Open Access
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
References
1. Arora, S., Lund, C., Motwani, R., Sudan, M., Szegedy, M.: Proof verification and the hardness of approximation problems. J. ACM 45(3), 501–555 (1998)
2. Ausiello, G., Crescenzi, P., Gambosi, G., Kann, V., Marchetti-Spaccamela, A., Protasi, M.: Complexity and Approximation—Combinatorial Optimization Problems and Their Approximation Properties. Springer, Berlin (1999)
3. Bazgan, C.: Schémas d'approximation et complexité paramétrée. Rapport de stage de DEA d'Informatique, Université Paris-Sud, Orsay (1995)
4. Cai, L., Chen, J.: On fixed-parameter complexity and approximability of NP optimization problems. J. Comput. Syst. Sci. 54(3), 465–474 (1997)
5. Cai, L., Fellows, M., Juedes, D., Rosamond, F.: The complexity of polynomial-time approximation. Theory Comput. Syst. 41(3), 459–477 (2007)
6. Cai, L., Juedes, D.: On the existence of subexponential parameterized algorithms. J. Comput. Syst. Sci. 67(4), 789–807 (2003)
7. Cesati, M., Trevisan, L.: On the efficiency of polynomial time approximation schemes. Inf. Process. Lett. 64(4), 165–171 (1997)
8. Chen, J., Huang, X., Kanj, I.A., Xia, G.: Polynomial time approximation schemes and parameterized complexity. Discrete Appl. Math. 155(2), 180–193 (2007)
9. Chen, Z.-Z., Toda, S.: On the complexity of computing optimal solutions. Int. J. Found. Comput. Sci. 2(3), 207–220 (1991)
10. Chiba, N., Nishizeki, T., Saito, N.: Applications of the Lipton and Tarjan's planar separator theorem. J. Inf. Process. 4(4), 203–207 (1981)
11. Coffman, E.G. Jr., Garey, M.R., Johnson, D.S.: Approximation algorithms for bin packing: a survey. In: Hochbaum, D.S. (ed.) Approximation Algorithms for NP-Hard Problems, pp. 46–93. PWS Publishing Company, Boston (1997)
12. Coffman, E.G. Jr., Lueker, G.S.: Approximation algorithms for extensible bin packing. J. Sched. 9(1), 63–69 (2006)
13. Crescenzi, P., Kann, V., Silvestri, R., Trevisan, L.: Structure in approximation classes. SIAM J. Comput. 28(5), 1759–1782 (1999)
14. Crescenzi, P., Panconesi, A.: Completeness in approximation classes. Inf. Comput. 93(2), 241–262 (1991)
15. Crescenzi, P., Trevisan, L.: On approximation scheme preserving reducibility and its applications. Theory Comput. Syst. 33(1), 1–16 (2000)
16. Demaine, E.D., Hajiaghayi, M., Nishimura, N., Ragde, P., Thilikos, D.M.: Approximation algorithms for classes of graphs excluding single-crossing graphs as minors. J. Comput. Syst. Sci. 69(2), 166–195 (2004)
17. Downey, R.G., Fellows, M.R.: Parameterized Complexity. Springer, New York (1999)
18. Flum, J., Grohe, M.: Parameterized Complexity Theory. Springer, Berlin (2006)
19. Fürer, M., Raghavachari, B.: Approximating the minimum-degree Steiner tree to within one of optimal. J. Algorithms 17(3), 409–423 (1994)
20. Garey, M.R., Johnson, D.S.: 'Strong' NP-completeness results: motivation, examples, and implications. J. ACM 25(3), 499–508 (1978)
21. Garey, M.R., Johnson, D.S.: Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco (1979)
22. Hochbaum, D.S., Shmoys, D.B.: Using dual approximation algorithms for scheduling problems: theoretical and practical results. J. ACM 34(1), 144–162 (1987)
23. Huang, X.: Parameterized complexity and polynomial-time approximation schemes. Ph.D. Thesis, Texas A&M University (2004)
24. Hunt, H.B. III, Marathe, M.V., Radhakrishnan, V., Ravi, S.S., Rosenkrantz, D.J., Stearns, R.E.: NC-approximation schemes for NP- and PSPACE-hard problems for geometric graphs. J. Algorithms 26(2), 238–274 (1998)
25. Jansen, K., Zhang, G.: Maximizing the number of packed rectangles. In: Hagerup, T., Katajainen, J. (eds.) Algorithm Theory—SWAT 2004, Proc. 9th Scandinavian Workshop. Lecture Notes in Computer Science, vol. 3111, pp. 362–371. Springer, Berlin (2004)
26. Kann, V.: On the approximability of NP-complete optimization problems. Dissertation, Department of Numerical Analysis and Computing Science, Royal Institute of Technology, Stockholm, Sweden (1992)
27. Karmarkar, N., Karp, R.M.: An efficient approximation scheme for the one-dimensional bin packing problem. In: Proceedings of the 23rd Annual IEEE Symposium on Foundations of Computer Science (FOCS), pp. 312–320. IEEE Comput. Soc., Los Alamitos (1982)
28. Khanna, S., Motwani, R., Sudan, M., Vazirani, U.V.: On syntactic versus computational views of approximability. SIAM J. Comput. 28(1), 164–191 (1998)
29. Kolaitis, P.G., Thakur, M.N.: Logical definability of NP optimization problems. Inf. Comput. 115(2), 321–353 (1994)
30. Kolaitis, P.G., Thakur, M.N.: Approximation properties of NP minimization classes. J. Comput. Syst. Sci. 50(3), 391–411 (1995)
31. Kratsch, S.: Polynomial kernelizations for MIN F^{+}Π_{1} and MAX NP. In: Albers, S., Marion, J.-Y. (eds.) 26th International Symposium on Theoretical Aspects of Computer Science (STACS 2009), pp. 601–612. Schloss Dagstuhl—Leibniz-Zentrum fuer Informatik, Germany (2009)
32. Marx, D.: Parameterized complexity of independence and domination on geometric graphs. In: Bodlaender, H.L., Langston, M.A. (eds.) Parameterized and Exact Computation—IWPEC 2006, Proceedings of the Second International Workshop. Lecture Notes in Computer Science, vol. 4169, pp. 154–165. Springer, Berlin (2006)
33. Marx, D.: On the optimality of planar and geometric approximation schemes. In: Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pp. 338–348. IEEE Comput. Soc., Los Alamitos (2007)
34. Orponen, P., Mannila, H.: On approximation preserving reductions: complete problems and robust measures. Technical Report C-1987-28, Department of Computer Science, University of Helsinki (1987)
35. Papadimitriou, C., Yannakakis, M.: Optimization, approximation and complexity classes. J. Comput. Syst. Sci. 43(3), 425–440 (1991)
36. Paz, A., Moran, S.: Non deterministic polynomial optimization problems and their approximations. Theor. Comput. Sci. 15(3), 251–277 (1981)
37. Petrank, E.: The hardness of approximation: gap location. Comput. Complex. 4(2), 133–157 (1994)
38. Queyranne, M.: Bounds for assembly line balancing heuristics. Oper. Res. 33(6), 1353–1359 (1985)
39. Simon, H.U.: On approximate solutions for combinatorial optimization problems. SIAM J. Discrete Math. 3(2), 294–310 (1990)
40. van Leeuwen, E.J.: Approximation algorithms for unit disk graphs. In: Kratsch, D. (ed.) Graph-Theoretic Concepts in Computer Science—WG 2005, Proceedings of the 31st International Workshop. Lecture Notes in Computer Science, vol. 3787, pp. 351–361. Springer, Berlin (2005)
41. van Leeuwen, E.J.: Better approximation schemes for disk graphs. In: Arge, L., Freivalds, R. (eds.) Algorithm Theory—SWAT 2006, Proceedings of the 10th Scandinavian Workshop. Lecture Notes in Computer Science, vol. 4059, pp. 316–327. Springer, Berlin (2006)
42. van Leeuwen, E.J.: Optimization and approximation on systems of geometric objects. Ph.D. Thesis, University of Amsterdam (2009)
43. Vizing, V.G.: Ob otsenke khromaticheskogo klassa p-grafa (Russian: On an estimate of the chromatic class of a p-graph). Diskretn. Anal. 3, 25–30 (1964)
44. Woeginger, G.: There is no asymptotic PTAS for two-dimensional vector packing. Inf. Process. Lett. 64(6), 293–297 (1997)