An approximation algorithm for a general class of multi-parametric optimization problems

In a widely studied class of multi-parametric optimization problems, the objective value of each solution is an affine function of real-valued parameters. The goal is then to provide an optimal solution set, i.e., a set containing an optimal solution for each non-parametric problem obtained by fixing a parameter vector. For many multi-parametric optimization problems, however, an optimal solution set of minimum cardinality can contain super-polynomially many solutions. Consequently, no polynomial-time exact algorithms can exist for these problems, even if P = NP. We propose an approximation method that is applicable to a general class of multi-parametric optimization problems and outputs a set of solutions with cardinality polynomial in the instance size and the inverse of the approximation guarantee. This method lifts approximation algorithms for non-parametric optimization problems to their parametric versions and provides an approximation guarantee that is arbitrarily close to the approximation guarantee of the non-parametric algorithm. If the non-parametric problem can be solved exactly in polynomial time or admits an FPTAS, our algorithm is an FPTAS.
Further, we show that, for any given approximation guarantee, the minimum cardinality of an approximation set is, in general, not ℓ-approximable for any natural number ℓ less than or equal to the number of parameters, and we discuss applications of our results to classical multi-parametric combinatorial optimization problems. In particular, we obtain an FPTAS for the multi-parametric minimum s-t-cut problem, an FPTAS for the multi-parametric knapsack problem, as well as an approximation algorithm for the multi-parametric maximization of independence systems problem.


Introduction
Many optimization problems depend on parameters whose values are unknown or can only be estimated. Changes in the parameters may alter the set of optimal solutions or even affect the feasibility of solutions. Multi-parametric optimization models describe the dependencies of the objective function and/or the constraints on the values of the parameters. That is, for any possible combination of parameter values, multi-parametric optimization problems ask for an optimal solution and its objective value.
In this article, we consider linear multi-parametric optimization problems in which the objective depends affine-linearly on each parameter. For simplicity, we focus on minimization problems, but all our reasoning and results can be applied to maximization problems as well. Formally, for K ∈ N \ {0}, (an instance of) a linear K-parametric optimization problem Π is given by a nonempty (finite or infinite) set X of feasible solutions, functions a, b_k : X → R, k = 1, …, K, and a parameter set Λ ⊆ R^K. Then, the optimization problem is typically formulated (cf. [36,40]) as

inf_{x∈X} f(x, λ) := a(x) + Σ_{k=1}^{K} λ_k · b_k(x),   λ ∈ Λ.

Fixing a parameter vector λ ∈ Λ yields (an instance of) the non-parametric version Π(λ) of the linear K-parametric optimization problem. Moreover, the function f : Λ → R ∪ {−∞}, λ ↦ f(λ) := inf_{x∈X} f(x, λ), that assigns the optimal objective value of Π(λ) to each parameter vector λ ∈ Λ, is called the optimal cost curve. The goal is to find a set S′ ⊆ X of feasible solutions that contains an optimal solution for Π(λ) for each λ ∈ Λ for which inf_{x∈X} f(x, λ) is attained. Such a set S′ is called an optimal solution set of the multi-parametric problem and induces a decomposition of the parameter set Λ: for each solution x ∈ S′, the associated critical region Λ(x) subsumes all parameter vectors λ ∈ Λ such that x is optimal for Π(λ).
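To make the setting concrete, the following is a minimal sketch of a linear K-parametric minimization problem with a finite feasible set X: each solution carries data a(x) and b_1(x), …, b_K(x), and the objective is f(x, λ) = a(x) + Σ_k λ_k · b_k(x). All function and variable names here are illustrative and not part of the paper's notation beyond a, b_k, and f.

```python
# Each solution is represented as a pair (a(x), (b_1(x), ..., b_K(x))).

def f(a, b, lam):
    """Objective value f(x, lam) of a solution with data (a, b)."""
    return a + sum(l * bk for l, bk in zip(lam, b))

def optimal_cost(X, lam):
    """Optimal cost curve value f(lam) = min over x in X of f(x, lam)."""
    return min(f(a, b, lam) for (a, b) in X)

def optimal_solution_set(X, params):
    """For finitely many parameter vectors, collect one optimal solution each
    (returned as indices into X); an optimal solution set in miniature."""
    S = set()
    for lam in params:
        S.add(min(range(len(X)), key=lambda i: f(X[i][0], X[i][1], lam)))
    return S

# Example with K = 2 and three solutions:
X = [(1.0, (4.0, 0.0)), (3.0, (1.0, 1.0)), (6.0, (0.0, 0.5))]
print(optimal_cost(X, (0.0, 0.0)))  # -> 1.0 (the first solution is optimal at lam = 0)
```

Evaluating `optimal_solution_set` over several parameter vectors illustrates how different solutions become optimal as λ varies, i.e., how the critical regions partition the parameter set.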
For many linear multi-parametric optimization problems, however, the cardinality of any optimal solution set can be super-polynomially large, even if K = 1 (see, for example, [2,5,18,35,41]). In general, this rules out per se the existence of polynomial-time exact algorithms, even if P = NP. Approximation provides a concept to substantially reduce the number of required solutions while still obtaining provable solution quality. For the non-parametric version Π(λ), approximation is defined as follows (cf. [46]):

Definition 1.1 For β ≥ 1 and a parameter vector λ ∈ Λ such that f(λ) ≥ 0, a feasible solution x ∈ X is called β-approximate (or a β-approximation) for the non-parametric version Π(λ) if f(x, λ) ≤ β · f(x′, λ) for all x′ ∈ X.
This concept can be adapted to linear multi-parametric optimization problems. There, the task is then to find a set of solutions that contains a β-approximate solution for each non-parametric problem Π(λ). Formally, this is captured in the following definition (cf. [3,19]):

Definition 1.2 For β ≥ 1, a finite set S ⊆ X is called a β-approximation set for Π if it contains a β-approximate solution x ∈ S for Π(λ) for any λ ∈ Λ for which f(λ) ≥ 0. An algorithm A that computes a β-approximation set for any instance Π in time polynomially bounded in the instance size is called a β-approximation algorithm. A polynomial-time approximation scheme (PTAS) is a family (A_ε)_{ε>0} of algorithms such that, for every ε > 0, algorithm A_ε is a (1 + ε)-approximation algorithm. A PTAS (A_ε)_{ε>0} is a fully polynomial-time approximation scheme (FPTAS) if the running time of A_ε is additionally polynomial in 1/ε.

Next, we discuss some assumptions that are necessary in order to ensure a well-defined notion of approximation and to allow for the existence of efficient approximation algorithms. Note that the outlined (technical) assumptions are rather mild and are satisfied for multi-parametric formulations of a large variety of well-known optimization problems. This includes the knapsack problem, the minimum s-t-cut problem, and the maximization of independence systems problem (see Section 4), as well as the assignment problem, the minimum cost flow problem, the shortest path problem, and the metric traveling salesman problem (see Section 5 in [3]).
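Definition 1.2 can be checked mechanically on finitely many sampled parameter vectors: S is a β-approximation set (restricted to the sample) if, for each sampled λ with f(λ) ≥ 0, some x ∈ S satisfies f(x, λ) ≤ β · f(λ). The finite sampling is only illustrative; the definition quantifies over all λ ∈ Λ. Names below are hypothetical.

```python
def f(a, b, lam):
    return a + sum(l * bk for l, bk in zip(lam, b))

def is_beta_approx_set(S, X, params, beta):
    """Check the beta-approximation-set property on a finite sample of
    parameter vectors; S and X are lists of (a(x), (b_1(x), ..., b_K(x)))."""
    for lam in params:
        opt = min(f(a, b, lam) for (a, b) in X)
        if opt < 0:
            continue  # approximation is only defined where f(lam) >= 0
        if all(f(a, b, lam) > beta * opt for (a, b) in S):
            return False
    return True

# Tiny example with K = 1: two solutions, three sampled parameter values.
X = [(1.0, (2.0,)), (2.0, (1.0,))]
params = [(0.0,), (1.0,), (2.0,)]
```

For instance, the singleton containing only the second solution is a 2-approximation set on this sample but not a 1.5-approximation set, since at λ = 0 its objective value 2 exceeds 1.5 times the optimum 1.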
Similar to the case of non-parametric problems, where non-negativity of the optimal objective value is required in order to define approximation (cf. [46] and Definition 1.1 above), approximation for multi-parametric problems can only be defined if the optimal objective value f(λ) is non-negative for any λ ∈ Λ. To ensure this, assumptions on the parameter set and the functions a, b_k, k = 1, …, K, are necessary. An initial approach would be to assume non-negativity of the parameter vectors as well as non-negativity of the functions a, b_k, k = 1, …, K. A natural generalization also allows for negative parameter vectors.
Moreover, solutions must be polynomially encodable and the values a(x) and b_k(x), k = 1, …, K, must be efficiently computable for any x ∈ X in order for the problem to admit any polynomial-time approximation algorithm. Hence, we assume that any solution x ∈ X is of polynomial encoding length and the values a(x) and b_k(x), k = 1, …, K, can be computed in time polynomial in the instance size and the encoding length of x (Assumption 1.3 (b)). This implies that the values a(x) and b_k(x), k = 1, …, K, are rationals of polynomial encoding length. Consequently, the assumptions made so far imply the existence of positive rational bounds LB and UB such that b_k(x), f(x, λ^min) ∈ {0} ∪ [LB, UB] for all x ∈ X and all k = 1, …, K. It is further assumed that LB and UB can be computed polynomially in the instance size (Assumption 1.3 (c)). Note that the numerical values of LB and UB may still be exponential in the instance size.
Extending the results for 1-parametric optimization problems from [3], we study how an exact or approximate algorithm ALG for the non-parametric version can be used in order to approximate the multi-parametric problem and which approximation guarantee can be achieved when relying on polynomially many calls to ALG. Hence, the last assumption is the existence of an exact algorithm or an approximation algorithm for the non-parametric version (Assumption 1.3 (d)). In summary, the following assumptions are made:

(b) Any x ∈ X can be encoded by a number of bits polynomial in the instance size, and the values a(x) and b_k(x), k = 1, …, K, can be computed in time polynomial in the instance size and the encoding length of x.
(c) Positive rational bounds LB and UB such that b_k(x), f(x, λ^min) ∈ {0} ∪ [LB, UB] for all x ∈ X and all k = 1, …, K can be computed in time polynomial in the instance size.
(d) For some α ≥ 1, there exists an algorithm ALG_α that returns, for any parameter vector λ ∈ Λ, an α-approximate solution for Π(λ). Its running time is denoted by T_ALG_α.

Related Literature
Linear 1-parametric problems are widely studied in the literature. Under the assumption that there exists an optimal solution for any non-parametric version, the parameter set can be decomposed into critical regions consisting of finitely many intervals (−∞, λ_1], [λ_1, λ_2], …, [λ_L, ∞) with the property that, for each interval, one feasible solution is optimal for all parameters within the interval. Assuming that L is chosen as small as possible, the parameter values λ_1, …, λ_L are exactly the points of slope change (the breakpoints) of the piecewise-linear optimal cost curve. A general solution approach for obtaining the optimal cost curve is presented by Eisner and Severance [11]. Exact solution methods for specific optimization problems exist for the linear 1-parametric shortest path problem [27], the linear 1-parametric assignment problem [18], and the linear 1-parametric knapsack problem [9]. Note that linear 1-parametric optimization problems also appear in the context of some well-known combinatorial problems. For example, Karp and Orlin [27] observe that the minimum mean cycle problem can be reduced to a linear 1-parametric shortest path problem [5,34], and Young et al.
[47] note that linear 1-parametric programming problems arise in the process of solving the minimum balance problem, the minimum concave-cost dynamic network flow problem [20], and matrix scaling [38,43]. These and many other problems share an inherent difficulty (see, e.g., Carstensen [5]): the optimal cost curve may have super-polynomially many breakpoints in general. This precludes the existence of polynomial-time exact algorithms, even if P = NP. Nevertheless, there exist 1-parametric optimization problems for which the number of breakpoints is polynomial in the instance size. For example, this is known for linear 1-parametric minimum spanning tree problems [14] as well as for special cases of linear 1-parametric binary integer programs [4,5], linear 1-parametric maximum flow problems [16,32], and linear 1-parametric shortest path problems [12,27,47].
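In the 1-parametric case, each solution x induces a line λ ↦ a(x) + λ · b(x), and the optimal cost curve is the lower envelope of these lines. The Eisner–Severance idea mentioned above can then be sketched as follows: given an exact non-parametric solver, recursively bisect at the intersection of the two optimal lines found at the interval endpoints. This is a simplified, hedged rendition of the method, not the authors' exact pseudocode; `solve` is an assumed black box returning the data (a(x*), b(x*)) of an optimal solution.

```python
def eisner_severance(solve, lo, hi, tol=1e-9):
    """Return lines (a, b) covering the lower envelope on [lo, hi]."""
    al, bl = solve(lo)
    ah, bh = solve(hi)
    if abs(bl - bh) < tol:
        # Same slope: within tolerance, one line is optimal on the interval.
        return [(al, bl)] if abs(al - ah) < tol else [(al, bl), (ah, bh)]
    lam = (ah - al) / (bl - bh)  # intersection: a_l + lam*b_l = a_h + lam*b_h
    am, bm = solve(lam)
    if am + lam * bm >= al + lam * bl - tol:
        return [(al, bl), (ah, bh)]  # no better line: the two lines suffice
    # A strictly better line exists at lam: recurse on both subintervals.
    left = eisner_severance(solve, lo, lam, tol)
    right = eisner_severance(solve, lam, hi, tol)
    return left + (right[1:] if left[-1] == right[0] else right)

# Toy instance: three lines; the envelope uses all of them on [0, 8].
lines = [(0.0, 1.0), (1.0, 0.25), (2.0, 0.0)]
solve = lambda lam: min(lines, key=lambda ab: ab[0] + lam * ab[1])
```

Each recursion step either stops or discovers a line not seen before, so the number of solver calls is linear in the number of breakpoints; this is exactly why super-polynomially many breakpoints preclude polynomial running time.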
Exact solution methods for general linear multi-parametric optimization problems are studied by Gass and Saaty [17,42] and Gal and Nedoma [15]. The minimum number of solutions needed to decompose the parameter set into critical regions, called the parametric complexity, is a natural criterion to measure the complexity. As in the 1-parametric case, the parametric complexity of a variety of problems is super-polynomial in the instance size. This even holds true for the special cases of minimum s-t-cut problems whose 1-parametric versions are tractable [2]. Known exceptions are certain linear K-parametric binary integer programs [5], various linear K-parametric multiple alignment problems [13], linear K-parametric global minimum cut problems [1,26], and the linear K-parametric minimum spanning tree problem [44].
As outlined above, many linear multi-parametric optimization problems do not admit polynomial-time algorithms in general, even if P = NP and K = 1. This fact strongly motivates the design of approximation algorithms for multi-parametric optimization problems. So far, approximation schemes exist only for linear 1-parametric optimization problems. A general algorithm, which can be interpreted as an approximate version of the method of Eisner and Severance, is presented by Herzel et al. [3]. The approximation of the linear 1-parametric 0-1-knapsack problem is considered in [19,22,25].
We conclude this section by expounding the relationship between multi-parametric optimization and multi-objective optimization. We first mention similarities and then discuss differences between (the approximation concepts for) both types of problems. In a multi-objective optimization problem, the K + 1 objective functions a, b_k, k = 1, …, K, are to be optimized over the feasible set X simultaneously, and a β-approximation set is a set S′ ⊆ X of feasible solutions such that, for each solution x ∈ X, there exists a solution x′ ∈ S′ that is at most a factor of β worse than x in each objective function a, b_k, k = 1, …, K. We refer to the seminal work of Papadimitriou and Yannakakis [39] for further details.
When restricting to nonnegative parameter sets Λ, linear multi-parametric problems can be solved exactly by methods that compute so-called (extreme) supported solutions of multi-objective problems. Moreover, since the functions a, b_k, k = 1, …, K, are linearly combined by a nonnegative parameter vector, multi-objective approximation sets are also multi-parametric approximation sets in this case. Surveys on exact methods and on the approximation of multi-objective optimization problems are provided by Ehrgott et al. [10] and Herzel et al. [24], respectively. Using techniques from multi-objective optimization with the restriction that the functions a, b_k are assumed to be strictly positive, multi-parametric optimization problems with nonnegative parameter sets are approximated in [6,7,8]. We note that the proposed concepts heavily rely on scaling of the objectives such that, for each solution x ∈ X, all the pairwise ratios of a(x), b_k(x), k = 1, …, K, are bounded by two. This clearly cannot be done if (strict subsets of) the function values of solutions x ∈ X are equal to zero. Despite these connections, there are significant differences between the approximation of multi-parametric and multi-objective problems: (1) As already pointed out by Diakonikolas [7], the class of problems admitting an efficient multi-parametric approximation algorithm is larger than the class of problems admitting an efficient multi-objective approximation algorithm. For example, the multi-parametric minimum s-t-cut problem with positive parameter set can be approximated efficiently [7], whereas it is shown in [39] that there is no FPTAS for constructing a multi-objective approximation set for the bi-objective minimum s-t-cut problem unless P = NP. This is also highlighted by the simple fact that (2) multi-objective approximation is not well-defined for negative objectives, whereas multi-parametric approximation allows the functions a, b_k to be negative as long as the parameter set Λ
is restricted such that f(x, λ) ≥ 0 for all solutions x ∈ X and all parameter vectors λ ∈ Λ. (3) For nonnegative parameter sets, Herzel et al. [23] show that, in the case of minimization, a multi-parametric β-approximation set is only a multi-objective ((K + 1) · β)-approximation set. In the case of maximization, they even show that no multi-objective approximation guarantee can be achieved by a multi-parametric approximation in general. (4) There also exist substantial differences with respect to the minimum cardinality of approximation sets: for some M ∈ N, consider a 1-parametric maximization problem with feasible solutions x_0, …, x_M such that a(x_i) = β^{2i} and b_1(x_i) = β^{2(M−i)} for i = 0, …, M. Then, any multi-objective β-approximation set must contain all solutions, whereas {x_0, x_M} is a multi-parametric β-approximation set.
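The cardinality gap in item (4) can be verified numerically. In the example, a single parameter λ ≥ 0 combines the two objectives as f(x_i, λ) = a(x_i) + λ · b_1(x_i); since i ↦ β^{2i} + λβ^{2(M−i)} is convex, the maximum over i is always attained at i = 0 or i = M, whereas the β² separation between consecutive solutions forces every multi-objective β-approximation set to keep all of them. The values β = 2 and M = 4 are illustrative choices, not taken from the paper.

```python
beta, M = 2.0, 4
a = [beta ** (2 * i) for i in range(M + 1)]          # a(x_i) = beta^(2i)
b = [beta ** (2 * (M - i)) for i in range(M + 1)]    # b_1(x_i) = beta^(2(M-i))

# Multi-parametric view: for every sampled lam >= 0, x_0 or x_M is optimal.
for lam in [0.0, 0.01, 0.1, 1.0, 10.0, 100.0]:
    vals = [a[i] + lam * b[i] for i in range(M + 1)]
    assert max(vals) == max(vals[0], vals[M])

# Multi-objective view: x_i is beta-approximated in both objectives only by itself.
for i in range(M + 1):
    covers = [j for j in range(M + 1)
              if beta * a[j] >= a[i] and beta * b[j] >= b[i]]
    assert covers == [i]
```

So {x_0, x_M} covers the whole parameter set, while every multi-objective β-approximation set must contain all M + 1 solutions.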
Consequently, existing approximation algorithms for 1-parametric and/or multi-objective optimization problems are not sufficient for obtaining efficient and broadly applicable approximation methods for general multi-parametric optimization problems. This motivates the article at hand, in which we establish a theory of, and provide an efficient method for, the approximation of general linear multi-parametric problems.

Our Contribution
We provide a general approximation method for a large class of multi-parametric optimization problems by extending the ideas of both the approximation algorithm of Diakonikolas et al. [7,8] and the 1-parametric approximation algorithm of Herzel et al. [3] to linear multi-parametric problems. Note that [3] considers only 1-parametric problems, where the one-dimensional parameter set leads to an easier structure of the optimal cost curve and thereby allows for a bisection-based approximation algorithm. This bisection-based approach cannot be generalized to the multi-dimensional parameter sets considered here.
For any 0 < ε < 1, we show that, if the non-parametric version can be approximated within a factor of α ≥ 1, then the linear multi-parametric problem can be approximated within a factor of (1 + ε) · α in running time polynomially bounded by the size of the instance and 1/ε. That is, the algorithm outputs a set of solutions that, for any feasible vector of parameter values, contains a solution that ((1 + ε) · α)-approximates all feasible solutions in the corresponding non-parametric problem. Consequently, the availability of a polynomial-time exact algorithm or an (F)PTAS for the non-parametric problem implies the existence of an (F)PTAS for the multi-parametric problem.
In Section 2, we show basic properties of the parameter set with respect to approximation. These results allow a decomposition of the parameter set by means of assigning each vector of parameter values to the approximating solution. We state our polynomial-time (multi-parametric) approximation method for the general class of linear multi-parametric optimization problems. Furthermore, we discuss the task of finding a set of solutions with minimum cardinality that approximates the linear K-parametric optimization problem in Section 3. We adapt the impossibility result of [7,8], which states that there does not exist an efficient approximation algorithm that provides any constant approximation factor on the minimum cardinality if the non-parametric problem can be approximated within a factor of 1 + δ for some δ > 0. We extend this to the case that an exact non-parametric algorithm is available. Here, we show that there cannot exist an efficient approximation algorithm that yields an approximation set with cardinality less than or equal to K times the minimum cardinality.
Section 4 discusses applications of our general approximation algorithm to multi-parametric versions of several well-known optimization problems. In particular, we obtain fully polynomial-time approximation schemes for the linear multi-parametric minimum s-t-cut problem and the multi-parametric knapsack problem (where approximation schemes for the linear 1-parametric version have been presented in [19,25]). We also obtain an approximation algorithm for the multi-parametric maximization of independence systems, a class of problems for which the well-known greedy method is an approximation algorithm for the non-parametric version.

A General Approximation Algorithm
We now present our approximation method for linear multi-parametric optimization problems satisfying Assumption 1.3. We first sketch the general idea and then discuss the details. In the following, given some β ≥ 1, we simply say that x is a β-approximation for λ instead of x is a β-approximation for Π(λ) if this does not cause any confusion. Clearly, for each solution x ∈ X, there is a (possibly empty) subset of parameter vectors Λ′ ⊆ Λ such that x is a β-approximation for all parameter vectors λ′ ∈ Λ′. Let 0 < ε < 1 be given and let α ≥ 1 be the approximation guarantee obtained by the algorithm ALG_α for the non-parametric version as in Assumption 1.3 (d). The general idea of our multi-parametric approximation method can be described as follows. First, we show that there exists a compact subset Λ^compact ⊆ Λ with the following property: (A) For each parameter vector λ ∈ Λ \ Λ^compact, there exists a parameter vector λ′ ∈ Λ^compact such that any β-approximation for λ′ is a sufficiently good approximation for λ (see Proposition 2.7 and Corollary 2.8). Thus, it suffices to approximate the problem on Λ^compact. Second, we construct a grid Λ^Grid of parameter vectors whose components are of the form (1 + ε′)^{l_k} for some l_k ∈ Z and k = 1, …, K, such that the following holds: (B) The cardinality of Λ^Grid is polynomially bounded in the encoding length of the instance and 1/ε (but exponential in K), see Proposition 2.12.
We now present the details of the algorithm and start with Property (A). To this end, it is helpful to also allow parameter dependencies in the constant term. Hence, we define F_0(x) := f(x, λ^min) and F_k(x) := b_k(x) for k = 1, …, K, and consider the augmented problem of minimizing Σ_{k=0}^K w_k · F_k(x) over x ∈ X (1), where the goal is to provide an optimal solution for any w ∈ R^{K+1}_≥. The vectors w ∈ R^{K+1}_≥ are called weights, and the set of all weights is called the weight set in order to distinguish it from the parameter set of the non-augmented problem. The terms β-approximate solution and β-approximation set for the augmented problem are defined analogously to Definitions 1.1 and 1.2, respectively. Note that the non-parametric version Π(λ) of Π for some λ = (λ_1, …, λ_K) ∈ Λ coincides with the non-parametric version of the augmented problem for the weight w = (1, λ_1 − λ^min_1, …, λ_K − λ^min_K). A solution x* is optimal for some weight w ∈ R^{K+1}_≥ if and only if x* is optimal for t · w for any positive scalar t > 0. An analogous result holds in the approximate sense:

Observation 2.1 Let x, x* ∈ X be two feasible solutions. Then, for any positive scalar t > 0 and any β ≥ 1, it holds that Σ_{k=0}^K w_k F_k(x) ≤ β · Σ_{k=0}^K w_k F_k(x*) if and only if Σ_{k=0}^K (t · w_k) F_k(x) ≤ β · Σ_{k=0}^K (t · w_k) F_k(x*).

The conclusion of this observation is twofold: on the one hand, any β-approximation set for the augmented multi-parametric problem (1) is also a β-approximation set for Π. On the other hand, restricting the weight set to the bounded K-dimensional simplex W_1 := {w ∈ R^{K+1}_≥ : Σ_{i=0}^K w_i = 1} again yields an equivalent problem.
The next results establish the compact set Λ^compact ⊆ Λ of parameter vectors such that any β-approximation set for Λ^compact is a (β + ε′)-approximation set for Λ.
Let w be a strictly positive weight whose components w_i for i in some index set ∅ ≠ I ⊆ {0, …, K} sum up to a small threshold. The next proposition states that, instead of computing an approximate solution for w, one can compute an approximate solution for the weight obtained by projecting all components w_i, i ∈ I, to zero, and still obtain a 'sufficiently good' approximation guarantee for w.
To this end, for a set I ⊆ {0, …, K} of parameter indices, the projection proj_I : R^{K+1} → R^{K+1} that maps all components w_i of a vector w ∈ R^{K+1} with indices i ∈ I to zero is defined by

(proj_I(w))_i := 0 for i ∈ I, and (proj_I(w))_i := w_i for i ∉ I.

Lemma 2.3 Let 0 < ε′ < 1 and β ≥ 1. Further, let ∅ ≠ I ⊊ {0, …, K} be an index set and let w ∈ R^{K+1}_≥ be a weight whose components w_i, i ∈ I, sum up to at most a sufficiently small threshold (made precise in Definition 2.4). Then, any β-approximation for w is a (β + ε′)-approximation for proj_I(w).
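The projection proj_I is elementary and can be sketched in two lines; the example values are illustrative only.

```python
def proj(w, I):
    """Set all components of the weight w with indices in I to zero."""
    return tuple(0.0 if i in I else wi for i, wi in enumerate(w))

w = (0.05, 0.15, 0.8)            # a weight in R^{K+1} with K = 2
print(proj(w, {0, 1}))           # -> (0.0, 0.0, 0.8)
```

In the lemma above, the components indexed by I carry only a small fraction of the total weight, which is why dropping them costs at most an additive ε′ in the approximation guarantee.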
Proof. Let x ∈ X be a β-approximate solution for w. We have to show that, for any x′ ∈ X, Σ_{j∉I} w_j F_j(x) ≤ (β + ε′) · Σ_{j∉I} w_j F_j(x′). Since x is a β-approximation for w, we know that, for any solution x′ ∈ X, Σ_{j=0}^K w_j F_j(x) ≤ β · Σ_{j=0}^K w_j F_j(x′), which implies that Σ_{j∉I} w_j F_j(x) ≤ β · Σ_{j∉I} w_j F_j(x′) + β · Σ_{j∈I} w_j F_j(x′). Note that, for any solution x′′ ∈ X, it holds that F_j(x′′) ∈ {0} ∪ [LB, UB]. In the case that Σ_{j∉I} w_j F_j(x′) = 0, the choice of the threshold on the components w_i, i ∈ I, ensures that also Σ_{j∉I} w_j F_j(x) = 0. Hence, in this case, the claimed inequality holds trivially, which proves the claim.

(Note that Λ^compact could also be defined by means of the intersection W^cone ∩ {w ∈ R^{K+1}_≥ : w_0 = 1}. However, structural insights into the geometry of Λ^compact would then be missed. Moreover, the presented construction allows us to easily derive lower and upper bounds on Λ^compact, which are necessary for proving the polynomial bound on the cardinality of the grid Λ^Grid.)
Let w ∈ R^{K+1}_≥ and ∅ ≠ I ⊊ {0, …, K} be given as in Lemma 2.3. By the convexity property from Lemma 2.2, every β-approximation for w is not only a (β + ε′)-approximation for proj_I(w), but also a (β + ε′)-approximation for all weights in conv({w, proj_I(w)}). This suggests the following definition:

Definition 2.4 Given 0 < ε′ < 1 and β ≥ 1, the threshold used in the proof of Lemma 2.3 is denoted by δ(ε′, β). Additionally, for any index set ∅ ≠ I ⊊ {0, …, K}, we define the set P_<(I) of weights whose components with indices in I sum up to less than this threshold (3). The set P_≤(I) is defined analogously by replacing "<" by "≤" in (3). Note that P_≤(I) is a polyhedron. Moreover, we define W^cone as the set of weights not contained in P_<(I) for any index set I and, finally, W^compact := W^cone ∩ W_1. Note that none of the visualized sets is bounded from above.
Let w ∈ W^cone be such that w ∈ P_=(I) for some index set ∅ ≠ I ⊊ {0, …, K}. Lemma 2.3 implies that a β-approximation for w is a (β + ε′)-approximation for all weights in conv({w, proj_I(w)}). However, the reverse statement is needed: for w ∈ R^{K+1}_≥ \ W^cone, does there exist a weight w̄ ∈ W^cone such that a β-approximation for w̄ is a (β + ε′)-approximation for w? Proposition 2.7 will show that this holds true. In fact, the corresponding proof is constructive and relies on the lifting procedure described in the following.
Proof. First consider the case that w_j = 0 for all j ∈ I. Then it holds that proj_I(w) = w, and the weight w can trivially be written as a convex combination of w and proj_I(w), which concludes the proof in this case.
When given a weight w ∈ P_<(I) for some index set I, a lifted weight w̄ ∈ P_=(I) can be constructed using Lemma 2.5. A β-approximation for w̄ is then a (β + ε′)-approximation for w due to Lemma 2.2 and Lemma 2.3. Next, it is shown that this idea generalizes to the set W^cone in the following way: for each weight w ∉ W^cone, a weight w̄ ∈ W^cone can be found such that any β-approximation for w̄ is a (β + ε′)-approximation for w. The remaining task is to prove that this holds true for weights contained in P_<(I) ∩ P_<(I′) for two (or more) different index sets I and I′, since using the previous construction for I might result in a lifted weight that is still contained in P_<(I′) and vice versa. Notwithstanding, such weights can inductively be lifted with respect to different index sets and, if this is done in a particular order, a weight contained in W^cone is obtained after at most K lifting steps.
The following lemma states that, for a weight w that is not contained in P_<(I) for some index set I, increasing any of its components w_i with indices i ∈ I preserves the fact that the weight is not contained in P_<(I).

Proof. Let w ∈ R^{K+1}
Since w ∉ W^cone, we have w ∈ P_<(I) for at least one index set ∅ ≠ I ⊆ [K]. Hence, we choose k_max ∈ [K − 1] to be the largest index such that w ∈ P_<([k_max]) holds. Similarly, choose k_0 ∈ [K − 1] to be the smallest index such that w ∈ P_<([k_0]) holds. This means that w ∉ P_<([k]) for all k ∈ [K − 1] with 0 ≤ k < k_0 and k_max < k < K. Further, set w^0 := w and construct a (finite) sequence w^0, w^1, …, w^L of weights and a corresponding sequence k_0 < k_1 < ⋯ < k_{L−1} of indices such that, for each ℓ ∈ {1, …, L}, the following statements hold. The construction, which is illustrated in Figure 2, is as follows: given a weight w^ℓ, we set k_ℓ to be the smallest index such that w^ℓ ∈ P_<([k_ℓ]) and, analogously to Lemma 2.5, define the lifted weight w^{ℓ+1}. We repeat this construction until, for some L ∈ N, the weight w^L is not contained in P_<([k]) for any k ∈ [K − 1]. Note that Statement (b) implies that, for any ℓ ∈ N, the weight w^ℓ cannot be contained in P_<(I) for any index set I that is not of the form [k]. Thus, in both cases, we obtain that k_ℓ > k_{ℓ−1} (for ℓ ≥ 1) and, hence, the construction indeed terminates after at most k_max − k_0 < K steps with w^L ∈ W^cone. Furthermore, Statement (d) and Lemma 2.3 imply that any β-approximation for w^L is a (β + ε′)-approximation for proj_{[k_{ℓ′}]}(w^L) for each ℓ′ ∈ {0, …, L − 1} and, thus, also for w using Statement (e) and the convexity Lemma 2.2. For ℓ = 1, in order to prove Statement (b), first consider the case that w^0_0 > 0. In this case, w^1_0 > 0 by Statement (a). Next, consider the case that w^0_0 = ⋯ = w^0_k = 0 and w^0_{k+1} > 0 for some k ∈ [K − 1] (note that w cannot be the zero vector since w ∉ W^cone). In this case, we must have k = k_0 by definition of k_0, and, therefore, the inequality w^1_{k_0} ≤ w^1_{k_0+1} even holds with strict inequality in both cases.
All other inequalities of Statement (b) follow from the corresponding inequalities for ℓ = 0 (or trivially hold for i = 1, …, k), and all other inequalities of Statement (b) immediately follow from the corresponding inequalities for ℓ. In order to prove Statement (c), note that, for k ∈ [k_ℓ − 1], it holds that w^ℓ ∉ P_<([k]) by the choice of k_ℓ. For k = k_max + 1, …, K, we have w^ℓ ∉ P_<([k]) by Statement (a) and Lemma 2.6. For Statement (d), the corresponding relations hold for ℓ′ = 0, …, ℓ − 1. Moreover, by Lemma 2.5, it holds that w^{ℓ+1} ∈ P_=([k_ℓ]), which concludes the proof of Statement (d). Finally, Statement (e) holds for ℓ + 1 since, by the induction hypothesis, we know that w ∈ conv({w^ℓ} ∪ {proj_{[k_0]}(w^ℓ), …, proj_{[k_{ℓ−1}]}(w^ℓ)}), which means that there exist coefficients θ_0, …, θ_ℓ ∈ [0, 1] realizing this convex combination. Lemma 2.5 implies that w^ℓ ∈ conv({w^{ℓ+1}, proj_{[k_ℓ]}(w^{ℓ+1})}), i.e., there exists some μ ∈ [0, 1] realizing this combination and, thus, w ∈ conv({w^{ℓ+1}} ∪ {proj_{[k_0]}(w^{ℓ+1}), …, proj_{[k_ℓ]}(w^{ℓ+1})}), which completes the induction and the proof.
The following corollary states that the same result holds true for the K-dimensional simplex W_1 = {w ∈ R^{K+1}_≥ : Σ_{i=0}^K w_i = 1}; see Figure 3 for an illustration.
Proof. Note that W^cone is a cone, i.e., w ∈ W^cone if and only if t · w ∈ W^cone for each t > 0. In particular, for each weight w ∈ R^{K+1}_≥ \ {0}, it holds that w / (Σ_{i=0}^K w_i) ∈ W_1. Thus, the claim follows immediately from Observation 2.1 and Lemma 2.3.

The following lemma provides a lower bound on the components of weights w ∈ W^compact. This allows us to derive lower and upper bounds on Λ^compact, which will be useful when proving the polynomial cardinality of the grid Λ^Grid.

Lemma 2.9 Let 0 < ε′ < 1 and β ≥ 1. Then, W̄^compact ⊆ W^compact ⊆ W_1.
Next, we construct a grid Λ^Grid ⊆ Λ possessing Properties (B) and (C). That is, the cardinality is polynomially bounded in the encoding length of the instance and 1/ε, and computing a ((1 + ε/2) · α)-approximation for any λ ∈ Λ^compact is possible by computing an α-approximation for each grid point. We employ the bounds on Λ^compact given by Lemma 2.11, define a lower bound lb as well as an upper bound ub, and then set Λ^Grid accordingly (5). Now, Property (B) can be shown using the construction of Λ^Grid:

Proposition 2.12 Let Λ^Grid be defined as in (5). Then, |Λ^Grid| is polynomially bounded in the encoding length of the instance and 1/ε.

Proof. We have |Λ^Grid| = (ub − lb + 1)^K, where lb and ub are the grid bounds defined above.

It remains to prove that Λ^Grid indeed satisfies Property (C), for which the main idea is motivated by the approximation of multi-objective optimization problems, cf. [39].
Note that Λ^Grid is constructed in a way such that, for any parameter vector λ′ ∈ Λ^compact, there exists a parameter vector λ ∈ Λ^Grid satisfying λ_k ≤ λ′_k ≤ (1 + ε′) · λ_k for all k = 1, …, K. Hence, Property (C) follows immediately by Proposition 2.13. This concludes the discussion of the details.
Our general approximation method for multi-parametric optimization problems is now obtained as follows: given an instance Π, an α-approximation algorithm ALG_α for the non-parametric version, and ε > 0, we construct the grid Λ^Grid defined in (5), apply ALG_α for each parameter vector λ ∈ Λ^Grid, and collect all solutions in a set S. Since, as shown before, Properties (A)-(C) hold true, the set S is indeed a ((1 + ε) · α)-approximation set. Algorithm 1 summarizes the method.
Algorithm 1: Grid approach for the approximation of multi-parametric optimization problems.
Input: An instance Π, an α-approximation algorithm ALGα for the non-parametric version, and ε > 0.
1 Compute LB and UB.
2 Construct the grid Λ Grid as defined in (5).
3 For each λ ∈ Λ Grid, apply ALGα to Π(λ) and add the returned solution to S.
4 Return S.
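The structure of Algorithm 1 can be sketched as follows. The geometric grid with step 1 + ε/2 is a simplified stand-in for the concrete grid (5), and all names (`grid_approximation`, `alg_alpha`) are illustrative assumptions rather than the paper's exact construction:

```python
from itertools import product

def grid_approximation(instance, alg_alpha, lb, ub, eps, K):
    """Sketch of Algorithm 1: run an alpha-approximation algorithm ALG_alpha
    on every point of a geometric parameter grid and collect the solutions.
    (The concrete grid (5) and the bounds LB, UB from the text differ in
    detail; this only illustrates the structure of the method.)"""
    step = 1.0 + eps / 2.0
    # Geometric 1-d grid between lb and ub; the number of points is
    # polynomial in log(ub / lb) and 1 / eps.
    points_1d = [lb]
    while points_1d[-1] < ub:
        points_1d.append(min(points_1d[-1] * step, ub))
    solutions = set()
    for lam in product(points_1d, repeat=K):  # K-dimensional grid
        solutions.add(alg_alpha(instance, lam))
    return solutions
```

The returned set S contains one solution per grid point, so its cardinality is |Λ Grid|, in line with Property (B).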
where T_LB/UB denotes the time needed for computing the bounds LB and UB, and T_ALGα denotes the running time of ALGα.
In particular, Theorem 2.14 yields:

Corollary 2.15 Algorithm 1 yields an FPTAS if either a polynomial-time exact algorithm ALG1 or an FPTAS is available for the non-parametric version of Π. If a PTAS is available for the non-parametric version, Algorithm 1 yields a PTAS.

Minimum-Cardinality Approximation Sets
In this section, the task of finding a β-approximation set S* with minimum cardinality is investigated. It is stated in [45] that no constant approximation factor on the cardinality of S* can be achieved in general for multi-parametric optimization problems with positive parameter set and positive, polynomial-time computable functions a, b_k if only (1 + δ)-approximation algorithms for δ > 0 are available for the non-parametric problem. Thus, this negative result also holds in the more general case considered here.
Theorem 3.1 For any β > 1 and any integer L ∈ N, there does not exist an algorithm that computes a β-approximation set S such that |S| < L · |S*| for every 3-parametric minimization problem and generates feasible solutions only by calling ALG_{1+δ} for values of δ > 0 such that 1/δ is polynomially bounded in the encoding length of the input.
We remark that the corresponding proof (published in [7], Theorem 5.4.12) is imprecise, but the idea remains valid with a more careful construction. We provide a counterexample and a correction of the proof in the appendix.
Note that this result does not rule out the existence of a method that achieves a constant factor if a polynomial-time exact algorithm ALG1 is available. We now show that, in this case, there cannot exist a method that yields an approximation factor smaller than K + 1 on the cardinality of S*.

Theorem 3.2 For any β > 1 and K ∈ N, there does not exist an algorithm that computes a β-approximation set S with |S| < (K + 1) · |S*| for every K-parametric minimization problem and generates feasible solutions only by calling ALG1.
Proof. Let β > 1. In the following, an instance of the augmented multi-parametric optimization problem with parameter set given by the bounded K-dimensional simplex W1 is constructed such that the minimum-cardinality β-approximation set S* has cardinality one, but the unique solution x ∈ S* cannot be obtained by ALG1, and any other β-approximation set must have cardinality greater than or equal to K + 1. Consider an instance with X = {x, x^0, . . ., x^K} such that F_i(x) = (K + 1) · β for i = 0, . . ., K, and, for i = 0, . . ., K, F_i(x^i) = K + 1 and F_j(x^i) = (K + 2) · β − 1 for j ≠ i.
We show that the solution x cannot be obtained via ALG1, that the set {x} is a β-approximation set, and that the only β-approximation set that does not contain x is {x^0, . . ., x^K}, which has cardinality K + 1.
First, we show that x cannot be obtained via ALG1. For any w ∈ W1, there exists an index i ∈ {0, . . ., K} such that w_i ≥ 1/(K + 1) and, thus, w⊤F(x^i) = w_i · (K + 1) + (1 − w_i) · ((K + 2) · β − 1) < (K + 1) · β = w⊤F(x). Hence, the solution x cannot be obtained via ALG1. Next, we show that the set {x} is a β-approximation set. For any w ∈ W1 and any i = 0, . . ., K, it holds that w⊤F(x) = (K + 1) · β ≤ β · w⊤F(x^i), since w⊤F(x^i) ≥ K + 1. Hence, the solution x is a β-approximation for any λ ∈ Λ. Finally, we show that the only β-approximation set that does not contain x is {x^0, . . ., x^K}. Let e^i ∈ W1 be the ith unit vector. Then, for any j ∈ {0, . . ., K} \ {i}, we have β · (e^i)⊤F(x^i) = β · (K + 1) < (K + 2) · β − 1 = (e^i)⊤F(x^j). Note that, by continuity of w⊤F(x) in w_i for i = 0, . . ., K, there exists a small t > 0 such that, for each i ∈ {0, . . ., K}, the weight w^i defined by w^i_i := 1 − K·t/(K + 1) and w^i_j := t/(K + 1) for j ≠ i satisfies β · (w^i)⊤F(x^i) < (w^i)⊤F(x^j) for all j ≠ i. Hence, the above arguments also hold for weights w ∈ W1 ∩ R^{K+1}_>. Therefore, the above instance shows that no β-approximation set with cardinality less than K + 1 times the size of the smallest β-approximation set can be obtained using ALG1 for linear multi-parametric optimization problems in general.
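The instance from the proof can be checked numerically. The following sketch (illustrative only, not part of the proof, instantiated with K = 2 and β = 2) verifies that x is never optimal, that {x} is a β-approximation set, and that near each unit vector e^i only x^i is a β-approximation:

```python
import random

# Instance from the proof of Theorem 3.2 with K = 2, beta = 2:
# F_i(x) = (K+1)*beta for all i; F_i(x^i) = K+1, F_j(x^i) = (K+2)*beta - 1 for j != i.
K, beta = 2, 2.0
Fx = [(K + 1) * beta] * (K + 1)
Fxi = [[(K + 1) if j == i else (K + 2) * beta - 1 for j in range(K + 1)]
       for i in range(K + 1)]

def val(w, Fv):
    """Weighted objective value w^T F."""
    return sum(wi * fi for wi, fi in zip(w, Fv))

random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in range(K + 1)]
    s = sum(w)
    w = [wi / s for wi in w]  # normalize to the simplex W_1
    # (a) x is never optimal, so an exact algorithm ALG_1 never returns it:
    assert min(val(w, Fv) for Fv in Fxi) < val(w, Fx)
    # (b) {x} is a beta-approximation set:
    assert all(val(w, Fx) <= beta * val(w, Fv) for Fv in Fxi)

# (c) at each unit vector e^i, only x^i is a beta-approximation, so any
# beta-approximation set without x needs all K+1 solutions x^0, ..., x^K:
for i in range(K + 1):
    e = [1.0 if j == i else 0.0 for j in range(K + 1)]
    assert all(beta * val(e, Fxi[i]) < val(e, Fxi[j])
               for j in range(K + 1) if j != i)
```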

Applications
In this section, the established results are applied to linear multi-parametric versions of important optimization problems. The 1-parametric versions of the shortest path problem, the assignment problem, linear mixed-integer programs, the minimum cost flow problem, and the metric traveling salesman problem have previously been covered in [3]. By employing Theorem 2.14, it is easy to see that the stated results generalize to the multi-parametric case in a straightforward manner. We now apply Theorem 2.14 to several other well-known problems. Note that, for a maximization problem and some β ≥ 1, a β-approximate solution for the non-parametric version Π(λ) is a feasible solution whose objective value is at least 1/β times the optimal objective value of Π(λ).

Multi-Parametric Minimum s-t-Cut Problem Given a directed graph G = (V, R) with multi-parametric arc costs a_r + Σ_{k=1}^{K} λ_k b_{k,r}, where a_r, b_{k,r} ∈ N_0, and two vertices s, t ∈ V with s ≠ t, the multi-parametric minimum s-t-cut problem asks to compute an s-t-cut (S_λ, T_λ), s ∈ S_λ and t ∈ T_λ, of minimum total cost Σ_{r: α(r)∈S_λ, ω(r)∈T_λ} (a_r + Σ_{k=1}^{K} λ_k b_{k,r}) for each λ ∈ Λ (where α(r) denotes the start vertex and ω(r) the end vertex of an arc r ∈ R). Here, λ^min can be defined by setting λ^min_k := max_{r∈R} {−a_r/(K · b_{k,r}) : b_{k,r} ≠ 0} such that, for each parameter vector greater than or equal to λ^min, the cost of each s-t-cut is nonnegative.
A positive rational upper bound UB as in Assumption 1.3 can be obtained by summing up the m cost components a_r, summing up the m cost components b_{k,r} for each k, and taking the maximum of these K + 1 sums. The lower bound LB can be chosen as LB := 1. The non-parametric problem can be solved in time O(n · m) for any fixed λ (cf. [37]). Hence, Theorem 2.14 yields an FPTAS for the multi-parametric minimum s-t-cut problem. The number of required solutions in an optimal solution set can be super-polynomial even for K = 1 [4]. Remarkably, a recent result shows that the number of required solutions in an optimal solution set of the K-parametric minimum s-t-cut problem with K > 1 can be exponential even for instances that satisfy the so-called source-sink-monotonicity [2], whereas instances of the 1-parametric minimum s-t-cut problem satisfying source-sink-monotonicity can be solved exactly in polynomial time [16,32]. Consequently, in the multi-parametric case, an FPTAS is the best-possible approximation result even for instances satisfying source-sink-monotonicity.
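For a fixed parameter vector λ, the non-parametric subproblem is an ordinary minimum s-t-cut. The following sketch (not the O(n · m) algorithm cited above; the function name, the dense capacity matrix, and the instance data are simplifications) computes it via a plain Edmonds-Karp max-flow and reads the cut off the residual network:

```python
from collections import deque

def min_st_cut(n, arcs, a, b, lam, s, t):
    """For a fixed lam, the cost of arc r is a[r] + sum_k lam[k] * b[k][r];
    the resulting non-parametric problem is a minimum s-t-cut, solved here
    by a simple Edmonds-Karp max-flow (illustrative sketch only)."""
    cap = [[0.0] * n for _ in range(n)]
    for r, (u, v) in enumerate(arcs):
        cap[u][v] += a[r] + sum(lk * bk[r] for lk, bk in zip(lam, b))
    flow = 0.0
    while True:
        # BFS for a shortest augmenting s-t path in the residual network.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        delta = min(cap[u][v] for u, v in path)  # bottleneck capacity
        for u, v in path:
            cap[u][v] -= delta
            cap[v][u] += delta
        flow += delta
    # Source side S_lam of the cut = vertices reachable in the residual network.
    S = {s}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in S and cap[u][v] > 1e-12:
                S.add(v)
                q.append(v)
    return flow, S
```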
Multi-Parametric Maximization of Independence Systems Let a finite set E = {1, . . ., n} of elements and a nonempty family F ⊆ 2^E be given. The pair (E, F) is called an independence system if ∅ ∈ F and, for each set x ∈ F, all subsets x′ ⊆ x are also contained in F. The elements of F are then called independent sets. The lower rank l(F) and the upper rank r(F) of a subset F ⊆ E of elements are defined by l(F) := min{|B| : B ⊆ F, B ∈ F, and B ∪ {e} ∉ F for all e ∈ F \ B} and r(F) := max{|B| : B ⊆ F, B ∈ F}, respectively. The rank quotient q(E, F) of the independence system (E, F) is then defined as q(E, F) := max_{F⊆E : l(F)≠0} r(F)/l(F). Moreover, let a multi-parametric cost of the form a_e + Σ_{k=1}^{K} λ_k b_{k,e}, where a_e, b_{k,e} ∈ N_0, k = 1, . . ., K, be given for each element e ∈ E. Then, with λ^min defined by λ^min_k := max_{e∈E} {−a_e/(K · b_{k,e}) : b_{k,e} ≠ 0}, k = 1, . . ., K, the multi-parametric maximization of independence systems problem asks to compute, for each parameter vector λ greater than or equal to λ^min, an independent set x^λ ∈ F of maximum cost Σ_{e∈x^λ} (a_e + Σ_{k=1}^{K} λ_k b_{k,e}). Here, a positive rational upper bound UB as in Assumption 1.3 can be obtained by summing up the n profit components a_e, summing up the n profit components b_{k,e} for each k, and taking the maximum of these K + 1 sums. The lower bound LB can again be chosen as LB := 1. For independence systems (E, F) with rank quotient q(E, F), it is known that the greedy algorithm is a q(E, F)-approximation algorithm for the non-parametric problem obtained by fixing any parameter vector λ [31]. Hence, for any ε > 0, the maximization version of Theorem 2.14 yields a ((1 + ε) · q(E, F))-approximation algorithm whose running time is polynomial in n, 1/ε, log C, and T_Greedy, where C denotes the maximum profit component among all a_e, b_{k,e}, and T_Greedy denotes the running time of the greedy algorithm (which is often within O(n log n) times the running time of deciding whether F ∈ F holds for a given set F ⊆ E of elements). Since
the maximum matching problem (with the assignment problem as a special case) in an undirected graph G = (V, E) constitutes a special case of the maximization of independence systems problem [31], the number of required solutions in an optimal solution set for the multi-parametric maximization of independence systems problem can be super-polynomial in n even for K = 1 [4,5].
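For a fixed parameter vector λ, the greedy algorithm from [31] simply sorts the elements by their parametric cost and adds each element that keeps the current set independent. A minimal sketch, assuming an independence oracle `is_independent` (all names here are hypothetical):

```python
def parametric_greedy(elements, a, b, lam, is_independent):
    """Greedy for the non-parametric problem at a fixed lam: sort elements by
    their cost a_e + sum_k lam_k * b_{k,e} and add each element whose addition
    keeps the current set independent.  By the result cited from [31], this
    is a q(E, F)-approximation, which Theorem 2.14 lifts to the
    multi-parametric problem."""
    def cost(e):
        return a[e] + sum(lk * bk[e] for lk, bk in zip(lam, b))
    solution = set()
    for e in sorted(elements, key=cost, reverse=True):  # descending cost
        if is_independent(solution | {e}):
            solution.add(e)
    return solution
```

As a toy example, a cardinality constraint |x| ≤ 2 (a uniform matroid) serves as the independence oracle; for matroids, q(E, F) = 1 and the greedy is exact.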
For example, our result yields a ((1 + ε) · 2)-approximation algorithm for the multi-parametric b-matching problem and a ((1 + ε) · 3)-approximation algorithm for the multi-parametric maximum asymmetric TSP (cf. [33]). Note that the knapsack problem can also be formulated using independence systems, and an approximation scheme for its non-parametric version is known:

Multi-Parametric Knapsack Problem Let a finite set E = {1, . . ., n} of items with weights w_e and multi-parametric profits a_e + Σ_{k=1}^{K} λ_k b_{k,e} as above together with a knapsack capacity W be given. Then, the multi-parametric knapsack problem asks to compute a subset x ⊆ E satisfying Σ_{e∈x} w_e ≤ W of maximum profit for each parameter vector λ greater than or equal to λ^min.
For this problem, a positive rational upper bound UB as in Assumption 1.3 can again be obtained by summing up the n profit components a_e, summing up the n profit components b_{k,e} for each k, and taking the maximum of these K + 1 sums. The lower bound LB can again be chosen as LB := 1. The currently best approximation scheme for the non-parametric problem is given in [28,29]; it computes, for any ε′ > 0, a feasible solution whose profit is no worse than (1 − ε′) times the profit of any other feasible solution in time O(n · min{log n, log(1/ε′)} + (1/ε′²) · log(1/ε′) · min{n, (1/ε′) · log(1/ε′)}). Assuming that n is much larger than 1/ε′ (cf. [30]), choosing ε′ = ε/2 and applying the maximization version of Theorem 2.14 with ε/2 yields an FPTAS for the multi-parametric knapsack problem whose running time is polynomial in n, 1/ε, and log C, where C denotes the maximum profit component among all a_e, b_{k,e}.
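For illustration, the non-parametric subproblem at a fixed λ can be solved exactly by the textbook dynamic program over capacities; the FPTAS of [28,29] used above is considerably more involved, and plugging any such routine into Theorem 2.14 yields the parametric result. All names below are illustrative:

```python
def knapsack_fixed_lam(weights, a, b, lam, W):
    """Exact pseudo-polynomial DP for the non-parametric knapsack problem at
    a fixed parameter vector lam (profit of item e is a[e] + sum_k lam[k] * b[k][e]).
    Illustrative sketch only; not the FPTAS of [28, 29]."""
    n = len(weights)
    profit = [a[e] + sum(lk * bk[e] for lk, bk in zip(lam, b)) for e in range(n)]
    best = [0.0] * (W + 1)          # best[c] = max profit with capacity c
    choice = [[False] * (W + 1) for _ in range(n)]
    for e in range(n):
        # Iterate capacities downwards so each item is used at most once.
        for c in range(W, weights[e] - 1, -1):
            if best[c - weights[e]] + profit[e] > best[c]:
                best[c] = best[c - weights[e]] + profit[e]
                choice[e][c] = True
    # Backtrack to recover the chosen item set.
    x, c = set(), W
    for e in reversed(range(n)):
        if choice[e][c]:
            x.add(e)
            c -= weights[e]
    return best[W], x
```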
Again, the number of required solutions in an optimal solution set can be super-polynomial in n even for K = 1 [5].

Conclusion
Exact solution methods, complexity results, and approximation methods for multi-parametric optimization problems are of major interest in current research. In this paper, we establish that approximation algorithms for many important non-parametric optimization problems can be lifted to approximation algorithms for the multi-parametric versions of these problems. The provided approximation guarantee is arbitrarily close to the approximation guarantee of the non-parametric approximation algorithm. This implies the existence of a multi-parametric FPTAS for many important multi-parametric optimization problems for which optimal solution sets require super-polynomially many solutions in general.
Moreover, our results show that computing an approximation set containing the smallest-possible number of solutions is not possible in general. However, practical routines for reducing the number of solutions in the approximation set, based, for example, on the convexity property of Lemma 2.2 and the approximation method in [3], might be of interest. Another direction of future research could be the approximation of multi-parametric MILPs with parameter dependencies in the constraints. Here, relaxation methods or a multi-objective multi-parametric formulation of the problems may provide a suitable approach.

Figure 1: A visualization of the sets defined in Definition 2.4.
w̃ ∈ P=(I). Moreover, w = proj_I(w̃) ∈ conv({w̃, proj_I(w̃)}). Now consider the case that w_j = 0 for some j ∈ I. Here, it holds that w ∈ P=(I). Note that, since w ∈ P<(I), we must have c · min

Lemma 2.6 and Proposition 2.7
Let w ∈ R^{K+1}_≥ \ P<(I) for some index set ∅ ≠ I ⊊ {0, . . ., K}, and let w̃ ∈ R^{K+1}_≥ be a weight such that w̃_i ≥ w_i for all i ∈ I and w̃_j = w_j for all j ∉ I. Then, w̃ ∉ P<(I).

Proof. Since w ∉ P<(I), it holds that Σ_{i∈I} w̃_i ≥ Σ_{i∈I} w_i ≥ c · min_{j∉I} w_j = c · min_{j∉I} w̃_j, which proves the claim.

Now, we can prove the central result for Property (A). Note that the proof is constructive. Let 0 < ε′ < 1 and β ≥ 1 be given. Then, for any weight w ∈ R^{K+1}_≥ \ W cone, there exists a weight w̃ ∈ W cone such that any β-approximation for w̃ is a (β + ε′)-approximation for w.

Figure 3 :
Figure 3: Illustration of the set W compact for a linear multi-parametric optimization problem (K = 2). Left: W compact as a subset of R^3_≥. Right: Schematic view of W compact (light blue). The dashed lines indicate the boundary of the set W̃ compact defined in Lemma 2.9.

Figure 4 :
Figure 4: Illustration of the set Λ compact (white region). Left: Full schematic view. Right: Focus on {λ^min} + [0, 1]^K. The dashed lines indicate the boundary of the set Λ̃ compact = φ(W̃ compact) considered in the proof of Lemma 2.11.