From approximate to exact integer programming

Approximate integer programming is the following: for a convex body $K \subseteq \mathbb{R}^n$, either determine that $K \cap \mathbb{Z}^n$ is empty, or find an integer point in the body obtained by scaling $K$ by $2$ from its center of gravity $c$. Approximate integer programming can be solved in time $2^{O(n)}$, while the fastest known methods for exact integer programming run in time $2^{O(n)} \cdot n^n$. So far, no efficient methods for integer programming are known that are based on approximate integer programming. Our main contribution is two such methods, each yielding novel complexity results. First, we show that an integer point $x^* \in (K \cap \mathbb{Z}^n)$ can be found in time $2^{O(n)}$, provided that the remainders $x_i^* \bmod \ell$ of all components of $x^*$ are given, for some arbitrarily fixed $\ell \geq 5(n+1)$. The algorithm is based on a cutting-plane technique that iteratively halves the volume of the feasible set. The cutting planes are determined via approximate integer programming. Enumerating the possible remainders gives a $2^{O(n)} \cdot n^n$ algorithm for general integer programming. This matches the current best bound of an algorithm by Dadush (2012) that is considerably more involved. Our algorithm also relies on a new asymmetric approximate Carath\'eodory theorem that might be of interest on its own. Our second method concerns integer programming problems in equation standard form $Ax = b$, $0 \leq x \leq u$, $x \in \mathbb{Z}^n$. Such a problem can be reduced to the solution of $\prod_i O(\log u_i + 1)$ approximate integer programming problems. This implies, for example, that knapsack or subset-sum problems with polynomial variable range $0 \leq x_i \leq p(n)$ can be solved in time $(\log n)^{O(n)}$. For these problems, the best running time so far was $n^n \cdot 2^{O(n)}$.


Introduction
Many combinatorial optimization problems, as well as many problems from the algorithmic geometry of numbers, can be formulated as an integer linear program max{⟨c, x⟩ | Ax ≤ b, x ∈ Z^n}, where A ∈ Z^{m×n}, b ∈ Z^m and c ∈ Z^n, see, e.g., [18,30,34]. Lenstra [26] has shown that integer programming can be solved in polynomial time if the number of variables is fixed. A careful analysis of his algorithm yields a running time of 2^{O(n²)} times a polynomial in the binary encoding length of the input of the integer program. Kannan [21] improved this to n^{O(n)}, where, from now on, we ignore the extra factor that depends polynomially on the input length. At the time this paper was first submitted, the best algorithm was the one of Dadush [12] with a running time of 2^{O(n)} · n^n. The question whether there exists a singly exponential time, i.e., 2^{O(n)}, algorithm for integer programming is one of the most prominent open problems in the area of algorithms and complexity. Integer programming can be described in the following more general form. Here, a convex body is synonymous with a full-dimensional compact and convex set.

Integer Programming (IP)
Given a convex body K ⊆ R^n, find an integer solution x* ∈ K ∩ Z^n or assert that K ∩ Z^n = ∅.
The convex body K must be well described in the sense that there is access to a separation oracle, see [18]. Furthermore, one assumes that K contains a ball of radius r > 0 and that it is contained in some ball of radius R. In this setting, the current best running times hold as well. The additional polynomial factor in the input encoding length becomes a polynomial factor in log(R/r) and the dimension n. Central to this paper is approximate integer programming, which is as follows.

Approximate Integer Programming (Approx-IP)
Given a convex body K ⊆ R^n with center of gravity c ∈ R^n, either find an integer vector x ∈ (2 · (K − c) + c) ∩ Z^n or assert that K ∩ Z^n = ∅. The convex body 2 · (K − c) + c is K scaled by a factor of 2 from its center of gravity. The algorithm of Dadush [13] solves approximate integer programming in singly exponential time 2^{O(n)}. Despite its clear relation to exact integer programming, no reduction from exact to approximate is known so far. Our guiding question is the following: Can approximate integer programming be used to solve the exact version of (specific) integer programming problems?

Contributions of this paper
We present two different algorithms that reduce the exact integer programming problem (IP) to the approximate version (APPROX-IP).

a) Our first method is a randomized cutting-plane algorithm that, in time 2^{O(n)} and for any ℓ ≥ 5(n + 1), finds a point in K ∩ (Z^n/ℓ) with high probability, if K contains an integer point. This algorithm uses an oracle for (APPROX-IP) on K intersected with one side of a hyperplane that is close to the center of gravity. Thereby, the algorithm collects ℓ integer points close to K.
The collection is such that the convex combination of these points with uniform weights 1/ℓ lies in K. If, during an iteration, no point is found, the volume of K is roughly halved, and eventually K lies in a lower-dimensional subspace on which one can recurse.
b) If equipped with the component-wise remainders v ≡ x* (mod ℓ) of a solution x* of (IP), one can use this algorithm to find a point in (K − v) ∩ Z^n and combine it with the remainders into a full solution of (IP), using that (K − v) ∩ ℓZ^n ≠ ∅. This runs in singly exponential randomized time 2^{O(n)}. Via enumeration of all remainders, one obtains an algorithm for (IP) that runs in time 2^{O(n)} · n^n. This matches the best-known running time for general integer programming [13], achieved by an algorithm that is considerably more involved.
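The enumeration wrapper of part b) can be sketched as follows; `solve_shifted` is a hypothetical stand-in for the 2^{O(n)} subroutine that, given the correct remainder vector v, finds y ∈ Z^n with v + ℓ·y ∈ K and returns None otherwise:

```python
from itertools import product

def solve_ip(solve_shifted, n, ell):
    # try every remainder vector v in {0,...,ell-1}^n; for the correct v,
    # solve_shifted(v) returns y with v + ell*y in K, i.e. a solution of (IP)
    for v in product(range(ell), repeat=n):
        y = solve_shifted(v)
        if y is not None:
            return [v[i] + ell * y[i] for i in range(n)]
    return None
```

The loop has ℓ^n iterations, which for ℓ = Θ(n) gives the stated 2^{O(n)} · n^n bound.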
c) Our analysis depends on a new approximate Carathéodory theorem that we develop in Section 4. While approximate Carathéodory theorems are known for centrally symmetric convex bodies [31,6,29], our version holds for general convex sets and might be of interest on its own.

d) Our second method is for integer programming problems Ax = b, x ∈ Z^n, 0 ≤ x ≤ u in equation standard form. We show that such a problem can be reduced to 2^{O(n)} · ∏_i O(log(u_i + 1)) instances of (APPROX-IP). This yields a running time of (log n)^{O(n)} for such IPs in which the variables are bounded by a polynomial in the dimension. The best running time for such instances at the time of the first submission of this paper was 2^{O(n)} · n^n. Well-known benchmark problems in this setting are knapsack and subset-sum with polynomial upper bounds on the variables, see Section 5.

Related work
If the convex body K is an ellipsoid, then the integer programming problem (IP) is the well-known closest vector problem (CVP), which can be solved in time 2^{O(n)} with an algorithm by Micciancio and Voulgaris [28]. Blömer and Naewe [9] previously observed that the sampling technique of Ajtai et al. [3] can be modified so as to solve the closest vector problem approximately. More precisely, they showed that a (1 + ε)-approximation of the closest vector problem can be found in time O(2 + 1/ε)^n. This was later generalized to arbitrary convex sets by Dadush [13]. This algorithm either asserts that the convex body K does not contain any integer points, or it finds an integer point in the body stemming from K scaled by (1 + ε) from its center of gravity. The running time of this randomized algorithm is also O(2 + 1/ε)^n. In our paper, we restrict ourselves to the case ε = 1, which can be solved in singly exponential time. The technique of reflection sets was also used by Eisenbrand et al. [15] to solve (CVP) in the ℓ∞-norm approximately. In the setting in which integer programming can be attacked with dynamic programming, tight upper and lower bounds on the complexity are known [16,19,23]. Our n^n · 2^{O(n)} algorithm could be made more efficient by constraining the possible remainders of a solution (mod ℓ) efficiently. This barrier is different from the one in classical integer programming methods that are based on branching on flat directions [26,18], as they result in a branching tree of size n^{O(n)}.
The subset-sum problem is as follows. Given a set Z ⊆ N of n positive integers and a target value t ∈ N, determine whether there exists a subset S ⊆ Z with ∑_{s∈S} s = t. Subset sum is a classical NP-complete problem that serves as a benchmark in algorithm design. The problem can be solved in pseudopolynomial time [7] by dynamic programming. The current fastest pseudopolynomial-time algorithm is the one of Bringmann [10] that runs in time O(n + t) up to polylogarithmic factors. There exist instances of subset sum whose set of feasible solutions, interpreted as 0/1 incidence vectors, requires numbers of value n^n in the input, see [4]. Lagarias and Odlyzko [24] have shown that instances of subset sum in which each number of the input Z is drawn uniformly at random from {1, ..., 2^{O(n²)}} can be solved in polynomial time with high probability. The algorithm of Lagarias and Odlyzko is based on the LLL algorithm [25] for lattice basis reduction.
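For reference, the classical dynamic program of [7] can be sketched as follows (a minimal Python version; Bringmann's near-linear-time algorithm [10] is considerably more sophisticated):

```python
def subset_sum(zs, t):
    # reachable[s] is True iff some subset of the items seen so far sums to s
    reachable = [False] * (t + 1)
    reachable[0] = True
    for z in zs:
        # iterate downwards so that each item is used at most once
        for s in range(t, z - 1, -1):
            if reachable[s - z]:
                reachable[s] = True
    return reachable[t]
```

The table has t + 1 entries and each of the n items touches every entry once, giving the O(nt) pseudopolynomial bound.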

Subsequent work
After the acceptance of the conference version of this work, Reis and Rothvoss [33] proved that an algorithm originally suggested by Dadush [12] can solve any n-variable integer program max{⟨c, x⟩ | Ax ≤ b, x ∈ Z^n} in time (log n)^{O(n)} times a polynomial in the encoding length of A, b and c. However, the question whether there is a 2^{O(n)}-time algorithm remains wide open, and the approach used by Reis and Rothvoss inherently cannot provide running time bounds below (log n)^{O(n)}.

Preliminaries
A lattice Λ is the set of integer combinations of linearly independent vectors, i.e., Λ := Λ(B) := {Bx | x ∈ Z^r}, where B ∈ R^{n×r} has linearly independent columns. The determinant of Λ is the volume of the r-dimensional parallelepiped spanned by the columns of the basis B, i.e., det(Λ) := √(det(BᵀB)). We say that Λ has full rank if n = r. In that case the determinant is simply det(Λ) = |det(B)|. For a full-rank lattice Λ, we denote the dual lattice by Λ* := {y ∈ R^n | ⟨x, y⟩ ∈ Z for all x ∈ Λ}. For an introduction to lattices, we refer to [27].
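As an illustration, det(BᵀB) can be evaluated exactly with rational arithmetic, from which det(Λ) is the square root; `gram_det` is our own helper name:

```python
from fractions import Fraction

def gram_det(B):
    # B: list of r linearly independent basis vectors in R^n (as rows).
    # Returns det(B^T B) exactly; det(Lambda) is its square root.
    r, n = len(B), len(B[0])
    G = [[Fraction(sum(bi[k] * bj[k] for k in range(n))) for bj in B] for bi in B]
    det = Fraction(1)
    for i in range(r):
        # find a pivot row for column i
        piv = next((j for j in range(i, r) if G[j][i] != 0), None)
        if piv is None:
            return Fraction(0)  # rows were linearly dependent
        if piv != i:
            G[i], G[piv] = G[piv], G[i]
            det = -det
        det *= G[i][i]
        for j in range(i + 1, r):
            f = G[j][i] / G[i][i]
            for k in range(i, r):
                G[j][k] -= f * G[i][k]
    return det
```

For instance, the rank-2 lattice in R³ spanned by (1,0,0) and (0,2,0) has Gram determinant 4 and hence determinant 2.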
For a lattice Λ and a symmetric convex body Q ⊆ R^n, we define λ_1(Λ, Q) := min{∥x∥_Q | x ∈ Λ \ {0}} as the length of the shortest nonzero lattice vector with respect to the norm induced by Q. We denote the Euclidean ball by B_2^n. An ellipsoid is a set of the form E = A(B_2^n), where A : R^n → R^n is an invertible linear map. For any such ellipsoid E there is a unique positive definite matrix M ∈ R^{n×n} so that ∥x∥_E = √(xᵀMx). The barycenter (or centroid) of a convex body Q is the point (1/Vol_n(Q)) ∫_Q x dx. We will use the following version of (APPROX-IP) that runs in time 2^{O(n)}, provided that the symmetrizer for the used center c is large enough. This is the case for c being the center of gravity, see Theorem 5. Note that the center of gravity of a convex body can be (approximately) computed in randomized polynomial time [14,8].
Theorem 1 (Dadush [13]). There is a 2^{O(n)}-time algorithm APXIP(K, c, Λ) that takes as input a convex body K ⊆ R^n, a point c ∈ K and a lattice Λ, and either finds a point in (2 · (K − c) + c) ∩ Λ or asserts that K ∩ Λ = ∅, provided that the symmetrizer (K − c) ∩ (c − K) has volume at least 2^{−Θ(n)} · Vol_n(K).

One of the classical results in the geometry of numbers is Minkowski's Theorem, which we will use in the following form:

Theorem 2 (Minkowski's Theorem). For a full-rank lattice Λ ⊆ R^n and a symmetric convex body Q ⊆ R^n one has λ_1(Λ, Q) ≤ 2 · (det(Λ)/Vol_n(Q))^{1/n}.

We will use the following bound on the density of sublattices, which is an immediate consequence of Minkowski's Second Theorem. Here we abbreviate λ_1(Λ) := λ_1(Λ, B_2^n).

Lemma 3. For any rank-r sublattice Λ̃ ⊆ Λ one has det(Λ̃) ≥ (λ_1(Λ)/r)^r.
Finally, we revisit a few facts from convex geometry. Details and proofs can be found in the excellent textbook by Artstein-Avidan, Giannopoulos and Milman [5].
Lemma 4 (Grünbaum's Lemma). Let K ⊆ R^n be any convex body and let ⟨a, x⟩ = β be any hyperplane through the barycenter of K. Then

(1/e) · Vol_n(K) ≤ Vol_n(K ∩ {x ∈ R^n | ⟨a, x⟩ ≤ β}) ≤ (1 − 1/e) · Vol_n(K).

For a convex body K, there are two natural symmetric convex bodies that approximate K in many ways: the "inner symmetrizer" K ∩ (−K) (provided 0 ∈ K) and the "outer symmetrizer" in form of the difference body K − K. The following is a consequence of a more general inequality of Milman and Pajor.

Theorem 5. Let K ⊆ R^n be any convex body with barycenter 0. Then Vol_n(K ∩ (−K)) ≥ 2^{−n} · Vol_n(K).
In particular, Theorem 5 implies that choosing c as the barycenter of K in Theorem 1 results in a 2^{O(n)} running time; however, this will not be the choice that we will later make for c. Also the size of the difference body can be bounded:

Theorem 6 (Inequality of Rogers and Shephard). For any convex body K ⊆ R^n one has Vol_n(K − K) ≤ 4^n · Vol_n(K).
Recall that for a convex body Q with 0 ∈ int(Q), the polar is Q° := {y ∈ R^n | ⟨x, y⟩ ≤ 1 for all x ∈ Q}. We will use the following relation between the volume of a symmetric convex body and the volume of its polar; to be precise, we will use the lower bound (which is due to Bourgain and Milman).
Theorem 7 (Blaschke-Santaló-Bourgain-Milman). For any symmetric convex body Q ⊆ R^n one has

C^n · Vol_n(B_2^n)² ≤ Vol_n(Q) · Vol_n(Q°) ≤ Vol_n(B_2^n)²,

where C > 0 is a universal constant.
We will also rely on a result of Frank and Tardos to reduce the bit complexity of constraints:

Theorem 8 (Frank, Tardos [17]). There is a polynomial-time algorithm that takes (a, b) ∈ Q^{n+1} and ∆ ∈ N_+ as input and produces a pair (ã, b̃) ∈ Z^{n+1} with ∥ã∥_∞, |b̃| ≤ 2^{O(n³)} · ∆^{O(n²)} such that ⟨a, x⟩ ≤ b if and only if ⟨ã, x⟩ ≤ b̃, for all x ∈ {−∆, ..., ∆}^n.

The Cut-or-Average algorithm
First, we discuss our CUT-OR-AVERAGE algorithm that, on input of a convex set K, a lattice Λ and an integer ℓ ≥ 5(n + 1), either finds a point x ∈ (Λ/ℓ) ∩ K or decides that K ∩ Λ = ∅ in time 2^{O(n)}. Note that for any polyhedron K = {x ∈ R^n | Ax ≤ b} with rational A, b and lattice Λ with basis B, one can compute a value of ∆ such that log(∆) is polynomial in the encoding length of A, b and B and such that K ∩ Λ ≠ ∅ implies K ∩ ∆B_2^n ∩ Λ ≠ ∅; see [35] for details. In other words, w.l.o.g. we may assume that our convex set is bounded. The pseudocode of the algorithm can be found in Figure 1. An intuitive description of the algorithm is as follows: we compute the barycenter c of K and an ellipsoid E that approximates K up to a factor of R = n + 1. The goal is to find a point z close to the barycenter c so that z is a convex combination of lattice points that all lie in a 3-scaling of K. We begin by choosing z as any such lattice vector and then iteratively update z, using the oracle for approximate integer programming from Theorem 1, to move closer to c. If this succeeds, then we can directly use an asymmetric version of the Approximate Carathéodory Theorem (Lemma 18) to find an unweighted average of ℓ lattice points that lies in K; this gives a vector of the form x ∈ (Λ/ℓ) ∩ K. If the algorithm fails to approximately express c as a convex combination of lattice points, then we will have found a hyperplane H going almost through the barycenter c so that K ∩ H^≥ does not contain a lattice point. Then the algorithm continues searching in K ∩ H^≤. This case might happen repeatedly, but after a polynomial number of times, the volume of K will have dropped below a threshold so that we may recurse on a single (n − 1)-dimensional subproblem. We will now give the detailed analysis. Note that, in order to obtain a clean exposition, we did not aim to optimize any constants. However, by merely tweaking the parameters, one could make the choice ℓ = (1 + ε)n work for any constant ε > 0.

Bounding the number of iterations
We begin the analysis with a few estimates that will help us to bound the number of iterations.

Lemma 9. Any point x found in line (7) lies in a 3-scaling of K.

Next, we bound the distance of z to the barycenter:

Lemma 10. At the beginning of the k-th iteration of the WHILE loop in line (5), one has ∥c − z∥²_E ≤ 9R²/k.
Proof. We prove the statement by induction on k. For k = 1, the claim holds by the construction of z in line (4). For the inductive step, let z and z′ denote the values of z during iteration k − 1 before and after the execution of line (9), respectively, and let x be the vector found in line (7). By the induction hypothesis, we have ∥z − c∥²_E ≤ 9R²/(k − 1). Our goal is to show that ∥z′ − c∥²_E ≤ 9R²/k. In line (6), we define d as the normalized version of z − c with ∥d∥_E = 1, and hence d ∈ K − c. By construction, ⟨a, x − c⟩ ≥ 0, and Lemma 9 bounds the distance of x from c. The desired bound on the E-norm of z′ − c then follows from a direct calculation.

In particular, Lemma 10 implies an upper bound on the number of iterations of the inner WHILE loop:

Corollary 11. The WHILE loop in line (5) never takes more than 36R² iterations.

Proof. By Lemma 10, for k ≥ 36R² one has ∥c − z∥²_E ≤ 9R²/k ≤ 1/4, so the termination condition of the loop in line (5) is met.
Next, we prove that every time we replace K by K′ ⊂ K in line (8), its volume drops by a constant factor.

Lemma 12. In step (8), for ρ ≤ 1/(4n), one has Vol_n(K′) ≤ (1 − 1/(2e)) · Vol_n(K).

Proof. The claim is invariant under affine linear transformations, hence we may assume w.l.o.g. that E = B_2^n, M = I_n and c = 0. Note that then B_2^n ⊆ K ⊆ R · B_2^n. Let us abbreviate K_{≤t} := {x ∈ K | ⟨d, x⟩ ≤ t}; in this notation, K′ = K_{≤ρ/2}. Recall that Grünbaum's Lemma (Lemma 4) guarantees that 1/e ≤ Vol_n(K_{≤0})/Vol_n(K) ≤ 1 − 1/e. Moreover, it is well known that the function t ↦ Vol_n(K_{≤t})^{1/n} is concave on its support, see again [5]. Combining the concavity with Grünbaum's bound and rearranging gives the claim for ρ ≤ 1/(4n).

Lemma 13. Consider a call of CUT-OR-AVERAGE on (K, Λ) where K ⊆ r·B_2^n for some r > 0. Then the total number of iterations of the outer WHILE loop over all recursion levels is bounded by O(n² log(nr/λ_1(Λ))).
Proof. Consider any recursive run of the algorithm. The convex set will be of the form K̃ := K ∩ U and the lattice will be of the form Λ̃ := Λ ∩ U, where U is a subspace, and we denote ñ := dim(U). We think of K̃ and Λ̃ as ñ-dimensional objects. Let K̃_t ⊆ K̃ be the convex body after t iterations of the outer WHILE loop; by Lemma 12, its volume decreases by a constant factor in each iteration. Our goal is to show that for t large enough, there is a non-zero lattice vector y ∈ Λ̃* with ∥y∥_{(K̃_t − K̃_t)°} ≤ 1/2, which then causes the algorithm to recurse, see Figure 3. To prove the existence of such a vector y, we use Minkowski's Theorem (Theorem 2) followed by the Blaschke-Santaló-Bourgain-Milman Theorem (Theorem 7) to bound λ_1(Λ̃*, (K̃_t − K̃_t)°) from above in terms of det(Λ̃) and Vol_ñ(K̃_t). Here we use the convenient estimate Vol_ñ(B_2^ñ) ≥ ñ^{−ñ}. Moreover, we use that by Lemma 3 one has det(Λ̃) ≥ (λ_1(Λ)/ñ)^ñ. Then t = Θ(ñ log(ñr/λ_1(Λ))) iterations suffice until λ_1(Λ̃*, (K̃_t − K̃_t)°) ≤ 1/2 and the algorithm recurses. Hence the total number of iterations of the outer WHILE loop over all recursion levels can be bounded by O(n² log(nr/λ_1(Λ))).
The iteration bound of Lemma 13 can be improved by amortizing the volume reduction over the different recursion levels, following the approach of Jiang [20]. We refrain from doing so to keep our approach simple.

Running times of the subroutines
We have already seen that the number of iterations of the CUT-OR-AVERAGE algorithm is polynomially bounded. The goal of this subsection is to prove that all subroutines used can be implemented in singly exponential time or less. First we prove that steps (2)+(3) take polynomial time.

Lemma 14. For any convex body K ⊆ R^n one can compute, in randomized polynomial time, the barycenter c and a 0-centered ellipsoid E so that c + E ⊆ K ⊆ c + (n + 1) · E.

Proof. We say that a convex body Q ⊆ R^n is centered and isotropic if a uniform random sample X ∼ Q satisfies the following conditions: (i) E[X] = 0 and (ii) E[X Xᵀ] = I_n. For any convex body K one can compute an affine linear transformation T : R^n → R^n in polynomial time so that T(K) is centered and isotropic; this can be done, for example, by obtaining polynomially many samples of X, see [22,1]. A result by Kannan, Lovász and Simonovits (Lemma 5.1 in [22]) then says that any such centered and isotropic body Q satisfies B_2^n ⊆ Q ⊆ (n + 1) · B_2^n. Then c := T^{−1}(0) and E := T^{−1}(B_2^n) − c satisfy the claim.
In order for the call of APXIP in step (7) to be efficient, we need the symmetrizer of the set to have large enough volume, see Theorem 1. We will now prove that this is indeed the case. In particular, for any parameters 2^{−Θ(n)} ≤ ρ ≤ 0.99 and R ≤ 2^{O(n)}, we will have Vol_n((Q − c̃) ∩ (c̃ − Q)) ≥ 2^{−Θ(n)} · Vol_n(Q), which suffices for our purpose. Here c̃ := c + ρd denotes the shifted center.

Proof. Consider the symmetrizer K′′ := (K − c) ∩ (c − K). For the inclusion in (∗) we use that K′′ + c ⊆ K and c + d ∈ K.

Step (10) can be done in polynomial time; we defer the analysis to Section 4.
Step (12) corresponds to finding a shortest non-zero vector in a lattice with respect to the norm ∥·∥_{(K−K)°}, which can be done in time 2^{O(n)} using the sieving algorithm [2].
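As a toy illustration only: in tiny dimensions, a shortest nonzero lattice vector (here in the Euclidean norm rather than ∥·∥_{(K−K)°}) can be found by brute-force coefficient enumeration, assuming the shortest vector has coefficients inside the searched box; the actual step (12) requires the 2^{O(n)} sieving algorithm of [2]:

```python
from itertools import product

def shortest_vector_bruteforce(B, bound=3):
    # enumerate all nonzero coefficient vectors in [-bound, bound]^r and
    # return a lattice vector of minimum squared Euclidean length
    r, n = len(B), len(B[0])
    best, best_sq = None, None
    for coeffs in product(range(-bound, bound + 1), repeat=r):
        if all(c == 0 for c in coeffs):
            continue
        v = [sum(coeffs[i] * B[i][k] for i in range(r)) for k in range(n)]
        sq = sum(x * x for x in v)
        if best_sq is None or sq < best_sq:
            best, best_sq = v, sq
    return best, best_sq
```

The enumeration visits (2·bound + 1)^r coefficient vectors, which is why sieving, and not enumeration over a fixed box, is used in high dimension.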

Conclusion on the Cut-Or-Average algorithm
From the discussion above, we can summarize the performance of the algorithm in Figure 1 as follows:

Theorem 16. Given a full-rank matrix B ∈ Q^{n×n}, parameters r > 0 and ℓ ≥ 5(n + 1) with ℓ ∈ N, and a separation oracle for a closed convex set K ⊆ r·B_2^n, there is a randomized algorithm that with high probability finds a point x ∈ (Λ(B)/ℓ) ∩ K or decides that K ∩ Λ(B) = ∅. The running time is 2^{O(n)} times a polynomial in log(r) and the encoding length of B.
This can easily be turned into an algorithm to solve integer linear programming:

Theorem 17. Given a full-rank matrix B ∈ Q^{n×n}, a parameter r > 0 and a separation oracle for a closed convex set K ⊆ r·B_2^n, there is a randomized algorithm that with high probability finds a point x ∈ K ∩ Λ(B) or decides that there is none. The running time is 2^{O(n)} · n^n times a polynomial in log(r) and the encoding length of B.

An asymmetric Approximate Carathéodory Theorem
In this section we show the correctness of step (10) and prove that, given lattice points X ⊆ Λ that are contained in a 3-scaling of K and satisfy c ∈ conv(X), we can find a point in (Λ/ℓ) ∩ K. The Approximate Carathéodory Theorem states the following.

Given any point set X ⊆ B_2^n in the unit ball with 0 ∈ conv(X) and a parameter k ∈ N, there exist u_1, ..., u_k ∈ X (possibly with repetition) such that ∥(1/k) ∑_{i=1}^k u_i∥_2 ≤ O(1/√k).

The theorem is proved, for example, by Novikoff [31] in the context of the perceptron algorithm. An ℓ_p-version was provided by Barman [6] to find Nash equilibria. Deterministic and nearly-linear time methods to find the convex combination were recently described in [29]. In the following, we provide a generalization to asymmetric convex bodies; the dependence on k will be weaker but sufficient for the analysis of our CUT-OR-AVERAGE algorithm from Section 3.
Recall that with a symmetric convex body K one can associate the Minkowski norm ∥·∥_K with ∥x∥_K = inf{s ≥ 0 | x ∈ sK}. In the following, we will use the same definition also for an arbitrary convex set K with 0 ∈ K. Symmetry is then not given, but one still has ∥x + y∥_K ≤ ∥x∥_K + ∥y∥_K for all x, y ∈ R^n and ∥αx∥_K = α∥x∥_K for α ∈ R_{≥0}. Using this notation, we can prove the main result of this section.

Lemma 18. Given a point set X ⊆ K contained in a convex set K ⊆ R^n with 0 ∈ conv(X) and a parameter k ∈ N, there exist u_1, ..., u_k ∈ X (possibly with repetition) so that ∥(1/k) ∑_{i=1}^k u_i∥_K ≤ (n + 1)/k. Moreover, given X as input, the points u_1, ..., u_k can be found in time polynomial in |X|, k and n.
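For a concrete instance of this asymmetric gauge: if K = {y ∈ R^n | Ay ≤ b} with b > 0 (so that 0 ∈ int(K)), then requiring Ax ≤ sb for the smallest feasible s ≥ 0 gives the closed form ∥x∥_K = max(0, max_i ⟨a_i, x⟩/b_i). A small sketch:

```python
def gauge(A, b, x):
    # Minkowski functional ||x||_K = inf{s >= 0 : x in s*K} for
    # K = {y : A y <= b} with b > 0, i.e. 0 in the interior of K.
    ratios = [sum(ai[k] * x[k] for k in range(len(x))) / bi
              for ai, bi in zip(A, b)]
    return max(0.0, max(ratios))
```

For the asymmetric interval K = [−2, 1] ⊆ R (constraints y ≤ 1 and −y ≤ 2), both endpoints have gauge 1, while ∥−1∥_K = 1/2 and ∥1∥_K = 1, reflecting the asymmetry.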
Proof. Let ℓ := min{|X|, n + 1}. The claim is true whenever k ≤ ℓ, since then we may simply pick an arbitrary point of X. Hence, from now on, we assume k > ℓ.
By Carathéodory's theorem, there exists a convex combination of zero using ℓ elements of X; we write 0 = ∑_{i=1}^ℓ λ_i x_i with λ ≥ 0 and ∑_{i=1}^ℓ λ_i = 1. This implies that there exists an integer vector µ ∈ N^ℓ with µ ≥ (k − ℓ)λ and ∑_{i=1}^ℓ µ_i = k. It remains to show that taking each x_i with multiplicity µ_i works. In fact, writing δ_i := µ_i − (k − ℓ)λ_i ≥ 0, so that ∑_{i=1}^ℓ δ_i = ℓ, one has

∥(1/k) ∑_{i=1}^ℓ µ_i x_i∥_K = ∥(1/k) ∑_{i=1}^ℓ δ_i x_i∥_K ≤ (1/k) ∑_{i=1}^ℓ δ_i ∥x_i∥_K ≤ ℓ/k ≤ (n + 1)/k,

using ∑_{i=1}^ℓ λ_i x_i = 0 and ∥x_i∥_K ≤ 1. For the moreover part, note that the coefficients λ_1, ..., λ_ℓ form an extreme point of a linear program and can be found in polynomial time. Finally, the linear system µ ≥ ⌈(k − ℓ)λ⌉, ∑_{i=1}^ℓ µ_i = k has a totally unimodular constraint matrix and an integral right-hand side, hence any extreme point solution is integral as well, see e.g. [35].
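The rounding step from λ to µ in the proof can be sketched as follows (the LP that produces λ is omitted):

```python
from math import ceil

def round_weights(lam, k):
    # Given convex weights lam of length l and k >= l, return integers mu
    # with mu_i >= (k - l) * lam_i and sum(mu) == k.
    l = len(lam)
    mu = [ceil((k - l) * li) for li in lam]
    # each ceil adds strictly less than 1, so sum(mu) <= (k - l) + l = k
    mu[0] += k - sum(mu)
    return mu
```

Taking each x_i with multiplicity µ_i then yields the unweighted average (1/k)∑µ_i x_i analyzed in the proof.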

IPs with polynomial variable range
Now we come to our second method for reducing (IP) to (APPROX-IP), which applies to integer programming in equation standard form

Ax = b, 0 ≤ x ≤ u, x ∈ Z^n.  (2)

Here, A ∈ Z^{m×n}, b ∈ Z^m, and the u_i ∈ N_+ are positive integers that bound the variables from above. Our main goal is to prove the following theorem.

Theorem 20. Problem (2) can be solved in time 2^{O(n)} · ∏_{i=1}^n O(log(u_i + 1)).
We now describe the algorithm. It is again based on the approximate integer programming technique of Dadush [13]. We exploit it to solve integer programming exactly via the technique of reflection sets developed by Cook et al. [11]. For each i = 1, ..., n, we consider the two families of hyperplanes that slice the feasible region at the shifted lower and upper bounds, respectively:

x_i = 2^j and x_i = u_i − 2^j, for j = −1, 0, 1, ..., ⌈log₂(u_i)⌉.  (3)

Following [11], we consider two points w, v that lie in the region between two consecutive planes x_i = 2^{j−1} and x_i = 2^j for some j, see Figure 5. Suppose that w_i ≤ v_i holds. Let s be the point such that w = (1/2)(s + v). The line segment [s, v] is the line segment [w, v] scaled by a factor of 2 from v.
Let us consider what can be said about the i-th component of s. Since s_i = 2w_i − v_i with 2^{j−1} ≤ w_i, v_i ≤ 2^j, we have s_i ≥ 2 · 2^{j−1} − 2^j = 0; moreover, w_i ≤ v_i implies s_i ≤ w_i ≤ u_i.
Similarly, if w and v lie in the region between x_i = 0 and x_i = 1/2, then s_i ≥ −1/2. We conclude with the following observation.
Lemma 21. Consider the hyperplane arrangement defined by the equations (3) as well as by x_i = 0 and x_i = u_i for 1 ≤ i ≤ n. Let K ⊆ R^n be a cell of this hyperplane arrangement and v ∈ K. If K′ is the result of scaling K by a factor of 2 from v, i.e. K′ = 2(K − v) + v, then every integer point x ∈ K′ ∩ Z^n satisfies 0 ≤ x ≤ u.
We use this observation to prove Theorem 20.

Proof of Theorem 20. The task of (2) is to find an integer point in the affine subspace defined by the system Ax = b that satisfies the bound constraints 0 ≤ x_i ≤ u_i. We first partition the feasible region with the hyperplanes (3) as well as x_i = 0 and x_i = u_i for each i. We then apply the approximate integer programming algorithm with approximation factor 2 to each convex set P_K = {x ∈ R^n | Ax = b} ∩ K, where K ranges over all cells of the arrangement (see Figure 6). In 2^{O(n)} time, the algorithm either finds an integer point in the convex set C_K that results from P_K by scaling it with a factor of 2 from its center of gravity, or it asserts that P_K does not contain an integer point. Clearly, C_K ⊆ {x ∈ R^n | Ax = b}, and if the algorithm returns an integer point x*, then, by Lemma 21, this integer point also satisfies the bounds 0 ≤ x_i ≤ u_i. The running time of the algorithm is the number of cells times 2^{O(n)}, which is 2^{O(n)} · ∏_{i=1}^n O(log(u_i + 1)).
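The cell enumeration behind this counting argument can be sketched as follows, assuming per-coordinate breakpoints at 0, 1/2, the powers of two below u_i, and u_i itself (one plausible reading of the families (3); the exact breakpoints do not change the O(log(u_i + 1)) count per coordinate):

```python
from itertools import product

def breakpoints(u):
    # thresholds 0, 1/2, 1, 2, 4, ..., u: O(log(u + 1)) intervals result
    pts = {0.0, 0.5, float(u)}
    p = 1
    while p < u:
        pts.add(float(p))
        p *= 2
    return sorted(pts)

def cells(us):
    # a cell is a product of one interval per coordinate, so the number of
    # cells is the product over i of the number of 1-D intervals
    per_var = []
    for u in us:
        b = breakpoints(u)
        per_var.append([(b[j], b[j + 1]) for j in range(len(b) - 1)])
    return list(product(*per_var))
```

Each 1-D interval spans at most a factor-two range, which is exactly what the 2-scaling argument of Lemma 21 needs.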

IPs in inequality form
We can also use Theorem 20 to solve integer linear programs in inequality form. Here the efficiency depends strongly on the number of inequalities.

Theorem 22. Let A ∈ Z^{m×n}, b ∈ Z^m, c ∈ Z^n and u ∈ N^n_+. The integer program max{⟨c, x⟩ | Ax ≤ b, 0 ≤ x ≤ u, x ∈ Z^n} can be solved in time 2^{O(n+m)} · (log(2 + ∆))^{n+m}, where ∆ := max{u_i | i = 1, ..., n}.

Proof. Via binary search over the objective value, it suffices to solve the feasibility problem Ax ≤ b, ⟨c, x⟩ ≥ t, 0 ≤ x ≤ u, x ∈ Z^n.
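The binary-search reduction from optimization to feasibility can be sketched as follows, with `feasible` a hypothetical oracle for the feasibility problem at objective threshold t:

```python
def maximize(feasible, lo, hi):
    # largest integer t in [lo, hi] with feasible(t) True, assuming
    # monotonicity: feasible(t) implies feasible(t') for all t' <= t
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            best, lo = mid, mid + 1
        else:
            hi = mid - 1
    return best
```

Only O(log(hi − lo)) oracle calls are made, so the binary search contributes a polynomial factor to the overall running time.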

Subset sum and knapsack
The subset-sum problem (with multiplicities) is an integer program of the form (2) with a single linear constraint. Polak and Rohwedder [32] have shown that subset sum with multiplicities, that means ∑_{i=1}^n x_i z_i = t, 0 ≤ x_i ≤ u_i for all i ∈ [n], x ∈ Z^n, can be solved in time O(n + z_max^{5/3}) up to polylogarithmic factors, where z_max := max_{i=1,...,n} z_i. The algorithm of Frank and Tardos [17] (Theorem 8) finds an equivalent instance in which z_max is bounded by 2^{O(n³)} · u_max^{O(n²)}. Altogether, if each multiplicity is bounded by a polynomial p(n), then the state of the art for subset sum with multiplicities is straightforward enumeration, resulting in a running time of n^{O(n)}, which is the current best running time for integer programming. We can significantly improve the running time in this regime. This is a direct consequence of Theorem 22.
Corollary 23. The subset-sum problem with multiplicities of the form ∑_{i=1}^n x_i z_i = t, 0 ≤ x ≤ u, x ∈ Z^n can be solved in time 2^{O(n)} · (log(1 + ∥u∥_∞))^n. In particular, if each multiplicity is bounded by a polynomial p(n), then it can be solved in time (log n)^{O(n)}.

Knapsack with multiplicities is the integer programming problem

max{⟨c, x⟩ | ⟨a, x⟩ ≤ β, 0 ≤ x ≤ u, x ∈ Z^n},  (6)

where c, a, u ∈ Z^n_{≥0} are integer vectors. Again, via the preprocessing algorithm of Frank and Tardos [17] (Theorem 8), one can assume that ∥c∥_∞ as well as ∥a∥_∞ are bounded by 2^{O(n³)} · u_max^{O(n²)}. If each u_i is bounded by a polynomial in the dimension, then the state of the art for this problem is again straightforward enumeration, which leads to a running time of n^{O(n)}. In this regime too, we can significantly improve the running time, as an immediate consequence of Theorem 22.
Corollary 24. A knapsack problem (6) can be solved in time 2^{O(n)} · (log(1 + ∥u∥_∞))^n. In particular, if ∥u∥_∞ is bounded by a polynomial p(n) in the dimension, it can be solved in time (log n)^{O(n)}.

Figure 4: Visualization of the proof of Lemma 15, where c̃ = c + ρd.

Figure 6: Visualization of the proof of Theorem 20.