Effective Results on the Size and Structure of Sumsets

Let A ⊂ Z^d be a finite set. It is known that NA has a particular size (|NA| = P_A(N) for some P_A(X) ∈ Q[X]) and structure (all of the lattice points in a cone other than certain exceptional sets), once N is larger than some threshold. In this article we give the first effective upper bounds for this threshold for arbitrary A. Such explicit results were previously known only in the special cases when d = 1, when the convex hull of A is a simplex, or when |A| = d + 2 (Curran and Goldmakher, Discrete Anal. Paper No. 27, 2021), results which we improve.


Introduction
For any given finite subset A of an abelian group G, we consider the N-fold sumset NA := {a_1 + ... + a_N : a_1, ..., a_N ∈ A}. If G is finite and N is sufficiently large then NA = Na_0 + ⟨A − A⟩ for any a_0 ∈ A, where ⟨A − A⟩ is the subgroup of G generated by A − A, so that |NA| is eventually constant. In this article we study instead the case when G = Z^d is infinite, and ask similar questions about the size and structure of NA when N is large.
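Before turning to precise statements, the eventual regularity of NA is easy to observe computationally. The following sketch (with a toy set A of our own choosing, not one from this paper) computes NA by brute force; for this particular A one gets |NA| = (N + 1)^2 for every N ≥ 1:

```python
def iterated_sumset(A, N):
    """Compute NA = {a_1 + ... + a_N : a_i in A} for a finite A in Z^d, N >= 1."""
    S = set(A)
    for _ in range(N - 1):
        S = {tuple(x + y for x, y in zip(s, a)) for s in S for a in A}
    return S

# Toy example: the four corners of the unit square in Z^2, so NA is the
# full grid {0, ..., N}^2 and |NA| = (N + 1)^2 for all N >= 1.
A = {(0, 0), (1, 0), (0, 1), (1, 1)}
print([len(iterated_sumset(A, N)) for N in range(1, 6)])  # [4, 9, 16, 25, 36]
```

For less symmetric sets A the count |NA| still stabilises to a polynomial in N, but only after a threshold; bounding that threshold is the subject of this paper.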
The size of NA. Khovanskii's 1992 theorem [8] states that if A ⊂ Z^d is finite then there exists P_A(X) ∈ Q[X] of degree d such that |NA| = P_A(N) whenever N ≥ N_Kh(A). Although there are now several different proofs of Khovanskii's theorem [12, 7], the only effective bounds on N_Kh(A) have been obtained when d = 1 [11, 14, 5, 6], when the convex hull of A is a d-simplex, or when |A| = d + 2 (see [3]). We will determine an upper bound for N_Kh(A) for any such A in terms of the width of A,
w(A) = width(A) := max_{1≤i≤d} (max_{a∈A} a_i − min_{a∈A} a_i).
Theorem 1.1 (Effective Khovanskii). If A ⊂ Z^d is finite then |NA| = P_A(N) for all N ≥ (2|A| w(A))^{(d+4)|A|}.
The theorem states that N_Kh(A) ≤ (2ℓ w(A))^{(d+4)ℓ} where ℓ := |A|. We expect that N_Kh(A) is considerably smaller (see Section 2); for example, if |A| = d + 2 and A − A generates Z^d then [3, Theorem 1.2] gives a sharper bound, where the convex hull H(A) is defined as the set of convex combinations of the elements of A. We can replace w(A) in Theorem 1.1 by w*(A), which is defined to be the minimum of w(A′) over all A′ ⊂ Z^d that are Freiman isomorphic to A.¹ Previous proofs of Khovanskii's theorem [12, 7] relied on the following ineffective principle.
Lemma 1.2 (The Mann-Dickson Lemma). For any S ⊂ Z^d_{≥0} there exists a finite subset S_min ⊂ S such that for all s ∈ S there exists x ∈ S_min with s − x ∈ Z^d_{≥0}. For a proof see [5, Lemma 5]. Here we rework the method of Nathanson-Ruzsa from [12] as a collection of linear algebra problems which we solve quantitatively (see Section 6), and thereby bypass Lemma 1.2 and prove our effective threshold.
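The content of the Mann-Dickson lemma is easy to see experimentally: under the coordinatewise partial order, only finitely many elements of S are minimal. A small sketch (the set S below is our own illustration, not from the paper):

```python
def minimal_elements(S):
    """Coordinatewise-minimal elements of a finite S subset of Z^d_{>=0}."""
    def strictly_dominates(s, x):
        return s != x and all(si >= xi for si, xi in zip(s, x))
    return {s for s in S if not any(strictly_dominates(s, x) for x in S)}

# Truncation of the infinite set {(a, b) : a + b >= 3}; every element of S
# dominates one of the four minimal elements on the line a + b = 3.
S = {(a, b) for a in range(10) for b in range(10) if a + b >= 3}
print(sorted(minimal_elements(S)))  # [(0, 3), (1, 2), (2, 1), (3, 0)]
```

The lemma guarantees such a finite minimal set exists for any S ⊂ Z^d_{≥0}, but gives no bound on its size or on the size of its elements, which is exactly the ineffectivity this paper works around.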
The structure of NA. For a given finite set A ⊂ Z^d with 0 ∈ A, we let ex(H(A)) be the set of extremal points of H(A), that is, the "corners" of the boundary of H(A),² which is a subset of A, and we define the lattice Λ_A generated by A.
¹ That is, there is a map φ : A → A′ such that a_1 + ... + a_k = b_1 + ... + b_k if and only if φ(a_1) + ... + φ(a_k) = φ(b_1) + ... + φ(b_k), for all a_1, ..., a_k, b_1, ..., b_k ∈ A and k ≥ 1.
² That is, those points p ∈ H(A) for which there is a vector v ∈ span(A − A) \ {0} and a constant c such that ⟨v, p⟩ = c and ⟨v, x⟩ > c for all x ∈ H(A) \ {p}; see Appendix A.
Hence, as aN + Λ_{a−A} is independent of the choice of a ∈ A and Λ_{a−A} = Λ_{A−A}, for any fixed a_0 ∈ A we have
NA ⊂ (NH(A) ∩ (a_0 N + Λ_{A−A})) \ ⋃_{a ∈ ex(H(A))} (aN − E(a − A)). (1.4)
In [5] the first two authors showed that there exists a constant N_Str(A) such that we get equality in (1.4) provided N ≥ N_Str(A); that is,
NA = (NH(A) ∩ (a_0 N + Λ_{A−A})) \ ⋃_{a ∈ ex(H(A))} (aN − E(a − A)).
(1.5) (Compare this statement to (1.1).) The proof in [5] relied on the ineffective Lemma 1.2 and so did not produce a value for N_Str(A).
In this article we give an effective bound on N_Str(A): Theorem 1.3 (Effective structure). If A ⊂ Z^d is finite then (1.5) holds for all N ≥ (d|A| · width(A))^{13d^6}.
That is, Theorem 1.3 implies that N_Str(A) ≤ (dℓ w(A))^{13d^6} where |A| = ℓ. The 1-dimensional case is easier than higher dimensions, since if 0 = min A and Λ_A = Z then E(A) is finite, and so it has been the subject of much study [11, 14, 5, 6]: we have N_Str(A) = 1 if |A| = 3 by [5], and N_Str(A) ≤ w(A) + 2 − |A| by [6], with equality in a family of examples. There are also effective bounds known when H(A) is a d-simplex, as we will discuss in the next subsection.
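To make the d = 1 structure concrete, here is a brute-force check with the toy set A = {0, 3, 5} (our own example, not one from the paper): already for N = 10, the sumset NA is exactly the interval {0, ..., 5N} minus a fixed exceptional set {1, 2, 4, 7} at the bottom (the gaps of the numerical semigroup generated by 3 and 5) and a reflected exceptional set 5N − {1, 3} at the top (the gaps of the semigroup generated by 5 − 3 = 2 and 5):

```python
def n_fold_sumset(A, N):
    """N-fold sumset of a finite set of integers A (here 0 is in A)."""
    S = {0}
    for _ in range(N):
        S = {s + a for s in S for a in A}
    return S

A, N = {0, 3, 5}, 10
# Interval {0, ..., 5N} minus the two finite exceptional sets.
expected = set(range(5 * N + 1)) - {1, 2, 4, 7} - {5 * N - 1, 5 * N - 3}
print(n_fold_sumset(A, N) == expected)  # True
```

This is exactly the shape of the structure (1.5) in one dimension: a dilated interval intersected with a lattice, minus exceptional sets near each extremal point.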
Suppose that x belongs to the right-hand side of (1.5). To prove Theorem 1.3 when x is far away from the boundary of NH(A) we develop an effective version of Proposition 1 of Khovanskii's original paper [8] using quantitative linear algebra. Otherwise x is close to a separating hyperplane of NH(A): suppose the hyperplane is z_d = 0; write each a = (a_1, ..., a_d) and x = (x_1, ..., x_d), so that every a_d ≥ 0 and x_d is "small". Now x = ∑_{a∈A} m_a a where each m_a ∈ Z_{≥0}, as x ∈ P(A), and so ∑_{a∈A : a_d ≠ 0} m_a a_d ≤ x_d is small. The contribution from those a with a_d = 0 is a "smaller dimensional problem", living in the hyperplane z_d = 0. Carefully formulated, one can apply induction on the dimension to bound ∑_{a∈A} m_a, and hence show that x ∈ NA.
The structure (1.5) is evidently related to Khovanskii's theorem. However, we have not been able to find a precise way to relate Khovanskii's theorem and Theorem 1.3. Our proofs of Theorem 1.1 and Theorem 1.3 are almost entirely disjoint, and we get a different quality of bound in each theorem.
The size and structure of NA when H(A) is a simplex. The hypotheses of Theorems 1.4 and 1.5 imply that |A| ≥ d + 1. If d = 1 our bound gives N_Str(A) ≤ 2w(A) − 2|A| + 3, which is weaker than the bound N_Str(A) ≤ w(A) − |A| + 2 from [6]; this suggests that Theorem 1.5 is still some way from being "best possible".
Even though the main bounds in Theorems 1.4 and 1.5 are very similar, we have not been able to find a way to directly deduce one theorem from the other. Instead, we present separate arguments for each theorem (in Sections 4 and 5 respectively), albeit based on the same fundamental lemmas in Section 3.
Curran and Goldmakher [3] gave similar (but slightly weaker) bounds in the simplex case, in [3, Theorem 1.4]. The proofs of Theorems 1.4 and 1.5 look very different from the work in [3]. Our method manipulates A directly using additive-combinatorial language; Curran and Goldmakher, being inspired by Ehrhart theory, used generating functions such as S(t) := ∑_{N≥0} |NA| t^N, and "raised the dimension" by examining further properties of subsets of Z^{d+1} generated by {(a, 1) : a ∈ A}.
However, the two approaches are in fact closely related. The central notion of our method for the simplex case is that of a "B-minimal element", see Definition 3.3 below; this is equivalent to the notion of "minimal elements" defined in [3], at the end of page 7 and in the remark following the statement of Proposition 4.1 of that paper. There are also analogies between some of our preparatory lemmas and partial results in [3], which will be discussed in Sections 3, 4, and 5 below when they occur.
Our improvement over [3] comes from refining an additive combinatorial lemma concerning the B-minimal elements, related to the Davenport constant of the group Z^d/Λ_{B−B}. The key results are Lemmas 3.5 and 3.7 below. In fact, it would have been possible to derive Theorems 1.4 and 1.5 directly by inputting the conclusions of Lemmas 3.5 and 3.7 into the relevant parts of the argument of [3], following a translation into the generating function language of [3] (the details are discussed after Lemma 3.7 below). However, we think there is extra value in showing how the analysis from [3] can be phrased, efficiently, in a classical additive-combinatorial language.
Having discussed the similarities to [3], it should be stressed that the main work of this paper, namely all parts of the proof of Theorem 1.1 and the technical heart of the proof of Theorem 1.3, is not related to any part of [3]. These novel elements comprise the majority of the present work.
The structure of the paper is as follows. In the next section we briefly discuss the 1-dimensional case, and in the three subsequent sections, the simplex case. In Section 6, we prove the effective Khovanskii theorem (Theorem 1.1). In Section 7 we then prove the effective structure result (Theorem 1.3); this part may be read essentially independently of the previous section, although there is one piece of quantitative linear algebra in common. An appendix collects together some facts from the theory of convex polytopes (which are useful in Section 7).
Acknowledgements: We would like to thank the anonymous referees for their detailed analysis of the manuscript, and for making several suggestions which refined the final bounds.

It might well be that for finite
(2.1) We refrain from calling this speculation a conjecture, since we have not even proved it for d = 1. However, a slight specialisation of the relation (2.1) is true when d = 1, and we know of no counterexample for larger d, so it is certainly worth investigating; we make a few remarks in this section.
After translating, suppose that 0 ∈ ex(H(A)). Khovanskii's theorem [8] and Theorem 1.3 then give information on the Khovanskii polynomial. We also obtain the bounds N_Kh(A), N_Str(A) < (d + 1)! vol(H(A)) in Theorems 1.4 and 1.5, bigger than in (2.1) by a factor of d + 1 (and one can see where this comes from in the proof).
Proof. We may translate A so that it has minimal element 0 and largest element b = w(A). (If |A| = 2 then A = {0, 1} and N_Str(A) = N_Kh(A) = 1.) The main theorem of [6] then gives the claimed bound.
Although we do not yet know whether N_Str(A) ≤ N_Kh(A) in general when d = 1, the methods of Curran-Goldmakher do show something along these lines.⁴ For each g ∈ {0, 1, ..., b − 1}, let N_Kh,g(A) denote the optimal threshold for which |NA ∩ {n : n ≡ g mod b}| = P_g(N) for all N ≥ N_Kh,g(A), where P_g is some fixed polynomial; let N_Str,g(A) denote the optimal threshold for which the structural description of NA ∩ {n : n ≡ g mod b} holds for all N ≥ N_Str,g(A). Then N_Str,g(A) ≤ N_Kh,g(A).
(2.2) This is obtained by considering the proofs in [3, Section 3], which show that N_Kh,g(A) = deg P − d when H(A) is a simplex, where P is some auxiliary polynomial; in [3, Section 4] Curran-Goldmakher then show that N_Str,g(A) ≤ deg P − 1 for the same auxiliary polynomial P. Unfortunately, although N_Str(A) = max_g N_Str,g(A), one could potentially get N_Kh(A) < max_g N_Kh,g(A), so the inequality (2.2) does not immediately give (2.1) when d = 1.
Curran-Goldmakher also give the precise value of N_Kh(A) in (1.3) in certain special cases, including the useful example A := {(0, ..., 0), (1, ..., 1), m_1 e_1, ..., m_d e_d} ⊂ Z^d where the m_j are pairwise coprime positive integers and the e_1, ..., e_d are the standard basis vectors. If all the m_j are close to x, so that w(A) ≈ x for some large x, then N_Kh(A) ∼ w(A)^d as x → ∞, which suggests we might be able to reduce the bound in Theorem 1.1 to w(A)^d. However d! vol(H(A)) would be a preferable bound to w(A)^d, since it is smaller and more precise in the example where we let m_2 = ... = m_d = 1 and m_1 = x be arbitrarily large, so that N_Kh(A) ∼ w(A) as x → ∞.

Preparatory lemmas for the simplex case
Throughout this section, 0 ∈ A ⊂ Z^d and A is finite. Let N_A(0) = 0 and for each v ∈ P(A) \ {0} let N_A(v) denote the minimal positive integer N such that v ∈ NA.
Definition 3.1 (B-minimal elements). Suppose that B ∪ {0} ⊂ A ⊂ Z^d, with A finite. Let S(A, B) denote the set of B-minimal elements⁵, which comprises 0 and those elements u ∈ P(A) \ {0} such that a_i ∉ B ∪ {0} for every i whenever u = a_1 + ... + a_N with each a_i ∈ A and N = N_A(u).
B-minimal elements can be used to decompose NA and P(A) into simpler parts. The following is the analogous statement to [3, Proposition 4.1], although that proposition is only stated in the case when H(A) is a d-simplex.
Proof. The second assertion implies the first by taking a union over all N. Now, for any v ∈ NA we can write v = u + w with u = a_1 + ... + a_L and w = b_1 + ... + b_M, where L, M ≥ 0, each a_i ∈ A \ B and b_i ∈ B, with M maximal and L + M = N_A(v). Then N_A(u) = L and N_A(w) = M, else we could replace the above expression for u or w by a shorter sum of elements of A, and therefore obtain a shorter sum of elements to give v, contradicting that L + M = N_A(v) is minimal. Moreover u ∈ S(A, B), else we could replace the sum a_1 + ... + a_L in the expression for v by a different sum of length L which includes some elements of B, contradicting the maximality of M.
Therefore u ∈ S(A, B). It will be useful to control the complexity of the B-minimal elements.
Proof. Suppose we are given a longest sum (3.1), and otherwise we choose a = 0, so that each r_i ≡ a_i (mod 1). As each r_i ≥ 0 we write m_i = r_i − a_i so that each m_i ∈ Z_{≥0} and r − a = ∑_{i=1}^d m_i b_i ∈ P(B ∪ {0}).
We use this lemma to bound K(A, B). If an element of S(A, B) is non-zero then any subsum with two or more elements cannot belong to A_B := A mod Λ_B, and no subsum can be congruent to 0 mod Λ_B. Therefore:
Proof. Let r be a subsum of (3.1). Then ℓ = N_A(r) and r ∈ S(A, B) as u ∈ S(A, B). We write r as in (3.1) so that ∑_{i≤d} r_i ≤ ℓ = N_A(r). Suppose that r ≡ a (mod Λ_B) for some a ∈ A (where we choose a = 0 if r ∈ Λ_B), so that m := r − a ∈ P(B ∪ {0}) by Lemma 3.6. Therefore N_A(m) ≥ ℓ − N_A(a) ≥ ℓ − 1 > 0 (so m ≠ 0). We deduce that r can be represented as a plus the sum of ℓ − N_A(a) elements of B, contradicting that r ∈ S(A, B).
The combination of Lemmas 3.5, 3.6 and 3.7 yields an upper bound on N_A(u) when u ∈ S(A, B), which is analogous to the bound from the statement of [3, Lemma 3.1] (albeit slightly stronger, due to the stronger bound on k(G, H) in this paper).
If the convex hull of A is not a simplex then S(A, B) need not be finite. This is one reason why S(A, B) is not used later in Section 7, when dealing with general sets A.

3.2.
Translations. We finish by observing that under rather general hypotheses the sets S(A, B), and consequently the quantities K(A, B), are well-behaved under translations. This observation was also made in [3, Lemma 4.2].
We deduce that there is a 1-to-1 correspondence between the representations of u as the sum of N elements of A and of v as the sum of N elements of b − A, and the result follows.

Structure bounds in the simplex case
First we deal with the special cases. We can write a = ∑_{i=0}^d a_i b_i uniquely with each a_i ≥ 0 and ∑_{i=0}^d a_i = 1. We know that the finite group Λ_A/Λ_B is generated by a. If a has order M in the group Λ_A/Λ_B then the classes of Λ_A/Λ_B can be represented by 0, a, ..., (M − 1)a. Suppose that v ≡ ma mod Λ_B for some 0 ≤ m ≤ M − 1. This implies that v_i − ma_i ∈ Z for i = 0, 1, ..., d. We now give an analogous argument for representations of b_j N − v for each j = 1, ..., d. Combining these observations, we deduce that v_i − ma_i ∈ Z_{≥0} for all i.
We now prove the rest of Theorem 1.5. We'll use our bound on K(A, B) from Lemma 3.7, combined with the following theorem. This result can be abstracted from the proof of [3, Lemma 3.2] and the part of the proof of [3, Theorem 1.3] following expression (11).
Proof. The proof follows similar lines to [5]. We will show that v ∈ N_j A for each j, where N_j = K(A, B) + ∑_{i≠j} ⌊v_i⌋; taking j = d (all other cases are analogous), we observe that v ∈ NA if ∑_{i≠j} ⌊v_i⌋ ≤ N − K for some j, where K = K(A, B). If not then ∑_{i≠j} v_i ≥ ∑_{i≠j} ⌊v_i⌋ ≥ N − K + 1 for each j, and since the above inequalities fail to yield a contradiction, the last two chains of inequalities must be equalities. Therefore each v_i ∈ Z, and so u = 0 (since 0 is the only such element).
Proof of Theorem 1.5 for |A| ≥ d + 3. Now A \ B is non-empty. Replacing A with A − b (for some b ∈ ex(H(A))) we may assume, without loss of generality, that 0 ∈ ex(H(A)) = B. Applying Lemma 3.7 and Lemma 3.5, and then Theorem 4.1, we conclude that (1.5) holds for all N in the range (1.7), as required.

The Khovanskii polynomial in the simplex case
In this section we prove Theorem 1.4, and make various remarks about the form of the Khovanskii polynomial itself. By analogy with the previous section, the main technical result is as follows. Write each g in coordinates with each g_i ∈ [0, 1). We may partition NA as the (disjoint) union over g ∈ G of (NA)_g := {v ∈ NA : v ∈ g + Λ_B}, and thus we wish to count the number of elements in each (NA)_g. The resulting covering of (NA)_g is not necessarily disjoint, but we may nonetheless develop a formula for its size by using inclusion-exclusion.
It is helpful to distinguish the case when g = 0. In this instance S(A, B)_g = {0}, and since N_A(0) = 0 we conclude that the count holds for all N ≥ 1, and it is a polynomial in N. Now we consider the case g ≠ 0. Let S(A, B)_g = {u_1, ..., u_k}, as S(A, B) is finite by Lemma 3.7, and write each u_j in coordinates, where each u_{j,i} ∈ Z_{≥0}. Expressing each a_ℓ in terms of the basis {b_1, ..., b_d}, and using the fact that g ≠ 0, we deduce the corresponding congruences. Since the u_{j,i} are integers, we obtain the displayed description of (NA)_g (and the set on the right-hand side of that expression is empty when N < N_A(u_j)). Therefore for all N and for all non-empty subsets J ⊂ {1, ..., k} we have the corresponding intersection formula, where we understand the m_i to always be integers, and we let u_{J,i} := max_{j∈J} u_{j,i} and ∆_J := max_{j∈J} ∆_j. To count the number of points in the intersection (5.1) we write each m_i = u_{J,i} + ℓ_i; the count is then binom(N − N_J + d, d), where we define binom(N − N_J + d, d) := 0 if N < N_J. Hence, by inclusion-exclusion we obtain a formula for #(NA)_g. In fact this formula extends to cover the case g = 0, taking k = 1 and N_{{1}} := 0. Therefore we have a general formula. We wish to replace the binomial coefficients in this formula by polynomials in N; the binomial coefficient and the polynomial are only equal if N ≥ N_J − d. Therefore we are guaranteed that #(NA)_g = P_g(N) beyond this threshold.
We remark that the −2d term (in the (d + 1)!K(A, B) − 2d bound from Theorem 5.1) was saved by two separate actions. First, −d was saved through considering g = 0 and g ≠ 0 separately; there is an equivalent manoeuvre on [3, Page 9] when it is assumed that 'g_i is not congruent to 0'. Then, −d was saved by noting that the binomial coefficient agrees with the polynomial for all N ≥ N_J − d, not just for all N ≥ N_J. This is analogous to the −d that is saved by the application of the division algorithm in [3, Proof of Theorem 1.4] at the bottom of page 10 of that paper.
Proof. Letting m ≥ 0, if m ≤ h − 1 then every term on the right-hand side is 0, and so #(NA)_g is as claimed.
5.2. Determining W(0).
We do not see how to easily determine h in general, though it is sometimes possible to identify whether W(0) = 0.
(ii) If J* = {1, ..., k} and, for each i, there exists j_i ∈ J_i such that j_i ∉ J_ℓ for any ℓ ≠ i, then W(0) = (−1)^{d+1} ≠ 0. (For example, when the sets J_i are disjoint.)
Proof. We have N_J = N_{{1,...,k}} if and only if J ∩ J_i ≠ ∅ for all 0 ≤ i ≤ d. Therefore, by inclusion-exclusion we obtain the stated formula. (i) If J* is a proper subset of {1, ..., k} then there are no terms in the sum. (ii) If ⋃_{ℓ∈I} J_ℓ = {1, ..., k} then each j_i ∈ ⋃_{ℓ∈I} J_ℓ, so we conclude that i ∈ I. Therefore I = {0, ..., d} and the result follows.
5.3. Explicitly enumerating the coefficients of P_g(T). It turns out that the quantities ∆_j and u_{j,i} also feature in the Khovanskii polynomial itself. Indeed, expanding the polynomial P_g(T) we can write down the leading two terms of P_g(T), since N_J is a sum of various maximums and we have the identity (5.4) for any sequence {a_j}. The proof of (5.4) is an exercise in inclusion-exclusion.
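The inclusion-exclusion counting behind the formula for #(NA)_g can be sanity-checked numerically. The sketch below (with illustrative points u_j of our own choosing, not taken from the paper) counts the lattice points m ∈ Z^2_{≥0} with ∑ m_i ≤ N that dominate at least one u_j coordinatewise, both directly and via the alternating sum of binomial coefficients binom(N − N_J + d, d) with u_{J,i} = max_{j∈J} u_{j,i}:

```python
from itertools import combinations
from math import comb

d, N = 2, 12
us = [(3, 0), (1, 2), (0, 4)]  # the points u_j (arbitrary illustration)

# Direct count of {m in Z^d_{>=0} : sum(m) <= N, m >= u_j coordinatewise for some j}.
direct = sum(
    1
    for m0 in range(N + 1)
    for m1 in range(N + 1 - m0)
    if any(m0 >= u[0] and m1 >= u[1] for u in us)
)

# Inclusion-exclusion: each intersection over J is a shifted simplex
# sum(l) <= N - N_J, containing binom(N - N_J + d, d) lattice points.
incl_excl = 0
for size in range(1, len(us) + 1):
    for J in combinations(us, size):
        NJ = sum(max(u[i] for u in J) for i in range(d))
        term = comb(N - NJ + d, d) if N >= NJ else 0
        incl_excl += (-1) ** (size + 1) * term

print(direct, incl_excl)  # the two counts agree
```

The substitution m_i = u_{J,i} + ℓ_i is what reduces each intersection to a standard simplex count, exactly as in the text above.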

Delicate linear algebra and an effective Khovanskii's theorem
The proof of Theorem 1.1 rests on various principles of quantitative linear algebra. The first is an application of the pigeonhole principle.
Lemma 6.1. Let M be a non-zero m-by-n matrix with integer coefficients and n > m. Let K be the maximum of the absolute values of the entries of M. Then there is a solution to MX = 0 with X ∈ Z^n \ {0} and ‖X‖_∞ ≤ (Kn)^m.
To prove Corollary 7.9 in the next section, we will need the more sophisticated Siegel's lemma due to Bombieri-Vaaler [1], which gives a basis for ker M rather than just a single vector X; for the results in this section, the elementary result in Lemma 6.1 suffices.
Proof. Suppose first that Kn is odd. If there were two distinct vectors X_1, X_2 ∈ Z^n for which MX_1 = MX_2 and ‖X_i‖_∞ ≤ ½(Kn)^m, then by choosing X = X_1 − X_2 we would be done. Now, the number of vectors X ∈ Z^n for which ‖X‖_∞ ≤ ½(Kn)^m is equal to (2(½((Kn)^m − 1)) + 1)^n, which is (Kn)^{mn}. For all such X we have ‖MX‖_∞ ≤ ½(Kn)^{m+1} and MX ∈ Z^m. We may further assume that MX ≠ 0, since otherwise we would be immediately done. There are exactly (2(½((Kn)^{m+1} − 1)) + 1)^m − 1, i.e. exactly (Kn)^{m(m+1)} − 1, possible non-zero values of MX. Since n ≥ m + 1, by the pigeonhole principle we may find distinct X_1, X_2 with MX_1 = MX_2, as required.
If Kn is even, then the number of vectors X ∈ Z^n for which ‖X‖_∞ ≤ ½(Kn)^m is exactly ((Kn)^m + 1)^n, and there are at most ((Kn)^{m+1} + 1)^m − 1 possible non-zero values of MX, so we can conclude using the pigeonhole principle as before.
Next, we will consider solutions to the equation My = b in which all the coordinates of y are positive integers.
Lemma 6.2. Let M = (µ_{ij})_{i≤m, j≤n} be an m-by-n matrix with integer coefficients, with m ≤ n and rank M = m, and let b ∈ Z^m. Suppose that max_{i,j} |µ_{ij}| ≤ K_1 and ‖b‖_∞ ≤ K_2 (where we choose K_1, K_2 ≥ 1), and suppose that there is some x ∈ Z^n_{>0} for which Mx = b. Then we may find y ∈ Z^n_{>0} for which My = b and which satisfies an explicit bound in terms of m, n, K_1 and K_2.
Proof. We prove this by induction on n. The base case is n = m. In this case we observe that M is invertible, and so x = y = M^{−1}b. Using the formula M^{−1} = det(M)^{−1} adj(M), and since |det M| ≥ 1 as M has integer entries, we conclude the required bound on y. This gives the base case. We proceed to the induction step, assuming that n ≥ m + 1. By Lemma 6.1, there is a vector X ∈ Z^n \ {0} such that MX = 0 and ‖X‖_∞ ≤ (Kn)^m. Replacing X by −X if necessary, we may assume that X has at least one positive coordinate with respect to the standard basis. Let S ⊂ {1, ..., n} be the set of indices where the coordinate of X is positive.
Take x from the hypotheses of the lemma, and write x = (x_1, ..., x_n)^T ∈ Z^n_{>0}. By replacing x with x − λX for some λ ∈ Z_{>0} as appropriate, we may assume that there is some i ∈ S for which x_i ≥ 1 is explicitly bounded. Fix such an i and x_i, and now consider the m-by-(n − 1) matrix M^{(i)}, which is M with the i-th column removed. Similarly define x^{(i)} ∈ Z^{n−1}_{>0} to be the vector x with the i-th coordinate removed. Then M^{(i)} x^{(i)} = b − x_i M e_i, where e_i is the i-th standard basis vector in Z^n. Now rank M^{(i)} is either m or m − 1. If rank M^{(i)} = m then, by the induction hypothesis (with x replaced by x^{(i)}), there is some suitable y^{(i)}. Let y := y^{(i)} + x_i e_i, where we have abused notation by treating y^{(i)} also as an element of Z^n_{≥0} by extending by 0 in the i-th coordinate. Then we have y ∈ Z^n_{>0} and My = b, and the required bound follows, thus completing the induction as above.
Corollary 6.3. Let M = (µ_{ij})_{i≤m, j≤n} be an m-by-n matrix with integer coefficients, and let b ∈ Z^m. Suppose that max_{i,j} |µ_{ij}| ≤ K_1 and ‖b‖_∞ ≤ K_2 (where we choose K_1, K_2 ≥ 1), and suppose that there is some x ∈ Z^n_{>0} for which Mx = b. Then we may find y ∈ Z^n_{>0} for which My = b and which satisfies the bound of Lemma 6.2.
Proof. We restrict M to a maximal linearly independent subset of its rows and so obtain an m′-by-n matrix M′ with rank M′ = m′ ≤ n. The result follows by applying Lemma 6.2 to M′.
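The bound of Lemma 6.1 can be checked by exhaustive search in tiny cases. The sketch below (our own illustration, not part of the paper's argument) looks for a non-zero integer vector X with MX = 0 and ‖X‖_∞ ≤ (Kn)^m:

```python
from itertools import product

def small_kernel_vector(M):
    """Search for a non-zero integer X with M X = 0 and ||X||_inf <= (K n)^m."""
    m, n = len(M), len(M[0])
    K = max(abs(e) for row in M for e in row)
    bound = (K * n) ** m
    for X in product(range(-bound, bound + 1), repeat=n):
        if any(X) and all(sum(r * x for r, x in zip(row, X)) == 0 for row in M):
            return X
    return None

M = [[1, 2, -1], [0, 1, 1]]  # m = 2, n = 3, K = 2, so the bound is (Kn)^m = 36
X = small_kernel_vector(M)
print(X)  # a non-zero kernel vector with sup-norm at most 36
```

Of course the point of the lemma is that such a vector always exists within the stated box whenever n > m, which the pigeonhole argument in the proof guarantees without any search.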
We introduce a partial ordering ≤_unif on Z^d by saying that x ≤_unif y if x_i ≤ y_i for all i ≤ d (that is, y − x ∈ Z^d_{≥0}, as in the Mann-Dickson lemma). The next lemma controls the set of minimal solutions (with respect to the partial ordering ≤_unif) to a certain kind of linear equation.
Lemma 6.4. Let n = n_1 + n_2 ≥ 2 with n_1, n_2 ∈ Z_{>0}. Let M = (µ_{ij})_{i≤m, j≤n} be an m-by-n matrix with integer coefficients, and b ∈ Z^m. Suppose that max_{i,j} |µ_{ij}| ≤ K_1 and ‖b‖_∞ ≤ K_2 (where we choose K_1, K_2 ≥ 1). Let S = S(M, b, n_1, n_2) denote the set of solutions (x, y) ∈ Z^{n_1}_{>0} × Z^{n_2}_{>0} to M(x y) = b, and let S_min = S_min(M, b, n_1, n_2) be defined as the set of those elements whose first n_1 coordinates are minimal with respect to ≤_unif.
Proof. We use induction on n_1. If S is empty then Lemma 6.4 is vacuously true. Otherwise S is non-empty and so is S_min.
If n_1 = 1 then S ⊂ Z_{>0} in the first coordinate, so |S_min| = 1 by the well-ordering principle. Writing S_min = {x_min}, we note that by Corollary 6.3 there exists (x y) ∈ S with explicitly bounded entries.
If n_1 ≥ 2, let x_min ∈ S_min and choose y ∈ Z^{n_2}_{>0} with (x_min y) ∈ S. By Corollary 6.3 we may choose (x* y*) ∈ S with explicitly bounded entries. Thus there is some i ≤ n_1 for which (x*)_i ≥ (x_min)_i, as otherwise x* <_unif x_min, in contradiction to the fact that x_min ∈ S_min. Fixing such a coordinate i, as in the proof of Lemma 6.2 we let x_min^{(i)} denote the vector x_min with the i-th coordinate removed, and let M^{(i)} be the matrix M but with the i-th column removed (from the initial set of n_1 columns), where e_i is the i-th basis vector in R^{n_1}. The vector (x_min^{(i)}, y) belongs to S(M^{(i)}, b − (x_min)_i M e_i, n_1 − 1, n_2). Indeed, were there another vector (w, z) ∈ S(M^{(i)}, b − (x_min)_i M e_i, n_1 − 1, n_2) with w <_unif x_min^{(i)}, then w + (x_min)_i e_i <_unif x_min and (w + (x_min)_i e_i, z) ∈ S(M, b, n_1, n_2), contradicting the minimality of x_min. (We have abused notation here by treating w as both an element of Z^{n−1}_{>0} and, by extending by 0, an element of Z^n_{≥0}.) So by the induction hypothesis we obtain the required bound, and the induction is completed.
We are now ready to prove an effective version of Khovanskii's theorem. Our method is a quantitative adaptation of Nathanson-Ruzsa's argument from [12].
Proof of Theorem 1.1. Without loss of generality, we may first translate A (which preserves the width w(A)) so that 0 ∈ A. Therefore we can assume that max_{a∈A} ‖a‖_∞ ≤ w(A). We can also assume that A − A contains d linearly independent vectors: if not, we can project the question down to a smaller dimension (by removing some coordinate but keeping all the linear dependencies) and the result follows by induction on d. So |A| =: ℓ ≥ d + 1.
Let us now recall the lexicographic ordering on Z^d. If x = (x_1, ..., x_d)^T ∈ Z^d and y = (y_1, ..., y_d)^T ∈ Z^d we say that x <_lex y if there exists some i ≤ d for which x_i < y_i and x_j = y_j for all j < i. This is a total ordering on Z^d.
Following Nathanson-Ruzsa, we say that an element x ∈ Z^ℓ_{≥0} is useless if there exists y ∈ Z^ℓ_{≥0} with y <_lex x, ‖y‖_1 = ‖x‖_1 and ∑_{j≤ℓ} x_j a_j = ∑_{j≤ℓ} y_j a_j. We say that an element x ∈ Z^ℓ_{≥0} is minimally useless if there does not exist a useless x′ ∈ Z^ℓ_{≥0} for which x′ <_unif x. Let U denote the set of useless elements and U_min be the set of minimally useless elements. For x ∈ U_min, let I_1 = {i ≤ ℓ : x_i ≥ 1} and I_2 = {j ≤ ℓ : y_j ≥ 1} (with y as above). Now I_1 ∩ I_2 = ∅, else if i ∈ I_1 ∩ I_2 then x − e_i is also useless (via y − e_i), contradicting minimality. We may assume that both I_1 and I_2 are non-empty, since otherwise we would have x = y = 0. Evidently min I_1 < min I_2, as y <_lex x.
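These definitions are easy to test by brute force in a toy one-dimensional case (A = {0, 2, 3} is our own illustrative choice, with x ∈ Z^3_{≥0} recording the multiplicities of the three elements):

```python
from itertools import product

a = (0, 2, 3)  # the elements of A, in a fixed order

def is_useless(x):
    """x is useless if some y <_lex x has the same l1-norm and the same weighted sum."""
    n, s = sum(x), sum(xi * ai for xi, ai in zip(x, a))
    for y in product(range(n + 1), repeat=len(a)):
        if y < x and sum(y) == n and sum(yi * ai for yi, ai in zip(y, a)) == s:
            return True  # Python tuples compare lexicographically
    return False

# (1, 0, 2) is useless via y = (0, 3, 0): both use 3 elements of A summing to 6.
print(is_useless((1, 0, 2)), is_useless((0, 3, 0)))  # True False
```

Note the supports of x = (1, 0, 2) and its witness y = (0, 3, 0) are disjoint, with min I_1 < min I_2, exactly as in the argument above.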
By the Mann-Dickson lemma we know that U_min is finite, but now we will be able to get an explicit bound on max(‖u‖_∞ : u ∈ U_min). Fix a pair of disjoint non-empty subsets I_1, I_2 ⊂ {1, ..., ℓ} with min I_1 < min I_2, and let n_1 := |I_1|, n_2 := |I_2| and n := n_1 + n_2. We define a (d + 1)-by-n matrix M where the columns are indexed by the elements of I_1 ∪ I_2, and the row numbers run from 0 to d. If j ∈ I_1 then M_{0,j} = 1 and M_{i,j} = (a_j)_i for 1 ≤ i ≤ d; if j ∈ I_2 then M_{0,j} = −1 and M_{i,j} = −(a_j)_i for 1 ≤ i ≤ d. Then the top row of the equation M(x y) = 0 with x ∈ Z^{n_1}_{>0} and y ∈ Z^{n_2}_{>0} gives that ‖y‖_1 = ‖x‖_1, and the i-th row yields that ∑_{j≤ℓ} x_j (a_j)_i = ∑_{j≤ℓ} y_j (a_j)_i for 1 ≤ i ≤ d, so together they yield that ∑_{j≤ℓ} x_j a_j = ∑_{j≤ℓ} y_j a_j.
By the minimality of x there cannot exist (x*, y*) ∈ Z^{n_1}_{>0} × Z^{n_2}_{>0} such that M(x* y*) = 0 and x* <_unif x. Indeed, by construction of I_1 and I_2 we would have (after extending by zeros) that y* <_lex x*, thus implying that x* is useless, contradicting the fact that x is minimally useless.
Using Lemma 6.4, as applied to the matrix M with K_1 = max_{a∈A} ‖a‖_∞ =: K and K_2 = 1, we conclude an explicit bound on the elements of U_min. In [12, Lemma 1], Nathanson and Ruzsa proved a decomposition in which each u_j = (u_{1,j}, u_{2,j}, ..., u_{s,j}) ∈ Z^ℓ_{≥0}. Letting u*_i = max_{j≤m} u_{i,j}, by inclusion-exclusion we obtain the stated bound. To obtain the last displayed inequality we assumed that d ≥ 2 (as we use Lemma 2.1 for d = 1, which gives N_Kh(A) ≤ w(A) − 1), and that ℓ ≥ d + 1.

Structure bounds in the general case: proof of Theorem 1.3
We start by introducing the central structural result of this section. As a reminder, we say that p ∈ ex(H(A)) if there is a vector v ∈ span(A − A) \ {0} and a constant c such that ⟨v, p⟩ = c and ⟨v, x⟩ > c for all x ∈ H(A) \ {p}. The key conclusion is that A^+ ⊂ NA for some N ≤ N_0(A) := 2^{11d^2} d^{12d^6} ℓ^{3d^2} w(A)^{8d^6}.
Lemma 7.1 is straightforward for large N ((7.1) was already given in [5, Proposition 4]), but our focus is on getting an effective bound on such N.
The only other ingredient in the proof of Theorem 1.3 is the following classical lemma: Lemma 7.2 (Carathéodory). Let A ⊂ R^d be a finite set, and let

H(B).
Proof. After an affine transformation one may assume that V = R^d; see [5, Lemma 4] for the proof, in which the union is taken over all B ⊂ A with |B| = d + 1 and span(B − B) = R^d. The equality as claimed, where B ⊂ ex(H(A)), then follows from the general fact that H(A) = H(ex(H(A))).
We will show that v ∈ NA for all N ≥ (d + 1)N_0(A). Let V = span(A − A) and r = dim V as above. By Lemma 7.2 there exists a suitable set B. Since the coefficients c_i sum to at least (d + 1)N_0(A), there must be some c_i ≥ N_0(A). After permuting coordinates, we will assume that c_r ≥ N_0(A). Expressing u and w with respect to the basis (b_r − B) \ {0}, and putting everything together, we have v ∈ NA as required.
It remains to prove Lemma 7.1. The condition x ∈ P(A) ∩ C_B but x − b ∉ P(A) ∩ C_B in the definition of A^+ is a minimality-type condition on x.⁷ As our argument for analysing the set A^+ will not stay within C_B, it turns out to be convenient to separate the P(A) part and the C_B part of this condition; this motivates the following definition. Let S_abs(A, ∅) = P(A) and use the convention that C_∅ = {0}. By this definition S_abs(A, B) ⊂ S(A, B), though these sets needn't be equal, so being a B-minimal element is a weaker condition than being an absolutely B-minimal element.
For a subset U ⊂ R^d and x ∈ R^d, we define dist(x, U) := inf_{u∈U} ‖x − u‖.
Lemma 7.4 (Controlling the absolutely B-minimal elements). Let B ∪ {0} ⊂ A ⊂ Z^d with |A| = ℓ ≥ 2, and assume that B is a (possibly empty) linearly independent set. Let r := dim span(A) and suppose that 0 ∈ ex(H(A)). If x ∈ S_abs(A, B) and dist(x, C_B) ≤ X then x ∈ NA for some N ∈ Z_{>0} with N ≤ (X + 1) 2^{10dr} d^{11d^5 r} ℓ^{3dr} w(A)^{7d^5 r}.
Lemma 7.4 is the main technical result of this section. The hypotheses allow r to be less than d, even though we will only apply the lemma when r = d, since our proof involves induction on r. Similarly, we do not assume that Λ_A = Z^d ∩ span(A), as this property would not necessarily be preserved by the induction step. Deducing Lemma 7.1 is straightforward: writing x with respect to the basis B, the coefficient vector has ℓ^∞-norm at most dw(A), which implies that dist(x, C_{B′′}) ≤ dw(A). Furthermore, for all b′′ ∈ B′′ we have x − b′′ ∉ P(A). Hence x ∈ S_abs(A, B′′). By Lemma 7.4 as applied to B′′ and X = dw(A), we may conclude that x ∈ NA for some N ≤ N_0(A), as in Lemma 7.1.
To establish (7.1), note that A^+ + P(B ∪ {0}) ⊂ P(A) ∩ C_B by definition. On the other hand if y ∈ P(A) ∩ C_B and there exists some b_1 ∈ B with y − b_1 ∈ P(A) ∩ C_B then we replace y by y − b_1. We repeat this with b_2, ... until the process terminates, which it must do since the sum of the coefficients of y with respect to the basis B decreases by 1 at each step. We are left with an element of A^+.
It remains to prove Lemma 7.4. Following the proofs in [6, 3] we now show that in certain favourable circumstances, S_abs(A, B) may be controlled in terms of the Davenport constant of Z^d/Λ_B. This is not used in our proof of Lemma 7.4 (except when d = 1) but, for reasons of motivation, it is helpful to understand why this type of argument fails in general.
Proof. Let x ∈ S_abs(A, B), and assume that x ≠ 0. Then write x = a_1 + ... + a_{N_A(x)} for some a_i ∈ A. If there were a subsum ∑_{i∈I} a_i ≡ 0 mod Λ_B, then ∑_{i∈I} a_i ∈ P(B) ∪ {0}. By minimality of N_A(x) we also have ∑_{i∈I} a_i ≠ 0. Therefore x ∈ P(A) + y for some non-zero y ∈ P(B), contrary to the assumption that x ∈ S_abs(A, B). Hence N_A(x) ≤ max(1, D(G) − 1), which also takes care of the x = 0 case.
If C_B is a strict subset of C_A then the above argument does not necessarily work, as ∑_{i∈I} a_i ≡ 0 mod Λ_B no longer automatically implies that ∑_{i∈I} a_i ∈ P(B) ∪ {0}: the key issue is how an element can be congruent to 0 modulo Λ_B while lying outside P(B) ∪ {0}.

Sketch of our proof of Lemma 7.4. The easy cases are r = 1 (which follows from any of the existing literature [11, 14, 5, 6], or from Lemma 7.5) and B = ∅ (which is dealt with in Lemma 7.11 below). From these base cases, we will construct a proof by induction on r. We may assume, therefore, that r ≥ 2 and B is non-empty. For this sketch, we will also assume that r = d. There are three main phases to the induction step.
• We provide an extra restriction on the region of R^d where S_abs(A, B) can lie, by showing that if x ∈ S_abs(A, B) then dist(x, ∂(C_A)) ≤ Y, where ∂(C_A) is the topological boundary of C_A and Y is some explicit bound. The bound dist(x, ∂(C_A)) ≤ Y is a generalisation of a basic result from the one-dimensional case (the classical 'Frobenius postage stamp' problem), in which the boundary of C_A is just {0} and one shows that the exceptional set E(A) is finite. Since ∂(C_A) is a union of (d − 1)-dimensional facets, there is some non-zero linear map α : R^d → R for which dist(x, ker α) ≤ Y.
• We combine the distance condition from above with the hypotheses of Lemma 7.4, giving dist(x, C_B) ≤ X and dist(x, ker α) ≤ Y. In turn, we show that this implies dist(x, C_B ∩ ker α) ≤ f(X, Y, A) (for some explicit function f(X, Y, A)), by a quantitative linear algebra argument. For this part, one should have in mind the situation of two rays, both starting from the origin. If x is in a neighbourhood of both rays separately, then x will be in some neighbourhood of the origin. The size of this neighbourhood is determined by the angle between the rays (the smaller the angle, the larger the neighbourhood). To study the general-dimension version of this phenomenon we avoid talking explicitly about angles, relying instead on the existence of suitable bases of vectors with integer coordinates.
Defining B′ = B ∩ ker α, we have C_{B′} = C_B ∩ ker α, and so we establish that dist(x, C_{B′}) ≤ f(X, Y, A).
• Let A′ = A ∩ ker α. If x is expressed as a sum a_1 + ⋯ + a_N with a_i ∈ A for all i, then only finitely many of the a_i are in A ∖ A′. This is because α(x) is bounded, by the assumption dist(x, ker α) ≤ Y, and α(a) > 0 for all a ∈ A ∖ A′, since ker α is a separating hyperplane for H(A).
Now let x′ be the subsum of a_1 + ⋯ + a_N coming just from those a_i ∈ A′. One still has an upper bound on dist(x′, C_{B′}), since ‖x − x′‖_∞ is bounded. One may also show that x′ ∈ S_abs(A′, B′). However, dim span(A′) = dim ker α = d − 1 < r, so by applying the induction hypothesis we conclude that x′ ∈ N′A′ for some explicit N′. Adding on the elements of A ∖ A′, of which there are boundedly many, we end up with x ∈ NA for some other explicit N.
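In the one-dimensional 'postage stamp' setting that the first bullet generalises, the finiteness of E(A) is easy to see computationally. A small illustration, with the hypothetical set A = {0, 3, 5}, so that C_A is the non-negative ray:

```python
def exceptional_set(gens, bound):
    # positive integers up to `bound` NOT representable as a
    # non-negative integer combination of `gens` -- for d = 1 this
    # recovers E(A) once `bound` exceeds the largest gap
    reach = {0}
    for x in range(1, bound + 1):
        if any(g and x - g in reach for g in gens):
            reach.add(x)
    return [x for x in range(1, bound + 1) if x not in reach]
```

Here E({0, 3, 5}) = {1, 2, 4, 7}: every integer beyond the Frobenius number 3·5 − 3 − 5 = 7 is representable.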
Phase 1: Quantitative details. We will prove the following.

Lemma 7.6 (Interior points are representable). Let A ⊂ Z^d be a finite set with 0 ∈ A and |A| = ℓ ≥ 2. There is a constant K_A such that if x ∈ Λ_A ∩ C_A and dist(x, ∂(C_A)) ≥ K_A then x ∈ P(A). Moreover we may take K_A ≤ 8d^d ℓ^{3d} w(A)^{3d}.

The proof will be a quantitative adaptation of an argument of Khovanskii from his original paper (Proposition 1 of [8], repeated as Lemma 1 of [9]). We first prove Lemma 7.7.

Proof of Lemma 7.7. We may assume that u ≠ 0, else the result is trivial. Pick some (x_a(u))_{a∈A} ∈ Z^A for which ∑_{a∈A} x_a(u) a = u. Let A′ := {a ∈ A : x_a(u) ≠ 0} and ℓ′ = |A′|. Let M be the d-by-ℓ′ matrix whose columns are the vectors sign(x_a(u)) a for a ∈ A′. The absolute values of the coefficients of M are all at most max_{a∈A} ‖a‖_∞ ≤ w(A). Since x′(u) := (|x_a(u)|)_{a∈A′} ∈ Z^{A′}_{>0} satisfies M x′(u) = u, we may apply Corollary 6.3 and conclude that there is some y(u) ∈ Z^{A′}_{>0} for which M y(u) = u and ‖y(u)‖_∞ ≤ 2d^d (ℓ′)^{d+1} w(A)^{2d} + d^d w(A)^{d−1} ‖u‖_∞. We have u = ∑_{a∈A} n_a(u) a with n_a(u) := sign(x_a(u)) y_a(u) for a ∈ A′, and n_a(u) := 0 otherwise.
Proof of Lemma 7.6. Let U = {u ∈ Λ_A : u = ∑_{a∈A} c_a a with c_a ∈ [0, 1) for all a ∈ A}.
From Lemma 7.7, we may write each u ∈ U as u = ∑_{a∈A} n_a(u) a for coefficients n_a(u) ∈ Z satisfying the bound given there, and we write D := max_{u∈U} max_{a∈A} |n_a(u)|. Now let x ∈ Λ_A ∩ C_A with dist(x, ∂(C_A)) ≥ K_A. By the construction of K_A, we have x − D ∑_{a∈A} a ∈ C_A. Therefore, we may write x = ∑_{a∈A} λ_a a for some real coefficients λ_a which satisfy λ_a ≥ D for all a. Then consider u := x − ∑_{a∈A} ⌊λ_a⌋ a.
We have u ∈ U, so writing u = ∑_{a∈A} n_a(u) a we get x = ∑_{a∈A} (⌊λ_a⌋ + n_a(u)) a. Since ⌊λ_a⌋ + n_a(u) ∈ Z_{≥0} by the construction of D, this shows that x ∈ P(A), as required.
The bound on K_A follows from the bound D ≤ 4d^d ℓ^{d+1} w(A)^{2d}.
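Lemma 7.6 can be illustrated in a toy two-dimensional case. With the hypothetical set A = {(0,0), (3,0), (0,1), (1,1)}, the cone C_A is the closed positive quadrant: lattice points at ℓ∞-distance at least 2 from the boundary turn out to be representable, while some points at distance 0 or 1 from the boundary (such as (4,0) and (2,1)) are not. A brute-force sketch; since all generators have non-negative coordinates, every partial sum of a representation of a point in the box stays inside the box, so the search is exhaustive there:

```python
def representable_in_box(gens, bound):
    # lattice points of P(A) with both coordinates in [0, bound]
    reach = {(0, 0)}
    frontier = [(0, 0)]
    while frontier:
        x, y = frontier.pop()
        for (a, b) in gens:
            p = (x + a, y + b)
            if (a, b) != (0, 0) and p[0] <= bound and p[1] <= bound \
                    and p not in reach:
                reach.add(p)
                frontier.append(p)
    return reach

A = [(0, 0), (3, 0), (0, 1), (1, 1)]
R = representable_in_box(A, 12)
```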
We use a classical result due to Bombieri and Vaaler for the more complicated pieces of quantitative linear algebra to come.

Lemma 7.8 (Siegel's lemma, Theorem 2 of [1]). With n > m, let M be an m-by-n matrix with integer entries. Then the equation MX = 0 has n − m linearly independent integer solutions X_1, …, X_{n−m} with ∏_{i ≤ n−m} ‖X_i‖_∞ ≤ D^{−1} √det(MM^T), where D is the greatest common divisor of the determinants of all the m-by-m minors of M.

Corollary 7.9. With n > m, let M be an m-by-n matrix with integer entries. Let K be the maximum of the absolute values of the entries of M. Then the equation MX = 0 has n − m linearly independent integer solutions X_1, …, X_{n−m} with ∏_{i ≤ n−m} ‖X_i‖_∞ ≤ √(m! n^m) K^m.

Proof. In Lemma 7.8 we have D ≥ 1 and, since the coefficients of MM^T are at most nK² in absolute value, we have det(MM^T) ≤ m!(nK²)^m.

Let e_1, …, e_d denote the standard basis vectors of R^d. By Cramer's rule, we obtain the required coefficient bounds. Now let f be the linear map given by the matrix DM, and let A′ := f(A). Henceforth we will abuse notation and consider A′ as a subset of Z^r. We now make an appeal to facts about C_{A′} and ∂(C_{A′}) which are laid out in Lemma A.2 below. In particular, we see that there is a collection of non-zero linear maps α_1, …, α_n : R^r → R, with n ≤ 2rℓ^{r/2}, for which

C_{A′} = {y ∈ R^r : α_i(y) ≥ 0 for all i ≤ n}, (7.4)

and for which for each i ≤ n there exists a subset A′_i ⊂ A′ of r − 1 linearly independent elements spanning ker α_i. Therefore, using Corollary 7.9 on the (r − 1)-by-r matrix with rows given by the elements of A′_i, without loss of generality we may assume the following: for all i ≤ n, there exists a vector x_i ∈ Z^r ∖ {0} with ‖x_i‖_∞ ≤ r^r w(A′)^r such that for all y ∈ R^r we have α_i(y) = ⟨x_i, y⟩. Indeed, by directly applying Corollary 7.9 we find a z_i ∈ Z^r ∖ {0} with ‖z_i‖_∞ ≤ r^r w(A′)^r that is orthogonal to ker α_i. Hence there is some c_i ∈ R ∖ {0} for which α_i(y) = c_i ⟨z_i, y⟩ for all y ∈ R^r. Then |c_i|^{−1} α_i(y) = ⟨sign(c_i) z_i, y⟩, and without loss of generality we may rename |c_i|^{−1} α_i as α_i (as this preserves C_{A′}) and define x_i := sign(c_i) z_i.
We claim that for each a′ ∈ A′ ∖ {0} there exists i ≤ n for which ⟨x_i, a′⟩ > 0. Indeed, suppose for contradiction that there were some a′ ∈ A′ ∖ {0} for which α_i(a′) = 0 for all i. By (7.4), this would mean that λa′ ∈ C_{A′} for all λ ∈ R. Yet 0 ∈ ex(H(A′)), which means that there is a non-zero linear map β : R^r → R for which β(y) > 0 for all y ∈ C_{A′} ∖ {0}. Taking λ = ±1 we would have both β(a′) > 0 and β(−a′) > 0, which gives the contradiction. Therefore for each a′ ∈ A′ ∖ {0} we have ⟨a′, ∑_{i ≤ n} x_i⟩ > 0, and since these are both integer vectors we have ⟨a′, ∑_{i ≤ n} x_i⟩ ≥ 1. Now suppose that v ∈ P(A) ∖ {0}. Then f(v) ∈ P(A′) ∖ {0}. Writing f(v) = a′_1 + ⋯ + a′_N with each a′_j ∈ A′, we have v = ∑_{j ≤ N} f^{−1}(a′_j), and hence v ∈ NA as claimed. This completes all the necessary preparation for the first phase of the induction step.
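The flavour of Corollary 7.9 can be checked by hand in the smallest case m = 1, n = 2. For M = (a b) with a, b positive integers, a primitive integer kernel vector is (b/g, −a/g) with g = gcd(a, b), and the Siegel-type bound ‖X‖_∞ ≤ D^{−1}√det(MM^T) = g^{−1}√(a² + b²) holds. A minimal numerical sketch, an illustration only and not the paper's argument:

```python
from math import gcd

def primitive_kernel_vector(a, b):
    # primitive integer solution of a*x + b*y = 0, i.e. a kernel
    # vector of the 1-by-2 integer matrix M = (a b)
    g = gcd(a, b)
    return (b // g, -a // g)
```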
Phase 2: Quantitative details. We will prove the following.
Let n := dim(span(B_1) ∩ span(B_2)), and let {v_1, …, v_d} be a basis for R^d that satisfies all the properties in Lemma 7.13. Expanding with respect to this basis, we write y_1 and y_2 in coordinates, for some coefficients α_i, β_i, γ_i, δ_i. We know that ‖y_1 − y_2‖_∞ ≤ X_1 + X_2, and that {v_1, …, v_d} is a basis with integer coordinates and max_i ‖v_i‖_∞ ≤ d^{3d³} K^{d³}. By Cramer's rule (or equivalently by considering the change of basis matrix), we conclude a bound on the differences of the corresponding coefficients. This implies that there exists some y_3 ∈ span(B_1) ∩ span(B_2) with ‖x − y_3‖_∞ explicitly bounded. If y_3 ∈ C_{B_1} ∩ C_{B_2} then we are done directly from the bound on ‖x − y_3‖_∞. If not, let us assume without loss of generality that y_3 ∉ C_{B_1}. The rest of the argument proceeds as follows. We know that y_1 ∈ C_{B_1}, but since ‖y_1 − y_3‖_∞ is bounded it follows that y_1 is nonetheless quite close to the boundary of C_{B_1} as well; this permits us to pass to a proper subset B′_1 of B_1, and we may finish off by applying the induction hypothesis to the cones C_{B′_1} and C_{B_2}.

We now proceed with the details. Expanding y_1 in terms of the basis B_1, one obtains the (unique) expression y_1 = ∑_{b∈B_1} c_b b with c_b ≥ 0 for all b ∈ B_1. We then claim that there must exist a set B′ ⊂ B_1 on which the coefficients c_b are suitably small for all b ∈ B′. Indeed, were this not the case, then expanding both sides of (7.5) with respect to the basis B_1 of span(B_1) would give an expression for y_3 contradicting y_3 ∉ C_{B_1}.

For Corollary 7.14: suppose dist(x, C_{B_1}) ≤ X_1 and dist(x, V) ≤ X_2; then dist(x, C_{B_1} ∩ V) satisfies the analogous explicit bound. Proof. Since dist(x, V) ≤ X_2, by replacing some vectors v_i with −v_i as necessary we may assume that dist(x, C_{B_2}) ≤ X_2. Then apply Lemma 7.12.
Having prepared both the first and second phases of the induction step, we may plough ahead and resolve Lemma 7.4. (The third phase will be dealt with in situ.)

Proof of Lemma 7.4. If B = ∅ then ‖x‖_∞ ≤ X and we are done by Lemma 7.11, so we may assume that |B| ≥ 1. We then proceed by induction on r := dim span(A). The base case is r = 1. For an arbitrary non-negative real X, suppose x ∈ S_abs(A, B) with dist(x, C_B) ≤ X. Since r = 1, we have moreover x ∈ S_abs(A, B) ⊂ P(A) ⊂ C_A = C_B, and thus in fact dist(x, C_B) = 0. Observe further that Λ_A ∩ C_A = Z_{≥0} v for some non-zero vector v ∈ Z^d. Taking the linear map f : span(A) → R for which f(v) = 1, let A′ = f(A), B′ = f(B), and x′ = f(x). Then w(A′) ≤ w(A), Λ_{A′} = Z, and x′ ∈ S_abs(A′, B′). Applying Lemma 7.5, we conclude that x ∈ NA with N ≤ D(Z/Λ_{B′}) ≤ w(A′) ≤ w(A). This settles the base case.
From now on, we assume that r ≥ 2 and x ≠ 0. Our first task is to find a vector y ∈ span(A) ∖ C_A for which ‖x − y‖_∞ is bounded. Indeed, choosing some b ∈ B, since x ∈ S_abs(A, B) we have x − b ∉ P(A). We know that x ∈ C_A, since P(A) ⊂ C_A, and so if x − b ∉ C_A we let y = x − b, and then ‖x − y‖_∞ ≤ w(A). Otherwise x − b ∈ (Λ_A ∩ C_A) ∖ P(A) = E(A). By Lemma 7.6, there is therefore some y ∈ span(A) ∖ C_A for which ‖x − b − y‖_∞ ≤ 8d^d ℓ^{3d} w(A)^{3d}.
We now apply Lemma 7.10 to this pair x and y. This gives a linearly independent set A_bas ⊂ A, with |A_bas| = r − 1, and a vector z ∈ span(A_bas) for which ‖x − z‖_∞ ≤ 16d^d ℓ^{3d} w(A)^{3d}. In particular,

dist(x, span(A_bas)) ≤ 16d^d ℓ^{3d} w(A)^{3d}. (7.6)

We also have a vector v ∈ Z^d ∩ span(A) ∩ (span(A_bas))^⊥ for which ‖v‖_∞ ≤ d^{2d²} w(A)^{d²} and ⟨v, u⟩ ≥ 0 for all u ∈ C_A. Phase one of the induction step is complete.

We now begin the second phase, in which we show that dist(x, C_{B′}) is bounded for some suitable B′ ⊂ B. Indeed, since dist(x, C_B) ≤ X, Corollary 7.14 implies that

dist(x, C_B ∩ span(A_bas)) ≤ (16d^d ℓ^{3d} w(A)^{3d} + X) 2^{2d} d^{10d^5} w(A)^{4d^5}. (7.7)

Now we move on to the third phase of the induction step. Let A′ = A ∩ span(A_bas). We now collect a few facts about A′ and about x. Firstly, if a ∈ A ∖ A′ then ⟨v, a⟩ > 0, and thus ⟨v, a⟩ ≥ 1 as both v and a are in Z^d. Next, letting x_0 be the orthogonal projection of x onto span(A_bas), we have ⟨v, x⟩ = ⟨v, x − x_0⟩ ≤ d ‖v‖_∞ dist(x, span(A_bas)).

Now define x′ := ∑_{i ≤ N : a_i ∈ A′} a_i ∈ P(A′).
We now apply the induction hypothesis to the sets A′ and B′, and to the element x′. The hypotheses are satisfied (taking the bound on dist(x′, C_{B′}) from (7.10)), since B′ is linearly independent (though possibly empty), and 0 ∈ ex(H(A′)); this is because, if V ∩ span(A) is a separating hyperplane for H(A) with V ∩ H(A) = {0}, then V ∩ span(A′) is a separating hyperplane for H(A′) with V ∩ H(A′) = {0}.
we have |NA| ≤ P_A(N) for all N, and |NA| = P_A(N) if and only if (1.5) holds, and thus N_Kh(A) = N_Str(A).
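The eventual equality |NA| = P_A(N) can be watched directly in a toy example: for the hypothetical set A = {(0,0), (1,0), (0,1), (1,1)} ⊂ Z², one has NA = {0, …, N}², so |NA| = (N + 1)² and the Khovanskii polynomial P_A(X) = (X + 1)² already agrees from N = 0 onwards. A sketch:

```python
def iterated_sumset(A, N):
    # the N-fold sumset NA = {a_1 + ... + a_N : a_i in A},
    # computed iteratively via (k+1)A = kA + A
    S = {(0, 0)}
    for _ in range(N):
        S = {(x + a, y + b) for (x, y) in S for (a, b) in A}
    return S

A = [(0, 0), (1, 0), (0, 1), (1, 1)]
```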
are all distinct in G, else subtracting would give a subsum equal to 0, and they are all contained in G ∖ H. Therefore k + |H| ≤ |G| and the first result follows. By definition k(G, H) < D(G), so the second result follows from the bound on D(G) noted above. Curran and Goldmakher's [3, Lemma 3.1] implies only the weaker upper bound k(G, H) ≤ |G| − 1. This difference leads in part to the improvements in Theorems 1.4 and 1.5.

3.1. d-dimensional simplices. Let B = {b_1, …, b_d} ⊂ A be a basis for R^d with B ∪ {0} ⊂ A ⊂ H(B ∪ {0}) and A ⊂ Z^d, so that A is finite. Since C_A = C_B, and B is a basis, there is a unique representation of every vector r ∈ C_A as r = ∑_{i=1}^d r_i b_i where each r_i ≥ 0.
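The coordinates r_i in this unique representation are obtained by Cramer's rule, and membership of r in C_A = C_B is equivalent to all r_i being non-negative. A two-dimensional sketch with a hypothetical basis B = {(2,1), (1,3)}:

```python
from fractions import Fraction

def cone_coords(b1, b2, r):
    # unique (r1, r2) with r = r1*b1 + r2*b2, by Cramer's rule;
    # r lies in the cone C_B iff r1 >= 0 and r2 >= 0
    det = b1[0] * b2[1] - b1[1] * b2[0]
    r1 = Fraction(r[0] * b2[1] - r[1] * b2[0], det)
    r2 = Fraction(b1[0] * r[1] - b1[1] * r[0], det)
    return r1, r2
```

Exact rational arithmetic (`fractions.Fraction`) avoids any floating-point ambiguity in the sign checks.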

≤ K², since n ≥ m + 1. Thus we have completed the induction in this case. If rank M^{{i}} = m − 1 then there are some further cases. If m = 1 and M^{{i}} is the zero matrix, then we can choose any vector y^{{i}} ∈ Z^{n−1}_{>0}. Otherwise, we may replace M^{{i}} by m − 1 of its rows; call this new (m − 1)-by-(n − 1) matrix M^{{i}}_res, and further we may assume that rank M^{{i}}_res = m − 1. Denote the analogous restriction of the vector b − M(x_i e_i) by (b − M(x_i e_i))_res. Then, by the induction hypothesis as applied to M^{{i}}_res, there is some y^{{i}} ∈ Z^{n−1}_{>0} for which M^{{i}}_res y^{{i}} = (b − M(x_i e_i))_res and ‖y^{{i}}‖_∞ satisfies the corresponding bound.

Lemma 7.1 (Decomposing P(A)). Let B ∪ {0} ⊂ A ⊂ Z^d with |A| = ℓ and B a non-empty linearly independent set. Suppose that 0 ∈ ex(H(A)). Let A^+ denote the set of x ∈ P(A) ∩ C_B with the property that, for all b ∈ B, x − b ∉ P(A) ∩ C_B. Then A^+ has the following two properties, the first being

C_B ∩ P(A) = A^+ + P(B ∪ {0}). (7.1)

By the assumption (7.2) we also have b_r N − v ∉ E(b_r − A) and b_r N − v ∈ Λ_{b_r − A}. Hence b_r N − v ∈ P(b_r − A). We may now apply Lemma 7.1 to the sets b_r − A and (b_r − B) ∖ {0}; the hypotheses are satisfied since b_r ∈ ex(H(A)) implies 0 ∈ ex(H(b_r − A)).

Definition 7.3 (Absolutely B-minimal). Let B ∪ {0} ⊂ A ⊂ Z^d, with A finite. We say that u ∈ P(A) is absolutely B-minimal with respect to A if u − b ∉ P(A) for all b ∈ B. Let S_abs(A, B) denote the set of absolutely B-minimal elements.

Lemma 7.5. Let B ∪ {0} ⊂ A ⊂ Z^d, with A finite and B a basis of R^d. Suppose that C_A = C_B. Let G := Z^d/Λ_B. Then S_abs(A, B) ⊂ NA, where N = max(1, D(G) − 1) and D(G) is the Davenport constant of G.

Lemma 7.7 (Quantitative representation of basis elements). Let A ⊂ Z^d be a finite set with |A| = ℓ ≥ 2 and 0 ∈ A. If u ∈ Λ_A then there exists (n_a(u))_{a∈A} ∈ Z^A for which u = ∑_{a∈A} n_a(u) a and |n_a(u)| ≤ 2d^d ℓ^{d+1} w(A)^{2d} + d^d w(A)^{d−1} ‖u‖_∞ for all a ∈ A.

Lemma 7.12 (Intersecting cones). Let d, d_1, d_2 ∈ Z, with d ≥ 1 and 0 ≤ d_1, d_2 ≤ d. Let B_1, B_2 ⊂ Z^d be finite sets with |B_i| = d_i for each i, and assume that B_1 and B_2 are each linearly independent.

y_3 = ∑_{b∈B_1} c′_b b, where c′_b is either of the form c_b or c_b − β_i for some i. In any case, c′_b ≥ 0 for all b ∈ B_1. So y_3 ∈ C_{B_1}, but this contradicts the earlier assumption that y_3 ∉ C_{B_1}. With this set B′, we conclude that dist(y_1, C_{B_1∖B′}) ≤ (X_1 + X_2) d^{4d^4 + 1} K^{d^4 + 1} ≤ (X_1 + X_2) d^{5d^4} K^{d^4 + 1}.

Corollary 7.14. Let d, d_1, d_2 ∈ Z_{>0} with d_1, d_2 ≤ d, and let K ≥ 1. Let B_1 ⊂ Z^d be a linearly independent set with |B_1| = d_1 and max_{b∈B_1} ‖b‖_∞ ≤ K. Let V ⊂ R^d be a subspace of dimension d_2, with a basis of d_2 vectors B_2 := {v_1, …, v_{d_2}} ⊂ Z^d ∩ V satisfying ‖v_i‖_∞ ≤ K for all i.

Let B′ = B ∩ span(A_bas). We then have C_B ∩ span(A_bas) = C_{B′}. To justify this assertion, note that if u ∈ C_B ∩ span(A_bas) then u = ∑_{b∈B} c_b b for some coefficients c_b ≥ 0. But then 0 = ⟨v, u⟩ = ∑_{b∈B} c_b ⟨v, b⟩ = ∑_{b∈B∖B′} c_b ⟨v, b⟩, since v ∈ span(A_bas)^⊥. As ⟨v, b⟩ > 0 for all b ∈ B ∖ B′, we must have c_b = 0 for all b ∈ B ∖ B′. Hence u ∈ C_{B′}. (The reverse inclusion C_{B′} ⊂ C_B ∩ span(A_bas) is immediate from the definitions.) Therefore,

dist(x, C_{B′}) ≤ (16d^d ℓ^{3d} w(A)^{3d} + X) 2^{2d} d^{10d^5} w(A)^{4d^5}.