## Abstract

The main results of this paper concern growth in sums of a *k*-convex function *f*. Firstly, we streamline the proof (from Hanson et al. (Combinatorica 42:71–85, 2020)) of a growth result for *f*(*A*) where *A* has small additive doubling, and improve the bound by removing logarithmic factors. The result yields an optimal bound for

We also generalise a recent result from Hanson et al. (J Lond Math Soc, 2021), proving that for any finite \(A\subset \mathbb {R}\)

where \(s = \frac{k+1}{2}\). This allows us to prove that, given any natural number \(s \in \mathbb {N}\), there exists \(m = m(s)\) such that if \(A \subset \mathbb {R}\), then

This is progress towards a conjecture (Balog et al. in Electron J Comb 24(3):Paper No. 3.14, 17, 2017) which states that (1) can be replaced with

Developing methods of Solymosi, and Bloom and Jones, and using an idea from Bradshaw et al. (Electron J Comb 29, 2021), we present some new sum-product type results in the complex numbers \(\mathbb {C}\) and in the function field \(\mathbb {F}_q((t^{-1}))\).

## 1 Introduction

The study of growth under sums and differences of convex sets has received significant attention over the last twenty years. Recently, there has been important progress, culminating in, among other results, a threshold-breaking bound for the number of dot products in the plane [10]. Another central problem, which appears related, is whether taking sufficiently many products of the difference set \(A-A\) forces growth ad infinitum.

### 1.1 Growth of Sets Under Convex Functions

Let *A* be a finite set of reals. The sum-product problem is one of the most studied problems in additive combinatorics. It deals with the relative sizes of the sumset and product set:
\(A+A := \{ a + b : a, b \in A \} \quad \text {and} \quad AA := \{ ab : a, b \in A \}.\)

The following was conjectured by Erdős and Szemerédi [8] (originally over the integers).

### Conjecture 1.1

(Sum-Product Conjecture) For any positive \(\delta <1\), there exists a constant \(C(\delta )>0\), such that for any sufficiently large \(A\subset \mathbb {R}\), we have
\(\max (|A+A|, |AA|) \ge C(\delta ) |A|^{1+\delta }.\)

### Notation

We use Vinogradov’s symbol extensively. We write \(X \ll Y\) to mean that \(X \le CY\) for some absolute constant *C*, and \(X \lesssim Y\) to mean that \(X \le C_1Y(\log Y)^{C_2}\) for some absolute constants \(C_1,C_2\). If \(X \ll Y\) and \(Y \ll X\), we will write \(X \approx Y\). If additionally a suppressed constant \(C:=C(k)\) depends on some parameter *k*, we write \(\ll _k, \lesssim _k, \approx _k\). Usually in such cases *k* will be fixed and therefore small compared to the growing parameters.

Equipped with this notation, we could rewrite (3) as
\(\max (|A+A|, |AA|) \gg _{\delta } |A|^{1+\delta }.\)

While the full sum-product conjecture remains open, proving the above for increasing values of \(\delta \) represents the progress of the last forty years. The best bound known to date is \(\delta = 1/3 + \frac{2}{1167} - o(1)\), due to Rudnev and Stevens [17].

The notation in (2) naturally generalises to incorporate differences and ratios as well as longer sums. One of the key quantities of interest in this paper will be

For longer sums and products we will suppress notation so that *nA* is the *n*-fold sumset and \(A^{(n)}\) is the *n*-fold product set. Incorporating differences and division, we allow sets of the form \(mA-nA\) and \(A^{(m)}/A^{(n)}\).

A set \(B = \{ b_1< \dots < b_N \}\) is said to be convex if
\(b_{i+1} - b_i > b_i - b_{i-1} \quad \text {for all } 1< i < N.\)

It can be easily shown that a convex set can be expressed as \(B = f([N])\) where *f* is an increasing function with an increasing first derivative and \([N]:=\{1,\dots ,N\}\).
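This characterisation is easy to check numerically. The following sketch (with the hypothetical choice \(f(x) = x^2\), so that \(B = f([N])\) is the set of squares) verifies that the consecutive differences strictly increase:

```python
# A set is convex when its consecutive gaps strictly increase.
def is_convex_set(B):
    """Check that sorted(B) has strictly increasing consecutive gaps."""
    B = sorted(B)
    gaps = [y - x for x, y in zip(B, B[1:])]
    return all(g < h for g, h in zip(gaps, gaps[1:]))

# B = f([N]) with f(x) = x^2: gaps are 3, 5, 7, ..., strictly increasing.
print(is_convex_set([n * n for n in range(1, 101)]))   # True

# An arithmetic progression is not convex: all gaps are equal.
print(is_convex_set(range(1, 101)))                    # False
```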

We say a \(C^1(\mathbb {R})\) function *f* is *convex* if both *f* and \(f'\) are either strictly increasing or strictly decreasing. There is an aphorism in additive combinatorics that convex functions destroy additive structure; if *f* is a convex function, then either *A* or *f*(*A*) can be additively structured, but not both. Statements about the additive structure of *A* and *f*(*A*) are more general than sum-product results. Indeed, if for some positive \(\delta <1\), we have
\(\max (|A+A|, |f(A)+f(A)|) \gg |A|^{1+\delta },\)

then one immediately obtains a sum-product theorem by choosing \(f(x) = \log (x)\). Elekes, Nathanson and Ruzsa [7] proved a more general form of (4) with \(\delta = 1/4\). We firstly prove the following, a generalisation of (4) to longer sums/differences.

### Theorem 1.1

Let *A* be a finite set of reals and *f* be a convex function. Then
\(|f(A)+f(A)-f(A)| \gg \frac{|A|^3}{|A+A-A|}.\)

In [9], Hanson, Roche-Newton and Rudnev proved the slightly weaker statement
\(|f(A)+f(A)-f(A)| \gtrsim \frac{|A|^3}{|A+A-A|}.\)

The removal of the logarithmic factors makes the bound of Theorem 1.1 sharp. Indeed, if \(f(x) = x^2\) and \(A = [N]\) then
\(|f(A)+f(A)-f(A)| \approx |A|^2 \approx \frac{|A|^3}{|A+A-A|}.\)

Theorem 1.1 is simply the base case for the forthcoming Theorem 1.2. It is stated separately because we present a short, new proof inspired by the very simple method by which Solymosi establishes the \(\delta = 1/4\) sum-product bound in \(\mathbb {C}\) [19].
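Numerically, the sharpness example \(A = [N]\), \(f(x) = x^2\) looks as follows (an illustrative sketch with a hypothetical small \(N\), not part of the proof): \(|A+A-A| = 3N-2\), while \(f(A)+f(A)-f(A)\) has size \(O(N^2)\), so the lower bound \(|A|^3/|A+A-A|\) is attained up to a constant.

```python
# Sharpness sketch for Theorem 1.1 with A = [N] and f(x) = x^2.
N = 40
A = range(1, N + 1)

sum_set = {a + b - c for a in A for b in A for c in A}              # A+A-A
f_sum_set = {a * a + b * b - c * c for a in A for b in A for c in A}

print(len(sum_set))     # 3N - 2: every integer in [2-N, 2N-1] occurs
print(len(f_sum_set))   # O(N^2): all values lie in (2 - N^2, 2N^2)
```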

A simple idea which emerged in a recent paper of Hanson, Rudnev and the author [5] is that \(A+A-A\) must, in some sense, be evenly spaced among the elements of *A*. More specifically, if \(a,a'\in A\) have very few elements of \(A+A-A\) between them, then *a* and \(a'\) must be close to each other. The proof of Theorem 1.1 elucidates how this idea may be leveraged in proving sumset bounds.

### 1.2 Higher Convex Functions

It is expected that the more convex a function, the less likely *f*(*A*) is to exhibit additive structure. We say a \(C^k(\mathbb {R})\) function is *k*-*convex* if it has strictly monotone derivatives \(f^{(0)}, f^{(1)},\dots , f^{(k)}\). Here monotone means either increasing or decreasing. A 1-convex function is simply a convex function and a 0-convex function is a strictly monotone function. To streamline exposition, we henceforth assume that a *k*-convex function *f* has monotone *increasing* derivatives \(f^{(0)}, f^{(1)}, \dots , f^{(k)}\). If any of them are decreasing, the proofs herein can be easily modified to handle this. Such modifications are discussed in [9]. It is nonetheless worth noting that if *f* has all derivatives positive, then \(F(x):= -f(-x)\) has the signs of its derivatives alternating and *F* has the same sumset behaviour as *f*. This is a more direct way to see that our theorems will apply in the important case where \(f = \log \).

The forthcoming Theorem 1.2 improves the main result in [9] by logarithmic factors. Specifically, we use an idea from [5] to sidestep the need for dyadic pigeonholing.

### Theorem 1.2

Let *A* be a finite set of real numbers and *f* be a *k*-convex function. Then if \(|A+A-A|\le K|A|\), we have

Setting \(A = [N]\) and \(f(x) = x^{k+1}\) verifies that the term \(|A|^{k+1}\) on the right-hand side is best possible.

In [10], it is proved that if *A* is a finite real set and *f* is any convex function, then

We prove the following generalisation.

### Theorem 1.3

Let *A* be a set of reals and *f* be a *k*-convex function. If *k* is even with \(k = 2s\) then

and if *k* is odd with \(k = 2s-1\) then

Notice that in both cases, the number of summands inside the function *f* is the same as the power of |*A*| on the right-hand side. For the purposes of discussion, assume we are in the odd *k* case. A generic set *A* is expected to have \(|sA-sA| \approx |A|^{2s} = |A|^{k+1}\), and therefore \(|f(sA-sA)| \approx |A|^{k+1}\) as well, since *f* is a monotone function. However, \(|sA-sA|\) may of course be significantly smaller than \(|A|^{k+1}\). Theorem 1.3 affirms that even in this case, sufficiently many sums (and differences) of \(f(sA-sA)\) guarantee the same \(|A|^{k+1}\) growth.

It is also worth mentioning that our proof method actually permits a stronger conclusion: in the even *k* case, \((s+1)A-sA\) may be replaced by \(A + \{ -s,\dots , s \}d\), and in the odd *k* case, \(sA-sA\) may be replaced by \(A-A + \{ -(s-1),\dots , s-1 \}d\), where *d* is some fixed element of \(A-A\). So even though generically \(|A + \{-s, \dots , s\}d| \approx s|A|\), adding sufficiently many copies of \(f(A + \{-s, \dots , s\}d)\) guarantees \(|A|^{k+1}\) growth.

While it may be possible to improve Theorem 1.3 by reducing the number of summands of \(f(sA-sA)\) and \(f((s+1)A-sA)\) on the left-hand side, the bound \(|A|^{k+1}\) cannot be improved, as evinced by setting \(A = [N]\) and \(f(x) = x^{k+1}\).

It is easy to see that \(A-A + \{ -s+1,\dots , s-1 \}d\) is the union of a small number of translates of \(A-A\). (A similar statement applies to \(A + \{ -s,\dots , s \}d\).) This motivates the following conjecture.
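Concretely, \(A-A + \{ -(s-1),\dots , s-1 \}d\) is covered by the \(2s-1\) translates \((A-A)+jd\), so it is at most \(2s-1\) times larger than \(A-A\). A small numeric sketch (the set \(A\), the value of \(s\), and the choice of \(d \in A-A\) here are hypothetical):

```python
# A - A + {-(s-1), ..., s-1}d as a union of 2s - 1 translates of A - A.
A = [1, 4, 9, 16, 25, 36]
s = 3
d = 9 - 4                                  # a fixed element of A - A

diff = {a - b for a in A for b in A}       # A - A
shifts = range(-(s - 1), s)                # {-2, -1, 0, 1, 2}

big = {x + j * d for x in diff for j in shifts}
translates = [{x + j * d for x in diff} for j in shifts]

print(big == set().union(*translates))     # True: a union of translates
print(len(big) <= (2 * s - 1) * len(diff)) # True: the size bound
```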

### Conjecture 1.2

Let *A* be a set of reals and *f* be a *k*-convex function. Then

We feel that bounding the size of sumsets of convex functions with many summands as well as the iteration techniques to prove them are of independent interest. However, additionally they allow improvements of bounds for 2-fold sumsets and energy of highly convex sets. See [5] for more details.

### 1.3 Applications

#### 1.3.1 Growth for Products of Generalised Difference Sets

It was conjectured in [1] that for any \(s >0\), there exists \(m = m(s)\) such that if *A* is a finite real set then
\(|(A-A)^{(m)}| \gg |A|^{s},\)

where we recall that \((A-A)^{(m)}\) denotes an *m*-fold product set \(\underbrace{(A-A)\dots (A-A)}_{m \text { times}}\). This was proved in [11] for the case when \(A \subset \mathbb {Q}\). Balog, Roche-Newton and Zhelezov proved (5) for \(s=3\), and additionally that for \(s=17/8\), choosing \(m = 3\) suffices. These were the first known results for \(s>2\). Recently, Hanson, Roche-Newton and Senger proved (implicitly) [10] that (5) holds if \(m = 8, s = 33/16\), but their method was stronger in the sense that some of the \(A-A\) terms could be replaced with \(A-a\) for specific values of \(a\in A\). They use this to improve the best known lower bound for \(|\Lambda (P)|\) where \(\Lambda (P)\) is the set of dot products induced by a point set *P* in \(\mathbb {R}^2\). They proved

the first result to break the threshold \(|P|^{2/3}\). See [16] for more connections between similar problems and growth in \(A-A\).

We prove a result approaching this conjecture from a different direction, namely allowing for products of *many*-fold differences.

### Theorem 1.4

Given any natural number \(s \in \mathbb {N}\), there exists \(m = m(s)\) such that if *A* is a finite set of reals, then
\(|(sA-sA)^{(m)}| \gg _s |A|^{s}.\)

The proof of Theorem 1.4 is an easy corollary of Theorem 1.3, and is proved in Sect. 4. If Conjecture 1.2 holds, then (5) is the natural corollary. Again, our proofs permit a stronger conclusion than (6), namely

which is remarkably close to (5). The proof also delivers the explicit value \(m(s) = 2^{2s-1}\).

#### 1.3.2 Sum-Product Type Results

Studying the sizes of sumsets and product sets which are not the traditional \(A+A\) and *AA* has led to many variations of the sum-product problem. See for example [4, 14, 20]. What these and many other results *do* share with the sum-product problem is that they all enshrine the philosophy that additive structure and multiplicative structure cannot coexist in the same set. We may refer to any such result as a sum-product *type* result.

The sum-product problem has been studied in other fields as well. We particularly note that for \(A\subset \mathbb {C}\), (3) is known for all \(\delta < 1/3 + c\) (for some small c) [2] and for subsets of a function field \(A \subset \mathbb {F}_q((t^{-1}))\), it is known for all \(\delta < 1/5\) (with the implied constant *C* also depending on *q*) [3]. In this paper, we prove a related sum-product type result in each setting. Over the complex numbers we have the following.

### Theorem 1.5

Let \(A \subset \mathbb {C}\) be a finite set. Then the following holds:
\(|A+A-A|\,|AA|^2 \gg |A|^4.\)

The function field \(\mathbb {F}_q((t^{-1}))\) is the field of all Laurent series of the form

We prove the following.

### Theorem 1.6

For any finite \(A \subset \mathbb {F}_q((t^{-1}))\) and any \(\epsilon > 0\), we have

Theorem 1.6 does not extend to function fields where the base field is not finite. Furthermore, the dependence in this result on *q* is necessary (and cannot be improved), since \(\mathbb {F}_q((t^{-1}))\) has small non-trivial subfields. Indeed, setting \(A=\mathbb {F}_q\) demonstrates this sharpness. It is also worth mentioning that the same proof of Theorem 1.6 holds for finite subsets of any field with nonarchimedean norm and finite residue field. In particular, it holds for finite subsets of the *p*-adic numbers \(\mathbb {Q}_p\).

A form of Plünnecke’s inequality [15, Cor. 1.5] shows that given any set *A* in some group *G*, there exists \(A' \subset A\) with \(|A'|\ge |A|/2\) such that

If \(A' \subset A \subset \mathbb {C}\), then applying Theorem 1.5 to \(A'\) and some simple inequalities yields

which matches Solymosi’s bound [19] (which has since been improved). Similarly, if \(A' \subset A \subset \mathbb {F}_q((t^{-1}))\), then applying Theorem 1.6 to \(A'\) yields

This matches Bloom and Jones’ bound in [3], which is the best known sum-product bound in function fields with finite residue field. Unfortunately, the bounds (7) and (8) do not follow from our theorems if \(A-A\) is replaced with \(A+A\).

It is in the few products, many sums framework that Theorems 1.5 and 1.6 are most relevant. In studying sum-product phenomena, it is often instructive to find bounds for \(|A+A|\) given that |*AA*| is small. In other words, given that there are *few products*, we show that there are *many sums*. This problem has been studied in [6, 13, 14].

In particular, we are addressing the few products, many 3-fold sums problem. For example, if we know that \(|AA| \approx |A|\), then for \(A \subset \mathbb {C}\) we have
\(|A+A-A| \gg |A|^2,\)

and for \(A \subset \mathbb {F}_q((t^{-1}))\) we have

The proofs of Theorems 1.5 and 1.6 are both inspired by the combination of techniques used to prove Theorem 1.1.

### 1.4 Structure of this Paper

In Sect. 2, we present some essential preliminaries and a proof of Theorem 1.1, which forms the base step for the induction proof in the following section. In Sect. 3, we prove Theorem 1.2 using a variation of the induction proof in [9]. Section 4 is devoted to proving Theorem 1.3 through the auxiliary result Proposition 4.1. We finally demonstrate that Theorem 1.4 is an easy corollary.

The proofs of Theorems 1.5 and 1.6 are almost identical to the corresponding proofs in [19] and [3] respectively. Simple modifications using a version of Lemma 2.3 produce the improvements. For this reason, both proofs appear only in Appendix 1, which is the only section in which we do not work exclusively over \(\mathbb {R}\).

## 2 Preliminaries

### 2.1 Convex Functions and Squeezing Elements

A simple and clever *squeezing* approach for proving sumset bounds for convex sets was given in [18], providing a sharp lower bound
\(|B+B-B| \gg |B|^2,\)

where *B* is a convex set. This approach was significantly extended in [9], providing estimates for longer sums of more convex sets and also the images *f*(*A*) of sets with small additive doubling under *k*-convex functions.

The basic idea is the following. Recall that a set \(B = \{b_1< \dots < b_N\}\) is convex if the adjacent differences between its elements are increasing
\(b_{j+1} - b_j > b_j - b_{j-1} \quad \text {for all } 1< j < N.\)

It follows that if *i* is fixed, then for any \(j < i\) the terms
\(b_i + (b_{j+1} - b_j)\)

are all unique and lie in \((b_i,b_{i+1})\). We may apply this for any *i* to obtain \(\sum _{i=1}^N (i-2) \approx |B|^2\) elements of \(|B+B-B|\).
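The count above can be replayed numerically. A sketch with the hypothetical convex set \(B = \{n^2 : n \le N\}\): for each *i*, the values \(b_i + (b_{j+1}-b_j)\) with \(j < i\) are distinct elements of \(B+B-B\) squeezed into \((b_i,b_{i+1})\), and since these intervals are disjoint the contributions add up to roughly \(|B|^2/2\).

```python
# Squeezing sketch: count elements of B+B-B trapped between consecutive
# elements of a convex set B.
B = [n * n for n in range(1, 51)]        # convex: gaps are 3, 5, 7, ...

squeezed = set()
for i in range(1, len(B) - 1):
    for j in range(i):
        x = B[i] + (B[j + 1] - B[j])     # gap_j < gap_i, so x is trapped
        assert B[i] < x < B[i + 1]
        squeezed.add(x)

triple_sum = {a + b - c for a in B for b in B for c in B}   # B+B-B
assert squeezed <= triple_sum

# One contribution of size i per index i: about |B|^2/2 elements in total.
print(len(squeezed), len(B) ** 2 // 2)
```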

More can be said if we are looking at the images of convex functions. Given a function *f*, define its *d*-derivative by
\(\Delta _d f(x) := f(x+d) - f(x).\)

### Lemma 2.1

If *f* is a *k*-convex function, then for any *d*, \(\Delta _d f\) is a \((k-1)\)-convex function.

### Proof

We use induction on *k*. Suppose *f* is 1-convex. We have
\((\Delta _d f)'(x) = f'(x+d) - f'(x).\)

Since \(f'\) is monotone, it follows that \(\Delta _d f\) is also monotone, and hence 0-convex.

Next assume the statement holds for \((k-1)\)-convex functions. Let *f* be a *k*-convex function. By definition, this implies that \(f'\) is a \((k-1)\)-convex function. The induction hypothesis implies that \(\Delta _d(f')\) is a \((k-2)\)-convex function. But since \(\Delta _d(f') = (\Delta _d f)'\), it follows that \(\Delta _d f\) is \((k-1)\)-convex, completing the induction. \(\square \)
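Lemma 2.1 is easy to observe numerically. The sketch below takes the hypothetical example \(f(x)=x^3\), which is 2-convex on \((0,\infty )\), together with the convention \(\Delta _d f(x) = f(x+d)-f(x)\); the \(d\)-derivative should then be a convex (1-convex) function, which we test by sampling.

```python
# Numerical sketch of Lemma 2.1: the d-derivative of a 2-convex function
# should be 1-convex (i.e. convex).
def delta(f, d):
    """The d-derivative: x -> f(x + d) - f(x)."""
    return lambda x: f(x + d) - f(x)

def strictly_increasing(vals):
    return all(a < b for a, b in zip(vals, vals[1:]))

f = lambda x: x ** 3              # 2-convex on (0, inf)
g = delta(f, 0.5)                 # g(x) = 1.5x^2 + 0.75x + 0.125

xs = [0.1 * k for k in range(1, 200)]
g_vals = [g(x) for x in xs]
g_gaps = [b - a for a, b in zip(g_vals, g_vals[1:])]   # proxy for g'

print(strictly_increasing(g_vals))   # g is increasing
print(strictly_increasing(g_gaps))   # and so is g', hence g is convex
```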

If *f* is convex, \(A = \{a_1< \dots < a_N\}\) and \(d >0\), then Lemma 2.1 implies that \(\Delta _d f\) is increasing, meaning that
\(f(a_1+d)-f(a_1)< f(a_2+d)-f(a_2)< \dots < f(a_N+d)-f(a_N).\)

Consequently if *i* is fixed, then for any \(j\le i\) the terms
\(f(a_i) + \Delta _d f(a_j) = f(a_i) + f(a_j+d) - f(a_j)\)

are all different and lie in \((f(a_i),f(a_i + d)]\). We usually take *d* to be at most the smallest difference between adjacent elements of *A* to ensure that these intervals are disjoint. These observations are summarised in the following.

### Lemma 2.2

(The squeezing lemma) Let *f* be a convex function and \(d > 0\). Let *s* be a real number and \(S_-\) a set of real numbers no larger than *s*. Then
\(f(s) + \Delta _d f(S_-) \subset (f(s), f(s+d)] \quad \text {and} \quad |f(s) + \Delta _d f(S_-)| = |S_-|.\)

### 2.2 The Basic Argument

The following method will be extended to different fields (\(\mathbb {C}\) and \(\mathbb {F}_q((t^{-1}))\)) in Appendix 1, but ideas herein will also be employed in Sect. 3.

We will also introduce the following notation: if \(a'<a\), then
\(n_A(a',a) := |(A+A-A) \cap (a',a]|.\)

Taking \(a,a' \in A\), the quantity \(n_A(a',a)\) tells us about how \(A+A-A\) is distributed among the elements of *A*. Intuitively we would think that wide intervals contain many elements of \(A+A-A\). This intuition is quantified by the following Lemma, which also appears in [5].

### Lemma 2.3

Let \(D:=\{d_1<d_2\dots <d_{|D|}\}\) be the positive differences in \(A-A\). If \(a,a' \in A\) with \(a'<a\) and \(n_A(a',a)\le Z\), then \(a-a' \le d_Z\).

In other words, if there are at most *Z* elements of \(A+A-A\) in \((a',a]\), then \(a-a'\) must be among the *Z* smallest positive differences in \(A-A\).

### Proof

If not then \(a-a' = d_Y\) where \(Y>Z\). But then
\(a' + d_i \in A + (A-A) \subseteq A+A-A \quad \text {and} \quad a' + d_i \in (a',a]\)

for \(i = 1,\dots ,Y\). Thus there are at least \(Y>Z\) elements of \(A+A-A\) in \((a',a]\), contradicting that \(n_A(a',a) \le Z\). \(\square \)
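Lemma 2.3 can be stress-tested numerically. The sketch below checks, on a random integer set, that the position of \(a-a'\) in the sorted list of positive differences never exceeds the number of elements of \(A+A-A\) in \((a',a]\).

```python
import bisect
import random

# Numerical check of Lemma 2.3: if (a', a] contains n elements of A+A-A,
# then a - a' is among the n smallest positive differences of A - A.
random.seed(1)
A = sorted(random.sample(range(1, 10 ** 6), 40))

sum_set = sorted({a + b - c for a in A for b in A for c in A})
pos_diffs = sorted({a - b for a in A for b in A if a > b})

for i, lo in enumerate(A):
    for hi in A[i + 1:]:                  # lo = a' < a = hi
        # n_A(a', a): elements of A+A-A lying in (a', a]
        n = bisect.bisect_right(sum_set, hi) - bisect.bisect_right(sum_set, lo)
        # position of a - a' in the sorted positive differences
        rank = bisect.bisect_right(pos_diffs, hi - lo)
        assert rank <= n
print("Lemma 2.3 verified on a random 40-element set")
```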

We are now equipped to prove Theorem 1.1.

### Proof of Theorem 1.1

Let \(A:= \{a_1< \dots < a_{|A|}\}\). We say that \(a_i\) is *good* if
\(n_A(a_i,a_{i+1}) \le \frac{4|A+A-A|}{|A|} \quad \text {and} \quad n_{f(A)}(f(a_i),f(a_{i+1})) \le \frac{4|f(A)+f(A)-f(A)|}{|A|}.\)

Since
\(\sum _{i=1}^{|A|-1} n_A(a_i,a_{i+1}) \le |A+A-A| \quad \text {and} \quad \sum _{i=1}^{|A|-1} n_{f(A)}(f(a_i),f(a_{i+1})) \le |f(A)+f(A)-f(A)|,\)

by the pigeonhole principle it follows that there is a set \(A'\) with \(|A'| \ge |A|/2\) such that each element of \(A'\) is good.

Now consider the map
\(\Psi (a_i) := \big ( a_{i+1}-a_i,\ f(a_{i+1})-f(a_i) \big ).\)

By the mean value theorem, there exists a sequence \(\{c_i\}\) where \(c_i \in (a_i,a_{i+1})\) and
\(f(a_{i+1}) - f(a_i) = f'(c_i)\,(a_{i+1} - a_i).\)

Once \(f(a_{i+1})-f(a_i)\) and \(a_{i+1}-a_i\) are fixed, \(c_i\) is known uniquely since \(f'\) is strictly monotone. Thus \(a_i\) is also known uniquely, and \(\Psi \) is injective.

Restrict the domain of \(\Psi \) to \(A'\). Since \(\Psi \) is injective, the size of its domain \(A'\) equals the size of its image \(\Psi (A')\), proving that
\(|\Psi (A')| = |A'| \ge \frac{|A|}{2}.\)

A suitable upper bound for \(|\Psi (A')|\) will complete the proof.

Since each \(a_i \in A'\) is good it satisfies
\(n_A(a_i,a_{i+1}) \le \frac{4|A+A-A|}{|A|} \quad \text {and} \quad n_{f(A)}(f(a_i),f(a_{i+1})) \le \frac{4|f(A)+f(A)-f(A)|}{|A|}.\)

Lemma 2.3 shows that \(a_{i+1}- a_i\) is among the smallest \(\frac{4|A+A-A|}{|A|}\) positive elements in \(A-A\) and therefore, there are \(\le \frac{4|A+A-A|}{|A|}\) values it can take. By an identical argument there are \(\le \frac{4|f(A)+f(A)-f(A)|}{|A|}\) values \(f(a_{i+1})-f(a_i)\) can take.

This proves that \(|\Psi (A')| \le 16\ |A+A-A||f(A)+f(A)-f(A)||A|^{-2}\). It follows from (9) that
\(\frac{|A|}{2} \le \frac{16\,|A+A-A|\,|f(A)+f(A)-f(A)|}{|A|^2},\)

and rearranging completes the proof. \(\square \)
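Tracking the constants in the proof (\(|A'|\ge |A|/2\) and the factor 16) gives the rearranged inequality \(|A+A-A|\,|f(A)+f(A)-f(A)| \ge |A|^3/32\). The sketch below stress-tests this with \(f(x)=x^2\) on a few hypothetical example sets; a sanity check, not a proof.

```python
# Sanity check of |A+A-A| * |f(A)+f(A)-f(A)| >= |A|^3 / 32 with f(x) = x^2.
def check(A):
    A = sorted(set(A))
    sums = {a + b - c for a in A for b in A for c in A}
    f_sums = {a * a + b * b - c * c for a in A for b in A for c in A}
    return 32 * len(sums) * len(f_sums) >= len(A) ** 3

print(check(range(1, 41)))                  # an interval
print(check(2 ** k for k in range(20)))     # a geometric progression
print(check([1, 2, 3, 5, 8, 13, 21, 34]))   # a Fibonacci segment
```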

### Remark

In Sect. 3, we will claim that Theorem 1.1 is the base case for the induction proof of Theorem 1.2. In fact, we will need the slightly stronger statement that \(f(A)+f(A)-f(A)\) contains \(\gg |A|^3 |A+A-A|^{-1}\) elements in \((\min (f(A)),\max (f(A))]\). This is immediate after modifying the definition of “good” in the above proof: say \(a_i\) is *good* if

## 3 Proof of Theorem 1.2

### Proof of Theorem 1.2

The proof will be by induction on *k*. The actual statement we will prove is the slightly stronger statement that all sums produced lie in the interval \((\min f(A),\max f(A)]\). So the base step (which is proved in Theorem 1.1) states that \(f(A)+f(A)-f(A)\) contains \(\gg |A|^3 |A+A-A|^{-1}\) elements in \((\min f(A),\max f(A)]\).

Similar to previous proofs, we say that \(a_i\in A\) is *good* if
\(n_A(a_i,a_{i+1}) \le \frac{2|A+A-A|}{|A|}.\)

By the pigeonhole principle, at least half of all \(a_i \in A\) are good. We henceforth restrict our attention to the good \(a_i\). Consider the differences \(a_{i+1}-a_i\) and let *H* be the set of all such differences. Since we are only considering good values of \(a_i\), Lemma 2.3 implies that \(|H| \ll |A+A-A||A|^{-1}\). For each \(h \in H\), define
\(A_h := \{ a_i \in A : a_i \text { is good and } a_{i+1} - a_i = h \}.\)

We furthermore know that \(\sum _{h\in H} |A_{h}| \approx |A|\).

If \(A_{h} = \{ a_{e_1}< \dots <a_{e_{|A_h|}}\}\) then let \(A_h^i = \{ a_{e_1}< \dots <a_{e_{i}}\}\) be the truncation taking only the smallest *i* elements of \(A_h\). For any \(a_{e_i} \in A_{h}\), the squeezing lemma (Lemma 2.2) implies that
\(f(a_{e_i}) + \Delta _h f(A_h^i) \subset (f(a_{e_i}), f(a_{e_i}+h)] = (f(a_{e_i}), f(a_{e_i+1})].\)

Since *f* is a *k*-convex function, Lemma 2.1 proves that \(g_i:= f(a_{e_i}) + \Delta _{h}f\) is a \((k-1)\)-convex function, and we have

It follows from the induction hypothesis that

contains \(\gg _k \frac{|A_{h}^i|^{2^{k}-1}}{|A_{h}^i+A_{h}^i-A_{h}^i|^{2^{k}-k-1}}\) elements in \((f(a_{e_i}),f(a_{e_i+1})]\). This argument can be run for every element of \(A_{h}\), and then also for each \(h \in H\) to obtain

Above we have used the trivial facts that \(|A+A-A| \gg |A_{h}^i+A_{h}^i - A_{h}^i|\) and \(|A_h^i| = i\). Now by Hölder’s inequality, we have

Recalling that \(|H| \ll |A+A-A||A|^{-1}\), (10) and (11) yield the desired

with all constructed elements lying in \((\min f(A),\max f(A)]\). \(\square \)

### Remark

In [9], Lemma 3.1 is used to find a consecutive difference \(h\in A-A\) with many realisations, and this is done by dyadic pigeonholing.

Instead we have found a large set of consecutive pairs \((a_i,a_{i+1})\) which don’t have many elements of \(A+A-A\) in between them. By Lemma 2.3, there are few possible values that \(a_{i+1}-a_i\) can take, and therefore the pigeonhole principle proves that some of these differences must be realised many times. This approach avoids the logarithmic factors intrinsic to a dyadic pigeonholing argument.

## 4 Proof of Theorem 1.3

In proving Theorem 1.3, we will actually prove the following stronger but more cumbersome result, because it makes the induction step more manageable.

### Proposition 4.1

Let \(A = \{a_1< \dots < a_N\}\) be any set of reals (with \(N\ge 2\)), *k* be a positive integer and *f* be a *k*-convex function. Also let \(d >0\) be such that

For \(i = 1,\dots , N\), set

and for \(i = 2,\dots ,N\), set

Then the set

contains \(\gg _k |A|^{k+1}\) elements in \((\min f(A),\max f(A)]\).

### Proof

The proof is by induction on *k*. We begin with the base step. Let \(d < a_i-a_j\) for all \(j<i\). We have

Since *f* is a convex function it follows that
\(f(a_1+d)-f(a_1)< f(a_2+d)-f(a_2)< \dots < f(a_N+d)-f(a_N),\)

and consequently if *i* is fixed, then for any \(j\le i\) the terms
\(f(a_i) + f(a_j+d) - f(a_j)\)

are all different and lie in the interval \(I_i = (f(a_i),f(a_i + d)]\). This produces *i* elements of \(f(S_{1,N}) + f(S_{1,N}) - f(S_{1,N})\). Since \(d < a_i-a_j\) for all \(j<i\), the intervals \(I_i\) are disjoint. Apply this argument for \(i = 1, \dots , N-1\), producing
\(\sum _{i=1}^{N-1} i \gg N^2\)

elements of \(f(S_{1,N}) + f(S_{1,N}) - f(S_{1,N})\) lying in \((\min f(A), \max f(A)]\), thus completing the base step.

We proceed to the induction. Note that (12) guarantees that the intervals

spanned by the sets \(P_{k,i}\) are all disjoint. Using the squeezing lemma (Lemma 2.2) and (12), we have

Since *f* is *k*-convex, the new functions \(g_i:= f(a_i) + \Delta _d f\) are all \((k-1)\)-convex by Lemma 2.1, and also

From (12), the inequality \((k-1)d \le a_i-a_j\) trivially holds for all \(j<i\), so the induction hypothesis can be applied to \(A_i:= \{a_1, \dots , a_{i}\}\), showing that the set

contains \(\gg |A_i|^{k} = i^{k}\) elements in \((f(a_i),f(a_{i+1})]\). Applying this for each function \(g_i\) we get

and all constructed elements lie in \((\min f(A),\max f(A)]\), closing the induction. \(\square \)

### Proof of Theorem 1.3

Given \(A = \{a_1< \dots < a_N\}\), let \(b,b' \in A\) be such that \(d_0 = b-b'\) is the smallest positive element of \(A-A\).

We start by proving the case where \(k= 2s\) is even. We set

where \(M = \lfloor N/k \rfloor \). Now apply Proposition 4.1 to \(A'\) with \(d = d_0\). Since \(d_0 \in A-A\) we get \(S_{k,M} \subset (s+1)A-sA\), and the result follows.

If \(k = 2s-1\) is odd, then instead using

completes the proof. \(\square \)

We now prove Theorem 1.4.

### Proof of Theorem 1.4

Set \(f(x) = \log (x)\) in the case \(k = 2s-1\) of Theorem 1.3. Using a crude upper bound on the size of the quotient set, we get that for any natural number *s*,

Taking square roots and setting \(m(s) = 2^k = 2^{2s-1}\) completes the proof. \(\square \)
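The role of \(f = \log \) is that sums of logarithms become products, so the sumset bounds of Theorem 1.3 turn into product/quotient-set bounds. The correspondence can be sketched exactly with powers of two (a hypothetical example set), where \(\log _2\) takes integer values:

```python
from fractions import Fraction

# With f = log, the set f(X) + f(X) - f(X) is in bijection with X*X/X.
# Using X = {2^e} keeps everything exact: log2 is just the exponent.
exponents = [0, 1, 3, 7, 12]
X = [2 ** e for e in exponents]

prod_quot = {Fraction(x * y, z) for x in X for y in X for z in X}   # X*X/X
log_sums = {a + b - c for a in exponents for b in exponents for c in exponents}

print(len(prod_quot) == len(log_sums))   # True: x -> log2(x) is a bijection
```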

One can think of a *k*-convex set *A* as *f*([*N*]) where *f* is a *k*-convex function. Then [9, Theorem 1.3] shows that

This is also a corollary of Proposition 4.1 by setting *A* to be an arithmetic progression. In fact, Proposition 4.1 is designed to synthesise the important properties of [*N*] which make *f*([*N*]) grow under many sums and differences.

## References

1. Balog, A., Roche-Newton, O., Zhelezov, D.: Expanders with superquadratic growth. Electron. J. Comb. **24**(3), Paper No. 3.14, 17 pp. (2017)
2. Basit, A., Lund, B.: An improved sum-product bound for quaternions. SIAM J. Discret. Math. **33**(2), 1044–1060 (2019)
3. Bloom, T.F., Jones, T.G.F.: A sum-product theorem in function fields. Int. Math. Res. Not. IMRN **2014**(19), 5249–5263 (2014)
4. Bourgain, J., Chang, M.-C.: On the size of \(k\)-fold sum and product sets of integers. J. Am. Math. Soc. **17**(2), 473–497 (2004)
5. Bradshaw, P.J., Hanson, B., Rudnev, M.: Higher convexity and iterated second moment estimates. Electron. J. Comb. **29** (2021)
6. Bush, A., Croot, E.: Few products, many \(h\)-fold sums. Int. J. Number Theory **14**(8), 2107–2128 (2018)
7. Elekes, G., Nathanson, M.B., Ruzsa, I.Z.: Convexity and sumsets. J. Number Theory **83**(2), 194–201 (2000)
8. Erdős, P., Szemerédi, E.: On sums and products of integers. In: Studies in Pure Mathematics, pp. 213–218. Birkhäuser, Basel (1983)
9. Hanson, B., Roche-Newton, O., Rudnev, M.: Higher convexity and iterated sum sets. Combinatorica **42**, 71–85 (2020)
10. Hanson, B., Roche-Newton, O., Senger, S.: Convexity, superquadratic growth, and dot products. J. Lond. Math. Soc. (2021)
11. Hanson, B., Roche-Newton, O., Zhelezov, D.: On iterated product sets with shifts. II. Algebra Number Theory **14**(8), 2239–2260 (2020)
12. Katz, N.H., Shen, C.-Y.: A slight improvement to Garaev’s sum product estimate. Proc. Am. Math. Soc. **136**(7), 2499–2504 (2008)
13. Konyagin, S.: \(h\)-fold sums from a set with few products. Mosc. J. Comb. Number Theory **4**(3), 14–20 (2014)
14. Murphy, B., Rudnev, M., Shkredov, I., Shteinikov, Y.: On the few products, many sums problem. J. Théor. Nombres Bordeaux **31**(3), 573–602 (2019)
15. Petridis, G.: New proofs of Plünnecke-type estimates for product sets in groups. Combinatorica **32**(6), 721–733 (2012)
16. Rudnev, M.: On distinct cross-ratios and related growth problems. Mosc. J. Comb. Number Theory **7**(3), 51–65 (2017)
17. Rudnev, M., Stevens, S.: An update on the sum-product problem. In: Mathematical Proceedings of the Cambridge Philosophical Society, vol. 173, pp. 411–430. Cambridge University Press, Cambridge (2022)
18. Ruzsa, I., Shakan, G., Solymosi, J., Szemerédi, E.: On distinct consecutive differences. In: Combinatorial and Additive Number Theory IV: CANT, New York, USA, 2019 and 2020, vol. 4, pp. 425–434. Springer (2021)
19. Solymosi, J.: On sum-sets and product-sets of complex numbers. J. Théor. Nombres Bordeaux **17**(3), 921–924 (2005)
20. Stevens, S., Warren, A.: On sum sets of convex functions (2021)

## Acknowledgements

I would like to thank Misha Rudnev for useful input and suggesting some of these problems. I would also like to thank Tom Bloom, Oleksiy Klurman, Sam Mansfield, Akshat Mudgal, Jonathan Passant and Oliver Roche-Newton for useful conversations. I would also like to thank the referees for carefully reading the paper, for their suggestions, and for their kind comments.

## Author information

### Authors and Affiliations

### Corresponding author

## Additional information

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Appendix A: Proof of Theorems 1.5 and 1.6

### Appendix A: Proof of Theorems 1.5 and 1.6

Let \((\mathbb {F},\Vert \cdot \Vert )\) be a field with a norm. In this section, we will use extensively the notation

That is, *B*(*a*, *r*) is the closed ball around *a* with radius *r*.

We will need a version of Lemma 2.3 which is applicable in \(\mathbb {C}\) and in \(\mathbb {F}_q((t^{-1}))\).

### Lemma A.1

Let \((\mathbb {F},\Vert \cdot \Vert )\) be a field with a norm, and let *A* be a finite subset of \(\mathbb {F}\). Write \(D:= A-A = \{0\} \cup \{ d_1,\dots , d_{|D|-1} \}\) such that the norms of the \(d_i\) are non-decreasing. If \(a,a' \in A\) are distinct and

then \(a-a' \in \{d_1, \dots , d_Z\}\).

The proof is almost identical to the proof of Lemma 2.3.

### Proof

If not then \(a-a' = d_Y\) where \(Y>Z\), whence

for \(i = 1,\dots ,Y\). This produces \(Y>Z\) elements of \(A+A-A\) in \(B(a,\Vert a-a'\Vert )\), a contradiction. \(\square \)

Importantly Lemma A.1 can be applied to finite subsets of \(\mathbb {C}\) with the usual complex modulus as the norm, and \(\mathbb {F}_q((t^{-1}))\) with \(\Vert x\Vert = q^{\deg x}\) as the norm.
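Lemma A.1 over \(\mathbb {C}\) can be stress-tested numerically, in the same spirit as Lemma 2.3. The sketch below uses random Gaussian integers (an arbitrary choice, made so that squared moduli are exact) and checks that the position of \(a-a'\) in the norm-ordering of \(A-A\) never exceeds the number of elements of \(A+A-A\) in \(B(a,\Vert a-a'\Vert )\).

```python
import random

# Numerical check of Lemma A.1 in C, on random Gaussian integers.
def norm2(z):
    """Squared modulus; exact for Gaussian integers of this size."""
    return z.real ** 2 + z.imag ** 2

random.seed(7)
A = list({complex(random.randint(-50, 50), random.randint(-50, 50))
          for _ in range(20)})

sum_set = {a + b - c for a in A for b in A for c in A}        # A+A-A
diffs = {a - b for a in A for b in A if a != b}               # nonzero A-A
diff_norms = sorted(norm2(d) for d in diffs)

for a in A:
    for a2 in A:
        if a == a2:
            continue
        r2 = norm2(a - a2)
        n = sum(1 for x in sum_set if norm2(x - a) <= r2)     # |(A+A-A) ∩ B(a,r)|
        rank = sum(1 for d in diff_norms if d <= r2)          # index of a - a2 in D
        assert rank <= n
print("Lemma A.1 verified over the Gaussian integers")
```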

### 1.1 Sum-Product Type Result in \(\mathbb {C}\)

We prove Theorem 1.5 following a method of Solymosi [19], incorporating Lemma A.1 to obtain an improvement.

### Proof of Theorem 1.5

For notation, let \(B_A(a):= B(a,|a-b|)\) where *b* is a nearest neighbour of *a* (according to the standard modulus function in \(\mathbb {C}\)). We will say that \((a,b,c) \in A^3\) is *good* if:

Note that balls of the form \(B_A(a)\) have no elements of \(A\backslash \{a\}\) in their interior. It can be easily shown that no complex number can be contained in more than 7 such balls. Therefore, we have

and for any \(c \in A\)

Since each \(a \in A\) has at least one nearest neighbour, applying the pigeonhole principle to (15) and (16), there exists a subset \(T \subset A^3\) with \(|T| \ge |A|^2/2\) such that each triple \((a,b,c)\in T\) is good. Now consider the set *T* under the map
\(\Psi (a,b,c) := (a-b,\ ac,\ bc).\)

As long as \(a\ne b\) (which certainly holds for all triples in *T*), \(\Psi \) is injective, whereupon
\(|\Psi (T)| = |T| \ge \frac{|A|^2}{2}.\)

We now search for an upper bound on \(|\Psi (T)|\) and will do this crudely by counting, given that (*a*, *b*, *c*) is a good triple, how many values \(a-b\) may take, and then separately, how many values the pair (*ac*, *bc*) can take.

Because (*a*, *b*, *c*) is good, *a* satisfies (13). Also, since \(b \in B_A(a)\), we know that \(B_A(a) = B(a,|a-b|)\), so Lemma A.1 proves that there are \(\ll |A+A-A||A|^{-1}\) possible values that \(a-b\) may take.

The number of values that *ac* may take is trivially upper-bounded by |*AA*|. Once *ac* is fixed, since *bc* lies in \(c\cdot B_A(a)\) and (14) holds, there are \(\ll |AA||A|^{-1}\) values that *bc* may take.

Putting this together we obtain
\(\frac{|A|^2}{2} \le |\Psi (T)| \ll \frac{|A+A-A|}{|A|} \cdot |AA| \cdot \frac{|AA|}{|A|},\)

which we rearrange to arrive at the desired result. \(\square \)

### 1.2 Sum-Product Type Result in \(\mathbb {F}_q((t^{-1}))\)

Next we prove Theorem 1.6 with the method of Bloom and Jones [3] with necessary modifications to incorporate the \(|A+A-A|\) term.

We will firstly introduce some notation that will be used throughout the proof. We can put a norm structure on \(\mathbb {F}_q((t^{-1}))\) by saying that \(\Vert x\Vert = q^{\deg x}\), where \(\deg \) is the standard degree for Laurent series. Observe that the norm of a difference \(\Vert a-a'\Vert \) is a measure of how similar *a* and \(a'\) are; it records the term to the right of which the Laurent series of *a* and \(a'\) agree. Next let

and

We will argue that any intersecting balls in \(\mathbb {F}_q((t^{-1}))\) are nested; that is, if \(y \in B(x_1,r_1) \cap B(x_2,r_2)\), then either \(B(x_1,r_1) \subset B(x_2,r_2)\) or \(B(x_2,r_2) \subset B(x_1,r_1)\). Indeed, if \(r_1 \le r_2\) the former occurs, if \(r_2 \le r_1\) the latter occurs. It follows that if \(r_1 = r_2\) then \(B(x_1,r_1) = B(x_2,r_2)\).

For a proof, suppose \(r_1 \le r_2\) and that \(y \in B(x_1, r_1) \cap B(x_2,r_2)\). This means that the Laurent series for \(x_1, y\) agree on degrees \(\ge r_1\) and those for \(x_2, y\) agree on degrees \(\ge r_2\). Since \(r_1\le r_2\), this implies that \(x_1,x_2\) agree on degrees \(\ge r_2\). Now if \(w \in B(x_1,r_1)\), then *w* agrees with \(x_1\) on degrees \(\ge r_1\). It follows that *w* agrees with \(x_2\) on degrees \(\ge r_2\). This demonstrates that \(w \in B(x_2,r_2)\), completing the proof.
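The nested-balls property is easy to experiment with in a toy model that truncates elements of \(\mathbb {F}_q((t^{-1}))\) to Laurent polynomials over \(\mathbb {F}_5\). The ball-membership test below is an assumption reconstructed from the discussion above (the displayed definitions are omitted here): \(y \in B(x,r)\) is taken to mean that *x* and *y* agree on all degrees \(\ge r\), i.e. \(\deg (x-y) < r\).

```python
Q = 5  # work over F_5; an element is a dict {exponent: nonzero coefficient}

def sub(x, y):
    # coefficient-wise difference modulo Q, dropping zero coefficients
    out = dict(x)
    for e, c in y.items():
        out[e] = (out.get(e, 0) - c) % Q
    return {e: c for e, c in out.items() if c}

def deg(x):
    # degree of a Laurent polynomial; deg(0) = -infinity
    return max(x) if x else float("-inf")

def in_ball(y, x, r):
    # y lies in B(x, r) iff x and y agree on all degrees >= r
    return deg(sub(x, y)) < r

# Two balls sharing the point y, with radii r1 = 0 <= r2 = 2:
x1, x2 = {2: 1, 0: 3}, {2: 1, 1: 2}
y = {2: 1, 0: 3, -1: 4}
assert in_ball(y, x1, 0) and in_ball(y, x2, 2)
# Nesting: a point of the smaller ball B(x1, 0) also lies in B(x2, 2).
w = {2: 1, 0: 3, -2: 1}
assert in_ball(w, x1, 0) and in_ball(w, x2, 2)
```

The underlying reason is the ultrametric inequality \(\deg (u+v) \le \max (\deg u, \deg v)\), which the proof above uses implicitly.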

We will use the same definition and notation for separable sets and *A*-chains as in [3].

### Definition A.1

A finite set \(A \subset \mathbb {F}_q((t^{-1}))\) is *separable* if its elements can be indexed as

such that for each \(1\le j \le |A|\) there is a ball \(B_j\) with

We also say that \({\mathcal {C}} = (c_1, \dots , c_n) \in A^n\) is an *A*-*chain* of length *n* if the \(c_i\) are distinct and

The following two lemmas are proved in [3]. We list them here without proof.

### Lemma A.2

If \(A \subset \mathbb {F}_q((t^{-1}))\) is a separable set, then for any natural numbers *k*, *n*, *m* such that \(n+m = k\),

### Remark

In [3], this lemma is stated only for sumsets, not difference sets. However, since the result is proved by showing that the corresponding energy is the minimum possible, it also yields the corresponding bound for difference sets.

### Lemma A.3

If the elements of \({\mathcal {C}}\) form an *A*-chain, then \({\mathcal {C}}\) contains a separable set of size at least \(|{\mathcal {C}}|/q\).

The strategy of our proof is as follows: if we can find a suitably large *A*-chain, then Lemma A.3 shows that it contains a large separable set *U*. Then Lemma A.2 shows that *U* has a large *k*-fold sumset, and therefore so does *A*. Applying Plünnecke’s Inequality will then complete the proof. Thus the key result is the following:

### Proposition A.1

Let \(A \subset \mathbb {F}_q((t^{-1}))\) be finite. Then *A* contains an *A*-chain \({\mathcal {C}}\) with

### Proof

For each \(a \in A\), write *N*(*a*) for the length of the longest *A*-chain \((c_1,\dots ,c_k)\) with \(c_k = a\). We begin by dyadic pigeonholing: for each \(0 \le j \le \log |A|\), let \(A_j\) be the set of \(a\in A\) such that \(2^j \le N(a) < 2^{j+1}\). By the pigeonhole principle, there exists some \(j_0\) such that \(|A_{j_0}| \ge |A|/\log |A|\).
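The dyadic pigeonholing step can be sketched generically. The helper below (hypothetical, not from the paper) buckets elements by \(\lfloor \log _2 N(a)\rfloor \) and returns the largest bucket, which must capture at least a \(1/(\log _2|A|+1)\) fraction of *A*, since \(1 \le N(a) \le |A|\) forces at most \(\log _2|A|+1\) nonempty buckets.

```python
import math

def dyadic_pigeonhole(N):
    # N maps each a in A to N(a), with 1 <= N(a) <= |A|.
    # A_j = {a : 2^j <= N(a) < 2^(j+1)}; return (j0, A_j0) for the largest A_j.
    buckets = {}
    for a, n in N.items():
        j = n.bit_length() - 1  # the unique j with 2^j <= n < 2^(j+1)
        buckets.setdefault(j, []).append(a)
    return max(buckets.items(), key=lambda kv: len(kv[1]))

# Example: |A| = 64, with N(a) ranging over 1..64.
N = {a: (a % 64) + 1 for a in range(64)}
j0, A_j0 = dyadic_pigeonhole(N)
assert len(A_j0) >= len(N) / (math.log2(len(N)) + 1)
```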

To complete the proof, it suffices to show that

To this end, we say that a triple \((a,b,c)\in A^3\) is *good* if

Let *T* be the set of all good triples. We complete the proof by showing the following upper and lower bounds on |*T*|:

We begin by proving (21). Observe that

where \(C_{j_0}(v)\) is the set of \(a \in A_{j_0}\) such that \(v \in B_A(a)\). Similarly, for any \(c \in A\)

It is worth noting that, unlike when we are working in \(\mathbb {C}\), \(|C_{j_0}(v)|\) is not bounded by any constant. However, for all \(a \in C_{j_0}(v)\), the corresponding balls \(B_A(a)\) all share the point *v* and are therefore nested. In other words, the elements of \(C_{j_0}(v)\) can be ordered to form an *A*-chain. It follows that

For each \(c \in A\), apply the pigeonhole principle to (23) and (24) to obtain a subset \(A'_c \subset A_{j_0}\) with \(|A'_c| \gg |A_{j_0}|\) such that for each \(a \in A'_c\), (19) and (20) both hold.

Given an *A*-chain \({\mathcal {C}} = (c_1,\dots ,c_{N(a)})\) with \(c_{N(a)} = a\), the definition of an *A*-chain shows that \(c_i \in B_A(a)\) for \(i = 1, \dots , N(a)\). It follows that

Now for any \(c \in A\) and \(a \in A'_c\), it follows that (17), (19), (20) all hold. Once *a* is fixed (25) shows that at least \(2^{j_0}\) values of *b* will satisfy (18), completing the proof that

We now prove (22). The map

is manifestly injective when restricted to *T*. As in the proof of Theorem 1.5, we upper bound \(|\Psi (T)|\) by

Since *a* satisfies (19) and \(b \in B_A(a)\), Lemma A.1 proves that there are \(\ll \frac{2^{j_0}|A+A-A|}{|A_{j_0}|}\) possible values that \(a-b\) may take. The number of values that *ac* may take is trivially upper-bounded by |*AA*|. Once *ac* is fixed, since *bc* lies in \(c\cdot B_A(a)\) and (20) holds, there are \(\ll \frac{2^{j_0}|AA|}{|A_{j_0}|}\) values that *bc* may take.

Putting this all together, we get

whereupon using \(|A_{j_0}| \gg \frac{|A|}{\log |A|}\) and rearranging completes the proof. \(\square \)

### Proof of Theorem 1.6

By Lemma A.3 and Proposition A.1, there exists a separable subset \(U \subset A\) of size at least

Then using Lemma A.2 and Plünnecke’s inequality (see [15] for a short proof), we have

Taking *k*th roots we get

Rearranging yields the desired result for sufficiently large *k*. \(\square \)
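For reference, the form of Plünnecke's inequality invoked above (Petridis's argument in [15] gives a short proof) can be stated as follows; the exact variant used is assumed to be of this shape:

```latex
% Plünnecke--Ruzsa inequality: if $A$ is a finite subset of an abelian
% group and $|A + A| \le K|A|$, then for all natural numbers $m, n$,
\[
  |mA - nA| \le K^{m+n}\,|A| .
\]
```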

## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Bradshaw, P.J. Growth in Sumsets of Higher Convex Functions.
*Combinatorica* **43**, 769–789 (2023). https://doi.org/10.1007/s00493-023-00035-6
