## 1 Introduction

The growth of convex sets under sums and differences has received significant attention over the last twenty years. Recently there has been important progress, culminating in, among other results, a threshold-breaking bound for the number of dot products in the plane [10]. Another central problem, which appears related, is whether taking sufficiently many products of the difference set $$A-A$$ forces unbounded growth.

### 1.1 Growth of Sets Under Convex Functions

Let A be a finite set of reals. The sum-product problem is one of the most studied problems in additive combinatorics. It deals with the relative sizes of the sumset and product set:

\begin{aligned} A+A = \{a+a': a,a'\in A\} \qquad \text {and} \qquad AA = \{aa': a,a'\in A\}. \end{aligned}
(2)
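For readers who wish to experiment, the definitions in (2) can be computed directly; the following Python sketch (ours, with illustrative helper names) shows the prototypical behaviour on an arithmetic progression, which has a small sumset but a larger product set.

```python
# Toy illustration (ours) of the sumset and product set from (2);
# the helper names are chosen only for this sketch.
def sumset(A, B):
    """{a + b : a in A, b in B}"""
    return {a + b for a in A for b in B}

def product_set(A, B):
    """{a * b : a in A, b in B}"""
    return {a * b for a in A for b in B}

A = {1, 2, 3, 4}                 # an arithmetic progression
print(len(sumset(A, A)))         # 7 = 2|A| - 1: additive structure keeps A+A small
print(len(product_set(A, A)))    # 9: the product set is already larger
```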

The following was conjectured by Erdős and Szemerédi [8] (originally over the integers).

### Conjecture 1.1

(Sum-Product Conjecture) For any positive $$\delta <1$$, there exists a constant $$C(\delta )>0$$, such that for any sufficiently large $$A\subset \mathbb {R}$$, we have

\begin{aligned} \max \{|A+A|,|AA|\} \ge C(\delta ) |A|^{1+\delta }. \end{aligned}
(3)

### Notation

We use Vinogradov’s symbol extensively. We write $$X \ll Y$$ to mean that $$X \le CY$$ for some absolute constant C, and $$X \lesssim Y$$ to mean that $$X \le C_1Y(\log Y)^{C_2}$$ for some absolute constants $$C_1,C_2$$. If $$X \ll Y$$ and $$Y \ll X$$, we will write $$X \approx Y$$. If additionally a suppressed constant $$C:=C(k)$$ depends on some parameter k, we write $$\ll _k, \lesssim _k, \approx _k$$. Usually in such cases k will be fixed and therefore small compared to the growing parameters.

Equipped with this notation, we could rewrite (3) as

\begin{aligned} \max \{ |A+A|,|AA| \} \gg _\delta |A|^{1+\delta }. \end{aligned}

While the full sum-product conjecture remains open, the progress of the last forty years has consisted of proving the above for increasing values of $$\delta$$. The best bound known to date is $$\delta = 1/3 + \frac{2}{1167} - o(1)$$, due to Rudnev and Stevens [17].

The notation in (2) naturally generalises to incorporate differences and ratios as well as longer sums. One of the key quantities of interest in this paper will be

\begin{aligned} A+A-A = \{a+a'-a'': a,a',a'' \in A \}. \end{aligned}

For longer sums and products we will suppress notation so that nA is the n-fold sumset and $$A^{(n)}$$ is the n-fold product set. Incorporating differences and division, we allow sets of the form $$mA-nA$$ and $$A^{(m)}/A^{(n)}$$.

A set $$B = \{ b_1< \dots < b_N \}$$ is said to be convex if

\begin{aligned} b_2-b_1< b_3-b_2< \dots < b_N - b_{N-1}. \end{aligned}

It can be easily shown that a convex set can be expressed as $$B = f([N])$$ where f is an increasing function with an increasing first derivative and $$[N]:=\{1,\dots ,N\}$$.
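This correspondence is straightforward to check numerically; the sketch below (ours) realises a convex set as $$f([N])$$ with $$f(x) = x^2$$ and confirms that consecutive gaps increase.

```python
# A convex set realised as f([N]) for f(x) = x^2 (increasing with an
# increasing derivative on [1, N]); consecutive differences must grow.
N = 10
B = [x * x for x in range(1, N + 1)]
gaps = [b2 - b1 for b1, b2 in zip(B, B[1:])]   # b_{i+1} - b_i = 2i + 1
assert all(g1 < g2 for g1, g2 in zip(gaps, gaps[1:]))
print(gaps)   # [3, 5, 7, 9, 11, 13, 15, 17, 19]
```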

We say a $$C^1(\mathbb {R})$$ function f is convex if both f and $$f'$$ are either strictly increasing or strictly decreasing. There is an aphorism in additive combinatorics that convex functions destroy additive structure; if f is a convex function, then either A or f(A) can be additively structured, but not both. Statements about the additive structure of A and f(A) are more general than sum-product results. Indeed, if for some positive $$\delta <1$$, we have

\begin{aligned} \max \{|A+A|,|f(A)+f(A)|\} \gg |A|^{1+\delta }, \end{aligned}
(4)

then one immediately obtains a sum-product theorem by choosing $$f(x) = \log (x)$$. Elekes, Nathanson and Ruzsa [7] proved a more general form of (4) with $$\delta = 1/4$$. We first prove the following generalisation of (4) to longer sums and differences.

### Theorem 1.1

Let A be a finite set of reals and f be a convex function. Then

\begin{aligned} |A+A-A| |f(A)+f(A)-f(A)| \gg |A|^3. \end{aligned}

In [9], Hanson, Roche-Newton and Rudnev proved the slightly weaker statement

\begin{aligned} |A+A-A||f(A)+f(A)-f(A)| \gg \frac{|A|^3}{\log ^3 |A|}. \end{aligned}

The removal of the logarithmic factors makes the bound of Theorem 1.1 sharp. Indeed, if $$f(x) = x^2$$ and $$A = [N]$$ then

\begin{aligned} |A+A-A||f(A)+f(A)-f(A)| \approx |A|^3. \end{aligned}
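This sharpness can be confirmed by computer; in the sketch below (ours), the crude constants 1 and 9 come simply from the ranges of integers the two sets occupy, and the computation is only an illustration, not part of the argument.

```python
# Numeric check (ours) of the sharpness example A = [N], f(x) = x^2:
# |A+A-A| is linear in N while |f(A)+f(A)-f(A)| is quadratic, so the
# product is of order |A|^3.
N = 30
A = range(1, N + 1)
S = {a + b - c for a in A for b in A for c in A}          # A + A - A
F = {a*a + b*b - c*c for a in A for b in A for c in A}    # f(A)+f(A)-f(A)
assert len(S) == 3 * N - 2                # an interval of integers
product = len(S) * len(F)
assert N**3 <= product <= 9 * N**3        # matches |A|^3 up to constants
```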

Theorem 1.1 is simply the base case for the forthcoming Theorem 1.2. It is stated separately because we present a short, new proof inspired by the very simple method by which Solymosi establishes the $$\delta = 1/4$$ sum-product bound in $$\mathbb {C}$$ [19].

A simple idea which emerged in a recent paper of Hanson, Rudnev and the author [5] is that $$A+A-A$$ must, in some sense, be evenly spaced among the elements of A. More specifically, if $$a,a'\in A$$ have very few elements of $$A+A-A$$ between them, then a and $$a'$$ must be close to each other. The proof of Theorem 1.1 elucidates how this idea may be leveraged in proving sumset bounds.

### 1.2 Higher Convex Functions

It is expected that the more convex a function, the less likely f(A) is to exhibit additive structure. We say a $$C^k(\mathbb {R})$$ function is k-convex if it has strictly monotone derivatives $$f^{(0)}, f^{(1)},\dots , f^{(k)}$$. Here monotone means either increasing or decreasing. A 1-convex function is simply a convex function and a 0-convex function is a strictly monotone function. To streamline exposition, we henceforth assume that a k-convex function f has monotone increasing derivatives $$f^{(0)}, f^{(1)}, \dots , f^{(k)}$$. If any of them are decreasing, the proofs herein can be easily modified to handle this. Such modifications are discussed in [9]. It is nonetheless worth noting that if f has all derivatives positive, then $$F(x):= -f(-x)$$ has the signs of its derivatives alternating and F has the same sumset behaviour as f. This is a more direct way to see that our theorems will apply in the important case where $$f = \log$$.

The forthcoming Theorem 1.2 improves the main result in [9] by logarithmic factors. Specifically, we use an idea from [5] to sidestep the need for dyadic pigeonholing.

### Theorem 1.2

Let A be a finite set of real numbers and f be a k-convex function. Then if $$|A+A-A|\le K|A|$$, we have

\begin{aligned} |2^kf(A)-(2^k-1)f(A)| \gg _k \frac{|A|^{2^{k+1}-1}}{|A+A-A|^{2^{k+1}-k-2}} \ge |A|^{k+1} K^{-(2^{k+1}-k-2)}. \end{aligned}

Setting $$A = [N]$$ and $$f(x) = x^{k+1}$$ verifies that the term $$|A|^{k+1}$$ on the right-hand side is best possible.
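The $$k=2$$ instance of this sharpness can be checked by computer; the following sketch (ours) computes $$4f(A)-3f(A) = 2^kf(A)-(2^k-1)f(A)$$ for $$A=[N]$$ and $$f(x)=x^3$$, with the constant 7 coming only from the range the values occupy.

```python
# Rough check (ours) for k = 2: A = [N], f(x) = x^{k+1} = x^3. The set
# 2^k f(A) - (2^k - 1) f(A) = 4f(A) - 3f(A) has size Θ(N^3) = Θ(|A|^{k+1}).
N = 12
fA = [x**3 for x in range(1, N + 1)]
def add(X, Y):
    return {x + y for x in X for y in Y}
four = add(add(fA, fA), add(fA, fA))      # 4f(A)
three = add(add(fA, fA), fA)              # 3f(A)
T = {p - q for p in four for q in three}  # 4f(A) - 3f(A)
assert N**3 <= len(T) <= 7 * N**3         # Θ(N^3), matching |A|^{k+1}
```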

In [10], it is proved that if A is a finite real set and f is any convex function, then

\begin{aligned} |2f(A\pm A) - f(A\pm A)| \gg |A|^2. \end{aligned}

We prove the following generalisation.

### Theorem 1.3

Let A be a set of reals and f be a k-convex function. If k is even with $$k = 2s$$ then

\begin{aligned} | 2^k f((s+1)A-sA) - (2^k-1) f((s+1)A-sA)| \gg _k |A|^{k+1} \end{aligned}

and if k is odd with $$k = 2s-1$$ then

\begin{aligned} | 2^k f(sA-sA) - (2^k-1) f(sA-sA)| \gg _k |A|^{k+1}. \end{aligned}

Notice that in both cases, the number of summands inside the function f is the same as the power of |A| on the right-hand side. For the purposes of discussion, assume we are in the odd k case. A generic set A is expected to have $$|sA-sA| \approx |A|^{2s} = |A|^{k+1}$$, and therefore $$|f(sA-sA)| \approx |A|^{k+1}$$ as well, since f is a monotone function. However, $$|sA-sA|$$ may of course be significantly smaller than $$|A|^{k+1}$$. Theorem 1.3 affirms that even in this case, sufficiently many sums (and differences) of $$f(sA-sA)$$ guarantee the same $$|A|^{k+1}$$ growth.

It is also worth mentioning that our proof method actually permits a stronger conclusion: in the even k case, $$(s+1)A-sA$$ may be replaced by $$A + \{ -s,\dots , s \}d$$, and in the odd k case, $$sA-sA$$ may be replaced by $$A-A + \{ -(s-1),\dots , s-1 \}d$$, where d is some fixed element of $$A-A$$. So even though generically $$|A + \{-s, \dots , s\}d| \approx s|A|$$, adding sufficiently many copies of $$f(A + \{-s, \dots , s\}d)$$ guarantees $$|A|^{k+1}$$ growth.

While it may be possible to improve Theorem 1.3 by reducing the number of summands of $$f(sA-sA)$$ and $$f((s+1)A-sA)$$ on the left-hand side, the bound $$|A|^{k+1}$$ cannot be improved, as is evinced by setting $$A = [N]$$ and $$f(x) = x^{k+1}$$.

It is easy to see that $$A-A + \{ -s+1,\dots , s-1 \}d$$ is the union of a small number of translates of $$A-A$$. (A similar statement applies to $$A + \{ -s,\dots , s \}d$$.) This motivates the following conjecture.

### Conjecture 1.2

Let A be a set of reals and f be a k-convex function. Then

\begin{aligned} |2^kf(A-A) - (2^k-1)f(A-A)| \gg _k |A|^{k+1}. \end{aligned}

We feel that bounds on sumsets of convex functions with many summands, as well as the iteration techniques used to prove them, are of independent interest. Additionally, they allow improvements of bounds for 2-fold sumsets and energy of highly convex sets; see [5] for more details.

### 1.3 Applications

#### 1.3.1 Growth for Products of Generalised Difference Sets

It was conjectured in [1] that for any $$s >0$$, there exists $$m = m(s)$$ such that if A is a finite real set then

\begin{aligned} |(A-A)^{(m)}| \gg _s |A|^s, \end{aligned}
(5)

where we recall that $$(A-A)^{(m)}$$ denotes an m-fold product set $$\underbrace{(A-A)\dots (A-A)}_{m \text { times}}$$. This was proved in [11] for the case when $$A \subset \mathbb {Q}$$. Balog, Roche-Newton and Zhelezov proved (5) for $$s=3$$, and additionally that for $$s=17/8$$, choosing $$m = 3$$ suffices. These were the first known results for $$s>2$$. Recently, Hanson, Roche-Newton and Senger [10] implicitly proved that (5) holds with $$m = 8, s = 33/16$$, but their method was stronger in the sense that some of the $$A-A$$ terms could be replaced with $$A-a$$ for specific values of $$a\in A$$. They used this to improve the best known lower bound for $$|\Lambda (P)|$$, where $$\Lambda (P)$$ is the set of dot products induced by a point set P in $$\mathbb {R}^2$$. They proved

\begin{aligned} |\Lambda (P)| \gtrsim |P|^{\frac{2}{3} + \frac{1}{3057}}, \end{aligned}

the first result to break the threshold $$|P|^{2/3}$$. See [16] for more connections between similar problems and growth in $$A-A$$.

We prove a result approaching this conjecture from a different direction, namely allowing for products of many-fold differences.

### Theorem 1.4

For any $$s \in \mathbb {N}$$, there exists $$m = m(s)$$ such that if A is a finite set of reals, then

\begin{aligned} |(sA-sA)^{(m)}| \gg _s |A|^{s}. \end{aligned}
(6)

Theorem 1.4 is an easy corollary of Theorem 1.3, and is proved in Sect. 4. If Conjecture 1.2 holds, then (5) is the natural corollary. Again, our proofs permit a stronger conclusion than (6), namely

\begin{aligned} |(A-A + \{ -s+1,\dots , s-1 \}d)^{(m)}| \gg _s |A|^{s}, \end{aligned}

which is remarkably close to (5). The proof also delivers the explicit value $$m(s) = 2^{2s-1}$$.

#### 1.3.2 Sum-Product Type Results

Studying the sizes of sumsets and product sets which are not the traditional $$A+A$$ and AA has led to many variations of the sum-product problem. See for example [4, 14, 20]. What these and many other results share with the sum-product problem is that they all enshrine the philosophy that additive structure and multiplicative structure cannot coexist in the same set. We may refer to any such result as a sum-product type result.

The sum-product problem has been studied in other fields as well. We particularly note that for $$A\subset \mathbb {C}$$, (3) is known for all $$\delta < 1/3 + c$$ (for some small c) [2] and for subsets of a function field $$A \subset \mathbb {F}_q((t^{-1}))$$, it is known for all $$\delta < 1/5$$ (with the implied constant C also depending on q) [3]. In this paper, we prove a related sum-product type result in each setting. Over the complex numbers we have the following.

### Theorem 1.5

Let $$A \subset \mathbb {C}$$ be a finite set. Then the following holds:

\begin{aligned} |A+A-A||AA|^2 \gg |A|^4. \end{aligned}

The function field $$\mathbb {F}_q((t^{-1}))$$ is the field of all Laurent series of the form

\begin{aligned} \sum _{i=-\infty }^k \alpha _i t^i, \qquad \text {where } \alpha _i \in \mathbb {F}_q \text { for all } i. \end{aligned}

We prove the following.

### Theorem 1.6

For any finite $$A \subset \mathbb {F}_q((t^{-1}))$$ and any $$\epsilon > 0$$, we have

\begin{aligned} |A+A-A|^3 |AA|^4 \gg _\epsilon q^{-2}|A|^{9-\epsilon }. \end{aligned}

Theorem 1.6 does not extend to function fields where the base field is not finite. Furthermore, the dependence in this result on q is necessary (and cannot be improved), since $$\mathbb {F}_q((t^{-1}))$$ has small non-trivial subfields. Indeed, setting $$A=\mathbb {F}_q$$ demonstrates this sharpness. It is also worth mentioning that the same proof of Theorem 1.6 holds for finite subsets of any field with nonarchimedean norm and finite residue field. In particular, it holds for finite subsets of the p-adic numbers $$\mathbb {Q}_p$$.

A form of Plünnecke’s inequality [12, Cor 1.5] shows that given any set A in some group G, there exists $$A' \subset A$$ with $$|A'|\ge |A|/2$$ such that

\begin{aligned} |-A'+A+A| \ll \frac{|-A'+A|^2}{|A|}. \end{aligned}

If $$A' \subset A \subset \mathbb {C}$$, then applying Theorem 1.5 to $$A'$$ together with some simple inequalities yields

\begin{aligned} |A-A|^2 |AA|^2 \gg |A|^5, \end{aligned}
(7)

which matches Solymosi’s bound [19] (which has since been improved). Similarly, if $$A' \subset A \subset \mathbb {F}_q((t^{-1}))$$, then applying Theorem 1.6 to $$A'$$ yields

\begin{aligned} |A-A|^3 |AA|^2 \gg _\epsilon q^{-1}|A|^{6-\epsilon }. \end{aligned}
(8)

This matches Bloom and Jones’ bound in [3], which is the best known sum-product bound in function fields with finite residue field. Unfortunately, the bounds (7) and (8) do not follow from our theorems if $$A-A$$ is replaced with $$A+A$$.

It is in the few products, many sums framework that Theorems 1.5 and 1.6 are most relevant. In studying sum-product phenomena, it is often instructive to find bounds for $$|A+A|$$ given that |AA| is small. In other words, given that there are few products, we show that there are many sums. This problem has been studied in [6, 13, 14].

In particular, we are addressing the few products, many 3-fold sums problem. For example, if we know that $$|AA| \approx |A|$$, then for $$A \subset \mathbb {C}$$ we have

\begin{aligned} |A+A-A| \gg |A|^{2} \end{aligned}

and for $$A \subset \mathbb {F}_q((t^{-1}))$$ we have

\begin{aligned} |A+A-A| \gg _{q,\epsilon } |A|^{5/3-\epsilon }. \end{aligned}

The proofs of Theorems 1.5 and 1.6 are both inspired by the combination of techniques used to prove Theorem 1.1.

### 1.4 Structure of this Paper

In Sect. 2, we present some essential preliminaries and a proof of Theorem 1.1, which forms the base step for the induction proof in the following section. In Sect. 3, we prove Theorem 1.2 using a variation of the induction proof in [9]. Section 4 is devoted to proving Theorem 1.3 through the auxiliary result Proposition 4.1. We finally demonstrate that Theorem 1.4 is an easy corollary.

The proofs of Theorems 1.5 and 1.6 are almost identical to the corresponding proofs in [19] and [3] respectively. Simple modifications using a version of Lemma 2.3 produce the improvements. For this reason, both proofs appear only in Appendix 1, which is the only section in which we do not work exclusively over $$\mathbb {R}$$.

## 2 Preliminaries

### 2.1 Convex Functions and Squeezing Elements

A simple and clever squeezing approach for proving sumset bounds for convex sets was given in [18], providing a sharp lower bound

\begin{aligned} |B+B-B| \gg |B|^2 \end{aligned}

where B is a convex set. This approach was significantly extended in [9], providing estimates for longer sums of more convex sets and also the images f(A) of sets with small additive doubling under k-convex functions.

The basic idea is the following. Recall that a set $$B = \{b_1< \dots < b_N\}$$ is convex if the differences between adjacent elements are increasing:

\begin{aligned} b_2-b_1< b_3 - b_2< \dots < b_N - b_{N-1}. \end{aligned}

It follows that if i is fixed, then for any $$1< j < i$$ the terms

\begin{aligned} b_{i} + (b_{j}-b_{j-1}) \in B+B-B \end{aligned}

are all distinct and lie in $$(b_i,b_{i+1})$$. We may apply this for each $$i \le N-1$$ to obtain $$\sum _{i=3}^{N-1} (i-2) \approx |B|^2$$ elements of $$B+B-B$$.
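The squeezing count above can be replayed numerically; the sketch below (ours) runs it on the convex set of squares and recovers exactly the promised quadratic number of distinct elements, each landing strictly between consecutive elements of B.

```python
# Sketch (ours) of the squeezing count: for a convex B, the elements
# b_i + (b_j - b_{j-1}) with 1 < j < i are distinct and fall in
# (b_i, b_{i+1}), so B + B - B has about |B|^2 elements.
N = 40
B = [x * x for x in range(1, N + 1)]         # a convex set
squeezed = set()
for i in range(2, N - 1):                     # need b_{i+1} to exist
    for j in range(1, i):                     # uses the gap B[j] - B[j-1]
        t = B[i] + (B[j] - B[j - 1])
        assert B[i] < t < B[i + 1]            # lands strictly between neighbours
        squeezed.add(t)
assert len(squeezed) == (N - 3) * (N - 2) // 2   # ~ |B|^2 / 2 distinct elements
```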

More can be said if we are looking at the images of convex functions. Given a function f, define its d-derivative by

\begin{aligned} \Delta _d f(x):= f(x+d)-f(x). \end{aligned}

### Lemma 2.1

If f is a k-convex function, then for any d, $$\Delta _d f$$ is a $$(k-1)$$-convex function.

### Proof

We use induction on k. Suppose f is 1-convex. We have

\begin{aligned} \Delta _d f(x):= f(x+d)-f(x) = \int _x^{x+d} f'(y)dy. \end{aligned}

Since $$f'$$ is monotone, it follows that $$\Delta _d f$$ is also monotone, and hence 0-convex.

Next assume the statement holds for $$(k-1)$$-convex functions. Let f be a k-convex function. By definition, this implies that $$f'$$ is a $$(k-1)$$-convex function. The induction hypothesis implies that $$\Delta _d(f')$$ is a $$(k-2)$$-convex function. But since $$\Delta _d(f') = (\Delta _d f)'$$, it follows that $$\Delta _d f$$ is $$(k-1)$$-convex, completing the induction. $$\square$$
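Lemma 2.1 can also be illustrated on a grid; the sketch below (ours, with an arbitrary choice of f and d) checks that for the 2-convex function $$f(x) = x^3$$ on $$x>0$$, the difference function $$\Delta_d f$$ is 1-convex.

```python
# Numeric illustration (ours) of Lemma 2.1: for f(x) = x^3 on x > 0
# (2-convex there), Δ_d f(x) = f(x+d) - f(x) = 3dx^2 + 3d^2 x + d^3
# should be 1-convex: it and its first differences strictly increase.
f = lambda x: x ** 3
d = 0.5
delta = lambda x: f(x + d) - f(x)             # Δ_d f
xs = [0.1 * t for t in range(1, 100)]
vals = [delta(x) for x in xs]
diffs = [v2 - v1 for v1, v2 in zip(vals, vals[1:])]
assert all(v1 < v2 for v1, v2 in zip(vals, vals[1:]))    # Δ_d f increasing
assert all(g1 < g2 for g1, g2 in zip(diffs, diffs[1:]))  # (Δ_d f)' increasing
```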

If f is convex, $$A = \{a_1< \dots < a_N\}$$ and $$d >0$$, then Lemma 2.1 implies that $$\Delta _d f$$ is increasing, meaning that

\begin{aligned} f(a_1 + d) - f(a_1)< \dots < f(a_N + d) - f(a_N). \end{aligned}

Consequently if i is fixed, then for any $$j\le i$$ the terms

\begin{aligned} f(a_i) + f(a_j+d) - f(a_j) \end{aligned}

are all different and lie in $$(f(a_i),f(a_i + d)]$$. We usually take d to be at most the smallest difference between adjacent elements of A to ensure that these intervals are disjoint. These observations are summarised in the following.

### Lemma 2.2

(The squeezing lemma) Let f be a convex function and $$d > 0$$. Let s be a real number and $$S_-$$ a set of real numbers no larger than s. Then

\begin{aligned} f(s) + \Delta _d f(S_-) \subset (f(s),f(s+d)]. \end{aligned}

### 2.2 The Basic Argument

The following method will be extended to different fields ($$\mathbb {C}$$ and $$\mathbb {F}_q((t^{-1}))$$) in Appendix 1, but ideas herein will also be employed in Sect. 3.

We will also introduce the following notation: if $$a'<a$$, then

\begin{aligned} n_A(a',a):= |(A+A-A)\cap (a',a]|. \end{aligned}

Taking $$a,a' \in A$$, the quantity $$n_A(a',a)$$ tells us how $$A+A-A$$ is distributed among the elements of A. Intuitively, wide intervals should contain many elements of $$A+A-A$$. This intuition is quantified by the following lemma, which also appears in [5].

### Lemma 2.3

Let $$D:=\{d_1<d_2<\dots <d_{|D|}\}$$ be the set of positive differences in $$A-A$$. If $$a,a' \in A$$ with $$a'<a$$ and $$n_A(a',a)\le Z$$, then $$a-a' \le d_Z$$.

In other words, if there are at most Z elements of $$A+A-A$$ in $$(a',a]$$, then $$a-a'$$ must be among the Z smallest positive differences in $$A-A$$.

### Proof

If not then $$a-a' = d_Y$$ where $$Y>Z$$. But then

\begin{aligned} a'<a'+d_i \le a, \end{aligned}

for $$i = 1,\dots ,Y$$. Since each $$d_i \in A-A$$, every $$a'+d_i$$ lies in $$A+A-A$$. Thus there are at least $$Y>Z$$ elements of $$A+A-A$$ in $$(a',a]$$, contradicting that $$n_A(a',a) \le Z$$. $$\square$$
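Lemma 2.3 is easily stress-tested; the sketch below (ours) verifies the contrapositive form on a random set: if $$a-a'$$ is the Y-th smallest positive difference, then $$(a',a]$$ holds at least Y elements of $$A+A-A$$.

```python
# Numeric check (ours) of Lemma 2.3 in contrapositive form: if
# a - a' = d_Y, then (a', a] contains at least Y elements of A+A-A.
import random
random.seed(0)
A = sorted(random.sample(range(1000), 20))
S = sorted({a + b - c for a in A for b in A for c in A})   # A + A - A
D = sorted({b - a for a in A for b in A if b > a})         # d_1 < d_2 < ...
for i, a1 in enumerate(A):
    for a2 in A[i + 1:]:
        n = sum(1 for s in S if a1 < s <= a2)              # n_A(a1, a2)
        Y = D.index(a2 - a1) + 1                           # a2 - a1 = d_Y
        assert n >= Y          # so n <= Z forces a2 - a1 <= d_Z
```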

We are now equipped to prove Theorem 1.1.

### Proof of Theorem 1.1

Let $$A:= \{a_1< \dots < a_{|A|}\}$$. We say that $$a_i$$ is good if

\begin{aligned} n_A(a_i,a_{i+1}) \le \frac{4|A+A-A|}{|A|} \qquad \text {and} \qquad n_{f(A)}(f(a_i),f(a_{i+1})) \le \frac{4|f(A) + f(A) - f(A)|}{|A|}. \end{aligned}

Since

\begin{aligned} \sum _{i=1}^{|A|-1} n_A(a_i,a_{i+1}) \le |A+A-A| \quad \text {and} \quad \sum _{i=1}^{|A|-1} n_{f(A)}(f(a_i),f(a_{i+1})) \le |f(A)+f(A)-f(A)|, \end{aligned}

by the pigeonhole principle it follows that there is a set $$A'$$ with $$|A'| \ge |A|/2$$ such that each element of $$A'$$ is good.

Now consider the map

\begin{aligned} \Psi : a_i \mapsto (a_{i+1}-a_i, f(a_{i+1}) - f(a_i)). \end{aligned}

By the mean value theorem, there exists a sequence $$\{c_i\}$$ where $$c_i \in (a_i,a_{i+1})$$ and

\begin{aligned} \frac{f(a_{i+1})-f(a_i)}{a_{i+1}-a_i} = f'(c_i). \end{aligned}

Once $$f(a_{i+1})-f(a_i)$$ and $$a_{i+1}-a_i$$ are fixed, $$c_i$$ is known uniquely since $$f'$$ is strictly monotone. Thus $$a_i$$ is also known uniquely, and $$\Psi$$ is injective.

Restrict the domain of $$\Psi$$ to $$A'$$. Since $$\Psi$$ is injective, the size of its domain $$A'$$ equals the size of its image $$\Psi (A')$$, proving that

\begin{aligned} |A| \le 2|A'| = 2|\Psi (A')|. \end{aligned}
(9)

A suitable upper bound for $$|\Psi (A')|$$ will complete the proof.

Since each $$a_i \in A'$$ is good it satisfies

\begin{aligned} n_{A}(a_i,a_{i+1}) \le \frac{4|A+A-A|}{|A|}. \end{aligned}

Lemma 2.3 shows that $$a_{i+1}- a_i$$ is among the smallest $$\frac{4|A+A-A|}{|A|}$$ positive elements in $$A-A$$ and therefore, there are $$\le \frac{4|A+A-A|}{|A|}$$ values it can take. By an identical argument there are $$\le \frac{4|f(A)+f(A)-f(A)|}{|A|}$$ values $$f(a_{i+1})-f(a_i)$$ can take.

This proves that $$|\Psi (A')| \le 16|A+A-A||f(A)+f(A)-f(A)||A|^{-2}$$. It follows from (9) that

\begin{aligned} |A| \le 32|A+A-A||f(A)+f(A)-f(A)||A|^{-2}, \end{aligned}

and rearranging completes the proof. $$\square$$
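The injectivity of $$\Psi$$, the crux of the above proof, can also be seen on a small example (ours, with an arbitrary set): the two gaps of size 1 below are separated by their second coordinates, exactly as the mean value theorem argument predicts.

```python
# Sanity check (ours) that Ψ: a_i ↦ (a_{i+1}-a_i, f(a_{i+1})-f(a_i)) is
# injective when f' is strictly monotone, here f(x) = x^2.
A = sorted({1, 2, 4, 8, 9, 13, 20})
f = lambda x: x * x
Psi = [(A[i + 1] - A[i], f(A[i + 1]) - f(A[i])) for i in range(len(A) - 1)]
# the gap 1 occurs twice, but with distinct f-differences (3 vs 17)
assert len(set(Psi)) == len(Psi)     # Ψ is injective on this example
```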

### Remark

In Sect. 3, we will claim that Theorem 1.1 is the base case for the induction proof of Theorem 1.2. In fact, we will need the slightly stronger statement that $$f(A)+f(A)-f(A)$$ contains $$\gg |A|^3 |A+A-A|^{-1}$$ elements in $$(\min (f(A)),\max (f(A))]$$. This is immediate after modifying the definition of “good” in the above proof: say $$a_i$$ is good if

\begin{aligned} n_A(a_i,a_{i+1}) \le \frac{4|A+A-A|}{|A|} \quad \text {and} \quad n_{f(A)}(f(a_i),f(a_{i+1})) \le \frac{4|(f(A) + f(A) - f(A))\cap (f(a_1),f(a_{|A|})]|}{|A|}. \end{aligned}

## 3 Proof of Theorem 1.2

### Proof of Theorem 1.2

The proof will be by induction on k. The actual statement we will prove is slightly stronger: all sums produced lie in the interval $$(\min f(A),\max f(A)]$$. So the base step (which is proved in Theorem 1.1) states that $$f(A)+f(A)-f(A)$$ contains $$\gg |A|^3 |A+A-A|^{-1}$$ elements in $$(\min f(A),\max f(A)]$$.

As in the previous proof, we say that $$a_i\in A$$ is good if

\begin{aligned} n_A(a_i,a_{i+1}) \le \frac{2|A+A-A|}{|A|}. \end{aligned}

By the pigeonhole principle, at least half of all $$a_i \in A$$ are good. We henceforth restrict our attention to the good $$a_i$$. Consider the differences $$a_{i+1}-a_i$$ and let H be the set of all such differences. Since we are only considering good values of $$a_i$$, Lemma 2.3 implies that $$|H| \ll |A+A-A||A|^{-1}$$. For each $$h \in H$$, define

\begin{aligned} A_{h} = \{a_i: a_{i+1}-a_i = h\}. \end{aligned}

We furthermore know that $$\sum _{h\in H} |A_{h}| \approx |A|$$.

If $$A_{h} = \{ a_{e_1}< \dots <a_{e_{|A_h|}}\}$$ then let $$A_h^i = \{ a_{e_1}< \dots <a_{e_{i}}\}$$ be the truncation taking only the smallest i elements of $$A_h$$. For any $$a_{e_i} \in A_{h}$$, the squeezing lemma (Lemma 2.2) implies that

\begin{aligned} f(a_{e_i}) + \Delta _{h}f(A_h^i) \subset (f(a_{e_i}),f(a_{e_i+1})]. \end{aligned}

Since f is a k-convex function, Lemma 2.1 proves that $$g_i:= f(a_{e_i}) + \Delta _{h}f$$ is a $$(k-1)$$-convex function, and we have

\begin{aligned} g_i(A_{h}^i) \subset (f(a_{e_i}),f(a_{e_i+1})]. \end{aligned}

It follows from the induction hypothesis that

\begin{aligned} 2^{k-1}g_i(A_{h}^i) - (2^{k-1}-1)g_i(A_{h}^i) \subset 2^{k}f(A) - (2^{k}-1)f(A) \end{aligned}

contains $$\gg _k \frac{|A_{h}^i|^{2^{k}-1}}{|A_{h}^i+A_{h}^i-A_{h}^i|^{2^{k}-k-1}}$$ elements in $$(f(a_{e_i}),f(a_{e_i+1})]$$. This argument can be run for every element of $$A_{h}$$, and then also for each $$h \in H$$ to obtain

\begin{aligned} |2^k f(A) - (2^{k}-1) f(A)|&\gg _k \sum _{h \in H} \sum _{i=1}^{|A_h|} \frac{|A_{h}^i|^{2^{k}-1}}{|A_{h}^i+A_{h}^i-A_{h}^i|^{2^{k}-k-1}} \nonumber \\&\gg _k \frac{1}{|A+A-A|^{2^{k}-k-1}}\cdot \sum _{h \in H} |A_{h}|^{2^{k}}. \end{aligned}
(10)

Above we have used the trivial facts that $$|A+A-A| \gg |A_{h}^i+A_{h}^i - A_{h}^i|$$ and $$|A_h^i| = i$$. Now by Hölder’s inequality, we have

\begin{aligned} \sum _{h \in H} |A_{h}|^{2^k} \cdot |H|^{2^k-1} \ge \left( \sum _{h\in H} |A_{h}|\right) ^{2^k} \approx |A|^{2^k}. \end{aligned}
(11)

Recalling that $$|H| \ll |A+A-A||A|^{-1}$$, (10) and (11) yield the desired

\begin{aligned} |2^k f(A) - (2^{k}-1) f(A)| \gg _k \frac{|A|^{2^{k+1}-1}}{|A+A-A|^{2^{k+1}-k-2}}, \end{aligned}

with all constructed elements lying in $$(\min f(A),\max f(A)]$$. $$\square$$

### Remark

In [9], Lemma 3.1 is used to find a consecutive difference $$h\in A-A$$ with many realisations, and this is done by dyadic pigeonholing.

Instead we have found a large set of consecutive pairs $$(a_i,a_{i+1})$$ which don’t have many elements of $$A+A-A$$ in between them. By Lemma 2.3, there are few possible values that $$a_{i+1}-a_i$$ can take, and therefore the pigeonhole principle proves that some of these differences must be realised many times. This approach avoids the logarithmic factors intrinsic to a dyadic pigeonholing argument.

## 4 Proof of Theorem 1.3

In proving Theorem 1.3, we will actually prove the following stronger but more cumbersome result, because it makes the induction step more manageable.

### Proposition 4.1

Let $$A = \{a_1< \dots < a_N\}$$ be any set of reals (with $$N\ge 2$$), k be a positive integer and f be a k-convex function. Also let $$d >0$$ be such that

\begin{aligned} kd \le a_i-a_j \qquad \text {for all} \qquad j<i. \end{aligned}
(12)

For $$i = 1,\dots , N$$, set

\begin{aligned} P_{k,i} = \{ a_i, a_i + d, \dots , a_i + kd \} \end{aligned}

and for $$i = 2,\dots ,N$$, set

\begin{aligned} S_{k,i} = \cup _{j=1}^{i-1} P_{k,j}. \end{aligned}

Then the set

\begin{aligned} 2^kf(S_{k,N}) -(2^k-1)f(S_{k,N}) \end{aligned}

contains $$\gg _k |A|^{k+1}$$ elements in $$(\min f(A),\max f(A)]$$.

### Proof

The proof is by induction on k. We begin with the base step, where (12) gives $$d \le a_i-a_j$$ for all $$j<i$$. We have

\begin{aligned} S_{1,N} = \{ a_1, a_1+d, a_2, a_2 + d, \dots , a_{N-1}, a_{N-1} +d \}. \end{aligned}

Since f is a convex function it follows that

\begin{aligned} f(a_1+d)-f(a_1)< \dots < f(a_{N-1}+d) - f(a_{N-1}) \end{aligned}

and consequently if i is fixed, then for any $$j\le i$$ the terms

\begin{aligned} f(a_i) + f(a_j+d) - f(a_j) \end{aligned}

are all different and lie in the interval $$I_i = (f(a_i),f(a_i + d)]$$. This produces i elements of $$f(S_{1,N}) + f(S_{1,N}) - f(S_{1,N})$$. Since $$d \le a_{i+1}-a_i$$ for each i, the half-open intervals $$I_i$$ are pairwise disjoint. Apply this argument for $$i = 1, \dots , N-1$$, producing

\begin{aligned} \sum _{i=1}^{N-1} i \approx |A|^2 \end{aligned}

elements of $$f(S_{1,N}) + f(S_{1,N}) - f(S_{1,N})$$ lying in $$(\min f(A), \max f(A)]$$, thus completing the base step.

We proceed to the induction. Note that (12) guarantees that the intervals

\begin{aligned} (a_i, a_i+kd) \qquad 1\le i \le N \end{aligned}

spanned by the sets $$P_{k,i}$$ are all disjoint. Using the squeezing lemma (Lemma 2.2) and (12), we have

\begin{aligned} f(a_i) + \Delta _d f(S_{k-1,i}) \subset (f(a_i), f(a_{i}+d)] \subset (f(a_i),f(a_{i+1})]. \end{aligned}

Since f is k-convex, the new functions $$g_i:= f(a_i) + \Delta _d f$$ are all $$(k-1)$$-convex by Lemma 2.1, and also

\begin{aligned} g_i(S_{k-1,i}) \subset (f(a_i),f(a_{i+1})]. \end{aligned}

From (12), the inequality $$(k-1)d \le a_i-a_j$$ trivially holds for all $$j<i$$, so the induction hypothesis can be applied to $$A_i:= \{a_1, \dots , a_{i}\}$$, showing that the set

\begin{aligned} 2^{k-1}g_i(S_{k-1,i}) - (2^{k-1}-1)g_i(S_{k-1,i}) \subset 2^k f(S_{k,N}) - (2^k-1) f(S_{k,N}) \end{aligned}

contains $$\gg |A_i|^{k} = i^{k}$$ elements in $$(f(a_i),f(a_{i+1})]$$. Applying this for each i with $$2 \le i \le N-1$$ we get

\begin{aligned} |2^k f(S_{k,N}) - (2^k-1) f(S_{k,N})| \gg \sum _{i=2}^{N-1} i^k \approx _k |A|^{k+1}, \end{aligned}

and all constructed elements lie in $$(\min f(A),\max f(A)]$$, closing the induction. $$\square$$

### Proof of Theorem 1.3

Given $$A = \{a_1< \dots < a_N\}$$, let $$b,b' \in A$$ be such that $$d_0 = b-b'$$ is the smallest positive element of $$A-A$$.

We start by proving the case where $$k= 2s$$ is even. We set

\begin{aligned} A' = \{ a_k - sd_0, \dots , a_{kM}-sd_0 \}, \end{aligned}

where $$M = \lfloor N/k \rfloor$$. Now apply Proposition 4.1 to $$A'$$ with $$d = d_0$$. Since $$d_0 \in A-A$$ we get $$S_{k,M} \subset (s+1)A-sA$$, and the result follows.

If $$k = 2s-1$$ is odd, then instead using

\begin{aligned} A' = \{ a_k - sd_0 -b', \dots , a_{kM}-sd_0-b' \} \end{aligned}

completes the proof. $$\square$$
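The even-case embedding $$S_{k,M} \subset (s+1)A-sA$$ can be sanity-checked for $$s=1$$; the sketch below (ours, on an arbitrary small set) verifies that every shift $$a + md_0$$ with $$|m|\le s$$ lands in $$2A-A$$.

```python
# Small check (ours) of the even-case construction for s = 1 (k = 2):
# with d0 the least positive difference of A, every a + m*d0 with
# |m| <= s lies in (s+1)A - sA = 2A - A, since d0 itself is in A - A.
A = sorted({3, 7, 8, 15})
d0 = min(b - a for a in A for b in A if b > a)        # smallest positive diff
big = {p + q - r for p in A for q in A for r in A}    # 2A - A
for a in A:
    for m in (-1, 0, 1):
        assert a + m * d0 in big
```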

We now prove Theorem 1.4.

### Proof of Theorem 1.4

Set $$f(x) = \log (x)$$ in the case $$k = 2s-1$$ of Theorem 1.3. Using a crude upper bound on the size of the quotient set, we get that for any natural number s,

\begin{aligned} |(sA-sA)^{(2^k)}|^2 \gg \left| \frac{(sA-sA)^{(2^k)}}{(sA-sA)^{(2^k-1)}}\right| \gg |A|^{2s}. \end{aligned}

Taking square roots and setting $$m(s) = 2^k = 2^{2s-1}$$ completes the proof. $$\square$$
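The mechanism of this proof is easy to see on a toy example (ours): with $$f=\log$$, sums and differences of $$f(X)$$ become products and quotients of X. The sketch below works with the positive part of $$A-A$$ so that logarithms make sense, and checks the crude quotient-set bound used above.

```python
# Tiny illustration (ours): quotient sets obey the crude bound
# |P/Q| <= |P||Q|, which is how the theorem's sum-difference bound for
# log becomes a bound on products of differences.
from fractions import Fraction
A = {1, 2, 3, 5}
X = {b - a for a in A for b in A if b > a}    # positive part of A - A
P = {x * y for x in X for y in X}             # X^{(2)}
Q = {Fraction(p, x) for p in P for x in X}    # X^{(2)} / X
assert len(Q) <= len(P) * len(X)              # the crude bound
assert len(P) ** 2 >= len(Q)                  # since |X| <= |P| here
```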

One can think of a k-convex set A as f([N]) where f is a k-convex function. Then [9, Theorem 1.3] shows that

\begin{aligned} |2^kA - (2^k-1)A| \gg _k |A|^{k+1}. \end{aligned}

This is also a corollary of Proposition 4.1 by setting A to be an arithmetic progression. In fact, Proposition 4.1 is designed to synthesise the important properties of [N] which make f([N]) grow under many sums and differences.