1 Introduction and main results

The Mittag-Leffler function

$$\begin{aligned} E_\alpha (x) := \sum _{k=0}^\infty \frac{x^k}{\Gamma (\alpha k + 1)}, \quad \alpha >0, \end{aligned}$$

is a well-known special function with a large number of applications in pure and applied mathematics; see [9, 13] for surveys. Clearly, we have \(E_1(x)=e^x.\) Somewhat surprisingly, the “identity”

$$\begin{aligned} E_\alpha ((x+y)^\alpha ) {\mathop {=}\limits ^{?}} E_\alpha (x^\alpha ) E_\alpha (y^\alpha ) \end{aligned}$$
(1.1)

has been used in a few papers. As discussed in [6, 21], it is not correct for \(\alpha \ne 1\). In [21], a correct identity involving integrals of \(E_\alpha (x^\alpha )\) is proven, which reduces to \(e^{x+y}=e^x e^y\) as \(\alpha \rightarrow 1\). Besides this, it seems natural to ask whether the left and right hand sides of (1.1) are comparable. This is indeed the case:

Theorem 1.1

For \(0<\alpha <1,\) we have

$$\begin{aligned} E_\alpha ((x+y)^\alpha ) \le E_\alpha (x^\alpha ) E_\alpha (y^\alpha ),\quad x,y\ge 0, \end{aligned}$$
(1.2)

and for \(\alpha >1\)

$$\begin{aligned} E_\alpha ((x+y)^\alpha ) \ge E_\alpha (x^\alpha ) E_\alpha (y^\alpha ),\quad x,y\ge 0. \end{aligned}$$
(1.3)

These inequalities are strict for \(x,y>0\).
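As a quick numerical sanity check of Theorem 1.1 (not part of the proof), one can evaluate both sides of (1.2) and (1.3) at sample points, using a truncated series for \(E_\alpha \). The helper names and the truncation rule below are our own choices, not taken from the literature.

```python
from math import gamma

def ml(x, alpha):
    """Truncated Mittag-Leffler series E_alpha(x), adequate for moderate x >= 0."""
    total, k = 0.0, 0
    while alpha * k + 1.0 < 170.0:          # keep gamma() within float range
        term = x ** k / gamma(alpha * k + 1.0)
        total += term
        if k > 5 and term < 1e-16 * total:  # remaining tail is negligible
            break
        k += 1
    return total

def lhs_rhs(alpha, x, y):
    """Left and right hand sides of (1.2)/(1.3)."""
    return ml((x + y) ** alpha, alpha), ml(x ** alpha, alpha) * ml(y ** alpha, alpha)

# 0 < alpha < 1: E_alpha((x+y)^alpha) < E_alpha(x^alpha) E_alpha(y^alpha)
l, r = lhs_rhs(0.6, 1.0, 2.0)
assert l < r
# alpha > 1: the inequality is reversed
l, r = lhs_rhs(1.5, 1.0, 2.0)
assert l > r
```

At these sample points the same sketch also confirms the complementary lower bound with factor \(\alpha \) discussed next.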

The strictness assertion strengthens the observation made at the beginning of Section 2 of [6], where it is argued that the validity of (1.1) for all \(x,y\ge 0\) implies \(\alpha =1.\) According to Theorem 1.1, equality in (1.1) never holds, except in the obvious cases (\(\alpha =1\) or \(xy=0\)). We note that (1.2) has been used in [25, Proposition 4] to show a convolution inequality for a fractional version of the moment generating function of a random variable. Although apparently not made explicit in the literature, the lower estimate

$$\begin{aligned} E_\alpha ((x+y)^\alpha ) \ge \alpha E_\alpha (x^\alpha ) E_\alpha (y^\alpha ),\quad x,y\ge 0,\ 0<\alpha <1, \end{aligned}$$

complementing (1.2), follows from the calculation

$$\begin{aligned} \begin{aligned} E_\alpha ((x+y)^\alpha )&= \sum _{k=0}^\infty \frac{(x+y)^{\alpha k}}{\Gamma (\alpha k+1)} \\&\ge \sum _{k=0}^\infty \frac{\alpha }{\Gamma (\alpha k+1)} \sum _{j=0}^k \left( {\begin{array}{c}\alpha k\\ \alpha j\end{array}}\right) x^{\alpha j}y^{\alpha (k-j)}\\&= \alpha \sum _{k=0}^\infty \sum _{j=0}^k \frac{x^{\alpha j}y^{\alpha (k-j)}}{\Gamma (\alpha j+1)\Gamma (\alpha (k-j)+1)} = \alpha E_\alpha (x^\alpha ) E_\alpha (y^\alpha ). \end{aligned} \end{aligned}$$
(1.4)

In the second line, we have used the following result.

Theorem 1.2

(Neo-classical inequality; Theorem 1.2 in [12]) For \(k\in \mathbb N\) and \(0<\alpha <1,\) we have

$$\begin{aligned} \alpha \sum _{j=0}^k \left( {\begin{array}{c}\alpha k\\ \alpha j\end{array}}\right) x^{\alpha j} y^{\alpha (k-j)} \le (x+y)^{\alpha k}, \quad x,y\ge 0. \end{aligned}$$
(1.5)
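Since \(\left( {\begin{array}{c}\alpha k\\ \alpha j\end{array}}\right) =\Gamma (\alpha k+1)/\big (\Gamma (\alpha j+1)\Gamma (\alpha (k-j)+1)\big ),\) Theorem 1.2 is easy to probe numerically. A small sketch (the function names and sample points are arbitrary choices of ours):

```python
from math import gamma

def gbinom(a, b):
    """Generalized binomial coefficient Gamma(a+1)/(Gamma(b+1) Gamma(a-b+1))."""
    return gamma(a + 1.0) / (gamma(b + 1.0) * gamma(a - b + 1.0))

def neo_sum(alpha, k, x, y):
    """The binomial sum on the left hand side of (1.5)."""
    return sum(gbinom(alpha * k, alpha * j) * x ** (alpha * j) * y ** (alpha * (k - j))
               for j in range(k + 1))

alpha, x, y = 0.5, 0.3, 0.7                       # x + y = 1
for k in range(1, 8):
    s = neo_sum(alpha, k, x, y)
    assert alpha * s <= (x + y) ** (alpha * k)    # the neo-classical inequality (1.5)
    assert s >= (x + y) ** (alpha * k)            # without the factor alpha it reverses
```

The second assertion anticipates the converse inequality of Theorem 1.3 below.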

With a slightly weaker factor of \(\alpha ^2\) instead of \(\alpha \), this result was proven by Lyons in 1998, in a pioneering paper on rough path theory [18], where he also coined the term neo-classical inequality. It has since been applied by several other authors; see, e.g., [2, 5, 8, 15]. Analogously to (1.4), it is clear that Theorem 1.1 follows from the following new binomial inequalities.

Theorem 1.3

For \(k\in \mathbb N\) and \(0<\alpha <1,\) we have the converse neo-classical inequality

$$\begin{aligned} \sum _{j=0}^k \left( {\begin{array}{c}\alpha k\\ \alpha j\end{array}}\right) x^{\alpha j} y^{\alpha (k-j)} \ge (x+y)^{\alpha k}, \quad x,y\ge 0, \end{aligned}$$
(1.6)

and for \(\alpha >1\) we have

$$\begin{aligned} \sum _{j=0}^k \left( {\begin{array}{c}\alpha k\\ \alpha j\end{array}}\right) x^{\alpha j} y^{\alpha (k-j)} \le (x+y)^{\alpha k}, \quad x,y\ge 0. \end{aligned}$$
(1.7)

These inequalities are strict for \(x,y>0\).
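Theorem 1.3 can likewise be checked numerically at sample points (names are ours; \(\alpha =2.5\) exercises the case of even \(\lfloor \alpha \rfloor ,\) the hardest one in the proof below):

```python
from math import gamma

def gbinom(a, b):
    """Generalized binomial coefficient Gamma(a+1)/(Gamma(b+1) Gamma(a-b+1))."""
    return gamma(a + 1.0) / (gamma(b + 1.0) * gamma(a - b + 1.0))

def neo_sum(alpha, k, x, y):
    """The binomial sum in (1.5)-(1.7)."""
    return sum(gbinom(alpha * k, alpha * j) * x ** (alpha * j) * y ** (alpha * (k - j))
               for j in range(k + 1))

x, y = 1.0, 0.4
for k in range(1, 7):
    # (1.6): for 0 < alpha < 1 the sum dominates (x+y)^{alpha k}
    assert neo_sum(0.7, k, x, y) > (x + y) ** (0.7 * k)
    # (1.7): for alpha > 1 it is dominated; floor(2.5) = 2 is the even case
    assert neo_sum(2.5, k, x, y) < (x + y) ** (2.5 * k)
```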

The inequalities (1.5)–(1.7) look deceptively simple. For instance, it is not obvious that the proof of (1.7)—at least with the approach used here—is much harder when the integer \(\lfloor \alpha \rfloor \) is even than when it is odd. Lyons’s proof of (the weaker version of) (1.5) applies the maximum principle for sub-parabolic functions in a non-trivial way. Hara and Hino [12] use fractional calculus to derive an extension of the binomial theorem (see Theorem 3.1 below), which immediately implies Theorem 1.2. This extended binomial theorem will be our starting point when proving Theorem 1.3. We also note that in a preliminary version of [7] (available at arXiv:1104.0577v2 [math.PR]), it was shown that a multinomial extension can be derived from (1.5), by induction on the number of variables. A binomial inequality that is somewhat similar to the neo-classical inequality is proven in [17].

Our first theorem, Theorem 1.1, says that the function \(\log E_\alpha (x^\alpha )\) is subadditive for \(0<\alpha <1,\) and superadditive for \(\alpha >1,\) on \(\mathbb {R}^+=(0,\infty ).\) In Sect. 2, we will show stronger statements for \(0<\alpha <2\): The function \(E_\alpha (x^\alpha )\) is log-concave for \(0<\alpha <1,\) and log-convex for \(1<\alpha <2\). Sections 3–6 are devoted to proving Theorem 1.3. Some preliminaries and the plan of the proof are given in Sect. 3. In that section we also state a conjecture concerning a converse inequality to (1.7).

2 Proof of Theorem 1.1 for \(0<\alpha <2\) and related statements

For brevity, we do not discuss strictness in this section. This would be straightforward, and the strictness assertion in Theorem 1.1 will follow anyways from Theorem 1.3. The following easy fact is well-known; see, e.g., [3].

Lemma 2.1

Let \(f:[0,\infty ) \rightarrow \mathbb R\) be convex with \(f(0)=0.\) Then f is superadditive, i.e.,

$$\begin{aligned} f(x+y)\ge f(x)+f(y),\quad x,y\ge 0. \end{aligned}$$

Thus, the following theorem implies (1.2), and (1.3) for \(1<\alpha <2\).

Theorem 2.2

For \(0<\alpha <1,\) the function \(x\mapsto E_\alpha (x^{\alpha })\) is log-concave on \(\mathbb {R}^+\). For \(1<\alpha <2,\) it is log-convex.

For \(\alpha >2,\) it seems that \(E_\alpha (x^\alpha )\) is not log-convex. For instance, \(E_4(x^4)=\tfrac{1}{2}(\cos x+\cosh x),\) and

$$\begin{aligned} \big (\log E_4(x^4) \big )'' = \frac{2 \sin x\ \sinh x}{(\cos x+\cosh x)^2} \end{aligned}$$

changes sign. To prove (1.3) for \(\alpha >2,\) we thus rely on the binomial inequality (1.7), which is proven later.
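Since \(\sinh x>0\) for \(x>0,\) the displayed second derivative has the sign of \(\sin x,\) so the sign change is easy to confirm numerically, together with the closed form for \(E_4(x^4)\). A small sketch (helper names are ours):

```python
from math import gamma, sin, sinh, cos, cosh

def ml(x, alpha):
    """Truncated Mittag-Leffler series E_alpha(x) for moderate x >= 0."""
    total, k = 0.0, 0
    while alpha * k + 1.0 < 170.0:          # keep gamma() within float range
        term = x ** k / gamma(alpha * k + 1.0)
        total += term
        if k > 5 and term < 1e-16 * total:
            break
        k += 1
    return total

def d2_log_E4(x):
    """(log E_4(x^4))'' via the closed form displayed above."""
    return 2.0 * sin(x) * sinh(x) / (cos(x) + cosh(x)) ** 2

# E_4(x^4) = (cos x + cosh x)/2
for x in (0.5, 1.0, 2.0, 3.0):
    assert abs(ml(x ** 4, 4.0) - 0.5 * (cos(x) + cosh(x))) < 1e-9

# the second derivative changes sign between x = 2 and x = 4
assert d2_log_E4(2.0) > 0 and d2_log_E4(4.0) < 0
```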

Proof of Theorem 2.2

We start from the representation, given in (3.4) of [24],

$$\begin{aligned} E_\alpha (x^{\alpha }) = \frac{e^x}{\alpha }- \varphi _\alpha (x),\quad x>0,\ 0<\alpha <1, \end{aligned}$$

where

$$\begin{aligned} \varphi _\alpha (x):= \frac{\sin \alpha \pi }{\pi } \int _{0}^\infty \frac{t^{\alpha -1}e^{-xt}}{t^{2\alpha } -2 \cos (\alpha \pi )t^\alpha +1}\, dt \end{aligned}$$

is a completely monotone function. (Recall that a smooth function f on \(\mathbb {R}^+\) is completely monotone if

$$\begin{aligned} (-1)^n f^{(n)}(x)\ge 0,\quad n\ge 0,\ x>0.) \end{aligned}$$

By well-known closure properties of completely monotone functions (see e.g. Corollaries 1.6 and 1.7 in [23]),

$$\begin{aligned} \frac{1}{E_\alpha (x^{\alpha })} = \alpha e^{-x} \sum _{n=0}^\infty \big (\alpha e^{-x} \varphi _\alpha (x) \big )^n \end{aligned}$$

is completely monotone as well and hence log-convex. Thus, \(E_\alpha (x^{\alpha })\) is log-concave.

Now suppose that \(1<\alpha <2.\) Setting \(\beta = \alpha /2\in (1/2,1),\) we have

$$\begin{aligned} E_\alpha (x^\alpha ) = \frac{E_\beta (x^\beta ) + E_\beta (-x^\beta )}{2} = \frac{e^x}{\alpha } + \frac{E_\beta (-x^\beta ) - \varphi _\beta (x)}{2}\cdot \end{aligned}$$

Inserting the well-known representation

$$\begin{aligned} E_\beta (-x^\beta ) = \frac{\sin \beta \pi }{\pi } \int _{0}^\infty \frac{t^{\beta -1}e^{-xt}}{t^{2\beta } +2 \cos (\beta \pi )t^\beta +1}dt, \end{aligned}$$

which is e.g. a consequence of the Perron-Stieltjes inversion formula applied to (3.7.7) in [9], and making some trigonometric simplifications, we get the representation

$$\begin{aligned} E_\alpha (x^\alpha ) = \frac{e^x}{\alpha } - \frac{\sin \alpha \pi }{\pi } \int _{0}^\infty \frac{t^{\alpha -1}e^{-xt}}{t^{2\alpha } - 2 \cos (\alpha \pi )t^\alpha +1}\,dt, \end{aligned}$$

which is also given in (3.6) of [24]. This implies

$$\begin{aligned} \log (E_\alpha (x^{\alpha })) = x - \log \alpha + \log (\psi _\alpha (x)) \end{aligned}$$

with

$$\begin{aligned}\psi _\alpha (x) := 1 -\frac{\alpha \sin \alpha \pi }{\pi } \int _{0}^\infty \frac{t^{\alpha -1}e^{-x(1+t)}}{t^{2\alpha } - 2 \cos (\alpha \pi )t^\alpha +1}\,dt \end{aligned}$$

a completely monotone function. Thus, \(E_\alpha (x^{\alpha })\) is log-convex.
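For \(\alpha =\tfrac{1}{2},\) the representation \(E_\alpha (x^{\alpha })=e^x/\alpha -\varphi _\alpha (x)\) used at the start of the proof can be cross-checked in closed form: the classical identity \(E_{1/2}(z)=e^{z^2}\,\textrm{erfc}(-z)\) gives \(\varphi _{1/2}(x)=2e^x-E_{1/2}(\sqrt{x})=e^x\,\textrm{erfc}(\sqrt{x}),\) consistent with the integral above since \(\cos (\pi /2)=0.\) A short numerical confirmation (a sketch; the helper names are ours):

```python
from math import gamma, exp, sqrt, erfc

def ml(x, alpha):
    """Truncated Mittag-Leffler series E_alpha(x) for moderate x >= 0."""
    total, k = 0.0, 0
    while alpha * k + 1.0 < 170.0:
        term = x ** k / gamma(alpha * k + 1.0)
        total += term
        if k > 5 and term < 1e-16 * total:
            break
        k += 1
    return total

for x in (0.25, 1.0, 2.0, 4.0):
    series = ml(sqrt(x), 0.5)            # E_{1/2}(x^{1/2}) via the series
    closed = exp(x) * erfc(-sqrt(x))     # closed form e^x erfc(-sqrt(x))
    assert abs(series - closed) < 1e-9 * closed
    # representation e^x/alpha - phi_alpha(x), phi_{1/2}(x) = e^x erfc(sqrt(x))
    assert abs(series - (2.0 * exp(x) - exp(x) * erfc(sqrt(x)))) < 1e-9 * closed
```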

Observe that Theorem 2.2 extends to the boundary case \(\alpha =2,\) where

$$\begin{aligned} E_2(x^2) = \cosh x \end{aligned}$$

is clearly log-convex. Let us also mention an alternative probabilistic argument for (1.2) based on the \(\alpha \)-stable subordinator \(\{ Z_t^{(\alpha )}, \; t\ge 0\}\) with normalization \(\mathbb {E} \big [ e^{-\lambda Z_t^{(\alpha )}}\big ] = e^{-t\lambda ^{\alpha }}.\) It is indeed well-known—see e.g. Exercise 50.7 in [22]—that

$$\begin{aligned} E_\alpha (x^\alpha )=\mathbb {E}[e^{R^{(\alpha )}_x}], \end{aligned}$$

where \(R^{(\alpha )}_x := \inf \{t> 0 : Z^{(\alpha )}_t > x\}.\) Now if \({\tilde{R}}^{(\alpha )}_y\) is an independent copy of \(R^{(\alpha )}_y,\) the Markov property implies

$$\begin{aligned} R^{(\alpha )}_x + {\tilde{R}}^{(\alpha )}_y&{\mathop {=}\limits ^{\textrm{d}}}\inf \big \{t> \! R^{(\alpha )}_x : Z_t^{(\alpha )}> Z_{R^{(\alpha )}_x}^{(\alpha )} +y\big \} \\ {}&\ge \inf \big \{t> \! R^{(\alpha )}_x : Z_t^{(\alpha )}\!> x +y\big \} \\&= \inf \big \{t> 0 : Z_t^{(\alpha )}\! > x +y\big \} {\mathop {=}\limits ^{\textrm{d}}} R^{(\alpha )}_{x+y}, \end{aligned}$$

where the inequality follows from the obvious fact that \(Z_{R^{(\alpha )}_x}^{(\alpha )} \ge x.\) This shows the desired inequality

$$E_\alpha (x^\alpha )E_\alpha (y^\alpha )\ge E_\alpha ((x+y)^\alpha ).$$

To conclude this section, we give some related results for the function \(E_\alpha (x).\)

Proposition 2.3

For \(0<\alpha <1,\) the function \(x\mapsto E_\alpha (x)\) is log-convex on \(\mathbb {R}\). For \(\alpha >1,\) it is log-concave on \(\mathbb {R}^+\).

Proof

The logarithmic derivative of \(E_\alpha (x)\) is the ratio of series

$$\begin{aligned} \frac{E_\alpha '(x)}{E_\alpha (x)} = \frac{{\displaystyle \sum _{n\ge 0} \frac{x^n}{\Gamma (\alpha +\alpha n)}}}{{\displaystyle \sum _{n\ge 0} \frac{\alpha \, x^n}{\Gamma (1+\alpha n)}}}, \end{aligned}$$

and it is clear by log-convexity of the gamma function that the sequence

$$\begin{aligned} n \mapsto \frac{\Gamma (1+\alpha n)}{\Gamma (\alpha + \alpha n)} \end{aligned}$$

is increasing for \(\alpha < 1\) and decreasing for \(\alpha > 1.\) By Biernacki and Krzyż’s lemma (see [1]), this shows that

$$\begin{aligned} x\mapsto \frac{E_\alpha '(x)}{E_\alpha (x)} \end{aligned}$$

is non-decreasing on \(\mathbb {R}^+\) for \(\alpha < 1\) and non-increasing on \(\mathbb {R}^+\) for \(\alpha > 1.\) Since moment generating functions of random variables are log-convex (see Theorem 2.3 in [16]), the log-convexity of \(E_\alpha (x)\) on the whole real line for \(0< \alpha < 1\) is a consequence of the classic m.g.f. representation

$$\begin{aligned} E_\alpha (x)=\mathbb {E}[e^{x R^{(\alpha )}_1}], \end{aligned}$$

where we have used the above notation and the easily established self-similar identity \( R^{(\alpha )}_x {\mathop {=}\limits ^{\textrm{d}}} x^\alpha R^{(\alpha )}_1.\) Note that \(R^{(\alpha )}_1\) is known as the Mittag-Leffler random variable of type 2, with moments \(\Gamma (1+n)/\Gamma (1+\alpha n)\); see [14] and the references therein.
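The monotonicity of the logarithmic derivative asserted in the proof can also be observed numerically on a grid (truncated series; names and grid are our own choices):

```python
from math import gamma

def ml(x, alpha):
    """Truncated Mittag-Leffler series E_alpha(x) for moderate x >= 0."""
    total, k = 0.0, 0
    while alpha * k + 1.0 < 170.0:
        term = x ** k / gamma(alpha * k + 1.0)
        total += term
        if k > 5 and term < 1e-16 * total:
            break
        k += 1
    return total

def ml_prime(x, alpha):
    """Termwise derivative of the Mittag-Leffler series."""
    total, k = 0.0, 1
    while alpha * k + 1.0 < 170.0:
        term = k * x ** (k - 1) / gamma(alpha * k + 1.0)
        total += term
        if k > 5 and term < 1e-16 * total:
            break
        k += 1
    return total

grid = [0.2 * i for i in range(1, 16)]            # x in (0, 3]
for alpha, increasing in ((0.5, True), (1.5, False)):
    r = [ml_prime(x, alpha) / ml(x, alpha) for x in grid]
    diffs = [b - a for a, b in zip(r, r[1:])]
    if increasing:
        assert all(d > 0 for d in diffs)          # log-convex case, alpha < 1
    else:
        assert all(d < 0 for d in diffs)          # log-concave case, alpha > 1
```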

For \(\alpha \ge 2,\) there is actually a stronger result. It has been shown by Wiman [26] that the zeros of the Mittag-Leffler function are real and negative for \(\alpha \ge 2.\) As this function is of order \(1/\alpha <1,\) the Hadamard factorization theorem (Theorem XI.3.4 in [4]) implies that

$$\begin{aligned} \frac{1}{E_\alpha (x)} = \prod _{n=1}^\infty \Big (1+\frac{x}{x_{n,\alpha }}\Big )^{-1}, \end{aligned}$$

where

$$\begin{aligned} 0 < x_{1,\alpha } \le x_{2,\alpha } \le \cdots \end{aligned}$$

are the absolute values of the zeros of \(E_\alpha (x).\) We conclude that \(1/E_\alpha (x)\) is completely monotone, and thus log-convex, on \({\mathbb R}^+.\) An interesting open question is whether \(1/E_\alpha (x)\) remains completely monotone on \({\mathbb R}^+\) for \(1< \alpha < 2.\) Unfortunately, the above argument fails because the large zeroes of \(E_\alpha (x)\) have non-trivial imaginary part (see Proposition 3.13 in [9]).

Recall that for \(\alpha \in (0,1)\), the function \(1/E_\alpha (x^\alpha )\) is completely monotone on \(\mathbb {R}^+\), as seen in the proof of Theorem 2.2, and that the function \(1/E_\alpha (x)\) is log-concave by Proposition 2.3 and hence not completely monotone on \(\mathbb {R}^+\). One can also show that \(1/E_\alpha (x)\) is decreasing and convex on \(\mathbb {R}^+\) for all \(\alpha > 0\).

3 Proof of Theorem 1.3: preliminaries

By symmetry and scaling, it is clearly sufficient to prove

$$\begin{aligned} \sum _{j=0}^k \left( {\begin{array}{c}\alpha k\\ \alpha j\end{array}}\right) \lambda ^{\alpha j}>(1+\lambda )^{\alpha k}, \quad \alpha \in (0,1),\ \lambda \in (0,1],\ k\in \mathbb N, \end{aligned}$$
(3.1)

and

$$\begin{aligned} \sum _{j=0}^k \left( {\begin{array}{c}\alpha k\\ \alpha j\end{array}}\right) \lambda ^{\alpha j}<(1+\lambda )^{\alpha k}, \quad \alpha >1,\ \lambda \in (0,1],\ k\in \mathbb N. \end{aligned}$$
(3.2)

As a sanity check, we verify these statements for \(\lambda >0\) sufficiently small:

$$\begin{aligned} 1 + \left( {\begin{array}{c}\alpha k\\ \alpha \end{array}}\right) \lambda ^{\alpha } + \textrm{O}(\lambda ^{2\alpha }) > 1 +\alpha k \lambda + \textrm{O}(\lambda ^2), \quad \alpha \in (0,1),\ k\in \mathbb N, \end{aligned}$$

and

$$\begin{aligned} 1 + \textrm{O}(\lambda ^{\alpha }) < 1 +\alpha k \lambda + \textrm{O}(\lambda ^2), \quad \alpha >1,\ k\in \mathbb N. \end{aligned}$$

These inequalities clearly hold for small \(\lambda >0;\) for the first one, note that \(\left( {\begin{array}{c}\alpha k\\ \alpha \end{array}}\right) >0.\) We now recall a remarkable generalization of the binomial theorem, due to Hara and Hino [12]. Their proof, using fractional Taylor series, builds on earlier work by Osler [20]. Following [12], for \(\alpha >0\) define

$$\begin{aligned} K_\alpha&:= \{ \omega \in \mathbb {C} : \omega ^\alpha =1 \} \nonumber \\&= \{ e^{i\theta } : -\pi<\theta \le \pi ,\ e^{i\theta \alpha }=1 \} \nonumber \\&=\Big \{ \exp \Big (\frac{2k\pi i}{\alpha }\Big ) : k\in \mathbb {Z},\ -\frac{\alpha }{2}< k \le \frac{\alpha }{2} \Big \}. \end{aligned}$$
(3.3)
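The set \(K_\alpha \) is finite and closed under complex conjugation, so the sums \(\sum _{\omega \in K_\alpha }(1+\lambda \omega )^{\alpha k}\) appearing below are real. A sketch enumerating \(K_\alpha \) via the last line of (3.3) (function names are ours):

```python
import cmath, math

def K(alpha):
    """K_alpha from (3.3): integers k with -alpha/2 < k <= alpha/2."""
    lo = math.floor(-alpha / 2.0) + 1
    hi = math.floor(alpha / 2.0)
    return [cmath.exp(2j * math.pi * k / alpha) for k in range(lo, hi + 1)]

assert len(K(0.7)) == 1                     # K_alpha = {1} for alpha in (0,1)
assert len(K(3.5)) == 3                     # floor(alpha) when floor(alpha) is odd
assert len(K(2.5)) == 3                     # ceil(alpha) when floor(alpha) is even
assert len(K(4.0)) == 4                     # integer alpha

# the sum over K_alpha is real (imaginary parts cancel in conjugate pairs)
alpha, lam, k = 2.5, 0.3, 2
s = sum((1 + lam * w) ** (alpha * k) for w in K(alpha))
assert abs(s.imag) < 1e-12
```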

For \(t,\lambda \in (0,1]\) and \(k\in \mathbb {N},\) define

$$\begin{aligned} F(t,\lambda ,k):= t^{\alpha -1}(1-t)^{\alpha k} \bigg ( \frac{1}{|t^\alpha -\lambda ^\alpha e^{-i\alpha \pi }|^2} + \frac{\lambda ^{\alpha k}}{|e^{-i\alpha \pi }-(\lambda t)^\alpha |^2} \bigg ). \end{aligned}$$
(3.4)

Theorem 3.1

(Theorem 3.2 in [12]) Let \(\alpha >0,\) \(\lambda \in (0,1]\) and \(k\in \mathbb {N}_0.\) Then we have

$$\begin{aligned} \alpha \sum _{j=0}^k \left( {\begin{array}{c}\alpha k\\ \alpha j\end{array}}\right) \lambda ^{\alpha j} =\sum _{\omega \in K_\alpha }(1+\lambda \omega )^{\alpha k} - \frac{\alpha \lambda ^\alpha \sin \alpha \pi }{\pi } \int _0^1 F(t,\lambda ,k) dt. \end{aligned}$$
(3.5)
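Identity (3.5) can be verified numerically. For \(\alpha \in (0,1)\) we have \(K_\alpha =\{1\},\) and substituting \(w=t^\alpha \) removes the singularity of the integrand at \(t=0.\) A quadrature sketch (the scheme, tolerances and names are our own choices):

```python
import cmath
from math import gamma, pi, sin

def gbinom(a, b):
    """Generalized binomial coefficient Gamma(a+1)/(Gamma(b+1) Gamma(a-b+1))."""
    return gamma(a + 1.0) / (gamma(b + 1.0) * gamma(a - b + 1.0))

def simpson(f, a, b, n=4000):
    """Composite Simpson rule with n (even) panels."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

alpha, lam, k = 0.6, 0.5, 3
w0 = cmath.exp(-1j * pi * alpha)                 # e^{-i alpha pi}

def integrand(w):
    """F(t,lam,k) dt rewritten via w = t^alpha (so t^{alpha-1} dt = dw/alpha)."""
    body = (1.0 / abs(w - lam ** alpha * w0) ** 2
            + lam ** (alpha * k) / abs(w0 - lam ** alpha * w) ** 2)
    return (1.0 - w ** (1.0 / alpha)) ** (alpha * k) * body / alpha

lhs = alpha * sum(gbinom(alpha * k, alpha * j) * lam ** (alpha * j)
                  for j in range(k + 1))
rhs = (1 + lam) ** (alpha * k) \
      - alpha * lam ** alpha * sin(alpha * pi) / pi * simpson(integrand, 0.0, 1.0)
assert abs(lhs - rhs) < 1e-4 * abs(rhs)
```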

In [12], the theorem was stated for \(k\in \mathbb N\), but it is not hard to see that the proof also works for \(k=0.\) Clearly, the classical binomial theorem is recovered from (3.5) by putting \(\alpha =1.\) As noted in [12], for \(\alpha \in (0,1)\) we have \(K_\alpha =\{1\},\) and thus Theorem 1.2 is an immediate consequence of Theorem 3.1. Hara and Hino also mention that Theorem 3.1 implies

$$\begin{aligned} \alpha \sum _{j=0}^k \left( {\begin{array}{c}\alpha k\\ \alpha j\end{array}}\right) x^{\alpha j} y^{\alpha (k-j)} \ge (x+y)^{\alpha k}, \quad x,y\ge 0,\ \alpha \in (1,2], \end{aligned}$$

a partial converse to (1.7). It seems that this inequality does not hold for \(\alpha >2.\) We leave it to future research to find an appropriate inequality comparing the binomial sum with \((x+y)^{\alpha k}\) for \(\alpha >2.\) Different methods than in the subsequent sections will be required, as a lower estimate for \(\sum _{\omega \in K_\alpha }(1+\lambda \omega )^{\alpha k}\) is needed. The following statement might be true:

Conjecture 3.2

For \(k\in \mathbb N\) and \(\alpha >2,\) we have

$$\begin{aligned} 2^{\alpha -1} \sum _{j=0}^k \left( {\begin{array}{c}\alpha k\\ \alpha j\end{array}}\right) x^{\alpha j} y^{\alpha (k-j)} \ge (x+y)^{\alpha k}, \quad x,y\ge 0. \end{aligned}$$
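For \(k=1,\) the conjectured bound reduces to the power mean inequality \((x+y)^\alpha \le 2^{\alpha -1}(x^\alpha +y^\alpha ).\) The following grid check (sample parameters are arbitrary; this is numerical evidence, not a proof) is consistent with the conjecture:

```python
from math import gamma

def gbinom(a, b):
    """Generalized binomial coefficient Gamma(a+1)/(Gamma(b+1) Gamma(a-b+1))."""
    return gamma(a + 1.0) / (gamma(b + 1.0) * gamma(a - b + 1.0))

def conj_holds(alpha, k, lam):
    """Check the conjectured bound at x = lam, y = 1 (scaling covers all x, y)."""
    s = sum(gbinom(alpha * k, alpha * j) * lam ** (alpha * j) for j in range(k + 1))
    return 2.0 ** (alpha - 1.0) * s >= (1.0 + lam) ** (alpha * k)

for alpha in (2.5, 3.7):
    for k in (1, 2, 3):
        assert all(conj_holds(alpha, k, 0.1 * i) for i in range(1, 11))
    # the constant 2^{alpha-1} bounds (1+lam)^alpha/(1+lam^alpha) on (0,1]
    assert all((1 + 0.1 * i) ** alpha / (1 + (0.1 * i) ** alpha)
               <= 2.0 ** (alpha - 1.0) + 1e-12 for i in range(1, 11))
```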

The factor \(2^{\alpha -1}=\sup _{0<\lambda \le 1}(1+\lambda )^\alpha /(1+\lambda ^\alpha )\) would be sharp for \(k=1,\) \(x=\lambda \in (0,1],\) \(y=1.\) In the following sections, we will use arguments based on Theorem 3.1 to prove (3.1) and (3.2), which imply Theorem 1.3, from which Theorem 1.1 follows. The proof of (3.1) is presented in Sect. 4. It profits from the fact that

$$\begin{aligned} \sum _{\omega \in K_\alpha }(1+\lambda \omega )^{\alpha k} = (1+\lambda )^{\alpha k}, \quad \alpha \in (0,1), \end{aligned}$$

and requires only obvious properties of the function F defined in (3.4). At the beginning of Sect. 5, we show that, for \(2\le \alpha \in \mathbb N,\) the inequality (3.2) immediately follows from the classical binomial theorem. We then continue with the case where \(\alpha >1\) is not an integer, and \(\lfloor \alpha \rfloor \) is odd. Then, the set \(K_\alpha \) has \(\lfloor \alpha \rfloor \) elements, and the crude estimate

$$\begin{aligned} \sum _{\omega \in K_\alpha }(1+\lambda \omega )^{\alpha k} \le \lfloor \alpha \rfloor (1+\lambda )^{\alpha k} \end{aligned}$$

suffices to show (3.2), again using only simple properties of F. The case where \(\lfloor \alpha \rfloor \) is even is more involved, and is handled in Sect. 6. In this case, \(|K_\alpha |=\lceil \alpha \rceil \), and the obvious estimate

$$\begin{aligned} \sum _{\omega \in K_\alpha }(1+\lambda \omega )^{\alpha k} \le \lceil \alpha \rceil (1+\lambda )^{\alpha k} \end{aligned}$$
(3.6)

is too weak to lead anywhere. We first show (Lemma 6.1) that, for \(\lambda \in [\tfrac{1}{2},1],\) the right hand side of (3.6) can be strengthened to \(\alpha (1+\lambda )^{\alpha k},\) and that (3.2) easily follows from this for these values of \(\lambda .\) For smaller \(\lambda >0,\) more precise estimates for the sum \(\sum _{\omega \in K_\alpha }(1+\lambda \omega )^{\alpha k}\) and the integral in (3.5) are needed, which are developed in the remainder of Sect. 6.

4 Proof of Theorem 1.3 for \(0<\alpha <1\)

As mentioned above, it suffices to prove (3.1). For \(\alpha \in (0,1),\) we have \(K_\alpha =\{1\}.\) Since \(\sin \alpha \pi >0,\) we see from (3.5) that the desired inequality (3.1) is equivalent to

$$\begin{aligned} \int _0^1 F(t,\lambda ,k)dt< G(\lambda ,k),\quad \lambda \in (0,1],\ k\in \mathbb N, \end{aligned}$$
(4.1)

where

$$\begin{aligned} G(\lambda ,k) := \frac{\pi (1-\alpha )}{\alpha \lambda ^\alpha \sin \alpha \pi } (1+\lambda )^{\alpha k}. \end{aligned}$$

Define

$$\begin{aligned} \tilde{\delta } := \inf _{k\in \mathbb N}\Big ((1+\lambda )^{\alpha k}-1\Big )^{\frac{1}{\alpha k}} =(1+\lambda ) \inf _{k\in \mathbb N}\Big (1-(1+\lambda )^{-\alpha k}\Big )^{\frac{1}{\alpha k}}>0. \end{aligned}$$

This number is positive, as it is defined by the infimum of a sequence of positive numbers converging to a positive limit. Moreover, we define

$$\begin{aligned} \delta := \tfrac{1}{2} \min \{1,\tilde{\delta }\}>0. \end{aligned}$$

By definition, \(F(t,\lambda ,k)\) decreases w.r.t. k, and so

$$\begin{aligned} F(t,\lambda ,k) \le F(t,\lambda ,0),\quad t,\lambda \in (0,1],\ k \in \mathbb N. \end{aligned}$$

Applying Theorem 3.1 with \(k=0\) yields

$$\begin{aligned} \int _0^1 F(t,\lambda ,0)dt = G(\lambda ,0), \quad \lambda \in (0,1]. \end{aligned}$$

By these two observations,

$$\begin{aligned} \int _0^{1-\delta }F(t,\lambda ,k)dt \le \int _0^{1-\delta }F(t,\lambda ,0)dt \le G(\lambda ,0). \end{aligned}$$
(4.2)

Since \(\lambda ^{\alpha k}\le 1,\) and \((1-t)^{\alpha k}\) decreases w.r.t. t, it is clear from the definition of F that

$$\begin{aligned} \int _{1-\delta }^1 F(t,\lambda ,k)dt \le \delta ^{\alpha k} \int _0^1 F(t,\lambda ,0)dt = \delta ^{\alpha k}G(\lambda ,0). \end{aligned}$$
(4.3)

By definition of \(\delta \), we have

$$\begin{aligned} 1+\delta ^{\alpha k}<(1+\lambda )^{\alpha k}, \quad \lambda \in (0,1],\ k\in \mathbb N. \end{aligned}$$
(4.4)

Now note that

$$\begin{aligned} \int _0^{1}F(t,\lambda ,k)dt \le (1+\delta ^{\alpha k})G(\lambda ,0) < G(\lambda ,k), \end{aligned}$$
(4.5)

where the first estimate follows from (4.2) and (4.3), and the second one from (4.4). Thus, (4.1) is established.

5 Proof of Theorem 1.3 for \(\alpha \in \mathbb N\) or \(\lfloor \alpha \rfloor \) odd

If \(2\le \alpha \in \mathbb N\) is an integer, then the proof of (3.2) is very easy, as we are dealing with a classical binomial sum with some summands removed:

$$\begin{aligned} \sum _{j=0}^k \left( {\begin{array}{c}\alpha k\\ \alpha j\end{array}}\right) \lambda ^{\alpha j} < \sum _{j=0}^{\alpha k} \left( {\begin{array}{c}\alpha k\\ j\end{array}}\right) \lambda ^{j}=(1+\lambda )^{\alpha k}. \end{aligned}$$

In the remainder of this section, we prove (3.2) for \(1<\alpha \notin \mathbb N\) with \(\lfloor \alpha \rfloor \) odd. Our approach is similar to the preceding section. First, observe that

$$\begin{aligned} |K_\alpha | = \Big \lfloor \frac{\alpha }{2} \Big \rfloor - \Big \lfloor {-\frac{\alpha }{2}} \Big \rfloor = \lfloor \alpha \rfloor , \end{aligned}$$

and that

$$\begin{aligned} \alpha (1+\lambda )^{\alpha k}&- \sum _{\omega \in K_\alpha }(1+\lambda \omega )^{\alpha k}\\&\ge \alpha (1+\lambda )^{\alpha k}- \lfloor \alpha \rfloor (1+\lambda )^{\alpha k} > \alpha - \lfloor \alpha \rfloor . \end{aligned}$$

Therefore, the sequence

$$\begin{aligned} A_k:=\bigg ( \frac{\alpha (1+\lambda )^{\alpha k} - \sum _{\omega \in K_\alpha }(1+\lambda \omega )^{\alpha k}}{\alpha - \lfloor \alpha \rfloor } -1 \bigg )^\frac{1}{\alpha k}, \quad k\in \mathbb N, \end{aligned}$$

is well-defined and positive, and

$$\begin{aligned} \lim _{k\rightarrow \infty } A_k&= (1+\lambda ) \lim _{k\rightarrow \infty }(\alpha -1)^{\frac{1}{\alpha k}} \\&\qquad \qquad \times \bigg ( \frac{1 - \sum _{\omega \in K_\alpha \setminus \{1\}} \frac{(1+\lambda \omega )^{\alpha k}}{(\alpha -1)(1+\lambda )^{\alpha k}}}{\alpha - \lfloor \alpha \rfloor }-\frac{1}{(\alpha -1)(1+\lambda )^{\alpha k}} \bigg )^\frac{1}{\alpha k}\\&=1+\lambda >0. \end{aligned}$$

We can thus define the positive number

$$\begin{aligned} \hat{\delta } := \frac{1}{2}\min \Big \{1, \inf _{k\in \mathbb N} A_k\Big \}>0. \end{aligned}$$

Since odd \(\lfloor \alpha \rfloor \) implies \(\sin \alpha \pi <0\) for \(\alpha \notin \mathbb N,\) it is clear from (3.5) that (3.2) is equivalent to

$$\begin{aligned} \int _0^1 F(t,\lambda ,k)dt < \hat{G}(\lambda ,k),\quad \lambda \in (0,1],\ k\in \mathbb N, \end{aligned}$$
(5.1)

where

$$\begin{aligned} \hat{G}(\lambda ,k):= -\frac{\pi }{\alpha \lambda ^\alpha \sin \alpha \pi }\left( \alpha (1+\lambda )^{\alpha k}- \sum _{\omega \in K_\alpha }(1+\lambda \omega )^{\alpha k} \right) . \end{aligned}$$

By the same argument that gave us the first inequality in (4.5), we find

$$\begin{aligned} \int _0^1 F(t,\lambda ,k)dt \le (1+\hat{\delta }^{\alpha k})\hat{G}(\lambda ,0), \qquad \lambda \in (0,1],\ k\in \mathbb N. \end{aligned}$$

Note that

$$\begin{aligned} \hat{G}(\lambda ,0)= -\frac{\pi }{\alpha \lambda ^\alpha \sin \alpha \pi }( \alpha - \lfloor \alpha \rfloor ). \end{aligned}$$

The proof of (3.2) with \(\lfloor \alpha \rfloor \) odd will thus be finished if we can show that

$$\begin{aligned} (1+\hat{\delta }^{\alpha k})\hat{G}(\lambda ,0) < \hat{G}(\lambda ,k),\quad \lambda \in (0,1],\ k\in \mathbb N. \end{aligned}$$

But this is equivalent to

$$\begin{aligned} (1+\hat{\delta }^{\alpha k})(\alpha - \lfloor \alpha \rfloor ) < \alpha (1+\lambda )^{\alpha k}- \sum _{\omega \in K_\alpha }(1+\lambda \omega )^{\alpha k}, \end{aligned}$$

which follows from the definition of \(\hat{\delta }.\)

6 Proof of Theorem 1.3 for \(\lfloor \alpha \rfloor \) even

We now prove (3.2) in the case that

$$\begin{aligned} 2<\alpha \notin \mathbb N,\quad \lfloor \alpha \rfloor =2m,\quad m\in \mathbb N. \end{aligned}$$
(6.1)

As \(\sin \alpha \pi >0,\) it follows from (3.5) that (3.2) is equivalent to

$$\begin{aligned} \int _0^1 F(t,\lambda ,k)dt > \frac{\pi }{\alpha \lambda ^\alpha \sin \alpha \pi } \left( \sum _{\omega \in K_\alpha }(1+\lambda \omega )^{\alpha k} -\alpha (1+\lambda )^{\alpha k}\right) ,\nonumber \\ \lambda \in (0,1],\ k\in \mathbb N. \end{aligned}$$
(6.2)

Since \(F(\cdot ,\lambda ,k)\) is positive on \((0,1),\) the following lemma establishes (6.2) for \(\lambda \in [\tfrac{1}{2},1].\)

Lemma 6.1

Let \(\alpha >2\) be as in (6.1). Then we have

$$\begin{aligned} \sum _{\omega \in K_\alpha }(1+\lambda \omega )^{\alpha k} \le \alpha (1+\lambda )^{\alpha k}, \quad \lambda \in [\tfrac{1}{2},1],\ k\in \mathbb N. \end{aligned}$$

Proof

By (3.3), we have

$$\begin{aligned} \sum _{\omega \in K_\alpha }(1+\lambda \omega )^{\alpha k}&\le \sum _{\omega \in K_\alpha }|1+\lambda \omega |^{\alpha k}\nonumber \\&=\sum _{j=-m}^m \Big (1+2\lambda \cos \frac{2j\pi }{\alpha }+\lambda ^2\Big )^{\alpha k/2}, \quad \lambda \in (0,1],\ k\in \mathbb N. \end{aligned}$$
(6.3)

We will show that

$$\begin{aligned} \sum _{j=-m}^m \Big (1+2\lambda \cos \frac{2j\pi }{\alpha }+\lambda ^2\Big ) \le \alpha (1+\lambda )^2,\quad \lambda \in [\tfrac{1}{2},1]. \end{aligned}$$
(6.4)

Then, (6.3) and (6.4) imply

$$\begin{aligned} \sum _{j=-m}^m \Big (1+&2\lambda \cos \frac{2j\pi }{\alpha }+\lambda ^2\Big )^{\alpha k/2}= (1+\lambda )^{\alpha k} \sum _{j=-m}^m \bigg (\frac{1+2\lambda \cos \frac{2j\pi }{\alpha }+\lambda ^2}{(1+\lambda )^2}\bigg )^{\alpha k/2} \\&\qquad \qquad \le (1+\lambda )^{\alpha k} \sum _{j=-m}^m \frac{1+2\lambda \cos \frac{2j\pi }{\alpha }+\lambda ^2}{(1+\lambda )^2} \le \alpha (1+\lambda )^{\alpha k}, \end{aligned}$$

which proves the lemma. To prove (6.4), observe that (6.1) implies

$$\begin{aligned} |K_\alpha | = \Big \lfloor \frac{\alpha }{2} \Big \rfloor - \Big \lfloor {-\frac{\alpha }{2}} \Big \rfloor = m-(-m-1) = 2m+1 = \lceil \alpha \rceil . \end{aligned}$$

Using the geometric series to evaluate the cosine sum (see, e.g., p. 102 in [19]), we obtain

$$\begin{aligned}{} & {} \sum _{j=-m}^m \Big (1+2\lambda \cos \frac{2j\pi }{\alpha }+\lambda ^2\Big ) = \lceil \alpha \rceil (1+\lambda ^2) \nonumber \\{} & {} \qquad +2\lambda \bigg ( 1 + 2 \frac{\cos \big ((m+1)\pi /\alpha \big )\sin (m\pi /\alpha )}{\sin (\pi /\alpha )} \bigg ). \end{aligned}$$
(6.5)

Here, \(\cos \big ((m+1)\pi /\alpha \big )<0,\) and the sines are both positive. By (6.5) and the elementary inequalities

$$\begin{aligned} \cos x&\le -1 +\tfrac{1}{2}(x-\pi )^2,\quad x\in \mathbb R, \\ \sin x&\le x,\quad x\ge 0, \\ \sin x&\ge x-\tfrac{1}{6} x^3,\quad x\ge 0, \end{aligned}$$

the following statement is sufficient for the validity of (6.4):

$$\begin{aligned}{} & {} (A+1)(1+\lambda ^2)+2\lambda \bigg ( 1 + 2\frac{\big ({-1}+\frac{1}{2}\big (\frac{(M+1)\pi }{A}-\pi \big )^2\big )\frac{M\pi }{A}}{\frac{\pi }{A}-\frac{1}{6} (\frac{\pi }{A})^3} \bigg ) \le A(1+\lambda )^2,\nonumber \\{} & {} \quad A>2,\ M>0,\ 2M< A < 2M+1,\ \tfrac{1}{2} \le \lambda \le 1. \end{aligned}$$
(6.6)

This is a polynomial inequality in real variables with polynomial constraints, which can be verified by cylindrical algebraic decomposition, using a computer algebra system. For instance, using Mathematica’s Reduce command on (6.6), with the first \(\le \) replaced by >, yields False. This shows that (6.6) is correct, which finishes the proof.
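Independently of the computer algebra verification, (6.6) can at least be probed on a grid (a rough sketch with an arbitrary grid; it does not replace the cylindrical algebraic decomposition):

```python
import math

def lhs66(A, M, lam):
    """Left hand side of (6.6)."""
    p = math.pi / A
    num = (-1.0 + 0.5 * ((M + 1) * p - math.pi) ** 2) * M * p
    den = p - p ** 3 / 6.0
    return (A + 1.0) * (1.0 + lam * lam) + 2.0 * lam * (1.0 + 2.0 * num / den)

ok = True
for M in (1, 2, 3):
    for i in range(1, 20):                     # A sweeps the interval (2M, 2M+1)
        A = 2 * M + i / 20.0
        for j in range(11):                    # lam sweeps [1/2, 1]
            lam = 0.5 + 0.05 * j
            ok = ok and lhs66(A, M, lam) <= A * (1.0 + lam) ** 2
assert ok
```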

The following two lemmas will be required for some estimates of the function \(F(t,\lambda ,k)\) for small \(\lambda .\)

Lemma 6.2

For \(\alpha \) as in (6.1), we have

$$\begin{aligned} \int _0^\infty \frac{s^\alpha }{s^{2\alpha } -2s^\alpha \cos \alpha \pi +1}ds =\frac{\pi \sin \big (\frac{\lfloor \alpha \rfloor +1}{\alpha }\pi \big )}{\alpha \sin (\alpha \pi ) \sin \big (\frac{\alpha +1}{\alpha }\pi \big )} \end{aligned}$$
(6.7)

and

$$\begin{aligned} \int _0^\infty \frac{s^{\alpha -1}}{s^{2\alpha } -2s^\alpha \cos \alpha \pi +1}ds =\frac{\pi (\lceil \alpha \rceil -\alpha )}{\alpha \sin \alpha \pi }. \end{aligned}$$
(6.8)

Proof

By substituting \(s^\alpha =w,\)

$$\begin{aligned} \int _0^\infty \frac{s^\alpha }{s^{2\alpha } -2s^\alpha \cos \alpha \pi +1}ds&= \frac{1}{\alpha } \int _0^\infty \frac{w^{1/\alpha }}{w^{2} -2w\cos \alpha \pi +1}dw\\&= \frac{1}{\alpha } \int _0^\infty \frac{w^{1/\alpha }}{w^{2} +2w\cos (\pi (\alpha -\lfloor \alpha \rfloor -1))+1}dw, \end{aligned}$$

where we have used that \(\lfloor \alpha \rfloor \) is even. The first formula now follows from 3.242 on p. 322 of [10], with \(m=1/(2\alpha ),\) \(n=\tfrac{1}{2}\) and \(t=\pi (\alpha -\lfloor \alpha \rfloor -1).\) As for (6.8),

$$\begin{aligned} \int _0^\infty \frac{s^{\alpha -1}}{s^{2\alpha } -2s^\alpha \cos \alpha \pi +1}ds&= \frac{1}{\alpha } \int _0^\infty \frac{1}{w^{2} -2w\cos \alpha \pi +1}dw\\&= \frac{1}{\alpha } \int _0^\infty \frac{1}{w^{2} +2w\cos (\pi (\alpha -\lfloor \alpha \rfloor -1))+1}dw. \end{aligned}$$

The identity (6.8) then follows from 11a) on p. 14 of [11], with

$$\begin{aligned} \lambda = \pi (\alpha -\lfloor \alpha \rfloor -1) =\pi (\alpha -\lceil \alpha \rceil ). \end{aligned}$$

\(\square \)
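Both closed forms can be checked by quadrature. Substituting \(w=s^\alpha \) and folding \((1,\infty )\) onto \((0,1)\) via \(w\mapsto 1/w\) (and substituting \(w=v^\alpha \) once more in (6.7) to remove the singularity at the origin) turns both integrals into proper integrals over \((0,1)\). A sketch with our own helper names and tolerances:

```python
from math import pi, sin, cos, floor, ceil

def simpson(f, a, b, n=20000):
    """Composite Simpson rule with n (even) panels."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

alpha = 2.3                                   # floor(alpha) = 2, as in (6.1)
c = cos(alpha * pi)

# (6.7) becomes int_0^1 (v^a + v^{a-2})/(v^{2a} - 2 v^a c + 1) dv
i67 = simpson(lambda v: (v ** alpha + v ** (alpha - 2.0))
              / (v ** (2 * alpha) - 2.0 * v ** alpha * c + 1.0), 0.0, 1.0)
rhs67 = pi * sin((floor(alpha) + 1) * pi / alpha) / (alpha * sin(alpha * pi)
        * sin((alpha + 1) * pi / alpha))
assert abs(i67 - rhs67) < 1e-4 * abs(rhs67)

# (6.8) becomes (2/alpha) int_0^1 dw/(w^2 - 2 w c + 1)
i68 = (2.0 / alpha) * simpson(lambda w: 1.0 / (w * w - 2.0 * w * c + 1.0), 0.0, 1.0)
rhs68 = pi * (ceil(alpha) - alpha) / (alpha * sin(alpha * pi))
assert abs(i68 - rhs68) < 1e-6 * abs(rhs68)
```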

Lemma 6.3

Again, suppose that \(\alpha \) satisfies (6.1). For \(k\in \mathbb N,\) we have

$$\begin{aligned}{} & {} \int _0^{1/\lambda } \frac{s^{\alpha -1}(1-\lambda s)^{\alpha k}}{s^{2\alpha }-2s^\alpha \cos \alpha \pi +1} ds= \int _0^{\infty } \frac{s^{\alpha -1}}{s^{2\alpha }-2s^\alpha \cos \alpha \pi +1} ds\\{} & {} \quad -\alpha k \lambda \int _0^{\infty } \frac{s^{\alpha }}{s^{2\alpha }-2s^\alpha \cos \alpha \pi +1} ds +\textrm{o}(\lambda ), \quad \lambda \downarrow 0. \end{aligned}$$

Proof

Fix some \(\beta \in (\tfrac{1}{\alpha },\tfrac{1}{2}).\) We have

$$\begin{aligned} \int _{\lambda ^{-\beta }}^\infty \frac{s^{\alpha -1}}{s^{2\alpha } -2s^\alpha \cos \alpha \pi +1}ds =\textrm{O}(\lambda ^{\alpha \beta }) =\textrm{o}(\lambda ),\quad \lambda \downarrow 0, \end{aligned}$$
(6.9)

and

$$\begin{aligned} \int _{\lambda ^{-\beta }}^\infty \frac{s^{\alpha }}{s^{2\alpha } -2s^\alpha \cos \alpha \pi +1}ds =\textrm{O}(\lambda ^{\beta (\alpha -1)}) =\textrm{o}(1),\quad \lambda \downarrow 0. \end{aligned}$$
(6.10)

These assertions easily follow from the fact that the integrand is of order \(\textrm{O}(s^{-\alpha -1})\) resp. \(\textrm{O}(s^{-\alpha })\) at infinity. Moreover, we have the uniform expansion

$$\begin{aligned} (1-\lambda s)^{\alpha k}&= 1 - \alpha k\lambda s + \textrm{O}(\lambda ^{2-2\beta }) \nonumber \\&=1 - \alpha k\lambda s +\textrm{o}(\lambda ), \quad 0\le s\le \lambda ^{-\beta },\ \lambda \downarrow 0. \end{aligned}$$
(6.11)

Since \(0\le (1-\lambda s)^{\alpha k}\le 1,\) (6.9) implies

$$\begin{aligned} \int _0^{1/\lambda } \frac{s^{\alpha -1}(1-\lambda s)^{\alpha k}}{s^{2\alpha }-2s^\alpha \cos \alpha \pi +1} ds&= \int _0^{\lambda ^{-\beta }} \frac{s^{\alpha -1}(1-\lambda s)^{\alpha k}}{s^{2\alpha }-2s^\alpha \cos \alpha \pi +1} ds + \textrm{o}(\lambda ). \end{aligned}$$

The statement now follows from (6.11), (6.10) and (6.9).

We now continue the proof of (6.2). From the definition of F, it is clear that

$$\begin{aligned} \int _0^1 F(t,\lambda ,k)dt&\ge \int _0^1 \frac{t^{\alpha -1}(1-t)^{\alpha k}}{|t^\alpha -\lambda ^\alpha e^{-i\alpha \pi }|^2} dt \nonumber \\&= \lambda ^{-\alpha } \int _0^{1/\lambda } \frac{s^{\alpha -1}(1-\lambda s)^{\alpha k}}{s^{2\alpha }-2s^\alpha \cos \alpha \pi +1} ds. \end{aligned}$$
(6.12)

As noted above, from (3.3), we have

$$\begin{aligned} \sum _{\omega \in K_\alpha }(1+\lambda \omega )^{\alpha k}&\le \sum _{\omega \in K_\alpha }|1+\lambda \omega |^{\alpha k} \nonumber \\&=1 + 2\sum _{j=1}^m \Big (1+2\lambda \cos \frac{2j\pi }{\alpha }+\lambda ^2\Big )^{\alpha k/2}, \quad \lambda \in (0,1],\ k\in \mathbb N. \end{aligned}$$
(6.13)

Since Lemma 6.1 settles the case \(\lambda \in [\tfrac{1}{2},1],\) we may assume \(\lambda \in (0,\tfrac{1}{2})\) in what follows. Using (6.12) and (6.13) in (6.2), we see that it is sufficient to show

$$\begin{aligned}{} & {} 2\sum _{j=1}^m \Big (1+2\lambda \cos \frac{2j\pi }{\alpha }+\lambda ^2\Big )^{\alpha k/2} +(1-\alpha )(1+\lambda )^{\alpha k}\nonumber \\{} & {} \quad -\frac{\alpha \sin \alpha \pi }{\pi } \int _0^{1/\lambda } \frac{s^{\alpha -1}(1-\lambda s)^{\alpha k}}{s^{2\alpha }-2s^\alpha \cos \alpha \pi +1} ds < 0, \quad \lambda \in (0,\tfrac{1}{2}),\ k\in \mathbb N.\qquad \end{aligned}$$
(6.14)

This will be proven in the following two lemmas.

Lemma 6.4

For \(\lambda \in (0,\tfrac{1}{2}),\) the left hand side of (6.14) is decreasing in \(\lambda \).

Proof

Differentiating the left hand side of (6.14) with respect to \(\lambda \) (there is no boundary term, since the integrand vanishes at \(s=1/\lambda \)), we obtain

$$\begin{aligned} 2\alpha k \sum _{j=1}^m&\Big ( \cos \frac{2j\pi }{\alpha }+\lambda \Big ) \Big (1+2\lambda \cos \frac{2j\pi }{\alpha }+\lambda ^2\Big )^{\alpha k/2-1} + \alpha k(1-\alpha )(1+\lambda )^{\alpha k-1}\nonumber \\&\qquad +\alpha k\,\frac{\alpha \sin \alpha \pi }{\pi } \int _0^{1/\lambda } \frac{s^{\alpha }(1-\lambda s)^{\alpha k-1}}{s^{2\alpha }-2s^\alpha \cos \alpha \pi +1} ds \nonumber \\&\le 2\alpha k \sum _{1\le j\le \alpha /3}\Big ( \cos \frac{2j\pi }{\alpha }+\lambda \Big ) \Big (1+2\lambda \cos \frac{2j\pi }{\alpha }+\lambda ^2\Big )^{\alpha k/2-1}\nonumber \\&\qquad + \alpha k(1-\alpha )(1+\lambda )^{\alpha k-1} +\alpha k\,\frac{\alpha \sin \alpha \pi }{\pi } \int _0^{\infty } \frac{s^{\alpha }}{s^{2\alpha }-2s^\alpha \cos \alpha \pi +1} ds. \end{aligned}$$
(6.15)

In the inequality, we discard only negative terms when passing from \(\sum _{j=1}^m\) to \(\sum _{1\le j\le \alpha /3}\): for \(j>\alpha /3\) we have \(2j\pi /\alpha \in (\tfrac{2\pi }{3},\pi ),\) hence \(\cos (2j\pi /\alpha )<-\tfrac{1}{2}<-\lambda ,\) which is where we use our assumption that \(\lambda <\tfrac{1}{2}.\) Moreover, the integral has been enlarged using \(0\le (1-\lambda s)^{\alpha k-1}\le 1.\) By Lemma 6.2, the integral in the last term of (6.15) satisfies

$$\begin{aligned} \frac{\alpha \sin \alpha \pi }{\pi } \int _0^{\infty } \frac{s^{\alpha }}{s^{2\alpha }-2s^\alpha \cos \alpha \pi +1} ds = \frac{\sin \big (\frac{\lfloor \alpha \rfloor +1}{\alpha }\pi \big )}{\sin \big (\frac{\alpha +1}{\alpha }\pi \big )} < 1, \end{aligned}$$
(6.16)

where the inequality follows from

$$\begin{aligned} 1< \frac{\lfloor \alpha \rfloor +1}{\alpha }< \frac{\alpha +1}{\alpha } < \frac{3}{2}, \end{aligned}$$
(6.17)

since \(x\mapsto \sin (\pi x)\) is negative and decreasing on \((1,\tfrac{3}{2}).\) We can thus estimate (6.15) further by

$$\begin{aligned} 2\alpha k\sum _{1\le j\le \alpha /3} (1+\lambda )&\big ((1+\lambda )^2\big )^{\alpha k/2-1} +\alpha k(1-\alpha )(1+\lambda )^{\alpha k-1} +\alpha k \\&= 2\alpha k \Big \lfloor \frac{\alpha }{3} \Big \rfloor (1+\lambda )^{\alpha k-1} +\alpha k(1-\alpha )(1+\lambda )^{\alpha k-1} +\alpha k\\&\le \alpha k(1+\lambda )^{\alpha k-1}\Big (2\Big \lfloor \frac{\alpha }{3} \Big \rfloor +2-\alpha \Big ) \le 0. \end{aligned}$$

Indeed, it is easy to see that \(2\lfloor \alpha /3 \rfloor +2-\alpha \le 0\) for \(\alpha \) as in (6.1).

Presumably, the preceding lemma can be extended to \(\lambda \in (0,1],\) but this would require a much better estimate for \(\sum _{\omega \in K_\alpha }(1+\lambda \omega )^{\alpha k}\) than the one we have used. The proof of (3.2) could then possibly be streamlined, because Lemma 6.1 would no longer be required.
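The proof of Lemma 6.4 rests on two numerical facts: the evaluation (6.16) of the limiting integral, and the sign conditions \(\cos (2j\pi /\alpha )<-\tfrac{1}{2}\) for \(j>\alpha /3\) and \(2\lfloor \alpha /3 \rfloor +2-\alpha \le 0.\) As an illustration (not part of the proof), the sketch below verifies them numerically, for \(\alpha =5/2\) in (6.16) and for grids of non-integer \(\alpha \) with \(\lfloor \alpha \rfloor \) even, which we assume is the range described by (6.1).

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    total += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return total * h / 3

# 1) The evaluation (6.16) for alpha = 5/2 (here cos(alpha*pi) = 0, so the
#    integral is int_0^inf s^(5/2)/(s^5+1) ds; we split at s = 1 and
#    substitute s -> 1/s on the unbounded part):
ALPHA = 2.5
B = (simpson(lambda s: s ** 2.5 / (s ** 5 + 1), 0.0, 1.0, 20_000)
     + simpson(lambda u: u ** 0.5 / (1 + u ** 5), 0.0, 1.0, 20_000))
lhs = ALPHA * math.sin(ALPHA * math.pi) / math.pi * B
rhs = (math.sin((math.floor(ALPHA) + 1) * math.pi / ALPHA)
       / math.sin((ALPHA + 1) * math.pi / ALPHA))
print(f"(6.16): lhs={lhs:.6f}, rhs={rhs:.6f}")

# 2) The sign conditions, on grids of non-integer alpha with floor(alpha)
#    even (our reading of the range (6.1)):
alphas = [lo + i * 0.01 for lo in (2, 4, 6, 8) for i in range(1, 100)]
for a in alphas:
    assert 2 * math.floor(a / 3) + 2 - a <= 0
    # discarded indices j > alpha/3 satisfy cos(2*j*pi/alpha) < -1/2:
    m = (math.ceil(a) - 1) // 2
    for j in range(math.floor(a / 3) + 1, m + 1):
        assert math.cos(2 * j * math.pi / a) < -0.5
print("sign conditions verified on the grids")
```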

Lemma 6.5

Let \(\alpha \) be as in (6.1), and \(k\in \mathbb N.\) Then (6.14) holds for \(\lambda >0\) sufficiently small.

Proof

Since \(m=(\lceil \alpha \rceil -1) /2,\) expanding the first two terms of (6.14) in powers of \(\lambda \) gives

$$\begin{aligned}{} & {} \lceil \alpha \rceil -\alpha +\alpha k\left( 2\sum _{j=1}^m \cos \frac{2j\pi }{\alpha }+1-\alpha \right) \lambda +\textrm{O}(\lambda ^2)\\{} & {} \quad -\frac{\alpha \sin \alpha \pi }{\pi } \int _0^{1/\lambda } \frac{s^{\alpha -1}(1-\lambda s)^{\alpha k}}{s^{2\alpha }-2s^\alpha \cos \alpha \pi +1} ds. \end{aligned}$$

By Lemmas 6.2 and 6.3, this further equals

$$\begin{aligned} \lceil \alpha \rceil -\alpha +\alpha&k\bigg ( 2\sum _{j=1}^m \cos \frac{2j\pi }{\alpha }+1-\alpha \bigg )\lambda \nonumber \\&\qquad \qquad \quad +\alpha - \lceil \alpha \rceil +\alpha k \lambda \frac{\sin \big (\frac{\lfloor \alpha \rfloor +1}{\alpha }\pi \big )}{\sin \big (\frac{\alpha +1}{\alpha }\pi \big )} +\textrm{o}(\lambda )\nonumber \\&=\alpha k\left( 2\sum _{j=1}^m \cos \frac{2j\pi }{\alpha }+1-\alpha +\frac{\sin \big (\frac{\lfloor \alpha \rfloor +1}{\alpha }\pi \big )}{\sin \big (\frac{\alpha +1}{\alpha }\pi \big )} \right) \lambda +\textrm{o}(\lambda ), \end{aligned}$$
(6.18)

and we see that (6.14) becomes sharp as \(\lambda \downarrow 0,\) as its left hand side is \(\textrm{O}(\lambda )\). This is no surprise, since the inequality (3.2) we are proving is obviously sharp for \(\lambda \downarrow 0.\) It remains to show that the coefficient of \(\lambda \) in (6.18) is negative. Similarly to (6.15), we have the bound

$$\begin{aligned} 2\sum _{j=1}^m \cos \frac{2j\pi }{\alpha }\le 2\sum _{1\le j\le \alpha /3}\cos \frac{2j\pi }{\alpha } \le 2\Big \lfloor \frac{\alpha }{3} \Big \rfloor . \end{aligned}$$

Thus, we wish to show that

$$\begin{aligned} 2\Big \lfloor \frac{\alpha }{3} \Big \rfloor +1-\alpha +\frac{\sin \big (\frac{\lfloor \alpha \rfloor +1}{\alpha }\pi \big )}{\sin \big (\frac{\alpha +1}{\alpha }\pi \big )} <0. \end{aligned}$$
(6.19)

Since the sine quotient is \(<1\) by (6.16), the left hand side of (6.19) is less than \(2\lfloor \alpha /3 \rfloor +2-\alpha \le 2-\alpha /3,\) so (6.19) clearly holds for \(\alpha >6.\) Now consider \(\alpha \in (4,5).\) It is easy to verify that

$$\begin{aligned} \sin \Big (\frac{4+1}{\alpha }\pi \Big )< -\frac{1}{\sqrt{2}}+ \frac{6\pi (\alpha -4)}{16\sqrt{2}},\quad 4<\alpha <5, \end{aligned}$$

and that

$$\begin{aligned} \sin \Big (\frac{\alpha +1}{\alpha }\pi \Big ) > -\frac{1}{\sqrt{2}}+ \frac{\pi (\alpha -4)}{20\sqrt{2}},\quad 4<\alpha <5. \end{aligned}$$

Using these estimates in (6.19) leads to a quadratic inequality, which is straightforward to check. The proof of (6.19) for \(\alpha \in (2,3)\) is similar.
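As an independent plausibility check (again not part of the proof), the inequality (6.19) and the two sine estimates above can be verified numerically; the sketch below does this on grids strictly inside \((2,3)\) and \((4,5),\) together with a few sample values of \(\alpha >6.\)

```python
import math

def quotient(a):
    """The sine quotient appearing in (6.16) and (6.19)."""
    return (math.sin((math.floor(a) + 1) * math.pi / a)
            / math.sin((a + 1) * math.pi / a))

def lhs_619(a):
    """Left hand side of (6.19)."""
    return 2 * math.floor(a / 3) + 1 - a + quotient(a)

# (6.19) on grids strictly inside (2,3) and (4,5), and for sample alpha > 6:
grid = ([2 + i * 0.01 for i in range(1, 100)]
        + [4 + i * 0.01 for i in range(1, 100)]
        + [6.5, 8.5, 10.5, 20.5, 100.5])
assert all(lhs_619(a) < 0 for a in grid)

# The two elementary sine estimates used for alpha in (4,5):
for i in range(1, 100):
    a = 4 + i * 0.01
    assert (math.sin(5 * math.pi / a)
            < -1 / math.sqrt(2) + 6 * math.pi * (a - 4) / (16 * math.sqrt(2)))
    assert (math.sin((a + 1) * math.pi / a)
            > -1 / math.sqrt(2) + math.pi * (a - 4) / (20 * math.sqrt(2)))
print("(6.19) and the sine estimates hold on the grids")
```

Both estimates degenerate to equalities as \(\alpha \downarrow 4\) (both sides tend to \(-1/\sqrt{2}\)), which is why the grids stay strictly inside the open intervals.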

Clearly, Lemmas 6.4 and 6.5 establish (6.14). As argued above (6.14), we have thus proven (6.2), hence (3.2), and the proof of Theorem 1.3 is complete.