1 Introduction

A polynomial or infinite series p(z) is said to be unimodal (resp. log-concave) if the sequence of its coefficients is unimodal (resp. log-concave). More explicitly, let \(a_k\) denote the coefficient of \(z^k\) in p(z). We say p(z) is unimodal if there exists an index m such that \(a_0 \le a_1 \le \cdots \le a_m\) and \(a_m \ge a_{m + 1} \ge \cdots \). We say p(z) is log-concave if \(a_k \ge 0\) for all \(k\ge 0\), and \(a_k^2 \ge a_{k - 1}a_{k + 1}\) for all \(k \ge 1\).
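Both definitions can be checked directly on a finite coefficient sequence. The following minimal sketch (the function names are ours, not from any library) implements them verbatim:

```python
def is_unimodal(coeffs):
    """Rises weakly to a peak, then falls weakly."""
    i = 0
    # climb the weakly increasing prefix
    while i + 1 < len(coeffs) and coeffs[i] <= coeffs[i + 1]:
        i += 1
    # the remaining suffix must be weakly decreasing
    return all(coeffs[j] >= coeffs[j + 1] for j in range(i, len(coeffs) - 1))

def is_log_concave(coeffs):
    """All a_k >= 0 and a_k^2 >= a_{k-1} a_{k+1} for interior k."""
    if any(x < 0 for x in coeffs):
        return False
    return all(coeffs[j] ** 2 >= coeffs[j - 1] * coeffs[j + 1]
               for j in range(1, len(coeffs) - 1))
```

Note that a log-concave sequence with strictly positive entries is automatically unimodal, which is why log-concavity is the stronger property to aim for.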

Many methods have been developed to show the unimodality and log-concavity of various combinatorial polynomials. We refer to the excellent surveys of Brenti [3] and Stanley [10, 11] for a collection of these methods, which encompass algebra, combinatorics, and geometry. For example, breakthrough works by Adiprasito, Braden, Huh, Katz, Matherne, Proudfoot, Wang, and many others [1, 2, 7] use methods in algebraic geometry to settle the Mason and Heron-Rota-Welsh conjectures on the log-concavity of the chromatic polynomial of graphs and the characteristic polynomial of matroids.

This paper settles the unimodality of the Nekrasov-Okounkov polynomials, an important family of polynomials in combinatorics and representation theory. The Nekrasov-Okounkov polynomials are defined by

$$\begin{aligned} Q_n(z) := \sum _{\left| {\lambda } \right| = n} \prod _{h \in \mathcal {H}(\lambda )} \left( 1 + \frac{z}{h^2}\right) \end{aligned}$$

where \(\lambda \) runs over all partitions of n (equivalently, all Young diagrams of size n), and \(\mathcal {H}(\lambda )\) denotes the multiset of hook lengths associated with \(\lambda \). They are central in the groundbreaking Nekrasov-Okounkov identity [8]

$$\begin{aligned} \sum _{n = 0}^{\infty } Q_n(z)q^n = \prod _{m = 1}^{\infty } (1 - q^m)^{-z-1}. \end{aligned}$$
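For small n, both sides of this identity can be computed exactly. The sketch below (our own helper functions, using exact rational arithmetic) enumerates partitions and hook lengths to evaluate \(Q_n(z)\) at an integer point, and compares against the q-expansion of the infinite product:

```python
from fractions import Fraction

def partitions(n, max_part=None):
    """Yield all partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def hook_lengths(lam):
    """Multiset of hook lengths of the Young diagram of lam."""
    hooks = []
    for i, row in enumerate(lam):
        for j in range(row):
            arm = row - j - 1                            # cells to the right
            leg = sum(1 for r in lam[i + 1:] if r > j)   # cells below
            hooks.append(arm + leg + 1)
    return hooks

def Q(n, z):
    """Q_n(z) as a sum over partitions of n, in exact arithmetic."""
    total = Fraction(0)
    for lam in partitions(n):
        term = Fraction(1)
        for h in hook_lengths(lam):
            term *= 1 + Fraction(z, h * h)
        total += term
    return total

def product_side(z, N):
    """q-expansion of prod_{m>=1} (1 - q^m)^(-z-1) up to q^N (integer z)."""
    coeffs = [Fraction(1)] + [Fraction(0)] * N
    for m in range(1, N + 1):
        # (1 - q^m)^(-z-1) = sum_j binom(z + j, j) q^{m j}
        factor = [Fraction(0)] * (N + 1)
        b, j = Fraction(1), 0
        while m * j <= N:
            factor[m * j] = b
            b *= Fraction(z + 1 + j, j + 1)
            j += 1
        coeffs = [sum(coeffs[i] * factor[t - i] for i in range(t + 1))
                  for t in range(N + 1)]
    return coeffs

N = 5
lhs = [Q(n, 1) for n in range(N + 1)]   # Q_n(1) for n <= 5
rhs = product_side(1, N)                # the same numbers from the product
```

For instance \(Q_1(1) = 2\) and \(Q_2(1) = 5\), matching the q and \(q^2\) coefficients of \(\prod (1 - q^m)^{-2}\); also \(Q_n(0) = p(n)\), the number of partitions of n.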

In [4], Heim and Neuhauser conjectured that the polynomials \(Q_n(z)\) are unimodal for all n. In [6], the author and Letong Hong showed that for sufficiently large n, the unimodality of \(Q_n(z)\) is implied by the following conjecture.

Conjecture 1.1

Let f(z) be the infinite series defined by

$$\begin{aligned} f(z) := \sum _{n \ge 1} \sigma _{-1}(n)z^n \end{aligned}$$
(1)

where \(\sigma _{-1}(n) = \sum _{d | n} d^{-1}\). Let \(a_{n,k}\) be the coefficient of \(z^n\) in \(f^k(z)\). There exists a constant \(C > 1\) such that for all \(k\ge 2\) and \(n \le C^k\), we have

$$\begin{aligned} a_{n,k}^2 \ge a_{n - 1, k}a_{n + 1, k}. \end{aligned}$$

In other words, an exponentially long initial segment of \(f^k(z)\) is log-concave as k goes to infinity. See [5] for more comments on this conjecture.
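The coefficients \(a_{n,k}\) can be computed exactly for small k. A minimal sketch (helper names are ours); note that \(a_{n,k} = 0\) for \(n < k\) since f has no constant term, so the log-concavity check starts at \(n = k\):

```python
from fractions import Fraction

def sigma_minus1(n):
    """sigma_{-1}(n) = sum of 1/d over the divisors d of n."""
    return sum(Fraction(1, d) for d in range(1, n + 1) if n % d == 0)

def power_coeffs(k, N):
    """Exact coefficients a_{n,k} of f(z)^k up to z^N, with f as in (1)."""
    f = [Fraction(0)] + [sigma_minus1(n) for n in range(1, N + 1)]
    out = [Fraction(1)] + [Fraction(0)] * N          # f^0 = 1
    for _ in range(k):                                # repeated convolution
        out = [sum(out[i] * f[n - i] for i in range(n + 1))
               for n in range(N + 1)]
    return out

k, N = 6, 16
a = power_coeffs(k, N)
# spot-check log-concavity of the initial segment n = k, ..., N - 1
segment_ok = all(a[n] ** 2 >= a[n - 1] * a[n + 1] for n in range(k, N))
```

Here \(a_{k,k} = 1\) and \(a_{k+1,k} = k\,\sigma_{-1}(2) = 3k/2\); the range of n tested is of course far smaller than the exponential range of the conjecture.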

This conjecture is closely related to a theorem of Odlyzko and Richmond in 1985. In [9], they showed the following remarkable result: if p(z) is a degree d polynomial with non-negative coefficients such that the coefficients \(a_0, a_1, a_{d - 1}, a_d\) are all strictly positive, then there exists an \(n_0\) such that \(p^n(z)\) is log-concave for all \(n > n_0\). However, Odlyzko and Richmond pointed out that this result does not trivially generalize to infinite series.

In this paper, we prove Conjecture 1.1. Our main theorem generalizes Conjecture 1.1 to all infinite series f(z) that “behave like \((1 - z)^{-1}\)”.

We call an infinite series f(z) 1-lower bounded if for any \(n \ge 0\), the coefficient of \(z^n\) in f(z) is at least 1.

Theorem 1.2

Let

$$\begin{aligned} f(z) = \sum _{n = 0}^{\infty } a_n z^n \end{aligned}$$

be a 1-lower bounded infinite series. Suppose there exist constants \(c,C > 0\) such that for any \(r \in (0,1)\) and \(i \in \{0,1,2,3\}\), we have

$$\begin{aligned} \frac{c}{(1 - r)^{i + 1}} \le f^{(i)}(r) \le \frac{C}{(1 - r)^{i + 1}}. \end{aligned}$$

Let \(a_{n,k}\) denote the coefficient of \(z^n\) in the infinite series \(f^k(z)\). Then there exists a constant \(A > 1\) depending on c and C only such that

$$\begin{aligned} a_{n,k}^2 \ge a_{n - 1, k}a_{n + 1, k} \end{aligned}$$

for all \(n \le A^{k^{1/3}}\).

To rephrase the conditions in the main theorem, we have

Corollary 1.3

Let

$$\begin{aligned} f(z) = \sum _{n = 0}^{\infty } a_n z^n \end{aligned}$$

be a 1-lower bounded infinite series. Suppose there exists a constant \(C > 0\) such that for any n, we have

$$\begin{aligned} \frac{1}{n + 1}(a_0 + \cdots + a_n) \le C. \end{aligned}$$

Let \(a_{n,k}\) denote the coefficient of \(z^n\) in the infinite series \(f^k(z)\). Then there exists a constant \(A > 1\) depending only on C such that

$$\begin{aligned} a_{n,k}^2 \ge a_{n - 1, k}a_{n + 1, k} \end{aligned}$$

for all \(n \le A^{k^{1/3}}\).
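As a numeric illustration that this averaged condition fits the series of Conjecture 1.1: f itself has \(a_0 = 0\), so the natural series to test is the shift \(f(z)/z\), whose n-th coefficient is \(\sigma_{-1}(n + 1) \ge 1\); multiplying back by \(z^k\) only shifts the coefficients of the k-th power and does not affect log-concavity. Since \(\sum_{n \le N} \sigma_{-1}(n) = \sum_{d \le N} \lfloor N/d \rfloor / d \le N\pi^2/6\), the constant \(C = \pi^2/6\) works. The following spot-check (our own script) confirms both conditions, and that the deficit from \(C\cdot N\) grows only logarithmically:

```python
import math

def sigma_minus1(n):
    """sigma_{-1}(n) = sum of 1/d over the divisors d of n."""
    return sum(1.0 / d for d in range(1, n + 1) if n % d == 0)

N = 2000
bound = math.pi ** 2 / 6          # candidate constant C for the averages
partial = 0.0
ok_lower, ok_avg = True, True
for n in range(1, N + 1):
    a_n = sigma_minus1(n)
    ok_lower = ok_lower and a_n >= 1.0       # coefficients of f(z)/z are >= 1
    partial += a_n
    ok_avg = ok_avg and partial / n <= bound # Cesaro averages stay below C

# deficit C*N - (partial sum) should be O(log N), not a power of N
deficit = bound * N - partial
```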

We use an ad-hoc argument to strengthen Theorem 1.2 for infinite series that satisfy a stronger condition.

Theorem 1.4

Let

$$\begin{aligned} f(z) = \sum _{n = 0}^{\infty } a_n z^n \end{aligned}$$

be a 1-lower bounded infinite series. Suppose there exist constants \(C > 1\), \(D > 0\) and \(\alpha \in [0,1)\) such that for all n, we have

$$\begin{aligned} 0 \le C(n + 1) - (a_0 + \cdots + a_n) \le D(n + 1)^{\alpha }. \end{aligned}$$

Let \(a_{n,k}\) be the coefficient of \(z^n\) in the infinite series \(f^k(z)\). Then for sufficiently large k and any \(n \le A^k / k^2\), we have \(a_{n,k}^2 \ge a_{n - 1, k}a_{n + 1, k}\). Here A is the constant

$$\begin{aligned} A := \left( \frac{C}{C - 1}\right) ^{1/(2 + \alpha )}. \end{aligned}$$

As a corollary, we establish Conjecture 1.1.

Corollary 1.5

Let f and \(a_{n,k}\) be as defined in Conjecture 1.1. Then for any fixed \(\eta < \eta _0\), for all sufficiently large k and \(n \le \eta ^k\), we have

$$\begin{aligned} a_{n,k}^2 \ge a_{n - 1, k}a_{n + 1, k}. \end{aligned}$$

Here \(\eta _0\) is the constant

$$\begin{aligned} \eta _0 := \sqrt{\frac{\pi ^2}{\pi ^2 - 6}} > 1.59. \end{aligned}$$
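To trace where \(\eta _0\) comes from (a sketch, using the standard divisor-sum estimate \(\sum _{n \le N} \sigma _{-1}(n) = \frac{\pi ^2}{6}N + O(\log N)\)): the shifted series \(f(z)/z\) satisfies the hypothesis of Theorem 1.4 with \(C = \pi ^2/6\) and every \(\alpha \in (0,1)\), with D depending on \(\alpha \). Letting \(\alpha \rightarrow 0^+\) in the constant of Theorem 1.4 gives

$$\begin{aligned} A = \left( \frac{C}{C - 1}\right) ^{1/(2 + \alpha )} \longrightarrow \left( \frac{\pi ^2/6}{\pi ^2/6 - 1}\right) ^{1/2} = \sqrt{\frac{\pi ^2}{\pi ^2 - 6}} = \eta _0 \approx 1.597, \end{aligned}$$

and for any fixed \(\eta < \eta _0\) one can choose \(\alpha \) small enough that \(\eta < A\), whence \(\eta ^k \le A^k/k^2\) for all sufficiently large k.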

Combining with Theorem 1.3 of [6], we have shown that

Theorem 1.6

The Nekrasov-Okounkov polynomial \(Q_n(z)\) is unimodal for all sufficiently large n.

2 Proof of Theorem 1.2 and Corollary 1.3

In this section, we show Theorem 1.2. We use essentially the same method as Odlyzko-Richmond [9], with some modifications to accommodate the different behavior of our infinite series f(z). We also note that the roles of k and n in our notation are reversed from the convention in [9].

Throughout the proof, for every positive integer i, let \(c_i\) denote a small positive constant that depends only on c and C, and let \(C_i\) denote a large positive constant that depends only on c and C. We set \(c_0 = c\) and \(C_0 = C\). To ensure that the constants can be chosen consistently, each \(c_i\) and \(C_i\) is determined only by the \(c_j\) and \(C_j\) with \(j < i\). We also assume k is sufficiently large relative to all the constants. If \(n \le k^{1/4}\), the proof in [9] applies verbatim to prove Theorem 1.2. We now prove Theorem 1.2 when \(n \ge k^{1/4}\).

The function f(z) is holomorphic for \(|z| < 1\). For any \(r\in (0,1)\), we have

$$\begin{aligned} a_{n,k}&= \frac{1}{2\pi i}\int _{|z| = r}f^k(z)z^{-n-1}dz \\&= \frac{1}{2\pi }r^{-n} \int _{-\pi }^\pi f^k(re^{i\theta })e^{-in\theta }d\theta . \end{aligned}$$
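This contour formula is easy to sanity-check numerically. The sketch below uses the stand-in \(f(z) = 1/(1 - z)\) (our illustrative choice, which satisfies the growth hypotheses with \(c = 1\) and \(C = 6\) for \(i \le 3\)), whose k-th power has the explicit coefficients \(\binom{n + k - 1}{k - 1}\):

```python
import cmath, math

def f(z):
    """Stand-in series with the assumed behavior: f(z) = 1/(1 - z)."""
    return 1.0 / (1.0 - z)

def coeff_by_contour(n, k, r=0.5, M=4096):
    """Riemann-sum approximation of (1/2pi) r^{-n} int f^k(r e^{it}) e^{-int} dt."""
    total = 0j
    for j in range(M):
        t = 2 * math.pi * j / M
        total += f(r * cmath.exp(1j * t)) ** k * cmath.exp(-1j * n * t)
    return total / M * r ** (-n)

n, k = 7, 5
exact = math.comb(n + k - 1, k - 1)   # coefficient of z^7 in (1-z)^{-5}
approx = coeff_by_contour(n, k)
```

Because the integrand is periodic and analytic, the equispaced Riemann sum converges extremely fast, so even modest M recovers the coefficient to high accuracy.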

So it suffices to show that for any \(n \in [k^{1/4}, A^{k^{1/3}}]\), there exists an \(r \in (0,1)\) such that for \(\alpha \in [n - 1, n + 1]\), the real-valued function

$$\begin{aligned} F(\alpha ) = \int _{-\pi }^\pi f^k(re^{i\theta })e^{-i\alpha \theta }d\theta \end{aligned}$$

is positive and log-concave. We will show the stronger conclusion that F is concave, that is

$$\begin{aligned} F''(\alpha ) < 0 \end{aligned}$$

or equivalently,

$$\begin{aligned} \int _{-\pi }^\pi \theta ^2 f^k(re^{i\theta })e^{-i\alpha \theta }d\theta > 0. \end{aligned}$$

The crucial idea is to study the argument of \(f(re^{i\theta })\). We first note that f is nonzero at the points \(re^{i\theta }\) with \(\theta \) in a neighborhood of 0.

Lemma 2.1

For any \(r \in (0,1)\) and \(\theta \), we have

$$\begin{aligned} \left| {f(re^{i\theta })} \right| \ge \left( 1 - \frac{C_1\left| {\theta } \right| }{1 - r}\right) f(r). \end{aligned}$$

In particular, if \(\left| {\theta } \right| < c_1(1 - r)\), then \(f(re^{i\theta }) \ne 0\).

Proof

By assumption, we have \(f(r) \ge \frac{c}{1 - r}\) and \(f'(r) \le \frac{C}{(1 - r)^2}\). Therefore, \(\left| {f'(z)} \right| \le \frac{C_1}{(1 - r)}f(r)\) if \(\left| {z} \right| \le r\). Integrating \(f'\) along the segment from r to \(re^{i\theta }\), we conclude that

$$\begin{aligned} \left| {f(re^{i\theta })} \right| \ge f(r) - \left| {r - re^{i\theta }} \right| \cdot \frac{C_1}{(1 - r)}f(r) \ge \left( 1 - \frac{C_1\left| {\theta } \right| }{1 - r}\right) f(r) \end{aligned}$$

as desired. \(\square \)

Therefore, we can define a smooth function \(\psi _r(\theta )\) with domain \((-c_1(1 - r), c_1(1 - r))\) such that

$$\begin{aligned} \psi _r(0) = 0, \psi _r(\theta ) \in \arg (f(re^{i\theta })). \end{aligned}$$

As \(f(\bar{z}) = \overline{f(z)}\), \(\psi _r(\theta )\) is an odd function. A crucial component of the proof is the following estimate of \(\psi _r(\theta )\).

Lemma 2.2

If \(\left| {\theta } \right| \le c_2(1 - r)\), then we have

$$\begin{aligned} \left| {\psi _r(\theta ) - A(r)\theta } \right| \le C_2\frac{r\left| {\theta } \right| ^3}{(1 - r)^{3}} \end{aligned}$$

where \(A(r) = \frac{rf'(r)}{f(r)}.\)

Proof

We can write

$$\begin{aligned} \psi _r(\theta ) = \Im \ln (f(re^{i\theta })). \end{aligned}$$

So we have

$$\begin{aligned} \psi '_r(\theta ) = \Im rie^{i\theta } \frac{f'(re^{i\theta })}{f(re^{i\theta })}. \end{aligned}$$

In particular,

$$\begin{aligned} \psi '_r(0) = r\frac{f'(r)}{f(r)} = A(r). \end{aligned}$$

Furthermore, \(\psi _r\) is odd, so \(\psi _r''(0) = 0\). By Taylor’s formula, there exists some \(\xi \in (0, \theta )\) such that

$$\begin{aligned} \psi _r(\theta ) = A(r)\theta + \frac{1}{6}\psi _r^{(3)}(\xi )\theta ^3. \end{aligned}$$

We compute that

$$\begin{aligned} \psi _r^{(3)}(\xi )&= \Im \left( -ir^3 e^{3i\xi } \frac{f^{(3)}f^3 - 4f'f''f^2 + 2f(f')^3}{f^4}(re^{i\xi }) \right) \\&+ \Im \left( - 3ir^2 e^{2i\xi } \frac{f''f - (f')^2}{f^2}(re^{i\xi }) - ire^{i\xi } \frac{f'}{f}(re^{i\xi })\right) . \end{aligned}$$

By Lemma 2.1, we have \(\left| {f(re^{i\xi })} \right| > c(1 - r) / 2\), and by assumption,

$$\begin{aligned} \left| {f^{(j)}(re^{i\xi })} \right| \le f^{(j)}(r) \le \frac{C}{(1 - r)^{j + 1}}, \qquad j \in \{0,1,2,3\}. \end{aligned}$$

We conclude that

$$\begin{aligned} \left| {\psi _r^{(3)}(\xi )} \right| \le \frac{C_2r}{(1 - r)^3}. \end{aligned}$$

Substituting in, we get the desired estimate. \(\square \)

As \(A(0) = 0\) and \(\lim _{r \rightarrow 1} A(r) = \infty \), there exists an \(r_0 \in (0,1)\) such that \(A(r_0) = \frac{n}{k}\). We have \(r_0 = \Theta (\min (1, n/k))\). As \(n \ge k^{1/4}\), for k sufficiently large we have

$$\begin{aligned} k r_0 \ge 1. \end{aligned}$$
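Since A is continuous and increasing on (0, 1), the saddle point \(r_0\) can be found numerically by bisection. A quick sketch for the stand-in \(f(z) = 1/(1 - z)\) (our illustrative choice), where \(A(r) = r/(1 - r)\) and the exact solution \(r_0 = n/(n + k)\) is available for comparison:

```python
def A(r):
    """For the stand-in f(z) = 1/(1-z): A(r) = r f'(r)/f(r) = r/(1 - r)."""
    return r / (1.0 - r)

def solve_r0(target, iters=80):
    """Bisection for A(r0) = target; A is continuous and strictly increasing."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if A(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

n, k = 12, 5
r0 = solve_r0(n / k)   # exact answer for this f is n / (n + k) = 12/17
```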

For any \(\alpha \in [n - 1, n + 1]\), we now show that

$$\begin{aligned} \int _{-\pi }^\pi \theta ^2 f^k(r_0e^{i\theta })e^{-i\alpha \theta }d\theta > 0. \end{aligned}$$

We take

$$\begin{aligned} \theta _0 = c_3(1 - r_0)(kr_0)^{-1/3} \end{aligned}$$

for a \(c_3 < c_2\) to be determined later, and split the integral

$$\begin{aligned} \int _{-\pi }^\pi \theta ^2 f^k(r_0e^{i\theta })e^{-i\alpha \theta }d\theta = \int _{\left| {\theta } \right| < \theta _0} \theta ^2 f^k(r_0e^{i\theta })e^{-i\alpha \theta }d\theta + \int _{\left| {\theta } \right| \in (\theta _0, \pi )} \theta ^2 f^k(r_0e^{i\theta })e^{-i\alpha \theta }d\theta . \end{aligned}$$

We now estimate the two summands. By Lemma 2.2, for any \(\theta \) with \(\left| {\theta } \right| < \theta _0\) we have

$$\begin{aligned} \left| {\arg {f^k(r_0e^{i\theta })e^{-i\alpha \theta }}} \right|&= \left| {k\psi _{r_0}(\theta ) - \alpha \theta } \right| \\&\le k\left| {\psi _{r_0}(\theta ) - A(r_0)\theta } \right| + \left| {(n - \alpha )\theta } \right| \\&\le \frac{C_2r_0k}{(1 - r_0)^3} \left| {\theta } \right| ^3 + \left| {\theta } \right| \le 2c_3C_2. \end{aligned}$$

In particular, if we take the \(c_3\) in the definition of \(\theta _0\) to be \(\min (c_2, \pi / (8C_2))\), then

$$\begin{aligned} \left| {\arg {f^k(r_0e^{i\theta })e^{-i\alpha \theta }}} \right| < \frac{\pi }{4}. \end{aligned}$$

Thus we conclude that

$$\begin{aligned} \int _{\left| {\theta } \right|< \theta _0} \theta ^2 f^k(r_0e^{i\theta })e^{-i\alpha \theta }d\theta&\ge \frac{1}{2}\int _{\left| {\theta } \right|< \theta _0} \theta ^2 \left| {f^k(r_0e^{i\theta })e^{-i\alpha \theta }} \right| d\theta \\&= \frac{1}{2}\int _{\left| {\theta } \right| < \theta _0} \theta ^2 \left| {f(r_0e^{i\theta })} \right| ^kd\theta . \end{aligned}$$

We apply Lemma 2.1 to obtain

$$\begin{aligned} \int _{\left| {\theta } \right|< \theta _0} \theta ^2 \left| {f(r_0e^{i\theta })} \right| ^kd\theta \ge \int _{\left| {\theta } \right| < \theta _0} \theta ^2 \left( 1 - C_1\frac{\left| {\theta } \right| }{1 - r_0}\right) ^k f^k(r_0)d\theta . \end{aligned}$$

We can extract the constant \(f^k(r_0)\) and apply a change of variable \(t = \theta / (1 - r_0)\)

$$\begin{aligned} \int _{\left| {\theta } \right| < \theta _0} \theta ^2 \left( 1 - C_1\frac{\left| {\theta } \right| }{1 - r_0}\right) ^k d \theta = 2(1 - r_0)^3\int _{0}^{\theta _0 / (1 - r_0)} t^2(1 - t)^k d t. \end{aligned}$$

As \(\theta _0 / (1 - r_0) = c_3(kr_0)^{-1/3} > 2k^{-1}\), we conclude that

$$\begin{aligned} \int _{0}^{\theta _0 / (1 - r_0)} t^2(1 - t)^k d t \ge \int _{k^{-1}}^{2k^{-1}} t^2(1 - t)^k dt \ge c_4 k^{-4}. \end{aligned}$$

So we obtain the estimate

$$\begin{aligned} \int _{\left| {\theta } \right| < \theta _0} \theta ^2 f^k(r_0e^{i\theta })e^{-i\alpha \theta }d\theta \ge c_4(1 - r_0)^3k^{-4}f^k(r_0). \end{aligned}$$

By assumption we have

$$\begin{aligned} A(r) = \frac{rf'(r)}{f(r)} \ge \frac{rc(1 - r)^{-2}}{C(1 - r)^{-1}} = \frac{c_5r}{1 - r} \end{aligned}$$

and as \(A(r_0) = n/k\), we get

$$\begin{aligned} 1 - r_0 \ge \frac{c_6k}{\max (n, k)}. \end{aligned}$$

So we conclude that

$$\begin{aligned} \int _{\left| {\theta } \right| < \theta _0} \theta ^2 f^k(r_0e^{i\theta })e^{-i\alpha \theta }d\theta \ge c_7\max (n, k)^{-4}f^k(r_0). \end{aligned}$$
(2)

To estimate the second summand, which is the integral over \(\left| {\theta } \right| \in (\theta _0, \pi )\), we need an upper bound on \(\left| {f} \right| \) away from the positive real axis.

Lemma 2.3

For any \(r \in (0,1)\) and \(\theta \in [-\pi , \pi )\), we have

$$\begin{aligned} \left| {f(re^{i\theta })} \right| \le \left( 1 - c_8r\frac{\min (\left| {\theta } \right| , 1 - r)^2}{(1 - r)^2}\right) f(r). \end{aligned}$$

Proof

As we assumed that \(a_n \ge 1\) for every n, we have

$$\begin{aligned} f(z) = \frac{1}{1 - z} + \sum _{n = 0}^{\infty } (a_n - 1)z^n. \end{aligned}$$

In particular, we get

$$\begin{aligned} f(r) = \frac{1}{1 - r} + \sum _{n = 0}^{\infty } (a_n - 1)r^n, \end{aligned}$$
$$\begin{aligned} \left| {f(re^{i\theta })} \right| \le \left| {\frac{1}{1 - re^{i\theta }}} \right| + \sum _{n = 0}^{\infty } (a_n - 1)r^n. \end{aligned}$$

Thus we have

$$\begin{aligned} f(r) - \left| {f(re^{i\theta })} \right| \ge \frac{1}{1 - r} - \left| {\frac{1}{1 - re^{i\theta }}} \right| \ge \frac{c_8r\min (\left| {\theta } \right| , 1 - r)^2}{(1 - r)^3} \end{aligned}$$

where the second inequality follows from

$$\begin{aligned} \frac{1}{1 - r} - \left| {\frac{1}{1 - re^{i\theta }}} \right| = \frac{\sqrt{(1 - r)^2 + 4r\sin ^2\frac{\theta }{2}} - (1 - r)}{(1 - r)\left| {1 - re^{i\theta }} \right| }. \end{aligned}$$

As \(f(r) \le C(1 - r)^{-1}\), we conclude (after shrinking \(c_8\) by a factor of C) that

$$\begin{aligned} f(r) - \left| {f(re^{i\theta })} \right| \ge \frac{1}{1 - r} - \left| {\frac{1}{1 - re^{i\theta }}} \right| \ge c_8r\frac{\min (\left| {\theta } \right| , 1 - r)^2}{(1 - r)^2} f(r) \end{aligned}$$

as desired. \(\square \)

By the lemma, for every \(\theta \) with \(\left| {\theta } \right| > \theta _0 = c_3(1 - r_0)(kr_0)^{-1/3}\), we have

$$\begin{aligned} \left| {f(r_0e^{i\theta })} \right| \le \left( 1 - \frac{c_8r_0\left| {\theta _0} \right| ^2}{(1 - r_0)^2}\right) f(r_0). \end{aligned}$$

Thus we conclude that

$$\begin{aligned} \left| {\int _{\left| {\theta } \right| \in (\theta _0, \pi )} \theta ^2 f^k(r_0e^{i\theta })e^{-i\alpha \theta }d\theta } \right|&\le \pi ^3 \left( 1 - \frac{c_8r_0\theta _0^2}{(1 - r_0)^2}\right) ^kf^k(r_0) \\&\le \pi ^3 e^{-\frac{c_8 kr_0\theta _0^2}{(1 - r_0)^2}}f^k(r_0). \end{aligned}$$

Substituting the value of \(\theta _0\), we get

$$\begin{aligned} \left| {\int _{\left| {\theta } \right| \in (\theta _0, \pi )} \theta ^2 f^k(r_0e^{i\theta })e^{-i\alpha \theta }d\theta } \right| \le \pi ^3 e^{-c_9(kr_0)^{1/3}}f^k(r_0). \end{aligned}$$
(3)

Finally, we combine the estimates (2) and (3) to conclude that

$$\begin{aligned} \int _{-\pi }^\pi \theta ^2 f^k(r_0e^{i\theta })e^{-i\alpha \theta }d\theta \ge c_7\max (n, k)^{-4}f^k(r_0) - \pi ^3 e^{-c_9(kr_0)^{1/3}}f^k(r_0). \end{aligned}$$

We note that

$$\begin{aligned} \frac{n}{k} = A(r_0) = \frac{r_0f'(r_0)}{f(r_0)} \le \frac{r_0C(1 - r_0)^{-2}}{c(1 - r_0)^{-1}} = \frac{C_{10}r_0}{(1 - r_0)} \end{aligned}$$

which implies

$$\begin{aligned} r_0 \ge \frac{n}{n + C_{10}k}. \end{aligned}$$

If \(C_{10}k < n\), then for sufficiently large k, we have

$$\begin{aligned} c_7\max (n, k)^{-4} - \pi ^3 e^{-c_9(kr_0)^{1/3}} \ge c_7n^{-4} - C_{11} e^{-c_{10}k^{1/3}}. \end{aligned}$$

So there exists a \(c_{11} > 0\) such that if \(C_{10}k < n \le e^{c_{11}k^{1/3}}\), then

$$\begin{aligned} c_7\max (n, k)^{-4} - \pi ^3 e^{-c_9(kr_0)^{1/3}} > 0. \end{aligned}$$

If \(k^{1/4} \le n \le C_{10}k\), then for sufficiently large k, we have

$$\begin{aligned} c_{7}\max (n, k)^{-4} - \pi ^3 e^{-c_9(kr_0)^{1/3}}&\ge c_{12}k^{-4} - \pi ^3 e^{-c_{10}(k\cdot n/k)^{1/3}}\\&\ge c_{12}k^{-4} - C_{12} e^{-c_{10}k^{1/12}} > 0. \end{aligned}$$

In both cases we have

$$\begin{aligned} \int _{-\pi }^\pi \theta ^2 f^k(r_0e^{i\theta })e^{-i\alpha \theta }d\theta > 0. \end{aligned}$$

Thus, we have shown Theorem 1.2.

Corollary 1.3 is an easy corollary of Theorem 1.2.

Proof of Corollary 1.3

For each non-negative integer i and \(r \in (0,1)\), we note that

$$\begin{aligned} f^{(i)}(r) = i!\sum _{n = 0}^{\infty } \left( {\begin{array}{c}n + i\\ i\end{array}}\right) a_{n + i}r^n. \end{aligned}$$

On one hand, we have

$$\begin{aligned} f^{(i)}(r) \ge i!\sum _{n = 0}^{\infty } \left( {\begin{array}{c}n + i\\ i\end{array}}\right) r^n = \frac{i!}{(1 - r)^{i + 1}}. \end{aligned}$$

On the other hand, by Abel summation, we have

$$\begin{aligned} f^{(i)}(r)&= i!\sum _{n = 0}^{\infty } (r^n - r^{n + 1}) \sum _{k = 0}^n \left( {\begin{array}{c}k + i\\ i\end{array}}\right) a_{k + i} \\&\le i!\sum _{n = 0}^{\infty }(r^n - r^{n + 1}) \cdot C(n + i + 1) \cdot \left( {\begin{array}{c}n + i\\ i\end{array}}\right) \\&= C(i + 1)!\sum _{n = 0}^{\infty }\left( {\begin{array}{c}n + i + 1\\ i + 1\end{array}}\right) (r^n - r^{n + 1}) = \frac{C(i + 1)!}{(1 - r)^{i + 1}}. \end{aligned}$$

Thus for any \(i \ge 0\) and \(r \in (0,1)\) we have

$$\begin{aligned} \frac{i!}{(1 - r)^{i + 1}} \le f^{(i)}(r)\le \frac{C(i + 1)!}{(1 - r)^{i + 1}}. \end{aligned}$$

The corollary follows by applying Theorem 1.2. \(\square \)

3 Proof of Theorem 1.4 and Corollary 1.5

In this section, we assume that \(f(z) = \sum _n a_nz^n\) satisfies the condition of Theorem 1.4: For all n, we have \(a_n \ge 1\) and

$$\begin{aligned} 0 \le C(n + 1) - (a_0 + \cdots + a_n) \le D(n + 1)^{\alpha }. \end{aligned}$$
(4)

We observe that f(z) also satisfies the condition in Corollary 1.3, so there exists a \(B > 1\) such that \(a_{n,k}^2 \ge a_{n - 1,k}a_{n + 1, k}\) for all \(n \le B^{k^{1/3}}\). We now use a different method to show that \(a_{n,k}^2 \ge a_{n - 1,k}a_{n + 1, k}\) for all \(B^{k^{1/3}} \le n \le A^k / k^2\) and k sufficiently large.

The inequality \(a_{n,k}^2 \ge a_{n - 1,k}a_{n + 1, k}\) is equivalent to the inequality

$$\begin{aligned} (a_{n,k} - a_{n - 1, k})^2 \ge a_{n - 1, k} (a_{n + 1, k} - 2a_{n, k} + a_{n - 1, k}). \end{aligned}$$
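Indeed, the two inequalities differ by an algebraic identity:

$$\begin{aligned} (a_{n,k} - a_{n - 1, k})^2 - a_{n - 1, k} (a_{n + 1, k} - 2a_{n, k} + a_{n - 1, k}) = a_{n,k}^2 - a_{n - 1, k}a_{n + 1, k}, \end{aligned}$$

since the terms \(\mp 2a_{n,k}a_{n - 1,k}\) and \(\pm a_{n - 1,k}^2\) cancel upon expansion.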

The key observation is that the second order difference \(a_{n + 1, k} - 2a_{n, k} + a_{n - 1, k}\) can be bounded.

We introduce some notation: for any \(n \ge 0\), define \(a_n^{(0)} = 1\) and \(a_n^{(1)} = a_n - 1\). Then we have

$$\begin{aligned} a_{n, k}&= \sum _{x_1 + \cdots + x_k = n} a_{x_1}a_{x_2}\cdots a_{x_k} \\&= \sum _{x_1 + \cdots + x_k = n} \prod _{i = 1}^k \left( a^{(0)}_{x_i} + a^{(1)}_{x_i}\right) \\&= \sum _{(i_1, i_2,\ldots ,i_k) \in \{0,1\}^k} \sum _{x_1 + \cdots + x_k = n}a^{(i_1)}_{x_1}\cdots a^{(i_k)}_{x_k}. \end{aligned}$$

For a tuple \(I = (i_1, i_2,\ldots ,i_k) \in \{0,1\}^k\), we let

$$\begin{aligned} a^I_n = \sum _{x_1 + \cdots + x_k = n}a^{(i_1)}_{x_1}\cdots a^{(i_k)}_{x_k}. \end{aligned}$$

Then we have

$$\begin{aligned} a_{n,k} = \sum _{I \in \{0,1\}^k}a_n^I. \end{aligned}$$
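This binomial-type expansion over tuples I can be verified directly on a small example. In the sketch below, the coefficient sequence is made up, subject only to \(a_n \ge 1\), and `coeff` convolves power series naively:

```python
from itertools import product

# a hypothetical 1-lower-bounded coefficient sequence (values are made up)
a = [1, 2, 3, 2, 4, 1, 3, 2]
split = {0: [1] * len(a),               # a^{(0)}_n = 1
         1: [x - 1 for x in a]}         # a^{(1)}_n = a_n - 1

def coeff(seqs, n):
    """Coefficient of z^n in the product of the listed power series."""
    if len(seqs) == 1:
        return seqs[0][n] if n < len(seqs[0]) else 0
    return sum(seqs[0][m] * coeff(seqs[1:], n - m)
               for m in range(min(n, len(seqs[0]) - 1) + 1))

k, n = 3, 5
a_nk = coeff([a] * k, n)                               # a_{n,k} directly
decomposed = sum(coeff([split[i] for i in I], n)
                 for I in product([0, 1], repeat=k))   # sum of a^I_n over I
```

The two quantities agree because each factor \(a_{x_i} = a^{(0)}_{x_i} + a^{(1)}_{x_i}\) distributes over the convolution, exactly as in the display above.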

Let \({\mathbf {1}}_{k - 1}\) denote the length \(k - 1\) tuple \((1,1,\ldots , 1)\) and \(({\mathbf {1}}_{k - 1}, 0)\) denote the length k tuple \((1,1,\ldots , 1,0)\).

We prove a series of lemmas that gives the desired control over the second-order difference.

Lemma 3.1

There exists a constant \(C_1 > 0\) such that for any n and \(k \ge 2\), we have

$$\begin{aligned} a^{({\mathbf {1}}_{k - 1}, 0)}_n = \left( {\begin{array}{c}n + k - 1\\ k - 1\end{array}}\right) (C - 1)^{k - 1}\left( 1 - R_{n,k}\right) \end{aligned}$$

where

$$\begin{aligned} 0 \le R_{n,k} \le \frac{C_1k(k - 1)}{(n + k - 1)^{1 - \alpha }}. \end{aligned}$$

Proof

We take \(C_1 = D / \min (C - 1,1)\), and argue by induction on k. For \(k = 2\), the statement is clear as

$$\begin{aligned} a^{({\mathbf {1}}_1, 0)}_n = a_0 + \cdots + a_n - (n + 1). \end{aligned}$$

So (4) implies

$$\begin{aligned} 0 \le R_{n,2} \le \frac{C_1}{(n + 1)^{1 - \alpha }}. \end{aligned}$$

Now suppose the lemma holds for \(k' = k - 1\). To prove the lemma for k, we observe

$$\begin{aligned} a^{({\mathbf {1}}_{k - 1}, 0)}_n = \sum _{x_1 + x_2 = n}a^{(1)}_{x_1}a^{({\mathbf {1}}_{k - 2}, 0)}_{x_2} = \sum _{x_1 + x_2 = n}(a^{(1)}_0 + \cdots + a^{(1)}_{x_1})(a^{({\mathbf {1}}_{k - 2}, 0)}_{x_2} - a^{({\mathbf {1}}_{k - 2}, 0)}_{x_2 - 1}). \end{aligned}$$

Using (4), we have

$$\begin{aligned} a^{({\mathbf {1}}_{k - 1}, 0)}_n = (C - 1)\sum _{x_1 + x_2 = n}(x_1 + 1)\left( a^{({\mathbf {1}}_{k - 2}, 0)}_{x_2} - a^{({\mathbf {1}}_{k - 2}, 0)}_{x_2 - 1}\right) - S_{n,k}. \end{aligned}$$

where

$$\begin{aligned} S_{n,k} = \sum _{x_1 + x_2 = n} ((C - 1)(x_1 + 1) - a_0^{(1)} - \cdots - a_{x_1}^{(1)})\left( a^{({\mathbf {1}}_{k - 2}, 0)}_{x_2} - a^{({\mathbf {1}}_{k - 2}, 0)}_{x_2 - 1}\right) . \end{aligned}$$

We first continue estimating the main term. By the induction hypothesis, we have

$$\begin{aligned}&(C - 1)\sum _{x_1 + x_2 = n}(x_1 + 1)\left( a^{({\mathbf {1}}_{k - 2}, 0)}_{x_2} - a^{({\mathbf {1}}_{k - 2}, 0)}_{x_2 - 1}\right) \\ =&(C - 1)\sum _{x_2 = 0}^n a^{({\mathbf {1}}_{k - 2}, 0)}_{x_2} \\ =&(C - 1)^{k - 1}\left( \sum _{m = 0}^{n} \left( {\begin{array}{c}m + k - 2\\ k - 2\end{array}}\right) - \left( {\begin{array}{c}m + k - 2\\ k - 2\end{array}}\right) R_{m,k - 1}\right) \\ =&(C - 1)^{k - 1}\left( \left( {\begin{array}{c}n + k - 1\\ k - 1\end{array}}\right) - \sum _{m = 0}^{n} \left( {\begin{array}{c}m + k - 2\\ k - 2\end{array}}\right) R_{m,k - 1}\right) . \end{aligned}$$

By the induction hypothesis, the subtracted term is nonnegative. Again by the induction hypothesis, we estimate that

$$\begin{aligned}&(C - 1)^{k - 1}\sum _{m = 0}^{n} \left( {\begin{array}{c}m + k - 2\\ k - 2\end{array}}\right) R_{m,k - 1} \\ \le&(C - 1)^{k - 1} \sum _{m = 0}^{n} \left( {\begin{array}{c}m + k - 2\\ k - 2\end{array}}\right) \frac{C_1(k - 1)(k - 2)}{(m + k - 2)^{1 - \alpha }} \\ =&(C - 1)^{k - 1}\sum _{m = 0}^{n} \left( {\begin{array}{c}m + k - 3\\ k - 3\end{array}}\right) C_1(k - 1)(m + k - 2)^{\alpha } \\ \le&(C - 1)^{k - 1}\sum _{m = 0}^{n} \left( {\begin{array}{c}m + k - 3\\ k - 3\end{array}}\right) C_1(k - 1)(n + k - 2)^{\alpha } \\ =&(C - 1)^{k - 1}\left( {\begin{array}{c}n + k - 2\\ k - 2\end{array}}\right) C_1(k - 1)(n + k - 2)^{\alpha } \\ \le&\left( {\begin{array}{c}n + k - 1\\ k - 1\end{array}}\right) (C - 1)^{k - 1} \frac{C_1(k - 1)^2}{(n + k - 1)^{1 - \alpha }}. \end{aligned}$$

To estimate the error term \(S_{n,k}\), we first note that

$$\begin{aligned} a^{({\mathbf {1}}_{k - 2}, 0)}_{x_2} = \sum _{x = 0}^{x_2} a^{{\mathbf {1}}_{k - 2}}_{x} \end{aligned}$$

so \(a^{({\mathbf {1}}_{k - 2}, 0)}_{x_2} \ge a^{({\mathbf {1}}_{k - 2}, 0)}_{x_2 - 1}\) for any \(x_2\). By (4), we conclude that \(S_{n,k}\) is non-negative. On the other hand, by (4) and the induction hypothesis we have

$$\begin{aligned} S_{n,k}&\le \sum _{x_1 + x_2 = n} D(x_1 + 1)^{\alpha } (a^{({\mathbf {1}}_{k - 2}, 0)}_{x_2} - a^{({\mathbf {1}}_{k - 2}, 0)}_{x_2 - 1}) \\&= \sum _{x_1 + x_2 = n} D((x_1 + 1)^{\alpha } - x_1^{\alpha }) a^{({\mathbf {1}}_{k - 2}, 0)}_{x_2} \\&\le \sum _{x_2 = 0}^n D((n + 1 - x_2)^{\alpha } - (n - x_2)^{\alpha }) \left( {\begin{array}{c}x_2 + k - 2\\ k - 2\end{array}}\right) (C - 1)^{k - 2}. \end{aligned}$$

We estimate that

$$\begin{aligned} S_{n,k}&\le \sum _{x_2 = 0}^n D((n + 1 - x_2)^{\alpha } - (n - x_2)^{\alpha }) \left( {\begin{array}{c}n + k - 2\\ k - 2\end{array}}\right) (C - 1)^{k - 2} \\&= D(n + 1)^{\alpha } \left( {\begin{array}{c}n + k - 2\\ k - 2\end{array}}\right) (C - 1)^{k - 2} \\&\le D \left( {\begin{array}{c}n + k - 1\\ k - 1\end{array}}\right) (C - 1)^{k - 2} \frac{k - 1}{(n + k - 1)^{1 - \alpha }} \\&\le \left( {\begin{array}{c}n + k - 1\\ k - 1\end{array}}\right) (C - 1)^{k - 1} \frac{C_1(k - 1)}{(n + k - 1)^{1 - \alpha }}. \end{aligned}$$

Combining all the estimates, we conclude that

$$\begin{aligned} a_n^{({\mathbf {1}}_{k - 1},0)} = \left( {\begin{array}{c}n + k - 1\\ k - 1\end{array}}\right) (C - 1)^{k - 1}\left( 1 - R_{n,k}\right) \end{aligned}$$

where

$$\begin{aligned} 0 \le R_{n,k} \le \frac{C_1(k - 1)^2}{(n + k - 1)^{1 - \alpha }} + \frac{C_1(k - 1)}{(n + k - 1)^{1 - \alpha }} = \frac{C_1k(k - 1)}{(n + k - 1)^{1 - \alpha }} \end{aligned}$$

as desired. \(\square \)

Lemma 3.2

For any n and \(k \ge 2\), if a tuple \(I \in \{0,1\}^k\) has \(k_0\) zeros and \(k_1\) ones with \(k_0 \ge 1\), then

$$\begin{aligned} a^{I}_n = \left( {\begin{array}{c}n + k - 1\\ k - 1\end{array}}\right) (C - 1)^{k_1}\left( 1 - S^{(0)}_{n,I}\right) \end{aligned}$$

where

$$\begin{aligned} 0 \le S^{(0)}_{n,I} \le \frac{C_1k(k - 1)}{(n + k - 1)^{1 - \alpha }}. \end{aligned}$$

Proof

By definition, permuting the entries of I does not change the value of \(a^I_{n}\), so without loss of generality we can assume \(I = (1,\ldots ,1,0,\ldots ,0)\). If \(k_0 = 1\) then the lemma is precisely Lemma 3.1, so we assume \(k_0 \ge 2\). If \(k_1 = 0\) then

$$\begin{aligned} a^{I}_n = \sum _{x_1 + \cdots + x_k = n} 1 = \left( {\begin{array}{c}n + k - 1\\ k - 1\end{array}}\right) \end{aligned}$$

so \(S^{(0)}_{n,I} = 0\), and the lemma is obvious. Now assume \(k_1 \ge 1\). We have

$$\begin{aligned} a^{I}_n&= \sum _{x_1 + x_2 = n} a^{({\mathbf {1}}_{k_1}, 0)}_{x_1}a^{(0,\ldots ,0)}_{x_2} \\&= \sum _{x_1 + x_2 = n} a^{({\mathbf {1}}_{k_1}, 0)}_{x_1}\left( {\begin{array}{c}x_2 + k_0 - 2\\ k_0 - 2\end{array}}\right) \\&= \sum _{x_1 + x_2 = n} \left( {\begin{array}{c}x_1 + k_1\\ k_1\end{array}}\right) (C - 1)^{k_1}\left( 1 - R_{x_1,k_1 + 1}\right) \left( {\begin{array}{c}x_2 + k_0 - 2\\ k_0 - 2\end{array}}\right) \\&= \left( {\begin{array}{c}n + k - 1\\ k - 1\end{array}}\right) (C - 1)^{k_1} - \sum _{x_1 + x_2 = n} \left( {\begin{array}{c}x_1 + k_1\\ k_1\end{array}}\right) (C - 1)^{k_1} R_{x_1,k_1 + 1}\left( {\begin{array}{c}x_2 + k_0 - 2\\ k_0 - 2\end{array}}\right) . \end{aligned}$$

By Lemma 3.1 we have the bound

$$\begin{aligned} 0 \le R_{x_1,k_1 + 1} \le \frac{C_1(k_1 + 1)k_1}{(x_1 + k_1)^{1 - \alpha }}. \end{aligned}$$

Thus \(S^{(0)}_{n,I} \ge 0\). We also have the upper bound

$$\begin{aligned} S^{(0)}_{n,I} =&\sum _{x_1 + x_2 = n} \left( {\begin{array}{c}x_1 + k_1\\ k_1\end{array}}\right) (C - 1)^{k_1} R_{x_1,k_1 + 1}\left( {\begin{array}{c}x_2 + k_0 - 2\\ k_0 - 2\end{array}}\right) \\ \le&\sum _{x_1 + x_2 = n} \left( {\begin{array}{c}x_1 + k_1\\ k_1\end{array}}\right) (C - 1)^{k_1} \frac{C_1(k_1 + 1)k_1}{(x_1 + k_1)^{1 - \alpha }}\left( {\begin{array}{c}x_2 + k_0 - 2\\ k_0 - 2\end{array}}\right) \\ \le&\sum _{x_1 + x_2 = n} \left( {\begin{array}{c}x_1 + k_1 - 1\\ k_1 - 1\end{array}}\right) (C - 1)^{k_1} C_1(k_1 + 1)\cdot (x_1 + k_1)^{\alpha }\left( {\begin{array}{c}x_2 + k_0 - 2\\ k_0 - 2\end{array}}\right) \\ \le&(C - 1)^{k_1} C_1(k_1 + 1)\cdot (n + k_1)^{\alpha } \sum _{x_1 + x_2 = n} \left( {\begin{array}{c}x_1 + k_1 - 1\\ k_1 - 1\end{array}}\right) \left( {\begin{array}{c}x_2 + k_0 - 2\\ k_0 - 2\end{array}}\right) \\ \le&(C - 1)^{k_1} C_1(k_1 + 1)\cdot (n + k_1)^{\alpha } \left( {\begin{array}{c}n + k - 2\\ k - 2\end{array}}\right) \\ \le&(C - 1)^{k_1} \left( {\begin{array}{c}n + k - 1\\ k - 1\end{array}}\right) \cdot C_1 \frac{(k - 1)k}{(n + k - 1)^{1 - \alpha }}. \end{aligned}$$

So we have the desired inequality

$$\begin{aligned} S^{(0)}_{n, I} \le \frac{C_1k(k - 1)}{(n + k - 1)^{1 - \alpha }}. \end{aligned}$$

\(\square \)

We arrive at the crucial second-order difference estimates.

Lemma 3.3

There exists a constant \(C_2 > 0\) such that for any \(n \ge -1\) and \(k \ge 3\), if the tuple \(I \in \{0,1\}^k\) has \(k_0\) zeros and \(k_1\) ones with \(k_0 \ge 3\), then

$$\begin{aligned} a^{I}_{n + 1} - 2a^{I}_n + a^{I}_{n - 1} = \frac{(n + k)^{k - 3}}{(k - 3)!}(C - 1)^{k_1}\left( 1 - S^{(2)}_{n,I}\right) \end{aligned}$$

where

$$\begin{aligned} 0 \le S^{(2)}_{n,I} \le \frac{C_2k^2}{(n + k)^{1 - \alpha }}. \end{aligned}$$

Proof

If \(I'\) is the tuple obtained by removing two zeros from I, then

$$\begin{aligned} a^{I}_n = \sum _{x_1 + x_2 = n} a^{I'}_{x_1}a^{(0,0)}_{x_2} = \sum _{x_1 + x_2 = n} a^{I'}_{x_1} (x_2 + 1){\mathbf {1}}_{x_2 \ge 0}. \end{aligned}$$

Thus we find that

$$\begin{aligned} a^{I}_{n + 1} - 2a^{I}_n + a^{I}_{n - 1}&= \sum _{x_1 = 0}^{n + 1} a^{I'}_{x_1} ((n - x_1 + 2){\mathbf {1}}_{x_1 \le n + 1} - 2(n - x_1 + 1){\mathbf {1}}_{x_1 \le n} \\&\quad + (n - x_1){\mathbf {1}}_{x_1 \le n - 1}) = a^{I'}_{n + 1}. \end{aligned}$$
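The telescoping step, that the second-order difference of \(a^I\) equals \(a^{I'}_{n + 1}\) once two zero coordinates are convolved in, holds for any choice of the sequence \(a^{(1)}\). A quick numeric check on hypothetical values (our own script):

```python
a1 = [0, 2, 1, 3, 1, 2, 0, 1]    # hypothetical values of a^{(1)}_n = a_n - 1

def aI(I, n):
    """a^I_n = sum over x_1 + ... + x_k = n of prod_j a^{(i_j)}_{x_j}."""
    if n < 0:
        return 0
    if not I:
        return 1 if n == 0 else 0
    if I[0] == 0:
        head = lambda x: 1                               # a^{(0)}_x = 1
    else:
        head = lambda x: a1[x] if x < len(a1) else 0     # a^{(1)}_x
    return sum(head(x) * aI(I[1:], n - x) for x in range(n + 1))

I = (1, 1, 0, 0, 0)        # k_0 = 3 zeros, so Lemma 3.3 applies
Iprime = (1, 1, 0)         # I with two zeros removed
n = 4
lhs = aI(I, n + 1) - 2 * aI(I, n) + aI(I, n - 1)
rhs = aI(Iprime, n + 1)
```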

Applying Lemma 3.2, we obtain

$$\begin{aligned} a^{I'}_{n + 1} = \left( {\begin{array}{c}n + k - 2\\ k - 3\end{array}}\right) (C - 1)^{k_1}\left( 1 - S^{(0)}_{n + 1,I'}\right) \end{aligned}$$

where

$$\begin{aligned} 0 \le S^{(0)}_{n + 1,I'} \le \frac{C_1(k - 2)(k - 3)}{(n + k - 2)^{1 - \alpha }} \le \frac{3C_1k^2}{(n + k)^{1 - \alpha }}. \end{aligned}$$

Finally, we note that

$$\begin{aligned} \left( {\begin{array}{c}n + k - 2\\ k - 3\end{array}}\right) = \frac{(n + k)^{k - 3}}{(k - 3)!} (1 - S_{n, I}^{(1)}) \end{aligned}$$

where

$$\begin{aligned} 0 \le S_{n, I}^{(1)} = 1 - \prod _{i = 2}^{k - 2} \left( 1 - \frac{i}{n + k}\right) \le \frac{k^2}{n + k}. \end{aligned}$$

The error term \(S_{n,I}^{(2)}\) satisfies

$$\begin{aligned} 1 - S_{n,I}^{(2)} = (1 - S^{(0)}_{n + 1,I'})(1 - S_{n,I}^{(1)}). \end{aligned}$$

Therefore we have

$$\begin{aligned} 0 \le S_{n,I}^{(2)} \le S^{(0)}_{n + 1,I'}+S_{n,I}^{(1)} \end{aligned}$$

and the desired estimate follows. \(\square \)

Lemma 3.4

For any \(n \ge -1,k \ge 3\), we have

$$\begin{aligned} a_{n + 1, k} - 2a_{n, k} + a_{n - 1, k} = C^{k}\frac{(n + k)^{k - 3}}{(k - 3)!}(1 + R^{(2)}_{n,k}) \end{aligned}$$

where \(R^{(2)}_{n, k}\) satisfies

$$\begin{aligned} \left| {R^{(2)}_{n,k}} \right| \le E\left( \frac{k^2}{(n + k)^{1 - \alpha }} + (n + k)^{2 + \alpha }A^{-(2 + \alpha )k}\right) \end{aligned}$$

for some constant \(E > 0\).

Proof

Throughout the proof, we use \(R_i\) to denote the various error terms that contribute to \(R^{(2)}_{n,k}\). Recall the identity

$$\begin{aligned} a_{n, k}&= \sum _{I \in \{0,1\}^k} a^I_{n} \end{aligned}$$

We split the sum into two parts. Let \(S_1\) be the set of \(I \in \{0,1\}^k\) with at least three zeros, and let \(S_2\) be the set of \(I \in \{0,1\}^k\) with at most two zeros. Then

$$\begin{aligned} a_{n, k} =&\sum _{I \in S_1} a^I_{n} + \sum _{I \in S_2} a^I_{n} \end{aligned}$$

Let \(k_1(I)\) denote the number of ones in I. By Lemma 3.3, the second-order difference of the first term is

$$\begin{aligned} \sum _{I \in S_1}\frac{(n + k)^{k - 3}}{(k - 3)!}(C - 1)^{k_1(I)}+ R_1 \end{aligned}$$
(5)

where

$$\begin{aligned} \left| {R_1} \right| \le \sum _{I \in S_1} \frac{(n + k)^{k - 3}}{(k - 3)!}(C - 1)^{k_1(I)}\left| {S_{n,I}^{(2)}} \right| \le \sum _{I \in S_1}\frac{(n + k)^{k - 3}}{(k - 3)!}(C - 1)^{k_1(I)} \cdot \frac{C_2k^2}{(n + k)^{1 - \alpha }}. \end{aligned}$$

Extending the sum to all of \(\{0,1\}^k\) and applying the binomial theorem, we bound the error \(R_1\) in (5) by

$$\begin{aligned} \left| {R_1} \right| \le \sum _{I \in \{0,1\}^k}\frac{(n + k)^{k - 3}}{(k - 3)!}(C - 1)^{k_1(I)} \cdot \frac{C_2k^2}{(n + k)^{1 - \alpha }} = \frac{(n + k)^{k - 3}}{(k - 3)!}C^k \cdot \frac{C_2k^2}{(n + k)^{1 - \alpha }}.\nonumber \\ \end{aligned}$$
(6)
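The last equality in (6) is the binomial theorem, \(\sum_{I \in \{0,1\}^k}(C - 1)^{k_1(I)} = \sum_{j=0}^{k}\binom{k}{j}(C - 1)^j = C^k\). A brute-force numerical check:

```python
from itertools import product

def k1(I):
    # number of ones in the tuple I
    return sum(I)

def binomial_identity_holds(C, k, tol=1e-9):
    # sum over I in {0,1}^k of (C-1)^{k1(I)} should equal C^k
    total = sum((C - 1) ** k1(I) for I in product((0, 1), repeat=k))
    return abs(total - C ** k) <= tol
```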

The main term of (5) satisfies

$$\begin{aligned}&\sum _{I \in S_1}\frac{(n + k)^{k - 3}}{(k - 3)!}(C - 1)^{k_1(I)} \\ =&\frac{(n + k)^{k - 3}}{(k - 3)!}\sum _{I \in \{0,1\}^k}(C - 1)^{k_1(I)} - \frac{(n + k)^{k - 3}}{(k - 3)!}\sum _{I \in S_2}(C - 1)^{k_1(I)} \\ =&\frac{(n + k)^{k - 3}}{(k - 3)!}C^{k} - \frac{(n + k)^{k - 3}}{(k - 3)!}\sum _{I \in S_2}(C - 1)^{k_1(I)}. \end{aligned}$$

Let \(R_2\) denote

$$\begin{aligned} R_2 := -\frac{(n + k)^{k - 3}}{(k - 3)!}\sum _{I \in S_2}(C - 1)^{k_1(I)}. \end{aligned}$$

Note that

$$\begin{aligned} \sum _{I \in S_2}(C - 1)^{k_1(I)} = \left( {\begin{array}{c}k\\ 2\end{array}}\right) (C - 1)^{k - 2} + k(C - 1)^{k - 1} + (C - 1)^k \le k^2C^2(C - 1)^{k - 2} \end{aligned}$$

so

$$\begin{aligned} \left| {R_2} \right| \le \frac{(n + k)^{k - 3}}{(k - 3)!} k^2C^2(C - 1)^{k - 2}. \end{aligned}$$
(7)
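The inequality used for (7) can be checked numerically for sample values of \(C > 1\), including the value \(C = \pi^2/6\) that appears later:

```python
import math

def s2_bound_holds(C, k):
    # check C(k,2)(C-1)^{k-2} + k(C-1)^{k-1} + (C-1)^k <= k^2 C^2 (C-1)^{k-2}
    lhs = (math.comb(k, 2) * (C - 1) ** (k - 2)
           + k * (C - 1) ** (k - 1)
           + (C - 1) ** k)
    return lhs <= k * k * C * C * (C - 1) ** (k - 2)

ok = all(s2_bound_holds(C, k)
         for C in (math.pi ** 2 / 6, 1.5, 2.0, 3.0)
         for k in range(3, 50))
```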

Thus we conclude that

$$\begin{aligned} \sum _{I \in S_1} a^I_{n + 1} - 2\sum _{I \in S_1} a^I_{n} + \sum _{I \in S_1} a^I_{n - 1} = \frac{(n + k)^{k - 3}}{(k - 3)!}C^{k} + R_1 + R_2 \end{aligned}$$

where \(R_1, R_2\) are controlled by (6) and (7) respectively.

It remains to estimate

$$\begin{aligned} \sum _{I \in S_2} a^I_{n}. \end{aligned}$$

For each \(I = (i_1,\ldots ,i_k)\in S_2\), we have

$$\begin{aligned} a^I_n = \sum _{x_1 + \cdots + x_k = n} a^{(i_1)}_{x_1}\cdots a^{(i_k)}_{x_k}. \end{aligned}$$

By (4), we have \(a_n \le C + Dn^{\alpha } \le (C + D)n^{\alpha }\). Thus

$$\begin{aligned} \sum _{x_1 + \cdots + x_k = n} a^{(i_1)}_{x_1}\cdots a^{(i_k)}_{x_k} \le (C + D)n^{\alpha }\sum _{x_1 + \cdots + x_k = n} a^{(i_2)}_{x_2}\cdots a^{(i_k)}_{x_k}. \end{aligned}$$

We can appeal to Lemma 3.2 to obtain

$$\begin{aligned} \sum _{x_1 + \cdots + x_k = n} a^{(i_2)}_{x_2}\cdots a^{(i_k)}_{x_k}\le & {} \left( {\begin{array}{c}n + k - 1\\ k - 1\end{array}}\right) (C - 1)^{k_1((i_2,\ldots ,i_k))}\\\le & {} \left( {\begin{array}{c}n + k - 1\\ k - 1\end{array}}\right) (C - 1)^{k_1(I) - 1}C. \end{aligned}$$

Thus we obtain

$$\begin{aligned} \sum _{I \in S_2} a^I_{n}&\le (C + D)n^{\alpha } \sum _{I \in S_2} \left( {\begin{array}{c}n + k - 1\\ k - 1\end{array}}\right) (C - 1)^{k_1(I) - 1}C \\&\le (C + D)n^{\alpha } \left( {\begin{array}{c}n + k - 1\\ k - 1\end{array}}\right) \cdot k(k - 1)C^3(C - 1)^{k - 3}. \end{aligned}$$

We conclude that

$$\begin{aligned} \sum _{I \in S_2} a^I_{n} \le 3(C + D)C^3(C - 1)^{k - 3} (n + k)^{2 + \alpha } \frac{(n + k)^{k - 3}}{(k - 3)!} \end{aligned}$$
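This step uses \(\binom{n + k - 1}{k - 1} \le (n + k)^{k-1}/(k - 1)!\), \(n^{\alpha} \le (n + k)^{\alpha}\), and \(k/(k - 2) \le 3\). After cancelling the common factor \((C + D)C^3(C - 1)^{k - 3}\), the remaining inequality can be checked numerically for sample \(\alpha\):

```python
import math

def step_holds(a, k, n):
    # check n^a * C(n+k-1, k-1) * k(k-1) <= 3 (n+k)^{2+a} (n+k)^{k-3}/(k-3)!
    lhs = n ** a * math.comb(n + k - 1, k - 1) * k * (k - 1)
    rhs = 3 * (n + k) ** (2 + a) * (n + k) ** (k - 3) / math.factorial(k - 3)
    return lhs <= rhs

ok = all(step_holds(a, k, n)
         for a in (0.0, 0.25, 0.5)
         for k in range(3, 12)
         for n in range(1, 150))
```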

Thus the second-order difference \(R_3\) of \(\sum _{I \in S_2} a^I_{n}\) is bounded by

$$\begin{aligned} \left| {R_3} \right| \le E_1 n^{2 + \alpha } \frac{(n + k)^{k - 3}}{(k - 3)!} (C - 1)^{k} \end{aligned}$$
(8)

for some constant \(E_1\).

This completes the second-order difference estimate

$$\begin{aligned} a_{n + 1, k} - 2a_{n,k} + a_{n - 1, k} = \frac{(n + k)^{k - 3}}{(k - 3)!}C^k + R_1 + R_2 + R_3 \end{aligned}$$

where the errors \(R_i\) satisfy (6), (7) and (8) respectively. We now note that the absolute value of each \(R_i\) is at most a constant times

$$\begin{aligned} \frac{(n + k)^{k - 3}}{(k - 3)!}C^k \cdot \left( \frac{k^2}{(n + k)^{1 - \alpha }} + (n + k)^{2 + \alpha } \cdot A^{-(2 + \alpha )k}\right) . \end{aligned}$$

Thus we obtain the desired estimate. \(\square \)

Using the identity

$$\begin{aligned} a_{n,k} - a_{n - 1, k} = \sum _{n' = -1}^{n - 1} (a_{n' + 1,k} - 2a_{n', k} + a_{n' - 1, k}) \end{aligned}$$

we conclude an analogous estimate on the first-order difference.
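This identity is a pure telescoping statement. As a quick sanity check (under the boundary convention \(a_{-2,k} = a_{-1,k} = 0\), an assumption here that makes the telescoped boundary term vanish), it holds for an arbitrary sequence:

```python
def telescoping_holds(a, n):
    # a[n] - a[n-1] == sum_{m=-1}^{n-1} (a[m+1] - 2 a[m] + a[m-1]),
    # valid whenever a[-2] == a[-1] (the boundary term of the telescope vanishes)
    total = sum(a[m + 1] - 2 * a[m] + a[m - 1] for m in range(-1, n))
    return abs((a[n] - a[n - 1]) - total) < 1e-9

# arbitrary test sequence with the boundary convention a[-2] = a[-1] = 0
a = {-2: 0.0, -1: 0.0}
for m in range(0, 25):
    a[m] = (m + 3) ** 2 / (m + 1)

ok = all(telescoping_holds(a, n) for n in range(0, 24))
```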

Corollary 3.5

For any \(n\ge 0\) and \(k \ge 3\), we have

$$\begin{aligned} a_{n, k} - a_{n - 1, k} = C^{k}\frac{(n + k)^{k - 2}}{(k - 2)!}(1 + R^{(1)}_{n,k}). \end{aligned}$$

where \(R^{(1)}_{n, k}\) satisfies

$$\begin{aligned} \left| {R^{(1)}_{n,k}} \right| \le E\left( \frac{k^2}{(n + k)^{1 - \alpha }} + (n + k)^{2 + \alpha }A^{-(2 + \alpha )k}\right) \end{aligned}$$

for some constant E.

Again using the identity

$$\begin{aligned} a_{n-1,k} = \sum _{n' = 0}^{n-1} (a_{n',k} - a_{n' - 1, k}) \end{aligned}$$

we conclude an analogous estimate on the zeroth-order difference.

Corollary 3.6

For any \(n\ge 1\) and \(k \ge 3\), we have

$$\begin{aligned} a_{n-1, k}= C^{k}\frac{(n + k)^{k - 1}}{(k - 1)!}(1 + R^{(0)}_{n,k}). \end{aligned}$$

where \(R^{(0)}_{n, k}\) satisfies

$$\begin{aligned} \left| {R^{(0)}_{n,k}} \right| \le E\left( \frac{k^2}{(n + k)^{1 - \alpha }} + (n + k)^{2 + \alpha }A^{-(2 + \alpha )k}\right) \end{aligned}$$

for some constant E.

Finally, we conclude by Lemma 3.4, Corollary 3.5 and Corollary 3.6 that for any \(n \ge 1\), we have

$$\begin{aligned} \frac{a_{n - 1, k} (a_{n + 1, k} - 2a_{n, k} + a_{n - 1, k})}{(a_{n,k} - a_{n - 1, k})^2} = \frac{k - 2}{k - 1} \cdot \frac{(1 + R_{n,k}^{(0)})(1 + R_{n,k}^{(2)})}{(1 + R_{n,k}^{(1)})^2} \end{aligned}$$

where for each \(i \in \{0,1,2\}\) we have

$$\begin{aligned} \left| {R_{n,k}^{(i)}} \right| \le E_i\left( \frac{k^2}{(n + k)^{1 - \alpha }} + (n + k)^{2 + \alpha }A^{-(2 + \alpha )k}\right) \end{aligned}$$

for constants \(E_0, E_1, E_2\). If

$$\begin{aligned} k^{5/(1 - \alpha )} \le n \le \frac{A^k}{k^2} \end{aligned}$$

then for sufficiently large k, we have

$$\begin{aligned} \left| {R_{n,k}^{(i)}} \right| \le \frac{1}{k^2} \end{aligned}$$

for each \(i \in \{0,1,2\}\). Therefore we get

$$\begin{aligned} \frac{a_{n - 1, k} (a_{n + 1, k} - 2a_{n, k} + a_{n - 1, k})}{(a_{n,k} - a_{n - 1, k})^2} \le \frac{k - 2}{k - 1} \cdot \left( 1 + \frac{1}{k^2}\right) ^4 < 1. \end{aligned}$$

So \(\{a_{n, k}\}\) is log-concave for \(k^{5/(1 - \alpha)} \le n \le \frac{A^k}{k^2}\). As we have shown that \(\{a_{n, k}\}\) is log-concave for \(n \le B^{k^{1/3}}\), where \(B > 1\) is a constant, the two ranges overlap for sufficiently large k, and Theorem 1.4 follows.
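The inequality \(\frac{k - 2}{k - 1}\left(1 + \frac{1}{k^2}\right)^4 < 1\) used in the last display in fact holds for every \(k \ge 3\); a numerical confirmation:

```python
def ratio_bound_holds(k):
    # (k-2)/(k-1) * (1 + 1/k^2)^4 < 1, equivalent to (1 + 1/k^2)^4 < (k-1)/(k-2)
    return (k - 2) / (k - 1) * (1 + 1 / k ** 2) ** 4 < 1

ok = all(ratio_bound_holds(k) for k in range(3, 10 ** 5))
```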

Finally, we prove Corollary 1.5. Let f(z) be as in Theorem 1.1 and let

$$\begin{aligned} g(z) := \frac{f(z)}{z} = \sum _{n = 0}^{\infty } \sigma _{-1}(n + 1)z^n. \end{aligned}$$

As \(\sigma _{-1}(n) \ge 1\) for any \(n \ge 1\), g is 1-lower bounded. Furthermore, we have

$$\begin{aligned} \sigma _{-1}(1) + \cdots + \sigma _{-1}(n + 1) = \sum _{m = 1}^{n + 1}\sum _{d | m} \frac{1}{d} = \sum _{d = 1}^{n + 1} \frac{1}{d}\left\lfloor {\frac{n + 1}{d}}\right\rfloor . \end{aligned}$$

Thus we have

$$\begin{aligned} \sigma _{-1}(1) + \cdots + \sigma _{-1}(n + 1) \le \sum _{d = 1}^{\infty }\frac{n + 1}{d^2} = \frac{\pi ^2}{6}(n + 1) \end{aligned}$$

and

$$\begin{aligned} \sigma _{-1}(1) + \cdots + \sigma _{-1}(n + 1) \ge \sum _{d = 1}^{n + 1} \left( \frac{n + 1}{d^2} - \frac{d - 1}{d^2}\right) \ge \frac{\pi ^2}{6}(n + 1) - \log (n + 1) - 1. \end{aligned}$$

So g(z) satisfies the condition of Theorem 1.4 for \(C = \pi ^2 / 6\) and any \(\alpha > 0\). Corollary 1.5 then follows from Theorem 1.4.
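Both partial-sum estimates can be verified numerically with a naive divisor sum (a sanity check only; `sigma_minus_1` and `bounds_hold` are our helpers, not notation from the paper):

```python
import math

def sigma_minus_1(m):
    # sigma_{-1}(m): sum of reciprocals of the divisors of m
    return sum(1 / d for d in range(1, m + 1) if m % d == 0)

def bounds_hold(N):
    # check pi^2/6 (n+1) - log(n+1) - 1 <= sum_{m <= n+1} sigma_{-1}(m) <= pi^2/6 (n+1)
    total = 0.0
    for n in range(N):
        total += sigma_minus_1(n + 1)
        upper = math.pi ** 2 / 6 * (n + 1)
        if not (upper - math.log(n + 1) - 1 <= total <= upper):
            return False
    return True
```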