1 Introduction

Properties of coefficients of generating series [27], especially Fourier coefficients of powers of the Dedekind \(\eta \)-function, have been a focus of research since the times of Euler [1, 2, 16, 20, 23, 25]:

$$\begin{aligned} \eta \left( \tau \right) ^{r}:=q^{\frac{r}{24}}\prod _{m=1}^{\infty }\left( 1-q^{m}\right) ^{r}= q^{\frac{r}{24}}\sum _{n=0}^{\infty }a_{n}\left( r \right) q^{n}. \end{aligned}$$
(1)

Here, \(q := e^{2\pi i\tau }\), \(\mathop {\mathrm{Im}}\left( \tau \right) >0\) and \(r \in {\mathbb {Z}}\). The coefficients are special values of the D’Arcais polynomials \(P_n(x)\) [6, 7, 22, 26]. It has recently been noticed that the growth and vanishing properties of these polynomials have much in common with properties of other interesting polynomials [10, 13]. These include special orthogonal polynomials, such as associated Laguerre polynomials and Chebyshev polynomials of the second kind, as well as polynomials attached to the reciprocals of Klein’s j-invariant and of Eisenstein series [12, 14].

In this paper we investigate growth properties and the zero distribution of polynomials attached to arithmetic functions g and h, in a setting inspired by Rota [18].

Let g be normalized, i.e. \(g(1)=1\), and of moderate growth. Further, let \(0<h(n) \le h(n+1)\). We put \(P_0^{g,h}(x)=1\) and

$$\begin{aligned} P_n^{g,h}(x) := \frac{x}{h(n)} \sum _{k=1}^{n} g(k) \, P_{n-k}^{g,h}(x). \end{aligned}$$
(2)

This definition includes all of the examples mentioned above. Before providing examples and explicit formulas for these polynomials, we give one application to the coefficients of the Dedekind \(\eta \)-function. Let \(g(n)=\sigma (n):= \sum _{d \mid n}d\), \(h(n)= \mathop {\mathrm{id}}(n)=n\), and let \(a_n(r)\) be defined by (1), the nth coefficient of the rth power of the Dedekind \(\eta \)-function. Put \(P_n^{\sigma }(x):= P_n^{\sigma , \mathop {\mathrm{id}}}(x)\); then

$$\begin{aligned} a_n(r)= P_n^{\sigma }(-r). \end{aligned}$$

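As a quick illustration, the recursion (2) and the identity \(a_n(r)= P_n^{\sigma }(-r)\) can be checked by machine for small n. The following minimal sketch (assuming the Python library sympy; all helper names are illustrative) compares the recursion for \(g=\sigma \) and \(h=\mathop {\mathrm{id}}\) with a truncation of the Euler product in (1), here with \(r=3\).

```python
# Sketch: recursion (2) for g = sigma, h = id, checked against the
# coefficients a_n(r) of a truncated Euler product (here r = 3 as a sample).
from sympy import symbols, expand, divisors, Integer

x, q = symbols('x q')

def d_arcais(N):
    """Return [P_0^sigma(x), ..., P_N^sigma(x)] via the recursion (2)."""
    P = [Integer(1)]
    for n in range(1, N + 1):
        P.append(expand(x / n * sum(sum(divisors(k)) * P[n - k]
                                    for k in range(1, n + 1))))
    return P

N, r = 6, 3
P = d_arcais(N)

f = Integer(1)                      # truncation of prod_{m >= 1} (1 - q^m)^r
for m in range(1, N + 1):
    f *= (1 - q**m)**r
f = expand(f)

for n in range(N + 1):
    assert P[n].subs(x, -r) == f.coeff(q, n)     # a_n(r) = P_n^sigma(-r)
```
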
Han [9] observed that the Nekrasov–Okounkov hook length formula [21, 26] implies that \(a_n(r) \ne 0\) if \( r > n^2 -1\). This improves a previous result by Kostant [17]. In [13] we proved that

$$\begin{aligned} a_n(r) \ne 0 \text { holds for } r > \kappa \cdot (n-1) \text { where } \kappa =15. \end{aligned}$$
(3)

Numerical investigations show that \(\kappa \) has to be larger than 9.5 (see Table 5). In the present paper we prove that (3) is already true for \(\kappa = 10.82\).

Since the definition of \(P_n^{g,h}(x)\) is quite abstract, we provide two examples of families of polynomials to familiarize the reader with the types of polynomials we are studying. At first sight, they appear to have nothing in common.

Let us start with the Nekrasov–Okounkov hook length formula [21]. Let \(\eta (\tau )\) be the Dedekind \(\eta \)-function and let \(\lambda \) be a partition of n, so that \(|\lambda |=n\). By \({\mathcal {H}}(\lambda )\) we denote the multiset of hook lengths associated with \(\lambda \) and by \({\mathcal {P}}\) the set of all partitions.

Partitions are represented by their Young diagrams. Let \(\lambda = (7,3,2)\). Then \( n = \vert \lambda \vert = 12\):

[Young diagram of \(\lambda = (7,3,2)\): a row of 7 cells, a row of 3 cells and a row of 2 cells]

We attach to each cell u of the diagram the arm \(a_u(\lambda )\), the number of cells in the same row as u and to the right of u. Further, we have the leg \(\ell _{u}\left( \lambda \right) \), the number of cells in the same column as u and below u. The hook length \(h_u(\lambda )\) of the cell u is given by \(h_u(\lambda ):= a_u(\lambda ) + \ell _{u}\left( \lambda \right) + 1\). The hook length multiset \({\mathcal {H}}(\lambda )\) is then the multiset of all hook lengths of \(\lambda \). The example gives

$$\begin{aligned} {\mathcal {H}}(\lambda ) = \{ 9,8,6,4,3,2,1,4,3,1,2,1 \}. \end{aligned}$$

The list is given from left to right and from top to bottom in the Young diagram. Cells are given coordinates \((i,j)\) following the same convention. We refer to Han [9] and [11].
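
The hook length multiset can be recomputed in a few lines; the following minimal sketch (Python, with an illustrative helper name) lists the hook lengths of \(\lambda =(7,3,2)\) row by row, as above.

```python
# Sketch: hook lengths of lambda = (7, 3, 2), listed row by row.
def hook_lengths(lam):
    """lam: weakly decreasing row lengths of a Young diagram."""
    cols = [sum(1 for p in lam if p > j) for j in range(lam[0])]  # column lengths
    return [lam[i] - j + cols[j] - i - 1                          # arm + leg + 1
            for i, row in enumerate(lam) for j in range(row)]

print(hook_lengths((7, 3, 2)))   # [9, 8, 6, 4, 3, 2, 1, 4, 3, 1, 2, 1]
```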

The Nekrasov–Okounkov hook length formula ([9], Theorem 1.2) states that

$$\begin{aligned} \sum _{n=0}^{\infty } P_n^{\sigma }(z) \, q^n = \sum _{ \lambda \in {\mathcal {P}}} q^{|\lambda |} \prod _{ h \in {\mathcal {H}}(\lambda )} \left( 1 + \frac{z-1}{h^2} \right) = q^{\frac{z}{24}} \, \eta (\tau )^{-z}. \end{aligned}$$
(4)

The identity (4) is valid for all \(z \in {\mathbb {C}}\). Note that the \(P_n^{\sigma }(x)\) are integer-valued polynomials of degree n. From the formula it follows that \((-1)^n P_n^{\sigma }(x) >0\) for all real \(x < 1 -n^2\).
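
For small n, the partition sum in the middle of (4) can be compared directly with the recursion (2). The following minimal sketch (assuming sympy and its partition iterator; all names are illustrative) performs this check for \(n\le 5\).

```python
# Sketch: partition sum in (4) vs. the recursion (2) with g = sigma, h = id.
from sympy import symbols, expand, divisors, Integer
from sympy.utilities.iterables import partitions

z = symbols('z')

def hooks(lam):                      # lam: weakly decreasing row lengths
    cols = [sum(1 for p in lam if p > j) for j in range(lam[0])]
    return [lam[i] - j + cols[j] - i - 1
            for i, row in enumerate(lam) for j in range(row)]

def nekrasov_okounkov(n):            # middle term of (4): coefficient of q^n
    total = Integer(0)
    for mult in partitions(n):       # {part: multiplicity}
        lam = [p for p in sorted(mult, reverse=True) for _ in range(mult[p])]
        term = Integer(1)
        for h in hooks(lam):
            term *= 1 + (z - 1) / h**2
        total += term
    return expand(total)

P = [Integer(1)]                     # D'Arcais polynomials via (2)
for n in range(1, 6):
    P.append(expand(z / n * sum(sum(divisors(k)) * P[n - k] for k in range(1, n + 1))))

for n in range(1, 6):
    assert expand(nekrasov_okounkov(n) - P[n]) == 0
```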

The second example is of a more artificial nature; it was discovered recently [12] when studying the q-expansions of the reciprocals of Klein’s j-invariant and of Eisenstein series [4, 5, 14]. Let

$$\begin{aligned} j(\tau )= \sum _{n=-1}^{\infty } c(n) q^n = q^{-1} + 744 + 196884q+ \cdots \end{aligned}$$

denote Klein’s j-invariant. Asai, Kaneko, and Ninomiya [3] proved that the coefficients of the q-expansion of \(1/j(\tau )\) are non-vanishing and have strictly alternating signs. This follows from their result on the zero distribution of the Faber polynomials \(\varphi _n \left( x\right) \) and from the denominator formula for the monster Lie algebra. The zeros of the Faber polynomials are simple and lie in the interval (0, 1728). Asai, Kaneko, and Ninomiya obtained the remarkable identity:

$$\begin{aligned} \frac{1}{j(\tau )} = \sum _{n=1}^{\infty } \varphi _n'(0) \, \frac{q^n}{n}. \end{aligned}$$

Let \(c^{*}(n):= c(n)/744\). Define the polynomials \(Q_{j,n}(x)\) by

$$\begin{aligned} \sum _{n=0}^{\infty } Q_{j,n}(x) \, q^n := \frac{1}{ 1 - x \sum _{n=1}^{\infty } c^{*}(n) \, q^n }. \end{aligned}$$

We have proved in [12] that \(Q_{j,n}(x) = Q_{\gamma _2,n}(x) + 2x Q_{\gamma _2,n}'(x)+ \frac{x^2}{2} Q_{\gamma _2,n}''(x)\), where the \(Q_{\gamma _2,n}(x)\) are polynomials attached in a similar way to Weber’s function \(\gamma _2\), the cube root of j. We have also proved that \(Q_{\gamma _2,n}(z)\ne 0\) for all \( \vert z \vert > 82.5\). Hence, the identity

$$\begin{aligned} \frac{\varphi _n'(0)}{n} = Q_{j,n}(-744) = \left( Q_{\gamma _2,n}(x) + 2 x Q_{\gamma _2,n}'(x)+ \frac{x^2}{2} Q_{\gamma _2,n}''(x)\right) _{\vert _{x= - 248}} \end{aligned}$$

restates and extends the result of [3].

Now, let g(n) be a normalized arithmetic function with moderate growth, such that \(\sum _{n=1}^{\infty } \vert g(n) \vert \, T^n\) is analytic at \(T=0\). Then the illustrated examples are special cases of polynomials \(P_n^g(x)\) and \(Q_n^g(x)\) defined by

$$\begin{aligned} \sum _{n=0}^{\infty } P_{n}^{g}\left( x\right) \, q^n= & {} \exp \left( x \sum _{n=1}^{\infty } g(n) \frac{q^{n}}{n}\right) , \nonumber \\ \sum _{n=0}^{\infty } Q_{n}^{g}\left( x\right) \, q^n= & {} \frac{1}{1- x \sum _{n=1}^{\infty } g(n) q^n}. \end{aligned}$$
(5)

Note that \(P_{n}^{\mathop {\mathrm{id}}}\left( x\right) = \frac{x}{n} \, L_{n-1}^{\left( 1\right) }\left( -x\right) \), so these are essentially the associated Laguerre polynomials (see [10]). For \(g(n)= \sigma (n)\) we recover the polynomials provided by the Nekrasov–Okounkov hook length formula. The polynomials \(Q_n^{\mathop {\mathrm{id}}}(x)\) are related to the Chebyshev polynomials of the second kind [15].

It is easy to see that \(P_n^g(x)\) and \(Q_n^g(x)\) are special cases of polynomials \(P_n^{g,h}(x)\) defined by the recursion formula (2). Here, \(P_n^g(x) = P_n^{g,\mathop {\mathrm{id}}}(x)\) and \(Q_n^g(x) = P_n^{g,{\mathbf {1}}}(x)\). In the next section, we state the main results of this paper.

2 Statement of main results

Let g, h be arithmetic functions. Assume that g is normalized and that \( 0 < h(n) \le h(n+1)\). It is convenient to extend h by \(h(0):=0\).

We start by recalling what is known [12, 13, 15]. Assume that \(G_1(T):= \sum _{k=1}^{\infty } \vert g(k+1)\vert \, T^k\) has a positive radius R of convergence. Let \(\kappa > 0\) be given, such that \(G_1(2/\kappa ) \le \frac{1}{2}\). Let \(x \in {\mathbb {C}}\). Then we have for all \(\vert x \vert > \kappa \,\, h(n-1)\):

$$\begin{aligned} \frac{ \vert x \vert }{2 \, h(n) } \vert P_{n-1}^{g,h}(x) \vert< \vert P_n^{g,h}(x) \vert < \frac{ 3 \, \vert x \vert }{2 \, h(n) } \left| P_{n-1}^{g,h}(x)\right| . \end{aligned}$$
(6)

This implies that \(P_n^{g,h}(x)\ne 0\) for all \(\vert x \vert > \kappa \,\, h(n-1)\) and \((-1)^n P_n^{g,h}(x) > 0\) if \(x<-\kappa h\left( n-1\right) \). Let \(g(n)= \sigma (n)\). In [13] we proved that \(\kappa =15\) is an acceptable value.

In the following we state our two main results: Improvement A and Improvement B.

2.1 Improvement A

The following result reproduces our previous result (6) if we choose \(\varepsilon = \frac{1}{2}\).

Theorem 1

Let \(0<\varepsilon <1\). Let \(R>0\) be the radius of convergence of

$$\begin{aligned} G_{1}\left( T\right) =\sum _{k=1}^{\infty }\left| g\left( k+1\right) \right| T^{k}. \end{aligned}$$

Let \(0<T_{\varepsilon }<R\) be such that \(G_{1}\left( T_{\varepsilon }\right) \le \varepsilon \) and \(\kappa =\kappa _{\varepsilon }=\frac{1}{1-\varepsilon }\frac{1}{T_{\varepsilon }}\). Then

$$\begin{aligned} \left| P_{n}^{g,h}\left( x\right) - \frac{x}{h\left( n\right) } P_{n-1}^{g,h}\left( x\right) \right| < \varepsilon \frac{\left| x\right| }{h\left( n\right) }\left| P_{n-1}^{g,h}\left( x\right) \right| , \end{aligned}$$
(7)

if \(\left| x \right| > {\kappa }\,\, h(n-1)\) for all \(n\ge 1\).

This result can be reformulated in the following way, which is more suitable for applications to growth and non-vanishing properties.

Theorem 2

Let \(0<\varepsilon <1\). Let \(R>0\) be the radius of convergence of

$$\begin{aligned} G_{1}\left( T\right) =\sum _{k=1}^{\infty }\left| g\left( k+1\right) \right| T^{k}. \end{aligned}$$

Let \(0<T_{\varepsilon }<R\) be such that \(G_{1}\left( T_{\varepsilon }\right) \le \varepsilon \) and \(\kappa =\kappa _{\varepsilon }=\frac{1}{1-\varepsilon }\frac{1}{T_{\varepsilon }}\). Then

$$\begin{aligned} \left( 1-\varepsilon \right) \frac{\left| x\right| }{h\left( n\right) }\left| P_{n-1}^{g,h}\left( x\right) \right|<\left| P_{n}^{g,h}\left( x\right) \right| < \left( 1+\varepsilon \right) \frac{\left| x\right| }{h\left( n\right) }\left| P_{n-1}^{g,h}\left( x\right) \right| , \end{aligned}$$

if \(\left| x \right| > {\kappa } \,\, h(n-1) \) for all \(n\ge 1\).

Corollary 1

Let \(\kappa \) be chosen as in Theorem 1 or as in Theorem 2. Then

$$\begin{aligned} P_{n}^{g,h}\left( x\right) \ne 0 \text { for } \left| x\right| >\kappa \,\, h(n-1). \end{aligned}$$

Proof

This follows from Theorem 2, since \(\left( 1-\varepsilon \right) \frac{\left| x\right| }{h\left( n\right) } \ne 0\) and \(P_0^{g,h}(x)=1\). \(\square \)

We note that the smallest possible \(\kappa \) is independent of the function h(n). It is also possible to provide a lower bound for the best possible \(\kappa \).

Proposition 1

The constant \(\kappa _{\varepsilon }\) obtained in Theorem 1 has the following lower bound:

$$\begin{aligned} \kappa _{\varepsilon }\ge \frac{\left| g\left( 2\right) \right| }{\left( 1-\varepsilon \right) \varepsilon }. \end{aligned}$$

As a lower bound independent of \(\varepsilon \) we have \(4\left| g\left( 2\right) \right| \).

Proof

If we consider only the first order term of the power series

$$\begin{aligned} G_{1}\left( T\right) =\sum _{k=1}^{\infty }\left| g\left( k+1\right) \right| T^{k}, \end{aligned}$$

then for positive T we always have \(G_{1}\left( T\right) =\sum _{k=1}^{\infty }\left| g\left( k+1\right) \right| T^{k}\ge \left| g\left( 2\right) \right| T\). Thus, \(G_{1}\left( T\right) >\varepsilon \) if \(T>\frac{\varepsilon }{\left| g\left( 2\right) \right| }\). The case \(G_{1}\left( T\right) \le \varepsilon \) is only possible if \(T\le \frac{\varepsilon }{\left| g\left( 2\right) \right| }\). This forces \(T_{\varepsilon }\le \frac{\varepsilon }{\left| g\left( 2\right) \right| }\).

Applying the last inequality now to

$$\begin{aligned} \kappa _{\varepsilon }:=\frac{1}{\left( 1-\varepsilon \right) T_{\varepsilon }} \end{aligned}$$

from Theorem 1 shows the lower bound \(\kappa _{\varepsilon }\ge \frac{\left| g\left( 2\right) \right| }{\left( 1-\varepsilon \right) \varepsilon }\) in the proposition depending on \(\varepsilon \). The minimal value of this lower bound is attained at \(\varepsilon =\frac{1}{2}\), since by the inequality of arithmetic and geometric means \(\left( 1-\varepsilon \right) \varepsilon \le \left( \frac{1-\varepsilon +\varepsilon }{2}\right) ^{2}=\frac{1}{4}\); this yields the bound \(4\left| g\left( 2\right) \right| \). \(\square \)

2.2 Improvement B

Theorem 3

Let \(0<\varepsilon <1\). Let \(R>0\) be the radius of convergence of

$$\begin{aligned} G_{2}\left( T \right) =\sum _{k=2}^{\infty }\left| g\left( k+1\right) -g\left( 2\right) g\left( k\right) \right| T^{k}. \end{aligned}$$

Let \(0<T_{\varepsilon }<R\) be such that \(G_{2}\left( T_{\varepsilon }\right) \le \varepsilon \) and

$$\begin{aligned} \kappa =\kappa _{\varepsilon }:=\frac{1}{1-\varepsilon }\left( \frac{1}{T_{\varepsilon }}+\left| g\left( 2\right) \right| \right) . \end{aligned}$$

Then

$$\begin{aligned} \left| P_{n}^{g,h}\left( x\right) - \frac{x+g\left( 2\right) h\left( n-1\right) }{h\left( n\right) } P_{n-1}^{g,h}\left( x\right) \right| < \varepsilon \frac{\left| x\right| }{h\left( n\right) }\left| P_{n-1}^{g,h}\left( x\right) \right| \end{aligned}$$
(8)

if \(\left| x \right| > {\kappa } \,\, h(n-1) \) for all \(n\ge 1\).

Theorem 4

Let \(0<\varepsilon <1\). Let \(R>0\) be the radius of convergence of

$$\begin{aligned} G_{2}\left( T \right) =\sum _{k=2}^{\infty }\left| g\left( k+1\right) -g\left( 2\right) g\left( k\right) \right| T^{k}. \end{aligned}$$

Let \(0<T_{\varepsilon }<R\) be such that \(G_{2}\left( T_{\varepsilon }\right) \le \varepsilon \) and

$$\begin{aligned} \kappa =\kappa _{\varepsilon }:=\frac{1}{1-\varepsilon }\left( \frac{1}{T_{\varepsilon }}+\left| g\left( 2\right) \right| \right) . \end{aligned}$$

Then

$$\begin{aligned}&\frac{ \left| x+g\left( 2\right) h\left( n-1\right) \right| -\varepsilon \left| x\right| }{h\left( n\right) }\left| P_{n-1}^{g,h}\left( x\right) \right| \nonumber \\&\quad<\left| P_{n}^{g,h}\left( x\right) \right| < \frac{ \left| x+g\left( 2\right) h\left( n-1\right) \right| +\varepsilon \left| x\right| }{h\left( n\right) }\left| P_{n-1}^{g,h}\left( x\right) \right| \end{aligned}$$
(9)

if \(\left| x \right| > {\kappa } \,\, h (n-1) \) for all \(n\ge 1\).

Corollary 2

Let \(\kappa \) be chosen as in Theorem 3 or as in Theorem 4. Then

$$\begin{aligned} P_{n}^{g,h}\left( x\right) \ne 0 \text { for } \left| x\right| >\kappa \,\, h(n-1). \end{aligned}$$
(10)

Proposition 2

The constant \(\kappa _{\varepsilon }\) obtained in Theorem 3 has the following lower bound:

$$\begin{aligned} \kappa _{\varepsilon }\ge \frac{1}{1-\varepsilon }\left( \sqrt{\frac{\left| \left( g\left( 2\right) \right) ^{2}-g\left( 3\right) \right| }{\varepsilon }}+\left| g\left( 2\right) \right| \right) . \end{aligned}$$

As a lower bound independent of \(\varepsilon \) we have \(\frac{3}{2}\sqrt{3\left| \left( g\left( 2\right) \right) ^{2}-g\left( 3\right) \right| }+\left| g\left( 2\right) \right| \).

Proof

If we consider only the second order term of the power series \(G_{2}\left( T\right) =\sum _{k=2}^{\infty }\left| g\left( k+1\right) -g\left( 2\right) g\left( k\right) \right| T^{k}\), then for positive T we always have

$$\begin{aligned} G_{2}\left( T\right) =\sum _{k=2}^{\infty }\left| g\left( k+1\right) -g\left( 2\right) g\left( k\right) \right| T^{k}\ge \left| \left( g\left( 2\right) \right) ^{2}-g\left( 3\right) \right| T^{2}. \end{aligned}$$

Thus, \(G_{2}\left( T\right) >\varepsilon \) if \(T>\sqrt{\frac{\varepsilon }{\left| \left( g\left( 2\right) \right) ^{2}-g\left( 3\right) \right| }}\). The case \(G_{2}\left( T\right) \le \varepsilon \) is only possible if \(T\le \sqrt{\frac{\varepsilon }{\left| \left( g\left( 2\right) \right) ^{2}-g\left( 3\right) \right| }}\). This forces \(T_{\varepsilon }\le \sqrt{\frac{\varepsilon }{\left| \left( g\left( 2\right) \right) ^{2}-g\left( 3\right) \right| }}\).

Applying the last inequality now to

$$\begin{aligned} \kappa _{\varepsilon }:=\frac{1}{1-\varepsilon }\left( \frac{1}{T_{\varepsilon }}+\left| g\left( 2\right) \right| \right) \end{aligned}$$

from Theorem 3 shows the lower bound \(\kappa _{\varepsilon }\ge \frac{1}{1-\varepsilon }\left( \sqrt{\frac{\left| \left( g\left( 2\right) \right) ^{2}-g\left( 3\right) \right| }{\varepsilon }}+\left| g\left( 2\right) \right| \right) \) in the proposition depending on \(\varepsilon \).

It is clear that

$$\begin{aligned} \frac{1}{1-\varepsilon }\left( \sqrt{\frac{\left| \left( g\left( 2\right) \right) ^{2}-g\left( 3\right) \right| }{\varepsilon }}+\left| g\left( 2\right) \right| \right) \ge \frac{1}{1-\varepsilon }\sqrt{\frac{\left| \left( g\left( 2\right) \right) ^{2}-g\left( 3\right) \right| }{\varepsilon }}+\left| g\left( 2\right) \right| \end{aligned}$$

for \(0<\varepsilon <1\). To estimate \(\kappa _{\varepsilon }\) independent of \(\varepsilon \) we consider the right hand side of the last inequality as a function in \(\varepsilon \). Thus, we are interested in the minimal value of this function for \(0<\varepsilon <1\). The inequality of arithmetic and geometric means yields

$$\begin{aligned} \left( 1-\varepsilon \right) \varepsilon ^{1/2}= & {} 2\left( \left( 1-\varepsilon \right) /2\right) ^{1/2}\cdot \left( \left( 1-\varepsilon \right) /2\right) ^{1/2}\cdot \varepsilon ^{1/2}\\\le & {} 2\left( \frac{\left( 1-\varepsilon \right) /2+\left( 1-\varepsilon \right) /2+\varepsilon }{3}\right) ^{3/2}=\frac{2}{3\sqrt{3}}. \end{aligned}$$

Hence we obtain the lower bound \(\frac{3}{2}\sqrt{3\left| \left( g\left( 2\right) \right) ^{2}-g\left( 3\right) \right| }+\left| g\left( 2\right) \right| \), independent of \(\varepsilon \). \(\square \)

2.3 Comparing improvement A and improvement B

Let \(0<\varepsilon _{1}<1\) and \(T_{\varepsilon _{1}}\) as in Theorem 1. For all \(T\ge 0\) we have that

$$\begin{aligned} G_{2}\left( T\right)\le & {} \sum _{k=2}^{\infty }\left( \left| g\left( k+1\right) \right| +\left| g\left( 2\right) g\left( k\right) \right| \right) T^{k} \\= & {} \left( 1+\left| g\left( 2\right) \right| T\right) G_{1}\left( T\right) -\left| g\left( 2\right) \right| T. \end{aligned}$$

Let \(\varepsilon _{2}\) be such that

$$\begin{aligned} \left( 1+\left| g\left( 2\right) \right| T_{\varepsilon _{1}}\right) G_{1}\left( T_{\varepsilon _{1}}\right) -\left| g\left( 2\right) \right| T_{\varepsilon _{1}}\le \varepsilon _{2}\le \left( 1+\left| g\left( 2\right) \right| T_{\varepsilon _{1}}\right) \varepsilon _{1}-\left| g\left( 2\right) \right| T_{\varepsilon _{1}} <1. \end{aligned}$$

Then

$$\begin{aligned} 0\le G_{2}\left( T_{\varepsilon _{1}}\right) \le \left( 1+\left| g\left( 2\right) \right| T_{\varepsilon _{1}}\right) G_{1}\left( T_{\varepsilon _{1}}\right) -\left| g\left( 2\right) \right| T_{\varepsilon _{1}} \le \varepsilon _{2}. \end{aligned}$$

This shows that we can choose \(T_{\varepsilon _{2}}=T_{\varepsilon _{1}}\) as the corresponding \(T_{\varepsilon }\) from Theorem 3.

Let \(\kappa _{1,\varepsilon }\) and \(\kappa _{2,\varepsilon }\) be the respective constants from Theorems 1 and 3. Then

$$\begin{aligned} \kappa _{2,\varepsilon _{2}}= & {} \frac{1}{1-\varepsilon _{2}}\left( \frac{1}{T_{\varepsilon _{1}}}+\left| g\left( 2\right) \right| \right) = \frac{1}{1-\varepsilon _{2}}\left( 1+\left| g\left( 2\right) \right| T_{\varepsilon _{1}}\right) \frac{1}{T_{\varepsilon _{1}}} \\\le & {} \frac{1}{1-\left( 1+\left| g\left( 2\right) \right| T_{\varepsilon _{1}}\right) \varepsilon _{1}+\left| g\left( 2\right) \right| T_{\varepsilon _{1}} }\left( 1+\left| g\left( 2\right) \right| T_{\varepsilon _{1}}\right) \frac{1}{T_{\varepsilon _{1}}} \\= & {} \frac{1}{1-\varepsilon _{1}}\frac{1}{T_{\varepsilon _{1}}}=\kappa _{1,\varepsilon _{1}}. \end{aligned}$$

This shows that the minimal value of the \(\kappa _{2,\varepsilon }\) is never larger than the minimal value of the \(\kappa _{1,\varepsilon }\).
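
This comparison can be made concrete for a simple g. The following minimal sketch (Python with sympy) takes \(g(n)=n\), for which \(G_{1}(T)=\frac{1}{(1-T)^{2}}-1\) and \(G_{2}(T)=\frac{T^{2}}{(1-T)^{2}}\) (see also Sect. 3.3), starts from \(\varepsilon _{1}=\frac{1}{2}\) and \(T_{\varepsilon _{1}}=\frac{2}{11}\), and chooses \(\varepsilon _{2}\) at the lower end of the admissible range above.

```python
# Sketch: g(n) = n, eps_1 = 1/2, T = 2/11; the eps_2 chosen at the lower end
# of the admissible range gives a kappa_2 that is not larger than kappa_1.
from sympy import Rational

T, g2 = Rational(2, 11), 2
eps1 = Rational(1, 2)
G1 = 1 / (1 - T)**2 - 1              # G_1(T) for g(n) = n
G2 = T**2 / (1 - T)**2               # G_2(T) for g(n) = n
assert G1 <= eps1

eps2 = (1 + g2 * T) * G1 - g2 * T    # smallest admissible eps_2
assert G2 <= eps2 < 1

kappa1 = 1 / ((1 - eps1) * T)                # = 11
kappa2 = (1 / (1 - eps2)) * (1 / T + g2)     # = 891/82 < 11
assert kappa2 <= kappa1
```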

3 Applications

3.1 Toy example

Let us consider the case \(g(n)=1\) for all \(n \in {\mathbb {N}}\). We observe that \(G_2(T)=0\) for all T, so in Theorem 4 we may choose \(T_{\varepsilon }\) arbitrarily large and hence \(\kappa _{\varepsilon }\) arbitrarily close to \(\frac{1}{1-\varepsilon }\). Let \(0< \varepsilon < 1\). Applying Theorem 4, we obtain for all \(\vert x \vert > \frac{1}{1- \varepsilon } h(n-1)\)

$$\begin{aligned} \frac{ \left| x+ h\left( n-1\right) \right| -\varepsilon \left| x\right| }{h\left( n\right) }\left| P_{n-1}^{{\mathbf {1}},h}\left( x\right) \right|<\left| P_{n}^{{\mathbf {1}},h}\left( x\right) \right| < \frac{ \left| x+ h\left( n-1\right) \right| +\varepsilon \left| x\right| }{h\left( n\right) }\left| P_{n-1}^{{\mathbf {1}},h}\left( x\right) \right| . \end{aligned}$$

Letting \(\varepsilon \rightarrow 0\), we obtain for all \(\vert x \vert > h(n-1)\):

$$\begin{aligned} \frac{ \left| x+ h\left( n-1\right) \right| }{h\left( n\right) }\left| P_{n-1}^{{\mathbf {1}},h}\left( x\right) \right| \le \left| P_{n}^{{\mathbf {1}},h}\left( x\right) \right| \le \frac{ \left| x+ h\left( n-1\right) \right| }{h\left( n\right) } \left| P_{n-1}^{{\mathbf {1}},h}\left( x\right) \right| . \end{aligned}$$

Hence, \(\left| P_{n}^{{\mathbf {1}},h}\left( x\right) \right| = \prod _{k=0}^{n-1} \frac{\vert x + h(k)\vert }{h(k+1)}\) (recall that \(h(0):=0\)). Since \(P_{1}^{{\mathbf {1}},h}\left( x\right) = x/h(1)\) and \(P_{n}^{{\mathbf {1}},h}\left( x\right) \) is a polynomial of degree n with positive leading coefficient, it follows that

$$\begin{aligned} P_n^{{\mathbf {1}},h}(x) = \frac{ x (x+h(1)) \cdots (x+h(n-1))}{h(1) \cdots h(n)}. \end{aligned}$$
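
The product formula can be checked symbolically against the recursion (2); the following minimal sketch (Python with sympy) uses \(h(n)=n^{2}\) as a sample choice of h.

```python
# Sketch: the product formula above vs. recursion (2) with g = 1 and a
# sample increasing h, here h(n) = n^2 (recall h(0) = 0).
from sympy import symbols, expand, Integer, Mul

x = symbols('x')
h = lambda n: n**2

P = [Integer(1)]
for n in range(1, 7):
    P.append(expand(x / h(n) * sum(P[n - k] for k in range(1, n + 1))))  # g(k) = 1

for n in range(1, 7):
    num = Mul(*[x + h(k) for k in range(n)])
    den = Mul(*[Integer(h(k)) for k in range(1, n + 1)])
    assert expand(P[n] - num / den) == 0
```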

3.2 Reciprocals of Eisenstein series

Let \(\sigma _{k}(n)= \sum _{d \vert n} d^{k}\) and let \(B_{k}\) be the kth Bernoulli number. Then we define Eisenstein series of weight k:

$$\begin{aligned} E_k(\tau ):= 1 - \frac{2k}{B_k} \sum _{n=1}^{\infty } \sigma _{k-1}(n) \, q^n \qquad (k=2,4,6,\ldots ). \end{aligned}$$

In [3] it was indicated that the q-expansion of the reciprocal of \(E_4(\tau )=1 +240 \sum _{n=1}^{\infty } \sigma _3(n) \, q^n\) given by

$$\begin{aligned} \frac{1}{E_4(\tau )} = \sum _{n=0}^{\infty } \beta _n \, q^n, \end{aligned}$$

has strictly alternating signs: \((-1)^n \beta _n >0\). Let \(\varepsilon _1= \frac{1}{25} \) and \(\varepsilon _2= \frac{1}{982} \). We can choose \(\kappa \) in Theorems 1–4 such that \(240 > \kappa \). (In both cases \(T_{\varepsilon }=\frac{87}{20000}\) does the job. Then \(\kappa _{1}=\frac{62500}{261}\approx 239.46\) and \(\kappa _{2}=\frac{20408906}{85347}\approx 239.13\). Note that an approximation of the smallest possible value that can be obtained by our method is \(\kappa _{2}=\frac{539}{16}\approx 33.7\), obtained for \(\varepsilon _{2}=\frac{5}{21}\) and \(T_{\varepsilon _{2}}=\frac{3}{50}\).)

Proof of \(\kappa _{1}\le \frac{62500}{261}\) and \(\kappa _{2}\le \frac{20408906}{85347}\). Let \(T_{\varepsilon }=\frac{87}{20000}\), \(\varepsilon _{1}=\frac{1}{25}\) and \(\varepsilon _{2}=\frac{1}{982}\). We have the well-known estimate

$$\begin{aligned} \sigma _{3}\left( k\right) \le \left( 1+\int _{1}^{ \infty }t^{-3}\,\mathrm {d}t\right) k^{3}= 3k^{3}/2 . \end{aligned}$$
(11)

Thus, \(\sigma _{3}\left( k\right) \le 3k^{3}/2\le 9\left( {\begin{array}{c}k+2\\ 3\end{array}}\right) \). Let \(c_{1}\left( k\right) =\sigma _{3}\left( k+1\right) \) for \(k\le 2\) and \(c_{1}\left( k\right) =9\left( {\begin{array}{c}k+3\\ 3\end{array}}\right) \) for \(k\ge 3\). Then \(G_{1}\left( T\right) \le \sum _{k=1}^{\infty }c_{1}\left( k\right) T^{k}=9\frac{1}{\left( 1-T\right) ^{4}}-9-27T-62T^{2}\) and

$$\begin{aligned} G_{1}\left( \frac{87}{20000}\right) \le \frac{1248274072444709335238721}{31446822595409952200000000}<\frac{1}{25}. \end{aligned}$$

Thus, \(\kappa _{1}\le \frac{20000}{87}\frac{25}{24}=\frac{62500}{261}\approx 239.46\).

With (11) it also follows that \( \left| 9\sigma _{3}\left( k\right) -\sigma _{3}\left( k+1\right) \right| \le 15\left( k+1\right) ^{3}\le 90\left( {\begin{array}{c}k+3\\ 3\end{array}}\right) \). Let \(c_{2}\left( k\right) =\left| 9\sigma _{3}\left( k\right) -\sigma _{3}\left( k+1\right) \right| \) for \(k\le 4\) and \(c_{2}\left( k\right) =90\left( {\begin{array}{c}k+3\\ 3\end{array}}\right) \) for \(k\ge 5\). Then \(G_{2}\left( T\right) \le \sum _{k=2}^{\infty }c_{2}\left( k\right) T^{k}=\frac{90}{\left( 1-T\right) ^{4}}-90-360T-847T^{2}-1621T^{3}-2619T^{4}\) for \(T>0\) and

$$\begin{aligned} G_{2}\left( \frac{87}{20000}\right) \le \frac{25605878110865247894531439480101}{25157458076327961760000000000000000}<\frac{1}{982}. \end{aligned}$$

Thus, \(\kappa _{2}\le \left( \frac{20000}{87}+9\right) \frac{982}{981}=\frac{20408906}{85347} \approx 239.13\). \(\square \)
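
The rational estimates in this proof can be reproduced with exact arithmetic. The following minimal sketch (Python with sympy) recomputes the two upper bounds and the resulting values of \(\kappa _{1}\) and \(\kappa _{2}\).

```python
# Sketch: exact verification of the two rational bounds above.
from sympy import Rational

T = Rational(87, 20000)
ub1 = 9 / (1 - T)**4 - 9 - 27*T - 62*T**2                                  # >= G_1(T)
ub2 = 90 / (1 - T)**4 - 90 - 360*T - 847*T**2 - 1621*T**3 - 2619*T**4      # >= G_2(T)
assert ub1 < Rational(1, 25) and ub2 < Rational(1, 982)

kappa1 = Rational(25, 24) * (1 / T)            # = 62500/261, approx. 239.46
kappa2 = Rational(982, 981) * (1 / T + 9)      # = 20408906/85347, approx. 239.13
assert kappa1 < 240 and kappa2 < 240
```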

Note that \(\beta _1= -240\), \(\beta _n \in {\mathbb {Z}}\) and \(\beta _1 \mid \beta _n\) for all \(n \ge 1\). From (6), Theorems 1–4 and Corollary 1 the following properties are obtained for \(n \ge 2\):

$$\begin{aligned} \frac{1}{2} \vert \beta _1 \beta _{n-1} \vert<&\vert \beta _n \vert&< \frac{3}{2} \vert \beta _1 \beta _{n-1} \vert , \\ \left| \beta _n - \beta _1 \beta _{n-1} \right|< & {} \varepsilon _1 \, \vert \beta _1 \beta _{n-1} \vert ,\\ (1 - \varepsilon _1) \vert \beta _1 \beta _{n-1} \vert<&\vert \beta _n \vert&< (1 + \varepsilon _1) \vert \beta _1 \beta _{n-1} \vert , \\ \left| \beta _n -( \beta _1 + 9) \beta _{n-1} \right|< & {} \varepsilon _2 \vert \beta _1 \beta _{n-1} \vert ,\\ \vert 231 + \varepsilon _2 \beta _1 \vert \, \vert \beta _{n-1} \vert<&\vert \beta _n \vert&< \vert 231 - \varepsilon _2 \beta _1 \vert \, \vert \beta _{n-1}\vert . \end{aligned}$$

Since \(\beta _0 =1\) we can deduce that \((-1)^n \beta _n >0\).

In the previous proof we showed that \(G_{2}\left( T_{\varepsilon }\right)<\frac{1}{982}<\frac{1}{250}\) for \(T_{\varepsilon }=\frac{87}{20000}\) and \(\kappa _{2}<240\). This leads to the following:

Theorem 5

([14]) Let \(G_{2}\left( T \right) \) be defined by

$$\begin{aligned} \sum _{m=2}^{\infty }\left| \sigma _{3} \left( m+1\right) - 9 \sigma _{3} \left( m\right) \right| T^m \end{aligned}$$

with positive radius of convergence R. If there is \(0<T_{\varepsilon }<1\) such that \(G_{2}\left( T_{\varepsilon }\right) \le \frac{1}{250} \) and \(\kappa _{2 } \le \frac{250}{249}\left( \frac{1}{T_{\varepsilon }}+ \sigma _{3} \left( 2\right) \right) <\frac{8}{\left| B_{4}\right| }=240\), then the absolute value of the nth coefficient \(\beta _n\) of \(1/E_{4}\) can be estimated by

$$\begin{aligned} 240 \left( \left( 1\pm \frac{1}{250} \frac{240}{231} \right) 231 \right) ^{n-1}. \end{aligned}$$

This implies

$$\begin{aligned} 230^{n-1}\le \frac{\left( -1\right) ^{n}\beta _{n}}{240}\le 232^{n-1}. \end{aligned}$$
(12)
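
The coefficients \(\beta _n\) are easy to generate from the recursion for \(P_n^{g,{\mathbf {1}}}(x)\) with \(g(n)=\sigma _3(n)\) at \(x=-240\). The following minimal sketch (Python with sympy) checks the sign alternation and the bounds (12) for small n.

```python
# Sketch: beta_n from the recursion with g(n) = sigma_3(n), x = -240.
from sympy import divisors, Integer

def sigma3(n):
    return sum(d**3 for d in divisors(n))

N = 12
beta = [Integer(1)]
for n in range(1, N + 1):
    beta.append(-240 * sum(sigma3(k) * beta[n - k] for k in range(1, n + 1)))

for n in range(1, N + 1):
    assert (-1)**n * beta[n] > 0                                     # alternating signs
    assert 230**(n - 1) <= (-1)**n * beta[n] / 240 <= 232**(n - 1)   # inequality (12)
print(beta[:4])   # [1, -240, 55440, -12793920]
```
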

The following table displays the first values (Table 1).

Table 1 Estimation given by (12)

By dividing \(\beta _{n}\) by the estimates we obtain the figures displayed in Table 2:

Table 2 Normalization

Remarks

The value \(\varepsilon _{2}=\frac{1}{982}\) improves the inequalities (12) to

$$\begin{aligned} 230.7556^{n-1}\le \frac{\left( -1\right) ^{n}\beta _{n}}{240}\le 231.2444^{n-1}. \end{aligned}$$

The lower bound is quite close to the correct value \(e^{\pi \sqrt{3}} =230.764588\ldots \). It can be shown using for instance the circle method that \( \beta _n \sim C \,\, (-1)^n \,\, e^{\pi \, n \sqrt{3}}\) with some suitable constant \(C >0\) (e.g. [5]).

3.3 Associated Laguerre polynomials and Chebyshev polynomials of the second kind

We briefly recall the definition of associated Laguerre polynomials \(L_n^{(\alpha )}(x)\) and Chebyshev polynomials \(U_n(x)\) of the second kind [8, 24]. Both are orthogonal polynomials. We have

$$\begin{aligned} L_{n}^{\left( \alpha \right) }\left( x\right) = \sum _{k=0}^{n}\left( {\begin{array}{c}n+\alpha \\ n-k\end{array}}\right) \frac{(-x)^{k}}{k!} \qquad (\alpha > -1). \end{aligned}$$

The Chebyshev polynomials are uniquely characterized by

$$\begin{aligned} U_{n}(\cos (t)) = \frac{\sin ((n+1)t)}{\sin (t)} \qquad ( 0< t < \pi ). \end{aligned}$$

The Chebyshev polynomials are of special interest and use, since they are the only classical orthogonal polynomials whose zeros can be determined in explicit form (see Rahman and Schmeisser [24], Introduction).

Let \(g(n)=\mathop {\mathrm{id}}(n)= n\). Then

$$\begin{aligned} P_n^{\mathop {\mathrm{id}}}(x)= & {} \frac{x}{n} L_{n-1}^{(1)}(-x),\nonumber \\ Q_n^{\mathop {\mathrm{id}}}(x)= & {} x \, U_{n-1}\left( \frac{x}{2} + 1 \right) . \end{aligned}$$
(13)
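
The identities (13) can be verified symbolically for small n. The following minimal sketch assumes sympy's built-in assoc_laguerre and chebyshevu and compares them with the recursion (2).

```python
# Sketch: checking (13) for small n via the recursion (2).
from sympy import symbols, expand, assoc_laguerre, chebyshevu, Integer

x = symbols('x')

def rec(N, h):                        # P_n^{id,h}(x), i.e. g(k) = k in (2)
    out = [Integer(1)]
    for n in range(1, N + 1):
        out.append(expand(x / h(n) * sum(k * out[n - k] for k in range(1, n + 1))))
    return out

P_id = rec(6, lambda n: n)            # P_n^{id}(x)
Q_id = rec(6, lambda n: 1)            # Q_n^{id}(x)

for n in range(1, 7):
    assert expand(P_id[n] - x / n * assoc_laguerre(n - 1, 1, -x)) == 0
    assert expand(Q_id[n] - x * chebyshevu(n - 1, x / 2 + 1)) == 0
```
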

The generating function of the Chebyshev polynomials of the second kind is given by

$$\begin{aligned} \sum _{n=0}^{\infty } U_n(x) \, q^n = \frac{1}{1 - 2x q + q^2}, \qquad \vert x \vert , \vert q \vert <\frac{1}{\sqrt{3}}. \end{aligned}$$

With this we can prove Eq. (13). We have

$$\begin{aligned} 1+xq\sum _{n=0}^{\infty }U_{n}\left( \frac{x}{2}+1\right) q^{n}= & {} 1+\frac{xq}{1-\left( 2+x\right) q+q^{2}}= \frac{1-2q+q^{2}}{1-\left( 2+x\right) q+q^{2}}\\= & {} \frac{1}{1-xq\frac{1}{\left( 1-q\right) ^{2}}} = \frac{1}{1-xq\sum _{n=1}^{\infty }nq^{n-1}}\\= & {} \sum _{n=0}^{\infty }Q_{n}^{\mathop {\mathrm{id}}}\left( x\right) q^{n} \end{aligned}$$

using Definition (5). Note that \(G_{1}\left( T\right) =\sum _{k=1}^{\infty }\left( k+1\right) T^{k}=\frac{1}{\left( 1-T\right) ^{2}}-1\) and

$$\begin{aligned} G_{2}\left( T\right) =\sum _{k=2}^{\infty }\left( k-1\right) T^{k}=\frac{T^{2}}{\left( 1-T\right) ^{2}}. \end{aligned}$$

From this we obtain the following values:

Table 3 Case \(g(n) = n\)

If we consider the special case \(\varepsilon _1 = 1/2\) in Improvement A, we can choose \(T_{\varepsilon _{1}}=2/11\) and finally get \(\kappa _1 = 11\).

This leads to several applications. For example, choosing \(\varepsilon = \frac{1}{4}\) and \(T_{\varepsilon }= \frac{1}{3}\) in Improvement B gives \(\kappa _{2}= \frac{20}{3}\). Hence, if \(\vert x \vert >\left( 20/3\right) \, n \), then \(L_n^{(1)}(x) \ne 0\) and the following estimates hold:

$$\begin{aligned} \left( \left| 2n-x\right| - \left| x \right| /4\right) \,\, \left| L_{n-1}^{\left( 1\right) }\left( x\right) \right|< n \left| L_{n}^{\left( 1\right) }\left( x\right) \right| < \left( \left| 2n-x\right| + \left| x \right| /4\right) \,\, \left| L_{n-1}^{\left( 1\right) }\left( x\right) \right| . \end{aligned}$$
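
As a numerical illustration (not needed for the proof), one instance of this estimate can be checked with exact rational arithmetic, evaluating \(L_n^{(1)}\) via the explicit sum recalled at the beginning of this subsection.

```python
# Sketch: one exact instance of the estimate, with n = 10 and x = 200,
# so that |x| > (20/3) n; L_n^{(1)} is evaluated via the explicit sum above.
from sympy import binomial, factorial, Integer

def L(n, alpha, xv):
    return sum(binomial(n + alpha, n - k) * (-xv)**k / factorial(k) for k in range(n + 1))

n, xv = 10, Integer(200)
Ln, Lnm1 = L(n, 1, xv), L(n - 1, 1, xv)
lower = (abs(2*n - xv) - abs(xv) / 4) * abs(Lnm1)
upper = (abs(2*n - xv) + abs(xv) / 4) * abs(Lnm1)
assert lower < n * abs(Ln) < upper
```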

3.4 Powers of the Dedekind \(\eta \)-function

Let us recall the well-known identity:

$$\begin{aligned} \prod _{n=1}^{\infty } \left( 1 - q^n \right) = \exp \left( - \sum _{n=1}^{\infty } \sigma (n) \, \frac{q^n}{n} \right) . \end{aligned}$$

The q-expansion of the \(-z\)th power of the Euler product defines the D’Arcais polynomials

$$\begin{aligned} \sum _{n=0}^{\infty } P_n^{\sigma }(z) \, q^n = \prod _{n=1}^{\infty } \left( 1 - q^n \right) ^{-z} \qquad ( z \in {\mathbb {C}}), \end{aligned}$$

where \(P_0^{\sigma }(x)=1\) and \(P_n^{\sigma }\left( x\right) = \frac{x}{n} \sum _{k=1}^{n} \sigma (k) P_{n-k}^{\sigma }(x)\), so that \(P_n^{\sigma }\) is a polynomial of degree n. Note that these polynomials evaluated at \(-24\) are directly related to the Ramanujan \(\tau \)-function: \(\tau (n)= P_{n-1}^{\sigma }(-24)\), which hints at a link to the Lehmer conjecture [19].

In the spirit of this paper, let \(\varepsilon :=\frac{3}{14}\). Then \(T_{\varepsilon }:=\frac{2}{11}\) satisfies the assumptions of Theorem 4. We obtain the following corollary.

Corollary 3

Let \(\kappa = \frac{119}{11}\). Then \(P_n^{\sigma }(z) \ne 0\) for all complex z with \(\vert z \vert > \kappa \,\, (n-1)\).

Proof

We have to show that \(G_2\left( T_{\varepsilon }\right) = \sum _{k=2}^{\infty }\left| \sigma \left( k+1\right) -3\sigma \left( k\right) \right| T_{\varepsilon }^{k}<\varepsilon \). For this let \(c\left( k\right) =\left| \sigma \left( k+1\right) -3\sigma \left( k\right) \right| \) for \(1\le k\le 7\) and \(c\left( k\right) =4\left( {\begin{array}{c} k+2\\ 2\end{array}}\right) \) for \(k\ge 8\). Then \(\left| \sigma \left( k+1\right) -3\sigma \left( k\right) \right| \le c\left( k\right) \) for all \(k\in {\mathbb {N}}\) since

$$\begin{aligned} \sigma \left( k\right) \le \left( 1+\ln \left( k\right) \right) k\le \left( \frac{k}{4}+\ln \left( 4\right) \right) k\le \left( {\begin{array}{c}k+1\\ 2\end{array}}\right) \end{aligned}$$

for \(k\ge 4\). This implies \(G_2\left( T \right) \le \sum _{k=2}^{\infty }c\left( k\right) T^{k}\) for \(0\le T < 1\le R\). The upper bound is now almost, except for the first 8 terms, a multiple of the second derivative of the geometric series of T. Hence,

$$\begin{aligned} G_2(T) \le \frac{4}{\left( 1-T\right) ^{3}}-4 -12T- 19T^{2}- 35T^{3}-45T^{4}-78T^{5}-84T^{6}-135T^{7}. \end{aligned}$$

For \(T=T_{\varepsilon }=\frac{2}{11}\) we obtain

$$\begin{aligned} G_2 \left( T_{\varepsilon }\right) \le \frac{3043993780}{14206147659} < \frac{3}{14}=\varepsilon . \end{aligned}$$

The claim now follows from Corollary 2. \(\square \)
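
The rational bound used in this proof can be reproduced exactly; the following minimal sketch (Python with sympy) recomputes it together with \(\kappa =\frac{119}{11}\).

```python
# Sketch: exact value of the rational upper bound in the proof above.
from sympy import Rational

T = Rational(2, 11)
ub = (4 / (1 - T)**3 - 4 - 12*T - 19*T**2 - 35*T**3
      - 45*T**4 - 78*T**5 - 84*T**6 - 135*T**7)
assert ub == Rational(3043993780, 14206147659)
assert ub < Rational(3, 14)

kappa = 1 / (1 - Rational(3, 14)) * (Rational(11, 2) + 3)   # 1/T_eps = 11/2, g(2) = sigma(2) = 3
assert kappa == Rational(119, 11)
```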

Remarks

  a)

    Let \(\varepsilon \) and \(\kappa \) be as above, and let h be an arbitrary arithmetic function with \(0 < h(n) \le h(n+1)\). Then \(P_n^{\sigma ,h}(x)\) satisfies (8), (9), and (10) obtained by Improvement B.

  b)

    The value \(\varepsilon = \frac{3}{14}\) already leads to

    $$\begin{aligned} \kappa _{\varepsilon } = \frac{119}{11} = 10.{\overline{81}}. \end{aligned}$$

    Note that only minor further improvements can be achieved.

  c)

    Corollary 3 improves our previous result [13], where \(\kappa =15\).

Proposition 3

Let \(\varepsilon = 0.217\) and \(T_{\varepsilon }= 0.18289\). Then the assumptions of Theorem 3 are fulfilled. Furthermore we can take \(\kappa =10.815\).

Proof

Let \(\varepsilon \) and \(T_{\varepsilon }\) be given. We have to show that

$$\begin{aligned} G_2\left( T_{\varepsilon }\right) =\sum _{k=2}^{\infty }\left| \sigma \left( k+1\right) - 3\sigma \left( k\right) \right| T_{\varepsilon }^{k}<\varepsilon . \end{aligned}$$

Let \(c\left( k\right) =\left| \sigma \left( k+1\right) -3\sigma \left( k\right) \right| \) for \(1\le k\le 11\) and \(c\left( k\right) =4\left( {\begin{array}{c} k+2\\ 2\end{array}}\right) \) for \(k\ge 12\). Then \(\left| \sigma \left( k+1\right) -3\sigma \left( k\right) \right| \le c\left( k\right) \) for all \(k\in {\mathbb {N}}\) as

$$\begin{aligned} \sigma \left( k\right) \le \left( 1+\ln \left( k\right) \right) k\le \left( \frac{k}{4}+\ln \left( 4\right) \right) k\le \left( {\begin{array}{c}k+1\\ 2\end{array}}\right) \end{aligned}$$

for \(k\ge 4\). This implies \(G_2\left( T\right) \le \sum _{k=2}^{\infty }c\left( k\right) T^{k}\) for \(0\le T < 1\le R\). The upper bound is almost (except for the first 12 terms) a multiple of the second derivative of the geometric series in T. Hence \(G_2(T) \le \sum _{k=2}^{\infty }c\left( k\right) T^{k} \le \)

$$\begin{aligned}&4\sum _{k=0}^{\infty }\left( {\begin{array}{c}k+2\\ 2\end{array}}\right) T^{k}-4-12T-19T^{2}-35T^{3}-45T^{4}-78T^{5}-84T^{6}-135T^{7} \\&\qquad -148T^{8}-199T^{9}-222T^{10}-304T^{11} \\&\quad = \frac{4}{\left( 1-T\right) ^{3}}-4 -12T- 19T^{2}- 35T^{3}-45T^{4}-78T^{5}-84T^{6}-135T^{7} \\&\qquad -148T^{8}-199T^{9}-222T^{10}-304T^{11}. \end{aligned}$$

For \(T=T_{\varepsilon }=0.18289\) we obtain

$$\begin{aligned} G_2 \left( T_{\varepsilon }\right)< 0.216998<\varepsilon . \end{aligned}$$

The claim now follows from Corollary 2. \(\square \)

Table 4 Values of \(\left| \sigma \left( k+1\right) -3\sigma \left( k\right) \right| \)

4 Proof of Theorems 1 and 2

Proof of Theorem 1

The proof will be by induction on n. The case \(n=1\) is obvious: \(\left| P_{1}^{g,h}\left( x\right) - \frac{x}{h\left( 1\right) }P_{0}^{g,h}\left( x\right) \right| =0<\varepsilon \frac{\left| x\right| }{h\left( 1\right) }\left| P_{0}^{g,h}\left( x\right) \right| \) for \(\left| x\right| >\kappa \,\, h( 0)\).

Let now \(n\ge 2\). Then

$$\begin{aligned} P_{n}^{g,h}\left( x\right) = \frac{x}{h\left( n\right) }\left( P_{n-1}^{g,h}\left( x\right) +\sum _{k=1}^{n-1}g\left( k+1\right) P_{n-1-k}^{g,h}\left( x\right) \right) . \end{aligned}$$

The basic idea for the induction step is to use the inequality

$$\begin{aligned} \left| P_{n}^{g,h}\left( x\right) -\frac{x}{h\left( n\right) }P_{n-1}^{g,h}\left( x\right) \right| \le \frac{\left| x\right| }{h\left( n\right) }\sum _{k=1}^{n-1}\left| g \left( k+1\right) \right| \left| P_{n-1-k}^{g,h}\left( x\right) \right| . \end{aligned}$$

We estimate the sum using the induction hypothesis, which yields for \(1\le j\le n-1\):

$$\begin{aligned} \left| P_{j}^{g,h}\left( x\right) \right|\ge & {} \left| \frac{x}{h\left( j\right) }\right| \left| P_{j-1}^{g,h}\left( x\right) \right| -\left| P_{j}^{g,h}(x)-\frac{x}{h\left( j\right) }P_{j-1}^{g,h}\left( x\right) \right| \\> & {} \left( \frac{\left| x\right| }{h\left( j\right) }-\varepsilon \frac{\left| x\right| }{h\left( j\right) }\right) \left| P_{j-1}^{g,h}\left( x\right) \right| \\= & {} \frac{\left( 1-\varepsilon \right) \left| x\right| }{h\left( j\right) }\left| P_{j-1}^{g,h}\left( x\right) \right| \end{aligned}$$

for \(\left| x\right| > \kappa \,\, h(n-1) \). Thus,

$$\begin{aligned} \left| P_{j-1}^{g,h}\left( x\right) \right| <\frac{h\left( j\right) }{\left( 1-\varepsilon \right) \left| x\right| }\left| P_{j}^{g,h}\left( x\right) \right| . \end{aligned}$$

Further, we have

$$\begin{aligned} \left| P_{n-k}^{g,h}\left( x\right) \right|< & {} \left| P_{n-k+1}^{g,h}\left( x\right) \right| \frac{h\left( n-k+1\right) }{\left( 1-\varepsilon \right) \left| x\right| }<\ldots \\< & {} \left| P_{n-1}^{g,h}\left( x\right) \right| \prod _{j=1}^{k-1}\frac{h\left( n-j\right) }{\left( 1-\varepsilon \right) \left| x\right| } \\\le & {} \left| P_{n-1}^{g,h}\left( x\right) \right| \left( \frac{h\left( n-1\right) }{\left( 1-\varepsilon \right) \left| x \right| }\right) ^{k-1} \end{aligned}$$

for \(\left| x\right| >\kappa \,\, h (n-1) \ge \kappa \,\, h( n-k)\) for all \(2 \le k \le n\) by assumption. Using this, we can now estimate the sum by

$$\begin{aligned} \sum _{k=1}^{n-1}\left| g \left( k+1\right) \right| \left| P_{n-1-k}^{g,h}\left( x\right) \right| < \left| P_{n-1}^{g,h}\left( x\right) \right| \sum _{k=1}^{n-1}\left| g \left( k+1\right) \right| \left( \frac{h\left( n-1\right) }{\left( 1-\varepsilon \right) \left| x\right| }\right) ^{k} \end{aligned}$$

and we obtain

$$\begin{aligned}&\left| P_{n}^{g,h}\left( x\right) -\frac{x}{h\left( n\right) }P_{n-1}^{g,h}\left( x\right) \right| \\&\quad < \frac{\left| x\right| }{h\left( n\right) }\left| P_{n-1}^{g,h}\left( x\right) \right| \sum _{k=1}^{n-1}\left| g\left( k+1\right) \right| \left( \frac{h\left( n-1\right) }{\left( 1-\varepsilon \right) \left| x\right| }\right) ^{k}. \end{aligned}$$

Estimating the sum using the assumption from the theorem we obtain

$$\begin{aligned} \sum _{k=1}^{n-1} \left| g(k+1) \right| \left( \frac{h(n-1)}{\left( 1-\varepsilon \right) \left| x \right| } \right) ^{k} \le G_{1} \left( \frac{h\left( n-1\right) }{\left( 1-\varepsilon \right) \left| x\right| }\right) \le G_{1}\left( T_{\varepsilon }\right) \le \varepsilon , \end{aligned}$$

since \(\left| x\right| >\kappa \,\, h(n-1) =\frac{h\left( n-1\right) }{1-\varepsilon }\frac{1}{T_{\varepsilon }}\) which is equivalent to \(\frac{\left( 1-\varepsilon \right) \left| x\right| }{h\left( n-1\right) }>\frac{1}{T_{\varepsilon }}\) and \(G_{1}\) increases on \(\left[ 0,R\right) \) as \(\left| g\left( k+1\right) \right| \ge 0\) for all \(k\in {\mathbb {N}}\). \(\square \)

Proof of Theorem 2

Consider the following upper and lower bounds:

$$\begin{aligned} \vert P^{g,h}_{n}\left( x\right) \vert\le & {} \left| \frac{x}{h\left( n\right) }P^{g,h}_{n-1}\left( x\right) \right| + \left| P^{g,h}_{n}\left( x\right) -\frac{x}{h\left( n \right) }P^{g,h}_{n-1}\left( x\right) \right| , \\ \vert P^{g,h}_{n}\left( x\right) \vert\ge & {} \left| \frac{x}{h\left( n\right) }P^{g,h}_{n-1}\left( x\right) \right| - \left| P^{g,h}_{n}\left( x\right) -\frac{x}{h\left( n \right) }P^{g,h}_{n-1}\left( x\right) \right| . \end{aligned}$$

Applying (7) leads to the desired result. \(\square \)

5 Proof of Theorems 3 and 4

Proof of Theorem 3

The proof will be by induction on n. The case \(n=1\) is obvious:

$$\begin{aligned} \left| P_{1}^{g,h}\left( x\right) - \frac{x+g\left( 2\right) h\left( 0\right) }{h\left( 1\right) }P_{0}^{g,h}\left( x\right) \right| =0<\varepsilon \frac{\left| x\right| }{h\left( 1\right) }\left| P_{0}^{g,h}\left( x\right) \right| \end{aligned}$$

for \(\left| x\right| >\kappa \,\, h(0) \). Let now \(n\ge 2\). Then

$$\begin{aligned}&P_{n}^{g,h}\left( x\right) -g\left( 2\right) \frac{h\left( n-1\right) }{h\left( n\right) }P_{n-1}^{g,h}\left( x\right) \\&\quad =\frac{x}{h\left( n\right) }\left( P_{n-1}^{g,h}\left( x\right) +\sum _{k=2}^{n-1}\left( g\left( k+1\right) -g\left( 2\right) g\left( k\right) \right) P_{n-1-k}^{g,h}\left( x\right) \right) . \end{aligned}$$

The basic idea for the induction step is to use the inequality

$$\begin{aligned}&\left| P_{n}^{g,h}\left( x\right) -\frac{x+g\left( 2\right) h\left( n-1\right) }{h\left( n\right) }P_{n-1}^{g,h}\left( x\right) \right| \\&\quad \le \frac{\left| x\right| }{h\left( n\right) }\sum _{k=2}^{n-1}\left| g \left( k+1\right) -g\left( 2\right) g\left( k\right) \right| \left| P_{n-1-k}^{g,h}\left( x\right) \right| . \end{aligned}$$

The sum can be estimated using the induction hypothesis: for \(1\le j\le n-1\) we have

$$\begin{aligned}&\left| P_{j}^{g,h}\left( x\right) \right| \\&\quad \ge \left| \frac{x+g\left( 2\right) h\left( j-1\right) }{h\left( j\right) }\right| \left| P_{j-1}^{g,h}\left( x\right) \right| -\left| P_{j}^{g,h}(x)-\frac{x+g\left( 2\right) h\left( j-1\right) }{h\left( j\right) }P_{j-1}^{g,h}\left( x\right) \right| \\&\quad >\left( \frac{\left| x\right| }{h\left( j\right) }-\frac{\left| g\left( 2\right) \right| h\left( j-1\right) }{h\left( j\right) }-\varepsilon \frac{\left| x\right| }{h\left( j\right) }\right) \left| P_{j-1}^{g,h}\left( x\right) \right| \\&\quad =\frac{\left( 1-\varepsilon \right) \left| x\right| -\left| g\left( 2\right) \right| h\left( j-1\right) }{h\left( j\right) }\left| P_{j-1}^{g,h}\left( x\right) \right| \\&\quad \ge \frac{\left( 1-\varepsilon \right) \left| x\right| -\left| g\left( 2\right) \right| h\left( j\right) }{h\left( j\right) }\left| P_{j-1}^{g,h}\left( x\right) \right| \end{aligned}$$

for \(\left| x\right| > \kappa \,\, h(n-1) \). Note that for \(\left| x\right| >\kappa \,h(n-1) \) we have

$$\begin{aligned} \left( 1-\varepsilon \right) \left| x \right| -\left| g\left( 2\right) \right| h\left( j\right)>\left( \frac{1}{T_\varepsilon }+\left| g\left( 2\right) \right| \right) h\left( n-1\right) -\left| g\left( 2\right) \right| h\left( j\right) >0. \end{aligned}$$

Thus,

$$\begin{aligned} \left| P_{j-1}^{g,h}\left( x\right) \right| <\frac{h\left( j\right) }{\left( 1-\varepsilon \right) \left| x\right| -\left| g\left( 2\right) \right| h\left( j\right) }\left| P_{j}^{g,h}\left( x\right) \right| . \end{aligned}$$

We use this inequality and obtain

$$\begin{aligned} \left| P_{n-k}^{g,h}\left( x\right) \right|< & {} \left| P_{n-k+1}^{g,h}\left( x\right) \right| \frac{h\left( n-k+1\right) }{\left( 1-\varepsilon \right) \left| x\right| -\left| g\left( 2\right) \right| h\left( n-k+1\right) }<\ldots \\< & {} \left| P_{n-1}^{g,h}\left( x\right) \right| \prod _{j=1}^{k-1}\frac{h\left( n-j\right) }{\left( 1-\varepsilon \right) \left| x\right| -\left| g\left( 2\right) \right| h\left( n-j\right) } \\\le & {} \left| P_{n-1}^{g,h}\left( x\right) \right| \left( \frac{h\left( n-1\right) }{\left( 1-\varepsilon \right) \left| x \right| -\left| g\left( 2\right) \right| h\left( n-1\right) }\right) ^{k-1} \end{aligned}$$

for \(\left| x\right| >\kappa \,\, h( n-1) \ge \kappa \,\, h(n-k) \) for all \(2 \le k \le n\) by assumption. Using this, we can now estimate the sum by

$$\begin{aligned}&\sum _{k=2}^{n-1}\left| g \left( k+1\right) -g\left( 2\right) g\left( k\right) \right| \left| P_{n-1-k}^{g,h}\left( x\right) \right| \\&\quad < \left| P_{n-1}^{g,h}\left( x\right) \right| \sum _{k=2}^{n-1}\left| g \left( k+1\right) -g\left( 2\right) g\left( k\right) \right| \left( \frac{h\left( n-1\right) }{\left( 1-\varepsilon \right) \left| x\right| -\left| g\left( 2\right) \right| h\left( n-1\right) }\right) ^{k} \end{aligned}$$

and we obtain

$$\begin{aligned}&\left| P_{n}^{g,h}\left( x\right) -\frac{x+g\left( 2\right) h\left( n-1\right) }{h\left( n\right) }P_{n-1}^{g,h}\left( x\right) \right| \\&\quad < \frac{\left| x\right| }{h\left( n\right) }\left| P_{n-1}^{g,h}\left( x\right) \right| \sum _{k=2}^{n-1}\left| g\left( k+1\right) -g\left( 2\right) g \left( k\right) \right| \left( \frac{h\left( n-1\right) }{\left( 1-\varepsilon \right) \left| x\right| -\left| g\left( 2\right) \right| h\left( n-1\right) }\right) ^{k}. \end{aligned}$$

Estimating the sum using the assumption from the theorem we obtain

$$\begin{aligned}&\sum _{k=2}^{n-1}\left| g\left( k+1\right) -g\left( 2\right) g \left( k\right) \right| \left( \frac{h\left( n-1\right) }{\left( 1-\varepsilon \right) \left| x\right| -\left| g\left( 2\right) \right| h\left( n-1\right) }\right) ^{k} \\&\quad \le G_2 \left( \frac{h\left( n-1\right) }{\left( 1-\varepsilon \right) \left| x\right| -\left| g\left( 2\right) \right| h\left( n-1\right) }\right) \le G_2\left( T_{\varepsilon }\right) \le \varepsilon \end{aligned}$$

since \(\left| x\right| >\kappa \,\, h( n-1) =\frac{ h\left( n-1\right) }{1-\varepsilon }\left( \frac{1}{T_{\varepsilon }}+\left| g\left( 2\right) \right| \right) \) which is equivalent to \(\frac{\left( 1-\varepsilon \right) \left| x\right| }{h\left( n-1\right) }-\left| g\left( 2\right) \right| >\frac{1}{T_{\varepsilon }}\) and \(G_2\) is increasing on \(\left[ 0,R\right) \) as \(\left| g\left( k+1\right) -g\left( 2\right) g\left( k\right) \right| \ge 0\) for all \(k\in {\mathbb {N}}\). \(\square \)

Proof of Theorem 4

This basically follows from Theorem 3 (see also the proof of Theorem 2). \(\square \)

Table 5 Minimal zeros of \(P_{n}^{\sigma ,\mathop {\mathrm{id}}}\left( x\right) \)