1 Introduction

1.1 Motivation

In the theory of orthogonal polynomials and special functions, it is of special interest to know under which conditions (general, or specific to a particular class of polynomials) a suitably normalized orthogonal polynomial sequence \((P_n(x))_{n\in {\mathbb {N}}_0}\subseteq {\mathbb {R}}[x]\) satisfies the “nonnegative linearization of products” property, i.e., the product of any two polynomials \(P_m(x),P_n(x)\) is contained in the conical hull of \(\{P_k(x):k\in {\mathbb {N}}_0\}\). In other words, nonnegative linearization of products means that the linearization coefficients appearing in the (Fourier) expansions of \(P_m(x)P_n(x)\) w.r.t. the basis \(\{P_k(x):k\in {\mathbb {N}}_0\}\) are always nonnegative. One reason for the intense study of this property, and for the extensive corresponding literature, is a fruitful relation to harmonic analysis, which will be briefly recalled below.

Given a specific sequence \((P_n(x))_{n\in {\mathbb {N}}_0}\), deciding whether nonnegative linearization of products is satisfied or not may be quite difficult, however: in many cases, the aforementioned linearization coefficients are not explicitly known, or the available explicit representations have an involved, cumbersome or otherwise unsuitable structure. In a series of papers starting with [31] and extending earlier work of Askey [2], Szwarc et al. have provided some general criteria that can be helpful. However, to our knowledge, there is no general criterion which is strong enough to cover the full parameter range for which the Jacobi polynomials

$$\begin{aligned} R_n^{(\alpha ,\beta )}(x)= & {} {}_2F_1\left( \left. \begin{matrix}-n,n+\alpha +\beta +1 \\ \alpha +1\end{matrix}\right| \frac{1-x}{2}\right) \\= & {} \sum _{k=0}^n\frac{(-n)_k(n+\alpha +\beta +1)_k}{(\alpha +1)_k}\frac{(1-x)^k}{2^k k!} \end{aligned}$$

[21, (9.8.1)] satisfy nonnegative linearization of products. Moreover, we are not aware of an explicit representation of the corresponding linearization coefficients which allows one to easily identify all pairs \((\alpha ,\beta )\in (-1,\infty )^2\) such that nonnegative linearization of products is fulfilled.
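For later numerical checks, the terminating \({}_2F_1\) sum above can be evaluated directly. The following Python sketch is our own illustration (not taken from the cited sources); it implements \(R_n^{(\alpha ,\beta )}(x)\) and can be tested against the normalization \(R_n^{(\alpha ,\beta )}(1)\equiv 1\) and the Chebyshev case \(\alpha =\beta =-1/2\), where \(R_n^{(\alpha ,\beta )}(\cos \theta )=\cos (n\theta )\).

```python
from math import factorial, cos

def poch(x, k):
    """Pochhammer symbol (x)_k = x * (x + 1) * ... * (x + k - 1)."""
    r = 1.0
    for i in range(k):
        r *= x + i
    return r

def R(n, alpha, beta, x):
    """Jacobi polynomial R_n^{(alpha,beta)}(x), normalized by R_n(1) = 1,
    evaluated via the terminating 2F1 sum above."""
    return sum(poch(-n, k) * poch(n + alpha + beta + 1, k) / poch(alpha + 1, k)
               * (1 - x) ** k / (2 ** k * factorial(k))
               for k in range(n + 1))
```

At \(x=1\) only the \(k=0\) term of the sum survives, so the normalization \(R_n^{(\alpha ,\beta )}(1)=1\) is immediate.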

Note that since \((R_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) is normalized such that \(R_n^{(\alpha ,\beta )}(1)\equiv 1\), one has \(R_n^{(\alpha ,\beta )}(x)=n!P_n^{(\alpha ,\beta )}(x)/(\alpha +1)_n\) if \((P_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) denotes the standard normalization of the Jacobi polynomials. \((R_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) is equivalently given by the orthogonalization measure

$$\begin{aligned} (1-x)^{\alpha }(1+x)^{\beta }\chi _{(-1,1)}(x)\,\mathrm {d}x \end{aligned}$$

(and the normalization \(R_n^{(\alpha ,\beta )}(1)\equiv 1\)) [6, Chapter V 2 (B)] [17, (4.0.2)].

In some more detail, the situation concerning Jacobi polynomials is as follows: starting with the full (positive-definite case) parameter range \((\alpha ,\beta )\in (-1,\infty )^2\) and defining

$$\begin{aligned} a:=\alpha +\beta +1>-1,\;b:=\alpha -\beta \in (-1-a,1+a) \end{aligned}$$
(1.1)

and a proper subset V of \([-1/2,\infty )\times (-1,\infty )\) via

$$\begin{aligned} V:=\left\{ (\alpha ,\beta )\in (-1,\infty )^2:a(a+5)(a+3)^2\ge (a^2-7a-24)b^2,b\ge 0\right\} \qquad \end{aligned}$$
(1.2)

(see Fig. 1), Gasper showed the following [10, Theorem 1] (or [13, Theorem 3]):

Theorem 1.1

Let \(\alpha ,\beta >-1\). The following are equivalent:

  1. (i)

    \((R_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) satisfies nonnegative linearization of products, i.e., all \(g_R(m,n;k)\) given by the expansions

    $$\begin{aligned} R_m^{(\alpha ,\beta )}(x)R_n^{(\alpha ,\beta )}(x)=\sum _{k=|m-n|}^{m+n}g_R(m,n;k)R_k^{(\alpha ,\beta )}(x) \end{aligned}$$

    are nonnegative.

  2. (ii)

    \((\alpha ,\beta )\in V\).

Although Theorem 1.1 can be regarded as a “classical result” nowadays, it is still of considerable interest and was used in the recent publications [4] (dealing with certain strictly positive definite functions) and [15] (dealing with semigroups defined by Fourier-Jacobi series), for instance. Also [5], dealing with spherical codes, uses [10].

In this paper, we find an analogue to Gasper’s result Theorem 1.1 for the class of generalized Chebyshev polynomials (Theorem 3.2), which are the quadratic transformations of the Jacobi polynomials; for all \(\alpha ,\beta >-1\), the sequence of generalized Chebyshev polynomials \((T_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) is given by

$$\begin{aligned} T_{2n}^{(\alpha ,\beta )}(x)&:=R_n^{(\alpha ,\beta )}(2x^2-1), \end{aligned}$$
(1.3)
$$\begin{aligned} T_{2n+1}^{(\alpha ,\beta )}(x)&:=x R_n^{(\alpha ,\beta +1)}(2x^2-1) \end{aligned}$$
(1.4)

[6, Chapter V 2 (G)] or, equivalently, by the orthogonalization measure

$$\begin{aligned} (1-x^2)^{\alpha }|x|^{2\beta +1}\chi _{(-1,1)}(x)\,\mathrm {d}x \end{aligned}$$

and the normalization \(T_n^{(\alpha ,\beta )}(1)\equiv 1\) [6, Chapter V 2 (G)] [17, (4.0.2)]. This solves a problem posed in [32, Section 5] by Szwarc, who asked to determine the parameter range for which these polynomials satisfy nonnegative linearization of products. Since our result will immediately imply the nontrivial direction of Gasper’s result Theorem 1.1, it can also be regarded as a sharpening of Theorem 1.1.
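For illustration, the quadratic transformations (1.3) and (1.4) translate directly into code. The following Python sketch is our own (it re-implements \(R_n^{(\alpha ,\beta )}\) via its terminating \({}_2F_1\) sum); it can be checked against \(T_n^{(\alpha ,\beta )}(1)\equiv 1\), the parity of the \(T_n^{(\alpha ,\beta )}(x)\), and the ultraspherical coincidence \(T_n^{(\alpha ,-1/2)}(x)=R_n^{(\alpha ,\alpha )}(x)\) recalled in Sect. 1.3.

```python
from math import factorial

def poch(x, k):
    """Pochhammer symbol (x)_k."""
    r = 1.0
    for i in range(k):
        r *= x + i
    return r

def R(n, alpha, beta, x):
    """Jacobi polynomial R_n^{(alpha,beta)}(x) with R_n(1) = 1 (2F1 sum)."""
    return sum(poch(-n, k) * poch(n + alpha + beta + 1, k) / poch(alpha + 1, k)
               * (1 - x) ** k / (2 ** k * factorial(k))
               for k in range(n + 1))

def T(n, alpha, beta, x):
    """Generalized Chebyshev polynomial T_n^{(alpha,beta)}(x) via the
    quadratic transformations (1.3) (n even) and (1.4) (n odd)."""
    if n % 2 == 0:
        return R(n // 2, alpha, beta, 2 * x * x - 1)
    return x * R(n // 2, alpha, beta + 1, 2 * x * x - 1)
```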

Moreover, we shall characterize strict positivity of the linearization coefficients \(g_R(m,n;k)\) (Theorem 2.1); an analogous result for the generalized Chebyshev polynomials cannot exist due to symmetry. On the one hand, this characterization will immediately imply the nontrivial direction of Gasper’s result Theorem 1.1, too. On the other hand, our proof of positive linearization is based on Gasper’s approach [10] but is shorter and more elementary, so it can be regarded both as another sharpening and as a simplification. In Gasper’s original proof, the most computational part was establishing the nonnegativity of the coefficients \(g_R(m,n;|m-n|+2)\) and \(g_R(m,n;m+n-2)\) (provided \((\alpha ,\beta )\in V\)). We will get rid of these long computations and provide a more explicit approach. Furthermore, we give characterizations concerning a certain oscillatory behavior of the \(g_R(m,n;k)\).

1.2 Underlying Setting

Let us briefly describe the basic underlying setting: in this paper, we consider sequences \((P_n(x))_{n\in {\mathbb {N}}_0}\subseteq {\mathbb {R}}[x]\) with \(\mathrm {deg}\;P_n(x)=n\) which are orthogonal w.r.t. a probability (Borel) measure \(\mu \) on the real line with \(|\mathrm {supp}\;\mu |=\infty \) and \(\mathrm {supp}\;\mu \subseteq [-1,1]\). Under these conditions, it is well known from the theory of orthogonal polynomials that \((P_n(x))_{n\in \mathbb {N}_0}\) determines \(\mu \) uniquely [6]. Moreover, we assume \((P_n(x))_{n\in {\mathbb {N}}_0}\) to be normalized by \(P_n(1)\equiv 1\). This normalization is always possible because the assumptions on \(\mathrm {supp}\;\mu \) yield that all zeros are (real and) located in \((-1,1)\) (see [6, 17] for standard results on orthogonal polynomials and on corresponding expansions). Orthogonality is then given by

$$\begin{aligned} \int _{{\mathbb {R}}}\!P_m(x)P_n(x)\,\mathrm {d}\mu (x)=\frac{\delta _{m,n}}{h(n)} \end{aligned}$$
(1.5)

with some function \(h:{\mathbb {N}}_0\rightarrow (0,\infty )\) satisfying \(h(0)=1\).

Under these conditions, nonnegative linearization of products corresponds to the property that the product of any two polynomials \(P_m(x),P_n(x)\) is a convex combination of \(P_{|m-n|}(x),\ldots ,P_{m+n}(x)\), or to the nonnegativity of all linearization coefficients \(g(m,n;k)\) defined by the expansions

$$\begin{aligned} P_m(x)P_n(x)=\sum _{k=|m-n|}^{m+n}g(m,n;k)P_k(x), \end{aligned}$$
(1.6)

where \(\sum _{k=|m-n|}^{m+n}g(m,n;k)=1\).

Using (1.5) and (1.6), one clearly has

$$\begin{aligned} g(m,n;k)=h(k)\int _{{\mathbb {R}}}\!P_m(x)P_n(x)P_k(x)\,\mathrm {d}\mu (x) \end{aligned}$$
(1.7)

and \(h(n)=1/g(n,n;0)\). As the assumptions on the support and the normalization yield that the polynomials \(P_n(x)\) have positive leading coefficients, it is also clear that \(g(m,n;m+n)>0\) and \(g(m,n;|m-n|)>0\). In particular, it is well known that \((P_n(x))_{n\in {\mathbb {N}}_0}\) satisfies the three-term recurrence relation \(P_0(x)=1\), \(P_1(x)=(x-b_0)/a_0\),

$$\begin{aligned} P_1(x)P_n(x)=a_n P_{n+1}(x)+b_n P_n(x)+c_n P_{n-1}(x)\;(n\in {\mathbb {N}}), \end{aligned}$$

where \(a_0>0\), \(b_0=1-a_0\) and the sequences \((a_n)_{n\in {\mathbb {N}}},(c_n)_{n\in {\mathbb {N}}}\subseteq (0,\infty )\), \((b_n)_{n\in {\mathbb {N}}}\subseteq {\mathbb {R}}\) satisfy \(a_n+b_n+c_n=1\;(n\in {\mathbb {N}})\).
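Since \(\{P_0(x),\ldots ,P_{m+n}(x)\}\) is a basis of the space of polynomials of degree at most \(m+n\), the coefficients in (1.6) can also be computed numerically by matching values at \(m+n+1\) distinct nodes (a small linear solve, instead of the integral (1.7)). The following Python sketch is our own illustration; applied to the Jacobi case, it reproduces, e.g., the sign \(g_R(1,1;1)<0\) for \(b<0\) mentioned in Sect. 1.3.

```python
import numpy as np
from math import factorial

def poch(x, k):
    """Pochhammer symbol (x)_k."""
    r = 1.0
    for i in range(k):
        r *= x + i
    return r

def R(n, alpha, beta, x):
    """Jacobi polynomial R_n^{(alpha,beta)}(x) with R_n(1) = 1 (2F1 sum)."""
    return sum(poch(-n, k) * poch(n + alpha + beta + 1, k) / poch(alpha + 1, k)
               * (1 - x) ** k / (2 ** k * factorial(k))
               for k in range(n + 1))

def lin_coeffs(P, m, n):
    """g(m,n;k), k = 0,...,m+n, in the expansion (1.6): match P_m * P_n
    against the basis P_0,...,P_{m+n} at m+n+1 distinct nodes."""
    N = m + n
    xs = np.linspace(-0.9, 0.9, N + 1)
    A = np.array([[P(k, x) for k in range(N + 1)] for x in xs])
    rhs = np.array([P(m, x) * P(n, x) for x in xs])
    return np.linalg.solve(A, rhs)
```

For \((\alpha ,\beta )=(1,0)\in V\) all coefficients come out nonnegative and sum to 1, while for \((\alpha ,\beta )=(0,1)\) (so \(b=-1<0\)) one finds \(g_R(1,1;1)=-2/5\).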

Throughout the paper, as in Theorem 1.1 we use additional appropriate subscripts or superscripts when referring to the Jacobi polynomials \((R_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) or generalized Chebyshev polynomials \((T_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\). Moreover, we use an additional superscript “\(+\)” when referring to the sequence \((R_n^{(\alpha ,\beta +1)}(x))_{n\in {\mathbb {N}}_0}\). For instance, there will occur linearization coefficients \(g_R(m,n;k)\), \(g_R^+(m,n;k)\) and \(g_T(m,n;k)\). Observe that a transition from \(\beta \) to \(\beta +1\), which will play a crucial role in this paper, corresponds to a transition from \((a,b)\) to \((a+1,b-1)\) in the notation of (1.1).

In the literature, the nonnegativity of all linearization coefficients \(g(m,n;k)\) is sometimes called “property (P)”. For the sake of clarity, we shall say “nonnegative linearization of products” throughout the paper. This property implies that \((a_n)_{n\in {\mathbb {N}}},(c_n)_{n\in {\mathbb {N}}}\subseteq (0,1)\) and \((b_n)_{n\in {\mathbb {N}}}\subseteq [0,1)\). Furthermore, nonnegative linearization of products gives rise to a certain polynomial hypergroup structure, including associated Banach (\(L^1\)-) algebras and the fruitful possibility of applying Gelfand’s theory, which yields a deep and rich harmonic analysis [3, 24]. Hence, nonnegative linearization of products is not only of interest for a better understanding of general or specific orthogonal polynomials, but also has high relevance for functional and abstract harmonic analysis and, in particular, for the theory of Banach algebras. Within such polynomial hypergroups, the classes of Jacobi polynomials and generalized Chebyshev polynomials play a special role concerning product formulas and duality structures [7, 8, 11, 12, 23, 24, 28].

We mention that in this paper the hypergroup context appears only as a kind of additional motivation to study nonnegative linearization of products. In particular, it shows the high relevance for harmonic analysis. The paper can be read without knowledge about hypergroups, however. Roughly speaking, a hypergroup is a generalization of a (locally compact) group which allows the convolution of two Dirac measures to be a probability measure which satisfies certain compatibility and non-degeneracy properties but does not have to be a Dirac measure again (see [3, 18, 25] for precise axioms).

Fig. 1 The set V. The dashed line corresponds to the boundary of \(\Delta \) (see Sect. 1.3)

1.3 Previous Results and Outline of the Paper

Let us come back to the Jacobi and generalized Chebyshev polynomials. Concerning Theorem 1.1, it is not difficult to see that (ii) is necessary for (i). In fact, Gasper has shown that if \(b<0\), then \(g_R(1,1;1)<0\), whereas if \(b\ge 0\) and \((\alpha ,\beta )\notin V\), then \(g_R(2,2;2)<0\) [10]. The implication “(ii) \(\Rightarrow \) (i)” is highly nontrivial, however. The subcase \((\alpha ,\beta )\in \Delta \), where \(\Delta \subsetneq V\) is given by

$$\begin{aligned} \Delta :=\{(\alpha ,\beta )\in (-1,\infty )^2:a,b\ge 0\}=\left\{ (\alpha ,\beta )\in (-1,\infty )^2:\alpha \ge \left| \beta +\frac{1}{2}\right| -\frac{1}{2}\right\} \end{aligned}$$

(see Fig. 1), is easier and was already solved in [9], and concerning the special case \(\alpha \ge \beta \ge -1/2\), Koornwinder gave a less computational proof via addition formulas [22]. Moreover, if \((\alpha ,\beta )\in \Delta \), then the nonnegativity of the \(g_R(m,n;k)\) can be seen via explicit representations in terms of \({}_9F_8\) hypergeometric series given by Rahman [30, (1.7) to (1.9)]. Alternatively, the case \((\alpha ,\beta )\in \Delta \) can also be obtained from one of the aforementioned general criteria of Szwarc [31]. The simplest subcase is given by \(\alpha =\beta \ge -1/2\) (these are the ultraspherical polynomials), for which the nonnegativity of the \(g_R(m,n;k)\) follows from Dougall’s formula (see [1, Theorem 6.8.2])

$$\begin{aligned}&R_m^{(\alpha ,\alpha )}(x)R_n^{(\alpha ,\alpha )}(x)\\&\quad =\sum _{j=0}^{\min \{m,n\}}j!\left( \alpha +\frac{1}{2}\right) _j\left( {\begin{array}{c}m\\ j\end{array}}\right) \left( {\begin{array}{c}n\\ j\end{array}}\right) \\&\qquad \times \frac{\left( m+n+\alpha +\frac{1}{2}-2j\right) \left( \alpha +\frac{1}{2}\right) _{m-j}\left( \alpha +\frac{1}{2}\right) _{n-j}(2\alpha +1)_{m+n-j}}{\left( m+n+\alpha +\frac{1}{2}-j\right) \left( \alpha +\frac{1}{2}\right) _{m+n-j}(2\alpha +1)_m(2\alpha +1)_n}R_{m+n-2j}^{(\alpha ,\alpha )}(x), \end{aligned}$$

where \(\alpha >-1/2\); for \(\alpha =\beta =-1/2\), the linearization coefficients reduce to 1/2.
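Dougall’s formula is easy to test numerically. The following Python sketch (our own illustration) implements the coefficient of \(R_{m+n-2j}^{(\alpha ,\alpha )}(x)\) exactly as displayed above; since \(R_k^{(\alpha ,\alpha )}(1)\equiv 1\), the coefficients must sum to 1, and for \(\alpha >-1/2\) they are positive.

```python
from math import comb, factorial

def poch(x, k):
    """Pochhammer symbol (x)_k."""
    r = 1.0
    for i in range(k):
        r *= x + i
    return r

def R(n, alpha, beta, x):
    """Jacobi polynomial R_n^{(alpha,beta)}(x) with R_n(1) = 1 (2F1 sum)."""
    return sum(poch(-n, k) * poch(n + alpha + beta + 1, k) / poch(alpha + 1, k)
               * (1 - x) ** k / (2 ** k * factorial(k))
               for k in range(n + 1))

def dougall_coeff(m, n, j, alpha):
    """Coefficient of R_{m+n-2j}^{(alpha,alpha)} in Dougall's linearization
    formula (alpha > -1/2), transcribed term by term from the display."""
    g = alpha + 0.5
    c = factorial(j) * poch(g, j) * comb(m, j) * comb(n, j)
    c *= ((m + n + g - 2 * j) * poch(g, m - j) * poch(g, n - j)
          * poch(2 * alpha + 1, m + n - j))
    c /= ((m + n + g - j) * poch(g, m + n - j)
          * poch(2 * alpha + 1, m) * poch(2 * alpha + 1, n))
    return c
```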

Besides the original proof given in [10], Gasper found a very different one in [14]. The second proof is based on the continuous q-Jacobi polynomials and explicit corresponding linearization formulas in terms of \({}_{10}\phi _9\) basic hypergeometric series due to Rahman [29]. In the following, we will always refer to the first proof [10].

For our purposes, it will be more convenient to rewrite V (1.2) as

$$\begin{aligned} V=\left\{ (\alpha ,\beta )\in (-1,\infty )^2:a^2+2b^2+3a\ge 3\frac{(a+1)(a+2)}{(a+3)(a+5)}b^2,b\ge 0\right\} .\quad \end{aligned}$$
(1.8)

The small region \(V\backslash \Delta \) is bounded on the left by a curve c in the \((\alpha ,\beta )\)-plane which has the following properties (cf. Fig. 1): c starts at the point \((\alpha ,\beta )=(-11/8+\sqrt{73}/8,-1)\approx (-0.307,-1)\), approaches the line \(\alpha +\beta +1=0\) tangentially and meets this line at the point \((\alpha ,\beta )=(-1/2,-1/2)\) (which corresponds to the Chebyshev polynomials of the first kind) [10]. The angle between the line \(\beta =-1\) and c can easily be computed and is given by \(\approx 87.6^{\circ }\) (in particular, c cannot be written as \(\beta =f(\alpha )\) with a single function f).
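Both descriptions of V are easy to evaluate. The following Python sketch (our own illustration) checks that (1.2) and (1.8) define the same set on a sample grid, and that the starting point \((-11/8+\sqrt{73}/8,-1)\) of the curve c indeed lies on the boundary curve \(a(a+5)(a+3)^2=(a^2-7a-24)b^2\).

```python
from math import sqrt

def in_V_12(alpha, beta):
    """Membership in V via the defining inequality (1.2)."""
    a, b = alpha + beta + 1, alpha - beta
    return b >= 0 and a * (a + 5) * (a + 3) ** 2 >= (a * a - 7 * a - 24) * b * b

def in_V_18(alpha, beta):
    """Membership in V via the rewritten form (1.8)."""
    a, b = alpha + beta + 1, alpha - beta
    return b >= 0 and (a * a + 2 * b * b + 3 * a
                       >= 3 * (a + 1) * (a + 2) / ((a + 3) * (a + 5)) * b * b)
```

Multiplying (1.8) by \((a+3)(a+5)>0\) and collecting the \(b^2\)-terms recovers (1.2), so the two tests must agree.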

Despite the more involved arguments which are required to establish nonnegative linearization of products for \(V\backslash \Delta \), from a harmonic analysis point of view there is no reason for restricting to \(\Delta \) when studying the associated hypergroups or \(L^1\)-algebras; we are not aware of any general advantage or benefit that a restriction to \(\Delta \) would bring. For instance, in [19, Theorem 3.1] (or [20, Theorem 3.1]) we have shown that the \(L^1\)-algebra associated with \((R_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\), \((\alpha ,\beta )\in V\), is weakly amenable (i.e., there exist no nonzero bounded derivations into the dual module \(\ell ^{\infty }\), which acts on the \(L^1\)-algebra via convolution [26, 27]) if and only if \(\alpha <0\), and the proof for \(\{(\alpha ,\beta )\in V:a=0\}\subseteq \Delta \) does not differ from the proof for \(V\backslash \Delta \): both cases are traced back to the interior of \(\Delta \) via the same argument using inheritance via homomorphisms. This example also shows that important Banach algebraic features like amenability properties may strongly vary even within the same specific class of orthogonal polynomials satisfying nonnegative linearization of products. Hence, also when considering other classes of examples, it is desirable to find various—or even all—sequences \((P_n(x))_{n\in {\mathbb {N}}_0}\) such that nonnegative linearization of products holds.

It is an obvious consequence of Theorem 1.1 and (1.3) that \((T_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) cannot satisfy nonnegative linearization of products if \((\alpha ,\beta )\notin V\); moreover, Szwarc has already shown that nonnegative linearization of products is fulfilled for all \((\alpha ,\beta )\in \Delta \) [31, 32]. The special case \(\alpha \ge \beta +1\) was already shown in [24]. The simplest subcase is given by \(\alpha \ge -1/2\wedge \beta =-1/2\), which can be obtained as above via Dougall’s formula; note that \(T_n^{(\alpha ,-1/2)}(x)=R_n^{(\alpha ,\alpha )}(x)\) for all \(n\in {\mathbb {N}}_0\), or in other words: the ultraspherical polynomials are the common subclass of the Jacobi and the generalized Chebyshev polynomials.

In Theorem 3.2, we will obtain that \((T_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) satisfies nonnegative linearization of products if and only if \((\alpha ,\beta )\in V\). Hence, the generalized Chebyshev polynomials share this property with the Jacobi polynomials. Having in mind the interesting structure of (1.4) (where \(\beta +1\) instead of \(\beta \) appears on the right-hand side), we will also precisely characterize the pairs \((\alpha ,\beta )\in (-1,\infty )^2\) for which all \(g_T(m,n;k)\) with at least one odd entry mn are nonnegative (Theorem 3.1), and we will describe the geometry of the resulting set \(V^{\prime }\supsetneq V\).

The main results and proofs are given in Sect. 2 (Jacobi polynomials) and Sect. 3 (generalized Chebyshev polynomials). At several stages, our arguments are based on appropriate decompositions of multivariate polynomials. To find such decompositions (more precisely, appropriate nested sums of suitable factorizations), we also used computer algebra systems (Maple). However, the final proofs can be understood without any computer usage.

2 Linearization of the Product of Jacobi Polynomials: Sharpening and Simplification of Gasper’s Result

Let \(\alpha ,\beta >-1\), and let ab be defined as in Sect. 1. \((R_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) satisfies the three-term recurrence relation \(R_0^{(\alpha ,\beta )}(x)=1\), \(R_1^{(\alpha ,\beta )}(x)=(x-b_0^R)/a_0^R\),

$$\begin{aligned} R_1^{(\alpha ,\beta )}(x)R_n^{(\alpha ,\beta )}(x)=a_n^R R_{n+1}^{(\alpha ,\beta )}(x)+b_n^R R_n^{(\alpha ,\beta )}(x)+c_n^R R_{n-1}^{(\alpha ,\beta )}(x)\;(n\in {\mathbb {N}})\qquad \end{aligned}$$
(2.1)

with

$$\begin{aligned} a_0^R&=\frac{2\alpha +2}{\alpha +\beta +2}=\frac{a+b+1}{a+1},\nonumber \\ a_n^R&=\frac{(\alpha +\beta +2)(n+\alpha +\beta +1)(n+\alpha +1)}{(\alpha +1)(2n+\alpha +\beta +1)(2n+\alpha +\beta +2)}\nonumber \\&=\frac{(a+1)(n+a)(2n+a+b+1)}{(a+b+1)(2n+a)(2n+a+1)}\;(n\in {\mathbb {N}}),\nonumber \\ b_0^R&=-\frac{\alpha -\beta }{\alpha +\beta +2}=-\frac{b}{a+1},\nonumber \\ b_n^R&=\frac{2(\alpha -\beta )n(n+\alpha +\beta +1)}{(\alpha +1)(2n+\alpha +\beta )(2n+\alpha +\beta +2)}\nonumber \\&=\frac{4b n(n+a)}{(a+b+1)(2n+a-1)(2n+a+1)}\;(n\in {\mathbb {N}}),\nonumber \\ c_n^R&\equiv \frac{(\alpha +\beta +2)n(n+\beta )}{(\alpha +1)(2n+\alpha +\beta )(2n+\alpha +\beta +1)}\nonumber \\&=\frac{(a+1)n(2n+a-b-1)}{(a+b+1)(2n+a-1)(2n+a)} \end{aligned}$$
(2.2)

[9, (4)]. It is well known that

$$\begin{aligned} R_n^{(\beta ,\alpha )}(x)=(-1)^n\frac{(\alpha +1)_n}{(\beta +1)_n}R_n^{(\alpha ,\beta )}(-x) \end{aligned}$$
(2.3)

[17, (4.1.4), (4.1.6)].
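The coefficients (2.2) can be checked programmatically against the recurrence (2.1): besides \(a_n^R+b_n^R+c_n^R=1\) (evaluate (2.1) at \(x=1\)), the three-term identity itself can be tested pointwise. The following Python sketch is our own illustration, with \(R_n^{(\alpha ,\beta )}\) evaluated via its terminating \({}_2F_1\) sum.

```python
from math import factorial

def poch(x, k):
    """Pochhammer symbol (x)_k."""
    r = 1.0
    for i in range(k):
        r *= x + i
    return r

def R(n, alpha, beta, x):
    """Jacobi polynomial R_n^{(alpha,beta)}(x) with R_n(1) = 1 (2F1 sum)."""
    return sum(poch(-n, k) * poch(n + alpha + beta + 1, k) / poch(alpha + 1, k)
               * (1 - x) ** k / (2 ** k * factorial(k))
               for k in range(n + 1))

def rec_R(n, alpha, beta):
    """(a_n^R, b_n^R, c_n^R) from (2.2); for n = 0 we use a_0^R + b_0^R = 1."""
    if n == 0:
        a0 = (2 * alpha + 2) / (alpha + beta + 2)
        return a0, 1.0 - a0, 0.0
    s = alpha + beta
    an = ((s + 2) * (n + s + 1) * (n + alpha + 1)
          / ((alpha + 1) * (2 * n + s + 1) * (2 * n + s + 2)))
    bn = (2 * (alpha - beta) * n * (n + s + 1)
          / ((alpha + 1) * (2 * n + s) * (2 * n + s + 2)))
    cn = ((s + 2) * n * (n + beta)
          / ((alpha + 1) * (2 * n + s) * (2 * n + s + 1)))
    return an, bn, cn
```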

One of our central tools will be a recurrence relation for the \(g_R(m,n;k)\) which is taken from [10] and relies on earlier work of Hylleraas [16]. Let \(n\ge m\ge 1\); following [10], we use a more convenient notation and write

$$\begin{aligned} s&=n-m,\\ j&=k-s \end{aligned}$$

with \(s\in {\mathbb {N}}_0\) and \(j\in \{0,\ldots ,2m\}\). [10, (2.1)] states that the linearization coefficients are linked to each other via the following recursion: for \(1\le j\le 2m-1\), one has

$$\begin{aligned} \begin{aligned}&\theta (m,m+s;j)g_R(m,m+s;s+j+1)\\&\quad =\iota (m,m+s;j)g_R(m,m+s;s+j)\\&\qquad +\kappa (m,m+s;j)g_R(m,m+s;s+j-1), \end{aligned} \end{aligned}$$
(2.4)

where \(\theta (m,m+s;.),\iota (m,m+s;.),\kappa (m,m+s;.):\{1,\ldots ,2m-1\}\rightarrow {\mathbb {R}}\) read

$$\begin{aligned} \theta (m,m+s;j):&=(2m-j+a-1)(2m+2s+j+a+1)\nonumber \\&\quad \times \frac{(2s+j+1)(2s+2j+a-b+1)}{(2s+2j+a+1)(2s+2j+a+2)}(j+1), \end{aligned}$$
(2.5)
$$\begin{aligned} \iota (m,m+s;j):&=b\left[ (2m-j)(2m+2s+j+2a)\frac{2s+j+1}{2s+2j+a+1}(j+1)\right. \nonumber \\&\quad \left. -(2m-j+1)(2m+2s+j+2a-1)\frac{2s+j}{2s+2j+a-1}j\right] , \end{aligned}$$
(2.6)
$$\begin{aligned} \kappa (m,m+s;j):&=(2m-j+1)(2m+2s+j+2a-1)\nonumber \\&\quad \times {\left\{ \begin{array}{ll} 0, &{} j-1=s=a=0, \\ \frac{(2s+j+a-1)(2s+2j+a+b-1)}{(2s+2j+a-2)(2s+2j+a-1)}(j+a-1), &{} \text{ else }. \end{array}\right. } \end{aligned}$$
(2.7)

Moreover, one has

$$\begin{aligned} g_R(m,m+s;s)&=\frac{\left( {\begin{array}{c}m+s\\ m\end{array}}\right) \left( {\begin{array}{c}2m+a-1\\ m\end{array}}\right) \left( {\begin{array}{c}m+s+\frac{a-b-1}{2}\\ m\end{array}}\right) }{\left( {\begin{array}{c}2m\\ m\end{array}}\right) \left( {\begin{array}{c}2m+2s+a\\ 2m\end{array}}\right) \left( {\begin{array}{c}m+\frac{a+b-1}{2}\\ m\end{array}}\right) }, \end{aligned}$$
(2.8)
$$\begin{aligned} g_R(m,m+s;s+2m)&=\frac{\left( {\begin{array}{c}2m+2s+a-1\\ m+s\end{array}}\right) \left( {\begin{array}{c}2m+a-1\\ m\end{array}}\right) \left( {\begin{array}{c}2m+s+\frac{a+b-1}{2}\\ 2m+s\end{array}}\right) }{\left( {\begin{array}{c}4m+2s+a-1\\ 2m+s\end{array}}\right) \left( {\begin{array}{c}m+s+\frac{a+b-1}{2}\\ m+s\end{array}}\right) \left( {\begin{array}{c}m+\frac{a+b-1}{2}\\ m\end{array}}\right) } \end{aligned}$$
(2.9)

and

$$\begin{aligned}&g_R(m,m+s;s+1)\nonumber \\&\quad =\frac{4b m(m+s+a)(2s+a+2)}{(2m+2s+a+1)(2m+a-1)(2s+a-b+1)}g_R(m,m+s;s), \end{aligned}$$
(2.10)
$$\begin{aligned}&g_R(m,m+s;s+2m-1)\nonumber \\&\quad =\frac{4b m(m+s)(4m+2s+a-2)}{(4m+2s+a+b-1)(2m+2s+a-1)(2m+a-1)}g_R(m,m+s;s+2m) \end{aligned}$$
(2.11)

[10, (2.2) to (2.4), (2.9)]. The following auxiliary result deals with the canonical continuation of the coefficient function \(\iota (m,m+s;.)\) to \([1,2m-1]\) and can be seen from [9, Section 2] and [10, Section 2]:

Lemma 2.1

Let \((\alpha ,\beta )\in V\) with \(\alpha \ne \beta \), and let for \(m\in {\mathbb {N}}\) and \(s\in {\mathbb {N}}_0\) the function \(\iota (m,m+s;.):[1,2m-1]\rightarrow {\mathbb {R}}\) be defined by (2.6). Then, \(\iota (m,m+s;.)\) has at most one zero. Moreover, if \(m\ge 2\) or \(a\ge 0\), then \(\iota (m,m+s;1)\ge 0\).

There are several ways in which Lemma 2.1 can be proven. The proofs given in [9, 10] rely on Descartes’ rule of signs. We found elementary variants which completely avoid Descartes’ rule of signs (based on the classical mean value theorem, for instance). We omit the corresponding details, however.

The proof of Theorem 2.1, which is the main result of this section, will essentially rely on Lemma 2.1. Concerning the functions \(\theta (m,m+s;.)\) and \(\kappa (m,m+s;.)\), we will only need that

$$\begin{aligned} \theta (m,m+s;.)|_{\{1,\ldots ,2m-2\}}>0\;(m\ge 2) \end{aligned}$$
(2.12)

and

$$\begin{aligned} \kappa (m,m+s;.)|_{\{2,\ldots ,2m-1\}}>0\;(m\ge 2), \end{aligned}$$
(2.13)

which is an obvious consequence of (2.5) and (2.7) and was also used in [10].
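For experimentation, the boundary values (2.8) and (2.10) together with the forward recursion (2.4) already determine all \(g_R(m,m+s;s+j)\). The following Python sketch is our own illustration; the binomial coefficients with non-integer entries in (2.8) are evaluated as \(\left( {\begin{array}{c}x\\ k\end{array}}\right) =x(x-1)\cdots (x-k+1)/k!\).

```python
from math import factorial

def gbinom(x, k):
    """Generalized binomial coefficient C(x, k) = x(x-1)...(x-k+1)/k!."""
    num = 1.0
    for i in range(k):
        num *= x - i
    return num / factorial(k)

def jacobi_lin(m, s, a, b):
    """g_R(m, m+s; s+j) for j = 0, ..., 2m, computed from the boundary
    values (2.8), (2.10) and the forward recursion (2.4)-(2.7);
    (a, b) as in (1.1)."""
    def theta(j):
        return ((2*m - j + a - 1) * (2*m + 2*s + j + a + 1)
                * (2*s + j + 1) * (2*s + 2*j + a - b + 1)
                / ((2*s + 2*j + a + 1) * (2*s + 2*j + a + 2)) * (j + 1))

    def iota(j):
        return b * ((2*m - j) * (2*m + 2*s + j + 2*a)
                    * (2*s + j + 1) / (2*s + 2*j + a + 1) * (j + 1)
                    - (2*m - j + 1) * (2*m + 2*s + j + 2*a - 1)
                    * (2*s + j) / (2*s + 2*j + a - 1) * j)

    def kappa(j):
        if j == 1 and s == 0 and a == 0:      # degenerate case in (2.7)
            return 0.0
        return ((2*m - j + 1) * (2*m + 2*s + j + 2*a - 1)
                * (2*s + j + a - 1) * (2*s + 2*j + a + b - 1)
                / ((2*s + 2*j + a - 2) * (2*s + 2*j + a - 1)) * (j + a - 1))

    g = [0.0] * (2*m + 1)
    g[0] = (gbinom(m + s, m) * gbinom(2*m + a - 1, m)          # (2.8)
            * gbinom(m + s + (a - b - 1) / 2, m)
            / (gbinom(2*m, m) * gbinom(2*m + 2*s + a, 2*m)
               * gbinom(m + (a + b - 1) / 2, m)))
    g[1] = (4 * b * m * (m + s + a) * (2*s + a + 2)            # (2.10)
            / ((2*m + 2*s + a + 1) * (2*m + a - 1) * (2*s + a - b + 1))
            * g[0])
    for j in range(1, 2*m):
        g[j + 1] = (iota(j) * g[j] + kappa(j) * g[j - 1]) / theta(j)   # (2.4)
    return g
```

For \((\alpha ,\beta )=(1,0)\), i.e., \((a,b)=(2,1)\in V^{\circ }\) in the notation of (1.1), all resulting coefficients are positive and sum to 1, in accordance with Theorem 2.1.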

We now give two characterizations. The first deals with positivity of the linearization coefficients \(g_R(m,n;k)\) and can be regarded as a sharpening of Theorem 1.1 because the nontrivial direction “(ii) \(\Rightarrow \) (i) (Theorem 1.1)” is implied by “(ii) \(\Rightarrow \) (i) (Theorem 2.1)” via continuity. Our proof is a—considerably less computational—modification of Gasper’s approach [10]. The second characterization follows from the first and deals with a certain oscillatory behavior of the \(g_R(m,n;k)\). It will play an important role for the proof of Theorem 3.1 on generalized Chebyshev polynomials below.

Theorem 2.1

Let \(\alpha ,\beta >-1\). The following are equivalent:

  1. (i)

    All \(g_R(m,n;k)\) are positive.

  2. (ii)

    \((\alpha ,\beta )\) is located in the interior of V (denoted by \(V^{\circ }\) in the following), i.e.,

    $$\begin{aligned} a^2+2b^2+3a>3\frac{(a+1)(a+2)}{(a+3)(a+5)}b^2 \end{aligned}$$
    (2.14)

    and

    $$\begin{aligned} b>0. \end{aligned}$$
    (2.15)

Corollary 2.1

Let \(\alpha ,\beta >-1\), and let \(\widetilde{g_R}(m,n;k)\) denote the linearization coefficients belonging to the sequence \((R_n^{(\beta ,\alpha )}(x))_{n\in {\mathbb {N}}_0}\). Then, the following hold:

  1. (i)

    All numbers \((-1)^{m+n+k}\widetilde{g_R}(m,n;k)\) are nonnegative if and only if \((\alpha ,\beta )\in V\).

  2. (ii)

    All numbers \((-1)^{m+n+k}\widetilde{g_R}(m,n;k)\) are positive if and only if \((\alpha ,\beta )\in V^{\circ }\).

Proof (Theorem 2.1)

We preliminarily note that \(V^{\circ }\) can indeed be characterized as the set of all \((\alpha ,\beta )\in (-1,\infty )^2\) satisfying the strict inequalities (2.14) and (2.15), which can be seen as follows: let \(\varphi ,\psi :(-1,\infty )^2\rightarrow {\mathbb {R}}\) be defined by

$$\begin{aligned} \varphi (\alpha ,\beta ):&=(a^2+2b^2+3a)(a+3)(a+5)-3(a+1)(a+2)b^2\\&=(4\beta +20)\alpha ^3+(8\beta ^2+40\beta +108)\alpha ^2+(4\beta ^3+40\beta ^2+96\beta +160)\alpha \\&\quad +20\beta ^3+108\beta ^2+160\beta +96 \end{aligned}$$

and

$$\begin{aligned} \psi (\alpha ,\beta ):=b=\alpha -\beta ; \end{aligned}$$

then

$$\begin{aligned} V=\left\{ (\alpha ,\beta )\in (-1,\infty )^2:\varphi (\alpha ,\beta )\ge 0,\psi (\alpha ,\beta )\ge 0\right\} \end{aligned}$$

and we have to show that \(V^{\circ }\) coincides with the set

$$\begin{aligned} {\widetilde{V}}:=\left\{ (\alpha ,\beta )\in (-1,\infty )^2:\varphi (\alpha ,\beta )>0,\psi (\alpha ,\beta )>0\right\} . \end{aligned}$$

It is clear that \({\widetilde{V}}\) is a subset of \(V^{\circ }\). Moreover, if there were an element \((\alpha _0,\beta _0)\in V^{\circ }\) with \(\varphi (\alpha _0,\beta _0)=0\), then \(\varphi \) would in particular attain a local minimum at \((\alpha _0,\beta _0)\), so the gradient of \(\varphi \) would have to vanish at \((\alpha _0,\beta _0)\). However, every \((\alpha ,\beta )\in V\) satisfies

$$\begin{aligned} \frac{\partial \varphi }{\partial \alpha }(\alpha ,\beta )&\ge \frac{\partial \varphi }{\partial \alpha }(\alpha ,\beta )-(a^2+2b^2+3a)(4\beta +20)\\&=(24\beta ^2+100\beta +116)\alpha -8\beta ^3-40\beta ^2-20\beta +80\\&\ge (24\beta ^2+100\beta +116)\beta -8\beta ^3-40\beta ^2-20\beta +80\\&=(4\beta +8)(4\beta ^2+7\beta +10)\\&>0, \end{aligned}$$

so the gradient of \(\varphi \) does not vanish on V. In the same way, one sees that \(\psi (\alpha ,\beta )>0\) for all \((\alpha ,\beta )\in V^{\circ }\). Putting all together, we obtain that indeed \(V^{\circ }={\widetilde{V}}\).
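The expansion of \(\varphi \) displayed above can be verified mechanically. A small Python check (our own illustration) compares the factored and the expanded form at sample points; since both are polynomials of degree at most 3 in each variable, agreement on a \(4\times 4\) grid already forces equality.

```python
def phi_factored(alpha, beta):
    """phi(alpha, beta) in the factored (a, b)-form."""
    a, b = alpha + beta + 1, alpha - beta
    return ((a * a + 2 * b * b + 3 * a) * (a + 3) * (a + 5)
            - 3 * (a + 1) * (a + 2) * b * b)

def phi_expanded(alpha, beta):
    """The same polynomial, expanded in powers of alpha as in the proof."""
    return ((4 * beta + 20) * alpha ** 3
            + (8 * beta ** 2 + 40 * beta + 108) * alpha ** 2
            + (4 * beta ** 3 + 40 * beta ** 2 + 96 * beta + 160) * alpha
            + 20 * beta ** 3 + 108 * beta ** 2 + 160 * beta + 96)
```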

We establish the easy direction “(i) \(\Rightarrow \) (ii)” in a similar way as in [10]: if \(b\le 0\), then (2.1) and (2.2) (or also (2.8) and (2.10)) show that

$$\begin{aligned} g_R(1,1;1)=b_1^R=\frac{4b}{(a+3)(a+b+1)}\le 0; \end{aligned}$$

if \(b>0\) but \((\alpha ,\beta )\) is not located in the interior of V, then the equations (2.4) to (2.8) and (2.10) yield

$$\begin{aligned} g_R(2,2;2)=\frac{4[(a^2+2b^2+3a)(a+3)(a+5)-3(a+1)(a+2)b^2]}{(a+3)(a+5)(a+6)(a+b+1)(a+b+3)}\le 0. \end{aligned}$$

We now come to the interesting direction “(ii) \(\Rightarrow \) (i)”. Our proof is based on Gasper’s approach [10, Section 2] but simpler and shorter. We will make use of the equations [10, (2.1) to (2.4), (2.9)], which correspond to the equations (2.4) to (2.11). Moreover, following Gasper we will make use of (2.12), Lemma 2.1 and (2.13) concerning the functions \(\theta (m,m+s;.)\), \(\iota (m,m+s;.)\) and \(\kappa (m,m+s;.)\). We will also use an induction argument which is motivated by Gasper’s approach [10, bottom of p. 587]. However, we will not make use of the painful argument which relies on [10, (2.5), (2.8), (2.10), (2.11)].

Let \((\alpha ,\beta )\in V^{\circ }\), let \(m\in {\mathbb {N}}\) and let \(s\in {\mathbb {N}}_0\). We have to show that \(g_R(m,m+s;s+j)>0\) for all \(j\in \{0,\ldots ,2m\}\). Starting similarly to [10], we use “two-sided induction” and proceed as follows: (2.8) to (2.11) yield \(g_R(m,m+s;s)>0\), \(g_R(m,m+s;s+1)>0\), \(g_R(m,m+s;s+2m)>0\) and \(g_R(m,m+s;s+2m-1)>0\). Of course, the positivity of \(g_R(m,m+s;s)\) and \(g_R(m,m+s;s+2m)\) is also clear from general results, cf. Sect. 1. If \(m=1\), we are already done (this case is also clear from (2.1) and (2.2)). Hence, assume that \(m\ge 2\) from now on; it is then left to show that \(g_R(m,m+s;s+j)>0\) for all \(j\in \{2,\ldots ,2m-2\}\).

(2.12) and (2.13) yield \(\theta (m,m+s;1)>0\) and \(\kappa (m,m+s;2m-1)>0\), and via the equations (2.4) to (2.7) and (2.10), (2.11) we compute

$$\begin{aligned} \begin{aligned}&\frac{(2m+a-1)(2s+a-b+1)(2m+2s+a+1)(2s+a+3)}{4m(m+s+a)(2s+a+1)g_R(m,m+s;s)}\\&\qquad \times \theta (m,m+s;1)g_R(m,m+s;s+2)\\&\quad =\frac{(2m+a-1)(2s+a-b+1)(2m+2s+a+1)(2s+a+3)}{4m(m+s+a)(2s+a+1)g_R(m,m+s;s)}\\&\qquad \times [\iota (m,m+s;1)g_R(m,m+s;s+1)\\&\qquad +\kappa (m,m+s;1)g_R(m,m+s;s)]\\&\quad =(b^2+a)(2m-4)(2m+2s+2a+4)2s\\&\qquad +(a^2+2b^2+3a)[(2m-4)(2m+2s+2a+4)\\&\qquad +2s(2s+2a+8)+(a+3)(a+5)]\\&\qquad -3(a+1)(a+2)b^2 \end{aligned} \end{aligned}$$
(2.16)

and

$$\begin{aligned} \begin{aligned}&\frac{(2m+a-1)(2m+2s+a-1)(4m+2s+a-3)(4m+2s+a+b-1)}{4m(m+s)(4m+2s+a-1)g_R(m,m+s;s+2m)}\\&\qquad \times \kappa (m,m+s;2m-1)g_R(m,m+s;s+2m-2)\\&\quad =\frac{(2m+a-1)(2m+2s+a-1)(4m+2s+a-3)(4m+2s+a+b-1)}{4m(m+s)(4m+2s+a-1)g_R(m,m+s;s+2m)}\\&\qquad \times [\theta (m,m+s;2m-1)g_R(m,m+s;s+2m)\\&\qquad -\iota (m,m+s;2m-1)g_R(m,m+s;s+2m-1)]\\&\quad =(b^2+a)(2m-4)(2m+2s-4)(4m+2s+2a)\\&\qquad +(a^2+2b^2+3a)[(2m-4)(6m+6s+4a+4)\\&\qquad +2s(2s+2a+8)+(a+3)(a+5)]-3(a+1)(a+2)b^2. \end{aligned} \end{aligned}$$
(2.17)

Observe that \(b^2+a>0\): this is trivial for \(a>0\), while for \(a\le 0\) it follows from (2.14) and the fact that \(a(a+1)\le 0\), via the decomposition

$$\begin{aligned} 2b^2+2a=a^2+2b^2+3a-a(a+1). \end{aligned}$$

Therefore, the right-hand sides of (2.16) and (2.17) are greater than or equal to \((a^2+2b^2+3a)(a+3)(a+5)-3(a+1)(a+2)b^2\), so these equations imply that \(g_R(m,m+s;s+2)>0\) and \(g_R(m,m+s;s+2m-2)>0\). We note at this stage that (2.16) and (2.17) allow us to obtain the positivity of \(g_R(m,m+s;s+2)\) and \(g_R(m,m+s;s+2m-2)\) in a much faster way than Gasper’s estimates via [10, (2.5), (2.8), (2.10), (2.11)], which establish the nonnegativity of \(g_R(m,m+s;s+2)\) and \(g_R(m,m+s;s+2m-2)\) under the assumption \((\alpha ,\beta )\in V\). This is our essential simplification of Gasper’s approach.
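To illustrate the preceding computation: once \(\theta ,\iota ,\kappa \) and the boundary values (2.8) and (2.10) are implemented, the decomposition (2.16) can be verified numerically (the analogous check for (2.17) is similar). The following Python sketch is our own illustration, with parameters in the \((a,b)\) notation of (1.1).

```python
from math import factorial

def gbinom(x, k):
    """Generalized binomial coefficient C(x, k) = x(x-1)...(x-k+1)/k!."""
    num = 1.0
    for i in range(k):
        num *= x - i
    return num / factorial(k)

def check_216(m, s, a, b):
    """Return (lhs, rhs) of the decomposition (2.16); the degenerate
    case j - 1 = s = a = 0 of (2.7) is not needed for a > 0."""
    def theta(j):
        return ((2*m - j + a - 1) * (2*m + 2*s + j + a + 1)
                * (2*s + j + 1) * (2*s + 2*j + a - b + 1)
                / ((2*s + 2*j + a + 1) * (2*s + 2*j + a + 2)) * (j + 1))

    def iota(j):
        return b * ((2*m - j) * (2*m + 2*s + j + 2*a)
                    * (2*s + j + 1) / (2*s + 2*j + a + 1) * (j + 1)
                    - (2*m - j + 1) * (2*m + 2*s + j + 2*a - 1)
                    * (2*s + j) / (2*s + 2*j + a - 1) * j)

    def kappa(j):
        return ((2*m - j + 1) * (2*m + 2*s + j + 2*a - 1)
                * (2*s + j + a - 1) * (2*s + 2*j + a + b - 1)
                / ((2*s + 2*j + a - 2) * (2*s + 2*j + a - 1)) * (j + a - 1))

    g0 = (gbinom(m + s, m) * gbinom(2*m + a - 1, m)            # (2.8)
          * gbinom(m + s + (a - b - 1) / 2, m)
          / (gbinom(2*m, m) * gbinom(2*m + 2*s + a, 2*m)
             * gbinom(m + (a + b - 1) / 2, m)))
    g1 = (4 * b * m * (m + s + a) * (2*s + a + 2)              # (2.10)
          / ((2*m + 2*s + a + 1) * (2*m + a - 1) * (2*s + a - b + 1)) * g0)
    g2 = (iota(1) * g1 + kappa(1) * g0) / theta(1)             # (2.4), j = 1

    lhs = ((2*m + a - 1) * (2*s + a - b + 1) * (2*m + 2*s + a + 1)
           * (2*s + a + 3) / (4*m * (m + s + a) * (2*s + a + 1) * g0)
           * theta(1) * g2)
    rhs = ((b*b + a) * (2*m - 4) * (2*m + 2*s + 2*a + 4) * 2*s
           + (a*a + 2*b*b + 3*a) * ((2*m - 4) * (2*m + 2*s + 2*a + 4)
                                    + 2*s * (2*s + 2*a + 8) + (a + 3) * (a + 5))
           - 3 * (a + 1) * (a + 2) * b*b)
    return lhs, rhs
```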

As we are done if \(m=2\), we assume that \(m\ge 3\) from now on. The remaining proof works similarly as in [10] again. We first apply Lemma 2.1 and obtain the existence of an \(N\in \{1,\ldots ,2m-1\}\) such that \(\iota (m,m+s;j)\ge 0\) for \(1\le j\le N\) and \(\iota (m,m+s;j)<0\) for those \(1\le j\le 2m-1\) which satisfy \(j\ge N+1\). We then make use of (2.12) and (2.13) and distinguish two cases:

Case 1: \(N\ge 3\). Then, (2.4) and induction yield

$$\begin{aligned}&g_R(m,m+s;s+j+1) =\frac{\iota (m,m+s;j)}{\theta (m,m+s;j)}g_R(m,m+s;s+j)\\&\quad +\frac{\kappa (m,m+s;j)}{\theta (m,m+s;j)}g_R(m,m+s;s+j-1)\\&\quad >0\;(2\le j\le N-1). \end{aligned}$$

This shows the positivity of \(g_R(m,m+s;s+3),\ldots ,g_R(m,m+s;s+N)\).

Case 2: \(N\le 2m-3\). In this case, (2.4) and induction yield

$$\begin{aligned}&g_R(m,m+s;s+j-1) = -\frac{\iota (m,m+s;j)}{\kappa (m,m+s;j)}g_R(m,m+s;s+j)\\&\quad +\frac{\theta (m,m+s;j)}{\kappa (m,m+s;j)}g_R(m,m+s;s+j+1)\\&\quad >0\;(N+1\le j\le 2m-2), \end{aligned}$$

which establishes the positivity of \(g_R(m,m+s;s+N),\ldots ,g_R(m,m+s;s+2m-3)\).

If \(N\le 2\), then \(N<2m-3\) and the positivity of

$$\begin{aligned} g_R(m,m+s;s+3),\ldots ,g_R(m,m+s;s+2m-3) \end{aligned}$$

is a consequence of Case 2. If \(N\ge 2m-2\), then \(N>3\) and the positivity of \(g_R(m,m+s;s+3),\ldots ,g_R(m,m+s;s+2m-3)\) is a consequence of Case 1. Finally, if \(3\le N\le 2m-3\), then the combination of both cases yields the positivity of \(g_R(m,m+s;s+3),\ldots ,g_R(m,m+s;s+2m-3)\). \(\square \)

Our argument via the central equations (2.16) and (2.17) in the initial step above illustrates a typical aspect of the strategy (this aspect will also be important in Sect. 3): once one knows decompositions such as (2.16) and (2.17), which make the signs of the relevant parts visible, these decompositions are easily (if somewhat tediously) verified by comparing the expansions, or by comparing common zeros and leading coefficients and so on. Hence, the actual task is finding such decompositions.

Remark 2.1

As was similarly observed in [9, 10], the proof of the direction “(ii) \(\Rightarrow \) (i)” simplifies considerably in the special case \(a>0\), i.e., for \((\alpha ,\beta )\in \Delta ^{\circ }\subsetneq V^{\circ }\). On the one hand, for \(a>0\) the functions \(\theta (m,m+s;.)\) and \(\kappa (m,m+s;.)\) are positive on their full domains (cf. (2.5) and (2.7)); hence, one can avoid the computations (and appropriate decompositions) of \(g_R(m,m+s;s+2)\) and \(g_R(m,m+s;s+2m-2)\). On the other hand, the proof of the key ingredient, Lemma 2.1, is simpler for \(a>0\) [9, 10]. If \((\alpha ,\beta )\) is located in the interior of \(\Delta \), then the positivity of all \(g_R(m,n;k)\) can also be seen via Rahman’s formulas (A.1) and (A.2). The simplest subcase is \(\alpha =\beta +1\), for which the positivity of the \(g_R(m,n;k)\) follows from a very simple explicit formula [16].

Proof (Corollary 2.1)

As a consequence of (2.3), the linearization coefficients are connected to each other via

$$\begin{aligned} (-1)^{m+n+k}\widetilde{g_R}(m,n;k)=\frac{(\alpha +1)_m(\alpha +1)_n}{(\alpha +1)_k}\frac{(\beta +1)_k}{(\beta +1)_m(\beta +1)_n}g_R(m,n;k). \end{aligned}$$

Hence, the assertions are immediate consequences of Theorems 1.1 and 2.1; cf. also the remarks at the end of [10, Section 1]. \(\square \)

3 Linearization of the Product of Generalized Chebyshev Polynomials: Solution to a Problem of Szwarc

Let \(\alpha ,\beta >-1\) again, and let a and b be defined as in Sect. 1. In the following, we use the notation and auxiliary functions of Sect. 2. The sequence \((T_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) satisfies the recurrence relation \(T_0^{(\alpha ,\beta )}(x)=1\), \(T_1^{(\alpha ,\beta )}(x)=x\),

$$\begin{aligned} x T_n^{(\alpha ,\beta )}(x)=a_n^T T_{n+1}^{(\alpha ,\beta )}(x)+c_n^T T_{n-1}^{(\alpha ,\beta )}(x)\;(n\in {\mathbb {N}}) \end{aligned}$$
(3.1)

with \((a_n^T)_{n\in {\mathbb {N}}},(c_n^T)_{n\in {\mathbb {N}}}\subseteq (0,1)\) given by

$$\begin{aligned} a_{2n-1}^T&\equiv \frac{n+\alpha }{2n+\alpha +\beta }=\frac{2n+a+b-1}{4n+2a-2},\;a_{2n}^T\equiv \frac{n+\alpha +\beta +1}{2n+\alpha +\beta +1}=\frac{n+a}{2n+a},\nonumber \\ c_{2n-1}^T&\equiv \frac{n+\beta }{2n+\alpha +\beta }=\frac{2n+a-b-1}{4n+2a-2},\;c_{2n}^T\equiv \frac{n}{2n+\alpha +\beta +1}=\frac{n}{2n+a}\nonumber \\ \end{aligned}$$
(3.2)

[24, 3 (f)]. Using (1.3), (1.4), (3.1) and (3.2), one can relate the \(g_T(m,n;k)\) to the \(g_R(m,n;k)\) and \(g_R^+(m,n;k)\). This was done in [24]: one has

$$\begin{aligned} g_T(2m,2n;2k)=g_R(m,n;k) \end{aligned}$$
(3.3)

and

$$\begin{aligned}&g_T(2m+1,2n+1;2k)\nonumber \\&\quad ={\left\{ \begin{array}{ll} c_{2|m-n|+1}^T g_R^+(m,n;|m-n|), &{} k=|m-n|, \\ a_{2m+2n+1}^T g_R^+(m,n;m+n), &{} k=m+n+1, \\ a_{2k-1}^T g_R^+(m,n;k-1)+c_{2k+1}^T g_R^+(m,n;k), &{} \text{ else }; \end{array}\right. } \end{aligned}$$
(3.4)

moreover, \(g_T(m,n;k)=0\) if \(m+n-k\) is odd (a trivial consequence of symmetry), and \(g_T(2m+1,2n;2k+1)\) and \(g_T(2m,2n+1;2k+1)\) relate to (3.4) via

$$\begin{aligned} g_T(2m+1,2n;2k+1)=\frac{h_T(2k+1)}{h_T(2n)}g_T(2m+1,2k+1;2n), \end{aligned}$$
(3.5)

which is a consequence of (1.7), and

$$\begin{aligned} g_T(2m,2n+1;2k+1)=g_T(2n+1,2m;2k+1). \end{aligned}$$
(3.6)
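As an illustrative aside (not part of the proofs), the relation (3.3) can be checked with a computer algebra system. The sketch below builds the \(T_n^{(\alpha ,\beta )}\) from the recurrence (3.1)–(3.2), obtains the normalized Jacobi polynomials \(R_n^{(\alpha ,\beta )}\) from SymPy, and computes linearization coefficients by repeatedly matching leading terms; all helper names are ours.

```python
# Sketch (ours): verify g_T(2m, 2n; 2k) = g_R(m, n; k), i.e., relation (3.3),
# for one sample parameter pair and m = 1, n = 2.
import sympy as sp

x = sp.symbols('x')
alpha, beta = sp.Rational(1, 2), sp.Rational(1, 4)  # sample pair in (-1, oo)^2

def T_polys(N):
    """T_0, ..., T_N via T_0 = 1, T_1 = x and (3.1) with coefficients (3.2)."""
    T = [sp.Integer(1), x]
    for n in range(1, N):
        if n % 2 == 1:                      # n = 2k - 1
            k = (n + 1) // 2
            a_n = (k + alpha) / (2*k + alpha + beta)
        else:                               # n = 2k
            k = n // 2
            a_n = (k + alpha + beta + 1) / (2*k + alpha + beta + 1)
        c_n = 1 - a_n                       # a_n^T + c_n^T = 1 by (3.2)
        T.append(sp.expand((x*T[n] - c_n*T[n - 1]) / a_n))
    return T

def R_poly(n):
    """Jacobi polynomial normalized so that R_n(1) = 1."""
    P = sp.jacobi(n, alpha, beta, x)
    return sp.expand(P / P.subs(x, 1))

def linearize(product, basis):
    """Coefficients g with product = sum_k g[k]*basis[k] (deg basis[k] = k)."""
    p = sp.expand(product)
    g = [sp.Integer(0)] * len(basis)
    for k in range(len(basis) - 1, -1, -1):
        c = p.coeff(x, k) / basis[k].coeff(x, k)
        g[k] = c
        p = sp.expand(p - c * basis[k])
    return g
```

The same machinery can be used to explore (3.4)–(3.6) experimentally for small indices.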

We deal with the following problems:

  1. (A)

    Szwarc’s problem, cf. Sect. 1: find all pairs \((\alpha ,\beta )\in (-1,\infty )^2\) such that \((T_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) satisfies nonnegative linearization of products, i.e., such that all \(g_T(m,n;k)\) are nonnegative.

  2. (B)

    Find all pairs \((\alpha ,\beta )\in (-1,\infty )^2\) such that all \(g_T(m,n;k)\) with at least one odd entry among m, n are nonnegative.

The pairs \((\alpha ,\beta )\in (-1,\infty )^2\) such that all \(g_T(m,n;k)\) with two even entries m, n are nonnegative are exactly the \((\alpha ,\beta )\in V\); this is an obvious consequence of (3.3) and Theorem 1.1. Hence, it will be interesting to compare the set resulting from (B) with V.

The solutions to (A) and (B) will be given in Theorems 3.2 and 3.1, respectively. We want to motivate these results by establishing two necessary conditions for the pairs \((\alpha ,\beta )\) as in (B): in the following, we show that every such pair \((\alpha ,\beta )\) necessarily fulfills the conditions \(b\ge 0\) and \(a^2+2b^2+3a\ge 0\). More precisely, we show that

  • if \(b<0\), then \(g_T(2m+1,2m+2s+1;2s+2)<0\) for sufficiently large \(m\in {\mathbb {N}}\),

  • if \(a^2+2b^2+3a<0\), then \(g_T(2m+1,2m+1;4)<0\) for sufficiently large \(m\in {\mathbb {N}}\).
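These two conditions are easy to test numerically. A minimal sketch (ours), with \(a=\alpha +\beta +1\) and \(b=\alpha -\beta \) as in Sect. 1, using exact rational arithmetic:

```python
# Sketch (ours): membership test for the two necessary conditions,
# with a = alpha + beta + 1 and b = alpha - beta as in Sect. 1.
from fractions import Fraction

def conditions(alpha, beta):
    """Return the pair of truth values (b >= 0, a^2 + 2b^2 + 3a >= 0)."""
    a, b = alpha + beta + 1, alpha - beta
    return b >= 0, a*a + 2*b*b + 3*a >= 0
```

For instance, \((\alpha ,\beta )=(-3/10,-9/10)\) satisfies both conditions although \(a<0\), whereas \((-2/5,-4/5)\) violates the second one.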

Given any \(\alpha ,\beta >-1\) and arbitrary \(m,n\in {\mathbb {N}}\) with \(n\ge m\), we write \(s:=n-m\), use the notation of the previous sections and compute

$$\begin{aligned}&(2m+a)(2m+2s+a+2)\frac{2s+a+b+1}{2s+a+2}\left( \frac{c_{2s+3}^T}{a_{2s+1}^T}\frac{g_R^+(m,m+s;s+1)}{g_R^+(m,m+s;s)}+1\right) \nonumber \\&\quad =4bm^2+4b(s+a+1)m+a(2s+a+b+1) \end{aligned}$$
(3.7)

via (2.10) and (3.2). Making also use of (2.4), which yields

$$\begin{aligned} \frac{g_R^+(m,m+s;s+2)}{\underbrace{g_R^+(m,m+s;s+1)}_{\ne 0}}=\frac{\iota ^+(m,m+s;1)}{\underbrace{\theta ^+(m,m+s;1)}_{>0}}+\frac{\kappa ^+(m,m+s;1)}{\theta ^+(m,m+s;1)}\frac{g_R^+(m,m+s;s)}{g_R^+(m,m+s;s+1)} \end{aligned}$$

for \(b\ne 1\), and combining this with (2.5) to (2.7), (2.10) and (3.2), we furthermore obtain

$$\begin{aligned}&4(b-1)(2m+a-1)(2m+2s+a+3)\frac{(s+1)(2s+a+b+3)}{2s+a+4}\nonumber \\&\qquad \times \left( \frac{c_{2s+5}^T}{a_{2s+3}^T}\frac{g_R^+(m,m+s;s+2)}{g_R^+(m,m+s;s+1)}+1\right) \nonumber \\&\quad =(4m-4)(m+s+a+2)\left[ (a^2+2b^2+3a)(s+1)-a(a+1)s\right] \nonumber \\&\qquad +(a+1)(2s+a+b+3)\left[ (a+2b)(2s+2-b)+a^2+2b^2+3a\right] \;(b\ne 1).\nonumber \\ \end{aligned}$$
(3.8)

If \(b<0\), then the right-hand side of (3.7) becomes negative for all sufficiently large \(m\in {\mathbb {N}}\), whereas

$$\begin{aligned} (2m+a)(2m+2s+a+2)\frac{2s+a+b+1}{2s+a+2} \end{aligned}$$

is always positive. Hence, if \(b<0\), then

$$\begin{aligned} \frac{c_{2s+3}^T}{a_{2s+1}^T}\frac{g_R^+(m,m+s;s+1)}{g_R^+(m,m+s;s)}+1 \end{aligned}$$

is negative for sufficiently large \(m\in {\mathbb {N}}\). Since \(g_R^+(m,m+s;s)\) is always positive, the latter yields the negativity of

$$\begin{aligned}&g_T(2m+1,2m+2s+1;2s+2)\\&\quad =a_{2s+1}^T g_R^+(m,m+s;s)+c_{2s+3}^T g_R^+(m,m+s;s+1) \end{aligned}$$

(see (3.4)) for sufficiently large \(m\in {\mathbb {N}}\).

Now assume that \(a^2+2b^2+3a<0\). On the one hand, one then necessarily has \(b<1\) (for if \(b\ge 1\), then \(a^2+2b^2+3a\ge a^2+2+3a=(a+1)(a+2)>0\)), so

$$\begin{aligned} 4(b-1)(2m+a-1)(2m+2s+a+3)\frac{(s+1)(2s+a+b+3)}{2s+a+4} \end{aligned}$$

is always negative. On the other hand, if \(s=0\), then the right-hand side of (3.8) becomes negative for all sufficiently large \(m\in {\mathbb {N}}\). Hence,

$$\begin{aligned} \frac{c_5^T}{a_3^T}\frac{g_R^+(m,m;2)}{g_R^+(m,m;1)}+1 \end{aligned}$$

is positive for sufficiently large \(m\in {\mathbb {N}}\). Since, due to (2.10), \(g_R^+(m,m;1)\) is negative, we obtain the negativity of

$$\begin{aligned} g_T(2m+1,2m+1;4)=a_3^T g_R^+(m,m;1)+c_5^T g_R^+(m,m;2) \end{aligned}$$

(see (3.4)) for sufficiently large \(m\in {\mathbb {N}}\).

Putting everything together, we see that every pair \((\alpha ,\beta )\) as in (B) indeed has to satisfy both \(b\ge 0\) and \(a^2+2b^2+3a\ge 0\). Our following result deals with the converse and shows that these two conditions already characterize (B); the set \(V^{\prime }\) defined in Theorem 3.1 is illustrated in Fig. 2.

Theorem 3.1

Let \(\alpha ,\beta >-1\). The following are equivalent:

  1. (i)

    For all \(m,n\in {\mathbb {N}}_0\) such that at least one of these numbers is odd, all linearization coefficients \(g_T(m,n;k)\) are nonnegative.

  2. (ii)

    \((\alpha ,\beta )\in V^{\prime }\), where

    $$\begin{aligned} V^{\prime }:=\left\{ (\alpha ,\beta )\in (-1,\infty )^2:a^2+2b^2+3a\ge 0,b\ge 0\right\} \supsetneq V. \end{aligned}$$

If \((\alpha ,\beta )\in V^{\prime }\backslash \Delta \) and \(m,n\in {\mathbb {N}}_0\) are such that at least one of these numbers is odd, and if \(k\in \{|m-n|,\ldots ,m+n\}\) is such that \(m+n-k\) is even, then \(g_T(m,n;k)\) is positive.

Fig. 2 The set \(V^{\prime }\). The dot-dashed line and the dashed line correspond to the boundaries of V and \(\Delta \) (see Sect. 1)

Fig. 3 Geometry of the set \(V^{\prime }\)

The inclusion \(V^{\prime }\supsetneq V\) is clear from the rewritten form (1.8) of V (one easily finds points showing that the inclusion is proper). Moreover, \(V^{\prime }\) is a subset of \([-1/2,\infty )\times (-1,\infty )\), which can be seen as follows: let \((\alpha ,\beta )\in V^{\prime }\). If \(a\ge 0\), then \((\alpha ,\beta )\in \Delta \subseteq [-1/2,\infty )\times (-1,\infty )\), and if \(a<0\), then

$$\begin{aligned} 2\underbrace{(b-a)}_{>0}(b+a)=a^2+2b^2+3a-3a(a+1)>0 \end{aligned}$$

and consequently

$$\begin{aligned} 0<b+a=2\alpha +1. \end{aligned}$$

This establishes \(V^{\prime }\subseteq [-1/2,\infty )\times (-1,\infty )\); it is clear that the inclusion is proper. Concerning the geometry of \(V^{\prime }\), we note that one obtains the set \(\left\{ (\alpha ,\beta )\in {\mathbb {R}}^2:a^2+2b^2+3a=0\right\} \) by rotating the ellipse

$$\begin{aligned} \left( \frac{x}{\frac{3}{4}\sqrt{2}}\right) ^2+\left( \frac{y}{\frac{3}{4}}\right) ^2=1 \end{aligned}$$

by \(\pi /4\) and shifting the image by \((-5/4,-5/4)^T\) (cf. Fig. 3). The small region \(V^{\prime }\backslash V\) is bounded on the left by a curve \(c^{\prime }\) in the \((\alpha ,\beta )\)-plane which starts at the point \((\alpha ,\beta )=(-1/3,-1)\), approaches the line \(\alpha +\beta +1=0\) tangentially and meets this line at the point \((\alpha ,\beta )=(-1/2,-1/2)\) (cf. also the related set W considered in [10]). The angle between the line \(\beta =-1\) and \(c^{\prime }\) is \(\approx 86.2^{\circ }\) (in particular, \(c^{\prime }\) cannot be written as \(\beta =f(\alpha )\) with a single function f).
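The rotation-and-shift description of the boundary curve can be verified symbolically. The following sketch (ours) parametrizes the stated ellipse, applies the rotation by \(\pi /4\) and the shift by \((-5/4,-5/4)^T\), and confirms that the image satisfies \(a^2+2b^2+3a=0\):

```python
# Sketch (ours): rotating the ellipse (x/(3*sqrt(2)/4))^2 + (y/(3/4))^2 = 1
# by pi/4 and shifting by (-5/4, -5/4) yields the curve a^2 + 2b^2 + 3a = 0.
import sympy as sp

t = sp.symbols('t', real=True)
X = sp.Rational(3, 4) * sp.sqrt(2) * sp.cos(t)   # point on the ellipse
Y = sp.Rational(3, 4) * sp.sin(t)
c, s = sp.cos(sp.pi / 4), sp.sin(sp.pi / 4)      # rotation by pi/4
alpha = c*X - s*Y - sp.Rational(5, 4)            # shift by (-5/4, -5/4)
beta  = s*X + c*Y - sp.Rational(5, 4)
a, b = alpha + beta + 1, alpha - beta            # a, b as in Sect. 1
expr = sp.simplify(a**2 + 2*b**2 + 3*a)
```

Note that the orientation of the rotation matters here; rotating by \(-\pi /4\) instead would interchange the roles of the semi-axes.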

As a consequence of Theorem 3.1, we will obtain our second main result of this section and the answer to (A):

Theorem 3.2

Let \(\alpha ,\beta >-1\). The following are equivalent:

  1. (i)

    \((T_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) satisfies nonnegative linearization of products, i.e., all \(g_T(m,n;k)\) are nonnegative.

  2. (ii)

    \((\alpha ,\beta )\in V\).

Comparing Theorems 1.1, 2.1 and 3.2, one may ask whether all \(g_T(m,n;k)\) are positive if \((\alpha ,\beta )\) is located in the interior of V. However, this is not true: just recall that for every choice of \((\alpha ,\beta )\in (-1,\infty )^2\) one has \(g_T(m,n;k)=0\) if \(m+n-k\) is odd.

We now come to the proofs.

Since Theorem 3.1 implies that the set of all pairs \((\alpha ,\beta )\in (-1,\infty )^2\) such that \((T_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) satisfies nonnegative linearization of products is given by \(V\cap V^{\prime }=V\), Theorem 3.2 follows from Theorem 3.1.

The implication “(i) \(\Rightarrow \) (ii)” of Theorem 3.1 was already established above. In view of Szwarc’s earlier result, which already shows that \((T_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) satisfies nonnegative linearization of products at least for all \((\alpha ,\beta )\in \Delta \) (cf. Sect. 1), the converse “(ii) \(\Rightarrow \) (i)” is a consequence of the assertion made in the second part of Theorem 3.1.

In view of these observations, and in view of (3.5) and (3.6), Theorems 3.1 and 3.2 reduce to the following lemma:

Lemma 3.1

Let \((\alpha ,\beta )\in V^{\prime }\backslash \Delta \), and let \(m\in {\mathbb {N}}\), \(s\in {\mathbb {N}}_0\). Then, \(g_T(2m+1,2m+2s+1;2s+2j)>0\) for all \(j\in \{0,\ldots ,2m+1\}\).

Due to the positivity of the sequences \((a_n^T)_{n\in {\mathbb {N}}}\) and \((c_n^T)_{n\in {\mathbb {N}}}\), the assertion of Lemma 3.1 is also true for \(m=0\), of course.

Our task is to establish Lemma 3.1, which will be done via Corollary 2.1, another “two-sided induction” argument (cf. the proof of Theorem 2.1) and an auxiliary result. The latter will be needed for the (more involved) induction step.

For the rest of the section, we always assume that \((\alpha ,\beta )\in V^{\prime }\backslash \Delta \) and that \(m\in {\mathbb {N}}\), \(s\in {\mathbb {N}}_0\).

Under these conditions, we have

$$\begin{aligned} a\in \left( -\frac{1}{3},0\right) \end{aligned}$$

and

$$\begin{aligned} b\in (-a,1+a)\subseteq (0,1). \end{aligned}$$

The inequality \(a>-1/3\) follows immediately from

$$\begin{aligned} 0\le a^2+2b^2+3a<a^2+2(1+a)^2+3a=(a+2)(3a+1) \end{aligned}$$

(and can also be found in [10]). The inequality \(b>-a\) was shown above.
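The elementary polynomial identities in a and b used in this and the preceding arguments are quickly confirmed symbolically; a sketch of ours:

```python
# Sketch (ours): the three polynomial identities in a, b used above.
import sympy as sp

a, b = sp.symbols('a b')
# used for V' being a subset of [-1/2, oo) x (-1, oo):
assert sp.expand(2*(b - a)*(b + a) - (a**2 + 2*b**2 + 3*a - 3*a*(a + 1))) == 0
# used to deduce b < 1 from a^2 + 2b^2 + 3a < 0:
assert sp.expand(a**2 + 2 + 3*a - (a + 1)*(a + 2)) == 0
# used to deduce a > -1/3:
assert sp.expand(a**2 + 2*(1 + a)**2 + 3*a - (a + 2)*(3*a + 1)) == 0
```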

We now define two auxiliary functions \(p:\{1,\ldots ,2m-1\}\rightarrow {\mathbb {R}}\), \(q:\{1,\ldots ,2m-1\}\rightarrow (0,\infty )\) by

$$\begin{aligned} p(j)&:=\frac{c_{2s+2j+3}^T}{a_{2s+2j+1}^T}\frac{\iota ^+(m,m+s;j)}{\theta ^+(m,m+s;j)},\\ q(j)&:=\frac{c_{2s+2j+1}^T c_{2s+2j+3}^T}{a_{2s+2j-1}^T a_{2s+2j+1}^T}\frac{\kappa ^+(m,m+s;j)}{\theta ^+(m,m+s;j)}. \end{aligned}$$

Concerning well-definedness, observe that \(\theta ^+(m,m+s;.)\) and \(\kappa ^+(m,m+s;.)\) are positive on their full domains, which follows directly from the definitions (2.5), (2.7). Using (2.5) to (2.7) and (3.2), we compute

$$\begin{aligned} \begin{aligned} p(j)&=p^{\infty }(j)+\frac{p^{*}(j)}{(2m-j+a)(2m+2s+j+a+2)},\\ q(j)&=q^{\infty }(j)+\frac{q^{*}(j)}{(2m-j+a)(2m+2s+j+a+2)} \end{aligned} \end{aligned}$$
(3.9)

for all \(j\in \{1,\ldots ,2m-1\}\), where the four functions \(p^{\infty }:{\mathbb {N}}\rightarrow (-1,\infty )\), \(p^{*},q^{\infty },q^{*}:{\mathbb {N}}\rightarrow (0,\infty )\) are given by

$$\begin{aligned}&p^{\infty }(j)\nonumber \\&\quad =-1+\frac{2s+2j+a+2}{(2s+j+1)(2s+2j+a)(2s+2j+a+b+1)(j+1)}\nonumber \\&\qquad \times \left[ b(2s+j+1)(2s+2j+a)(j+1)+(1-b)(2s+j)(2s+2j+a+1)j\right] ,\nonumber \\&p^{*}(j)\nonumber \\&\quad =(1-b)\frac{(2s+j+a)(2s+2j+a+1)(2s+2j+a+2)(j+a)(2s+2j+1)}{(2s+j+1)(2s+2j+a)(2s+2j+a+b+1)(j+1)},\nonumber \\&q^{\infty }(j)\nonumber \\&\quad =\frac{(2s+2j+a+2)(2s+j+a)(2s+2j+a-b+1)(j+a)}{(2s+j+1)(2s+2j+a)(2s+2j+a+b+1)(j+1)},\nonumber \\&q^{*}(j)\nonumber \\&\quad =\frac{2s+2j+a+2}{(2s+j+1)(2s+2j+a)(2s+2j+a+b+1)(j+1)}\nonumber \\&\qquad \times (1-a)(2s+j+a)(2s+2j+a+1)(j+a)(2s+2j+a-b+1). \end{aligned}$$
(3.10)

Note that the functions \(p^{\infty }\), \(p^{*}\), \(q^{\infty }\) and \(q^{*}\) are independent of m. The superscript “\(\infty \)” is used because \(p^{\infty }\) and \(q^{\infty }\) are just the pointwise limits of p and q as m tends to infinity.

As a first consequence of (3.9) and (3.10), we obtain that p maps into the interval \((-1,\infty )\).

The following lemma is the announced auxiliary result and provides an inequality in p and q which will be central in the proof of Lemma 3.1.

Lemma 3.2

Let \((\alpha ,\beta )\in V^{\prime }\backslash \Delta \) and \(m\ge 2\), \(s\in {\mathbb {N}}_0\). Then, for every \(j\in \{1,\ldots ,2m-2\}\) the inequality

$$\begin{aligned}{}[1+p(j+1)][q(j)-p(j)]<q(j+1) \end{aligned}$$
(3.11)

is valid.

Proof

The basic idea is to use (3.9) and (3.10) in order to isolate m in an appropriate way. Let \(j\in \{1,\ldots ,2m-2\}\). We decompose

$$\begin{aligned} \begin{aligned}&q(j+1)-[1+p(j+1)][q(j)-p(j)]\\&\quad =q^{\infty }(j+1)-[1+p^{\infty }(j+1)][q^{\infty }(j)-p^{\infty }(j)]\\&\qquad +\frac{q^{*}(j+1)-p^{*}(j+1)[q^{\infty }(j)-p^{\infty }(j)]}{(2m-j+a-1)(2m+2s+j+a+3)}\\&\qquad -\frac{[1+p^{\infty }(j+1)][q^{*}(j)-p^{*}(j)]}{(2m-j+a)(2m+2s+j+a+2)}\\&\qquad -\frac{p^{*}(j+1)[q^{*}(j)-p^{*}(j)]}{(2m-j+a-1)(2m-j+a)(2m+2s+j+a+2)(2m+2s+j+a+3)} \end{aligned} \end{aligned}$$
(3.12)

and compute

$$\begin{aligned} \begin{aligned} \omega _j:&=q^{\infty }(j+1)-[1+p^{\infty }(j+1)][q^{\infty }(j)-p^{\infty }(j)]\\&=\frac{(b-a)b[2s(2s+2j+a+2)+(j+a)(2j+4)+1-a]}{(2s+j+1)(2s+j+2)(j+1)(j+2)}\\&\quad \times \frac{(2s+2j+a+2)(2s+2j+a+4)}{(2s+2j+a+b+1)(2s+2j+a+b+3)}\\&>0. \end{aligned} \end{aligned}$$
(3.13)

Combining (3.13) with (3.12), we obtain

$$\begin{aligned} \begin{aligned}&\frac{(2m-j+a-1)(2m-j+a)(2m+2s+j+a+2)(2m+2s+j+a+3)}{\omega _j}\\&\qquad \times [q(j+1)-[1+p(j+1)][q(j)-p(j)]]\\&\quad =[(2m-j+a-1)(2m+2s+j+a+3)+\alpha _j]\\&\qquad \times [(2m-j+a)(2m+2s+j+a+2)+\beta _j]+\rho _j\\&\quad =[(2m-j+a-1)((2m-j+a-1)+\sigma _j+1)+\alpha _j]\\&\qquad \times [((2m-j+a-1)+1)((2m-j+a-1)+\sigma _j)+\beta _j]+\rho _j \end{aligned} \end{aligned}$$
(3.14)

with

$$\begin{aligned} \alpha _j&:=\frac{q^{*}(j+1)-p^{*}(j+1)[q^{\infty }(j)-p^{\infty }(j)]}{\omega _j},\\ \beta _j&:=-\frac{[1+p^{\infty }(j+1)][q^{*}(j)-p^{*}(j)]}{\omega _j},\\ \rho _j&:=-\frac{p^{*}(j+1)[q^{*}(j)-p^{*}(j)]}{\omega _j}-\alpha _j\beta _j,\\ \sigma _j&:=2s+2j+3. \end{aligned}$$

We now define

$$\begin{aligned} f:\left[ \frac{j-a+1}{2},\infty \right) \rightarrow {\mathbb {R}} \end{aligned}$$

by

$$\begin{aligned} \begin{aligned} f(x):&=[(2x-j+a-1)((2x-j+a-1)+\sigma _j+1)+\alpha _j]\\&\quad \times [((2x-j+a-1)+1)((2x-j+a-1)+\sigma _j)+\beta _j]+\rho _j \end{aligned} \end{aligned}$$

and claim that f maps into \((0,\infty )\); once the claim is proven, inequality (3.11) will follow for j via

$$\begin{aligned} m\in \left[ \frac{j-a+1}{2},\infty \right) , \end{aligned}$$

(3.13) and (3.14). To establish the claim, we first compute

$$\begin{aligned} \begin{aligned} f^{\prime }(x)&=[4(2x-j+a-1)+2\sigma _j+2]\\&\quad \times \left[ ((2x-j+a-1)+1)((2x-j+a-1)+\sigma _j)+\beta _j\right. \\&\quad \left. +(2x-j+a-1)((2x-j+a-1)+\sigma _j+1)+\alpha _j\right] . \end{aligned} \end{aligned}$$

Then, two rather tedious calculations yield

$$\begin{aligned} f\left( \frac{j-a+1}{2}\right)&=\alpha _j(\sigma _j+\beta _j)+\rho _j\\&=\frac{(2s+j+a+1)(2s+2j+3)(2s+2j+a+3)(j+a+1)}{b[2s(2s+2j+a+2)+(j+a)(2j+4)+1-a]}\\&\quad \times [b(2s+j+1)(j+1)+(1-b)(2-a)(2s+2j+a+1)]\\&>0 \end{aligned}$$

and, for each \(x\ge (j-a+1)/2\),

$$\begin{aligned}&((2x-j+a-1)+1)((2x-j+a-1)+\sigma _j)+\beta _j\\&\qquad +(2x-j+a-1)((2x-j+a-1)+\sigma _j+1)+\alpha _j\\&\quad \ge \sigma _j+\beta _j+\alpha _j\\&\quad =\frac{1}{b[2s(2s+2j+a+2)+(j+a)(2j+4)+1-a]}\\&\qquad \times \left[ b\left( (1-a)(2s+j+2)(2s+j+a)(2s+2j+a+3)\right. \right. \\&\qquad \left. \left. +(1-a)(2s+2j+a+3)(j+1)(j+a+1)\right. \right. \\&\qquad \left. \left. +(2s+j+2)(2s+j+a+1)(j+a)(2j+4)\right. \right. \\&\qquad \left. \left. +2s(2s+2j+3)(2s+2j+a+2)+(j+1)(j+a)(2j+4)\right. \right. \\&\qquad \left. \left. +(1-a)(2s+2j+3)\right) \right. \\&\qquad \left. +(1-b)\left( (2s+j)(2j+a+2)+(2+a)j+2+3a\right) \right. \\&\qquad \left. \times (2s+2j+a+1)(2s+2j+a+3)\right] \\&\quad >0 \end{aligned}$$

(as in Sect. 2, verifying such decompositions, which allow one to see positivity directly, is easy; the actual difficulty is finding them). Obviously, we also have

$$\begin{aligned} 4(2x-j+a-1)+2\sigma _j+2\ge 2\sigma _j+2>0 \end{aligned}$$

for every \(x\ge (j-a+1)/2\), so \(f^{\prime }\) maps into \((0,\infty )\). This finishes the proof. \(\square \)
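Since the functions \(p^{\infty }\), \(p^{*}\), \(q^{\infty }\), \(q^{*}\) are given explicitly in (3.10), the inequality (3.11) can also be checked numerically via (3.9). The sketch below (ours; function names are not from the paper) transcribes (3.9)–(3.10) into exact rational arithmetic and evaluates p and q at the sample point \((\alpha ,\beta )=(-3/10,-9/10)\in V^{\prime }\backslash \Delta \), i.e., \(a=-1/5\), \(b=3/5\):

```python
# Sketch (ours): evaluate p, q via (3.9)-(3.10) in exact rational arithmetic.
from fractions import Fraction as F

a, b = F(-1, 5), F(3, 5)       # sample point in V' \ Delta

def denom(s, j):
    return (2*s + j + 1)*(2*s + 2*j + a)*(2*s + 2*j + a + b + 1)*(j + 1)

def p_inf(s, j):
    num = (b*(2*s + j + 1)*(2*s + 2*j + a)*(j + 1)
           + (1 - b)*(2*s + j)*(2*s + 2*j + a + 1)*j)
    return -1 + (2*s + 2*j + a + 2)*num/denom(s, j)

def p_star(s, j):
    return ((1 - b)*(2*s + j + a)*(2*s + 2*j + a + 1)*(2*s + 2*j + a + 2)
            *(j + a)*(2*s + 2*j + 1)/denom(s, j))

def q_inf(s, j):
    return ((2*s + 2*j + a + 2)*(2*s + j + a)*(2*s + 2*j + a - b + 1)
            *(j + a)/denom(s, j))

def q_star(s, j):
    return ((2*s + 2*j + a + 2)*(1 - a)*(2*s + j + a)*(2*s + 2*j + a + 1)
            *(j + a)*(2*s + 2*j + a - b + 1)/denom(s, j))

def p(m, s, j):
    return p_inf(s, j) + p_star(s, j)/((2*m - j + a)*(2*m + 2*s + j + a + 2))

def q(m, s, j):
    return q_inf(s, j) + q_star(s, j)/((2*m - j + a)*(2*m + 2*s + j + a + 2))
```

For small m and s one can then confirm that p maps into \((-1,\infty )\), that q is positive and that (3.11) holds on the whole range \(j\in \{1,\ldots ,2m-2\}\).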

We now come to the proof of Lemma 3.1.

Proof (Lemma 3.1)

As a consequence of Corollary 2.1, all numbers

$$\begin{aligned} (-1)^j g_R^+(m,m+s;s+j),\;j\in \{0,\ldots ,2m\}, \end{aligned}$$

are positive (observe that \((\beta +1,\alpha )\) is located in the interior of \(\Delta \)). Alternatively, the positivity of the numbers \((-1)^j g_R^+(m,m+s;s+j)\) can be obtained from (2.3) and Rahman’s formula (A.3) (take into account that \(\beta +1>0>\alpha >-1/2\)). Hence, we may define \(\phi :\{1,\ldots ,2m\}\rightarrow (-\infty ,0)\),

$$\begin{aligned} \phi (j):=\frac{c_{2s+2j+1}^T}{a_{2s+2j-1}^T}\frac{g_R^+(m,m+s;s+j)}{g_R^+(m,m+s;s+j-1)}. \end{aligned}$$

As a consequence of (2.4), we have

$$\begin{aligned}&p(j)+\frac{q(j)}{\phi (j)}\\&\quad =\frac{c_{2s+2j+3}^T}{a_{2s+2j+1}^T}\frac{\iota ^+(m,m+s;j)}{\theta ^+(m,m+s;j)}\\&\qquad +\frac{c_{2s+2j+1}^T c_{2s+2j+3}^T}{a_{2s+2j-1}^T a_{2s+2j+1}^T}\frac{\kappa ^+(m,m+s;j)}{\theta ^+(m,m+s;j)}\frac{a_{2s+2j-1}^T}{c_{2s+2j+1}^T}\frac{g_R^+(m,m+s;s+j-1)}{g_R^+(m,m+s;s+j)}\\&\quad =\frac{c_{2s+2j+3}^T}{a_{2s+2j+1}^T}\frac{g_R^+(m,m+s;s+j+1)}{g_R^+(m,m+s;s+j)} \end{aligned}$$

and obtain the recurrence relation

$$\begin{aligned} \phi (j+1)=p(j)+\frac{q(j)}{\phi (j)}\;(1\le j\le 2m-1). \end{aligned}$$

We now use this recurrence relation and induction to show that

$$\begin{aligned} \phi (2j)<-1 \end{aligned}$$
(3.15)

and

$$\begin{aligned} \phi (2j-1)>-1 \end{aligned}$$
(3.16)

for all \(j\in \{1,\ldots ,m\}\). As a consequence of (3.8) and

$$\begin{aligned} a+2b>-a>0, \end{aligned}$$

we see that \(\phi (2)<-1\). Moreover, making use of (2.4), which yields

$$\begin{aligned}&\frac{g_R^+(m,m+s;s+2m-2)}{\underbrace{g_R^+(m,m+s;s+2m-1)}_{\ne 0}}\\&\quad =-\frac{\iota ^+(m,m+s;2m-1)}{\underbrace{\kappa ^+(m,m+s;2m-1)}_{>0}}+\frac{\theta ^+(m,m+s;2m-1)}{\kappa ^+(m,m+s;2m-1)}\frac{g_R^+(m,m+s;s+2m)}{g_R^+(m,m+s;s+2m-1)}, \end{aligned}$$

and combining this with (2.5) to (2.7), (2.11) and (3.2), we obtain that

$$\begin{aligned} \begin{aligned}&4(b-1)\frac{(2m+a-1)(2m+s+a)(2m+2s+a-1)(4m+2s+a-b-1)}{4m+2s+a-2}\\&\qquad \times \left( \frac{a_{4m+2s-3}^T}{c_{4m+2s-1}^T}\frac{g_R^+(m,m+s;s+2m-2)}{g_R^+(m,m+s;s+2m-1)}+1\right) \\&\quad =(2m+a-1)(2m+2s+a-1)\\&\qquad \times \left[ (a^2+2b^2+3a)(2m+s-1)-a(a+1)(2m+s-2)+(2+2a)b^2\right] \\&\qquad +(a+1)b(2-b)(4m+2s+a)(4m+2s+2a-1). \end{aligned} \end{aligned}$$

The left-hand side of this equation is negative (as \(b<1\) and the remaining factors are positive), while the right-hand side is positive; hence \(1/\phi (2m-1)+1<0\), and since \(\phi (2m-1)<0\), we obtain \(\phi (2m-1)>-1\).

If \(m=1\), then (3.15) and (3.16) are already verified to hold for all \(j\in \{1,\ldots ,m\}\) by the preceding calculations; hence, we assume that \(m\ge 2\) from now on. Let \(j\in \{1,\ldots ,m-1\}\) be arbitrary but fixed and assume that \(\phi (2j)<-1\). Then,

$$\begin{aligned} \phi (2j+1)=p(2j)+\frac{q(2j)}{\phi (2j)}>p(2j)-q(2j). \end{aligned}$$

Since p maps into \((-1,\infty )\), we obtain

$$\begin{aligned} (1+p(2j+1))\phi (2j+1)>(1+p(2j+1))(p(2j)-q(2j)), \end{aligned}$$

and now Lemma 3.2 implies that

$$\begin{aligned} (1+p(2j+1))\phi (2j+1)>-q(2j+1). \end{aligned}$$

Since \(\phi (2j+1)<0\), the latter inequality yields

$$\begin{aligned} \phi (2j+2)=p(2j+1)+\frac{q(2j+1)}{\phi (2j+1)}<-1. \end{aligned}$$
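The forward step just performed uses only abstract properties of p, q and \(\phi \): \(p>-1\), \(q>0\), the inequality (3.11), and the negativity of the intermediate value \(\phi (2j+1)\). A randomized sanity check (our own sketch with synthetic values, not the actual functions p and q) illustrates that these properties alone force the next iterate below \(-1\):

```python
# Sketch (ours): property-style check of the forward induction step --
# if phi < -1, p1, p2 > -1, q1, q2 > 0, the analogue of (3.11) holds and the
# intermediate value p1 + q1/phi is negative, the next iterate is again < -1.
import random

random.seed(0)

def step(phi, p, q):
    return p + q / phi

checked = 0
for _ in range(10000):
    phi = -1 - random.random() * 3              # phi(2j) < -1
    p1, p2 = (random.uniform(-0.99, 2) for _ in range(2))
    q1 = random.uniform(0.01, 2)
    q2 = (1 + p2)*(q1 - p1) + random.uniform(0.01, 1)   # enforce (3.11)
    if q2 <= 0:
        continue                                # q must be positive
    mid = step(phi, p1, q1)                     # phi(2j+1)
    if mid >= 0:
        continue                                # the proof uses phi(2j+1) < 0
    checked += 1
    assert step(mid, p2, q2) < -1               # phi(2j+2) < -1
```

The backward step from \(\phi (2j-1)>-1\) to \(\phi (2j-3)>-1\) below admits an analogous check.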

Finally, let \(j\in \{2,\ldots ,m\}\) be arbitrary but fixed and assume that \(\phi (2j-1)>-1\). We have

$$\begin{aligned} \frac{1}{\phi (2j-2)}=\frac{1}{q(2j-2)}(\phi (2j-1)-p(2j-2))>-\frac{1}{q(2j-2)}(1+p(2j-2)), \end{aligned}$$

so

$$\begin{aligned} 1+p(2j-2)>-\frac{q(2j-2)}{\phi (2j-2)}. \end{aligned}$$

Since

$$\begin{aligned} 0>\phi (2j-2)=p(2j-3)+\frac{q(2j-3)}{\phi (2j-3)}, \end{aligned}$$

we can conclude that

$$\begin{aligned} (1+p(2j-2))\left( p(2j-3)+\frac{q(2j-3)}{\phi (2j-3)}\right) <-q(2j-2). \end{aligned}$$

Now we apply Lemma 3.2 again and obtain

$$\begin{aligned}&(1+p(2j-2))\left( p(2j-3)+\frac{q(2j-3)}{\phi (2j-3)}\right) \\&\quad <(1+p(2j-2))(p(2j-3)-q(2j-3)). \end{aligned}$$

Since p maps into \((-1,\infty )\), this shows that

$$\begin{aligned} p(2j-3)+\frac{q(2j-3)}{\phi (2j-3)}<p(2j-3)-q(2j-3) \end{aligned}$$

or, equivalently, \(\phi (2j-3)>-1\), which finishes the induction. Hence, (3.15) and (3.16) hold for all \(j\in \{1,\ldots ,m\}\) (for every \(m\ge 1\)). Combining this with the positivity of all numbers \((-1)^j g_R^+(m,m+s;s+j)\) (see above) and with (3.4), we conclude that all

$$\begin{aligned}&g_T(2m+1,2m+2s+1;2s+2j)\\&\quad =a_{2s+2j-1}^T g_R^+(m,m+s;s+j-1)+c_{2s+2j+1}^T g_R^+(m,m+s;s+j)\\&\quad =a_{2s+2j-1}^T\cdot \underbrace{(-1)^{j-1}g_R^+(m,m+s;s+j-1)}_{>0}\cdot \underbrace{(-1)^{j-1}(1+\phi (j))}_{>0}, \end{aligned}$$

\(j\in \{1,\ldots ,2m\}\), are positive. Since the positivity of \(g_T(2m+1,2m+2s+1;2s)\) and \(g_T(2m+1,2m+2s+1;4m+2s+2)\) is clear, the proof is complete. \(\square \)