Abstract
In the theory of orthogonal polynomials, as well as in its intersection with harmonic analysis, it is an important problem to decide whether a given orthogonal polynomial sequence \((P_n(x))_{n\in \mathbb {N}_0}\) satisfies nonnegative linearization of products, i.e., the product of any two \(P_m(x),P_n(x)\) is a conical combination of the polynomials \(P_{|m-n|}(x),\ldots ,P_{m+n}(x)\). Since the coefficients in the arising expansions are often of cumbersome structure or not explicitly available, such considerations are generally very nontrivial. Gasper (Can J Math 22:582–593, 1970) was able to determine the set V of all pairs \((\alpha ,\beta )\in (-1,\infty )^2\) for which the corresponding Jacobi polynomials \((R_n^{(\alpha ,\beta )}(x))_{n\in \mathbb {N}_0}\), normalized by \(R_n^{(\alpha ,\beta )}(1)\equiv 1\), satisfy nonnegative linearization of products. Szwarc (Inzell Lectures on Orthogonal Polynomials, Adv. Theory Spec. Funct. Orthogonal Polynomials, vol 2, Nova Sci. Publ., Hauppauge, NY pp 103–139, 2005) asked to solve the analogous problem for the generalized Chebyshev polynomials \((T_n^{(\alpha ,\beta )}(x))_{n\in \mathbb {N}_0}\), which are the quadratic transformations of the Jacobi polynomials and orthogonal w.r.t. the measure \((1-x^2)^{\alpha }|x|^{2\beta +1}\chi _{(-1,1)}(x)\,\mathrm {d}x\). In this paper, we give the solution and show that \((T_n^{(\alpha ,\beta )}(x))_{n\in \mathbb {N}_0}\) satisfies nonnegative linearization of products if and only if \((\alpha ,\beta )\in V\), so the generalized Chebyshev polynomials share this property with the Jacobi polynomials. Moreover, we reconsider the Jacobi polynomials themselves, simplify Gasper’s original proof and characterize strict positivity of the linearization coefficients. Our results can also be regarded as sharpenings of Gasper’s result.
1 Introduction
1.1 Motivation
In the theory of orthogonal polynomials and special functions, it is of special interest under which conditions (general or referring to a specific class of polynomials) a suitably normalized orthogonal polynomial sequence \((P_n(x))_{n\in {\mathbb {N}}_0}\subseteq {\mathbb {R}}[x]\) satisfies the “nonnegative linearization of products” property, i.e., the product of any two polynomials \(P_m(x),P_n(x)\) is contained in the conical hull of \(\{P_k(x):k\in {\mathbb {N}}_0\}\). In other words, nonnegative linearization of products means that the linearization coefficients appearing in the (Fourier) expansions of \(P_m(x)P_n(x)\) w.r.t. the basis \(\{P_k(x):k\in {\mathbb {N}}_0\}\) are always nonnegative. One reason for the intense study of this property, and for the extensive corresponding literature, is a fruitful relation to harmonic analysis, which will be briefly recalled below.
Given a specific sequence \((P_n(x))_{n\in {\mathbb {N}}_0}\), deciding whether nonnegative linearization of products is satisfied or not may be quite difficult, however: in many cases, the aforementioned linearization coefficients are not explicitly known, or explicit representations are of involved, cumbersome or inappropriate structure. In a series of papers starting with [31] and extending earlier work of Askey [2], Szwarc et al. have provided some general criteria that can be helpful. However, to our knowledge, there is no general criterion which is strong enough to cover the full parameter range for which the Jacobi polynomials
$$\begin{aligned} R_n^{(\alpha ,\beta )}(x)={}_2F_1\!\left( -n,n+\alpha +\beta +1;\alpha +1;\frac{1-x}{2}\right) \end{aligned}$$
[21, (9.8.1)] satisfy nonnegative linearization of products. Moreover, we are not aware of an explicit representation of the corresponding linearization coefficients which allows one to easily identify all pairs \((\alpha ,\beta )\in (-1,\infty )^2\) such that nonnegative linearization of products is fulfilled.
Note that since \((R_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) is normalized such that \(R_n^{(\alpha ,\beta )}(1)\equiv 1\), one has \(R_n^{(\alpha ,\beta )}(x)=n!P_n^{(\alpha ,\beta )}(x)/(\alpha +1)_n\) if \((P_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) denotes the standard normalization of the Jacobi polynomials. \((R_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) is equivalently given by the orthogonalization measure
$$\begin{aligned} (1-x)^{\alpha }(1+x)^{\beta }\chi _{(-1,1)}(x)\,\mathrm {d}x \end{aligned}$$
(and the normalization \(R_n^{(\alpha ,\beta )}(1)\equiv 1\)) [6, Chapter V 2 (B)] [17, (4.0.2)].
In some more detail, the situation concerning Jacobi polynomials is as follows: starting with the full (positive-definite case) parameter range \((\alpha ,\beta )\in (-1,\infty )^2\) and defining
$$\begin{aligned} a:=\alpha +\beta +1,\qquad b:=\alpha -\beta \end{aligned}$$(1.1)
and a proper subset V of \([-1/2,\infty )\times (-1,\infty )\) via
(see Fig. 1), Gasper showed the following [10, Theorem 1] (or [13, Theorem 3]):
Theorem 1.1
Let \(\alpha ,\beta >-1\). The following are equivalent:
-
(i)
\((R_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) satisfies nonnegative linearization of products, i.e., all \(g_R(m,n;k)\) given by the expansions
$$\begin{aligned} R_m^{(\alpha ,\beta )}(x)R_n^{(\alpha ,\beta )}(x)=\sum _{k=|m-n|}^{m+n}g_R(m,n;k)R_k^{(\alpha ,\beta )}(x) \end{aligned}$$are nonnegative.
-
(ii)
\((\alpha ,\beta )\in V\).
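Since each \(g_R(m,n;k)\) is determined by finite-dimensional linear algebra, Theorem 1.1 can at least be probed in exact arithmetic. The following sketch (an illustration, not part of the original paper) builds \(R_n^{(\alpha ,\beta )}\) from the hypergeometric representation \(R_n^{(\alpha ,\beta )}(x)={}_2F_1(-n,n+\alpha +\beta +1;\alpha +1;(1-x)/2)\) and expands \(R_m(x)R_n(x)\) in the basis \(\{R_k(x)\}\) by back-substitution on degrees:

```python
from fractions import Fraction as F
from math import comb

def jacobi_R(n, al, be):
    # R_n^{(al,be)} in monomial coefficients (ascending), normalized by R_n(1) = 1,
    # via R_n(x) = 2F1(-n, n+al+be+1; al+1; (1-x)/2)
    c, t = [F(0)] * (n + 1), F(1)
    for k in range(n + 1):
        for j in range(k + 1):           # add t * ((1-x)/2)^k
            c[j] += t * comb(k, j) * (-1) ** j / F(2) ** k
        if k < n:                        # next Pochhammer-ratio factor of the 2F1 sum
            t *= F(-n + k) * (n + al + be + 1 + k) / ((al + 1 + k) * (k + 1))
    return c

def linearization(m, n, al, be):
    # g_R(m,n;k) for k = 0, ..., m+n, by expanding R_m R_n in the basis {R_k}
    basis = [jacobi_R(k, al, be) for k in range(m + n + 1)]
    prod = [F(0)] * (m + n + 1)
    for i, p in enumerate(basis[m]):
        for j, q in enumerate(basis[n]):
            prod[i + j] += p * q
    g = [F(0)] * (m + n + 1)
    for k in range(m + n, -1, -1):       # back-substitution on degrees
        g[k] = prod[k] / basis[k][k]
        for j in range(k + 1):
            prod[j] -= g[k] * basis[k][j]
    return g

# (alpha, beta) = (0, 0) lies in V (Legendre case): all coefficients nonnegative
assert linearization(1, 1, F(0), F(0)) == [F(1, 3), F(0), F(2, 3)]
# (alpha, beta) = (-1/2, 1/2) has b = alpha - beta < 0: g_R(1,1;1) < 0
assert linearization(1, 1, F(-1, 2), F(1, 2))[1] == F(-1)
```

The second check reflects the necessity argument recalled in Sect. 1.3, where \(b<0\) forces \(g_R(1,1;1)<0\).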
Although Theorem 1.1 can be regarded as a “classical result” nowadays, it is still of considerable interest and was used in the recent publications [4] (dealing with certain strictly positive definite functions) and [15] (dealing with semigroups defined by Fourier-Jacobi series), for instance. Also [5], dealing with spherical codes, uses [10].
In this paper, we find an analogue of Gasper’s result Theorem 1.1 for the class of generalized Chebyshev polynomials (Theorem 3.2), which are the quadratic transformations of the Jacobi polynomials; for all \(\alpha ,\beta >-1\), the sequence of generalized Chebyshev polynomials \((T_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) is given by
$$\begin{aligned} T_{2n}^{(\alpha ,\beta )}(x)=R_n^{(\alpha ,\beta )}(2x^2-1), \end{aligned}$$(1.3)
$$\begin{aligned} T_{2n+1}^{(\alpha ,\beta )}(x)=x\,R_n^{(\alpha ,\beta +1)}(2x^2-1) \end{aligned}$$(1.4)
[6, Chapter V 2 (G)] or, equivalently, by the orthogonalization measure
$$\begin{aligned} (1-x^2)^{\alpha }|x|^{2\beta +1}\chi _{(-1,1)}(x)\,\mathrm {d}x \end{aligned}$$
and the normalization \(T_n^{(\alpha ,\beta )}(1)\equiv 1\) [6, Chapter V 2 (G)] [17, (4.0.2)]. This solves a problem posed in [32, Section 5] by Szwarc, who asked to determine the parameter range for which these polynomials satisfy nonnegative linearization of products. Since our result will immediately imply the nontrivial direction of Gasper’s result Theorem 1.1, it can also be regarded as a sharpening of Theorem 1.1.
Moreover, we shall characterize strict positivity of the linearization coefficients \(g_R(m,n;k)\) (Theorem 2.1); an analogous result for the generalized Chebyshev polynomials cannot exist due to symmetry. On the one hand, this characterization will immediately imply the nontrivial direction of Gasper’s result Theorem 1.1, too. On the other hand, our proof of positive linearization is based on Gasper’s approach [10] but is shorter and more elementary, so it can be regarded both as another sharpening and as a simplification. In Gasper’s original proof, the most computational part was establishing the nonnegativity of the coefficients \(g_R(m,n;|m-n|+2)\) and \(g_R(m,n;m+n-2)\) (provided \((\alpha ,\beta )\in V\)). We will get rid of these long computations and provide a more explicit approach. Furthermore, we give characterizations concerning a certain oscillatory behavior of the \(g_R(m,n;k)\).
1.2 Underlying Setting
Let us briefly describe the basic underlying setting: in this paper, we consider sequences \((P_n(x))_{n\in {\mathbb {N}}_0}\subseteq {\mathbb {R}}[x]\) with \(\mathrm {deg}\;P_n(x)=n\) which are orthogonal w.r.t. a probability (Borel) measure \(\mu \) on the real line with \(|\mathrm {supp}\;\mu |=\infty \) and \(\mathrm {supp}\;\mu \subseteq [-1,1]\). Under these conditions, it is well known from the theory of orthogonal polynomials that \((P_n(x))_{n\in \mathbb {N}_0}\) determines \(\mu \) uniquely [6]. Moreover, we assume \((P_n(x))_{n\in {\mathbb {N}}_0}\) to be normalized by \(P_n(1)\equiv 1\). This normalization is always possible because the assumptions on \(\mathrm {supp}\;\mu \) yield that all zeros are (real and) located in \((-1,1)\) (see [6, 17] for standard results on orthogonal polynomials and on corresponding expansions). Orthogonality is then given by
with some function \(h:{\mathbb {N}}_0\rightarrow (0,\infty )\) satisfying \(h(0)=1\).
Under these conditions, nonnegative linearization of products corresponds to the property that the product of any two polynomials \(P_m(x),P_n(x)\) is a convex combination of \(P_{|m-n|}(x),\ldots ,P_{m+n}(x)\), or to the nonnegativity of all linearization coefficients g(m, n; k) defined by the expansions
where \(\sum _{k=|m-n|}^{m+n}g(m,n;k)=1\).
Using (1.5) and (1.6), one clearly has
and \(h(n)=1/g(n,n;0)\). As the assumptions on the support and the normalization yield that the polynomials \(P_n(x)\) have positive leading coefficients, it is also clear that \(g(m,n;m+n)>0\) and \(g(m,n;|m-n|)>0\). In particular, it is well known that \((P_n(x))_{n\in {\mathbb {N}}_0}\) satisfies the three-term recurrence relation \(P_0(x)=1\), \(P_1(x)=(x-b_0)/a_0\),
where \(a_0>0\), \(b_0=1-a_0\) and the sequences \((a_n)_{n\in {\mathbb {N}}},(c_n)_{n\in {\mathbb {N}}}\subseteq (0,\infty )\), \((b_n)_{n\in {\mathbb {N}}}\subseteq {\mathbb {R}}\) satisfy \(a_n+b_n+c_n=1\;(n\in {\mathbb {N}})\).
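Writing the recurrence relation in the standard form \(xP_n(x)=a_nP_{n+1}(x)+b_nP_n(x)+c_nP_{n-1}(x)\;(n\in {\mathbb {N}})\), the case \(m=1\) of the linearization can be read off directly:

```latex
P_1(x)P_n(x)=\frac{x-b_0}{a_0}\,P_n(x)
=\frac{a_n}{a_0}\,P_{n+1}(x)+\frac{b_n-b_0}{a_0}\,P_n(x)+\frac{c_n}{a_0}\,P_{n-1}(x),
```

so \(g(1,n;n+1)=a_n/a_0\), \(g(1,n;n)=(b_n-b_0)/a_0\) and \(g(1,n;n-1)=c_n/a_0\); since \(b_0=1-a_0\) and \(a_n+b_n+c_n=1\), these three coefficients indeed sum to 1, in accordance with the normalization \(\sum _{k}g(1,n;k)=1\).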
Throughout the paper, as in Theorem 1.1 we use additional appropriate subscripts or superscripts when referring to the Jacobi polynomials \((R_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) or generalized Chebyshev polynomials \((T_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\). Moreover, we use an additional superscript “\(+\)” when referring to the sequence \((R_n^{(\alpha ,\beta +1)}(x))_{n\in {\mathbb {N}}_0}\). For instance, there will occur linearization coefficients \(g_R(m,n;k)\), \(g_R^+(m,n;k)\) and \(g_T(m,n;k)\). Observe that a transition from \(\beta \) to \(\beta +1\), which will play a crucial role in this paper, corresponds to a transition from (a, b) to \((a+1,b-1)\) in the notation of (1.1).
In the literature, the nonnegativity of all linearization coefficients g(m, n; k) is sometimes called “property (P)”. For the sake of clarity, we shall say “nonnegative linearization of products” throughout the paper. This property implies that \((a_n)_{n\in {\mathbb {N}}},(c_n)_{n\in {\mathbb {N}}}\subseteq (0,1)\) and \((b_n)_{n\in {\mathbb {N}}}\subseteq [0,1)\). Furthermore, nonnegative linearization of products gives rise to a certain polynomial hypergroup structure, including associated Banach (\(L^1\)-) algebras and the fruitful possibility of applying Gelfand’s theory, which yields a deep and rich harmonic analysis [3, 24]. Hence, nonnegative linearization of products is not only of interest for a better understanding of general or specific orthogonal polynomials, but also has high relevance for functional and abstract harmonic analysis and, in particular, for the theory of Banach algebras. Within such polynomial hypergroups, the classes of Jacobi polynomials and generalized Chebyshev polynomials play a special role concerning product formulas and duality structures [7, 8, 11, 12, 23, 24, 28].
We mention that in this paper the hypergroup context appears only as a kind of additional motivation to study nonnegative linearization of products. In particular, it shows the high relevance for harmonic analysis. The paper can, however, be read without any knowledge of hypergroups. Roughly speaking, a hypergroup is a generalization of a (locally compact) group in which the convolution of two Dirac measures is allowed to be a probability measure that satisfies certain compatibility and non-degeneracy properties but need not be a Dirac measure again (see [3, 18, 25] for precise axioms).
Fig. 1 The set V. The dashed line corresponds to the boundary of \(\Delta \) (see Sect. 1.3)
1.3 Previous Results and Outline of the Paper
Let us come back to the Jacobi and generalized Chebyshev polynomials. Concerning Theorem 1.1, it is not difficult to see that (ii) is necessary for (i). In fact, Gasper has shown that if \(b<0\), then \(g_R(1,1;1)<0\), whereas if \(b\ge 0\) and \((\alpha ,\beta )\notin V\), then \(g_R(2,2;2)<0\) [10]. The implication “(ii) \(\Rightarrow \) (i)” is highly nontrivial, however. The subcase \((\alpha ,\beta )\in \Delta \), where \(\Delta \subsetneq V\) is given by
$$\begin{aligned} \Delta :=\left\{ (\alpha ,\beta )\in (-1,\infty )^2:a\ge 0,\ b\ge 0\right\} \end{aligned}$$
(see Fig. 1), is easier and was already solved in [9]; for the special case \(\alpha \ge \beta \ge -1/2\), Koornwinder gave a less computational proof via addition formulas [22]. Moreover, if \((\alpha ,\beta )\in \Delta \), then the nonnegativity of the \(g_R(m,n;k)\) can be seen from explicit representations in terms of \({}_9F_8\) hypergeometric series given by Rahman [30, (1.7) to (1.9)]. Alternatively, the case \((\alpha ,\beta )\in \Delta \) can also be obtained from one of the aforementioned general criteria of Szwarc [31]. The simplest subcase is \(\alpha =\beta \ge -1/2\) (the ultraspherical polynomials), for which the nonnegativity of the \(g_R(m,n;k)\) follows from Dougall’s formula (see [1, Theorem 6.8.2])
where \(\alpha >-1/2\); for \(\alpha =\beta =-1/2\), the linearization coefficients reduce to 1/2.
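In the degenerate case \(\alpha =\beta =-1/2\), one has \(R_n^{(-1/2,-1/2)}(\cos \theta )=\cos (n\theta )\), and the value 1/2 is just the elementary product-to-sum formula:

```latex
\cos(m\theta)\cos(n\theta)=\tfrac{1}{2}\cos((m+n)\theta)+\tfrac{1}{2}\cos(|m-n|\theta),
\qquad\text{i.e.}\qquad
R_m^{(-1/2,-1/2)}R_n^{(-1/2,-1/2)}
=\tfrac{1}{2}R_{m+n}^{(-1/2,-1/2)}+\tfrac{1}{2}R_{|m-n|}^{(-1/2,-1/2)}.
```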
Besides the original proof given in [10], Gasper found a very different one in [14]. The second proof is based on the continuous q-Jacobi polynomials and explicit corresponding linearization formulas in terms of \({}_{10}\phi _9\) basic hypergeometric series due to Rahman [29]. In the following, we will always refer to the first proof [10].
For our purposes, it will be more convenient to rewrite V (1.2) as
$$\begin{aligned} V=\left\{ (\alpha ,\beta )\in (-1,\infty )^2:a^2+2b^2+3a\ge 3\frac{(a+1)(a+2)}{(a+3)(a+5)}b^2,\ b\ge 0\right\} . \end{aligned}$$(1.8)
The small region \(V\backslash \Delta \) is bounded on the left by a curve c in the \((\alpha ,\beta )\)-plane which has the following properties (cf. Fig. 1): c starts at the point \((\alpha ,\beta )=(-11/8+\sqrt{73}/8,-1)\approx (-0.307,-1)\), approaches the line \(\alpha +\beta +1=0\) tangentially and meets this line at the point \((\alpha ,\beta )=(-1/2,-1/2)\) (which corresponds to the Chebyshev polynomials of the first kind) [10]. The angle between the line \(\beta =-1\) and c can easily be computed and is given by \(\approx 87.6^{\circ }\) (in particular, c cannot be written as \(\beta =f(\alpha )\) with a single function f).
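As a numerical sanity check (not part of the original argument), the stated starting point of c can be tested against the boundary equality corresponding to (2.14), with \(a=\alpha +\beta +1\) and \(b=\alpha -\beta \) as in (1.1):

```python
from math import sqrt, isclose

def edge(alpha, beta):
    # left-hand side minus right-hand side of (2.14): zero on the boundary curve of V
    a, b = alpha + beta + 1, alpha - beta
    return a * a + 2 * b * b + 3 * a - 3 * (a + 1) * (a + 2) * b * b / ((a + 3) * (a + 5))

alpha0 = (-11 + sqrt(73)) / 8            # claimed starting point of c on beta = -1
assert isclose(edge(alpha0, -1.0), 0.0, abs_tol=1e-12)
assert edge(-0.5, -0.5) == 0.0           # point where c meets the line alpha+beta+1 = 0
assert edge(0.0, -1.0) > 0               # a point on beta = -1 to the right of c
```

On the line \(\beta =-1\), the boundary equation reduces to \(4\alpha ^2+11\alpha +3=0\), whose relevant root is exactly \(\alpha =-11/8+\sqrt{73}/8\).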
Despite the more involved arguments required to establish nonnegative linearization of products on \(V\backslash \Delta \), from a harmonic analysis point of view there is no reason to restrict to \(\Delta \) when studying the associated hypergroups or \(L^1\)-algebras; we are not aware of any general advantage that a restriction to \(\Delta \) would bring. For instance, in [19, Theorem 3.1] (or [20, Theorem 3.1]) we have shown that the \(L^1\)-algebra associated with \((R_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\), \((\alpha ,\beta )\in V\), is weakly amenable (i.e., there exist no nonzero bounded derivations into the dual module \(\ell ^{\infty }\), which acts on the \(L^1\)-algebra via convolution [26, 27]) if and only if \(\alpha <0\), and the proof for \(\{(\alpha ,\beta )\in V:a=0\}\subseteq \Delta \) does not differ from the proof for \(V\backslash \Delta \): both cases are traced back to the interior of \(\Delta \) via the same argument using inheritance via homomorphisms. This example also shows that important Banach algebraic features such as amenability properties may vary strongly even within the same specific class of orthogonal polynomials satisfying nonnegative linearization of products. Hence, also for other classes of examples it is desirable to find various—or even all—sequences \((P_n(x))_{n\in {\mathbb {N}}_0}\) such that nonnegative linearization of products holds.
It is an obvious consequence of Theorem 1.1 and (1.3) that \((T_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) cannot satisfy nonnegative linearization of products if \((\alpha ,\beta )\notin V\); moreover, Szwarc has already shown that nonnegative linearization of products is fulfilled for all \((\alpha ,\beta )\in \Delta \) [31, 32]. The special case \(\alpha \ge \beta +1\) was already shown in [24]. The simplest subcase is given by \(\alpha \ge -1/2\wedge \beta =-1/2\), which can be obtained as above via Dougall’s formula; note that \(T_n^{(\alpha ,-1/2)}(x)=R_n^{(\alpha ,\alpha )}(x)\) for all \(n\in {\mathbb {N}}_0\), or in other words: the ultraspherical polynomials are the common subclass of the Jacobi and the generalized Chebyshev polynomials.
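The quadratic transformation (1.3), (1.4) can be made concrete in exact arithmetic. The following sketch (an illustration, not code from the paper) builds \(T_n^{(\alpha ,\beta )}\) from \(T_{2n}^{(\alpha ,\beta )}(x)=R_n^{(\alpha ,\beta )}(2x^2-1)\) and \(T_{2n+1}^{(\alpha ,\beta )}(x)=xR_n^{(\alpha ,\beta +1)}(2x^2-1)\), using the hypergeometric representation of \(R_n^{(\alpha ,\beta )}\), and checks the coincidence \(T_n^{(\alpha ,-1/2)}=R_n^{(\alpha ,\alpha )}\) just mentioned:

```python
from fractions import Fraction as F
from math import comb

def jacobi_R(n, al, be):
    # R_n^{(al,be)} in monomial coefficients (ascending), normalized by R_n(1) = 1
    c, t = [F(0)] * (n + 1), F(1)
    for k in range(n + 1):
        for j in range(k + 1):
            c[j] += t * comb(k, j) * (-1) ** j / F(2) ** k
        if k < n:
            t *= F(-n + k) * (n + al + be + 1 + k) / ((al + 1 + k) * (k + 1))
    return c

def quad_sub(p):
    # coefficients of p(2x^2 - 1) from coefficients of p(y)
    out = [F(0)] * (2 * len(p) - 1)
    for j, cj in enumerate(p):
        for i in range(j + 1):
            out[2 * i] += cj * comb(j, i) * 2 ** i * (-1) ** (j - i)
    return out

def cheb_T(n, al, be):
    # generalized Chebyshev polynomials via (1.3) and (1.4)
    if n % 2 == 0:
        return quad_sub(jacobi_R(n // 2, al, be))
    return [F(0)] + quad_sub(jacobi_R(n // 2, al, be + 1))

# beta = -1/2 collapses to the ultraspherical case: T_n^{(al,-1/2)} = R_n^{(al,al)}
for n in range(6):
    assert cheb_T(n, F(0), F(-1, 2)) == jacobi_R(n, F(0), F(0))
assert cheb_T(2, F(1, 2), F(-1, 2)) == jacobi_R(2, F(1, 2), F(1, 2))
```

For \(\alpha =0\), the even-index check reproduces the Legendre polynomials.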
In Theorem 3.2, we will obtain that \((T_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) satisfies nonnegative linearization of products if and only if \((\alpha ,\beta )\in V\). Hence, the generalized Chebyshev polynomials share this property with the Jacobi polynomials. Having in mind the interesting structure of (1.4) (where \(\beta +1\) instead of \(\beta \) appears on the right-hand side), we will also precisely characterize the pairs \((\alpha ,\beta )\in (-1,\infty )^2\) for which all \(g_T(m,n;k)\) with at least one odd entry m, n are nonnegative (Theorem 3.1), and we will describe the geometry of the resulting set \(V^{\prime }\supsetneq V\).
The main results and proofs are given in Sect. 2 (Jacobi polynomials) and Sect. 3 (generalized Chebyshev polynomials). At several stages, our arguments are based on appropriate decompositions of multivariate polynomials. To find such decompositions (more precisely, appropriate nested sums of suitable factorizations), we also used computer algebra systems (Maple). However, the final proofs can be understood without any computer usage.
2 Linearization of the Product of Jacobi Polynomials: Sharpening and Simplification of Gasper’s Result
Let \(\alpha ,\beta >-1\), and let a, b be defined as in Sect. 1. \((R_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) satisfies the three-term recurrence relation \(R_0^{(\alpha ,\beta )}(x)=1\), \(R_1^{(\alpha ,\beta )}(x)=(x-b_0^R)/a_0^R\),
with
[9, (4)]. It is well known that
$$\begin{aligned} R_n^{(\alpha ,\beta )}(-x)=R_n^{(\alpha ,\beta )}(-1)R_n^{(\beta ,\alpha )}(x),\qquad R_n^{(\alpha ,\beta )}(-1)=(-1)^n\frac{(\beta +1)_n}{(\alpha +1)_n} \end{aligned}$$(2.3)
[17, (4.1.4), (4.1.6)].
One of our central tools will be a recurrence relation for the \(g_R(m,n;k)\) which is taken from [10] and relies on earlier work of Hylleraas [16]. Let \(n\ge m\ge 1\); following [10], we use the more convenient notation and write
$$\begin{aligned} n=m+s,\qquad k=s+j \end{aligned}$$
with \(s\in {\mathbb {N}}_0\) and \(j\in \{0,\ldots ,2m\}\). [10, (2.1)] states that the linearization coefficients are linked to each other via the following recursion: for \(1\le j\le 2m-1\), one has
where \(\theta (m,m+s;.),\iota (m,m+s;.),\kappa (m,m+s;.):\{1,\ldots ,2m-1\}\rightarrow {\mathbb {R}}\) read
Moreover, one has
and
[10, (2.2) to (2.4), (2.9)]. The following auxiliary result deals with the canonical continuation of the coefficient function \(\iota (m,m+s;.)\) to \([1,2m-1]\) and can be seen from [9, Section 2] and [10, Section 2]:
Lemma 2.1
Let \((\alpha ,\beta )\in V\) with \(\alpha \ne \beta \), and let for \(m\in {\mathbb {N}}\) and \(s\in {\mathbb {N}}_0\) the function \(\iota (m,m+s;.):[1,2m-1]\rightarrow {\mathbb {R}}\) be defined by (2.6). Then, \(\iota (m,m+s;.)\) has at most one zero. Moreover, if \(m\ge 2\) or \(a\ge 0\), then \(\iota (m,m+s;1)\ge 0\).
There are several ways to prove Lemma 2.1. The proofs given in [9, 10] rely on Descartes’ rule of signs. We found elementary variants which avoid Descartes’ rule of signs completely (based, for instance, on the classical mean value theorem). We omit the corresponding details, however.
The proof of Theorem 2.1, which is the main result of this section, will essentially rely on Lemma 2.1. Concerning the functions \(\theta (m,m+s;.)\) and \(\kappa (m,m+s;.)\), we will only need that
and
which is an obvious consequence of (2.5) and (2.7) and was also used in [10].
We now give two characterizations. The first deals with positivity of the linearization coefficients \(g_R(m,n;k)\) and can be regarded as a sharpening of Theorem 1.1 because the nontrivial direction “(ii) \(\Rightarrow \) (i) (Theorem 1.1)” is implied by “(ii) \(\Rightarrow \) (i) (Theorem 2.1)” via continuity. Our proof is a—considerably less computational—modification of Gasper’s approach [10]. The second characterization follows from the first and deals with a certain oscillatory behavior of the \(g_R(m,n;k)\). It will play an important role for the proof of Theorem 3.1 on generalized Chebyshev polynomials below.
Theorem 2.1
Let \(\alpha ,\beta >-1\). The following are equivalent:
-
(i)
All \(g_R(m,n;k)\) are positive.
-
(ii)
\((\alpha ,\beta )\) is located in the interior of V (denoted by \(V^{\circ }\) in the following), i.e.,
$$\begin{aligned} a^2+2b^2+3a>3\frac{(a+1)(a+2)}{(a+3)(a+5)}b^2 \end{aligned}$$(2.14)and
$$\begin{aligned} b>0. \end{aligned}$$(2.15)
Corollary 2.1
Let \(\alpha ,\beta >-1\), and let \(\widetilde{g_R}(m,n;k)\) denote the linearization coefficients belonging to the sequence \((R_n^{(\beta ,\alpha )}(x))_{n\in {\mathbb {N}}_0}\). Then, the following hold:
-
(i)
All numbers \((-1)^{m+n+k}\widetilde{g_R}(m,n;k)\) are nonnegative if and only if \((\alpha ,\beta )\in V\).
-
(ii)
All numbers \((-1)^{m+n+k}\widetilde{g_R}(m,n;k)\) are positive if and only if \((\alpha ,\beta )\in V^{\circ }\).
Proof (Theorem 2.1)
We preliminarily note that \(V^{\circ }\) can indeed be characterized as the set of all \((\alpha ,\beta )\in (-1,\infty )^2\) satisfying the strict inequalities (2.14) and (2.15), which can be seen as follows: let \(\varphi ,\psi :(-1,\infty )^2\rightarrow {\mathbb {R}}\) be defined by
$$\begin{aligned} \varphi (\alpha ,\beta ):=a^2+2b^2+3a-3\frac{(a+1)(a+2)}{(a+3)(a+5)}b^2 \end{aligned}$$
and
$$\begin{aligned} \psi (\alpha ,\beta ):=b; \end{aligned}$$
then
$$\begin{aligned} V=\left\{ (\alpha ,\beta )\in (-1,\infty )^2:\varphi (\alpha ,\beta )\ge 0,\ \psi (\alpha ,\beta )\ge 0\right\} , \end{aligned}$$
and we have to show that \(V^{\circ }\) coincides with the set
$$\begin{aligned} {\widetilde{V}}:=\left\{ (\alpha ,\beta )\in (-1,\infty )^2:\varphi (\alpha ,\beta )>0,\ \psi (\alpha ,\beta )>0\right\} . \end{aligned}$$
It is clear that \({\widetilde{V}}\) is a subset of \(V^{\circ }\). Moreover, if there were an element \((\alpha _0,\beta _0)\in V^{\circ }\) with \(\varphi (\alpha _0,\beta _0)=0\), then \(\varphi \) would in particular attain a local minimum at \((\alpha _0,\beta _0)\), so the gradient of \(\varphi \) would have to vanish at \((\alpha _0,\beta _0)\). However, every \((\alpha ,\beta )\in V\) satisfies
so the gradient of \(\varphi \) does not vanish on V. In the same way, one sees that \(\psi (\alpha ,\beta )>0\) for all \((\alpha ,\beta )\in V^{\circ }\). Putting everything together, we obtain that indeed \(V^{\circ }={\widetilde{V}}\).
We establish the easy direction “(i) \(\Rightarrow \) (ii)” in a similar way as in [10]: if \(b\le 0\), then (2.1) and (2.2) (or also (2.8) and (2.10)) show that
if \(b>0\) but \((\alpha ,\beta )\) is not located in the interior of V, then the equations (2.4) to (2.8) and (2.10) yield
We now come to the interesting direction “(ii) \(\Rightarrow \) (i)”. Our proof is based on Gasper’s approach [10, Section 2] but simpler and shorter. We will make use of the equations [10, (2.1) to (2.4), (2.9)], which correspond to the equations (2.4) to (2.11). Moreover, following Gasper we will make use of (2.12), Lemma 2.1 and (2.13) concerning the functions \(\theta (m,m+s;.)\), \(\iota (m,m+s;.)\) and \(\kappa (m,m+s;.)\). We will also use an induction argument which is motivated by Gasper’s approach [10, bottom of p. 587]. However, we will not make use of the painful argument which relies on [10, (2.5), (2.8), (2.10), (2.11)].
Let \((\alpha ,\beta )\in V^{\circ }\), let \(m\in {\mathbb {N}}\) and let \(s\in {\mathbb {N}}_0\). We have to show that \(g_R(m,m+s;s+j)>0\) for all \(j\in \{0,\ldots ,2m\}\). Starting similarly to [10], we use “two-sided induction” and proceed as follows: (2.8) to (2.11) yield \(g_R(m,m+s;s)>0\), \(g_R(m,m+s;s+1)>0\), \(g_R(m,m+s;s+2m)>0\) and \(g_R(m,m+s;s+2m-1)>0\). Of course, the positivity of \(g_R(m,m+s;s)\) and \(g_R(m,m+s;s+2m)\) is also clear from general results, cf. Sect. 1. If \(m=1\), we are already done (this case is also clear from (2.1) and (2.2)). Hence, assume that \(m\ge 2\) from now on; it then remains to show that \(g_R(m,m+s;s+j)>0\) for all \(j\in \{2,\ldots ,2m-2\}\).
(2.12) and (2.13) yield \(\theta (m,m+s;1)>0\) and \(\kappa (m,m+s;2m-1)>0\), and via the equations (2.4) to (2.7) and (2.10), (2.11) we compute
and
Observe that \(b^2+a>0\): if \(a\le 0\), this is a consequence of the decomposition
Therefore, the right-hand sides of (2.16) and (2.17) are greater than or equal to \((a^2+2b^2+3a)(a+3)(a+5)-3(a+1)(a+2)b^2\), so these equations imply that \(g_R(m,m+s;s+2)>0\) and \(g_R(m,m+s;s+2m-2)>0\). We note at this stage that (2.16) and (2.17) allow us to obtain the positivity of \(g_R(m,m+s;s+2)\) and \(g_R(m,m+s;s+2m-2)\) in a much faster way than Gasper’s estimates via [10, (2.5), (2.8), (2.10), (2.11)] (which establish the nonnegativity of \(g_R(m,m+s;s+2)\) and \(g_R(m,m+s;s+2m-2)\) under the assumption \((\alpha ,\beta )\in V\)). This is our essential simplification of Gasper’s approach.
As we are done if \(m=2\), we assume that \(m\ge 3\) from now on. The remaining proof works similarly as in [10] again. We first apply Lemma 2.1 and obtain the existence of an \(N\in \{1,\ldots ,2m-1\}\) such that \(\iota (m,m+s;j)\ge 0\) for \(1\le j\le N\) and \(\iota (m,m+s;j)<0\) for those \(1\le j\le 2m-1\) which satisfy \(j\ge N+1\). We then make use of (2.12) and (2.13) and distinguish two cases:
Case 1: \(N\ge 3\). Then, (2.4) and induction yield
This shows the positivity of \(g_R(m,m+s;s+3),\ldots ,g_R(m,m+s;s+N)\).
Case 2: \(N\le 2m-3\). In this case, (2.4) and induction yield
which establishes the positivity of \(g_R(m,m+s;s+N),\ldots ,g_R(m,m+s;s+2m-3)\).
If \(N\le 2\), then \(N<2m-3\) and the positivity of
is a consequence of Case 2. If \(N\ge 2m-2\), then \(N>3\) and the positivity of \(g_R(m,m+s;s+3),\ldots ,g_R(m,m+s;s+2m-3)\) is a consequence of Case 1. Finally, if \(3\le N\le 2m-3\), then the combination of both cases yields the positivity of \(g_R(m,m+s;s+3),\ldots ,g_R(m,m+s;s+2m-3)\). \(\square \)
Our argument via the central equations (2.16) and (2.17) in the initial step above shows a typical aspect of the strategy (this aspect will also be important in Sect. 3): as soon as one knows decompositions like those in (2.16), (2.17) which allow one to see the signs of the relevant parts, these decompositions can easily (yet more or less tediously) be verified by comparing the expansions, or by comparing common zeros, leading coefficients and so on. Hence, the actual task is finding such decompositions.
Remark 2.1
As was similarly observed in [9, 10], the proof of the direction “(ii) \(\Rightarrow \) (i)” considerably simplifies in the special case \(a>0\), i.e., for \((\alpha ,\beta )\in \Delta ^{\circ }\subsetneq V^{\circ }\). On the one hand, for \(a>0\) the functions \(\theta (m,m+s;.)\) and \(\kappa (m,m+s;.)\) are positive on their full domains (cf. (2.5) and (2.7)). Hence, one can avoid the computations (and appropriate decompositions) of \(g_R(m,m+s;s+2)\) and \(g_R(m,m+s;s+2m-2)\). On the other hand, the proof of the important ingredient Lemma 2.1 is simpler for \(a>0\) [9, 10]. If \((\alpha ,\beta )\) is located in the interior of \(\Delta \), then the positivity of all \(g_R(m,n;k)\) can also be seen via Rahman’s formulas (A.1) and (A.2). The simplest subcase is given by \(\alpha =\beta +1\), for which the positivity of the \(g_R(m,n;k)\) can be seen via a very simple explicit formula [16].
Proof (Corollary 2.1)
As a consequence of (2.3), the linearization coefficients are connected to each other via
Hence, the assertions are immediate consequences of Theorems 1.1 and 2.1, cf. also the remarks at the end of [10, Section 1]. \(\square \)
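Behind this sign relation lies the classical reflection \(P_n^{(\alpha ,\beta )}(-x)=(-1)^nP_n^{(\beta ,\alpha )}(x)\); in the normalization \(R_n(1)\equiv 1\), it reads \(R_n^{(\beta ,\alpha )}(x)=R_n^{(\alpha ,\beta )}(-x)/R_n^{(\alpha ,\beta )}(-1)\), and substituting this into the defining expansions yields

```latex
\widetilde{g_R}(m,n;k)
=\frac{R_k^{(\alpha,\beta)}(-1)}{R_m^{(\alpha,\beta)}(-1)\,R_n^{(\alpha,\beta)}(-1)}\,
g_R(m,n;k),
\qquad
\operatorname{sgn} R_j^{(\alpha,\beta)}(-1)=(-1)^j,
```

so the prefactor has the sign \((-1)^{k-m-n}=(-1)^{m+n+k}\).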
3 Linearization of the Product of Generalized Chebyshev Polynomials: Solution to a Problem of Szwarc
Let \(\alpha ,\beta >-1\) again, and let a, b be defined as in Sect. 1. In the following, we use the notation and auxiliary functions of Sect. 2. \((T_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) satisfies the recurrence relation \(T_0^{(\alpha ,\beta )}(x)=1\), \(T_1^{(\alpha ,\beta )}(x)=x\),
with \((a_n^T)_{n\in {\mathbb {N}}},(c_n^T)_{n\in {\mathbb {N}}}\subseteq (0,1)\) given by
[24, 3 (f)]. Using (1.3), (1.4), (3.1) and (3.2), one can relate the \(g_T(m,n;k)\) to the \(g_R(m,n;k)\) and \(g_R^+(m,n;k)\). For instance, this was done in [24]: one has
and
moreover, \(g_T(m,n;k)=0\) if \(m+n-k\) is odd (trivial consequence of symmetry), and \(g_T(2m+1,2n;2k+1)\) and \(g_T(2m,2n+1;2k+1)\) relate to (3.4) via
which is a consequence of (1.7), and
We deal with the following problems:
-
(A)
Szwarc’s problem, cf. Sect. 1: find all pairs \((\alpha ,\beta )\in (-1,\infty )^2\) such that \((T_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) satisfies nonnegative linearization of products, i.e., such that all \(g_T(m,n;k)\) are nonnegative.
-
(B)
Find all pairs \((\alpha ,\beta )\in (-1,\infty )^2\) such that all \(g_T(m,n;k)\) with at least one odd entry m, n are nonnegative.
The pairs \((\alpha ,\beta )\in (-1,\infty )^2\) such that all \(g_T(m,n;k)\) with two even entries m, n are nonnegative are exactly the \((\alpha ,\beta )\in V\), which is an obvious consequence of (3.3) and Theorem 1.1. Hence, it will be interesting to compare the resulting set of (B) to V.
The solutions to (A) and (B) will be given in Theorems 3.2 and 3.1, respectively. We want to motivate these results by establishing two necessary conditions for the pairs \((\alpha ,\beta )\) as in (B): in the following, we show that every such pair \((\alpha ,\beta )\) necessarily fulfills the conditions \(b\ge 0\) and \(a^2+2b^2+3a\ge 0\). More precisely, we show that
-
if \(b<0\), then \(g_T(2m+1,2m+2s+1;2s+2)<0\) for sufficiently large \(m\in {\mathbb {N}}\),
-
if \(a^2+2b^2+3a<0\), then \(g_T(2m+1,2m+1;4)<0\) for sufficiently large \(m\in {\mathbb {N}}\).
Given any \(\alpha ,\beta >-1\) and arbitrary \(m,n\in {\mathbb {N}}\) with \(n\ge m\), we use the notation of the previous sections and compute
via (2.10) and (3.2). Making also use of (2.4), which yields
for \(b\ne 1\), and combining this with (2.5) to (2.7), (2.10) and (3.2), we furthermore obtain
If \(b<0\), then the right-hand side of (3.7) becomes negative for (all) sufficiently large \(m\in {\mathbb {N}}\), whereas
is always positive. Hence, if \(b<0\), then
is negative for sufficiently large \(m\in {\mathbb {N}}\). Since \(g_R^+(m,m+s;s)\) is always positive, the latter yields the negativity of
(3.4) for sufficiently large \(m\in {\mathbb {N}}\).
Now assume that \(a^2+2b^2+3a<0\). On the one hand, one necessarily has \(b<1\) then (because \(a^2+2+3a=(a+1)(a+2)>0\)), so
is always negative. On the other hand, if \(s=0\), then the right-hand side of (3.8) becomes negative for (all) sufficiently large \(m\in {\mathbb {N}}\). Hence,
is positive for sufficiently large \(m\in {\mathbb {N}}\). Since, due to (2.10), \(g_R^+(m,m;1)\) is negative, we obtain the negativity of
(3.4) for sufficiently large \(m\in {\mathbb {N}}\).
Putting everything together, we see that every pair \((\alpha ,\beta )\) as in (B) indeed has to satisfy both \(b\ge 0\) and \(a^2+2b^2+3a\ge 0\). Our following result deals with the converse and shows that these two conditions already characterize (B); the set \(V^{\prime }\) defined in Theorem 3.1 is illustrated in Fig. 2.
Theorem 3.1
Let \(\alpha ,\beta >-1\). The following are equivalent:
-
(i)
For all \(m,n\in {\mathbb {N}}_0\) such that at least one of these numbers is odd, all linearization coefficients \(g_T(m,n;k)\) are nonnegative.
-
(ii)
\((\alpha ,\beta )\in V^{\prime }\), where
$$\begin{aligned} V^{\prime }:=\left\{ (\alpha ,\beta )\in (-1,\infty )^2:a^2+2b^2+3a\ge 0,b\ge 0\right\} \supsetneq V. \end{aligned}$$
If \((\alpha ,\beta )\in V^{\prime }\backslash \Delta \) and \(m,n\in {\mathbb {N}}_0\) are such that at least one of these numbers is odd, and if \(k\in \{|m-n|,\ldots ,m+n\}\) is such that \(m+n-k\) is even, then \(g_T(m,n;k)\) is positive.
Fig. 2 The set \(V^{\prime }\). The dotdashed line and the dashed line correspond to the boundaries of V and \(\Delta \) (see Sect. 1)
The inclusion \(V^{\prime }\supsetneq V\) is clear from the rewritten form of V (1.8) (one can easily find points which show that the inclusion is proper). \(V^{\prime }\) is a subset of \([-1/2,\infty )\times (-1,\infty )\), which can be seen as follows: let \((\alpha ,\beta )\in V^{\prime }\). If \(a\ge 0\), then \((\alpha ,\beta )\in \Delta \subseteq [-1/2,\infty )\times (-1,\infty )\), and if \(a<0\), then
and consequently
This establishes \(V^{\prime }\subseteq [-1/2,\infty )\times (-1,\infty )\); it is clear that the inclusion is proper. Concerning the geometry of \(V^{\prime }\), we note that one obtains the set \(\left\{ (\alpha ,\beta )\in {\mathbb {R}}^2:a^2+2b^2+3a=0\right\} \) by rotating the ellipse
by \(\pi /4\) and shifting the image by \((-5/4,-5/4)^T\) (cf. Fig. 3). The small region \(V^{\prime }\backslash V\) is bounded on the left by a curve \(c^{\prime }\) in the \((\alpha ,\beta )\)-plane which starts at the point \((\alpha ,\beta )=(-1/3,-1)\), approaches the line \(\alpha +\beta +1=0\) tangentially and meets this line at the point \((\alpha ,\beta )=(-1/2,-1/2)\) (cf. also the related set W considered in [10]). The angle between the line \(\beta =-1\) and \(c^{\prime }\) is \(\approx 86.2^{\circ }\) (in particular, \(c^{\prime }\) cannot be written as \(\beta =f(\alpha )\) with a single function f).
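The quantitative claims in this paragraph can be checked directly. The following Python sketch (all helper names are ours) verifies that the boundary curve \(a^2+2b^2+3a=0\) passes through the two named endpoints of \(c^{\prime }\) and recovers the stated angle from the gradient of \(F(\alpha ,\beta ):=a^2+2b^2+3a\):

```python
import math

# Shorthand used throughout the paper: a = alpha + beta + 1, b = alpha - beta.
def F(alpha, beta):
    a, b = alpha + beta + 1, alpha - beta
    return a * a + 2 * b * b + 3 * a

# The boundary curve F = 0 passes through both endpoints of c' named above.
assert abs(F(-1/3, -1)) < 1e-12       # starting point (-1/3, -1)
assert abs(F(-1/2, -1/2)) < 1e-12     # meeting point with alpha + beta + 1 = 0

# Angle between c' and the line beta = -1 at (-1/3, -1): the tangent of the
# implicit curve F = 0 is orthogonal to the gradient of F.
dFdalpha = 2 * (-1/3 - 1 + 1) + 3 + 4 * (-1/3 + 1)   # = 5
dFdbeta  = 2 * (-1/3 - 1 + 1) + 3 - 4 * (-1/3 + 1)   # = -1/3
angle = math.degrees(math.atan2(abs(dFdalpha), abs(dFdbeta)))
print(round(angle, 1))  # 86.2
```

The tangent direction of the implicit curve is obtained by rotating the gradient by \(90^{\circ }\); the computed angle agrees with the value \(\approx 86.2^{\circ }\) quoted above.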
As a consequence of Theorem 3.1, we will obtain our second main result of this section and the answer to (A):
Theorem 3.2
Let \(\alpha ,\beta >-1\). The following are equivalent:
(i) \((T_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) satisfies nonnegative linearization of products, i.e., all \(g_T(m,n;k)\) are nonnegative.
(ii) \((\alpha ,\beta )\in V\).
Comparing Theorems 1.1, 2.1 and 3.2, one may ask whether all \(g_T(m,n;k)\) are positive if \((\alpha ,\beta )\) is located in the interior of V. However, this is not true: just recall that for every choice of \((\alpha ,\beta )\in (-1,\infty )^2\) one has \(g_T(m,n;k)=0\) if \(m+n-k\) is odd.
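These statements can be probed concretely with a small SymPy sketch (all helper names are ours). It builds the generalized Chebyshev polynomials from the normalized Jacobi polynomials via the standard quadratic transformation \(T_{2n}^{(\alpha ,\beta )}(x)=R_n^{(\alpha ,\beta )}(2x^2-1)\), \(T_{2n+1}^{(\alpha ,\beta )}(x)=x\,R_n^{(\alpha ,\beta +1)}(2x^2-1)\), computes linearization coefficients by exact coefficient matching (not by any formula from the paper), and verifies nonnegativity at \((\alpha ,\beta )=(0,0)\in V\), the vanishing for odd \(m+n-k\), and the strict positivity of the odd–odd coefficients at a point of \(V^{\prime }\backslash \Delta \) (cf. Lemma 3.1 below):

```python
import sympy as sp

x = sp.symbols('x')

def R(n, al, be):
    """Normalized Jacobi polynomial R_n^{(al,be)} with R_n(1) = 1."""
    return sp.expand(sp.jacobi(n, al, be, x) / sp.binomial(n + al, n))

def T(n, al, be):
    """Generalized Chebyshev polynomial via the quadratic transformation."""
    y = 2 * x**2 - 1
    if n % 2 == 0:
        return sp.expand(R(n // 2, al, be).subs(x, y))
    return sp.expand(x * R(n // 2, al, be + 1).subs(x, y))

def g_T(m, n, al, be):
    """Linearization coefficients of T_m * T_n w.r.t. {T_k}, k = |m-n|..m+n."""
    prod = sp.expand(T(m, al, be) * T(n, al, be))
    coeffs = {}
    for k in range(m + n, abs(m - n) - 1, -1):
        Tk = T(k, al, be)
        c = sp.nsimplify(prod.coeff(x, k) / Tk.coeff(x, k))
        coeffs[k] = c
        prod = sp.expand(prod - c * Tk)
    assert prod == 0  # lower-order coefficients vanish by orthogonality
    return coeffs

# (a) (alpha, beta) = (0, 0) lies in V: all coefficients are nonnegative.
g = g_T(2, 3, sp.Integer(0), sp.Integer(0))
assert all(c >= 0 for c in g.values())

# (b) g_T(m, n; k) = 0 whenever m + n - k is odd.
assert all(c == 0 for k, c in g.items() if (2 + 3 - k) % 2 == 1)

# (c) A point of V' \ Delta (a < 0 <= b, a^2 + 2b^2 + 3a >= 0): the odd-odd
# coefficients g_T(3, 3; 2j) are strictly positive.
al, be = sp.Rational(-7, 20), sp.Rational(-7, 10)
g = g_T(3, 3, al, be)
assert all(g[k] > 0 for k in (0, 2, 4, 6))
```

Since \(T_k(1)\equiv 1\), the coefficients in each expansion must sum to 1 (evaluate both sides at \(x=1\)), which gives a cheap internal consistency check for such computations.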
We now come to the proofs.
Since Theorem 3.1 implies that the set of all pairs \((\alpha ,\beta )\in (-1,\infty )^2\) such that \((T_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) satisfies nonnegative linearization of products is given by \(V\cap V^{\prime }=V\), Theorem 3.2 follows from Theorem 3.1.
The implication “(i) \(\Rightarrow \) (ii)” of Theorem 3.1 was already established above. In view of Szwarc’s earlier result, which already shows that \((T_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\) satisfies nonnegative linearization of products at least for all \((\alpha ,\beta )\in \Delta \) (cf. Sect. 1), the converse “(ii) \(\Rightarrow \) (i)” is a consequence of the assertion made in the second part of Theorem 3.1.
In view of these observations, and in view of (3.5) and (3.6), Theorems 3.1 and 3.2 trace back to the following lemma:
Lemma 3.1
Let \((\alpha ,\beta )\in V^{\prime }\backslash \Delta \), and let \(m\in {\mathbb {N}}\), \(s\in {\mathbb {N}}_0\). Then, \(g_T(2m+1,2m+2s+1;2s+2j)>0\) for all \(j\in \{0,\ldots ,2m+1\}\).
Due to the positivity of the sequences \((a_n^T)_{n\in {\mathbb {N}}}\) and \((c_n^T)_{n\in {\mathbb {N}}}\), the assertion of Lemma 3.1 is also true for \(m=0\), of course.
Our task is to establish Lemma 3.1, which will be done via Corollary 2.1, another “two-sided induction” argument (cf. the proof of Theorem 2.1) and an auxiliary result. The latter will be needed for the (more involved) induction step.
For the rest of the section, we always assume that \((\alpha ,\beta )\in V^{\prime }\backslash \Delta \) and that \(m\in {\mathbb {N}}\), \(s\in {\mathbb {N}}_0\).
Under these conditions, we have
and
The inequality \(a>-1/3\) follows immediately from
(and can also be found in [10]). The inequality \(b>-a\) was shown above.
We now define two auxiliary functions \(p:\{1,\ldots ,2m-1\}\rightarrow {\mathbb {R}}\), \(q:\{1,\ldots ,2m-1\}\rightarrow (0,\infty )\) by
Concerning well-definedness, observe that \(\theta ^+(m,m+s;.)\) and \(\kappa ^+(m,m+s;.)\) are positive on their full domains, which follows directly from the definitions (2.5), (2.7). Using (2.5) to (2.7) and (3.2), we compute
for all \(j\in \{1,\ldots ,2m-1\}\), where the four functions \(p^{\infty }:{\mathbb {N}}\rightarrow (-1,\infty )\), \(p^{*},q^{\infty },q^{*}:{\mathbb {N}}\rightarrow (0,\infty )\) are given by
Note that the functions \(p^{\infty }\), \(p^{*}\), \(q^{\infty }\) and \(q^{*}\) are independent of m. The superscript “\(\infty \)” is used because \(p^{\infty }\) and \(q^{\infty }\) are just the pointwise limits of p and q as m tends to infinity.
As a first consequence of (3.9) and (3.10), we obtain that p maps into the interval \((-1,\infty )\).
The following lemma is the announced auxiliary result and provides an inequality in p and q which will be central in the proof of Lemma 3.1.
Lemma 3.2
Let \((\alpha ,\beta )\in V^{\prime }\backslash \Delta \) and \(m\ge 2\), \(s\in {\mathbb {N}}_0\). Then, for every \(j\in \{1,\ldots ,2m-2\}\) the inequality
is valid.
Proof
The basic idea is to use (3.9) and (3.10) in order to isolate m in an appropriate way. Let \(j\in \{1,\ldots ,2m-2\}\). We decompose
and compute
Combining (3.13) with (3.12), we obtain
with
We now define
by
and claim that f maps into \((0,\infty )\); once the claim is proven, inequality (3.11) will follow for j via
(3.13) and (3.14). To establish the claim, we first compute
Then, two rather tedious calculations yield
and, for each \(x\ge (j-a+1)/2\),
(as in Sect. 2, verifying such decompositions, which make the positivity directly visible, is easy; the actual difficulty lies in finding them). Obviously, we also have
for every \(x\ge (j-a+1)/2\), so \(f^{\prime }\) maps into \((0,\infty )\). This finishes the proof. \(\square \)
We now come to the proof of Lemma 3.1.
Proof (Lemma 3.1)
As a consequence of Corollary 2.1, all numbers
are positive (observe that \((\beta +1,\alpha )\) is located in the interior of \(\Delta \)). Alternatively, the positivity of the numbers \((-1)^j g_R^+(m,m+s;s+j)\) can be obtained from (2.3) and Rahman’s formula (A.3) (take into account that \(\beta +1>0>\alpha >-1/2\)). Hence, we may define \(\phi :\{1,\ldots ,2m\}\rightarrow (-\infty ,0)\),
As a consequence of (2.4), we have
and obtain the recurrence relation
We now use this recurrence relation and induction to show that
and
for all \(j\in \{1,\ldots ,m\}\). As a consequence of (3.8) and
we see that \(\phi (2)<-1\). Moreover, making use of (2.4), which yields
and combining this with (2.5) to (2.7), (2.11) and (3.2), we obtain that
Therefore, we obtain that \(\phi (2m-1)>-1\).
If \(m=1\), then (3.15) and (3.16) are already verified to hold for all \(j\in \{1,\ldots ,m\}\) by the preceding calculations; hence, we assume that \(m\ge 2\) from now on. Let \(j\in \{1,\ldots ,m-1\}\) be arbitrary but fixed and assume that \(\phi (2j)<-1\). Then,
Since p maps into \((-1,\infty )\), we obtain
and now Lemma 3.2 implies that
Since \(\phi (2j+1)<0\), the latter equation yields
Finally, let \(j\in \{2,\ldots ,m\}\) be arbitrary but fixed and assume that \(\phi (2j-1)>-1\). We have
so
Since
we can conclude that
Now we apply Lemma 3.2 again and obtain
Since p maps into \((-1,\infty )\), this shows that
or, equivalently, \(\phi (2j-3)>-1\), which finishes the induction. Hence, (3.15) and (3.16) are established to hold for all \(j\in \{1,\ldots ,m\}\) (for every \(m\ge 1\)). Combining this with the positivity of all numbers \((-1)^j g_R^+(m,m+s;s+j)\) (see above) and (3.4), we can conclude that all
\(j\in \{1,\ldots ,2m\}\), are positive. Since the positivity of \(g_T(2m+1,2m+2s+1;2s)\) and \(g_T(2m+1,2m+2s+1;4m+2s+2)\) is clear, the proof is complete. \(\square \)
References
Andrews, G.E., Askey, R., Roy, R.: Special Functions, Encyclopedia of Mathematics and Its Applications, vol. 71. Cambridge University Press, Cambridge (1999)
Askey, R.: Linearization of the product of orthogonal polynomials. In: Problems in Analysis (Papers Dedicated to Salomon Bochner, 1969), pp. 131–138. Princeton University Press, Princeton (1970)
Bloom, W.R., Heyer, H.: Harmonic Analysis of Probability Measures on Hypergroups, De Gruyter Studies in Mathematics, vol. 20. Walter de Gruyter & Co., Berlin (1995)
Bonfim, R.N., Guella, J.C., Menegatto, V.A.: Strictly positive definite functions on compact two-point homogeneous spaces: the product alternative. SIGMA Symmetry Integrability Geom. Methods Appl. 14, Paper No. 112, 14 pages (2018)
Boyvalenkov, P.G., Dragnev, P.D., Hardin, D.P., Saff, E.B., Stoyanova, M.M.: On spherical codes with inner products in a prescribed interval. Des. Codes Cryptogr. 87(2–3), 299–315 (2019)
Chihara, T.S.: An Introduction to Orthogonal Polynomials, Mathematics and Its Applications, vol. 13. Gordon and Breach Science Publishers, New York (1978)
Connett, W.C., Markett, C., Schwartz, A.L.: Jacobi Polynomials and Related Hypergroup Structures. Probability Measures on Groups X (Oberwolfach, 1990), pp. 45–81. Plenum, New York (1991)
Connett, W.C., Schwartz, A.L.: Product formulas, hypergroups, and the Jacobi polynomials. Bull. Am. Math. Soc. N.S. 22(1), 91–96 (1990)
Gasper, G.: Linearization of the product of Jacobi polynomials I. Can. J. Math. 22, 171–175 (1970)
Gasper, G.: Linearization of the product of Jacobi polynomials II. Can. J. Math. 22, 582–593 (1970)
Gasper, G.: Positivity and the convolution structure for Jacobi series. Ann. Math. (2) 93, 112–118 (1971)
Gasper, G.: Banach algebras for Jacobi series and positivity of a kernel. Ann. Math. (2) 95, 261–280 (1972)
Gasper, G.: Positivity and special functions, Theory and application of special functions (Proc. Advanced Sem., Math. Res. Center, Univ. Wisconsin, Madison, Wis., 1975), pp. 375–433. Math. Res. Center, Univ. Wisconsin, Publ. No. 35 (1975)
Gasper, G.: A convolution structure and positivity of a generalized translation operator for the continuous \(q\)-Jacobi polynomials, Conference on harmonic analysis in honor of Antoni Zygmund, Vol. I, II (Chicago, Ill., 1981), Wadsworth Math. Ser., Wadsworth, Belmont, CA, pp. 44–59 (1983)
Guella, J.C., Menegatto, V.A.: A limit formula for semigroups defined by Fourier–Jacobi series. Proc. Am. Math. Soc. 146(5), 2027–2038 (2018)
Hylleraas, E.A.: Linearization of products of Jacobi polynomials. Math. Scand. 10, 189–200 (1962)
Ismail, M.E.H.: Classical and Quantum Orthogonal Polynomials in One Variable, Encyclopedia of Mathematics and Its Applications, vol. 98. Cambridge University Press, Cambridge (2009). With two chapters by Walter Van Assche and a foreword by Richard A. Askey; reprint of the 2005 original
Jewett, R.I.: Spaces with an abstract convolution of measures. Adv. Math. 18(1), 1–101 (1975)
Kahler, S.: Orthogonal polynomials and point and weak amenability of \(\ell ^1\)-algebras of polynomial hypergroups. Constr. Approx. 42(1), 31–63 (2015). https://doi.org/10.1007/s00365-014-9246-2
Kahler, S.: Characterizations of Orthogonal Polynomials and Harmonic Analysis on Polynomial Hypergroups. Dissertation, Technische Universität München (2016). http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:91-diss-20160530-1289608-1-3
Koekoek, R., Lesky, P.A., Swarttouw, R.F.: Hypergeometric Orthogonal Polynomials and Their \(q\)-Analogues, Springer Monographs in Mathematics. Springer, Berlin (2010). With a foreword by Tom H. Koornwinder
Koornwinder, T.: Positivity proofs for linearization and connection coefficients of orthogonal polynomials satisfying an addition formula. J. Lond. Math. Soc. (2) 18(1), 101–114 (1978)
Laine, T.P.: The product formula and convolution structure for the generalized Chebyshev polynomials. SIAM J. Math. Anal. 11(1), 133–146 (1980)
Lasser, R.: Orthogonal polynomials and hypergroups. Rend. Mat. (7) 3(2), 185–209 (1983)
Lasser, R.: Discrete Commutative Hypergroups. In: Inzell Lectures on Orthogonal Polynomials, Adv. Theory Spec. Funct. Orthogonal Polynomials, vol. 2, Nova Sci. Publ., Hauppauge, NY, pp. 55–102 (2005)
Lasser, R.: Amenability and weak amenability of \(l^1\)-algebras of polynomial hypergroups. Studia Math. 182(2), 183–196 (2007)
Lasser, R.: Various amenability properties of the \(L^1\)-algebra of polynomial hypergroups and applications. J. Comput. Appl. Math. 233(3), 786–792 (2009)
Obermaier, J.: Powers of the Dirichlet kernel with respect to orthogonal polynomials and related operators. Acta Sci. Math. (Szeged) 83(3–4), 539–549 (2017)
Rahman, M.: The linearization of the product of continuous \(q\)-Jacobi polynomials. Can. J. Math. 33(4), 961–987 (1981)
Rahman, M.: A nonnegative representation of the linearization coefficients of the product of Jacobi polynomials. Can. J. Math. 33(4), 915–928 (1981)
Szwarc, R.: Orthogonal polynomials and a discrete boundary value problem I, II. SIAM J. Math. Anal. 23(4), 959–964, 965–969 (1992)
Szwarc, R.: Orthogonal polynomials and Banach algebras. In: Inzell Lectures on Orthogonal Polynomials, Adv. Theory Spec. Funct. Orthogonal Polynomials, vol. 2, Nova Sci. Publ., Hauppauge, NY, pp. 103–139 (2005)
Funding
Open Access funding enabled and organized by Projekt DEAL.
Communicated by Jeffrey Geronimo.
The research was begun when the author worked at Technical University of Munich, and the author gratefully acknowledges support from the graduate program TopMath of the ENB (Elite Network of Bavaria) and the TopMath Graduate Center of TUM Graduate School at Technical University of Munich. The main part of the research was done at RWTH Aachen University. The graphics were made with Maple.
Appendix A. Correction of Rahman’s Hypergeometric Representations
This short section contains the announced corrections of small mistakes in Rahman’s hypergeometric representations [30, (1.7) to (1.9)] of the linearization coefficients \(g_R(m,n;k)\) (which belong to the Jacobi polynomials \((R_n^{(\alpha ,\beta )}(x))_{n\in {\mathbb {N}}_0}\)). For all \(m\in {\mathbb {N}}\) and \(s\in {\mathbb {N}}_0\), one has
for even \(j\in \{0,\ldots ,2m\}\) and
for odd \(j\in \{0,\ldots ,2m\}\), which corrects [30, (1.7), (1.8)]. This shows the nonnegativity of the \(g_R(m,n;k)\) for \((\alpha ,\beta )\in \Delta \), as well as the strict positivity for \((\alpha ,\beta )\in \Delta ^{\circ }\). For the subcase \(\alpha \ge \beta \ge -1/2\), the nonnegativity of the \(g_R(m,n;k)\) can also be seen via the representation
which is valid for all \(m\in {\mathbb {N}}\), \(s\in {\mathbb {N}}_0\) and \(j\in \{0,\ldots ,2m\}\) and which corrects a typo in [30, (1.9)]. Note that the expressions in (A.1) to (A.3) may not be well defined if \((\alpha ,\beta )\) is an element of the boundary of \(\Delta \); in this case, the formulas have to be interpreted as limits.
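As an independent sanity check of these positivity assertions (using plain coefficient matching rather than the hypergeometric representations themselves; all helper names are ours), one can compute a few linearization coefficients of the normalized Jacobi polynomials with SymPy at a point in the interior of \(\Delta \):

```python
import sympy as sp

x = sp.symbols('x')

def R(n, al, be):
    """Normalized Jacobi polynomial R_n^{(al,be)} with R_n(1) = 1."""
    return sp.expand(sp.jacobi(n, al, be, x) / sp.binomial(n + al, n))

def g_R(m, n, al, be):
    """Linearization coefficients of R_m * R_n w.r.t. {R_k}, k = |m-n|..m+n."""
    prod = sp.expand(R(m, al, be) * R(n, al, be))
    coeffs = {}
    for k in range(m + n, abs(m - n) - 1, -1):
        Rk = R(k, al, be)
        c = sp.nsimplify(prod.coeff(x, k) / Rk.coeff(x, k))
        coeffs[k] = c
        prod = sp.expand(prod - c * Rk)
    assert prod == 0  # lower-order coefficients vanish by orthogonality
    return coeffs

# (alpha, beta) = (1/2, 0) lies in the interior of Delta, so all
# linearization coefficients should be strictly positive.
al, be = sp.Rational(1, 2), sp.Integer(0)
for m, n in [(1, 1), (2, 2), (2, 3)]:
    g = g_R(m, n, al, be)
    assert all(c > 0 for c in g.values())
    assert sum(g.values()) == 1  # evaluate both sides at x = 1
```

The final assertion uses \(R_k(1)\equiv 1\): evaluating the expansion of \(R_m(x)R_n(x)\) at \(x=1\) forces the coefficients to sum to 1.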
Kahler, S. Nonnegative and Strictly Positive Linearization of Jacobi and Generalized Chebyshev Polynomials. Constr Approx 54, 207–236 (2021). https://doi.org/10.1007/s00365-021-09552-3
Keywords
- Jacobi polynomials
- Generalized Chebyshev polynomials
- Fourier expansions
- Nonnegative linearization
- Strictly positive linearization
- Linearization coefficients