Abstract
We introduce matrix-valued weight functions of arbitrary size, which are analogues of the weight function for the Gegenbauer or ultraspherical polynomials for the parameter \(\nu >0\). The LDU-decomposition of the weight is explicitly given in terms of Gegenbauer polynomials. We establish a matrix-valued Pearson equation for these matrix weights leading to explicit shift operators relating the weights for parameters \(\nu \) and \(\nu +1\). The matrix coefficients of the Pearson equation are obtained using a special matrix-valued differential operator in a commutative algebra of symmetric differential operators. The corresponding orthogonal polynomials are the matrix-valued Gegenbauer-type polynomials which are eigenfunctions of the symmetric matrix-valued differential operators. Using the shift operators, we find the squared norm, and we establish a simple Rodrigues formula. The three-term recurrence relation is obtained explicitly using the shift operators as well. We give an explicit nontrivial expression for the matrix entries of the matrix-valued Gegenbauer-type polynomials in terms of scalar-valued Gegenbauer and Racah polynomials using the LDU-decomposition and differential operators. The case \(\nu =1\) reduces to the case of matrix-valued Chebyshev polynomials previously obtained using group theoretic considerations.
1 Introduction
Matrix-valued orthogonal polynomials have been introduced and studied by Krein in connection with spectral analysis and moment problems, see [8, 33, 34]. Matrix-valued orthogonal polynomials have applications in the study of higher-order recurrences and their spectral analysis (see, e.g., [2, 16, 19, 20]) and the study of Toda lattices [4, 6, 18], to name a few.
Using matrix-valued spherical functions [17], one is able to associate matrix-valued orthogonal polynomials with certain symmetric pairs. The first case was studied by Grünbaum et al. [23] for the case of the symmetric pair \((G,K)=(\mathrm {SU}(3), \mathrm {U}(2))\) relying heavily on invariant differential operators. Another more direct approach was developed in [29, 30] for the case of \((\mathrm {SU}(2)\times \mathrm {SU}(2), \text {diag})\) inspired by [31]. See also [24, 42, 43] for the general set-up in the context of the so-called multiplicity free pairs. In particular, [29, 30] give a detailed study of the matrix-valued orthogonal polynomials, which can be considered as matrix-valued analogues of the Chebyshev polynomials, i.e., the spherical polynomials on \((\mathrm {SU}(2)\times \mathrm {SU}(2), \text {diag})\) better known as the characters on \(\mathrm {SU}(2)\), see also [3] for the quantum group case.
The purpose of this paper is to extend the family of matrix-valued Chebyshev polynomials in [29, 30] to a family of matrix-valued Gegenbauer-type polynomials using shift operators, where the lowering operator is the derivative. In the matrix-valued setting, the raising operator, being the adjoint of the derivative, is harder to obtain and involves a matrix-valued analogue of the Pearson equation. The ingredients for the matrix-valued Pearson equation are obtained from a matrix-valued differential operator that is well suited for a Darboux factorization. The use of shift operators is a well-established technique in special functions, see, e.g., [32, 36] for explicit examples and [5, 25, 26] for general information.
Let us recall the classical case of the shift operators for the Gegenbauer polynomials. The Chebyshev polynomials can be considered as the Gegenbauer polynomials \(C^{(\nu )}_n\) with \(\nu =1\), and by successive differentiation, the Gegenbauer polynomials \(C^{(\nu )}_n\) with \(\nu \in \mathbb {N}{\setminus }\{ 0\}\) can be obtained. Moreover, many properties of the Chebyshev polynomials can be transported to the Gegenbauer polynomials \(C^{(\nu )}_n\) with integer \(\nu \). Then one can obtain the general family of Gegenbauer polynomials \(C^{(\nu )}_n\) by continuation in \(\nu \), or one can extend differentiation using suitable fractional integral operators. Let us recall this in some more detail, see, e.g., [5, 25, 26]. The Gegenbauer polynomials
are eigenfunctions of an explicit second-order differential operator of hypergeometric type;
The orthogonality for the Gegenbauer polynomials is
with \(w^{(\nu )}(x)=(1-x^2)^{\nu -1/2}\). In particular, using (1.1) shows that \(\frac{\mathrm{d}C^{(\nu )}_n}{\mathrm{d}x}(x)= 2\nu \, C^{(\nu +1)}_{n-1}(x)\). We view the derivative \(\frac{\mathrm{d}}{\mathrm{d}x}\) as an unbounded operator \(L^2(w^{(\nu )})\rightarrow L^2(w^{(\nu +1)})\) with respect to the integral over \((-1,1)\). It is a shift (or lowering) operator since it lowers the degree of a polynomial by one. Its adjoint \(-S^{(\nu )} :L^2(w^{(\nu +1)})\rightarrow L^2(w^{(\nu )})\) is explicitly given by \(\bigl (S^{(\nu )} f\bigr )(x) = (1-x^2)\frac{\mathrm{d}f}{\mathrm{d}x}(x) -(2\nu +1)xf(x)\). This follows from the Pearson equation
see also [40]. It follows that \(S^{(\nu )}:C^{(\nu +1)}_{n-1}\mapsto \frac{-n(2\nu +n)}{2\nu } C^{(\nu )}_n\) by considering the orthogonality relations and the leading coefficients. Then \(D^{(\nu )} = S^{(\nu )}\circ \frac{\mathrm{d}}{\mathrm{d}x}\) gives a factorization in terms of a lowering and a raising operator of the differential operator (1.2) having the Gegenbauer polynomials as eigenfunctions. In particular, we find \(\frac{\mathrm{d}}{\mathrm{d}x}\circ D^{(\nu )} = \frac{\mathrm{d}}{\mathrm{d}x}\circ S^{(\nu )}\circ \frac{\mathrm{d}}{\mathrm{d}x}\), so that \(C^{(\nu +1)}_{n-1}(x)\), being a multiple of the derivative of \(C^{(\nu )}_{n}(x)\), is an eigenfunction of \(\frac{\mathrm{d}}{\mathrm{d}x}\circ S^{(\nu )}\). This Darboux transformation, i.e., interchanging the raising and lowering operators, gives \(\frac{\mathrm{d}}{\mathrm{d}x} \circ S^{(\nu )} = D^{(\nu +1)} -(2\nu +1)\), and this is also known as a transmutation property.
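These scalar relations are easy to check symbolically. The following sketch (ours, not from the paper's worksheet) verifies the lowering relation, the Pearson equation (1.4), the action of the raising operator \(S^{(\nu )}\), and the eigenvalue of the Darboux factorization \(S^{(\nu )}\circ \frac{\mathrm{d}}{\mathrm{d}x}\) for a sample parameter.

```python
# Symbolic check (ours) of the scalar Gegenbauer shift-operator identities.
import sympy as sp

x = sp.symbols('x')
nu = sp.Rational(3, 2)   # sample parameter; any nu > 0 works
n = 4

C = lambda m, v: sp.gegenbauer(m, v, x)             # C_m^{(v)}(x)
w = lambda v: (1 - x**2)**(v - sp.Rational(1, 2))   # weight w^{(v)}(x)

# lowering operator: d/dx C_n^{(nu)} = 2 nu C_{n-1}^{(nu+1)}
assert sp.expand(sp.diff(C(n, nu), x) - 2*nu*C(n - 1, nu + 1)) == 0

# Pearson equation (1.4): d/dx[(1-x^2) w^{(nu)}] = -(2 nu + 1) x w^{(nu)}
assert sp.simplify(sp.diff((1 - x**2)*w(nu), x) + (2*nu + 1)*x*w(nu)) == 0

# raising operator: (S f)(x) = (1-x^2) f'(x) - (2 nu + 1) x f(x)
S = lambda f: (1 - x**2)*sp.diff(f, x) - (2*nu + 1)*x*f
assert sp.expand(S(C(n - 1, nu + 1)) + n*(2*nu + n)/(2*nu)*C(n, nu)) == 0

# Darboux factorization: (S o d/dx) C_n^{(nu)} = -n (2 nu + n) C_n^{(nu)}
assert sp.expand(S(sp.diff(C(n, nu), x)) + n*(2*nu + n)*C(n, nu)) == 0
```

The same checks pass for other rational values of \(\nu >0\) and other degrees n.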
The main results of this paper are stated in Sects. 2 and 3. In Sect. 2, we define the weight function, and we state its LDU-decomposition, which is essential in most of the proofs in this paper. The inverse of the matrix L in this decomposition is a particular case of the results of Cagliero and Koornwinder [9]. We obtain a commutative algebra generated by two matrix-valued differential operators that are symmetric with respect to the matrix weight. From these differential operators, we obtain the ingredients for a matrix-valued Pearson equation which, in turn, gives us the matrix-valued adjoint of the derivative. Finally, in Sect. 2, we describe the commutant algebra of the weight, showing that there is a nontrivial orthogonal decomposition for each weight. In [28], we show that no further reduction is possible. The proofs of these statements are given in Sect. 4.
In Sect. 3, we discuss the corresponding monic matrix-valued orthogonal polynomials. Using the shift operators, we explicitly evaluate the squared norms, and we obtain a simple Rodrigues formula for the polynomials. Moreover, the polynomials are eigenfunctions of the matrix-valued differential operators in Sect. 2. The matrix entries of the monic matrix-valued orthogonal polynomials are explicitly calculated in terms of Gegenbauer and Racah polynomials. We give the three-term recurrence relation for the polynomials explicitly. We give the proofs in Sect. 5.
In Sect. 4, we give an elementary approach to symmetry of matrix-valued differential operators, which we expect to be useful in other cases as well. We have relegated all the proofs that only involve Gegenbauer polynomials to the appendices.
In the literature, there are more families of matrix-valued orthogonal polynomials that are considered as matrix analogues of the Gegenbauer polynomials, since, as is the case here, there is a reduction to Gegenbauer polynomials in case the dimension is one. We compare the results with other matrix-valued Gegenbauer-type orthogonal polynomials as in [14, 37, 38] in Remark 3.8. We note that the matrix-valued orthogonal Gegenbauer-type polynomials of this paper are better understood in the sense of explicit formulas and for larger dimensions than the ones in [14, 37, 38].
All proofs are direct and only use special functions. Nevertheless, we have benefited from the use of computer algebra such as Maple in order to arrive at explicit conjectures. This means that we have checked many of our results up to a sufficiently large size of the matrices involved. For the reader’s convenience, the worksheet is available via the first author’s webpage.
2 The Weight Function and Symmetric Differential Operators
The weight function is introduced by defining its matrix entries, which are taken with respect to the standard orthonormal basis \(\{e_0,e_1,\ldots , e_{2\ell }\}\) of \(\mathbb {C}^{2\ell +1}\) for \(\ell \in \frac{1}{2} \mathbb {N}\). We suppress \(\ell \), which initially occurred as the spin of a representation of \(\mathrm {SU}(2)\), see [29], from the notation as much as possible. After defining the weight, we discuss its LDU-decomposition, and we study commuting symmetric matrix-differential operators for the weight. These matrix-differential operators follow directly using the fact that we want the transmutation property with the derivative. Then we obtain in this algebra a symmetric second-order matrix-valued differential operator on which we can apply the Darboux factorization, which gives the entries for the matrix-valued Pearson equation as in [10].
Definition 2.1
For \(\ell \in \frac{1}{2}\mathbb {N}\) and \(\nu > 0\), we define the \((2\ell +1)\times (2\ell +1)\)-matrix-valued functions \(W^{(\nu )}\) by
where \(n,m\in \{0,1,\ldots , 2\ell \}\) and \(n\ge m\). The matrix is extended to a symmetric matrix, \(\left( W^{(\nu )}_{\text {pol}}(x)\right) _{m,n}=\left( W^{(\nu )}_{\text {pol}}(x)\right) _{n,m}\). Finally, put \(W^{(\nu )}(x) = (1-x^2)^{\nu -1/2} W^{(\nu )}_{\text {pol}}(x)\).
In order to show that the matrix-valued weight fits into the general theory of matrix-valued orthogonal polynomials, we calculate its LDU-decomposition. This is also very useful in establishing symmetry of matrix-valued differential operators and in the study of the derivative in this context.
Theorem 2.2
For \(\nu > 0, W^{(\nu )}(x)\) has the following LDU-decomposition:
where \(L^{(\nu )}:[-1,1]\rightarrow M_{2\ell +1}(\mathbb {C})\) is the unipotent lower triangular matrix-valued polynomial
and \(T^{(\nu )}:(-1,1)\rightarrow M_{2\ell +1}(\mathbb {C})\) is the diagonal matrix-valued function
Theorem 2.2 is proved in Appendix A, and it is a straightforward extension of the proof in [30]. L(x) is invertible, its inverse being a unipotent lower triangular matrix as well. Cagliero and Koornwinder [9] give the matrix entries of \(L(x)^{-1}\) explicitly in terms of Gegenbauer polynomials with negative \(\nu \), see (5.3). Note that Theorem 2.2 gives that \(W^{(\nu )}(x)>0\) for \(x\in (-1,1)\). It is possible to extend Definition 2.1 and Theorem 2.2 to \(-\frac{1}{2}< \nu \le 0\), but for \(\nu =0\), the weight \(W^{(\nu )}\) is indefinite, and for \(-\frac{1}{2}< \nu < 0\), it has nontrivial signature depending on the size. So in this paper we assume \(\nu >0\).
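Numerically, an LDU-decomposition of this type (unipotent lower triangular L, positive diagonal T) can be recovered from a Cholesky factor. The following sketch is ours and uses a random positive-definite stand-in for \(W^{(\nu )}(x)\) at a fixed x, not the paper's weight, whose explicit entries are given in Definition 2.1.

```python
# Sketch: W = L T L^* with L unipotent lower triangular and T diagonal,
# obtained by rescaling a Cholesky factor.  W here is a generic stand-in.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
W = A @ A.T + 4*np.eye(4)          # any positive-definite stand-in

C = np.linalg.cholesky(W)          # W = C C^T, C lower triangular
d = np.diag(C)                     # positive diagonal of C
L = C / d                          # unit-diagonal (unipotent) lower triangular
T = np.diag(d**2)                  # positive diagonal factor

assert np.allclose(L @ T @ L.T, W)
assert np.allclose(np.diag(L), 1.0)
```

For the paper's \(W^{(\nu )}(x)\), Theorem 2.2 does this in closed form, uniformly in x, with L a matrix-valued polynomial.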
For matrix-valued functions P and Q with, say, \(C([-1,1])\)-entries, we define for \(\nu >0\) the matrix-valued inner product
Note that the integrals of the entries are well defined.
A matrix-valued differential operator acts from the right;
where \(D= \frac{\mathrm{d}^2}{\mathrm{d}x^2} F_2 + \frac{\mathrm{d}}{\mathrm{d}x} F_1 + F_0\), the \(F_i\) are matrix-valued functions, and P is a matrix-valued function with \(C^2\)-entries. The derivatives in (2.2) of a matrix-valued function are taken entry-wise. A matrix-valued differential operator D is symmetric for the weight \(W^{(\nu )}\) if \(\langle PD, Q\rangle ^{(\nu )} = \langle P, QD\rangle ^{(\nu )}\) for all matrix-valued functions P and Q with \(C^2([-1,1])\)-entries.
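The right-action convention (2.2) is worth pinning down with a small symbolic example. The sketch below is ours; the coefficient matrices are illustrative and are not those of Theorem 2.3.

```python
# Sketch of the right action (2.2): (P D)(x) = P''(x) F2(x) + P'(x) F1(x) + P(x) F0(x).
import sympy as sp

x = sp.symbols('x')

def apply_right(P, F2, F1, F0):
    """Apply D = d^2/dx^2 F2 + d/dx F1 + F0 to P from the right, entry-wise."""
    return (P.diff(x, 2)*F2 + P.diff(x)*F1 + P*F0).expand()

F2 = (1 - x**2)*sp.eye(2)                      # illustrative coefficients
F1 = sp.Matrix([[-3*x, 1], [1, -3*x]])
F0 = sp.zeros(2, 2)

P = sp.Matrix([[x**2, x], [0, 1]])
assert apply_right(P, F2, F1, F0) == sp.Matrix([[3 - 8*x**2, -x], [0, 0]])
```

Note that the coefficient matrices multiply from the right of the derivatives, so PD is in general different from the left action used elsewhere in the literature.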
Theorem 2.3
For \(\nu >0\), let \(D^{(\nu )}\) and \(E^{(\nu )}\) be the matrix-valued differential operators
where the matrices \(C, V, B_0^{(\nu )}, B_1^{(\nu )}\), and \(A_0^{(\nu )}\) are given by
Then \(D^{(\nu )}\) and \(E^{(\nu )}\) are symmetric with respect to the weight \(W^{(\nu )}\), and \(D^{(\nu )}\) and \(E^{(\nu )}\) commute.
The operator \(D^{(\nu )}\) matches the case \(\nu =1\) in [30, Thm. 3.1]. The explicit expression of \(E^{(\nu )}\) for \(\nu =1\) corrects a mistake in [30, Thm. 3.1] and upon a change of variables \(x=1-2u\) matches [30, Cor. 4.1]. The explicit expressions for \(D^{(\nu )}\) and \(E^{(\nu )}\) for \(\nu =1\) are given in [29, 30]. With this definition, we have \(E^{(\nu +1)}\circ \frac{\mathrm{d}}{\mathrm{d}x}= \frac{\mathrm{d}}{\mathrm{d}x} \circ E^{(\nu )}\) and \(\bigl ( D^{(\nu +1)}-(2\ell +2\nu +1)\text {I}\bigr )\circ \frac{\mathrm{d}}{\mathrm{d}x}= \frac{\mathrm{d}}{\mathrm{d}x} \circ D^{(\nu )}\). In order to prove the statement on the symmetry in Theorem 2.3, we use Theorem 2.2.
We next look for a second-order matrix-valued differential operator generated by the commuting operators \(D^{(\nu )}\) and \(E^{(\nu )}\) having no constant term; i.e., \(F_0=0\) in the notation of (2.2). The reason is that in the matrix-valued case, the constant term \(F_0\) cannot be moved into the eigenvalue matrix, see, e.g., Theorem 3.2, unless \(F_0\) is a multiple of the identity. A straightforward calculation gives
and this defines matrix-valued polynomials \(\varPhi ^{(\nu )}\) and \(\varPsi ^{(\nu )}\) of degree 2 and 1. The explicit expressions for \(\varPhi ^{(\nu )}\) and \(\varPsi ^{(\nu )}\) are given in (4.9) and (4.10). In particular, we find that the \(\nu \)-dependence of \(\varPhi ^{(\nu )}(x)= \varPhi (x) + \frac{(\ell +\nu )^2}{\ell ^2}(1-x^2)\text {I}\) is rather simple. Now \(\mathscr {D}^{(\nu )}\) is factored as differentiation followed by a first-order matrix-valued differential operator, and in order to study this operator, we need the analogue of the Pearson equation (1.4).
Theorem 2.4
We have
The matrix-valued Pearson equation of Theorem 2.4 is much more involved than its scalar companion (1.4), and it fits into the framework of [10, §3].
The space of matrix-valued functions with continuous entries with respect to (2.1) forms a pre-Hilbert C\(^*\)-module, see [35], with \(M_{2\ell +1}(\mathbb {C})\) the corresponding (finite-dimensional) C\(^*\)-algebra. Note that we consider the pre-Hilbert C\(^*\)-module as a left module for \(M_{2\ell +1}(\mathbb {C})\) and the inner product to be conjugate linear in the second variable, in contrast with [35]. Let \(\mathcal {H}^{(\nu )}\) be the Hilbert C\(^*\)-module which is the completion [35, p. 4]; then \(\frac{\mathrm{d}}{\mathrm{d}x}:\mathcal {H}^{(\nu )} \rightarrow \mathcal {H}^{(\nu +1)}\) is an unbounded operator with dense domain in \(\mathcal {H}^{(\nu )}\) and dense range in \(\mathcal {H}^{(\nu +1)}\). In general, a linear operator from one Hilbert C\(^*\)-module to another does not necessarily have an adjoint, but in this case it does.
Corollary 2.5
(i) Define the first-order matrix-valued differential operator \(S^{(\nu )}\) by
$$\begin{aligned} \bigl ( QS^{(\nu )}\bigr )(x)\, =\, \frac{\mathrm{d}Q}{\mathrm{d}x}(x) \bigl (\varPhi ^{(\nu )}(x)\bigr )^*+ Q(x)\bigl (\varPsi ^{(\nu )}(x)\bigr )^*; \end{aligned}$$
then \(\langle \frac{\mathrm{d}P}{\mathrm{d}x}, Q\rangle ^{(\nu +1)} = - c^{(\nu )} \langle P, QS^{(\nu )}\rangle ^{(\nu )}\) for matrix-valued functions P and Q with \(C^1([-1,1])\)-entries, with \(c^{(\nu )}\) as in Theorem 2.4.

(ii) \(\langle P\mathscr {D}^{(\nu )}, Q\rangle ^{(\nu )}= -c^{(\nu )} \langle \frac{\mathrm{d}P}{\mathrm{d}x}, \frac{\mathrm{d}Q}{\mathrm{d}x}\rangle ^{(\nu +1)}\) for matrix-valued functions P and Q with \(C^2([-1,1])\)-entries.
Note that Corollary 2.5(ii) gives the symmetry of \(\mathscr {D}^{(\nu )}\), which is a consequence of Theorem 2.4. However, we use the symmetry of \(\mathscr {D}^{(\nu )}\) in the proof of Theorem 2.4, so that we need another proof of the symmetry of \(\mathscr {D}^{(\nu )}\). The required symmetry follows from Theorem 2.3 and (2.3).
Having factorized \(\mathscr {D}^{(\nu )}\) as a product of a lowering operator and a raising operator in (2.3) and Corollary 2.5, we can take its Darboux transform. The Darboux transform does not give the operator \(\mathscr {D}^{(\nu +1)}\) up to an affine transformation, but it is an element of the algebra generated by \(E^{(\nu +1)}, D^{(\nu +1)}\); namely,
Darboux transformations for matrix-valued differential operators require more study, see, e.g., [4, 11, 12, 21].
Proposition 2.6
The commutant algebra \(A^{(\nu )}=\{ T\in M_{2\ell +1}(\mathbb {C})\mid [T,W^{(\nu )}(x)]=0 \, \forall x\in (-1,1)\}\) is generated by J, where \(J\in M_{2\ell +1}(\mathbb {C})\) is the self-adjoint involution defined by \(J:e_j \mapsto e_{2\ell -j}\).
The commutant algebra \(A^{(\nu )}\) is related to orthogonal decompositions, and Proposition 2.6 states that there is an orthogonal decomposition with respect to the \(\pm 1\)-eigenspaces of J. General nonorthogonal decompositions are governed by the real vector space \(\mathscr {A}^{(\nu )} = \{ T\in M_{2\ell +1}(\mathbb {C})\mid TW^{(\nu )}(x)=W^{(\nu )}(x)T^*\, \forall x\in (-1,1)\}\), see [22, 44]. In [28] we show that \(\mathscr {A}^{(\nu )}\) equals the Hermitian elements of \(A^{(\nu )}\), so that there is no further nonorthogonal decomposition. Proposition 2.6 is proven in Sect. 5.4.
3 The Matrix-Valued Gegenbauer-Type Polynomials and Their Properties
Since the matrix weight function \(W^{(\nu )}\) is strictly positive definite on \((-1,1)\), one can associate matrix-valued orthogonal polynomials, see, e.g., [13, 22, 24]. Denote the corresponding monic orthogonal polynomials with respect to the matrix-valued weight function \(W^{(\nu )}\) on \((-1,1)\) by \(P_n^{(\nu )}\), i.e.,
where \(H_n^{(\nu )}\) is a strictly positive definite matrix. Note that \(H_0^{(\nu )}\) can be calculated using the explicit expression of Definition 2.1 and the orthogonality relations (1.3), which gives the special case \(n=0\) of Theorem 3.1(i). If \(Q_n\) is another family of matrix-valued orthogonal polynomials with respect to the weight function \(W^{(\nu )}\) on \((-1,1)\), then there exist invertible matrices \(E_n\) such that \(Q_n(x) = E_nP_n(x)\) for all x and all n, see [13, 22].
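The construction of monic matrix-valued orthogonal polynomials from a positive-definite matrix weight can be sketched numerically by Gram–Schmidt. The 2×2 weight below is an illustrative example of ours (with the same Chebyshev-type square-root factor), not the paper's \(W^{(\nu )}\).

```python
# Sketch (ours): monic matrix-valued orthogonal polynomials via Gram-Schmidt
# for <P,Q> = int_{-1}^{1} P(x) W(x) Q(x)^* dx with an illustrative weight
# W(x) = sqrt(1-x^2) * Wpol(x); Gauss-Chebyshev (2nd kind) quadrature makes
# the polynomial integrals exact.
import numpy as np

M = 30
k = np.arange(1, M + 1)
nodes = np.cos(k*np.pi/(M + 1))
wts = (np.pi/(M + 1))*np.sin(k*np.pi/(M + 1))**2   # absorbs sqrt(1-x^2)

def Wpol(x):                                       # eigenvalues 2 +- x > 0
    return np.array([[2.0, x], [x, 2.0]])

def inner(P, Q):
    return sum(w*(P(t) @ Wpol(t) @ Q(t).T) for t, w in zip(nodes, wts))

def monic_mvops(N):
    Ps = []
    for n in range(N):
        # subtract the projection of x^n I onto each lower-degree P_m
        cs = [inner(lambda x, n=n: x**n*np.eye(2), Pm)
              @ np.linalg.inv(inner(Pm, Pm)) for Pm in Ps]
        def Pn(x, n=n, cs=cs, lower=list(Ps)):
            v = x**n*np.eye(2)
            for c, Pm in zip(cs, lower):
                v = v - c @ Pm(x)
            return v
        Ps.append(Pn)
    return Ps

P = monic_mvops(4)
assert np.allclose(inner(P[2], P[1]), 0, atol=1e-10)      # orthogonality
assert np.all(np.linalg.eigvalsh(inner(P[2], P[2])) > 0)  # H_n > 0
```

For the paper's weight, Theorem 3.1 replaces this numerical construction with closed-form expressions for \(H_n^{(\nu )}\) and a Rodrigues formula.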
Theorem 3.1
(i) The squared norm \(H^{(\nu )}_n\) in (3.1) is given by the diagonal matrix
$$\begin{aligned} \bigl ( H^{(\nu )}_n\bigr )_{k,k}= & {} \sqrt{\pi }\, \frac{\varGamma \left( \nu +\frac{1}{2}\right) }{\varGamma (\nu +1)}\frac{\nu (2\ell +\nu +n)}{\nu +n}\\&\times \frac{n!\, (\ell +\frac{1}{2}+\nu )_n (2\ell +\nu )_n(\ell +\nu )_n}{(2\ell +\nu +1)_n(\nu +k)_n(2\ell +2\nu +n)_n(2\ell +\nu -k)_n}\\&\times \,\frac{k!\, (2\ell -k)!\, (n+\nu +1)_{2\ell }}{(2\ell )!\, (n+\nu +1)_k (n+\nu +1)_{2\ell -k}}. \end{aligned}$$

(ii) \(\displaystyle {\frac{\mathrm{d}P^{(\nu )}_n}{\mathrm{d}x}(x) = n\, P^{(\nu +1)}_{n-1}(x)}\).

(iii) The following Rodrigues formula holds:
$$\begin{aligned} P^{(\nu )}_n(x)= & {} G_n^{(\nu )}\, \left( \frac{\mathrm{d}^nW^{(\nu +n)}}{\mathrm{d}x^n}(x)\right) \, W^{(\nu )}(x)^{-1} \\ \bigl (G_n^{(\nu )}\bigr )_{j,k}= & {} \delta _{j,k} \frac{(-1)^n (\nu )_n (\ell +\nu +\frac{1}{2})_n (\ell +\nu )_n (2\ell +\nu )_n}{\left( \nu +\frac{1}{2}\right) _n(\nu +k)_n(2\ell +\nu +1)_n(2\ell +2\nu +n)_n(2\ell +\nu -k)_n}. \end{aligned}$$
Note that the Rodrigues formula is compact, very similar to the scalar case [26, (9.8.27)], [27, (1.8.23)], [25, (4.5.12)], and holds for any size. It differs from the Rodrigues formula for the irreducible \(2\times 2\)-cases with \(\nu =1\) in [29, §8].
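In the scalar case \(2\ell +1=1\), the Rodrigues formula reduces to the classical one for Gegenbauer polynomials: the n-th derivative of \(w^{(\nu +n)}\) divided by \(w^{(\nu )}\) is a constant multiple of \(C^{(\nu )}_n\). This structure can be checked symbolically (our sketch, for a sample parameter):

```python
# Scalar (1x1) sanity check of the Rodrigues structure.
import sympy as sp

x = sp.symbols('x')
nu = sp.Rational(3, 2)
n = 3
w = lambda v: (1 - x**2)**(v - sp.Rational(1, 2))

R = sp.cancel(sp.diff(w(nu + n), x, n) / w(nu))   # a polynomial of degree n
Cn = sp.gegenbauer(n, nu, x)
ratio = sp.Poly(R, x).LC() / sp.Poly(Cn, x).LC()  # the Rodrigues constant
assert sp.expand(R - ratio*Cn) == 0
```

Here the proportionality constant is computed from leading coefficients rather than quoted; in the matrix case, Theorem 3.1(iii) gives it as the diagonal matrix \(G_n^{(\nu )}\).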
Theorem 3.2
For every integer \(n\ge 0\), the monic matrix-valued Gegenbauer-type polynomials are eigenfunctions of \(D^{(\nu )}\) and \(E^{(\nu )}\);
It follows from (2.3) that \(P^{(\nu )}_n\) are eigenfunctions of \(\mathscr {D}^{(\nu )}\), where the eigenvalue matrix is obtained using the same combination.
Matrix-valued orthogonal polynomials satisfy a three-term recurrence relation, see, e.g., [13, Lemma 2.6], [22, (2.1)].
Theorem 3.3
The monic matrix-valued orthogonal polynomials \(P^{(\nu )}_n\) satisfy the three-term recurrence relation
where the matrices \(B^{(\nu )}_n, C^{(\nu )}_n\) are given by
Note that \(JB^{(\nu )}_n= B^{(\nu )}_nJ\) and \(JC^{(\nu )}_n=C^{(\nu )}_nJ\), which essentially follows from Proposition 2.6, see [28, Lem. 3.1]. The coefficient matrix \(C_n^{(\nu )}\) in Theorem 3.3 follows from Theorem 3.1, but for the coefficient \(B^{(\nu )}_n\), we need the one-but-leading coefficient of \(P_n^{(\nu )}\) for which we use the shift operators. In [30] we calculated the one-but-leading coefficients using Tirao’s matrix-valued hypergeometric functions [41], and we note that a similar expression for the rows of \(P^{(\nu )}_n\) in terms of matrix-valued hypergeometric functions as in [30, §4] is possible.
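In the scalar reduction, the recurrence coefficients are determined by the inner products, and the analogue of \(B^{(\nu )}_n\) vanishes by the symmetry of the weight. A symbolic check of ours (the matrix version with the coefficients \(B^{(\nu )}_n, C^{(\nu )}_n\) of Theorem 3.3 is analogous):

```python
# Scalar (1x1) check of the three-term recurrence for monic Gegenbauer
# polynomials; nu = 3/2 makes the weight polynomial, so SymPy integrates exactly.
import sympy as sp

x = sp.symbols('x')
nu = sp.Rational(3, 2)
w = (1 - x**2)**(nu - sp.Rational(1, 2))          # = 1 - x^2

def monic(n):
    Cn = sp.gegenbauer(n, nu, x)
    return sp.expand(Cn / sp.Poly(Cn, x).LC())

def ip(f, g):
    return sp.integrate(f*g*w, (x, -1, 1))

n = 3
p = {m: monic(m) for m in (n - 1, n, n + 1)}
b = ip(x*p[n], p[n]) / ip(p[n], p[n])             # vanishes: the weight is even
c = ip(x*p[n], p[n - 1]) / ip(p[n - 1], p[n - 1])
assert b == 0
assert sp.expand(x*p[n] - p[n + 1] - c*p[n - 1]) == 0
```

The same computation at the matrix level gives \(C^{(\nu )}_n = H^{(\nu )}_n \bigl (H^{(\nu )}_{n-1}\bigr )^{-1}\) from Theorem 3.1, while \(B^{(\nu )}_n\) requires the one-but-leading coefficients, as explained above.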
The matrix-entries of the monic matrix-valued orthogonal polynomials can be expressed in terms of scalar orthogonal polynomials. In this expression, Racah polynomials [45, §4], [26, §9.2] and Gegenbauer polynomials occur. The Gegenbauer polynomials with negative parameter \(\nu \) arise from \(L(x)^{-1}\) and have to be interpreted as in [9].
Theorem 3.4
The monic matrix-valued Gegenbauer-type polynomials \(P^{(\nu )}_n(x)\) have the explicit expansion
Theorem 3.4 is proved in Sect. 5.2.
The right-hand side in Theorem 3.4 is not obviously a polynomial of degree at most n in case \(k>i\): the coefficients of \(x^p\) with \(p>n\) vanish. In particular, for \(k>i\), the leading coefficient of the right-hand side is zero, and this gives
Remark 3.5
From the orthogonality relations and Theorem 2.4 or Theorem 3.1, it follows that the polynomial \(P_n^{(\nu +k)}\) can be expanded in polynomials \(P_m^{(\nu )}\) with \(n-2k\le m\le n\). However, in general, the coefficients do not seem to have simple expressions. The case \(k=1\) can be done easily by differentiating the three-term recurrence relation of Theorem 3.3 and using Theorem 3.1(ii).
Remark 3.6
The weight matrices \(W^{(\nu +1)}\) are obtained from \(W^{(\nu )}\) by a Christoffel transformation, given by multiplication by the polynomial \(\varPhi ^{(\nu )}\) of degree two, see Theorem 2.4. Therefore our Gegenbauer-type polynomials give a nontrivial example of arbitrary size for the theory developed in [4], see also [4, Example 3].
Remark 3.7
As a consequence of Proposition 2.6, there exists a constant matrix Y (independent of \(\nu \)) so that
where \(W^{(\nu )}_1,W^{(\nu )}_2\) are square matrices and \(P^{(\nu )}_{n,1}, P^{(\nu )}_{n,2}\) are the monic polynomials for \(W^{(\nu )}_1,W^{(\nu )}_2\), respectively. Since Y is independent of \(\nu \), we can take the matrix corresponding to the case \(\nu =1\), which is given in [29, (6.7)]. The block decompositions for the differential operators \(D^{(\nu )}\) and \(E^{(\nu )}\) are analogous to those for the case \(\nu =1\), see [29, §7.2]. We have
where the square blocks \(D^{(\nu )}_1\) and \(D^{(\nu )}_2\) are symmetric with respect to \(W^{(\nu )}_1\) and \(W^{(\nu )}_2\) and have the polynomials \(P^{(\nu )}_{n,1}, P^{(\nu )}_{n,2}\), respectively, as eigenfunctions. Observe that the \(\nu \)-dependence of \(D^{(\nu )}\) is quite simple, so that (3.2) can be easily verified. For the differential operator \(E^{(\nu )}\), the decomposition is analogous to [29, Prop. 7.8]. If \(\ell =(2s+1)/2\) for some \(s\in \mathbb {N}\), then \(E^{(\nu )}\) splits into \((s+1)\times (s+1)\) blocks in the following way:
Here \(I_{s+1}\) denotes the \((s+1)\times (s+1)\) identity matrix. The operators \(E^{(\nu )}_1\) and \(E^{(\nu )}_2\) relate the polynomials \(P^{(\nu )}_{n,1}\) and \(P^{(\nu )}_{n,2}\) in the following way:
where \(\varLambda ^{(\nu )}_{n,1}, \varLambda ^{(\nu )}_{n,2}\) are diagonal matrices given by
If \(\ell \in \mathbb {N}\), then the blocks are no longer square, see [29, Prop. 7.9].
Remark 3.8
Pacharoni and Zurrián [38] introduce \(2\times 2\) matrix-valued Gegenbauer polynomials. Using Proposition 2.6, we obtain irreducible \(2\times 2\) Gegenbauer polynomials upon restricting to the \(\pm 1\)-eigenspaces of J for the cases \(\ell =1,3/2,2\), see Remark 3.7. The cases \(\ell =1\) and \(\ell =2\) can be connected to the results in [38], but the case \(\ell =3/2\) cannot, see [38, Rmk. 3.7]. Other Gegenbauer-type matrix-valued polynomials can be obtained from the \(\alpha =\beta \) case of Durán [14] or Pacharoni–Tirao [37]. The results of this paper have no overlap with [14], and the \(\alpha =\beta \) case of [37] appears similar, but the weight function is different. This can be checked by comparing the leading terms of the polynomial parts of the weight functions of [14, 37] with the leading coefficient of the polynomial part of our weight function. Note that the results for the matrix-valued polynomials of Gegenbauer type in this paper are more complete than those of [14, 37, 38], since all results are fully explicit and hold for any dimension.
4 Differential Operators
In this section, we prove the statements of Sect. 2 except for some technical statements, which are deferred to the appendices, the commutativity statement of Theorem 2.3, and Proposition 2.6. The latter statements are easier to derive using the matrix-valued orthogonal polynomials. Section 4.1 is of a general nature and is then applied to our particular situation in Sect. 4.2. The proof of the Pearson-type result is the most technical, see Sect. 4.3.
4.1 Differential Operators and Conjugation
We consider matrix-valued differential operators of the form \(\frac{\mathrm{d}^2}{\mathrm{d}x^2} F_2(x) + \frac{\mathrm{d}}{\mathrm{d}x} F_1(x) + F_0(x)\), which act from the right on matrix-valued functions G with \(C^2\)-matrix entries by
In (4.1) the derivatives are taken entry-wise. Moreover, we assume that all entries of \(F_i\), \(i=0,1,2\), are \(C^2([a,b])\). In the applications in this paper, the \(F_i\) are polynomials.
Next we assume that we have a matrix-valued weight function W on (a, b) whose entries are \(C^2((a,b))\), and we allow for an integrable singularity at the end points. The matrix-valued operator D is symmetric with respect to W if for all matrix-valued \(C^2([a,b])\)-functions G, H we have
as long as both integrals exist. For the explicit weight functions \(W^{(\nu )}\) this is valid.
Lemma 4.1 is due to Durán and Grünbaum [15, Thm 3.1] and can be proved directly by integration by parts.
Lemma 4.1
D as in (4.1) is symmetric with respect to W if and only if the boundary conditions
and the symmetry conditions
for all \(x\in (a,b)\) hold.
Note that such symmetric matrix-valued differential operators form a vector space and that the product of two symmetric first-order matrix-valued differential operators is a symmetric second-order matrix-valued differential operator.
We now consider the differential operator under conjugation. We assume L is a \(C^2([a,b])\) matrix-valued function such that L(x) is a unipotent lower-triangular matrix. In particular, it follows that its inverse \(L(x)^{-1}\) is a unipotent lower-triangular matrix with \(C^2([a,b])\)-entries as well. Then \(L^*(x)\) and \((L^*(x))^{-1}\) are unipotent upper-triangular matrices with \(C^2([a,b])\)-entries. Note that L as in Theorem 2.2 satisfies these conditions. Let \(\tilde{D} = \frac{\mathrm{d}^2}{\mathrm{d}x^2} \tilde{F}_2 + \frac{\mathrm{d}}{\mathrm{d}x} \tilde{F}_1 + \tilde{F}_0\) be the second-order matrix-valued differential operator obtained by conjugation of D by L; i.e.,
for all \(C^2\)-matrix-valued functions G. Comparing (4.1) and (4.3), we obtain (as matrix-valued functions)
By symmetry, interchanging \(F_i\leftrightarrow \tilde{F}_i\) and \(L\leftrightarrow L^{-1}\), we get the expressions for \(F_i\) in terms of \(\tilde{F}_i\). This can also be obtained by solving for \(\tilde{F}_i\) from (4.4) and using the expressions for the first- and second-order derivatives of \(L^{-1}\) obtained by differentiating \(LL^{-1}=\text {I}\).
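The conjugation can be carried out symbolically. The sketch below is ours; the illustrative L and \(F_i\) are not the paper's \(L^{(\nu )}\) or \(D^{(\nu )}\), and we use the convention \(\tilde{G}=GL\) from the proof of Proposition 4.2 below, under which \((G\tilde{D}) = ((GL^{-1})D)L\); the coefficient formulas in the code are the ones consistent with that convention.

```python
# Sketch: coefficients of a conjugated right-acting operator ~D with
# (G ~D) = ((G L^{-1}) D) L, verified against the direct computation.
import sympy as sp

x = sp.symbols('x')

def apply_right(G, F2, F1, F0):
    return (G.diff(x, 2)*F2 + G.diff(x)*F1 + G*F0).expand()

# illustrative data (not the paper's L^{(nu)} or D^{(nu)})
L = sp.Matrix([[1, 0], [x, 1]])            # unipotent lower triangular
Li = L.inv()                               # again unipotent lower triangular
F2 = (1 - x**2)*sp.eye(2)
F1 = sp.Matrix([[-3*x, 0], [1, -3*x]])
F0 = sp.zeros(2, 2)

# coefficients of ~D obtained by expanding ((G L^{-1}) D) L with Leibniz' rule
tF2 = (Li*F2*L).expand()
tF1 = (2*Li.diff(x)*F2*L + Li*F1*L).expand()
tF0 = (Li.diff(x, 2)*F2*L + Li.diff(x)*F1*L + Li*F0*L).expand()

G = sp.Matrix([[x**2, 1], [x, x**3]])      # arbitrary polynomial test function
lhs = apply_right(G, tF2, tF1, tF0)
rhs = (apply_right(G*Li, F2, F1, F0)*L).expand()
assert (lhs - rhs).expand() == sp.zeros(2, 2)
```

Interchanging \(L\leftrightarrow L^{-1}\) in the three coefficient formulas recovers the \(F_i\) from the \(\tilde{F}_i\), as stated in the text.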
Proposition 4.2
Under the assumptions of Sect. 4.1, and assuming additionally that \(W(x)=L(x)V(x)L^*(x)\) for \(x\in (a,b)\), the operator D is symmetric with respect to W on the interval (a, b) if and only if \(\tilde{D}\) is symmetric with respect to V on the interval (a, b).
Note that in particular the entries of V are again in \(C^2((a,b))\).
Proof
Note that symmetry of D with respect to W is given by (4.2). Plugging in the decomposition of W gives
Replacing GL and HL by \(\tilde{G}\) and \(\tilde{H}\), and using (4.3) we find
since L, being a unipotent matrix with \(C^2([a,b])\)-entries, has an inverse with the same properties. So \(\tilde{D}\) is symmetric for V on (a, b).
Since we can interchange the roles of (D, W) and \((\tilde{D},V)\) by flipping \(L\leftrightarrow L^{-1}\) we obtain the result. \(\square \)
Proposition 4.2 can also be proved by showing that the symmetry relations of Lemma 4.1 for \((\tilde{D},V)\) on (a, b) can be obtained from the symmetry relations for (D, W) on (a, b) using (4.4), and vice versa.
4.2 Differential Operators \(E^{(\nu )}\) and \(D^{(\nu )}\)
In order to prove the symmetry of \(E^{(\nu )}\) and \(D^{(\nu )}\) as stated in Theorem 2.3, we use Proposition 4.2 with \(L=L^{(\nu )}\) as in Theorem 2.2, so that V of Proposition 4.2 identifies with the diagonal weight \(T^{(\nu )}\).
Lemma 4.3
The conjugated differential operators \((D^{(\nu )} - 2\ell E^{(\nu )})\widetilde{\ }\) and \(( E^{(\nu )} )\widetilde{\ }\) are given explicitly by
where
Moreover, \((D^{(\nu )} - 2\ell E^{(\nu )})\widetilde{\ }\) and \(( E^{(\nu )} )\widetilde{\ }\) are symmetric with respect to the weight \(T^{(\nu )}\).
Lemma 4.3 is proved in Appendix B. Lemma 4.3 and Proposition 4.2 imply the validity of Theorem 2.3 except for the commutativity of the differential operators.
Using (4.4) we calculate \((D^{(\nu )} - 2\ell E^{(\nu )})\widetilde{\ }\) and \(( E^{(\nu )} )\widetilde{\ }\), and since this is an explicit calculation involving Gegenbauer polynomials, we do so in Appendix B. Then the proof of the symmetry can be reduced using Lemma 4.1, see Appendix B. The commutativity is proved in Sect. 5.2.
4.3 Pearson Equation for the Weight \(W^{(\nu )}\)
In order to prove the Pearson equations of Theorem 2.4, we use the fact that \(\mathscr {D}^{(\nu )}\) is symmetric with respect to the weight \(W^{(\nu )}\), and we use the LDU-decomposition of Theorem 2.2. We actually do not need the explicit expression for \(\varPhi ^{(\nu )}\) to prove Theorem 2.4, but we give it for completeness in (4.9), since it gives a highly nontrivial example of the theory of Christoffel transformations for matrix-valued weights as presented in [4].
Proof of Theorem 2.4
In case \(F_0=0\) in Lemma 4.1, we can integrate the last symmetry condition using the last boundary condition to obtain \(\frac{\mathrm{d}(F_2W)}{\mathrm{d}x}(x) = F_1(x)W(x)\). Note that \(\mathscr {D}^{(\nu )}\) in (2.3) has no constant term. Since \(\mathscr {D}^{(\nu )}\) is expressed in terms of the symmetric differential operators \(E^{(\nu )}\) and \(D^{(\nu )}\), see Theorem 2.3, it is symmetric with respect to \(W^{(\nu )}\). Noting that \(F_2=(\varPhi ^{(\nu )})^*\), \(F_1=(\varPsi ^{(\nu )})^*\), we find \(\frac{\mathrm{d}((\varPhi ^{(\nu )})^*W^{(\nu )})}{\mathrm{d}x}(x) = (\varPsi ^{(\nu )})^*(x)W^{(\nu )}(x)\). Taking adjoints proves the first identity of Theorem 2.4.
The proof of the second identity is more involved. We will show that
implying \(\frac{\mathrm{d}W^{(\nu +1)}}{\mathrm{d}x}(x) =c^{(\nu )} \frac{\mathrm{d}(W^{(\nu )}\varPhi ^{(\nu )})}{\mathrm{d}x}(x)\). Integrating gives the second identity, since \(W^{(\nu +1)}\) vanishes at \(x=-1\) by Definition 2.1, as does \(W^{(\nu )}\varPhi ^{(\nu )}=(\varPhi ^{(\nu )})^*W^{(\nu )}\) by the first boundary condition of Lemma 4.1 applied to \(\mathscr {D}^{(\nu )}\). \(\square \)
In order to prove (4.5), we use the LDU-decompositions of the weights involved. For this we need the relation between \(L^{(\nu )}\) and \(L^{(\nu +1)}\).
Proposition 4.4
The matrix \(L^{(\nu )}\) satisfies the following identities:
We prove Proposition 4.4 in Appendix C. Note that Proposition 4.4 implies
by differentiation of the first identity of Proposition 4.4 and using the second identity.
Using the LDU-decomposition of Theorem 2.2 for \(\nu \) and \(\nu +1\), and Proposition 4.4 and (4.6), the identity (4.5) is equivalent to showing
where
The expression for \(A^{(\nu )}\) might look complicated, but since \(T^{(\nu )}\) is diagonal, and \(M_1^{(\nu )}, M_2^{(\nu )}\) only have two nonzero diagonals, \(A^{(\nu )}\) is banded. Using the expressions from Theorem 2.2 and Proposition 4.4, we find that \(A^{(\nu )}\) is a three-diagonal matrix-valued polynomial of degree 2;
In order to prove (4.7), we need the explicit expression for \(\varPsi ^{(\nu )}\), and for completeness we also write down the explicit expression for \(\varPhi ^{(\nu )}\). The matrix-valued polynomials \(\varPhi ^{(\nu )}\) and \(\varPsi ^{(\nu )}\) are introduced in (2.3). A straightforward calculation using Theorem 2.3 gives
and
We prove (4.7) in Appendix C by showing that the matrix entries on both sides, each consisting of at most three Gegenbauer polynomials, are equal using standard identities for Gegenbauer polynomials. Note that a straightforward check of (4.5) using Definition 2.1 and Gegenbauer polynomials is much more involved, but this can also be done.
5 Matrix-Valued Orthogonal Polynomials
We prove all the statements of Sect. 3 concerning the corresponding matrix-valued orthogonal polynomials. We also prove the remaining statements of Sect. 4, i.e., Proposition 2.6 and the commutativity statement of Theorem 2.3.
5.1 Rodrigues Formula and the Squared Norm
Proof of Theorem 3.1
The leading coefficient \(n\text {I}\) of \(\frac{\mathrm{d}P^{(\nu )}_n}{\mathrm{d}x}\) is nonsingular, and for \(k\in \mathbb {N}\) with \(k<n-1\), we get
using integration by parts and Theorem 2.4. Considering the degrees using Theorem 2.4 again, we see that the integral is zero for all \(k<n-1\). Hence \(\frac{\mathrm{d}P^{(\nu )}_n}{\mathrm{d}x}(x) = n P_{n-1}^{(\nu +1)}(x)\), and (ii) follows.
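The scalar counterpart of (ii) is the classical formula \(\frac{\mathrm{d}}{\mathrm{d}x}C^{(\nu )}_n(x)=2\nu \,C^{(\nu +1)}_{n-1}(x)\): differentiation lowers the degree by one and shifts \(\nu \mapsto \nu +1\). A quick symbolic check at a sample parameter value:

```python
import sympy as sp

x = sp.symbols('x')
nu = sp.Rational(3, 2)  # sample parameter, nu > 0

# d/dx C_n^{(nu)} = 2*nu * C_{n-1}^{(nu+1)}: degree down, parameter up
for n in range(1, 6):
    lhs = sp.diff(sp.gegenbauer(n, nu, x), x)
    rhs = 2*nu*sp.gegenbauer(n - 1, nu + 1, x)
    assert sp.expand(lhs - rhs) == 0
print("derivative shift verified for n = 1..5")
```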
By Corollary 2.5 and the factorization \(P\mathscr {D}^{(\nu )} = \left( \frac{\mathrm{d}P}{\mathrm{d}x}\right) S^{(\nu )}\), we see that \(S^{(\nu )}\) maps \(P_{n-1}^{(\nu +1)}\) to a multiple of \(P_{n}^{(\nu )}\), and this multiple can be obtained from considering leading coefficients, or by considering the eigenvalues of \(\mathscr {D}^{(\nu )}\) obtained from (2.3) and Theorem 3.2. This gives
In particular, \(S^{(\nu )}\) is a shift or raising operator.
Now we use Corollary 2.5 to see that
since \(K_n^{(\nu )}\) is a real diagonal matrix. Note that the matrices commute, so we do not need to specify the order. It suffices to calculate \(H_0^{(\nu )}\) for general \(\nu >0\), and this follows immediately from the explicit expression of Definition 2.1 and the orthogonality relations of the (scalar) Gegenbauer polynomials (1.3). This leads to (i).
For (iii), we use Theorem 2.4 to write \(\bigl ( QS^{(\nu )}\bigr )(x)= \bigl (c^{(\nu )}\bigr )^{-1} \frac{\mathrm{d}}{\mathrm{d}x}\bigl ( Q(x) W^{(\nu +1)}(x)\bigr ) W^{(\nu )}(x)^{-1}\), which is a raising operator preserving polynomials. Iterating shows
Now we take \(Q(x)= P^{(\nu +n)}_0(x) = \text {I}\), so that the left-hand side equals \(\prod _{i=0}^{n-1} c^{(\nu +i)}K^{(\nu +i)}_{n-i}\, P^{(\nu )}_n(x)\). A calculation gives the diagonal matrix \(G_n^{(\nu )}\), and we obtain the Rodrigues formula (iii). \(\square \)
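The scalar model for the Rodrigues formula (iii) is the classical Rodrigues formula for the Gegenbauer polynomials, \(C^{(\nu )}_n(x)=\frac{(-1)^n}{2^n n!}\frac{(2\nu )_n}{(\nu +\frac{1}{2})_n}(1-x^2)^{\frac{1}{2}-\nu }\frac{\mathrm{d}^n}{\mathrm{d}x^n}(1-x^2)^{n+\nu -\frac{1}{2}}\) (the constant is the standard one, quoted from the classical theory rather than from the text). A symbolic check at a sample parameter:

```python
import sympy as sp

x = sp.symbols('x')
nu = sp.Rational(2)  # sample parameter

# Classical Rodrigues formula for Gegenbauer polynomials
for n in range(5):
    c = sp.Rational(-1, 2)**n/sp.factorial(n)*sp.rf(2*nu, n)/sp.rf(nu + sp.Rational(1, 2), n)
    rod = c*(1 - x**2)**(sp.Rational(1, 2) - nu)*sp.diff((1 - x**2)**(n + nu - sp.Rational(1, 2)), x, n)
    assert sp.simplify(rod - sp.gegenbauer(n, nu, x)) == 0
print("Rodrigues formula verified for n = 0..4")
```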
5.2 Eigenfunctions of Differential Operators and Scalar Orthogonal Polynomials
Proof of Theorem 3.2
Since the differential operators of Theorem 2.3 are symmetric with respect to the weight \(W^{(\nu )}\) on \((-1,1)\), and since they also preserve polynomials of any fixed degree, applying such a differential operator to \(P^{(\nu )}_n\) yields an orthogonal polynomial of degree n with respect to the weight \(W^{(\nu )}\) on \((-1,1)\). Hence it can be written as \(\varLambda _n P^{(\nu )}_n\), and the eigenvalue matrix \(\varLambda _n\) follows by considering leading coefficients. \(\square \)
Final part of the proof of Theorem 2.3
Theorem 2.3 is proved in Sect. 4.2, except for the commutativity of \(E^{(\nu )}\) and \(D^{(\nu )}\). By Theorem 3.2, we see that \(E^{(\nu )}\) and \(D^{(\nu )}\) acting on the matrix-valued Gegenbauer-type polynomials commute, since the diagonal eigenvalue matrices \(\varLambda _n(E^{(\nu )})\) and \(\varLambda _n(D^{(\nu )})\) commute. By [22, Proposition 2.8], it follows that \(E^{(\nu )}\) and \(D^{(\nu )}\) commute.\(\square \)
We can now use the differential operators, especially the diagonal (or uncoupled) differential operator of Lemma 4.3, to obtain precise information on the matrix-valued orthogonal polynomials. In particular, we can derive Theorem 3.4 in this way. Indeed, \(P^{(\nu )}_n\) are eigenfunctions of \(D^{(\nu )}-2\ell E^{(\nu )}\). Hence, Lemma 4.3 shows that \(\mathcal {R}^{(\nu )}_n(x)\, = \, P^{(\nu )}_n(x)L^{(\nu )}(x)\) are eigenfunctions of a diagonal differential operator. Note that in general \(\mathcal {R}^{(\nu )}_n\) is a matrix-valued polynomial of degree \(n+2\ell \) with highly singular leading coefficient. Because of Lemma 4.3, the matrix entry \(\bigl (\mathcal {R}^{(\nu )}_n(x)\bigr )_{k,j}\) is a polynomial solution to the hypergeometric differential operator
Since the polynomial solutions are unique up to a constant, we find
Similarly, by Theorem 3.2 and Lemma 4.3, we have that \(\mathcal {R}^{(\nu )}_n (E^{(\nu )})\widetilde{\ } = \varLambda _n(E^{(\nu )})\mathcal {R}^{(\nu )}_n\). Note that the eigenvalue does not change. Using the explicit expression of Lemma 4.3 and that \(\varLambda _n(E^{(\nu )})\) is diagonal, plugging in \(x=1\) gives the recurrence
which can be solved in terms of Racah polynomials as in [30], see also [39];
Since the inverse of the lower triangular matrix \(L^{(\nu )}\) has been calculated explicitly by Cagliero and Koornwinder [9, Thm. 4.1] as
the proof of Theorem 3.4 follows directly from (5.1) and (5.2) up to an explicit expression of \(c^{(\nu )}_{k,0}\).
It remains to compute \(c^{(\nu )}_{k,0}\). If we combine \(\mathcal {R}^{(\nu )}_n=P_n^{(\nu )}L^{(\nu )}\) with Theorem 3.1(ii), we obtain
It follows from (5.1) and Proposition 4.4 that the (k, 0)th entry of (5.4), evaluated at \(x=1\), is given by
If we write \(c_{k,2}^{(\nu )}(n)\) in terms of \(c_{k,0}^{(\nu )}(n)\) and \(c_{k,1}^{(\nu )}(n)\) using the recurrence (5.2) for \(i=1\), and we replace in (5.5), we obtain
Finally, from the recurrence (5.2) for \(i=0\), we write \(c_{k,1}^{(\nu )}(n)\) in terms of \(c_{k,0}^{(\nu )}(n)\). If we replace this in (5.6), we get
Moreover, since \(\mathcal {R}_0^{(\nu )}=L^{(\nu )}\), we have \(c_{k,0}^{(\nu )}(0)=(2\nu )_k\), and therefore the recurrence (5.7) is solved explicitly by
Observe that for the case \(\nu =1\), the initial value \(c_{k,0}^{(\nu )}\) differs from that in [30] by a factor \((-1)^n 2^{-n} (n+k)!/(2\nu )_{n+k}\). This is due to a different normalization in (5.1) and to the fact that the orthogonality interval is \([-1,1]\) here, whereas it is \([0,1]\) in [30, §6].
5.3 Three-Term Recurrence Relation
To calculate the coefficients \(B^{(\nu )}_n, C^{(\nu )}_n\) in Theorem 3.3, we first note that \(C^{(\nu )}_n\) follows from the squared norm
and the explicit expression of Theorem 3.1.
Writing \(P_n^{(\nu )}(x) = x^n\text {I}+ x^{n-1} X^{(\nu )}_n + \cdots \), we find
by comparing coefficients of \(x^{n}\) in the three-term recurrence relation. Now Theorem 3.3 follows from these observations and Lemma 5.1.
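The scalar model is the three-term recurrence for the Gegenbauer polynomials, \(xC^{(\nu )}_n=\frac{n+1}{2(n+\nu )}C^{(\nu )}_{n+1}+\frac{n+2\nu -1}{2(n+\nu )}C^{(\nu )}_{n-1}\); the coefficient of \(C^{(\nu )}_n\) vanishes because the scalar weight is even. A symbolic check at a sample parameter:

```python
import sympy as sp

x = sp.symbols('x')
nu = sp.Rational(5, 2)  # sample parameter

# x C_n = (n+1)/(2(n+nu)) C_{n+1} + (n+2nu-1)/(2(n+nu)) C_{n-1}
for n in range(1, 6):
    lhs = x*sp.gegenbauer(n, nu, x)
    rhs = sp.Rational(n + 1)/(2*(n + nu))*sp.gegenbauer(n + 1, nu, x) \
        + (n + 2*nu - 1)/(2*(n + nu))*sp.gegenbauer(n - 1, nu, x)
    assert sp.expand(lhs - rhs) == 0
print("three-term recurrence verified for n = 1..5")
```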
Lemma 5.1
The one-but-leading coefficient \(X^{(\nu )}_n\) is given by
Proof
We differentiate \(P_n^{(\nu )}\) and use Theorem 3.1(ii) to find \((n-1) X^{(\nu )}_n = n X^{(\nu +1)}_{n-1}\). Iterating gives \(X^{(\nu )}_n = n X^{(\nu +n-1)}_{1}\), so that it suffices to check the polynomial of degree 1. Using the Rodrigues formula of Theorem 3.1(iii) and the adjoint of the relations in Theorem 2.4, we find \(P_1^{(\nu )}(x)= c^{(\nu )} G_1^{(\nu )} \bigl ( \varPsi ^{(\nu )}(x)\bigr )^*\). Using (4.10), we find \(X^{(\nu )}_1\), and hence \(X^{(\nu )}_n\). \(\square \)
5.4 Commutant
The commutant algebra \(A^{(\nu )}\) in Proposition 2.6 is a \(*\)-algebra, hence is generated by its self-adjoint elements. Let \(T=T^*\in A^{(\nu )}\) be an invertible self-adjoint element; then \(P\mapsto PT\) is a symmetric operator with respect to \(W^{(\nu )}\), namely a matrix-valued differential operator of order 0. Since it preserves the polynomials, the matrix-valued Gegenbauer-type polynomials are eigenfunctions. Hence, \(P_n^{(\nu )}(x) T = T P_n^{(\nu )}(x)\) for all n and all x by comparing leading coefficients, cf. [28, Lem. 3.1(2)]. In particular, T commutes with the one-but-leading coefficient of \(P_n^{(\nu )}\), i.e., \(X_n^{(\nu )} T=T X_n^{(\nu )}\) for all \(n\in \mathbb {N}\). From the explicit expression of \(X^{(\nu )}_n\) in Lemma 5.1, we find that \(T_{i,j}=0\) unless \(j=i\) or \(j=2\ell -i\), that \(T_{i,i}=T_{i+1,i+1}\), and that \(T_{i,2\ell -i}=T_{i+1,2\ell -i-1}\) for all \(i=0,\ldots ,2\ell -1\). This implies that \(T\in \langle \{I,J\}\rangle \).
Observe that \(W^{(\nu )}(x)J=JW^{(\nu )}(x)\) for all \(x\in (-1,1)\) if and only if \((W^{(\nu )}(x))_{2\ell -n,m} =(W^{(\nu )}(x))_{n,2\ell -m}\) for all n, m and all x. This means that the weight matrix \(W^{(\nu )}(x)\) is persymmetric (or orthosymmetric). Using Definition 2.1 and comparing coefficients of the Gegenbauer polynomials, we need to prove
This is straightforwardly verified from the expression in Definition 2.1.
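The symmetry condition \(W^{(\nu )}(x)J=JW^{(\nu )}(x)\) is easy to test numerically for any given matrix: with \(J\) the anti-diagonal permutation matrix, it is exactly the entrywise condition stated above. A generic sketch (on an illustrative matrix, not the actual weight \(W^{(\nu )}\)):

```python
import numpy as np

# W commutes with the anti-diagonal flip J (J_{i,j} = delta_{i, N-1-j})
# iff W_{N-1-n, m} = W_{n, N-1-m} for all n, m.
def commutes_with_flip(W, tol=1e-12):
    J = np.fliplr(np.eye(W.shape[0]))
    return np.allclose(J @ W, W @ J, atol=tol)

# Illustrative matrix, symmetric about its center, hence commuting with J
W = np.array([[1., 2., 3.],
              [4., 5., 4.],
              [3., 2., 1.]])
print(commutes_with_flip(W))  # True
```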
References
Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover, New York (1992)
Aptekarev, A.I., Nikishin, E.M.: The scattering problem for a discrete Sturm–Liouville operator. Math. USSR-Sb. 49, 325–355 (1984)
Aldenhoven, N., Koelink, E., Román, P.: Matrix-valued orthogonal polynomials related to the quantum analogue of \((\mathrm{SU}(2) \times \mathrm{SU}(2), \mathrm{diag})\). Ramanujan J. 43, 243–311 (2017)
Álvarez-Fernández, C., Ariznabarreta, G., García-Ardila, J.C., Mañas, M., Marcellán, F.: Christoffel transformations for matrix orthogonal polynomials in the real line and the non-Abelian 2D Toda lattice hierarchy. Int. Math. Res. Not. 2017, 1285–1341 (2017)
Andrews, G.E., Askey, R.A., Roy, R.: Special Functions. Cambridge University Press, Cambridge (1999)
Ariznabarreta, G., Mañas, M.: Matrix orthogonal Laurent polynomials on the unit circle and Toda type integrable systems. Adv. Math. 264, 396–463 (2014)
Bailey, W.N.: Generalized Hypergeometric Series. Hafner, New York (1964)
Berezanskii, J.M.: Expansions in Eigenfunctions of Selfadjoint Operators. Translations of Mathematical Monographs. AMS, Providence (1968)
Cagliero, L., Koornwinder, T.H.: Explicit matrix inverses for lower triangular matrices with entries involving Jacobi polynomials. J. Approx. Theory 193, 20–38 (2015)
Cantero, M.J., Moral, L., Velázquez, L.: Matrix orthogonal polynomials whose derivatives are also orthogonal. J. Approx. Theory 146, 174–211 (2007)
Casper, W.R.: Elementary examples of solutions to Bochner’s problem for matrix differential operators. arXiv:1509.03674v2
Castro, M., Grünbaum, F.A.: The Darboux process and time-and-band limiting for matrix orthogonal polynomials. Linear Alg. Appl. 487, 328–341 (2015)
Damanik, D., Pushnitski, A., Simon, B.: The analytic theory of matrix orthogonal polynomials. Surv. Approx. Theory 4, 1–85 (2008)
Durán, A.J.: Generating orthogonal matrix polynomials satisfying second order differential equations from a trio of triangular matrices. J. Approx. Theory 161, 88–113 (2009)
Durán, A.J., Grünbaum, F.A.: Orthogonal matrix polynomials satisfying second-order differential equations. Int. Math. Res. Not. 2004, 461–484 (2004)
Durán, A.J., Van Assche, W.: Orthogonal matrix polynomials and higher-order recurrence relations. Linear Algebra Appl. 219, 261–280 (1995)
Gangolli, R., Varadarajan, V.S.: Harmonic Analysis of Spherical Functions on Real Reductive Groups. Ergebnisse der Mathematik und ihrer Grenzgebiete, vol. 101. Springer, Berlin (1988)
Gekhtman, M.: Hamiltonian structure of non-abelian Toda lattice. Lett. Math. Phys. 46, 189–205 (1998)
Geronimo, J.S.: Scattering theory and matrix orthogonal polynomials on the real line. Circuits Syst. Signal Process. 1, 471–495 (1982)
Groenevelt, W., Ismail, M.E.H., Koelink, E.: Spectral theory and matrix-valued orthogonal polynomials. Adv. Math. 244, 91–105 (2013)
Grünbaum, F.A.: The Darboux process and a noncommutative bispectral problem: some explorations and challenges. In: van den Ban, E.P., Kolk, J.A.C. (eds.) Geometric Aspects of Analysis and Mechanics, Progress in Mathematics, vol. 292, pp. 161–177. Birkhäuser, Boston (2011)
Grünbaum, F.A., Tirao, J.: The algebra of differential operators associated to a weight matrix. Integral Equ. Oper. Theory 58, 449–475 (2007)
Grünbaum, F.A., Pacharoni, I., Tirao, J.: Matrix valued spherical functions associated to the complex projective plane. J. Funct. Anal. 188, 350–441 (2002)
Heckman, G., van Pruijssen, M.: Matrix valued orthogonal polynomials for Gelfand pairs of rank one. Tohoku Math. J. (2) 68, 407–437 (2016)
Ismail, M.E.H.: Classical and Quantum Orthogonal Polynomials in One Variable. Cambridge University Press, Cambridge (2009)
Koekoek, R., Lesky, P.A., Swarttouw, R.F.: Hypergeometric Orthogonal Polynomials and their \(q\)-Analogues. Springer, Berlin (2010)
Koekoek, R., Swarttouw, R.F.: The Askey-scheme of hypergeometric orthogonal polynomials and its \(q\)-analogue. http://aw.twi.tudelft.nl/~koekoek/askey.html. Report 98-17, Technical University Delft (1998)
Koelink, E., Román, P.: Orthogonal vs. non-orthogonal reducibility of matrix-valued measures. SIGMA 12, 008 (2016)
Koelink, E., van Pruijssen, M., Román, P.: Matrix valued orthogonal polynomials related to \((\mathrm{SU}(2)\times \mathrm{SU}(2),\mathrm{diag})\). Int. Math. Res. Not. 2012, 5673–5730 (2012)
Koelink, E., van Pruijssen, M., Román, P.: Matrix valued orthogonal polynomials related to \((\mathrm{SU}(2)\times \mathrm{SU}(2),\mathrm{diag})\). Publ. RIMS Kyoto 49, 271–312 (2013)
Koornwinder, T.H.: Matrix elements of irreducible representations of SU(2) \(\times \) SU(2) and vector-valued orthogonal polynomials. SIAM J. Math. Anal. 16, 602–613 (1985)
Koornwinder, T.H.: Compact quantum groups and q-special functions. In: Baldoni, V., Picardello, M. (eds.) Representation of Lie Groups and Quantum Groups, Pitman Research Notes in Mathematics Series, vol. 311, pp. 46–128. Longman Science Technology, London (1994)
Krein, M.G.: Infinite J-matrices and a matrix moment problem. Dokl. Akad. Nauk SSSR 69, 125–128 (1949)
Krein, M.G.: Fundamental aspects of the representation theory of hermitian operators with deficiency index \((m, m)\). AMS Transl. Ser. 2 97, 75–143 (1971)
Lance, E.C.: Hilbert C\(^\ast \)-Modules: A Toolkit for Operator Algebraists. London Mathematical Society Lecture Note Series, vol. 210. Cambridge University Press, Cambridge (1995)
Opdam, E.M.: Some applications of hypergeometric shift operators. Invent. Math. 98, 1–18 (1989)
Pacharoni, I., Tirao, J.: Matrix valued orthogonal polynomials arising from the complex projective space. Constr. Approx. 25, 177–192 (2007)
Pacharoni, I., Zurrián, I.: Matrix Gegenbauer polynomials: the \(2\times 2\) fundamental cases. Constr. Approx. 43, 253–271 (2016)
Pacharoni, I., Tirao, J., Zurrián, I.: Spherical functions associated to the three dimensional sphere. Ann. Mat. Pura Appl. (4) 193, 1727–1778 (2014)
Rahman, M., Suslov, S.K.: The Pearson equation and the beta integrals. SIAM J. Math. Anal. 25, 646–693 (1994)
Tirao, J.: The matrix-valued hypergeometric equation. Proc. Natl. Acad. Sci. USA 100, 8138–8141 (2003)
Tirao, J., Zurrián, I.: Reducibility of matrix weights. Ramanujan J. arXiv:1501.04059v4
van Pruijssen, M.L.A.: Matrix valued orthogonal polynomials related to compact Gelfand pairs of rank one. Ph.D. thesis, Radboud Universiteit (2012)
van Pruijssen, M.: Multiplicity free induced representations and orthogonal polynomials. Int. Math. Res. Not. doi:10.1093/imrn/rnw295 (2017)
Wilson, J.: Some hypergeometric orthogonal polynomials. SIAM J. Math. Anal. 11, 690–701 (1980)
Acknowledgements
We thank Ruiming Zhang for asking the questions at a presentation by one of us (EK) on [29, 30], which eventually led to this paper. Part of the work was done while EK was visiting Universidad Nacional Córdoba and while AMdlR or PR was visiting Radboud Universiteit. We thank both universities for their hospitality. We thank the referees for their input.
Additional information
Communicated by Tom H. Koornwinder.
Dedicated to Mourad E. H. Ismail.
The research of AMdlR is partially supported by MTM2012-36732-C03-03 (Ministerio de Economía y Competitividad), FQM-262, FQM-4643, FQM-7276 (Junta de Andalucía) and Feder Funds (European Union). PR was supported by a NWO-Visiting Grant 040.11.366. PR was also supported by the Coimbra Group Scholarships Programme at KULeuven.
Appendices
Appendix A: Proof of the LDU-Decomposition
In Appendix A, we prove Theorem 2.2. The proof is an extension of the result [30, Thm. 2.1, App. A] for the case \(\nu =1\). Note that the proof is a verification; the result was initially obtained by computer algebra for specific values of \(\nu \) and \(\ell \). We indicate all the steps, and we leave out the manipulation of shifted factorials. We assume \(\nu >0\), but it is possible to extend the argument to \(\nu >-\frac{1}{2}, \nu \not =0\).
By using the symmetry and taking matrix elements, we see that the LDU-decomposition is equivalent to proving
for \(n\ge m\). Note first that the right-hand side is a polynomial of degree \(n+m\). We need to find its expansion in terms of Gegenbauer polynomials \(C^{(\nu )}_r(x)\), so that we need the integral
Since the Gegenbauer polynomials have a definite parity, and since by the orthogonality relations for the Gegenbauer polynomials \(C^{(\nu +k)}_n\) this integral is zero for \(n+m-r\) odd and for \(r<n-m\), the expansion of the right-hand side of (A.1) in terms of Gegenbauer polynomials \(C^{(\nu )}_r(x)\) has nonzero terms only for the summands occurring in the left-hand side of (A.1). Using (1.3), it remains to prove the identity
for \(n\ge m\) and \(t\in \{0,\ldots ,m\}\).
The integral in (A.2) has been evaluated in [30, Rmk. 2.8], and in order to prove (A.2), we follow the steps in [30, App. A]. The integral in (A.2) is equal to a balanced \({}_4F_3\)-series [30, Rmk. 2.8],
Using Whipple’s transformation, see, e.g., [5, Thm. 3.3], twice (once with (n, a, d) of [5, Thm. 3.3] as \((m-k,-m-2\nu -k+1,-m)\) and the second time as \((t,-m-n-\nu +t,-m)\)), the \({}_4F_3\)-series can be rewritten as
We now plug the obtained expression for the integral into the right-hand side of (A.2), where we replace the \({}_4F_3\)-series of (A.3) by its sum \(\sum _{j=0}^k\). We next interchange the summations over k and j, and in the summation \(\sum _{k=j}^m\), we replace \(k=p+j\). This gives for the inner sum (the k-dependent part):
using the Dougall summation formula for a very-well-poised \({}_5F_4\)-series, see, e.g., [5, Cor. 4.3], [7, §4.3(3)]. This shows that the right-hand side of (A.2) can be written as a single sum; explicitly,
The balanced \({}_3F_2\)-series is summable to \(\frac{(1-m-n-2\nu )_t (-2\ell -\nu )_t}{(\nu )_t (2\ell +1-m-n)_t}\) by the Pfaff–Saalschütz formula, see [5, Thm. 2.2.6], [7, §2.2], [25, (1.4.5)]. Next a straightforward verification using the expression of Definition 2.1 shows that this is equal to the left-hand side of (A.2), which proves Theorem 2.2.
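Since the series is terminating and the parameters in the applications above are rational, the Pfaff–Saalschütz summation \({}_3F_2(-n,a,b;c,1+a+b-c-n;1)=\frac{(c-a)_n(c-b)_n}{(c)_n(c-a-b)_n}\) can be tested in exact arithmetic; a small self-contained check at sample parameters:

```python
from fractions import Fraction

def rf(a, k):
    """Rising factorial (a)_k."""
    p = Fraction(1)
    for i in range(k):
        p *= a + i
    return p

def f32_unit(n, a, b, c):
    """Terminating balanced 3F2(-n, a, b; c, 1+a+b-c-n; 1)."""
    d = 1 + a + b - c - n
    return sum(rf(-n, k)*rf(a, k)*rf(b, k)/(rf(c, k)*rf(d, k)*rf(1, k))
               for k in range(n + 1))

# Pfaff-Saalschuetz: 3F2 = (c-a)_n (c-b)_n / ((c)_n (c-a-b)_n)
n, a, b, c = 4, Fraction(3, 2), Fraction(5, 3), Fraction(7, 2)
lhs = f32_unit(n, a, b, c)
rhs = rf(c - a, n)*rf(c - b, n)/(rf(c, n)*rf(c - a - b, n))
print(lhs == rhs)  # True
```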
Appendix B: Proof of Lemma 4.3
The following properties of the Gegenbauer polynomials
are useful, see, e.g., [25, §4.5]. We follow the convention that polynomials of negative degree are 0.
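The displays (B.1)–(B.3) are not reproduced here; relations of this type are the three-term recurrence, the derivative formula, and parameter-shift (contiguous) relations for the Gegenbauer polynomials. As an example of the latter kind, the standard identity \(C^{(\nu )}_n=\frac{\nu }{n+\nu }\bigl (C^{(\nu +1)}_n-C^{(\nu +1)}_{n-2}\bigr )\) can be verified symbolically:

```python
import sympy as sp

x, nu = sp.symbols('x nu')

# (n+nu) C_n^{(nu)} = nu (C_n^{(nu+1)} - C_{n-2}^{(nu+1)}), cleared of denominators
for n in range(2, 7):
    diff = (n + nu)*sp.gegenbauer(n, nu, x) \
         - nu*(sp.gegenbauer(n, nu + 1, x) - sp.gegenbauer(n - 2, nu + 1, x))
    assert sp.expand(diff) == 0
print("parameter-shift relation verified for n = 2..6")
```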
First we observe that \(D^{(\nu )}-2\ell E^{(\nu )}=L^{(\nu )}(D^{(\nu )}-2\ell E^{(\nu )})\widetilde{\ }(L^{(\nu )})^{-1}\) if and only if
If we multiply the (m, k)th entry of (B.5) by \(m!/(k!(2\nu +2k)_{m-k})\), we obtain
Since \(-k(2\nu +k)+2\ell (\nu +2\ell +1)-m(2\ell -m)+(2\ell +2)(m-2\ell )-2(\nu -1)(\ell -m) =(m-k)(m+k+2\nu )\), (B.6) is the differential equation for the Gegenbauer polynomial \(C^{(\nu +k)}_{m-k}\), see [25, §4.5], and hence (B.5) holds true.
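The Gegenbauer differential equation invoked here reads \((1-x^2)y''-(2\nu +1)xy'+n(n+2\nu )y=0\) for \(y=C^{(\nu )}_n\); in (B.6) it is applied with parameter \(\nu +k\) and degree \(m-k\). A symbolic check with symbolic parameter:

```python
import sympy as sp

x, nu = sp.symbols('x nu')

# (1-x^2) y'' - (2nu+1) x y' + n(n+2nu) y = 0 for y = C_n^{(nu)}
for n in range(6):
    y = sp.gegenbauer(n, nu, x)
    ode = (1 - x**2)*sp.diff(y, x, 2) - (2*nu + 1)*x*sp.diff(y, x) + n*(n + 2*nu)*y
    assert sp.expand(ode) == 0
print("Gegenbauer differential equation verified for n = 0..5")
```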
On the other hand, if we multiply (B.4) by \(m!/(k!(2\nu +2k)_{m-k})\), we obtain
which can be verified using [1, (22.7.21)].
In order to prove the statement for \((E^{(\nu )} )\widetilde{\ }\), we observe that \(E^{(\nu )}=L^{(\nu )}(E^{(\nu )})\widetilde{\ }(L^{(\nu )})^{-1}\) if and only if
If we multiply the first equation in (B.7) by \(m!/(k!(2\nu +2k)_{m-k})\), we obtain
On the left-hand side of (B.8), we apply (B.2) on the first term and (B.3) on the second term, and on the right-hand side of (B.8), we apply the three-term recurrence relation (B.1). Now (B.8) only involves Gegenbauer polynomials \(C^{(\nu +k)}_{m-k-1}\) and \(C^{(\nu +k)}_{m-k+1}\), and we verify (B.8) by showing that the coefficients for each of these polynomials are equal.
If we multiply the second equation in (B.7) by \(m!/(k!(2\nu +2k)_{m-k})\), we obtain
In order to verify (B.9), we proceed as before. We use (B.2), (B.3), and (B.1) to write (B.9) as a combination of Gegenbauer polynomials with parameter \(\nu +k\), and we verify that the corresponding coefficients of both sides are equal.
The kth diagonal entry of \((D^{(\nu )}-2\ell E^{(\nu )})\widetilde{\ }\) is, up to a constant, the differential operator for the Gegenbauer polynomials \(C^{(\nu +k)}_n\), and hence it is symmetric with respect to the kth diagonal entry of \(T^{(\nu )}\), which is the weight for the Gegenbauer polynomials with parameter \(\nu +k\). This implies the symmetry of \((D^{(\nu )}-2\ell E^{(\nu )})\widetilde{\ }\).
We prove the symmetry of the operator \((E^{(\nu )})\widetilde{\ }\) by showing that the symmetry and boundary conditions in Lemma 4.1 hold true, taking \(F_2=0\). The boundary conditions follow directly from the explicit expressions of \(T^{(\nu )}\) in Theorem 2.2 and of \(S_1^{(\nu )}\) in Lemma 4.3. Finally we need to prove
Writing down the \((k,k-1)\)th and \((k,k+1)\)th entries of the first equation of (B.10), we see that it is equivalent to \(-(T^{(\nu )}(x))_{k-1,k-1}/(T^{(\nu )}(x))_{k,k} = (S^{(\nu )}_1)_{k-1,k}/ (S^{(\nu )}_1)_{k,k+1}\), which can be verified from the explicit expressions of \(T^{(\nu )}\) and of \(S_1^{(\nu )}\). On the other hand, the only nonzero entries of the second equation in (B.10) are the \((k,k-1)\)th and \((k-1,k)\)th entries. The verification is straightforward from the explicit expressions of \(T^{(\nu )}, S_1^{(\nu )}\), and \(S_0^{(\nu )}\). \(\square \)
Appendix C: Proof of the Matrix-Valued Pearson Equation
Proof of Proposition 4.4
The first statement of Proposition 4.4 is equivalent to \(L^{(\nu +1)}(x)-L^{(\nu )}(x)M_1^{(\nu )}(x)=0\). The (n, m)-entry, divided by \(m!/(n!(2\nu +2n+2)_{m-n})\), of the left-hand side is given by
We use (B.2) in the second term of (C.1) and (B.3) in the third term of (C.1) to write (C.1) in terms of two Gegenbauer polynomials of parameter \(\nu +n+1\) and degrees \(m-n\) and \(m-n-2\). An easy calculation shows that both coefficients are zero; hence the first statement of Proposition 4.4 follows.
Using \(\frac{\mathrm{d}A^{-1}}{\mathrm{d}x} = -A^{-1} \frac{\mathrm{d}A}{\mathrm{d}x} A^{-1}\) and the first statement of Proposition 4.4, it suffices to prove
The (n, m)-entry of the left-hand side of (C.2), divided by \(m!/(n!(2\nu +2n)_{m-n})\), is given by
Applying (B.2) in the first and third term of (C.3), (B.3) in the second term of (C.3), and (B.1) in the fourth term of (C.3), we expand (C.3) in terms of two Gegenbauer polynomials of parameter \(\nu +n+2\) and of degrees \(m-n-1\) and \(m-n-3\). A straightforward computation shows that the coefficients of \(C_{m-n-1}^{(\nu +n+2)}\) and \(C_{m-n-3}^{(\nu +n+2)}\) are zero. \(\square \)
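The matrix-calculus identity \(\frac{\mathrm{d}A^{-1}}{\mathrm{d}x}=-A^{-1}\frac{\mathrm{d}A}{\mathrm{d}x}A^{-1}\) used above follows from differentiating \(AA^{-1}=\text {I}\); it can be illustrated on a small invertible polynomial matrix (illustrative only, not \(L^{(\nu )}\)):

```python
import sympy as sp

x = sp.symbols('x')

# Illustrative invertible polynomial matrix with constant determinant 2
A = sp.Matrix([[1, x], [x**2, 2 + x**3]])

# d(A^{-1})/dx = -A^{-1} (dA/dx) A^{-1}, from differentiating A A^{-1} = I
lhs = sp.diff(A.inv(), x)
rhs = -A.inv()*sp.diff(A, x)*A.inv()
print(sp.simplify(lhs - rhs))  # zero matrix
```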
For the proof of Theorem 2.4, we finally have to prove (4.7). For this we rewrite (4.7) as
We write the three-diagonal matrices \(A^{(\nu )} \) and \(\varPsi ^{(\nu )}\) as
where the explicit expressions for the matrix entries follow from (4.10) and (4.8). The (n, m)-entry of the left-hand side of (C.4) is given by
Now we apply (B.2) in the first term, the three-term recurrence relation (B.1) in the second and fifth term, and (B.3) in the third term in order to rewrite this expression in terms of just two Gegenbauer polynomials \(C^{(\nu )}_{m-n-1}\) and \(C^{(\nu )}_{m-n+1}\). Using the explicit expressions of \(A^{(\nu )}\) and \(\varPsi ^{(\nu )}\), it is a straightforward verification that the coefficients of \(C^{(\nu )}_{m-n+1}\) and \(C^{(\nu )}_{m-n-1}\) are zero. This completes the proof of (4.5), and hence of Theorem 2.4.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Koelink, E., de los Ríos, A.M. & Román, P. Matrix-Valued Gegenbauer-Type Polynomials. Constr Approx 46, 459–487 (2017). https://doi.org/10.1007/s00365-017-9384-4
Keywords
- Matrix-valued orthogonal polynomials
- Matrix-valued differential operators
- Gegenbauer polynomials
- Shift operator
- Darboux factorization