Matrix-Valued Gegenbauer-Type Polynomials

We introduce matrix-valued weight functions of arbitrary size, which are analogues of the weight function for the Gegenbauer or ultraspherical polynomials for the parameter $\nu>0$. The LDU-decomposition of the weight is explicitly given in terms of Gegenbauer polynomials. We establish a matrix-valued Pearson equation for these matrix weights, leading to explicit shift operators relating the weights for parameters $\nu$ and $\nu+1$. The matrix coefficients of the Pearson equation are obtained using a special matrix-valued differential operator in a commutative algebra of symmetric differential operators. The corresponding orthogonal polynomials are the matrix-valued Gegenbauer-type polynomials, which are eigenfunctions of the symmetric matrix-valued differential operators. Using the shift operators, we find the squared norm, and we establish a simple Rodrigues formula. The three-term recurrence relation is obtained explicitly using the shift operators as well. We give an explicit nontrivial expression for the matrix entries of the matrix-valued Gegenbauer-type polynomials in terms of scalar-valued Gegenbauer and Racah polynomials using the LDU-decomposition and differential operators.
The case $\nu=1$ reduces to the case of matrix-valued Chebyshev polynomials previously obtained using group-theoretic considerations.


Introduction
Matrix-valued orthogonal polynomials have been introduced and studied by Krein in connection with spectral analysis and moment problems, see [8,33,34]. Matrix-valued orthogonal polynomials have applications in the study of higher-order recurrences and their spectral analysis (see, e.g., [2,16,19,20]) and the study of Toda lattices [4,6,18], to name a few.
Using matrix-valued spherical functions [17], one is able to associate matrix-valued orthogonal polynomials with certain symmetric pairs. The first case was studied by Grünbaum et al. [23] for the symmetric pair $(G,K)=(\mathrm{SU}(3),\mathrm{U}(2))$, relying heavily on invariant differential operators. Another, more direct, approach was developed in [29,30] for the case of $(\mathrm{SU}(2)\times\mathrm{SU}(2),\mathrm{diag})$, inspired by [31]. See also [24,42,43] for the general set-up in the context of so-called multiplicity-free pairs. In particular, [29,30] give a detailed study of the matrix-valued orthogonal polynomials which can be considered as matrix-valued analogues of the Chebyshev polynomials, i.e., of the spherical polynomials on $(\mathrm{SU}(2)\times\mathrm{SU}(2),\mathrm{diag})$, better known as the characters of $\mathrm{SU}(2)$; see also [3] for the quantum group case.
The purpose of this paper is to extend the family of matrix-valued Chebyshev polynomials in [29,30] to a family of matrix-valued Gegenbauer-type polynomials using shift operators, where the lowering operator is the derivative. In the matrix-valued setting, the raising operator, being the adjoint of the derivative, is harder to obtain and involves a matrix-valued analogue of the Pearson equation. The ingredients for the matrix-valued Pearson equation are obtained from a matrix-valued differential operator that is well suited for a Darboux factorization. The use of shift operators is a well-established technique in special functions, see, e.g., [32,36] for explicit examples and [5,25,26] for general information.
Let us recall the classical case of the shift operators for the Gegenbauer polynomials. The Chebyshev polynomials can be considered as the Gegenbauer polynomials $C^{(\nu)}_n$ with $\nu=1$, and by successive differentiation, the Gegenbauer polynomials $C^{(\nu)}_n$ with $\nu\in\mathbb{N}\setminus\{0\}$ can be obtained. Moreover, many properties of the Chebyshev polynomials can be transported to the Gegenbauer polynomials $C^{(\nu)}_n$ with integer $\nu$. Then one can obtain the general family of Gegenbauer polynomials $C^{(\nu)}_n$ by continuation in $\nu$, or one can extend differentiation using suitable fractional integral operators. Let us recall this in some more detail, see, e.g., [5,25,26]. The Gegenbauer polynomials are eigenfunctions of an explicit second-order differential operator of hypergeometric type,
$$(1-x^2)\frac{d^2 C^{(\nu)}_n}{dx^2}(x) - (2\nu+1)\,x\,\frac{dC^{(\nu)}_n}{dx}(x) = -n(n+2\nu)\,C^{(\nu)}_n(x). \eqno(1.2)$$
The orthogonality for the Gegenbauer polynomials is
$$\int_{-1}^{1} C^{(\nu)}_n(x)\,C^{(\nu)}_m(x)\,w^{(\nu)}(x)\,dx = \delta_{nm}\,\frac{2^{1-2\nu}\pi\,\Gamma(n+2\nu)}{n!\,(n+\nu)\,\Gamma(\nu)^2} \eqno(1.3)$$
with $w^{(\nu)}(x) = (1-x^2)^{\nu-1/2}$. In particular, using (1.1) shows that $\frac{dC^{(\nu)}_n}{dx}(x) = 2\nu\,C^{(\nu+1)}_{n-1}(x)$. We view the derivative $\frac{d}{dx}$ as an unbounded operator $L^2(w^{(\nu)})\to L^2(w^{(\nu+1)})$ with respect to the integral over $(-1,1)$. It is a shift (or lowering) operator, since it lowers the degree of a polynomial by one. Its adjoint $-S^{(\nu)}\colon L^2(w^{(\nu+1)})\to L^2(w^{(\nu)})$ is explicitly given by $S^{(\nu)} = (1-x^2)\frac{d}{dx} - (2\nu+1)x$. This follows from the Pearson equation
$$\frac{d}{dx}\bigl(w^{(\nu+1)}(x)\bigr) = -(2\nu+1)\,x\,w^{(\nu)}(x), \eqno(1.4)$$
see also [40]. It follows that $S^{(\nu)}\colon C^{(\nu+1)}_{n-1}\mapsto \frac{-n(2\nu+n)}{2\nu}\,C^{(\nu)}_n$ by considering the orthogonality relations and the leading coefficients. Then $D^{(\nu)} = S^{(\nu)}\circ\frac{d}{dx}$ gives a factorization, in terms of a lowering and a raising operator, of the differential operator (1.2), which has the Gegenbauer polynomials as eigenfunctions. In particular, $C^{(\nu+1)}_{n-1}(x)$, being a multiple of the derivative of $C^{(\nu)}_n(x)$, is an eigenfunction of $\frac{d}{dx}\circ S^{(\nu)}$. This Darboux transformation, i.e., interchanging the raising and lowering operators, gives $\frac{d}{dx}\circ S^{(\nu)} = D^{(\nu+1)} - (2\nu+1)$, and this is also known as a transmutation property.
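These scalar relations are easy to check numerically. The sketch below is ours, not the paper's; the explicit form $S^{(\nu)} = (1-x^2)\frac{d}{dx} - (2\nu+1)x$ is the reconstruction of the adjoint from the Pearson equation (1.4):

```python
import numpy as np
from scipy.special import gegenbauer

nu, n = 1.5, 4
x = np.linspace(-0.9, 0.9, 7)

C_n = gegenbauer(n, nu)              # C_n^{(nu)} as a numpy poly1d
C_low = gegenbauer(n - 1, nu + 1)    # C_{n-1}^{(nu+1)}

# lowering: d/dx C_n^{(nu)} = 2 nu C_{n-1}^{(nu+1)}
assert np.allclose(C_n.deriv()(x), 2 * nu * C_low(x))

# raising: S^{(nu)} = (1-x^2) d/dx - (2 nu + 1) x maps
# C_{n-1}^{(nu+1)} to -(n (2 nu + n) / (2 nu)) C_n^{(nu)}
Sf = (1 - x**2) * C_low.deriv()(x) - (2 * nu + 1) * x * C_low(x)
assert np.allclose(Sf, -(n * (2 * nu + n) / (2 * nu)) * C_n(x))
```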
The main results of this paper are stated in Sects. 2 and 3. In Sect. 2, we define the weight function, and we state its LDU-decomposition, which is essential in most of the proofs in this paper. The inverse of the matrix L in this decomposition is a particular case of the results of Cagliero and Koornwinder [9]. We obtain a commutative algebra generated by two matrix-valued differential operators that are symmetric with respect to the matrix weight. From these differential operators, we obtain the ingredients for a matrix-valued Pearson equation which, in turn, gives us the matrix-valued adjoint of the derivative. Finally, in Sect. 2, we describe the commutant algebra of the weight, showing that there is a nontrivial orthogonal decomposition for each weight. In [28], we show that no further reduction is possible. The proofs of these statements are given in Sect. 4. In Sect. 3, we discuss the corresponding monic matrix-valued orthogonal polynomials. Using the shift operators, we explicitly evaluate the squared norms, and we obtain a simple Rodrigues formula for the polynomials. Moreover, the polynomials are eigenfunctions of the matrix-valued differential operators of Sect. 2. The matrix entries of the monic matrix-valued orthogonal polynomials are explicitly calculated in terms of Gegenbauer and Racah polynomials. We give the three-term recurrence relation for the polynomials explicitly. We give the proofs in Sect. 5.
In Sect. 4, we give an elementary approach to symmetry of matrix-valued differential operators, which we expect to be useful in other cases as well. We have relegated all the proofs that only involve Gegenbauer polynomials to the appendices.
In the literature, there are more families of matrix-valued orthogonal polynomials that are considered as matrix analogues of the Gegenbauer polynomials, since, as is the case here, there is a reduction to Gegenbauer polynomials in case the dimension is one. We compare the results with other matrix-valued Gegenbauer-type orthogonal polynomials as in [14,37,38] in Remark 3.8. We note that the matrix-valued orthogonal Gegenbauer-type polynomials of this paper are better understood in the sense of explicit formulas and for larger dimensions than the ones in [14,37,38].
All proofs are direct and only use special functions. Nevertheless, we have benefited from the use of computer algebra, such as Maple, in order to arrive at explicit conjectures. This means that we have checked many of our results up to a sufficiently large size of the matrices involved. For the reader's convenience, the worksheet is available via the first author's webpage.

The Weight Function and Symmetric Differential Operators
The weight function is introduced by defining its matrix entries, which are taken with respect to the standard orthonormal basis $\{e_0, e_1, \ldots, e_{2\ell}\}$ of $\mathbb{C}^{2\ell+1}$ for $\ell\in\frac12\mathbb{N}$. We suppress $\ell$ from the notation as much as possible; it occurred initially as the spin of a representation of SU(2), see [29]. After defining the weight, we discuss its LDU-decomposition, and we study commuting symmetric matrix-valued differential operators for the weight. These differential operators follow directly from the requirement that the transmutation property with the derivative should hold. In this algebra we then obtain a symmetric second-order matrix-valued differential operator to which we can apply the Darboux factorization, which gives the entries for the matrix-valued Pearson equation as in [10].
where $n, m \in \{0, 1, \ldots, 2\ell\}$ and $n \ge m$. The matrix is extended to a symmetric matrix $W^{(\nu)}(x)$. In order to show that the matrix-valued weight fits into the general theory of matrix-valued orthogonal polynomials, we calculate its LDU-decomposition. This is also very useful in establishing the symmetry of matrix-valued differential operators and in the study of the derivative in this context.

Theorem 2.2
For $\nu > 0$, $W^{(\nu)}(x)$ has the following LDU-decomposition: Theorem 2.2 is proved in Appendix A, and it is a straightforward extension of the proof in [30]. $L(x)$ is invertible, its inverse being a unipotent lower-triangular matrix as well. Cagliero and Koornwinder [9] give the matrix entries of $L(x)^{-1}$ explicitly in terms of Gegenbauer polynomials with negative $\nu$, see (5.3). Note that Theorem 2.2 gives that $W^{(\nu)}(x) > 0$ for $x \in (-1,1)$. It is possible to extend Definition 2.1 and Theorem 2.2 to $-\frac12 < \nu \le 0$, but for $\nu = 0$ the weight $W^{(\nu)}$ is indefinite, and for $-\frac12 < \nu < 0$ it has nontrivial signature depending on the size. So in this paper we assume $\nu > 0$.
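The explicit LDU-decomposition is specific to $W^{(\nu)}$, but the general shape $W = L\,T\,L^*$ with $L$ unipotent lower triangular and $T$ diagonal can be illustrated numerically. The sketch below (our illustration on a random Hermitian positive-definite matrix, not the paper's formulas) extracts such a factorization from a Cholesky decomposition:

```python
import numpy as np

def ldu(W):
    """Return unipotent lower-triangular L and diagonal d with W = L diag(d) L*,
    for a Hermitian positive-definite W (Cholesky with the diagonal scaled out)."""
    C = np.linalg.cholesky(W)     # W = C C*
    diag = np.diag(C).real
    L = C / diag                  # scale columns to get a unit diagonal
    return L, diag**2

# illustrative Hermitian positive-definite sample (not the paper's W^{(nu)}(x))
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
W = A @ A.T + 4 * np.eye(4)
L, d = ldu(W)
assert np.allclose(L @ np.diag(d) @ L.T, W)
assert np.allclose(np.diag(L), 1.0)
```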
For matrix-valued functions $P$ and $Q$ with, say, $C([-1,1])$-entries, we define for $\nu > 0$ the matrix-valued inner product (2.1). Note that the integrals of the entries are well defined.
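For a numerical sanity check, a matrix-valued inner product of the form (2.1) can be approximated by Gauss–Legendre quadrature. The weight below is a hypothetical $2\times 2$ example, not the paper's $W^{(\nu)}$:

```python
import numpy as np

def mat_inner(P, Q, weight, npts=300):
    """Approximate <P,Q> = int_{-1}^1 P(x) W(x) Q(x)^* dx by Gauss-Legendre
    quadrature; P, Q, weight are callables returning (N, N) arrays."""
    x, w = np.polynomial.legendre.leggauss(npts)
    return sum(wk * P(t) @ weight(t) @ Q(t).conj().T for t, wk in zip(x, w))

# toy weight: sqrt(1-x^2) times the 2x2 identity (not the paper's W^{(nu)})
Wtoy = lambda t: np.sqrt(1 - t**2) * np.eye(2)
G = mat_inner(lambda t: np.eye(2), lambda t: np.eye(2), Wtoy)
# int_{-1}^{1} sqrt(1-x^2) dx = pi/2
assert np.allclose(G, (np.pi / 2) * np.eye(2), atol=1e-4)
```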
A matrix-valued differential operator acts from the right, where the coefficients are matrix-valued functions and $P$ is a matrix-valued function with $C^2$-entries. The derivatives in (2.2) of a matrix-valued function are taken entry-wise. A matrix-valued differential operator $D$ is symmetric for the weight $W^{(\nu)}$ if $\langle PD, Q\rangle^{(\nu)} = \langle P, QD\rangle^{(\nu)}$ for all matrix-valued functions $P$ and $Q$ with $C^2([-1,1])$-entries.

Theorem 2.3
For $\nu > 0$, let $D^{(\nu)}$ and $E^{(\nu)}$ be the matrix-valued differential operators where the matrices $C$, $V$, $B^{(\nu)}_0$, $B^{(\nu)}_1$, and $A^{(\nu)}_0$ are given by Then $D^{(\nu)}$ and $E^{(\nu)}$ are symmetric with respect to the weight $W^{(\nu)}$, and $D^{(\nu)}$ and $E^{(\nu)}$ commute. The operators $D^{(\nu)}$ and $E^{(\nu)}$ for $\nu = 1$ are given in [29,30]. In order to prove the statement on the symmetry in Theorem 2.3, we use Theorem 2.2.
We next look for a second-order matrix-valued differential operator generated by the commuting operators $D^{(\nu)}$ and $E^{(\nu)}$ having no constant term, i.e., $F_0 = 0$ in the notation of (2.2). The reason is that in the matrix-valued case, the constant term $F_0$ cannot be moved into the eigenvalue matrix, see, e.g., Theorem 3.2, unless $F_0$ is a multiple of the identity. A straightforward calculation gives (2.3), and this defines matrix-valued polynomials $\Phi^{(\nu)}$ and $\Psi^{(\nu)}$ of degrees two and one. The explicit expressions for $\Phi^{(\nu)}$ and $\Psi^{(\nu)}$ are given in (4.9) and (4.10). In particular, we find that the $\nu$-dependence is factored as the derivative followed by a first-order matrix-valued differential operator, and in order to study this operator, we need the analogue of the Pearson equation (1.4).

Theorem 2.4 We have
The matrix-valued Pearson equation of Theorem 2.4 is much more involved than its scalar companion (1.4), and it fits into the framework of [10, §3].
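The scalar Pearson equation (1.4), $\frac{d}{dx}w^{(\nu+1)}(x) = -(2\nu+1)\,x\,w^{(\nu)}(x)$, is easy to confirm numerically, e.g., by a central difference (our check, not part of the paper):

```python
import numpy as np

nu = 1.3
w = lambda v, x: (1 - x**2)**(v - 0.5)   # scalar Gegenbauer weight w^{(v)}
x = np.linspace(-0.8, 0.8, 9)
h = 1e-6

# central-difference derivative of w^{(nu+1)}
dw = (w(nu + 1, x + h) - w(nu + 1, x - h)) / (2 * h)
assert np.allclose(dw, -(2 * nu + 1) * x * w(nu, x), atol=1e-7)
```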
The space of matrix-valued functions with continuous entries, equipped with (2.1), forms a pre-Hilbert $C^*$-module, see [35], with $M_{2\ell+1}(\mathbb{C})$ the corresponding (finite-dimensional) $C^*$-algebra. Note that we consider the pre-Hilbert $C^*$-module as a left module for $M_{2\ell+1}(\mathbb{C})$ and the inner product to be conjugate linear in the second variable, in contrast with [35]. Let $H^{(\nu)}$ be the Hilbert $C^*$-module which is the completion [35, p. 4]; then $\frac{d}{dx}\colon H^{(\nu)} \to H^{(\nu+1)}$ is an unbounded operator with dense domain and dense range in $H^{(\nu)}$ and $H^{(\nu+1)}$, respectively. In general, a linear operator from one Hilbert $C^*$-module to another does not necessarily have an adjoint, but in this case it does.

Corollary 2.5 (i) Define the first-order matrix-valued differential operator $S^{(\nu)}$ by
Note that Corollary 2.5(ii) gives the symmetry of D (ν) , which is a consequence of Theorem 2.4. However, we use the symmetry of D (ν) in the proof of Theorem 2.4, so that we need another proof of the symmetry of D (ν) . The required symmetry follows from Theorem 2.3 and (2.3).
Having factorized $D^{(\nu)}$ as a product of a lowering operator and a raising operator in (2.3) and Corollary 2.5, we can take its Darboux transform. The Darboux transform does not give the operator $D^{(\nu+1)}$ up to an affine transformation, but it is an element of the algebra generated by $E^{(\nu+1)}$ and $D^{(\nu+1)}$. Darboux transformations for matrix-valued differential operators require more study, see, e.g., [4,11,12,21].
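The scalar counterpart of this transmutation property, $\frac{d}{dx}\circ S^{(\nu)} = D^{(\nu+1)} - (2\nu+1)$ with $D^{(\nu)} = (1-x^2)\frac{d^2}{dx^2} - (2\nu+1)x\frac{d}{dx}$ and $S^{(\nu)} = (1-x^2)\frac{d}{dx} - (2\nu+1)x$ (these explicit forms are our reconstruction of the scalar operators of Sect. 1), can be checked on an arbitrary polynomial:

```python
import numpy as np

nu = 0.8
x = np.linspace(-0.9, 0.9, 11)
f = np.poly1d([1.0, -2.0, 0.5, 3.0])   # arbitrary test polynomial

D = lambda v, p: np.poly1d([-1, 0, 1]) * p.deriv(2) - np.poly1d([2 * v + 1, 0]) * p.deriv()
S = lambda v, p: np.poly1d([-1, 0, 1]) * p.deriv() - np.poly1d([2 * v + 1, 0]) * p

# d/dx o S^{(nu)} = D^{(nu+1)} - (2 nu + 1)
lhs = S(nu, f).deriv()
rhs = D(nu + 1, f) - (2 * nu + 1) * f
assert np.allclose(lhs(x), rhs(x))
```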

Proposition 2.6 The commutant algebra $\mathcal{A}^{(\nu)}$
The commutant algebra $\mathcal{A}^{(\nu)}$ is related to orthogonal decompositions, and Proposition 2.6 states that there is an orthogonal decomposition with respect to the $\pm 1$-eigenspaces of $J$. General nonorthogonal decompositions are governed by a real vector space [22,44]. In [28] we show that $\mathcal{A}^{(\nu)}$ equals the Hermitian elements of this vector space, so that there is no further nonorthogonal decomposition. Proposition 2.6 is proved in Sect. 5.4.

The Matrix-Valued Gegenbauer-Type Polynomials and Their Properties
Since the matrix weight function $W^{(\nu)}$ is strictly positive definite on $(-1,1)$, one can associate matrix-valued orthogonal polynomials, see, e.g., [13,22,24]. Denote the corresponding monic orthogonal polynomials with respect to the matrix-valued weight by $P^{(\nu)}_n$; the squared norm $H^{(\nu)}_n$ is a strictly positive definite matrix. Note that $H^{(\nu)}_0$ can be calculated using the explicit expression of Definition 2.1 and the orthogonality relations (1.3), which gives the special case $n = 0$ of Theorem 3.1(i). In case $Q_n$ is another family of matrix-valued orthogonal polynomials with respect to the weight function $W^{(\nu)}$ on $(-1,1)$, there exist invertible matrices $E_n$ so that $Q_n(x) = E_n P_n(x)$ for all $x$ and all $n$, see [13,22].
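Monic matrix-valued orthogonal polynomials for a given weight can be generated numerically by Gram–Schmidt on the monomials $x^k I$ with respect to a matrix-valued inner product. The sketch below is a generic illustration for an assumed toy $2\times 2$ weight, not an implementation of the paper's $W^{(\nu)}$:

```python
import numpy as np

def monic_mvops(weight, deg, npts=300):
    """Monic matrix-valued orthogonal polynomials on (-1,1) by Gram-Schmidt,
    with <P,Q> = int P(x) W(x) Q(x)^* dx (Gauss-Legendre quadrature).
    Returns sampled polynomials and the discrete inner product."""
    x, w = np.polynomial.legendre.leggauss(npts)
    Wx = np.array([weight(t) for t in x])            # (npts, N, N)
    N = Wx.shape[1]

    def ip(P, Q):                                    # P, Q: (npts, N, N) samples
        return np.einsum('t,tij,tjk->ik', w, P @ Wx, Q.conj().transpose(0, 2, 1))

    mono = [np.array([t**k * np.eye(N) for t in x]) for k in range(deg + 1)]
    Ps = []
    for n in range(deg + 1):
        P = mono[n].copy()
        for Q in Ps:                                 # subtract projections
            P = P - ip(mono[n], Q) @ np.linalg.inv(ip(Q, Q)) @ Q
        Ps.append(P)
    return Ps, ip

# hypothetical 2x2 Gegenbauer-like weight, positive definite on (-1,1)
nu = 1.0
weight = lambda t: (1 - t**2)**(nu - 0.5) * np.array([[2.0, t], [t, 2.0]])
Ps, ip = monic_mvops(weight, 3)
assert np.linalg.norm(ip(Ps[1], Ps[0])) < 1e-10      # orthogonality
assert np.linalg.norm(ip(Ps[2], Ps[0])) < 1e-10
```

The leading coefficient stays $I$ throughout, since only lower-degree terms are subtracted; the discrete squared norms `ip(Ps[n], Ps[n])` approximate the matrices $H^{(\nu)}_n$.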
The following Rodrigues formula holds: Note that the Rodrigues formula has a compact nature, very similar to the scalar case, cf. [26, §9].
where the eigenvalue matrix is obtained using the same combination.

Theorem 3.3

The monic matrix-valued orthogonal polynomials $P^{(\nu)}_n$ satisfy the three-term recurrence relation where the matrices $B^{(\nu)}_n$ and $C^{(\nu)}_n$ are given explicitly. To calculate $B^{(\nu)}_n$, we need the one-but-leading coefficient of $P^{(\nu)}_n$, for which we use the shift operators. In [30], we calculated the one-but-leading coefficients using Tirao's matrix-valued hypergeometric functions [41], and we note that a similar expression for the rows of $P^{(\nu)}_n$ in terms of matrix-valued hypergeometric functions as in [30, §4] is possible.
The matrix entries of the monic matrix-valued orthogonal polynomials can be expressed in terms of scalar orthogonal polynomials. In this expression, Racah polynomials [45, §4], [26, §9.2] and Gegenbauer polynomials occur. The Gegenbauer polynomials with negative parameter $\nu$ arise from $L(x)^{-1}$ and have to be interpreted as in [9].
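For readers less familiar with Racah polynomials: they are the terminating ${}_4F_3$-series of [26, §9.2], $R_n(\lambda(x);\alpha,\beta,\gamma,\delta) = {}_4F_3(-n, n+\alpha+\beta+1, -x, x+\gamma+\delta+1;\ \alpha+1, \beta+\delta+1, \gamma+1;\ 1)$ with $\lambda(x) = x(x+\gamma+\delta+1)$. A direct evaluation of this sum (an illustration; the parameter choices below are arbitrary) reads:

```python
from math import prod

def poch(a, k):
    """Pochhammer symbol (a)_k = a (a+1) ... (a+k-1)."""
    return prod(a + j for j in range(k))

def racah(n, x, alpha, beta, gamma, delta):
    """Racah polynomial R_n(lambda(x)) as the terminating 4F3(...; 1) sum."""
    total = 0.0
    for k in range(n + 1):
        num = (poch(-n, k) * poch(n + alpha + beta + 1, k)
               * poch(-x, k) * poch(x + gamma + delta + 1, k))
        den = poch(alpha + 1, k) * poch(beta + delta + 1, k) * poch(gamma + 1, k) * poch(1, k)
        total += num / den
    return total

# normalizations: R_0 = 1, and R_n(lambda(0)) = 1 since (-x)_k vanishes at x = 0 for k > 0
assert racah(0, 3, 1.0, 2.0, -5.0, 1.0) == 1.0
assert racah(4, 0, 1.0, 2.0, -5.0, 1.0) == 1.0
```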

Theorem 3.4 The monic matrix-valued Gegenbauer-type polynomials $P^{(\nu)}_n$
The right-hand side in Theorem 3.4 is not obviously a polynomial of degree at most $n$ in case $k > i$. In particular, the coefficients of $x^p$ with $p > n$ are zero; for $k > i$, the leading coefficient of the right-hand side is zero, and this gives nontrivial relations for the coefficients of $x^m$ with $n - 2k \le m \le n$. However, in general, the coefficients do not seem to have simple expressions. The case $k = 1$ can be done easily by differentiating the three-term recurrence relation of Theorem 3.3 and using Theorem 3.1(ii).

Remark 3.6
The weight matrices $W^{(\nu+1)}$ are obtained from $W^{(\nu)}$ by a Christoffel transformation, given by multiplication by the polynomial $\Phi^{(\nu)}$ of degree two, see Theorem 2.4. Therefore our Gegenbauer-type polynomials give a nontrivial example of arbitrary size for the theory developed in [4], see also [4, Example 3].
Here $P^{(\nu)}_{n,1}$ and $P^{(\nu)}_{n,2}$ are the monic polynomials for $W_1$ and $W_2$, respectively. Since $Y$ is independent of $\nu$, we can take the matrix corresponding to the case $\nu = 1$, which is given in [29, (6.7)]. The block decompositions for the differential operators $D^{(\nu)}$ and $E^{(\nu)}$ are analogous to those for the case $\nu = 1$, see [29, §7.2]. We have a decomposition in square blocks having $P^{(\nu)}_{n,1}$ and $P^{(\nu)}_{n,2}$, respectively, as eigenfunctions. Observe that the $\nu$-dependence of $D^{(\nu)}$ is quite simple, so that (3.2) can be easily verified. For the differential operator $E^{(\nu)}$, the decomposition is analogous to [29, Prop. 7.8]. If $\ell = (2s+1)/2$ for some $s \in \mathbb{N}$, then $E^{(\nu)}$ splits into $(s+1) \times (s+1)$ blocks in the following way: Here $I_{s+1}$ denotes the $(s+1) \times (s+1)$ identity matrix. The corresponding eigenvalue matrices are diagonal. If $\ell \in \mathbb{N}$, then the blocks are no longer square, see [29, Prop. 7.9].
Remark 3.8 Pacharoni and Zurrián [38] introduce $2 \times 2$ matrix-valued Gegenbauer polynomials. Using Proposition 2.6, we have irreducible $2 \times 2$ Gegenbauer polynomials upon restricting to the $\pm 1$-eigenspaces of $J$ for the cases $\ell = 1, 3/2, 2$, see Remark 3.7. The cases $\ell = 1$ and $\ell = 2$ can be connected to the results in [38], but the case $\ell = 3/2$ cannot, see [38, Rmk. 3.7]. Other Gegenbauer-type matrix-valued polynomials can be obtained from the $\alpha = \beta$ case of Durán [14] or Pacharoni–Tirao [37]. The results of this paper have no overlap with [14], and the relation with the $\alpha = \beta$ case of [37] seems slightly similar, but the weight function is different. This can be checked by considering the leading terms of the polynomial part of the weight functions of [14,37] and comparing these with the leading coefficient of the polynomial part of our weight function. Note that the results for the matrix-valued polynomials of Gegenbauer type in this paper are more complete and explicit than those of [14,37,38], since all results are very explicit and hold for any dimension.

Differential Operators
In this section, we prove the statements of Sect. 2 except for some technical statements, which are deferred to the appendix, and the commutativity statement of Theorem 2.3 and Proposition 2.6. These statements are easier to derive using the matrix-valued orthogonal polynomials. Section 4.1 is of a general nature, and is next applied to this particular situation in Sect. 4.2. The proof of the Pearson-type result is the most technical, see Sect. 4.3.

Differential Operators and Conjugation
We consider matrix-valued differential operators of the form $\frac{d^2}{dx^2}F_2(x) + \frac{d}{dx}F_1(x) + F_0(x)$, which act from the right on matrix-valued functions $G$ with $C^2$ matrix entries by (4.1). In (4.1) the derivatives are taken entry-wise. Moreover, we assume that all entries of $F_i$, $i = 0, 1, 2$, are $C^2([a,b])$. In the application in this paper, the $F_i$'s are polynomials.
Next we assume that we have a matrix-valued weight function $W$ on $(a,b)$ whose entries are $C^2((a,b))$, and we allow for an integrable singularity at the end points. The matrix-valued operator $D$ is symmetric with respect to $W$ if (4.2) holds for all matrix-valued $C^2([a,b])$ functions, as long as both integrals exist. For the explicit weight functions $W^{(\nu)}$, this is valid.
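In the scalar case ($N = 1$), a symmetry condition of this type can be tested directly by quadrature. The following sketch (our illustration) does this for the scalar Gegenbauer operator and weight recalled in the introduction:

```python
import numpy as np
from scipy.special import gegenbauer
from scipy.integrate import quad

nu = 2.0
w = lambda x: (1 - x**2)**(nu - 0.5)   # scalar weight w^{(nu)}

def D(p):
    """Scalar Gegenbauer operator (1-x^2) p'' - (2 nu + 1) x p' on a poly1d."""
    return np.poly1d([-1, 0, 1]) * p.deriv(2) - np.poly1d([2 * nu + 1, 0]) * p.deriv()

G = gegenbauer(3, nu) + gegenbauer(2, nu)
H = gegenbauer(5, nu) + gegenbauer(2, nu)
# symmetry: int (DG) w H dx = int G w (DH) dx
lhs = quad(lambda x: D(G)(x) * w(x) * H(x), -1, 1)[0]
rhs = quad(lambda x: G(x) * w(x) * D(H)(x), -1, 1)[0]
assert abs(lhs - rhs) < 1e-6 * (1 + abs(lhs))
```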

Lemma 4.1

$D$ as in (4.1) is symmetric with respect to $W$ if and only if the boundary conditions and the symmetry conditions hold. Note that such symmetric matrix-valued differential operators form a vector space and that the product of two symmetric first-order matrix-valued differential operators is a symmetric second-order matrix-valued differential operator.
We now consider the differential operator under conjugation. We assume that $L$ is a $C^2([a,b])$ matrix-valued function such that $L(x)$ is a unipotent lower-triangular matrix. In particular, its inverse $L(x)^{-1}$ is a unipotent lower-triangular matrix with $C^2([a,b])$-entries. Then $L^*(x)$ and $(L^*(x))^{-1}$ are unipotent upper-triangular matrices with $C^2([a,b])$-entries. Note that $L$ as in Theorem 2.2 satisfies these conditions. Let $\widetilde D = \frac{d^2}{dx^2}\widetilde F_2 + \frac{d}{dx}\widetilde F_1 + \widetilde F_0$ be the second-order matrix-valued differential operator obtained by conjugation of $D$ by $L$, i.e., (4.3) holds for all $C^2$ matrix-valued functions $G$. Comparing (4.1) and (4.3), we obtain (4.4) as matrix-valued functions. By symmetry, interchanging $F_i \leftrightarrow \widetilde F_i$ and $L \leftrightarrow L^{-1}$, we get the expressions for $F_i$ in terms of $\widetilde F_i$. This can also be obtained by solving for $\widetilde F_i$ from (4.4) and using the expressions for the first- and second-order derivatives of $L^{-1}$ obtained by differentiating $LL^{-1} = I$.

Proposition 4.2 With the assumptions in Sect. 4.1, and assuming additionally for $x \in (a,b)$ that $W(x) = L(x)V(x)L^*(x)$, the operator $D$ is symmetric with respect to $W$ on the interval $(a,b)$ if and only if $\widetilde D$ is symmetric with respect to $V$ on the interval $(a,b)$.
Note that in particular the entries of $V$ are again in $C^2((a,b))$.
Proof Note that the symmetry of $D$ with respect to $W$ is given by (4.2). Plugging in the decomposition $W = LVL^*$, replacing $GL$ and $HL$ by $\widetilde G$ and $\widetilde H$, and using the relations of Sect. 4.1 gives the equivalence.

Differential Operators $E^{(\nu)}$ and $D^{(\nu)}$
In order to prove the symmetry of $E^{(\nu)}$ and $D^{(\nu)}$ as stated in Theorem 2.3, we use Lemma 4.1 with $L = L^{(\nu)}$ as in Theorem 2.2, so that $V$ of Lemma 4.1 identifies with the diagonal weight $T^{(\nu)}$.

Pearson Equation for the Weight $W^{(\nu)}$
In order to prove the Pearson equations of Theorem 2.4, we use the fact that $D^{(\nu)}$ is symmetric with respect to the weight $W^{(\nu)}$, and we use the LDU-decomposition of Theorem 2.2. We actually do not need the explicit expression for $\Phi^{(\nu)}$ to prove Theorem 2.4, but we give it for completeness in (4.9), since it gives a highly nontrivial example of the theory of Christoffel transformations for matrix-valued weights as presented in [4].

Proof of Theorem 2.4
In case $F_0 = 0$ in Lemma 4.1, we can integrate the last symmetry condition using the last boundary condition. The operator of (2.3) has no constant term, and it is expressed in terms of the symmetric differential operators $E^{(\nu)}$ and $D^{(\nu)}$, see Theorem 2.3, hence it is symmetric with respect to $W^{(\nu)}$. Taking adjoints proves the first identity of Theorem 2.4.
The proof of the second identity is more involved. We will show that (4.5) holds. Integrating gives the second identity, since $W^{(\nu+1)}$ vanishes at $x = -1$ by Definition 2.1, as does $W^{(\nu)}\Phi^{(\nu)} = (\Phi^{(\nu)})^* W^{(\nu)}$ by the first boundary condition of Lemma 4.1 applied to $D^{(\nu)}$.
In order to prove (4.5), we use the LDU-decompositions of the weights involved. For this we need the relation between L (ν) and L (ν+1) .

Proposition 4.4 The matrix $L^{(\nu)}$ satisfies the following identities:
We prove Proposition 4.4 in Appendix C. Note that Proposition 4.4 implies (4.6), by differentiating the first identity of Proposition 4.4 and using the second identity.
Using the LDU-decomposition of Theorem 2.2 for $\nu$ and $\nu+1$, together with Proposition 4.4 and (4.6), the identity (4.5) is equivalent to (4.7), where the expression for $A^{(\nu)}$ might look complicated; but since $T^{(\nu)}$ is diagonal, and $M^{(\nu)}_1$ and $M^{(\nu)}_2$ have only two nonzero diagonals, $A^{(\nu)}$ is band-limited. Using the expressions from Theorem 2.2 and Proposition 4.4, we find that $A^{(\nu)}$ is a tridiagonal matrix-valued polynomial of degree 2, see (4.8). In order to prove (4.7), we need the explicit expression for $\Psi^{(\nu)}$, and for completeness we also write down the explicit expression for $\Phi^{(\nu)}$. The matrix-valued polynomials $\Phi^{(\nu)}$ and $\Psi^{(\nu)}$ are introduced in (2.3). A straightforward calculation using Theorem 2.3 gives (4.9) and (4.10). We prove (4.7) in Appendix C by showing that the matrix entries on both sides, each consisting of at most three Gegenbauer polynomials, are equal, using standard identities for Gegenbauer polynomials. Note that a straightforward check of (4.5) using Definition 2.1 and Gegenbauer polynomials is much more involved, but this can also be done.

Matrix-Valued Orthogonal Polynomials
We prove all the statements of Sect. 3 concerning the corresponding matrix-valued orthogonal polynomials. We also prove the remaining statements of Sect. 4, i.e., Proposition 2.6 and the commutativity statement of Theorem 2.3.

Proof of Theorem 3.1 The leading coefficient $nI$ of $\frac{dP^{(\nu)}_n}{dx}$ is nonsingular, and for $k \in \mathbb{N}$ with $k < n-1$, we get the required relations; in particular, $S^{(\nu)}$ is a shift or raising operator. Now we use Corollary 2.5 to see that $H^{(\nu)}_n$ is a real diagonal matrix. Note that the matrices commute, so we do not need to specify the order. It suffices to calculate $H^{(\nu)}_0$ for general $\nu > 0$, and this follows immediately from the explicit expression of Definition 2.1 and the orthogonality relations of the (scalar) Gegenbauer polynomials (1.3). This leads to (i).
For (iii), we use Theorem 2.4 to write the adjoint shift, which is a raising operator preserving polynomials, and we iterate. Now we take $Q(x) = P^{(\nu+n)}_0(x) = I$, so that the left-hand side equals $P^{(\nu)}_n(x)$. A calculation gives the diagonal matrix $G^{(\nu)}_n$, and we obtain the Rodrigues formula (iii).

Eigenfunctions of Differential Operators and Scalar Orthogonal Polynomials
Proof of Theorem 3.2 Since the differential operators of Theorem 2.3 are symmetric with respect to the weight $W^{(\nu)}$ on $(-1,1)$, and since they preserve polynomials of fixed degree, such a differential operator acting on $P^{(\nu)}_n$ gives an orthogonal polynomial of degree $n$ with respect to the weight $W^{(\nu)}$ on $(-1,1)$. Hence it can be written as $\Lambda_n P^{(\nu)}_n$, and the eigenvalue matrix $\Lambda_n$ follows by considering leading coefficients.
We can now use the differential operators, especially the diagonal (or uncoupled) differential operator of Lemma 4.3, to obtain precise information on the matrix-valued orthogonal polynomials. In particular, we can derive Theorem 3.4 in this way. Indeed, the $P^{(\nu)}_n$ are eigenfunctions of $D^{(\nu)} - 2\ell E^{(\nu)}$. Hence, Lemma 4.3 shows that the $R^{(\nu)}_n(x) = P^{(\nu)}_n(x)L^{(\nu)}(x)$ are eigenfunctions of a diagonal differential operator. Note that in general $R^{(\nu)}_n$ is a matrix-valued polynomial of degree $n+2\ell$ with highly singular leading coefficient. Because of Lemma 4.3, the matrix entry $R^{(\nu)}_n(x)_{k,j}$ is a polynomial solution to the hypergeometric differential operator. Since the polynomial solutions are unique up to a constant, we find (5.1). Similarly, by Theorem 3.2 and Lemma 4.3, the $R^{(\nu)}_n$ are eigenfunctions for the conjugation of $E^{(\nu)}$ with eigenvalue matrix $\Lambda_n(E^{(\nu)})$; note that the eigenvalue does not change. Using the explicit expression of Lemma 4.3 and the fact that $\Lambda_n(E^{(\nu)})$ is diagonal, plugging in $x = 1$ gives the recurrence (5.2), which can be solved in terms of Racah polynomials as in [30], see also [39]. The inverse of the lower-triangular matrix $L^{(\nu)}$ has been calculated explicitly by Cagliero and Koornwinder [9, Thm. 4.1], see (5.3). It follows from (5.1) and Proposition 4.4 that the $(k,0)$th entry of (5.4), evaluated at $x = 1$, is given by (5.5). We calculate $c^{(\nu)}_{k,0}(n)$ and $c^{(\nu)}_{k,1}(n)$ using the recurrence (5.2) for $i = 1$; replacing this in (5.5), we obtain (5.6). Finally, from the recurrence (5.2) for $i = 0$, we write $c^{(\nu)}_{k,1}(n)$ in terms of $c^{(\nu)}_{k,0}(n)$. If we replace this in (5.6), we get (5.7). Since $c^{(\nu)}_{k,0}(0) = (2\nu)_k$, the recurrence (5.7) can be solved explicitly. Observe that for the case $\nu = 1$, the initial value $c^{(\nu)}_{k,0}$ differs from that in [30] by a factor $(-1)^n 2^{-n}(n+k)!/(2\nu)_{n+k}$. This is due to a different normalization in (5.1) and the fact that the orthogonality interval is $[-1,1]$ here and $[0,1]$ in [30, §6].

Three-Term Recurrence Relation
To calculate the coefficients of the three-term recurrence relation, we write $P^{(\nu)}_n(x) = x^n + X^{(\nu)}_n x^{n-1} + \cdots$ and compare coefficients of $x^n$ in the three-term recurrence relation. Now Theorem 3.3 follows from these observations and Lemma 5.1.

Lemma 5.1
The one-but-leading coefficient $X^{(\nu)}_n$ is given by

Proof We differentiate $P^{(\nu)}_n$. Using (4.10), we find $X^{(\nu)}_n$.

Commutant
The commutant algebra $\mathcal{A}^{(\nu)}$ in Proposition 2.6 is a $*$-algebra, hence it is generated by its self-adjoint elements. Let $T = T^* \in \mathcal{A}^{(\nu)}$ be an invertible self-adjoint element; then $P \mapsto PT$ is a symmetric operator with respect to $W^{(\nu)}$, namely a matrix-valued differential operator of order 0. Since it preserves the polynomials, the matrix-valued Gegenbauer-type polynomials are eigenfunctions. Hence $P_n(x)T = TP_n(x)$ for all $n$ and all $x$ by comparing leading coefficients, cf. [28, Lem. 3.1(2)]. In particular, $T$ commutes with the one-but-leading coefficient of $P^{(\nu)}_n$. This is straightforwardly verified from the expression in Definition 2.1.
Appendix A: Proof of Theorem 2.2

for $n \ge m$. Note first that the right-hand side is a polynomial of degree $n+m$. We need to find its expansion in terms of Gegenbauer polynomials $C^{(\nu)}_r(x)$, so we need the integral below. Since the Gegenbauer polynomials are symmetric, and by the orthogonality relations for the Gegenbauer polynomials $C^{(\nu+k)}_n$, we see that this integral is zero for $n+m-r$ odd and for $r < n-m$, so the expansion of the right-hand side of (A.1) in terms of Gegenbauer polynomials $C^{(\nu)}_r(x)$ only has nonzero terms for the summands in the left-hand side of (A.1). Using (1.3), it remains to prove the identity (A.2). The integral in (A.2) has been evaluated in [30, Rmk. 2.8], and in order to prove (A.2), we follow the steps in [30]. Using Whipple's transformation, see, e.g., [5, Thm. 3.3], twice (once with $(n,a,d)$ of [5, Thm. 3.3] taken as $(m-k, -m-2\nu-k+1, -m)$ and the second time as $(t, -m-n-\nu+t, -m)$), the ${}_4F_3$-series can be rewritten as (A.3). We now plug the obtained expression for the integral into the right-hand side of (A.2), where we replace the ${}_4F_3$-series of (A.3) by its sum $\sum_{j=0}^{k}$. We next interchange the summations over $k$ and $j$, and in the summation $\sum_{k=j}^{m}$, we replace $k = p+j$. This evaluates the inner sum (the $k$-dependent part) using the Dougall summation formula for a very-well-poised ${}_5F_4$-series, see, e.g., [5, Cor. 4.3], [7, §4.3(3)]. This shows that the right-hand side of (A.2) can be written as a single sum, explicitly identified via [25, (1.4.5)]. Next, a straightforward verification using the expression of Definition 2.1 shows that this is equal to the left-hand side of (A.2), which proves Theorem 2.2.

Appendix B: Proof of Lemma 4.3
The following properties of the Gegenbauer polynomials are useful, see, e.g., [25, §4.5]. We follow the convention that polynomials of negative degree are 0. First we observe the identities (B.1)–(B.3). If we multiply the $(m,k)$th entry of (B.5) by $m!/(k!\,(2\nu+2k)_{m-k})$, we obtain an identity which can be verified using [1, (22.7.21)].
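The appendices repeatedly use such standard identities. For instance, the classical three-term recurrence $n\,C^{(\nu)}_n(x) = 2(n+\nu-1)\,x\,C^{(\nu)}_{n-1}(x) - (n+2\nu-2)\,C^{(\nu)}_{n-2}(x)$ (a standard formula, cf. [1]) can be checked numerically:

```python
import numpy as np
from scipy.special import eval_gegenbauer

nu = 0.7
x = np.linspace(-1, 1, 9)
for n in range(2, 8):
    lhs = n * eval_gegenbauer(n, nu, x)
    rhs = (2 * (n + nu - 1) * x * eval_gegenbauer(n - 1, nu, x)
           - (n + 2 * nu - 2) * eval_gegenbauer(n - 2, nu, x))
    assert np.allclose(lhs, rhs)
```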
In order to prove the statement for the conjugated operator $\widetilde E^{(\nu)} = (L^{(\nu)})^{-1} E^{(\nu)} L^{(\nu)}$, we observe that $E^{(\nu)} = L^{(\nu)} \widetilde E^{(\nu)} (L^{(\nu)})^{-1}$ if and only if (B.7) holds. If we multiply the first equation in (B.7) by $m!/(k!\,(2\nu+2k)_{m-k})$, we obtain (B.8). In order to verify (B.9), we proceed as before: we use (B.2), (B.3), and (B.1) to write (B.9) as a combination of Gegenbauer polynomials with parameter $\nu+k$, and we verify that the corresponding coefficients on both sides are equal.
The $k$th diagonal entry of the conjugation of $D^{(\nu)} - 2\ell E^{(\nu)}$ by $L^{(\nu)}$ is, up to a constant, the differential operator for the Gegenbauer polynomials $C^{(\nu+k)}_n$; hence it is symmetric with respect to the $k$th diagonal entry of $T^{(\nu)}$, which is the weight for the Gegenbauer polynomials with parameter $\nu+k$. This implies the symmetry of the conjugated operator.