Matrix-valued orthogonal polynomials related to the quantum analogue of (SU(2) × SU(2), diag)

Matrix-valued spherical functions related to the quantum symmetric pair for the quantum analogue of (SU(2) × SU(2), diag) are introduced and studied in detail. The quantum symmetric pair is given in terms of a quantised universal enveloping algebra with a coideal subalgebra. The matrix-valued spherical functions give rise to matrix-valued orthogonal polynomials, which are matrix-valued analogues of a subfamily of Askey–Wilson polynomials. For these matrix-valued orthogonal polynomials, a number of properties are derived using this quantum group interpretation: the orthogonality relations from the Schur orthogonality relations, the three-term recurrence relation and the structure of the weight matrix in terms of Chebyshev polynomials from tensor product decompositions, and the matrix-valued Askey–Wilson type q-difference operators from the action of the Casimir elements. A more analytic study of the weight gives an explicit LDU-decomposition in terms of continuous q-ultraspherical polynomials. The LDU-decomposition makes it possible to find explicit expressions of the matrix entries of the matrix-valued orthogonal polynomials in terms of continuous q-ultraspherical polynomials and q-Racah polynomials.


Introduction
Shortly after the introduction of quantum groups, it was realised that many special functions of basic hypergeometric type [15] have a natural relation to quantum groups, see e.g. [9, Chap. 6], [20,31] for references. In particular, many orthogonal polynomials in the q-analogue of the Askey scheme, see e.g. [26], have found an interpretation on compact quantum groups, analogous to the interpretation of orthogonal polynomials of hypergeometric type from the Askey scheme on compact Lie groups and related structures, see e.g. [46,47].
In the case of harmonic analysis on classical Gelfand pairs, one studies spherical functions and related Fourier transforms, see [43]. For our purposes, a Gelfand pair consists of a Lie group G and a compact subgroup K so that the trivial representation of K occurs with multiplicity at most one in the decomposition of any irreducible representation of G restricted to K. The spherical functions are functions on G which are left- and right-K-invariant. The zonal spherical functions are realised as matrix elements of irreducible G-representations with respect to a fixed K-fixed vector. For special cases, the zonal spherical functions can be identified with explicit special functions of hypergeometric type, see [43, Chap. 9], [12, §IV]. The zonal spherical functions are eigenfunctions of an algebra of differential operators, which includes the differential operator arising from the Casimir operator in case G is a reductive group. For special cases with G compact, we obtain orthogonality relations and differential operators for the spherical functions, which can be identified with orthogonal polynomials from the Askey scheme. For the special case G = SU(2) × SU(2) with K ≅ SU(2) embedded as the diagonal subgroup, the zonal spherical functions are the characters of SU(2), which are identified with the Chebyshev polynomials U_n of the second kind by the Weyl character formula. The Gelfand pair situation has been generalised to the setting of quantum groups, mainly in the compact context, see e.g. Andruskiewitsch and Natale [3] for the case of a finite dimensional Hopf algebra with a Hopf subalgebra, Floris [13], Koornwinder [32], Vainerman [45] for more general compact quantum groups, and, for a non-compact example, Caspers [7].
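The two concrete facts used here, namely that the character of the spin-n/2 representation of SU(2) is U_n(cos θ) = sin((n+1)θ)/sin θ by the Weyl character formula, and that the U_n are orthogonal for the weight √(1−x²), are easy to check numerically. The following sketch (our own helper names, using NumPy) verifies both:

```python
import numpy as np

def cheb_U(n, x):
    """Chebyshev polynomial of the second kind via its three-term recurrence."""
    u_prev, u = np.ones_like(x), 2 * x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2 * x * u - u_prev
    return u

# Weyl character formula for SU(2): the character of the spin-n/2 representation
# at rotation angle theta equals sin((n+1)theta)/sin(theta) = U_n(cos(theta)).
theta = 0.7
for n in range(5):
    lhs = np.sin((n + 1) * theta) / np.sin(theta)
    assert abs(lhs - cheb_U(n, np.cos(theta))) < 1e-12

# Schur orthogonality: int_{-1}^{1} U_n(x) U_m(x) sqrt(1-x^2) dx = (pi/2) delta_{nm}.
x, w = np.polynomial.legendre.leggauss(200)   # Gauss-Legendre nodes/weights on [-1,1]
for n in range(4):
    for m in range(4):
        val = np.sum(w * cheb_U(n, x) * cheb_U(m, x) * np.sqrt(1 - x**2))
        assert abs(val - (np.pi / 2) * (n == m)) < 1e-3
print("character formula and orthogonality verified")
```

The quadrature tolerance is loose because the weight √(1−x²) is not smooth at the endpoints; the algebraic identities themselves are exact.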
The notions of matrix-valued and vector-valued spherical functions emerged already at the beginning of the development of the theory of spherical functions, see e.g. [14] and references given there. However, the focus on the relation with matrix-valued or vector-valued special functions only came later, see e.g. references given in [18,44]. Grünbaum et al. [17] give a group theoretic approach to matrix-valued orthogonal polynomials emphasising the role of the matrix-valued differential operators, which are manipulated in great detail. The paper [17] deals with the case (G, K) = (SU(3), U(2)). Motivated by [17] and the approach of Koornwinder [29], the group theoretic interpretation of matrix-valued orthogonal polynomials on (G, K) = (SU(2) × SU(2), SU(2)) is studied from a different point of view, in particular with less manipulation of the matrix-valued differential operators, in [23,24], see also [18,44]. The point of view is to construct the matrix-valued orthogonal polynomials using matrix-valued spherical functions, and next to use this group theoretic interpretation to obtain properties of the matrix-valued orthogonal polynomials. This approach for the case (G, K) = (SU(2) × SU(2), SU(2)) leads to matrix-valued orthogonal polynomials of arbitrary size, which can be considered as analogues of the Chebyshev polynomials of the second kind. A combination of the group theoretic approach and analytic considerations then allows us to understand these matrix-valued orthogonal polynomials completely, i.e. we have explicit orthogonality relations, three-term recurrence relations, matrix-valued differential operators having the matrix-valued orthogonal polynomials as eigenfunctions, expressions in terms of Tirao's [41] matrix-valued hypergeometric functions, expressions in terms of well-known scalar-valued orthogonal polynomials from the Askey scheme, etc.
This has been analytically extended to matrix-valued orthogonal Gegenbauer polynomials of arbitrary size [25], see also [39] for related 2 × 2 cases.
The interpretation on quantum groups and related structures leads to many new results for special functions of basic hypergeometric type. In this paper, we use quantum groups in order to obtain matrix-valued orthogonal polynomials as analogues of a subclass of the Askey–Wilson polynomials. In particular, we consider the Chebyshev polynomials of the second kind, recalled in (5.6), as a special case of the Askey–Wilson polynomials [4, (2.18)]. Moreover, we know that the Chebyshev polynomials occur as characters on the quantum SU(2) group, see [48, §A.1]. The approach in this paper is to establish the quantum analogue of the group theoretic approach as presented in [23,24], see also [18,44], for the example of the Gelfand pair G = SU(2) × SU(2) with K ≅ SU(2). For this approach, we need Letzter's approach [34–36] to quantum symmetric spaces using coideal subalgebras. We stick to the conventions of Kolb [27] and we refer to [28, §1] for a broader perspective on quantum symmetric pairs. So we work with the quantised universal enveloping algebra U_q(g) = U_q(su(2) ⊕ su(2)), introduced in Sect. 3, equipped with a right coideal subalgebra B, see Sect. 4. Once we have established this setting, the branching rules of the representations of U_q(g) restricted to B follow by identifying B (up to an isomorphism) with the image of U_q(su(2)) under the comultiplication and using the standard Clebsch–Gordan decomposition. In particular, this gives explicit intertwiners. Next we introduce matrix-valued spherical functions in Sect. 4. Using the matrix-valued spherical functions, we introduce the matrix-valued orthogonal polynomials. Then we use a mix of quantum group theoretic and analytic approaches to study these matrix-valued orthogonal polynomials.
So we find the orthogonality for the matrix-valued orthogonal polynomials from the Schur orthogonality relations, the three-term recurrence relation follows from tensor product decompositions of U_q(g)-representations, and the matrix-valued q-difference operators for which these matrix-valued orthogonal polynomials are eigenfunctions follow from the study of the Casimir elements in U_q(g). More analytic properties follow from the LDU-decomposition of the matrix-valued weight function, and this allows us to decouple the matrix-valued q-difference operators involved. The decoupling makes it possible to link the entries of the matrix-valued orthogonal polynomials with (scalar-valued) orthogonal polynomials from the q-analogue of the Askey scheme, in particular the continuous q-ultraspherical polynomials and the q-Racah polynomials. The approach of [17] does not seem to work in the quantum case, because the possibilities to transform q-difference equations are very limited compared to transforming differential equations. We note that in [3, §5] matrix-valued spherical functions are considered for finite dimensional Hopf algebras with respect to a Hopf subalgebra.
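Pointwise, the LDU-decomposition referred to here is the standard factorisation W(x) = L(x) D(x) L(x)* of a positive definite matrix with L unit lower triangular and D diagonal (in the paper the entries of L and D are explicit polynomials; here we only illustrate the pointwise factorisation, with helper names of our own, by rescaling a Cholesky factor):

```python
import numpy as np

def ldu(w):
    """LDU factorisation w = L @ D @ L*, with L unit lower triangular and D
    diagonal, obtained by rescaling the columns of the Cholesky factor."""
    c = np.linalg.cholesky(w)          # w = c c*, c lower triangular
    d = np.diag(c).real
    L = c / d                          # divide column j by its diagonal entry d[j]
    return L, np.diag(d**2)

# Sample positive definite "weight value" W(x0).
a = np.array([[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]])
L, D = ldu(a)
assert np.allclose(L @ D @ L.conj().T, a)
assert np.allclose(np.diag(L), 1.0)          # L is unit triangular
assert np.allclose(np.tril(L), L)            # L is lower triangular
print("pointwise LDU factorisation verified")
```

Conjugating a matrix-valued operator by such an L is what decouples the q-difference operators in the paper; the sketch only checks the factorisation itself.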
The approach to matrix-valued orthogonal polynomials from this quantum group setting also leads to identities in the quantised function algebra. This paper does not include the resulting identities after using infinite dimensional representations of the quantised function algebra. Furthermore, we have not supplied a proof of Lemma 5.4 using infinite dimensional representations and the direct integral decomposition of the Haar functional, but this should be possible as well.
In general, the notion of a quantum symmetric pair seems to be best suited for the development of harmonic analysis in general, and of matrix-valued spherical functions on quantum groups in particular, see e.g. [28,34–38] and references given there. When considering other quantum symmetric pairs in relation to matrix-valued spherical functions, the branching rule of a representation of the quantised universal enveloping algebra to a coideal subalgebra seems to be difficult. In this paper, it is reduced to the Clebsch–Gordan decomposition, and there is a nice result by Oblomkov and Stokman [38, Proposition 1.15] on a special case of the branching rule for the quantum symmetric pair of type AIII, but in general the lack of branching rules for quantum symmetric pairs is an obstacle for the study of quantum analogues of the matrix-valued spherical functions of e.g. [17,18,38,44].
The matrix-valued orthogonal polynomials resulting from the study in this paper are matrix-valued analogues of the Chebyshev polynomials of the second kind viewed as an example of the Askey–Wilson polynomials. We expect that it is possible to obtain matrix-valued analogues of the continuous q-ultraspherical polynomials, viewed as a subfamily of the Askey–Wilson polynomials, using the approach of [25] with the Askey–Wilson q-derivative instead of the ordinary derivative. We have not explicitly worked out the limit transition q ↑ 1 of the results, but by the set-up it is clear that the formal limit gives back many of the results of [23,24].
The contents of the paper are as follows. In Sect. 2, we fix notation regarding matrix-valued orthogonal polynomials. In Sect. 3, the notation for quantised universal enveloping algebras is recalled. Section 4 states all the main results of this paper. It introduces the quantum symmetric pair explicitly. Using the representations of the quantised universal enveloping algebra and the coideal subalgebra, the matrix-valued polynomials are introduced. We continue to give explicit information on the orthogonality relations, three-term recurrence relations, q-difference operators, the commutant of the weight, the LDU-decomposition of the weight, the decoupling of the q-difference equations and the link to scalar-valued orthogonal polynomials from the q-Askey scheme. The proofs of the statements of Sect. 4 occupy the rest of the paper. In Sect. 5, the main properties derivable from the quantum group set-up are derived, and we discuss in Appendix 1 the precise relation between the branching rule for this quantum symmetric pair and the standard Clebsch–Gordan decomposition. In Sect. 6, we continue the study of the orthogonality relations, in which we make the weight explicit. This requires several identities involving basic hypergeometric series, whose proofs we relegate to Appendix 2. Section 7 studies the consequences of the explicit form of the matrix-valued q-difference operators of Askey–Wilson type for which the matrix-valued orthogonal polynomials are eigenfunctions.
In preparing this paper, we have used computer algebra in order to verify the statements up to a certain size of the matrix and up to a certain degree of the polynomial, in order to eliminate errors and typos. Note, however, that all proofs are direct and do not use computer algebra. A computer algebra package used for this purpose can be found on the homepage of the second author. The conventions on notation follow Kolb [27] for quantised universal enveloping algebras and right coideal subalgebras, we follow Gasper and Rahman [15] for the conventions on basic hypergeometric series, and we assume 0 < q < 1.

Matrix-valued orthogonal polynomials
In this section, we fix notation and give a short background to matrix-valued orthogonal polynomials, which were originally introduced by Krein in the forties, see e.g. references in [5,10]. General references for this section are [5,10,16], and references given there.
Assume that we have a matrix-valued function W : [−1, 1] → M_{2ℓ+1}(ℂ) so that W(x) > 0 almost everywhere. We use the notation A > 0 to denote a strictly positive definite matrix. Moreover, we assume that all moments exist, where integration of a matrix-valued function means that each matrix entry is integrated separately. In particular, the integrals are matrices in M_{2ℓ+1}(ℂ). It then follows that for matrix-valued polynomials P, Q,

⟨P, Q⟩ = ∫_{−1}^{1} (P(x))* W(x) Q(x) dx    (2.1)

exists. This gives a matrix-valued inner product on the space M_{2ℓ+1}(ℂ)[x] of matrix-valued polynomials, and there exists a family of matrix-valued orthogonal polynomials (P_n)_{n∈ℕ}, with P_n of degree n, satisfying

⟨P_n, P_m⟩ = δ_{n,m} G_n,    (2.2)

where G_n > 0. Moreover, the leading coefficient of P_n is non-singular. Any other family of polynomials (Q_n)_{n∈ℕ} so that Q_n is a matrix-valued polynomial of degree n and ⟨Q_n, Q_m⟩ = 0 for n ≠ m satisfies P_n(x) = Q_n(x) E_n for some non-singular E_n ∈ M_{2ℓ+1}(ℂ) for all n ∈ ℕ. We call the matrix-valued polynomial P_n monic in case the leading coefficient is the identity matrix I. The polynomials P_n are called orthonormal in case the squared norm G_n = I for all n ∈ ℕ in the orthogonality relations (2.2). The matrix-valued orthogonal polynomials P_n always satisfy a matrix-valued three-term recurrence of the form

x P_n(x) = P_{n+1}(x) A_n + P_n(x) B_n + P_{n−1}(x) C_n    (2.3)

for matrices A_n, B_n, C_n ∈ M_{2ℓ+1}(ℂ) for all n ∈ ℕ. Note that in particular A_n is non-singular for all n. Conversely, assuming P_{−1}(x) = 0 (by convention) and fixing the constant polynomial P_0(x) ∈ M_{2ℓ+1}(ℂ), we can generate the polynomials P_n from the recursion (2.3). In case the polynomials are monic, the coefficient A_n = I for all n and P_0(x) = I is the initial value. In general, the matrices satisfy G_{n+1} A_n = C*_{n+1} G_n and G_n B_n = B*_n G_n, so that in the monic case C_n = G^{−1}_{n−1} G_n for n ≥ 1. In case the polynomials are orthonormal, we have C_n = A*_{n−1} and B_n is Hermitian.
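This construction can be carried out numerically. The sketch below (a toy 2 × 2 weight of our own choosing, not the weight of this paper) builds the monic matrix-valued orthogonal polynomials by Gram–Schmidt on the monomials x^n I, and then checks the three-term recurrence and the monic-case identity C_n = G_{n−1}^{−1} G_n:

```python
import numpy as np

nodes, gw = np.polynomial.legendre.leggauss(120)
# Toy 2x2 weight on (-1, 1), positive definite there.
Wvals = np.array([np.sqrt(1 - t * t) * np.array([[2.0, t], [t, 1.0]]) for t in nodes])

def ip(P, Q):
    """<P,Q> = int P(x)* W(x) Q(x) dx, with P, Q given by values of shape (npts,2,2)."""
    return np.einsum('k,kji,kjl,klm->im', gw, P.conj(), Wvals, Q)

I2 = np.eye(2)
mon = lambda n: np.array([t**n * I2 for t in nodes])          # monomial x^n I

# Monic Gram-Schmidt: Q_n = x^n I - sum_m Q_m G_m^{-1} <Q_m, x^n I>.
Qs, Gs = [], []
for n in range(5):
    Q = mon(n)
    for m in range(n):
        Q = Q - np.einsum('kij,jl->kil', Qs[m],
                          np.linalg.solve(Gs[m], ip(Qs[m], mon(n))))
    Qs.append(Q)
    Gs.append(ip(Q, Q))

# Orthogonality and the monic recurrence x Q_n = Q_{n+1} + Q_n B_n + Q_{n-1} C_n.
assert np.allclose(ip(Qs[1], Qs[3]), 0, atol=1e-10)
n = 2
xQ = nodes[:, None, None] * Qs[n]
B = np.linalg.solve(Gs[n], ip(Qs[n], xQ))
C = np.linalg.solve(Gs[n - 1], ip(Qs[n - 1], xQ))
assert np.allclose(xQ, Qs[n + 1] + np.einsum('kij,jl->kil', Qs[n], B)
                       + np.einsum('kij,jl->kil', Qs[n - 1], C), atol=1e-8)
assert np.allclose(C, np.linalg.solve(Gs[n - 1], Gs[n]))      # C_n = G_{n-1}^{-1} G_n
print("orthogonality and three-term recurrence verified")
```

Note that the constant matrices multiply from the right, matching the sesquilinear form (antilinear in the first entry) discussed next.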
Note that the matrix-valued 'sesquilinear form' (2.1) is antilinear in the first entry of the inner product, which leads to a three-term recurrence of the form (2.3) in which the multiplication by the constant matrices is from the right, see [10] for a discussion. In case V ⊂ ℂ^{2ℓ+1} is an invariant subspace for the weight, let ι_V : V → ℂ^{2ℓ+1} be the embedding and let P^V_n be the matrix-valued polynomial defined by P^V_n(x) = ι*_V P_n(x) ι_V, where P_n are the monic matrix-valued orthogonal polynomials for the weight W. Then the P^V_n form a family of monic V-endomorphism-valued orthogonal polynomials, and P_n(x) = P^V_n(x) ⊕ P^{V⊥}_n(x). The same decomposition can be written down for the orthonormal polynomials.
The projections onto invariant subspaces are in the commutant *-algebra {T ∈ M_{2ℓ+1}(ℂ) | T W(x) = W(x) T for all x}. In case the commutant algebra is trivial, the matrix-valued orthogonal polynomials are irreducible. The primitive idempotents correspond to the minimal invariant subspaces, and hence they determine the decomposition of the matrix-valued orthogonal polynomials into irreducible cases.

Remark 2.1
In [42] the authors discuss non-orthogonal decompositions by considering, instead of the commutant algebra, the real vector space 𝒜 = {T ∈ M_{2ℓ+1}(ℂ) | T W(x) = W(x) T* for all x}. It follows that if ℝI ⊊ 𝒜, then the weight W reduces, non-unitarily, to weights of smaller size. Koelink and Román [22, Example 4.3] showed that 𝒜 coincides with the commutant algebra of {W(x) : x ∈ (−1, 1)}, so that, in our case, both decompositions coincide.
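Both spaces can be computed numerically for a given weight by solving the corresponding linear conditions at sampled points. The sketch below (a toy weight of our own, chosen to have the antidiagonal symmetry that also appears for the weight of this paper) finds that both the commutant {T | TW(x) = W(x)T} and the space of [42] are two-dimensional, spanned by I and the flip J:

```python
import numpy as np

J = np.array([[0.0, 1.0], [1.0, 0.0]])
Wv = lambda t: np.array([[2.0 + t, t], [t, 2.0 + t]])  # J W(x) J = W(x), pos. def. for |t| < 1
xs = np.linspace(-0.9, 0.9, 7)
basis = [np.outer(np.eye(2)[i], np.eye(2)[j]) for i in range(2) for j in range(2)]

def solution_dim(cond):
    """Dimension of the space of 2x2 matrices T with cond(T, W(x)) = 0 for all x."""
    M = np.vstack([np.column_stack([cond(E, Wv(t)).ravel() for E in basis])
                   for t in xs])
    return 4 - np.linalg.matrix_rank(M, tol=1e-10)

commutant = solution_dim(lambda T, w: T @ w - w @ T)      # T W = W T
a_space   = solution_dim(lambda T, w: T @ w - w @ T.T)    # T W = W T*, as in [42]
assert commutant == 2       # spanned by I and J
assert a_space == 2         # the two decompositions coincide for this weight
print("commutant and [42]-space both 2-dimensional")
```

Here real matrices T suffice since the sample weight is real; for a complex weight one would solve over real and imaginary parts separately.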
We denote by E_{i,j} ∈ M_{2ℓ+1}(ℂ) the matrix with zeroes everywhere except at the (i, j)th entry, where it is 1. So for the corresponding standard basis {e_k}_{k=0}^{2ℓ} we have E_{i,j} e_k = δ_{j,k} e_i.
We usually use the basis {e_k}_{k=0}^{2ℓ} in describing the results for the matrix-valued orthogonal polynomials, but occasionally the basis is relabelled {e_k}_{k=−ℓ}^{ℓ}, as is customary for the U_q(su(2))-representations of spin ℓ. In the latter case, we use superscripts to distinguish from the previous case: E^ℓ_{i,j} e^ℓ_k = δ_{j,k} e^ℓ_i, i, j, k ∈ {−ℓ, …, ℓ}.

Quantised universal enveloping algebra
We recall the setting for quantised universal enveloping algebras and quantised function algebras; this section is mainly meant to fix notation. The definitions can be found in various sources on quantum groups, such as the books [9,11,20], and we follow Kolb [27].
Fix for the rest of this paper 0 < q < 1. The quantised universal enveloping algebra can be associated to any root datum, but we only need the simplest cases g = sl(2) and g = sl(2) ⊕ sl(2). The quantised universal enveloping algebra is the unital associative algebra generated by k, k^{−1}, e, f subject to the relations

k k^{−1} = 1 = k^{−1} k,  k e k^{−1} = q² e,  k f k^{−1} = q^{−2} f,  e f − f e = (k − k^{−1})/(q − q^{−1}),

where we follow the convention as in [27, §3]. For our purposes, it is useful to extend the algebra with the square roots of k and k^{−1}, denoted by k^{1/2}, k^{−1/2}, satisfying (k^{1/2})² = k, (k^{−1/2})² = k^{−1} and k^{1/2} k^{−1/2} = 1 = k^{−1/2} k^{1/2}. The extended algebra is denoted by U_q(sl(2)), and it is a Hopf algebra with comultiplication Δ, counit ε and antipode S defined on the generators by

Δ(k^{±1/2}) = k^{±1/2} ⊗ k^{±1/2},  Δ(e) = e ⊗ 1 + k ⊗ e,  Δ(f) = f ⊗ k^{−1} + 1 ⊗ f,
ε(k^{±1/2}) = 1,  ε(e) = ε(f) = 0,  S(k^{±1/2}) = k^{∓1/2},  S(e) = −k^{−1} e,  S(f) = −f k.

The Hopf algebra has a *-structure defined on the generators by (k^{±1/2})* = k^{±1/2}, e* = q² f k, f* = q^{−2} k^{−1} e. We denote the corresponding Hopf *-algebra by U_q(su(2)).
The irreducible finite dimensional type 1 representations of the underlying *-algebra have been classified, see e.g. [21,30]. Here type 1 means that the spectrum of k^{1/2} is contained in q^{½ℤ}. For each spin ℓ ∈ ½ℕ, there is a representation t^ℓ of dimension 2ℓ + 1 in H^ℓ ≅ ℂ^{2ℓ+1} with orthonormal basis {e_{−ℓ}, e_{−ℓ+1}, …, e_ℓ}, on which the action is given by (3.3). Finally, recall that the centre Z(U_q(su(2))) is generated by the Casimir element ω of (3.4).

We use the notation U_q(g) to denote the Hopf *-algebra U_q(su(2) ⊕ su(2)), which we identify with U_q(su(2)) ⊗ U_q(su(2)). The generators K^{±1/2}_i, E_i, F_i, i = 1, 2, of the two copies satisfy the relations of U_q(su(2)) for each fixed i, and generators with a different index i commute. The tensor product of two Hopf *-algebras is again a Hopf *-algebra, where the maps on a simple tensor X_1 ⊗ X_2 are given by, see e.g. [9, Chap. 4],

Δ(X_1 ⊗ X_2) = Δ(X_1)_{13} Δ(X_2)_{24},  ε(X_1 ⊗ X_2) = ε(X_1) ε(X_2),
S(X_1 ⊗ X_2) = S(X_1) ⊗ S(X_2),  (X_1 ⊗ X_2)* = X*_1 ⊗ X*_2,

where we use leg-numbering notation. The irreducible finite dimensional type 1 representations of U_q(g) are labelled by (ℓ_1, ℓ_2) ∈ ½ℕ × ½ℕ, and the representations t^{ℓ_1,ℓ_2} from U_q(g) to End(H^{ℓ_1,ℓ_2}), H^{ℓ_1,ℓ_2} = H^{ℓ_1} ⊗ H^{ℓ_2}, are obtained as the exterior tensor product of the representations of spin ℓ_1 and ℓ_2 of U_q(su(2)). Here type 1 means that the spectrum of K^{1/2}_i, i = 1, 2, is contained in q^{½ℤ}. We use the notation Δ, ε and S for the comultiplication, counit and antipode of all Hopf algebras; from the context, it should be clear which comultiplication, counit and antipode is meant. The corresponding dual Hopf *-algebra related to the quantised function algebra is not needed for the description of the results in Sect. 4, and it will be recalled in Sect. 5.1.
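The defining relations and the spin-ℓ representations can be checked numerically. The following sketch uses one common convention for the q-deformed ladder operators (our normalisation; the paper's (3.3) may differ by normalisation of the basis):

```python
import numpy as np

q = 0.5
qn = lambda n: (q**n - q**(-n)) / (q - q**(-1))      # q-number [n]

def spin_rep(l2):
    """Spin-l representation (l = l2/2) of U_q(sl(2)) in one common convention:
    k e_p = q^{2p} e_p, e raises p by one, f lowers p by one, p = -l, ..., l."""
    ps = [(-l2 + 2 * i) / 2 for i in range(l2 + 1)]  # weights -l, ..., l
    d, l = l2 + 1, l2 / 2
    k = np.diag([q**(2 * p) for p in ps])
    e = np.zeros((d, d)); f = np.zeros((d, d))
    for i, p in enumerate(ps[:-1]):                  # e e_p = sqrt([l-p][l+p+1]) e_{p+1}
        e[i + 1, i] = np.sqrt(qn(l - p) * qn(l + p + 1))
    for i, p in enumerate(ps[1:], start=1):          # f e_p = sqrt([l+p][l-p+1]) e_{p-1}
        f[i - 1, i] = np.sqrt(qn(l + p) * qn(l - p + 1))
    return k, e, f

for l2 in (1, 2, 3):                                 # spins 1/2, 1, 3/2
    k, e, f = spin_rep(l2)
    ki = np.linalg.inv(k)
    assert np.allclose(k @ e @ ki, q**2 * e)                      # k e k^{-1} = q^2 e
    assert np.allclose(k @ f @ ki, q**(-2) * f)                   # k f k^{-1} = q^{-2} f
    assert np.allclose(e @ f - f @ e, (k - ki) / (q - q**(-1)))   # Serre-type relation
print("U_q(sl(2)) relations verified in the spin representations")
```

Note that the spectrum of the square root diag(q^p) lies in q^{½ℤ}, i.e. these are type 1 representations.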
The quantum symmetric pair and matrix-valued spherical functions

Quantum symmetric pairs and quantum symmetric spaces have been introduced and studied in detail by Letzter [34–36], see also Kolb [27]. In particular, Letzter has shown that Macdonald polynomials occur as spherical functions on quantum symmetric pairs, motivated by the works of Koornwinder, Dijkhuizen, Noumi and others. In our case, B ⊂ U_q(g), as in Definition 4.1, is the appropriate right coideal subalgebra. Using the explicit branching rules for t^{ℓ_1,ℓ_2}|_B of Theorem 4.3, we introduce matrix-valued spherical functions in Definition 4.4. To these matrix-valued spherical functions, we associate matrix-valued polynomials in (4.8), and we spend the remainder of this section describing properties of these matrix-valued polynomials. This includes the orthogonality relations, the three-term recurrence relation and the matrix-valued polynomials as eigenfunctions of a commuting set of matrix-valued q-difference operators of Askey–Wilson type. Moreover, we give two explicit descriptions of the matrix-valued weight function W, one in terms of spherical functions for this quantum symmetric pair and one in terms of the LDU-decomposition. The LDU-decomposition makes it possible to decouple the matrix-valued q-difference operator, and this leads to an explicit expression for the matrix entries of the matrix-valued orthogonal polynomials in terms of scalar-valued orthogonal polynomials from the q-Askey scheme in Theorem 4.17.
For the symmetric pair (G, K) = (SU(2) × SU(2), SU(2)), K = SU(2) corresponds to the fixed points of the Cartan involution θ flipping the order of the pairs in G. The quantised universal enveloping algebra associated to G is U_q(g) as introduced in Sect. 3. As the quantum analogue of K, we take the right coideal subalgebra B ⊂ U_q(g), i.e. B ⊂ U_q(g) is a subalgebra satisfying Δ(B) ⊂ B ⊗ U_q(g), as in Definition 4.1. Letzter [34, Sect. 7, (7.2)] has introduced the corresponding left coideal subalgebra, and we follow Kolb [27, §5] in using right coideal subalgebras for quantum symmetric pairs. Note that we have modified the generators slightly in order to have B*_1 = B_2.

Definition 4.1
The right coideal subalgebra B ⊂ U_q(g) is the subalgebra generated by K^{±1/2}, where K = K_1 K^{−1}_2, and the elements B_1, B_2.

Remark 4.2
(i) B is a right coideal, as follows from the general construction, see [27, Proposition 5.2]. It can also be verified directly by checking it for the generators.
Indeed, Δ(B_1) is in B ⊗ U_q(g) by a straightforward calculation. Since B_2 = B*_1, the same follows for B_2, since K^{±1/2} is self-adjoint. The relations (4.1), cf. [27, Lemma 5.15], hold in U_q(g), as can also be checked directly.
(ii) Then we see that k^{1/2} ↦ K^{−1/2}, q f k^{1/2} ↦ B_1 and q^{−1} k^{−1/2} e ↦ B_2 under the map ι ∘ (Ψ ⊗ Id) ∘ Δ. In particular, the relations (4.1) follow. We conclude that B is isomorphic as a *-algebra to Δ(U_q(su(2))) ⊂ U_q(g) by the *-isomorphism ι ∘ (Ψ ⊗ Id).

(iii) In particular, B ≅ U_q(su(2)) as *-algebras. So the irreducible type 1 representations of B are labelled by the spin ℓ ∈ ½ℕ. This can be made explicit by t^ℓ : B → End(H^ℓ), defined with the notation of (3.3). We use the same notation t^ℓ for these representations here and in (3.3), since they correspond under the identification of B with U_q(su(2)).

(iv) Let σ be the *-algebra isomorphism of U_q(g) = U_q(su(2)) ⊗ U_q(su(2)) flipping the order in the tensor product, or equivalently flipping the subscripts 1 ↔ 2. Then σ : B → B is an involution with B_1 ↔ B_2, K ↔ K^{−1}. On the level of representations of U_q(g) and B, it follows that t^{ℓ_1,ℓ_2}(σ(X)) = P* t^{ℓ_2,ℓ_1}(X) P, X ∈ U_q(g), where P : H^{ℓ_1} ⊗ H^{ℓ_2} → H^{ℓ_2} ⊗ H^{ℓ_1} is the flip operator.

Theorem 4.3
The finite dimensional representation t^{ℓ_1,ℓ_2} of U_q(g) restricted to B decomposes multiplicity-freely into irreducible representations t^ℓ of B, where ℓ satisfies

|ℓ_1 − ℓ_2| ≤ ℓ ≤ ℓ_1 + ℓ_2,  ℓ_1 + ℓ_2 − ℓ ∈ ℕ.    (4.3)

With respect to the orthonormal basis {e_p}_{p=−ℓ}^{ℓ} of H^ℓ and the orthogonal basis of H^{ℓ_1,ℓ_2}, the intertwiner β^{ℓ_1,ℓ_2} : H^ℓ → H^{ℓ_1,ℓ_2} has matrix entries given by the Clebsch–Gordan coefficients C^{ℓ_1,ℓ_2,ℓ}_{i,j,p}.

The proof of Theorem 4.3 is a reduction to the well-known Clebsch–Gordan decomposition for the quantised universal enveloping algebra U_q(su(2)), see e.g. [9,20], using Remark 4.2. The proof is presented in Appendix 1. In particular, (β^{ℓ_1,ℓ_2})* β^{ℓ_1,ℓ_2} is the identity on H^ℓ. In general, the decomposition of an irreducible representation restricted to a right coideal subalgebra seems a difficult problem. In this particular case, we can reduce to the Clebsch–Gordan decomposition, and yet another known special case is by Oblomkov and Stokman [38]. We use the reparametrisation ξ of (4.3), depicted in Fig. 1.

Definition 4.4
Fix ℓ ∈ ½ℕ and let (ℓ_1, ℓ_2) ∈ ½ℕ × ½ℕ be so that [t^{ℓ_1,ℓ_2}|_B : t^ℓ] = 1. The spherical function of type ℓ associated to (ℓ_1, ℓ_2) is the map Φ^ℓ_{ℓ_1,ℓ_2} : U_q(g) → End(H^ℓ) defined by Φ^ℓ_{ℓ_1,ℓ_2}(X) = (β^{ℓ_1,ℓ_2})* t^{ℓ_1,ℓ_2}(X) β^{ℓ_1,ℓ_2}.

Remark 4.5
(ii) Note that the condition (4.3) is symmetric in ℓ_1 and ℓ_2. With the notation of Remark 4.2(iv), we have Φ^ℓ_{ℓ_2,ℓ_1}(Z) = J Φ^ℓ_{ℓ_1,ℓ_2}(σ(Z)) J for Z ∈ U_q(g). This follows from β^{ℓ_2,ℓ_1} = P β^{ℓ_1,ℓ_2} J, which is a consequence of (8.2).
In case ℓ = 0, H^0 ≅ ℂ, we need ℓ_1 = ℓ_2. Then the Φ^0_{ℓ_1,ℓ_1} are linear maps U_q(g) → ℂ. In particular, Φ^0_{0,0} equals the counit ε, and the spherical function ϕ = ½(q^{−1} + q) Φ^0_{1/2,1/2} is a scalar-valued linear map on U_q(g). The elements Φ^0_{n/2,n/2} can be written as a multiple of U_n(ϕ), where U_n denotes the Chebyshev polynomial of the second kind of degree n, see Proposition 5.3. This statement can be considered as a special case of Theorem 4.8, but we need the identification with the Chebyshev polynomials in the spherical case ℓ = 0 in order to obtain the weight function in Theorem 4.8. Proposition 5.3 will follow from Theorem 4.6. The identification of the spherical functions for ℓ = 0 with Chebyshev polynomials corresponds to the classical case, since the spherical functions on (G × G)/G are the characters of G, and the characters of SU(2) are Chebyshev polynomials of the second kind, as the simplest case of the Weyl character formula. It also corresponds to the computation of the characters on the quantum SU(2) group by Woronowicz [48], since the characters are identified with Chebyshev polynomials as well.

Fig. 1  The spherical functions Φ^ℓ_{ℓ_1,ℓ_2} for ℓ = 2 and the interpretation of ϕ · Φ^ℓ_{5/2,5/2} in terms of the matrix-valued spherical functions. The reparametrisation ξ is depicted
Next, Theorem 4.6 makes it possible to associate polynomials in ϕ to the spherical functions of Definition 4.4. Theorem 4.6 essentially follows from the tensor product decomposition of representations of U_q(g), which in turn follows from the tensor product decomposition for U_q(su(2)), and some explicit knowledge of Clebsch–Gordan coefficients.
Theorem 4.6
Fix ℓ ∈ ½ℕ and let (ℓ_1, ℓ_2) ∈ ½ℕ × ½ℕ satisfy (4.3). Then for constants A_{i,j}, i, j ∈ {−½, ½}, we have

ϕ · Φ^ℓ_{ℓ_1,ℓ_2} = Σ_{i,j∈{−½,½}} A_{i,j} Φ^ℓ_{ℓ_1+i,ℓ_2+j}.

In order to interpret the result of Theorem 4.6, we evaluate both sides at an arbitrary X ∈ U_q(g). The right-hand side is a linear combination of linear maps from H^ℓ to itself after evaluating at X. For the left-hand side, we use the pairing of Hopf algebras, so that multiplication and comultiplication are dual to each other, and the left-hand side has to be interpreted as

(ϕ · Φ^ℓ_{ℓ_1,ℓ_2})(X) = Σ_{(X)} ϕ(X_{(1)}) Φ^ℓ_{ℓ_1,ℓ_2}(X_{(2)}),

which is a linear combination of linear maps from H^ℓ to itself, using the Sweedler notation Δ(X) = Σ_{(X)} X_{(1)} ⊗ X_{(2)}. The convention in Theorem 4.6 is that A_{i,j} is zero in case (ℓ_1 + i, ℓ_2 + j) does not satisfy (4.3). The proof of Theorem 4.6 can be found in Sect. 5.2.
Since B is a right coideal subalgebra, we see that the left-hand side of Theorem 4.6 has the same transformation behaviour as (4.5). Indeed, for X ∈ B and Y ∈ U_q(g) we have

(ϕ · Φ^ℓ_{ℓ_1,ℓ_2})(XY) = Σ ϕ(X_{(1)}Y_{(1)}) Φ^ℓ_{ℓ_1,ℓ_2}(X_{(2)}Y_{(2)}) = Σ ε(X_{(1)}) ϕ(Y_{(1)}) Φ^ℓ_{ℓ_1,ℓ_2}(X_{(2)}Y_{(2)})
= Σ ϕ(Y_{(1)}) Φ^ℓ_{ℓ_1,ℓ_2}((Σ_{(X)} ε(X_{(1)}) X_{(2)}) Y_{(2)}) = Σ ϕ(Y_{(1)}) Φ^ℓ_{ℓ_1,ℓ_2}(X Y_{(2)}) = t^ℓ(X) (ϕ · Φ^ℓ_{ℓ_1,ℓ_2})(Y),

where we have used that X_{(1)} ∈ B by the right coideal property and (4.5) for ϕ (a multiple of Φ^0_{1/2,1/2}) in the second equality, the counit axiom Σ_{(X)} ε(X_{(1)}) X_{(2)} = X in the fourth equality, and then (4.5) for Φ^ℓ_{ℓ_1,ℓ_2} together with the fact that ϕ(Y_{(1)}) is a scalar. Similarly, the invariance property from the right can be proved. Theorem 4.6 leads to polynomials in ϕ by iterating the result and using that A_{1/2,1/2} is non-zero.

Corollary 4.7
There exist 2ℓ + 1 polynomials r^{ℓ,k}_{n,m}, 0 ≤ k ≤ 2ℓ, of degree at most n, expressing the spherical functions of Definition 4.4 as polynomials in ϕ. The aim of the paper is to show that the polynomials r^{ℓ,k}_{n,m} give rise to matrix-valued orthogonal polynomials. Put

P_n = Σ_{i,j=0}^{2ℓ} r^{ℓ,i}_{n,j} ⊗ E_{i,j},    (4.8)

where the matrix-valued polynomials P_n are taken with respect to the relabelled standard basis e_p = e^ℓ_{p−ℓ}, p ∈ {0, 1, …, 2ℓ}. From Corollary 4.12 or Theorem 4.17, we see that the polynomial r^{ℓ,i}_{n,j} has real coefficients. The case ℓ = 0 corresponds to a three-term recurrence relation for (scalar-valued) orthogonal polynomials, and then the polynomials coincide with the Chebyshev polynomials U_n viewed as a subclass of Askey–Wilson polynomials [4, (2.18)], see Proposition 5.3.
We show that the matrix-valued polynomials (P_n)_{n=0}^∞ are orthogonal with respect to an explicit matrix-valued weight function W, see Theorem 4.8, arising from the Schur orthogonality relations. The expansion of the entries of the weight function in terms of Chebyshev polynomials is given by quantum group theoretic considerations, except for the calculation of the coefficients in this expansion. The matrix-valued orthogonal polynomials satisfy a matrix-valued three-term recurrence relation, as follows from Theorem 4.6, which in turn is a consequence of the decomposition of tensor product representations of U_q(g). However, in order to determine the matrix coefficients in the matrix-valued three-term recurrence relation, we use analytic methods. The existence of two Casimir elements in U_q(g) leads to the matrix-valued orthogonal polynomials being eigenfunctions of two commuting matrix-valued q-difference operators, see [23] for the group case. This extends Letzter [35] to the matrix-valued set-up for this particular case. The q-difference operators are the key to determining the entries of the matrix-valued orthogonal polynomials explicitly in terms of scalar-valued orthogonal polynomials from the q-Askey scheme [26], namely the continuous q-ultraspherical polynomials and the q-Racah polynomials. In this deduction, the LDU-decomposition of the matrix-valued weight function W is essential, since conjugation with L allows us to decouple the matrix-valued q-difference operator.
In the remainder of Sect. 4, we state these results explicitly, and we present the proofs in the remaining sections. First we give the main statements which essentially follow from the quantum group theoretic set-up, except for explicit calculations; these are Theorems 4.8, 4.11 and 4.13. The remaining Theorems 4.15 and 4.17 are obtained using scalar orthogonal polynomials from the q-analogue of the Askey scheme [26] and transformation and summation formulas for basic hypergeometric series [15].
We start by stating that the matrix-valued polynomials (P_n)_{n=0}^∞ introduced in (4.8) are orthogonal with the conventions of Sect. 2. The orthogonality relations of Theorem 4.8 are due to the Schur orthogonality relations. The expansion of the entries of the weight function in terms of Chebyshev polynomials follows from the fact that the entries are spherical functions, i.e. correspond to the case ℓ = 0, so that they are polynomial in ϕ. The non-zero entries follow by considering tensor product decompositions, but the explicit values for the coefficients α_t(m, n) in Theorem 4.8 require summation and transformation formulae for basic hypergeometric series.

Theorem 4.8
The polynomials (P_n)_{n=0}^∞ of (4.8) form a family of matrix-valued orthogonal polynomials so that P_n is of degree n with non-singular leading coefficient. The orthogonality for the matrix-valued polynomials (P_n)_{n≥0} is given by

∫_{−1}^{1} (P_n(x))* W(x) P_m(x) dx = δ_{n,m} G_m,

where the squared norm matrix G_m is diagonal.
Moreover, for 0 ≤ m ≤ n ≤ 2ℓ the entries of the weight matrix are given explicitly as linear combinations of Chebyshev polynomials U_t with coefficients α_t(m, n).

The proof of Theorem 4.8 proceeds in steps. First we study explicitly the case ℓ = 0, motivated by the works of Koornwinder [30], Letzter [34–36] and others. Secondly, we show that taking traces of a matrix-valued spherical function of type ℓ associated to (ℓ_1, ℓ_2) times the adjoint of a matrix-valued spherical function of type ℓ associated to (ℓ′_1, ℓ′_2) gives, up to an action by an invertible group-like element of U_q(g), a polynomial in the generator ϕ for the case ℓ = 0. Then the explicit expression of the Haar functional on this polynomial algebra, stated in Lemma 5.4, gives the matrix-valued orthogonality relations. Finally, the explicit expression for the weight is obtained by analysing the explicit expression of W in terms of the matrix entries of the intertwiners β^{ℓ_1,ℓ_2} in case ℓ_1 + ℓ_2 = ℓ. These matrix entries are Clebsch–Gordan coefficients.
The leading coefficient of P n can be calculated explicitly from the proof of Theorem 4.8:

Corollary 4.9
The leading coefficient of P n is a non-singular diagonal matrix.

The weight W is not irreducible, see Sect. 2, but splits into two irreducible block matrices. The symmetry J of the weight function of Theorem 4.8 is essentially a consequence of Remark 4.5(ii), but we need the explicit expression of the weight in order to prove that the commutant algebra is not bigger, see also [22, §4].

Proposition 4.10 The commutant algebra
is spanned by I and J , where J : e p → e 2ℓ−p , p ∈ {0, . . . , 2ℓ}, is a self-adjoint involution. Then J P n (x)J = P n (x) and J G n J = G n . Moreover, the weight W decomposes into two irreducible block matrices W + and W − , where W + , respectively W − , acts in the +1-eigenspace, respectively −1-eigenspace, of J . So P + + P − = I , where P + = (I + J )/2 and P − = (I − J )/2 are the orthogonal self-adjoint projections onto these eigenspaces.

The special cases for ℓ = 1/2 and ℓ = 1 are given at the end of this section. In particular, we identify all scalar-valued orthogonal polynomials occurring in this framework explicitly in terms of Askey-Wilson polynomials.
Theorem 4.6 can be used to find a three-term recurrence relation for the matrix-valued orthogonal polynomials P n , cf. Sect. 2, so the underlying tensor product decompositions provide the three-term recurrence relation. However, the resulting expressions for the entries of the coefficient matrices are rather complicated expressions in terms of Clebsch-Gordan coefficients. For the corresponding matrix-valued monic polynomials Q n (x) = P n (x)lc(P n ) −1 , see Corollary 4.9 for the explicit expression for the leading coefficient, we can derive a simple expression for the matrices in the three-term recurrence relation once we have obtained more explicit expressions for the matrix entries of Q n . This is obtained in Sect. 7 using an explicit link of the matrix entries to scalar orthogonal polynomials in the q-Askey scheme.
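While the explicit coefficient matrices are only obtained later, the general shape of such a three-term recurrence for monic matrix-valued orthogonal polynomials is the standard one; the names X n , Y n below match those of Theorem 4.11, and the side on which the coefficients act is taken to follow the conventions of Sect. 2:

```latex
x\,Q_n(x) \;=\; Q_{n+1}(x) \;+\; Q_n(x)\,X_n \;+\; Q_{n-1}(x)\,Y_n,
\qquad Q_{-1}(x) := 0,\quad Q_0(x) = I.
```

Since matrix multiplication does not commute, the side on which X n and Y n act is part of the convention.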

Theorem 4.11
The monic matrix-valued orthogonal polynomials (Q n ) n≥0 satisfy a three-term recurrence relation with coefficient matrices X n , Y n . Note that X n → 0 and Y n → 1/4 as n → ∞. The three-term recurrence relation for the matrix-valued orthogonal polynomials P n is given in Corollary 4.12, which follows from Theorem 4.11, since we have G n+1 A n = lc(P n+1 ) * lc(P n ), G n B n = lc(P n ) * X n lc(P n ), and G n−1 C n = lc(P n−1 ) * Y n lc(P n ). For future reference, we give the explicit expressions in Corollary 4.12.

Corollary 4.12
The matrix-valued orthogonal polynomials (P n ) n≥0 satisfy a three-term recurrence relation with coefficients A n , B n , C n . Note that the case ℓ = 0 gives a three-term recurrence relation that can be solved in terms of the Chebyshev polynomials, see Proposition 5.3.
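For comparison, in the scalar case ℓ = 0 the recurrence is the classical one for the Chebyshev polynomials of the second kind, which in monic normalisation exhibits exactly the limiting coefficients 0 and 1/4 noted after Theorem 4.11:

```latex
U_{n+1}(x) = 2x\,U_n(x) - U_{n-1}(x), \qquad U_0(x)=1,\quad U_1(x)=2x,
\qquad\Longrightarrow\qquad
x\,\widetilde U_n(x) = \widetilde U_{n+1}(x) + \tfrac14\,\widetilde U_{n-1}(x),
\quad \widetilde U_n(x) := 2^{-n}U_n(x).
```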
In the group case, the spherical functions are eigenfunctions of K -invariant differential operators on G/K , see e.g. [8,14]. For matrix-valued spherical functions this is also the case, see [40], and this has been exploited in the special cases studied in [17,23,24]. In the quantum group case, the action of the Casimir operator gives rise to a q-difference operator for the corresponding spherical functions, see [35]. The first occurrence of an Askey-Wilson q-difference operator, see [4,15,19], in this context is due to Koornwinder [30]. For the matrix-valued orthogonal polynomials, we have a matrix-valued analogue of the Askey-Wilson q-difference operator, as given in Theorem 4.13. We obtain two of these operators, one arising from the Casimir operator for U q (su (2)) in the first leg of U q (g) and one from the U q (su (2)) Casimir operator of the second leg. This is related to a kind of Cartan decomposition of U q (g), cf. (4.5), which, however, does not exist in general for quantised universal enveloping algebras. We can still resolve this problem using techniques based on [8, §2], see the first part of the proof in Sect. 5. The proof of Theorem 4.13 is completed in Sect. 7.
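For orientation, we recall the scalar Askey-Wilson q-difference operator of [4,15], of which Theorem 4.13 gives a matrix-valued analogue. With x = μ(z) = (z + z −1 )/2 it acts by

```latex
(Df)(x) = A(z)\bigl(f(\mu(qz)) - f(\mu(z))\bigr)
        + A(z^{-1})\bigl(f(\mu(z/q)) - f(\mu(z))\bigr),
\qquad
A(z) = \frac{(1-az)(1-bz)(1-cz)(1-dz)}{(1-z^{2})(1-qz^{2})},
```

and the Askey-Wilson polynomials p n (x; a, b, c, d | q) are eigenfunctions with eigenvalue q −n (1 − q n )(1 − abcd q n−1 ).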

Theorem 4.13 Define two matrix-valued q-difference operators by
where the multiplication by the matrix-valued functions M i (z) and M i (z −1 ) is from the left and η q is the shift operator acting by (η q f )(z) = f (qz). The matrix-valued function M 1 is given explicitly. The matrix-valued orthogonal polynomials P n are eigenfunctions for the operators D i with eigenvalue matrices Λ n (i), so that D i P n = P n Λ n (i), where η q and η q −1 are applied entry-wise to the matrix-valued orthogonal polynomials P n .
Theorem 4.13 shows that J D 1 J = D 2 , since J is constant. In particular, D 1 + D 2 commutes with J and reduces to a q-difference operator for the matrix-valued orthogonal polynomials associated with the weight W + or W − , see Proposition 4.10. Similarly, D 1 − D 2 anticommutes with J .
Note that the expression M i (z)P(qz) + M i (z −1 )P(z/q) is symmetric under z ↔ z −1 for any matrix-valued polynomial P, and hence is again a function of x = μ(z). The case ℓ = 0 corresponds to only one q-difference operator. For the Chebyshev polynomials we have U n (x) = (q n+2 ; q) n −1 p n (x; q, −q, q 1/2 , −q 1/2 |q), rewritten as Askey-Wilson polynomials [4, (2.18)]. By [16, §2] it suffices to check Corollary 4.14 for P = P n , Q = P m , so that by Theorems 4.13 and 4.8 we need to check that Λ n (i) * G n δ m,n = G n Λ m (i)δ m,n , which is true since the matrices involved are real and diagonal.
The orthogonality measure is a positive measure in case 0 < q < 1 and β real with |β| < 1; see (4.11) for the explicit expression. Note that in the special case β = q 1+k , k ∈ N, the weight function is polynomial in x = cos θ. We use the continuous q-ultraspherical polynomials (4.10) for any β ∈ C. In particular, for β = q −k with k ∈ N the sum in (4.10) is restricted to n − k ≤ r ≤ k, and in particular C n (x; q −k |q) = 0 in case n − k > k. With this convention, we can now describe the LDU-decomposition of the weight matrix, and state the inverse of the unipotent lower triangular matrix L in Theorem 4.15.
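The convention used here matches the standard expansion of the continuous q-ultraspherical polynomials, see e.g. [15, §7.4]; presumably (4.10) is of this form, with x = cos θ:

```latex
C_n(\cos\theta;\beta\,|\,q)
  = \sum_{r=0}^{n}
    \frac{(\beta;q)_r\,(\beta;q)_{n-r}}{(q;q)_r\,(q;q)_{n-r}}\,
    e^{i(n-2r)\theta}.
```

Indeed, for β = q −k the factor (β; q) r vanishes for r > k and (β; q) n−r vanishes for n − r > k, so the sum restricts to n − k ≤ r ≤ k, and C n (x; q −k |q) = 0 when n − k > k.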

Theorem 4.15
The matrix-valued weight W as in Theorem 4.8 admits an LDU-decomposition, and the inverse of L is given explicitly. Note that T , L and L −1 are matrix-valued polynomials, which is clear from the explicit expression and (4.12). It is remarkable that the LDU-decomposition holds for arbitrary size 2ℓ + 1, but that L does not depend on the spin ℓ and that the dependence of T on the spin is only through the constants c k (ℓ).
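Schematically, and suppressing the explicit constants (not reproduced here; the precise normalisations are those of Theorem 4.15), the shape suggested by (6.2) and the polynomials appearing in Proposition 6.3 is

```latex
W(x) \;=\; L(x)\,T(x)\,L(x)^{t},
\qquad
\bigl(L(x)\bigr)_{mk} \;\propto\; C_{m-k}\bigl(x;\,q^{2k+2}\,\big|\,q^{2}\bigr),
\qquad
\bigl(T(x)\bigr)_{kk} \;\propto\; c_k(\ell)\,w\bigl(x;\,q^{2k+2}\,\big|\,q^{2}\bigr),
```

for 0 ≤ k ≤ m ≤ 2ℓ, with L(x) unipotent lower triangular and T (x) diagonal, and with w(x; q 2k+2 |q 2 ) the weight of (4.12).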
We prove the first part of Theorem 4.15 in Sect. 6. The proof of Theorem 4.15 is analytic in nature, and a quantum group theoretic proof would be desirable. The statement on the inverse of L(x) is taken from [1], where the inverse of a lower triangular matrix whose entries are continuous q-ultraspherical polynomials is derived in a more general situation; see [1, Example 4.2]. The inverse of L in the limit case q ↑ 1 was derived by Cagliero and Koornwinder [6], and the proof of [1] is of a different nature than the proof presented in [6]. Using the lower triangular matrix L of the LDU-decomposition of Theorem 4.15, we are able to decouple D 1 of Theorem 4.13 after conjugation with L t (x). We get a scalar q-difference equation for each of the matrix entries of L t (x)P n (x), which is solved by continuous q-ultraspherical polynomials up to a constant. Since we have yet another matrix-valued q-difference operator for L t (x)P n (x), namely L t D 2 (L t ) −1 with D 2 as in Theorem 4.13, we obtain a relation for the constants involved. This relation turns out to be a three-term recurrence relation along columns, which can be identified with the three-term recurrence for q-Racah polynomials. Finally, applying (L t (x)) −1 gives an explicit expression for the matrix entries of the matrix-valued orthogonal polynomials in Theorem 4.17. Before stating it, we recall the q-Racah polynomials, where n ∈ {0, 1, 2, . . . , N }, N ∈ N, μ(x) = q −x + γ δq x+1 , and one of the conditions αq = q −N , βδq = q −N or γ q = q −N holds.
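In the standard normalisation of [26], the q-Racah polynomials are the balanced 4 ϕ 3 -series

```latex
R_n\bigl(\mu(x);\alpha,\beta,\gamma,\delta\,|\,q\bigr)
 \;=\; {}_{4}\varphi_{3}\!\left(\genfrac{}{}{0pt}{}
   {q^{-n},\ \alpha\beta q^{n+1},\ q^{-x},\ \gamma\delta q^{x+1}}
   {\alpha q,\ \beta\delta q,\ \gamma q}
   \;;\,q,\,q\right),
\qquad \mu(x) = q^{-x} + \gamma\delta q^{x+1},
```

for n ∈ {0, . . . , N }, where one of αq, βδq, γ q equals q −N .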
Note that the left-hand side is a polynomial of degree at most n, whereas the right-hand side is of degree n + j − i. In particular, for j > i the leading coefficient of the right-hand side of Theorem 4.17 has to vanish, leading to Corollary 4.18.

Corollary 4.18 With the notation of Theorem 4.17, we have, for j > i:
By evaluating Corollary 4.7 at 1 ∈ U q (g), we obtain Corollary 4.19, which is not clear from Theorem 4.17.

Examples
We end this section by specialising the results to low-dimensional cases. The case ℓ = 0 reduces to the Chebyshev polynomials U n (x) of the second kind, as observed following Theorem 4.13. This is proved in Proposition 5.3, which is required for the proofs of the general statements of Sect. 4.

Example: ℓ = 1/2

For ℓ = 1/2 we work with 2 × 2 matrices. By Proposition 4.10, we know that the weight is block-diagonal, so that in this case we have an orthogonal decomposition into scalar-valued orthogonal polynomials; the matrix-valued weight W is given explicitly in block form. In this case, see Sect. 2, the polynomials P n diagonalise, since the leading coefficient is diagonalised by conjugation with the orthogonal matrix Y , and we write p ± n for the resulting scalar-valued polynomials. Then we can identify p ± n by any of the results given in this section, and we do this using the three-term recurrence relation of Corollary 4.12. After conjugation, the three-term recurrence relation for p + n is obtained explicitly, and the three-term recurrence relation for p − n is obtained by substituting x → −x into that for p + n . The explicit expressions for p + n and p − n are given in terms of continuous q-Jacobi polynomials, which is a q-analogue of [24, §8.2]. Moreover, writing down the conjugation of the q-difference operator D 1 + D 2 of Theorem 4.13 for the case ℓ = 1/2 for the conjugated polynomials gives back the Askey-Wilson q-difference operator for the continuous q-Jacobi polynomials. Working out the eigenvalue equation for D 1 − D 2 gives a simple q-analogue of the contiguous relations of [24, p. 5708].

Example: ℓ = 1
For ℓ = 1 we work with 3 × 3 matrices. By Proposition 4.10, we can block-diagonalise the matrix-valued weight, where W + is a 2 × 2 matrix-valued weight and W − is a scalar-valued weight. Correspondingly, P + n is a 2 × 2 matrix-valued polynomial and p − n is a scalar-valued polynomial. Conjugating Corollary 4.12 gives the three-term recurrence relations for P + n and p − n . The scalar-valued polynomial p − n can be identified with the continuous q-ultraspherical polynomials. The 2 × 2 matrix-valued polynomials P + n are solutions to the matrix-valued q-difference equation D P + n = P + n Λ n . Here D = M(z)η q + M(z −1 )η q −1 is the restriction of the conjugated D 1 + D 2 to the +1-eigenspace of J ; the explicit expressions for M(z) and Λ n can be written down. These results are q-analogues of some of the results given in [24, §8.3], see also [39]. Note moreover that W + (0) is a multiple of the identity, so that the commutant of W + equals the commuting algebra of Tirao and Zurrián [42], see also [22]. Since the commutant is trivial, the weight W + is irreducible, which can also be checked directly.

Quantum group-related properties of spherical functions
In this section, we start with the proofs of the statements of Sect. 4 which can be obtained using the interpretation of matrix-valued spherical functions on U q (g).

Matrix-valued spherical functions on the quantum group
In this subsection, we study some of the properties of the matrix-valued spherical functions which follow from the quantum group theoretic interpretation. In particular, we derive Theorem 4.3 from Remark 4.2. The precise identification with the literature and the standard Clebsch-Gordan coefficients is made in Appendix 1, and we use the intertwiner and the Clebsch-Gordan coefficients as presented there. We also need the matrix elements of the type 1 irreducible finite-dimensional representations. Define t ℓ m,n : U q (su(2)) → C, t ℓ m,n (X ) = ⟨t ℓ (X )e n , e m ⟩, n, m ∈ {−ℓ, . . . , ℓ}, where we take the inner product in the representation space H ℓ for which the basis {e n } n=−ℓ,...,ℓ is orthonormal. Denoting the matrix elements for ℓ = 1/2 by α, β, γ , δ, these generate a Hopf algebra, where the Hopf algebra structure is determined by duality of Hopf algebras. Moreover, it is a Hopf * -algebra with * -structure defined by α * = δ, β * = −qγ , which we denote by A q (SU (2)). Then the Hopf * -algebra A q (SU (2)) is in duality as Hopf * -algebras with U q (su (2)). In particular, the matrix elements t ℓ m,n ∈ A q (SU (2)) can be expressed in terms of the generators and span A q (SU (2)). Moreover, the matrix elements t ℓ m,n form a basis for the underlying vector space of A q (SU (2)). The left action of U q (su (2)) on A q (SU (2)) is given by (X · ξ )(Y ) = ξ(Y X ) for X, Y ∈ U q (su (2)) and ξ ∈ A q (SU (2)). Similarly, the right action is given by (ξ · X )(Y ) = ξ(XY ). A calculation gives k 1/2 · t ℓ m,n = q −n t ℓ m,n and t ℓ m,n · k 1/2 = q −m t ℓ m,n , so that α · k 1/2 = q 1/2 α, β · k 1/2 = q −1/2 β, γ · k 1/2 = q 1/2 γ , δ · k 1/2 = q −1/2 δ. Since k 1/2 and its powers are group-like elements, it follows that the left and right actions of k 1/2 and its powers are algebra homomorphisms. See e.g. [9,11,20,21], and references given there.
In the same way, we view the matrix elements with respect to the Hopf algebra tensor product U q (g) = U q (su (2)) ⊗ U q (su (2)). In particular, for λ, μ ∈ ½Z we find the corresponding expressions for the matrix elements t ℓ 1 m 1 ,n 1 ⊗ t ℓ 2 m 2 ,n 2 . Similarly, the Hopf * -algebra spanned by all these matrix elements is A q (G) = A q (SU (2)) ⊗ A q (SU (2)). Let A be the commutative subalgebra of U q (g) generated by the group-like element A and its inverse A −1 . Recall the spherical function Φ ℓ 1 ,ℓ 2 from Definition 4.4, and recall the transformation property (4.5).

Definition 5.1 The linear map
So the spherical function Φ ℓ 1 ,ℓ 2 is a spherical function of type ℓ by (4.5).
(ii) A spherical function Φ of type ℓ restricted to A is diagonal with respect to the basis {e p } p=−ℓ,...,ℓ , for each λ ∈ Z. (iii) Assume that Φ is a spherical function of type ℓ, with all linear maps Φ m,n on U q (g) in the linear span of the matrix elements. This proves (i).
To obtain (ii), write Φ = Σ m,n=−ℓ,...,ℓ Φ m,n ⊗ E m,n and observe the diagonal form directly. Finally, for (iii) note that, by the multiplicity-free statement in Theorem 4.3, this is the only (up to a constant) possible linear combination of the matrix elements t ℓ 1 m 1 ,n 1 ⊗ t ℓ 2 m 2 ,n 2 for fixed (ℓ 1 , ℓ 2 ) which has this property. Hence, (iii) follows.
The special case Φ 0 1/2,1/2 : U q (g) → C can now be calculated explicitly using the Clebsch-Gordan coefficients C 1/2,1/2,0 m,−m,0 from (8.5). The resulting element is not self-adjoint in A q (SU (2)) ⊗ A q (SU (2)). Recall the right action of A −1 , analogous to the construction discussed at the beginning of this subsection; the resulting element ψ is then by construction self-adjoint for the * -structure of A q (SU (2)) ⊗ A q (SU (2)).

Proof of Theorem 4.6 As U q (g)-representations, the tensor product decomposition

The recurrence relation for spherical functions of type ℓ
where Ψ i, j m,n is in the span of the matrix elements t ℓ 1 +i r 1 ,s 1 ⊗ t ℓ 2 + j r 2 ,s 2 . Note that (4.7), and a similar calculation for multiplication by an element from B from the other side, shows that ϕΦ ℓ 1 ,ℓ 2 has the required transformation behaviour (4.5) for the action of B from the left and the right. Since the matrix elements t ℓ 1 r 1 ,s 1 ⊗ t ℓ 2 r 2 ,s 2 form a basis for A q (G), it follows that each Ψ i, j satisfies (4.5), so that Proposition 5.2(iii) applies. It remains to show that A 1/2,1/2 ≠ 0. In order to do so, we evaluate the identity of Theorem 4.6 at a suitable element of U q (g). For the U q (su (2))-representations in (3.3), it is immediate that t ℓ (e k ) = 0 for k > 2ℓ and that t ℓ r,s (e 2ℓ ) = 0 except for the case (r, s) = (−ℓ, ℓ), and then t ℓ −ℓ,ℓ (e 2ℓ ) ≠ 0. Extending to U q (g), we find that the right-hand side is non-zero in case m = −ℓ 1 + ℓ 2 , n = ℓ 1 − ℓ 2 . So if we evaluate the identity of Theorem 4.6 at E 2ℓ 1 +1 , it follows that only the term with (i, j) = (1/2, 1/2) on the right-hand side of Theorem 4.6 is non-zero, and the specific matrix element is given by (5.4), which is non-zero for the same matrix element (m, n) = (−ℓ 1 + ℓ 2 , ℓ 1 − ℓ 2 ) by (5.4).
Note that we can derive the explicit value of A 1/2,1/2 from the proof of Theorem 4.6 by keeping track of the constants involved. However, we do not need the explicit value except in the case ℓ = 0.

Proposition 5.3 For n ∈ N, we have
Recall that the Chebyshev polynomials of the second kind are orthogonal polynomials, see e.g. [15,19,26].
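Explicitly, with the normalised weight on [−1, 1],

```latex
\frac{2}{\pi}\int_{-1}^{1} U_n(x)\,U_m(x)\,\sqrt{1-x^{2}}\;dx \;=\; \delta_{n,m},
\qquad
U_n(\cos\theta) = \frac{\sin\bigl((n+1)\theta\bigr)}{\sin\theta}.
```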

Orthogonality relations
In this subsection, we prove Theorem 4.8 from the quantum group theoretic interpretation, up to the calculation of certain explicit coefficients in the expansion of the entries of the weight. Recall the Haar functional on A q (SU (2)). It is the unique left and right invariant positive functional h 0 : A q (SU (2)) → C normalised by h 0 (1) = 1, see e.g. [9], [20, §4.3.2], [21,48]. The Schur orthogonality relations express h 0 on products of matrix elements of A q (SU (2)). Then the functional h = h 0 ⊗ h 0 is the Haar functional on A q (G). We can identify the analogue of the algebra of bi-K -invariant polynomials on G as the algebra generated by the self-adjoint element ψ, and give the analogue of the restriction of the invariant integration in Lemma 5.4.
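These relations are the q-analogue of the classical Schur orthogonality relations for a compact group G with normalised Haar measure,

```latex
\int_{G} t^{\pi}_{ij}(g)\,\overline{t^{\sigma}_{kl}(g)}\;dg
 \;=\; \frac{\delta_{\pi\sigma}\,\delta_{ik}\,\delta_{jl}}{\dim\pi},
```

where t π ij are the matrix elements of irreducible unitary representations π, σ ; in the quantum case the factor 1/dim π is replaced by a q-deformed expression which also depends on the matrix indices.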

Lemma 5.4 A functional τ : U q (g) → C that satisfies the transformation behaviour τ ((AX A −1 )Y Z ) = ε(X )τ (Y )ε(Z ) for all X, Z ∈ B and all Y ∈ U q (g) is a polynomial in ψ as in (5.2). Moreover, the Haar functional on the * -algebra C[ψ] ⊂ A q (G) is given by
Proof From Proposition 5.2(iii) and (5.3), we see that any functional on U q (g) satisfying the invariance property τ ((AX A −1 )Y Z ) = ε(X )τ (Y )ε(Z ) for all X, Z ∈ B and all Y ∈ U q (g) is a polynomial in ψ, since the space of such functionals is spanned by the spherical functions of type 0.
In particular, any such trace is a polynomial in the generator ψ by Lemma 5.4.

Corollary 5.6
Fix ℓ ∈ ½N, then for k, p ∈ {0, 1, . . . , 2ℓ} the corresponding identity holds.

Proof of Theorem 5.5 Write Ψ = Σ m,n=−ℓ,...,ℓ Ψ m,n ⊗ E m,n and Φ = Σ m,n=−ℓ,...,ℓ Φ m,n ⊗ E m,n , with Ψ m,n and Φ m,n linear functionals on U q (g), so that (5.8) holds. Since B is a right coideal, we can assume that Z (1) ∈ B, so, by the transformation property (4.5), we can rewrite Ψ m,n (Z (1) ). Using this, and t ℓ k,n (Z (1) ) = t ℓ n,k (Z (1) * ) by the * -invariance of B and the unitarity of t ℓ , we move this to the Φ-part; summing over n and using the transformation property (4.5) for Φ, we obtain the result, the terms S(Z (2) ) * collapsing to ε(Z ) by the antipode axiom in a Hopf algebra. For the invariance property from the left, we proceed similarly using that A is a group-like element. So for Y ∈ U q (g) and X ∈ B we proceed as in the previous paragraph using X (1) . The result follows if we prove Σ (X ) X (1) * A −2 S(X (2) ) * A = A −1 ε(X ). In order to prove this, we need the observation that S(X * ) = A −2 S(X ) * A 2 for all X ∈ U q (g), which can be verified on the generators and follows since the operators are antilinear homomorphisms, see Remark 5.7. Now we obtain the required identity.

Remark 5.7 The required identity S(X * ) = A −2 S(X ) * A 2 for all X ∈ U q (g) can be generalised to arbitrary semisimple g. Indeed, since the square of the antipode S is given by conjugation with an explicit element of the Cartan subalgebra associated to ρ = ½ Σ α>0 α, see e.g. [33, Exercise 4.1.1], and since in a Hopf * -algebra S ◦ * is an involution, we find the generalisation. Using the Clebsch-Gordan decomposition, we obtain the expansion of the relevant matrix elements. The statement on the adjoint follows immediately from (5.8) and ψ being self-adjoint, see (5.2).
We now can start the first part of the proof of Theorem 4.8, except for the fact that we have to determine certain constants. This is contained in Lemma 5.8.

Lemma 5.8 We have
The proof of Lemma 5.8 is a calculation using Theorem 4.6, which we postpone to Sect. 6.3.
First part of the proof of Theorem 4.8 Using the notation of Sect. 4, we find the required expression from Corollary 4.7 and (4.8), where we use that the action by A −1 from the right is an algebra homomorphism, since A −1 is group-like, and that ψ is self-adjoint. By Proposition 5.2, Lemma 5.4 and (5.7), we obtain (5.9) after a straightforward calculation. Plugging Lemma 5.8 into (5.9) and rewriting proves the result using Corollary 5.6.
Note that we have not yet determined the explicit values of α t (m, n) in Theorem 4.8 and we have not shown that W is a matrix-valued weight function in the sense of Sect. 2. The values of the constants α t (m, n) will be determined in Sect. 6.1 and the positivity of W (x) for x ∈ (−1, 1) will follow from Theorem 4.15.

q-Difference equations
It is well known, see e.g. Koornwinder [30], Letzter [35], Noumi [37], that the centre of the quantised universal enveloping algebra can be used to determine a commuting family of q-difference operators for which the corresponding spherical functions are eigenfunctions. In this subsection, we derive the matrix-valued q-difference operators corresponding to central elements, for which we find matrix-valued eigenfunctions.
The centre of U q (g) is generated by two Casimir elements, see Sect. 3. Because of Proposition 5.2 and (3.4), the spherical functions are eigenfunctions of the Casimir elements. The goal is to compute the radial parts of the Casimir elements acting on arbitrary spherical functions of type ℓ in terms of an explicit q-difference operator. In order to derive such a q-difference operator, we find a BAB-decomposition for suitable elements in U q (g) in Proposition 5.10. This special case of a BAB-decomposition is the analogue of the K AK -decomposition, which does not have a general quantum algebra analogue. For this purpose, we first establish Lemma 5.9, which can be viewed as a quantum analogue of [8, Lemma 2.2] and gives the BAB-decomposition of F 2 A λ .

Lemma 5.9 Recall
Proof Recall Definition 4.1, so the result follows from

Proposition 5.10
The BAB-decomposition for the Casimir elements Ω 1 and Ω 2 is given explicitly. Proof We first concentrate on Ω 2 ; the statement for Ω 1 follows by flipping the order using σ as in Remark 4.2(iv). We need to rewrite E 2 F 2 A λ , which we do in terms of F 2 A λ and next using Lemma 5.9. The details are as follows.
Using Definition 4.1 and the commutation relations, and pulling F 2 through to the left, we obtain a first identity; similarly, and only slightly more involved, we obtain a second one. Using (5.11) and (5.12), we eliminate the term with F 1 F 2 , and shifting λ to λ + 1 gives (5.13). Apply Lemma 5.9 to the first two terms on the right-hand side of (5.13), and note that the remaining terms in (5.13) and in Ω 2 A λ can be dealt with by observing that K 2 = K −1/2 A = AK −1/2 . Taking corresponding terms together proves the BAB-decomposition for Ω 2 A λ after a short calculation.
Our next task is to translate Proposition 5.10 into an operator for spherical functions of type ℓ, from which we eventually derive, see Theorem 4.13, an Askey-Wilson type q-difference operator for the matrix-valued orthogonal polynomials P n . Let Φ be a spherical function of type ℓ; then we immediately obtain (5.14) from Proposition 5.10. The analogous expression for Φ(Ω 2 A λ ) can be obtained using the flip σ , see Remark 4.2(iv). In particular, it suffices to replace all t ℓ (X ) in (5.14) by J t ℓ (X )J , see Remark 4.2(iv), to get the corresponding expression. By Proposition 5.2(ii), we know that Φ(A λ ) is diagonal. Note that Φ · Ω 1 : U q (g) → End(H ℓ ) is also a spherical function of type ℓ by the centrality of the Casimir element Ω 1 , so that Φ(Ω 1 A λ ) is diagonal, which can also be seen directly from (5.14). We can calculate the matrix entries of Φ(Ω 1 A λ ) using the upper triangular matrices t ℓ (B 1 K −1/2 ), t ℓ (B 1 ) having only non-zero entries on the superdiagonal, the lower triangular matrices t ℓ (K −1/2 B 2 ), t ℓ (B 2 ) having only non-zero entries on the subdiagonal, and the diagonal matrices t ℓ (K ±1/2 ). For Φ a spherical function of type ℓ, we view the diagonal restricted to A as a vector-valued function Φ̌. So we can regard the Casimir elements as acting on the vector-valued function Φ̌, and the action of the Casimir is made explicit in Proposition 5.11.

Proposition 5.11
Let Φ be a spherical function of type ℓ, with corresponding vector-valued function Φ̌ : A → H ℓ representing the diagonal when restricted to A. Then M 1 (z) is a tridiagonal and N 1 (z) a diagonal matrix with respect to the basis {e n } n=−ℓ,...,ℓ of H ℓ , with explicit coefficients, and the action for Ω 2 is analogous.

Remark 5.12 The proof of Theorem 4.13 does not explain why the matrix coefficients of η q and η q −1 in Theorem 4.13 are related by z ↔ z −1 . In Proposition 5.11, there is a lack of symmetry between the up and down shift in λ, and only after suitable multiplication with Φ̌ 0 from the left and right does the symmetry of Theorem 4.13 appear. It would be desirable to have an explanation of this symmetry from the quantum group theoretic interpretation. Note that the symmetry can be translated into an explicit requirement. The remark following (5.14) on how to switch to the second Casimir operator gives the conjugation between M 1 and M 2 , respectively N 1 and N 2 . Note that in case ℓ = 0, Φ and Φ̌ are equal, and we find that Proposition 5.11 gives the operator for a spherical function Φ : U q (g) → C (of type 0), which should be compared to [30, Lemma 5.1], see also [35,37].
Proof Consider (5.14) and calculate the (m, m)-entry. Using the explicit expressions for the elements t ℓ (X ) for X ∈ B, see (4.2), (3.1), in (5.14), we find an expression which we can rewrite as stated with the matrices M 1 (q λ ) and N 1 (q λ ). The case for Ω 2 follows from the observed symmetry, or it can be obtained by an analogous computation.

Corollary 5.13 Assuming Φ̌ 0 (A λ ) is invertible, the matrix-valued polynomials P n satisfy the eigenvalue equations
where Λ n (i) are defined in (5.16) and μ(x) = (x + x −1 )/2. It remains to prove the assumption in Corollary 5.13 for sufficiently many λ, and to calculate the coefficients in the eigenvalue equations explicitly. This is done in Sect. 7.
Having established (5.17), we can prove Corollary 4.9 by considering coefficients in the Laurent expansion.

Proof of Corollary 4.9
The left-hand side of (5.17) can be expanded as a Laurent series in q λ by Proposition 5.2(ii) and (5.15). The leading coefficient, of degree ℓ + n, is an antidiagonal matrix, since the Clebsch-Gordan coefficient is zero unless i + j = 2ℓ. With a similar expression for Φ̌ 0 (A λ ) on the right-hand side, expanding P n (μ(q λ+1 )) = lc(P n ) q n(λ+1) 2 −n + lower-order terms gives the result.

Symmetries
Even though we can use the explicit expression of the weight W to establish Proposition 4.10, we show the occurrence of J in the commutant from Remark 4.5(ii).
Note that (5.19), after applying the Haar functional on the * -algebra generated by ψ as in Lemma 5.4, also gives J G n J = G n as is immediately clear from the explicit expression for the squared norm matrix G n in Theorem 4.8 derived in Sect. 5.3. It moreover shows that (J P n J ) n∈N is a family of matrix-valued orthogonal polynomials with respect to the weight W . It is a consequence of Corollary 4.9, see Sect. 2, that J P n J = P n , since J lc(P n )J = lc(P n ) by Corollary 4.9.
Note that we have now proved Proposition 4.10 except for the ⊃-inclusion in the first line. This is done after the proof of Theorem 4.8 is completed at the end of Sect. 6.1.
As a consequence of the discussion on symmetries, we can formulate the symmetry for Φ̌ n in Lemma 5.14. Proof This is a consequence of the initial observations in Sect. 5.5. For i, j ∈ {0, . . . , 2ℓ}, using (5.15) and σ (A λ ) = A λ , we obtain the result.

The weight and orthogonality relations for the matrix-valued polynomials
In this section, we complement the quantum group theoretic proofs of Sect. 5 of some of the statements of Sect. 4 using mainly analytic techniques. In Sect. 6.1, we prove the statement on the expansion of the entries of the weight function in terms of Chebyshev polynomials of Theorem 4.8. In Sect. 6.2, we prove the LDU-decomposition of Theorem 4.15. In Sect. 6.3, we prove Lemma 5.8 using a special case and induction using Theorem 4.6 in the induction step.

Explicit expressions of the weight
In order to prove the explicit expansion of the matrix entries of the weight W in terms of Chebyshev polynomials, we start with the expression of Corollary 5.6 for the matrix entries of the weight W . After pairing with A λ , we expand as a Laurent polynomial in q λ in Proposition 6.1. Then we can use Lemma 6.2, whose proof is presented in Appendix 2.1.
Proof We obtain the expansion from Corollary 5.6, using that A λ is a group-like element and S(A λ ) * = A −λ . By Proposition 5.2(ii), the Clebsch-Gordan coefficients are to be taken as zero in case |i − n| > ℓ − k/2, respectively | j − n| > ℓ − p/2. Now put s = j − i; then s runs from −(k + p)/2 up to (k + p)/2, and we have the Laurent expansion

Plugging in (8.3) gives the explicit expression for d s (k, p).
In order to show that d s ( p, k) = d −s ( p, k), we note that p(ψ)(A λ ) = p(μ(q λ )) is symmetric in λ ↔ −λ for any polynomial p and so Corollary 5.6 implies the symmetry.
Note that for a matrix element of P n (ψ) * W (ψ)P m (ψ) a similar expression can be given, cf. the first part of the proof of Theorem 4.8, but this is not required. The symmetry in the Laurent expansion of W (ψ) k, p (A λ ) does not seem to follow directly from known symmetries for the Clebsch-Gordan coefficients, see e.g. [20,Chap. 3].
Now we can proceed with the proof of Theorem 4.8, for which we need Lemma 6.2. Lemma 6.2 For ℓ ∈ ½N and for k, p ∈ {0, . . . , 2ℓ} subject to k ≤ p and k + p ≤ 2ℓ, we have, with the notation of Proposition 6.1 and Theorem 4.8, the stated equality. Lemma 6.2 contains the essential equality for the proof of the explicit expression of the coefficients of the matrix entries of the weight of Theorem 4.8. The proof of Lemma 6.2 can be found in Appendix 2.1, and it is based on a q-analogue of the corresponding statement for the classical case [24, Theorem 5.4]. In 2011, Mizan Rahman (private correspondence) informed one of us that he had obtained a q-analogue of the underlying summation formula for [24, Theorem 5.4]. It is remarkable that Rahman's q-analogue is different from the one needed here in Lemma 6.2.
Second part of the proof of Theorem 4.8 We prove the statement on the explicit expression of the matrix entries of the weight in terms of Chebyshev polynomials. By Corollary 5.6, we have the expansion for all λ ∈ Z. Since the coefficients α r (k, p) are completely determined by W (ψ) k, p and since W (ψ) k, p = W (ψ) 2ℓ−k,2ℓ−p = W (ψ) p,k * , we can restrict to the case k ≤ p, k + p ≤ 2ℓ. For this case, the result follows from Lemma 6.2, and hence the explicit expression for α r (k, p) in Theorem 4.8 is obtained.
Note that the proof of Theorem 4.8 is not yet complete, since we have to show that the weight is strictly positive definite for almost all x ∈ [−1, 1], see Sect. 2. This will follow from the LDU-decomposition for the weight as observed in Corollary 4.16, but in order to prove the LDU-decomposition of Theorem 4.15 we need the explicit expression for the coefficients α t (m, n) of Theorem 4.8.
In Sect. 5.5, we observed that J commutes with W (x) for all x. In order to prove Proposition 4.10, we need to show that the commutant is not larger, and for this we need the explicit expression of α t (m, n) of Theorem 4.8.
Proof of Proposition 4.10 Let Y be in the commutant, and write W (x) = Σ k=0,...,2ℓ W k U k (x) for W k ∈ Mat 2ℓ+1 (C) using Theorem 4.8. Then [Y, W k ] = 0 for all k. The proof follows closely the proofs of [24, Proposition 5.5] and [25, Proposition 2.6]. Note that W 2ℓ and W 2ℓ−1 are symmetric and persymmetric (i.e. they commute with J ). Moreover, (W 2ℓ ) m,n is non-zero only for m + n = 2ℓ, and (W 2ℓ−1 ) m,n is non-zero only for |m + n − 2ℓ| = 1. From the explicit expression of the coefficients α t (m, n), we find that all non-zero entries of W 2ℓ and W 2ℓ−1 are different apart from the symmetry and persymmetry. The proof of Proposition 4.10 can then be finished following [24, Proposition 5.5].

LDU-decomposition
In order to prove the LDU-decomposition of Theorem 4.15 for the weight, we need to prove the matrix identity termwise. So we are required to show identity (6.2) for the expression of W (x) in Theorem 4.8 and for the expressions of L(x) and T (x) in Theorem 4.15. Because of symmetry, we can assume without loss of generality that m ≥ n. Then (6.2) is equivalent to Proposition 6.3 after taking into account the coefficients in the LDU-decomposition, so it suffices to prove Proposition 6.3 in order to obtain Theorem 4.15.
Before embarking on the proof of Proposition 6.3, note that each summand on the right-hand side of the expression of Proposition 6.3 is an even, respectively odd, polynomial for m + n even, respectively odd, since the continuous q-ultraspherical polynomials are symmetric and since w(x; q^{2k+2}|q²) = 4(1 − x²) Π_{j=1}^{k} (1 − 2(2x² − 1)q^{2j} + q^{4j}), see (4.12), is an even polynomial with a factor (1 − x²). In the proof of Proposition 6.3 we use Lemma 6.4. Lemma 6.4 Let 0 ≤ k ≤ m ≤ n and t ∈ N. Then the integral under consideration is equal to zero for t > m, while for 0 ≤ t ≤ m it is given by an explicit multiple of a q-Racah polynomial. In Lemma 6.4, we use the notation (4.13) for the q-Racah polynomials. Lemma 6.4 shows that the expansion as in Proposition 6.3 is indeed valid, and it remains to determine the coefficients β_k(m, n).
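The objects entering Lemma 6.4 can be checked numerically. The sketch below generates the continuous q-ultraspherical polynomials from their standard three-term recurrence, verifies the base-q² special case C_n(x; q²|q²) = U_n(x), and checks orthogonality with respect to the weight w(x; q^{2k+2}|q²) displayed above; the values of q and k are arbitrary choices for illustration only.

```python
import numpy as np

def Cq(n, x, beta, q):
    """Continuous q-ultraspherical C_n(x; beta|q) via the three-term recurrence
    2x(1 - beta q^m) C_m = (1 - q^{m+1}) C_{m+1} + (1 - beta^2 q^{m-1}) C_{m-1}."""
    c_prev = np.ones_like(x)
    if n == 0:
        return c_prev
    c = 2 * x * (1 - beta) / (1 - q)
    for m in range(1, n):
        c_prev, c = c, (2 * x * (1 - beta * q**m) * c
                        - (1 - beta**2 * q**(m - 1)) * c_prev) / (1 - q**(m + 1))
    return c

def trap(f, t):
    """Trapezoid rule on an equidistant grid t."""
    h = t[1] - t[0]
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

q, k = 0.5, 1                       # arbitrary sample values
beta, qb = q**(2 * k + 2), q**2     # parameters of C_n(x; q^{2k+2}|q^2)
theta = np.linspace(0.0, np.pi, 4001)
x = np.cos(theta)

# base-q^2 special case: C_n(x; q^2|q^2) = U_n(x), e.g. U_3(x) = 8x^3 - 4x
print(np.allclose(Cq(3, x, qb, qb), 8 * x**3 - 4 * x))

# w(x; q^{2k+2}|q^2) = 4(1-x^2) prod_{j=1}^k (1 - 2(2x^2-1)q^{2j} + q^{4j})
w = 4 * (1 - x**2)
for j in range(1, k + 1):
    w *= 1 - 2 * (2 * x**2 - 1) * q**(2 * j) + q**(4 * j)

# orthogonality of C_1 and C_3 with respect to w dtheta, x = cos(theta)
I13 = trap(Cq(1, x, beta, qb) * Cq(3, x, beta, qb) * w, theta)
I11 = trap(Cq(1, x, beta, qb) ** 2 * w, theta)
print(abs(I13 / I11) < 1e-6)
```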
The proof of Lemma 6.4 follows the lines of the proof of [23, Lemma 2.7], see Appendix 2.2. The main ingredients are, cf. the proof in [23], the connection and linearisation coefficients for the continuous q-ultraspherical polynomials dating back to the work of L.J. Rogers (1894–5), see e.g. [2, §10.11], [19, §13.3], [15, (7.6.14), (8.5.1)]. Write the product C_{m−k}(x; q^{2k+2}|q²) C_{n−k}(x; q^{2k+2}|q²) as a sum over r of continuous q-ultraspherical polynomials C_r(x; q^{2k+2}|q²) using the linearisation formula, and write U_{n+m−2t}, which is the continuous q-ultraspherical polynomial C_{n+m−2t}(x; q²|q²), in terms of the C_s(x; q^{2k+2}|q²) using the connection formula. The orthogonality relations for the continuous q-ultraspherical polynomials then give the integral in terms of a single series. The details are in Appendix 2.2. From this sketch of proof, it is immediately clear that Lemma 6.4 can be generalised to a more general statement. This is the content of Remark 6.5, whose proof is left to the reader. Remark 6.5 For integers 0 ≤ t, k ≤ m ≤ n and parameters α, β, we have the corresponding evaluation. Note that the 4φ3-series in Remark 6.5 is balanced, but in general it is not a q-Racah polynomial.
In the proof of Proposition 6.3, and hence of Theorem 4.15, we need the summation formula involving q-Racah polynomials stated in Lemma 6.6. Its proof is also given in Appendix 2.2.
With these preparations, we can prove Proposition 6.3, and hence the LDU-decomposition of Theorem 4.15.
Proof of Proposition 6.3 Since we have the existence of the expansion in Proposition 6.3, it suffices to calculate β_k(m, n) given the explicit value of the α_t(m, n)'s from Theorem 4.8. Multiply both sides by (2π)^{−1} √(1 − x²) U_{m+n−2t}(x) and integrate using the orthogonality for the Chebyshev polynomials, so that Lemma 6.4 evaluates the resulting integrals. The q-Racah polynomials R_k(μ(t); 1, 1, q^{−m−1}, q^{−n−1}; q²) satisfy orthogonality relations. Using these orthogonality relations, we find the explicit expression of β_k(m, n) in terms of the α_t(m, n), and using the explicit expression of the α_t(m, n) of Theorem 4.8 gives a single sum. This expression is summable by Lemma 6.6. Collecting the coefficients proves the proposition.
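For reference, q-Racah polynomials can be evaluated directly from their terminating 4φ3 series. The sketch below uses the standard normalisation of [26], which may differ from the paper's notation (4.13); the parameter values are arbitrary. It checks that R_n(μ(0)) = 1 for all n (only the j = 0 term of the series survives at x = 0) and that R_1 is an affine function of μ(x).

```python
import numpy as np

def qpoch(a, q, k):
    """q-shifted factorial (a; q)_k."""
    out = 1.0
    for i in range(k):
        out *= 1 - a * q**i
    return out

def q_racah(n, x, a, b, c, d, q):
    """R_n(mu(x); a, b, c, d | q) as a terminating 4phi3 series,
    with mu(x) = q^{-x} + c d q^{x+1} (standard normalisation)."""
    s = 0.0
    for j in range(n + 1):
        num = (qpoch(q**-n, q, j) * qpoch(a * b * q**(n + 1), q, j)
               * qpoch(q**-x, q, j) * qpoch(c * d * q**(x + 1), q, j))
        den = (qpoch(a * q, q, j) * qpoch(b * d * q, q, j)
               * qpoch(c * q, q, j) * qpoch(q, q, j))
        s += num / den * q**j
    return s

q, N = 0.5, 4
a, b, d = 0.3, 0.4, 0.7       # arbitrary sample parameters
c = q**(-N - 1)               # termination: c q = q^{-N}
mu = lambda x: q**-x + c * d * q**(x + 1)

# R_n(mu(0)) = 1 for every n
print([q_racah(n, 0, a, b, c, d, q) for n in range(N + 1)])  # [1.0, 1.0, 1.0, 1.0, 1.0]

# R_1 is affine in mu(x): all difference quotients agree
slopes = [(q_racah(1, x, a, b, c, d, q) - 1) / (mu(x) - mu(0)) for x in range(1, N + 1)]
print(np.allclose(slopes, slopes[0]))
```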
Last part of the proof of Theorem 4.8 Now that we have proved Theorem 4.15, Corollary 4.16 is immediate, since the coefficients of the diagonal matrix T (x) are positive on (−1, 1). So the weight is strictly positive definite on (−1, 1), which is the last step to be taken in the proof of Theorem 4.8.

Summation formula for Clebsch-Gordan coefficients
In this subsection, we prove Lemma 5.8, which has been used in the first part of the proof of Theorem 4.8, see Sect. 5.3. The proof of Lemma 5.8 is somewhat involved, since we employ an indirect way using induction and Theorem 4.6.
Proof of Lemma 5.8 Assume for the moment that (6.3) holds; the lemma then follows. It remains to prove (6.3). We do this in the case ℓ_1 + ℓ_2 = ℓ. Put (ℓ_1, ℓ_2) = (k/2, ℓ − k/2) = ξ(0, k) for k ∈ N and k ≤ 2ℓ. Using the explicit expression for the Clebsch–Gordan coefficients (8.3), we find that in this special case the left-hand side of (6.3) equals a single sum, where the sum runs over i with −ℓ + k/2 + m ≤ i ≤ ℓ − k/2 + m. Substitute i → p − ℓ + k/2 + m to rewrite this sum. Simplifying the expression, we find the required identity, since the 2φ1-series can be summed by the reversed q-Chu–Vandermonde sum [15, (II.7)].
To prove Lemma 5.8 in general, we introduce the auxiliary function f_{ℓ_1,ℓ_2}; it is then sufficient to calculate f_{ℓ_1,ℓ_2}(−2). We will show by induction on n that f_{ξ(n,k)}(−2) is independent of (n, k), or equivalently that f_{ℓ_1,ℓ_2}(−2) is independent of (ℓ_1, ℓ_2). Since we have established the case n = 0, Lemma 5.8 then follows.
In order to perform the induction step, we consider the recursion of Theorem 4.6.
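The q-Chu–Vandermonde summation invoked in the proof above can be verified numerically. The sketch below checks the equivalent terminating form ₂φ₁(q^{−n}, b; c; q, q) = b^n (c/b; q)_n / (c; q)_n from [15, Appendix II], for arbitrary sample parameters:

```python
def qpoch(a, q, k):
    """q-shifted factorial (a; q)_k."""
    out = 1.0
    for i in range(k):
        out *= 1 - a * q**i
    return out

def phi21(b, c, q, n):
    """Terminating 2phi1(q^{-n}, b; c; q, q)."""
    return sum(qpoch(q**-n, q, j) * qpoch(b, q, j)
               / (qpoch(c, q, j) * qpoch(q, q, j)) * q**j
               for j in range(n + 1))

q, b, c = 0.3, 0.25, 0.6  # arbitrary sample parameters
for n in range(6):
    lhs = phi21(b, c, q, n)
    rhs = b**n * qpoch(c / b, q, n) / qpoch(c, q, n)
    assert abs(lhs - rhs) < 1e-10
print("q-Chu-Vandermonde verified")
```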

q-Difference operators for the matrix-valued polynomials
We continue the study of the q-difference operators for the matrix-valued orthogonal polynomials started in Corollary 5.13. In particular, we show that the assumption on the invertibility of Φ_0 follows from the LDU-decomposition in Theorem 4.15. Then we make the coefficients in the matrix-valued second-order q-difference operators of Corollary 5.13 explicit in Sect. 7.1. Comparing with the scalar-valued Askey–Wilson q-difference operators, see e.g. [2,15,19,26], we view the q-difference operator as a matrix-valued analogue of the Askey–Wilson q-difference operator. Next, having the matrix-valued orthogonal polynomials as eigenfunctions of the matrix-valued Askey–Wilson q-difference operator, we use this to obtain an explicit expression of the matrix entries of the matrix-valued orthogonal polynomials in terms of scalar-valued orthogonal polynomials from the q-Askey scheme, by decoupling the q-difference operator using the matrix-valued polynomial L in the LDU-decomposition of Theorem 4.15.
From this expression, we can obtain an explicit expression for the coefficients in the matrix-valued three-term recurrence relation for the matrix-valued orthogonal polynomials, hence proving Theorem 4.11 and Corollary 4.12.
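For orientation, in the scalar case the Askey–Wilson q-difference operator acts on Laurent polynomials in z, with x = μ(z) = (z + z^{−1})/2, by (Lf)(z) = A(z)(f(qz) − f(z)) + A(z^{−1})(f(q^{−1}z) − f(z)), where A(z) = (1−az)(1−bz)(1−cz)(1−dz)/((1−z²)(1−qz²)); the Askey–Wilson polynomial p_n is an eigenfunction with eigenvalue (q^{−n} − 1)(1 − abcd q^{n−1}). The sketch below verifies this for the Chebyshev case U_n(x) = C_n(x; q|q), i.e. (a, b, c, d) = (q^{1/2}, q, −q^{1/2}, −q); the value of q is an arbitrary choice.

```python
q = 0.5                                    # arbitrary sample value
a, b, c, d = q**0.5, q, -q**0.5, -q        # parameters for C_n(x; q|q) = U_n(x)

def A(z):
    return ((1 - a*z) * (1 - b*z) * (1 - c*z) * (1 - d*z)
            / ((1 - z**2) * (1 - q*z**2)))

def aw_op(f, z):
    """Scalar Askey-Wilson q-difference operator on a function of z, x = (z+1/z)/2."""
    return A(z) * (f(q*z) - f(z)) + A(1/z) * (f(z/q) - f(z))

def U(n, z):
    """Chebyshev polynomial U_n at x = (z + 1/z)/2, via its recurrence."""
    x = (z + 1/z) / 2
    u0, u1 = 1.0, 2*x
    for _ in range(n):
        u0, u1 = u1, 2*x*u1 - u0
    return u0

# eigenvalue (q^{-n} - 1)(1 - abcd q^{n-1}) with abcd = q^3
for n in (1, 2, 3):
    lam = (q**-n - 1) * (1 - q**(n + 2))
    for z in (1.3, 2.7):
        assert abs(aw_op(lambda w: U(n, w), z) - lam * U(n, z)) < 1e-9
print("Askey-Wilson eigenvalue equation verified")
```

The matrix-valued operators of Theorem 4.13 reduce to this scalar picture entrywise once the conjugation by L^t of Sect. 7.2 decouples them.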

Explicit expressions of the q-difference operators
In order to make Corollary 5.13 explicit, we need to study the invertibility of the matrix Φ_0(A_λ). For this, we first use Theorem 5.5 and its Corollary 5.6, and in particular (6.1).
Proof of Theorem 4.13 Corollary 5.13 gives a second-order q-difference equation for the matrix-valued orthogonal polynomials for an infinite set of λ. So it suffices to check that for i = 1, 2 the matrix-valued functions M̃_i, Ñ_i of Corollary 5.13 coincide with the corresponding coefficients, which is an easy check using the explicit expressions of Theorem 4.13. Now (7.2) and (5.16) for n = 0 show that either equation of (7.1) implies the other. Indeed, assuming the second equation of (7.1) holds, the first equation of (7.1) follows. By Proposition 5.2(ii) and (5.15), the matrix entries of Φ_0(A_λ) are Laurent series in q^λ. Setting z = q^λ, we see that in order to verify (7.1) entry-wise, we need to check equalities for Laurent series in z.
We first consider the second equality of (7.1) for i = 1. In this case, the matrices Ñ_1 and N_1 are band-limited. Hence, the (m, n)th entry of both sides of (7.1) involves either one or two terms, so we need to check (7.3). The proof of (7.3) involves the explicit expression of the spherical functions in terms of Clebsch–Gordan coefficients using Proposition 5.2(ii). It is given in Appendix 2.3. The statements for the second q-difference equation, with i = 2, follow from the symmetries of Proposition 5.11 and Lemma 5.14.
The explicit expressions were initially obtained by computer algebra; the proof as presented here and in Appendix 2.3 was obtained afterwards.

Explicit expressions for the matrix entries of the matrix-valued orthogonal polynomials
Having established the q-difference equations for the matrix-valued orthogonal polynomials of Theorem 4.13, and having the diagonal part of the LDU-decomposition of the weight in terms of weight functions for the continuous q-ultraspherical polynomials in Theorem 4.15, it is natural to look at the q-difference operators conjugated by the polynomial function L^t. It turns out that this completely decouples one of the second-order q-difference operators of Theorem 4.13. This gives the opportunity to link the matrix entries of the matrix-valued orthogonal polynomials to continuous q-ultraspherical polynomials. In order to determine the coefficients, we use the other q-difference operator and the orthogonality relations. Having such an explicit expression, we can determine the three-term recurrence relation for the monic matrix-valued orthogonal polynomials straightforwardly, and hence also for the matrix-valued orthogonal polynomials P_n, since we have already determined the leading coefficient in Corollary 4.9. The first step is to conjugate the second-order q-difference operator D_1 of Theorem 4.13 with the matrix L^t of the LDU-decomposition of Theorem 4.15 into a diagonal q-difference operator. This conjugation is inspired by the result of [23, Theorem 6.1]. The same conjugation takes D_2 into a three-diagonal q-difference operator. For any n ∈ N, let R_n(x) = L^t(x) Q_n(x), where Q_n(x) = P_n(x) lc(P_n)^{−1} denotes the corresponding monic polynomial. Note that we have determined the leading coefficient lc(P_n) in Corollary 4.9. Then (R_n)_{n≥0} forms a family of matrix-valued polynomials, but note that the degree of R_n is larger than n, and that the leading coefficient of R_n is singular.
Note that the R_n satisfy corresponding orthogonality relations. Proof We start by observing that the monic matrix-valued orthogonal polynomials Q_n are eigenfunctions of the second-order q-difference operators D_i of Theorem 4.13 for the eigenvalue lc(P_n) Λ_n(i) lc(P_n)^{−1} = Λ_n(i), since the matrices are diagonal and thus commute. By conjugation, we find the q-difference equations satisfied by the R_n, using the notation L̃^t(z) = L^t(μ(z)), etc., with x = μ(z) = (z + z^{−1})/2 as before. It remains to calculate the K_i(z) explicitly. We show in Appendix 2.4 that the expressions for the K_i are correct by verifying the required identities for i = 1, 2.
Lemma 7.2 For n ∈ N and 0 ≤ i, j ≤ 2ℓ, we have an explicit expression for the entries of R_n, where C_n(x; β|q) are the continuous q-ultraspherical polynomials (4.10) and β_n(i, j) is a constant depending on i, j and n.
Proof Evaluate D_1 R_n(x) = R_n(x) Λ_n(1) in entry (i, j). Since D_1 is decoupled, we get a q-difference equation for the polynomial (R_n)_{i,j}, which can be simplified. All polynomial solutions of this q-difference equation are multiples of the Askey–Wilson polynomial p_{n+j−i}(x; q^{i+1}, −q^{i+1}, q^{1/2}, −q^{1/2} | q). Our next objective is to determine the coefficients β_n(i, j) of Lemma 7.2. Having exploited that the matrix-valued polynomials R_n are eigenfunctions for the decoupled operator D_1 of Theorem 7.1, we can use Lemma 7.2 in (7.4) to calculate the (i, j)th coefficient of (7.4), using that lc(P_m) and the squared norm matrix G_m are diagonal matrices, see Corollary 4.9 and Theorem 4.8. The integral in (7.6) can be evaluated by (4.11). In particular, the case m + j = n + i of (7.6) gives the explicit orthogonality relations (7.7). Theorem 7.3 We have an explicit expression for β_n(i, j). Proof From Theorem 7.1, we have (7.8). Evaluate (7.8) in entry (i, j) and use Lemma 7.2 to find a three-term recurrence relation in i for β_n(i, j). Multiply by (1 − z²)(1 − q²)² and evaluate the Laurent expansion at the leading coefficient in z of degree n + j − i + 3. The leading coefficient in z of the continuous q-ultraspherical polynomial C̃_n(αz; β|q) is ((β; q)_n / (q; q)_n) α^n. After a straightforward computation, this leads to a three-term recurrence relation for the β_n(i, j). This recursion relation can be rewritten as the three-term recurrence relation for the q-Racah polynomials after rescaling, so that β_n(i, j) is a multiple of a q-Racah polynomial, for some constant γ_n(j) independent of i. Plugging this expression into (7.7) for i = j gives |γ_n(j)|² by comparison with the explicit orthogonality relations for the q-Racah polynomials, and since the explicit expression of L_{j,i} shows that its leading coefficient (of degree j − i) is positive, we see that the leading coefficient (of degree n + j − i) of (R_n)_{i,j} in case j ≥ i is positive. Since γ_n(j) is independent of i, we take i = 0, which shows that γ_n(j) is positive.
Proof of Theorem 4.17 Using Theorem 7.3 with the explicit inverse of L(x) as given in Theorem 4.15 gives an explicit expression for the matrix entries of Q_n(x) = (L(x)^{−1})^t R_n(x). Then we obtain the matrix entries of P_n(x) = Q_n(x) lc(P_n) from this expression and Corollary 4.9, stating that the leading coefficient is a diagonal matrix.

Three-term recursion relation
The matrix-valued orthogonal polynomials satisfy a three-term recurrence relation, see Sect. 2. Moreover, Theorem 4.6 shows that the three-term recurrence relation can in principle be obtained from the tensor product decomposition. However, in that case we obtain the coefficients of the matrices in the three-term recurrence relation in terms of sums of squares of Clebsch–Gordan coefficients, and this leads to a cumbersome result. In order to obtain an explicit expression for the three-term recurrence relation as in Theorem 4.11 and Corollary 4.12, we use the explicit expression obtained in Theorem 4.17 and Lemma 7.4, which is [23, Lemma 5.1]. Lemma 7.4 is only used to determine X_n. Lemma 7.4 Let (Q_n)_{n≥0} be a sequence of monic (matrix-valued) orthogonal polynomials, and write Q_n(x) = Σ_{k=0}^{n} Q^n_k x^k, where Q^n_k ∈ Mat_N(C). The sequence (Q_n)_{n≥0} satisfies the corresponding three-term recurrence relation. So we start by calculating the one-but-leading term in the monic matrix-valued orthogonal polynomials. The expression (7.9) shows that deg(Q_n(x)_{i,j}) = n + j − i. So in case j − i ≤ 0, we can only have a contribution to Q^n_{n−1} in case i − j = 0 or i − j = 1. The first case does not give a contribution, since (7.9) is even or odd according as n + j − i is even or odd. So we only have to calculate the leading coefficient in (7.9) for i − j = 1. With the explicit value of β_n(i, j) as in Sect. 7.2, or Theorem 4.17 and Corollary 4.19, we see that Q^n_{n−1} in case i − j = 1 gives the required expression for (Q^n_{n−1})_{j,j−1}. On the other hand, by Proposition 4.10 and since J lc(P_n) J = lc(P_n) by Corollary 4.9, it follows that J Q_n(x) J = Q_n(x). Therefore, we find the symmetry Q_n(x)_{i,j} = Q_n(x)_{2ℓ−i,2ℓ−j} of the entries of the monic matrix-valued polynomials, so that the case j − i ≥ 0 can be reduced to the previous case, and we get (Q^n_{n−1})_{j,j+1} = (Q^n_{n−1})_{2ℓ−j,2ℓ−j−1}.
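The role of the one-but-leading coefficients can be illustrated numerically. Assuming the standard conventions for monic matrix-valued orthogonal polynomials, with inner product ⟨P, Q⟩ = ∫ P(x) W(x) Q(x)^t dx and recurrence x Q_n(x) = Q_{n+1}(x) + X_n Q_n(x) + Y_n Q_{n−1}(x), comparing coefficients of x^n gives X_n = Q^n_{n−1} − Q^{n+1}_n. The sketch below checks this identity for a hypothetical 2×2 weight on [−1, 1] (not the weight of Theorem 4.8), with monic polynomials built by Gram–Schmidt:

```python
import numpy as np

# hypothetical 2x2 matrix weight on [-1, 1]; NOT the weight of Theorem 4.8
xs = np.linspace(-1, 1, 801)
def Wmat(x):
    return np.array([[1.0, 0.4 * x], [0.4 * x, 1.0]])

D = 5  # coefficient slots for x^0 .. x^4

def peval(P, x):
    return sum(P[k] * x**k for k in range(D))

def ip(P, Q):
    """<P, Q> = int P(x) W(x) Q(x)^t dx, by the trapezoid rule."""
    h = xs[1] - xs[0]
    vals = np.array([peval(P, x) @ Wmat(x) @ peval(Q, x).T for x in xs])
    return h * (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1]))

# monic matrix-valued orthogonal polynomials via Gram-Schmidt on x^n I
Qs = []
for n in range(4):
    P = np.zeros((D, 2, 2))
    P[n] = np.eye(2)
    for Qk in Qs:
        C = ip(P, Qk) @ np.linalg.inv(ip(Qk, Qk))  # left Fourier coefficient
        P = P - C @ Qk
    Qs.append(P)

n = 2
xQ = np.roll(Qs[n], 1, axis=0)                    # multiplication of Q_n by x
X = ip(xQ, Qs[n]) @ np.linalg.inv(ip(Qs[n], Qs[n]))
# X_n equals the difference of one-but-leading coefficients
print(np.allclose(X, Qs[n][n - 1] - Qs[n + 1][n]))
```

Since the constructed polynomials are exactly orthogonal for the discretised inner product, the identity holds to machine precision regardless of the quadrature accuracy.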
Proof of Theorem 4.11 The explicit expression for X_n follows from Lemma 7.4 and Lemma 7.5. By Theorem 4.8 and Corollary 4.9, we have the orthogonality relations, since the matrices involved are diagonal and self-adjoint, hence pairwise commute. By the discussion in Sect. 2, we have Y_n = G^{−1}_{n−1} lc(P_{n−1})² G_n lc(P_n)^{−2}, and a straightforward calculation gives the required expression. This gives C^{ℓ_1,ℓ_2,ℓ}_{n_1,n_2,n} = C_q(ℓ_1, ℓ_2, ℓ; n_1, n_2, n), where the right-hand side is the notation for the Clebsch–Gordan coefficient as in [20, §3.4.2, (41)]. The Clebsch–Gordan coefficients are explicitly known in terms of terminating basic hypergeometric orthogonal polynomials, the so-called q-Hahn polynomials [26], see [20, §3.4].

Appendix 2: Proofs involving only basic hypergeometric series
In Appendix 2, we collect the proofs of various intermediate results only involving basic hypergeometric series. For these proofs, we use the results of Gasper and Rahman [15]. In particular, we follow the standard notation of [15], and recall 0 < q < 1.

Proofs of lemmas for Theorem 4.8
Here we present the details of the proof of Lemma 6.2, which is used in order to prove the explicit expression of the matrix entries of the weight in terms of Chebyshev polynomials in Theorem 4.8. We start with two intermediate results needed in the proof. Lemma 7.6 can be viewed as a q-analogue of Sheppard's result [2,Corollary 3.3.4].
The inner sum over i is a 3φ2-series, and after some rewriting, it can be transformed using Lemma 7.6. We observe that the evaluation of the t-order q-difference operator yields only one non-zero term in the sum over i, namely the one corresponding to i = p − s − t. After simplifications and reversing the order of summation using T = p − s − t, the proposition follows. Suppose that p − k ≤ 0, k + p ≤ 2ℓ and s ≥ (k − p)/2. Using Proposition 6.1, we find the corresponding expression. By taking e_s(k, p) = d_{s+(k−p)/2}, Proposition 7.8 applies. Comparing (9.6) with the explicit expression of α_i(k, p) yields the statement of the lemma.
The proof of Lemma 6.4 follows the lines of the proof of [23, Lemma 2.7] closely.
Proof of Lemma 6.4 In order to evaluate the integral of Lemma 6.4, we observe that the weight function in the integral is the weight function (4.11) for the continuous q-ultraspherical polynomials (in base q²) with β = q^{2k+2}. Rewrite the product of the two continuous q-ultraspherical polynomials (in base q²) with β = q^{2k+2} as a sum over i of C_{m+n−2k−2i}(x; q^{2k+2}|q²) using (9.8). Since the Chebyshev polynomials can be viewed as continuous q-ultraspherical polynomials, namely U_{m+n−2t}(x) = C_{m+n−2t}(x; q²|q²) in base q², we can use (9.7) to write the Chebyshev polynomial as a sum of continuous q-ultraspherical polynomials with β = q^{2k+2}. Plugging in the two sums and using the orthogonality relations for the continuous q-ultraspherical polynomials then gives the result. By a long but straightforward calculation, we can show that the coefficient of z^{n−m−2k}, k ∈ {−1, 0, . . . , n − m + 1}, equals zero. This proves the case i = 2.