1 Introduction

Shortly after the introduction of quantum groups, it was realised that many special functions of basic hypergeometric type [15] have a natural relation to quantum groups, see e.g. [9, Chap. 6], [20, 31] for references. In particular, many orthogonal polynomials in the q-analogue of the Askey scheme, see e.g. [26], have found an interpretation on compact quantum groups analogous to the interpretation of orthogonal polynomials of hypergeometric type from the Askey scheme on compact Lie groups and related structures, see e.g. [46, 47].

In the case of harmonic analysis on classical Gelfand pairs, one studies spherical functions and related Fourier transforms, see [43]. For our purposes, a Gelfand pair consists of a Lie group G and a compact subgroup K so that the trivial representation of K occurs with multiplicity at most one in the decomposition of any irreducible representation of G restricted to K. The spherical functions are functions on G which are left- and right-K-invariant. The zonal spherical functions are realised as matrix elements of irreducible G-representations with respect to a K-fixed vector. For special cases, the zonal spherical functions can be identified with explicit special functions of hypergeometric type, see [43, Chap. 9], [12, §IV]. The zonal spherical functions are eigenfunctions of an algebra of differential operators, which includes the differential operator arising from the Casimir operator in case G is a reductive group. For special cases with G compact, we obtain orthogonality relations and differential operators for the spherical functions, which can be identified with orthogonal polynomials from the Askey scheme. For the special case \(G=\mathrm{SU}(2)\times \mathrm{SU}(2)\) with \(K\cong \mathrm{SU}(2)\) embedded as the diagonal subgroup, the zonal spherical functions are the characters of \(\mathrm{SU}(2)\), which are identified with the Chebyshev polynomials \(U_n\) of the second kind by the Weyl character formula. The Gelfand pair situation has been generalised to the setting of quantum groups, mainly in the compact context, see e.g. Andruskiewitsch and Natale [3] for the case of a finite dimensional Hopf algebra with a Hopf subalgebra, Floris [13], Koornwinder [32], Vainerman [45] for more general compact quantum groups, and, for a non-compact example, Caspers [7].

The notions of matrix-valued and vector-valued spherical functions have already emerged at the beginning of the development of the theory of spherical functions, see e.g. [14] and references given there. However, the focus on the relation with matrix-valued or vector-valued special functions only came later, see e.g. references given in [18, 44]. Grünbaum et al. [17] give a group theoretic approach to matrix-valued orthogonal polynomials emphasising the role of the matrix-valued differential operators, which are manipulated in great detail. The paper [17] deals with the case \((G,K) = (\mathrm{SU}(3), \mathrm{U}(2))\). Motivated by [17] and the approach of Koornwinder [29], the group theoretic interpretation of matrix-valued orthogonal polynomials on \((G,K)=(\mathrm{SU}(2)\times \mathrm{SU}(2),\mathrm{SU}(2))\) is studied from a different point of view, in particular with less manipulation of the matrix-valued differential operators, in [23, 24], see also [18, 44]. The point of view is to construct the matrix-valued orthogonal polynomials using matrix-valued spherical functions, and next using this group theoretic interpretation to obtain properties of the matrix-valued orthogonal polynomials. This approach for the case \((G,K)=(\mathrm{SU}(2)\times \mathrm{SU}(2),\mathrm{SU}(2))\) leads to matrix-valued orthogonal polynomials for arbitrary size, which can be considered as analogues of the Chebyshev polynomials of the second kind. A combination of the group theoretic approach and analytic considerations then allows us to understand these matrix-valued orthogonal polynomials completely, i.e. we have explicit orthogonality relations, three-term recurrence relations, matrix-valued differential operators having the matrix-valued orthogonal polynomials as eigenfunctions, expression in terms of Tirao’s [41] matrix-valued hypergeometric functions, expression in terms of well-known scalar-valued orthogonal polynomials from the Askey scheme, etc. This has been analytically extended to arbitrary size matrix-valued orthogonal Gegenbauer polynomials [25], see also [39] for related \(2\times 2\) cases.

The interpretation on quantum groups and related structures leads to many new results for special functions of basic hypergeometric type. In this paper, we use quantum groups in order to obtain matrix-valued orthogonal polynomials as analogues of a subclass of the Askey–Wilson polynomials. In particular, we consider the Chebyshev polynomials of the second kind, recalled in (5.6), as a special case of the Askey–Wilson polynomials [4, (2.18)]. Moreover, we know that the Chebyshev polynomials occur as characters on the quantum \(\mathrm{SU}(2)\) group, see [48, §A.1]. The approach in this paper is to establish the quantum analogue of the group theoretic approach as presented in [23, 24], see also [18, 44], for the example of the Gelfand pair \(G=\mathrm{SU}(2)\times \mathrm{SU}(2)\) with \(K\cong \mathrm{SU}(2)\). For this approach, we need Letzter's approach [34–36] to quantum symmetric spaces using coideal subalgebras. We stick to the conventions as in Kolb [27] and we refer to [28, §1] for a broader perspective on quantum symmetric pairs. So we work with the quantised universal enveloping algebra \({\mathcal {U}}_{q}(\mathfrak {g})={\mathcal {U}}_{q}(\mathfrak {su}(2)\oplus \mathfrak {su}(2))\), introduced in Sect. 3, equipped with a right coideal subalgebra \({\mathcal {B}}\), see Sect. 4. Once we have this setting established, the branching rules of the representations of \({\mathcal {U}}_{q}(\mathfrak {g})\) restricted to \({\mathcal {B}}\) follow by identifying \({\mathcal {B}}\) with the image of \({\mathcal {U}}_{q}(\mathfrak {su}(2))\) (up to an isomorphism) under the comultiplication using the standard Clebsch–Gordan decomposition. In particular, it gives explicit intertwiners. Next we introduce matrix-valued spherical functions in Sect. 4. Using the matrix-valued spherical functions, we introduce the matrix-valued orthogonal polynomials. Then we use a mix of quantum group theoretic and analytic approaches to study these matrix-valued orthogonal polynomials. So we find the orthogonality for the matrix-valued orthogonal polynomials from the Schur orthogonality relations, the three-term recurrence relation follows from tensor product decompositions of \({\mathcal {U}}_{q}(\mathfrak {g})\)-representations, and the matrix-valued q-difference operators for which these matrix-valued orthogonal polynomials are eigenvectors follow from the study of the Casimir elements in \({\mathcal {U}}_{q}(\mathfrak {g})\). More analytic properties follow from the LDU-decomposition of the matrix-valued weight function, and this allows us to decouple the matrix-valued q-difference operators involved. The decoupling gives the possibility to link the entries of the matrix-valued orthogonal polynomials with (scalar-valued) orthogonal polynomials from the q-analogue of the Askey scheme, in particular the continuous q-ultraspherical polynomials and the q-Racah polynomials. The approach of [17] does not seem to work in the quantum case, because the possibilities to transform q-difference equations are very limited compared to transforming differential equations. We note that in [3, §5] matrix-valued spherical functions are considered for finite dimensional Hopf algebras with respect to a Hopf subalgebra.

The approach to matrix-valued orthogonal polynomials from this quantum group setting also leads to identities in the quantised function algebra. This paper does not include the resulting identities after using infinite dimensional representations of the quantised function algebra. Furthermore, we have not supplied a proof of Lemma 5.4 using infinite dimensional representations and the direct integral decomposition of the Haar functional, but this should be possible as well.

In general, the notion of a quantum symmetric pair seems to be best-suited for the development of harmonic analysis in general and in particular of matrix-valued spherical functions on quantum groups, see e.g. [28, 34–38] and references given there. When considering other quantum symmetric pairs in relation to matrix-valued spherical functions, the branching rule of a representation of the quantised universal enveloping algebra to a coideal subalgebra seems to be a difficult problem. In this paper, it is reduced to the Clebsch–Gordan decomposition, and there is a nice result by Oblomkov and Stokman [38, Proposition 1.15] on a special case of the branching rule for quantum symmetric pairs of type AIII, but in general the lack of branching rules for quantum symmetric pairs is an obstacle for the study of quantum analogues of matrix-valued spherical functions of e.g. [17, 18, 38, 44].

The matrix-valued orthogonal polynomials resulting from the study in this paper are matrix-valued analogues of the Chebyshev polynomials of the second kind viewed as an example of the Askey–Wilson polynomials. We expect that it is possible to obtain matrix-valued analogues of the continuous q-ultraspherical polynomials viewed as a subfamily of the Askey–Wilson polynomials using the approach of [25] with the Askey–Wilson q-derivative in place of the ordinary derivative. We have not explicitly worked out the limit transition \(q\uparrow 1\) of the results, but by the set-up it is clear that the formal limit gives back many of the results of [23, 24].

The contents of the paper are as follows. In Sect. 2, we fix notation regarding matrix-valued orthogonal polynomials. In Sect. 3, the notation for quantised universal enveloping algebras is recalled. Section 4 states all the main results of this paper. It introduces the quantum symmetric pair explicitly. Using the representations of the quantised universal enveloping algebra and the coideal subalgebra, the matrix-valued polynomials are introduced. We continue to give explicit information on the orthogonality relations, three-term recurrence relations, q-difference operators, the commutant of the weight, the LDU-decomposition of the weight, the decoupling of the q-difference equations and the link to scalar-valued orthogonal polynomials from the q-Askey scheme. The proofs of the statements of Sect. 4 occupy the rest of the paper. In Sect. 5, the main properties derivable from the quantum group set-up are derived, and we discuss in Appendix 1 the precise relation of the branching rule for this quantum symmetric pair and the standard Clebsch–Gordan decomposition. In Sect. 6, we continue the study of the orthogonality relations, in which we make the weight explicit. This requires several identities involving basic hypergeometric series, whose proofs we relegate to Appendix 2. Section 7 studies the consequences of the explicit form of the matrix-valued q-difference operators of Askey–Wilson type to which the matrix-valued orthogonal polynomials are eigenfunctions.

In preparing this paper, we have used computer algebra to verify the statements up to a certain matrix size and polynomial degree, in order to eliminate errors and typos. Note, however, that all proofs are direct and do not use computer algebra. A computer algebra package used for this purpose can be found on the homepage of the second author.

The conventions follow Kolb [27] for quantised universal enveloping algebras and right coideal subalgebras, and Gasper and Rahman [15] for basic hypergeometric series; throughout we assume \(0<q<1\).

2 Matrix-valued orthogonal polynomials

In this section, we fix notation and give a short background to matrix-valued orthogonal polynomials, which were originally introduced by Krein in the forties, see e.g. references in [5, 10]. General references for this section are [5, 10, 16], and references given there.

Assume that we have a matrix-valued function \(W:[a,b] \rightarrow M_{2\ell +1}({\mathbb {C}})\), \(2\ell +1\in {\mathbb {N}}\), \(a<b\), so that \(W(x)>0\) for \(x\in [a,b]\) almost everywhere. We use the notation \(A>0\) to denote a strictly positive definite matrix. Moreover, we assume that all moments exist, where integration of a matrix-valued function means that each matrix entry is separately integrated. In particular, the integrals are matrices in \(M_{2\ell +1}({\mathbb {C}})\). It then follows that for matrix-valued polynomials \(P,Q \in M_{2\ell +1}({\mathbb {C}})[x]\) the integral

$$\begin{aligned} \langle P, Q \rangle \, =\, \int _a^b P(x)^*\, W(x)\, Q(x)\, \mathrm{d}x \in M_{2\ell +1}({\mathbb {C}}) \end{aligned}$$
(2.1)

exists. This gives a matrix-valued inner product on the space \(M_{2\ell +1}({\mathbb {C}})[x]\) of matrix-valued polynomials, satisfying

$$\begin{aligned} \langle PA, QB \rangle = A^*\, \langle P, Q \rangle \, B, \quad \langle P+Q, R \rangle = \langle P, R \rangle + \langle Q, R \rangle , \quad \langle P, Q \rangle ^*= \langle Q, P \rangle , \quad \langle P, P \rangle \ge 0, \end{aligned}$$

with \(\langle P, P \rangle = 0\) if and only if \(P=0\), for all \(P,Q, R \in M_{2\ell +1}({\mathbb {C}})[x]\) and \(A,B \in M_{2\ell +1}({\mathbb {C}})\). More general matrix-valued measures can be considered [5, 10], but for this paper the above set-up suffices.

A matrix-valued polynomial \(P(x) = \sum _{r=0}^n x^r P^r\), \(P^r\in M_{2\ell +1}({\mathbb {C}})\) is of degree n if the leading coefficient \(P^n\) is non-zero. Given a weight W, there exists a family of matrix-valued polynomials \((P_n)_{n\in {\mathbb {N}}}\) so that \(P_n\) is a matrix-valued polynomial of degree n and

$$\begin{aligned} \int _a^b \bigl (P_n(x)\bigr )^*\, W(x)\, P_m(x)\, \mathrm{d}x\, =\, \delta _{n,m} G_n, \end{aligned}$$
(2.2)

where \(G_n>0\). Moreover, the leading coefficient of \(P_n\) is non-singular. Any other family of polynomials \((Q_n)_{n\in {\mathbb {N}}}\) so that \(Q_n\) is a matrix-valued polynomial of degree n and \(\langle Q_n, Q_m\rangle =0\) for \(n\not = m\) satisfies \(P_n(x)=Q_n(x)E_n\) for some non-singular \(E_n\in M_{2\ell +1}({\mathbb {C}})\) for all \(n\in {\mathbb {N}}\). We call the matrix-valued polynomial \(P_n\) monic in case the leading coefficient is the identity matrix I. The polynomials \(P_n\) are called orthonormal in case the squared norm \(G_n = I\) for all \(n\in {\mathbb {N}}\) in the orthogonality relations (2.2).

The matrix-valued orthogonal polynomials \(P_n\) always satisfy a matrix-valued three-term recurrence of the form

$$\begin{aligned} xP_n(x) = P_{n+1}(x)A_n + P_n(x) B_n + P_{n-1}(x) C_n \end{aligned}$$
(2.3)

for matrices \(A_n,B_n,C_n\in M_{2\ell +1}({\mathbb {C}})\) for all \(n\in {\mathbb {N}}\). Note that in particular \(A_n\) is non-singular for all n. Conversely, assuming \(P_{-1}(x)=0\) (by convention) and fixing the constant polynomial \(P_0(x)\in M_{2\ell +1}({\mathbb {C}})\) we can generate the polynomials \(P_n\) from the recursion (2.3). In case the polynomials are monic, the coefficient \(A_n=I\) for all n and \(P_0(x)=I\) as the initial value. In general, the matrices satisfy \(G_{n+1}A_n = C_{n+1}^*G_n\), \(G_n B_n = B_n^*G_n\), so that in the monic case \(C_n = G_{n-1}^{-1} G_n\) for \(n\ge 1\). In case the polynomials are orthonormal, we have \(C_n=A_{n-1}^*\) and \(B_n\) Hermitian.
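To illustrate how (2.3) generates the polynomials in practice, here is a minimal Python sketch; the list-of-coefficient-matrices representation and the name recurrence_step are ours and purely illustrative.

```python
import numpy as np

def recurrence_step(Pn, Pnm1, An, Bn, Cn):
    """One step of (2.3): P_{n+1}(x) = (x P_n(x) - P_n(x) B_n - P_{n-1}(x) C_n) A_n^{-1}.
    A matrix-valued polynomial is stored as a list [P^0, P^1, ...] of coefficient matrices."""
    d = An.shape[0]
    out = [np.zeros((d, d)) for _ in range(len(Pn) + 1)]
    for r, c in enumerate(Pn):
        out[r + 1] += c                  # x * P_n(x): shift all coefficients up by one
        out[r] -= c @ Bn                 # - P_n(x) B_n (multiplication from the right)
    for r, c in enumerate(Pnm1):
        out[r] -= c @ Cn                 # - P_{n-1}(x) C_n
    Ainv = np.linalg.inv(An)             # A_n is non-singular
    return [c @ Ainv for c in out]

# scalar sanity check: A_n = 1, B_n = 0, C_n = 1/4 gives the monic Chebyshev
# polynomials of the second kind: x, x^2 - 1/4, x^3 - x/2, ...
I = np.eye(1)
P0, P1 = [I], recurrence_step([I], [], I, 0 * I, 0.25 * I)
P2 = recurrence_step(P1, P0, I, 0 * I, 0.25 * I)
P3 = recurrence_step(P2, P1, I, 0 * I, 0.25 * I)
assert np.allclose([c[0, 0] for c in P3], [0.0, -0.5, 0.0, 1.0])
```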

Note that the matrix-valued ‘sesquilinear form’ (2.1) is antilinear in the first entry of the inner product, which leads to a three-term recurrence of the form (2.3) where the multiplication by the constant matrices is from the right, see [10] for a discussion.

In case a subspace \(V\subset {\mathbb {C}}^{2\ell +1}\) is invariant for W(x) for all x, \(V^\perp \) is also invariant for W(x) for all x. Let \({\iota }_V:V\rightarrow {\mathbb {C}}^{2\ell +1}\) be the embedding of V into \({\mathbb {C}}^{2\ell +1}\) so that \(P_V={\iota }_V{\iota }_V^*\in M_{2\ell +1}({\mathbb {C}})\) is the corresponding orthogonal projection. Then \(W(x)P_V=P_VW(x)\) for all x. Let \(P^V_n\in \text {End}(V)[x]\) be the matrix-valued polynomial defined by \(P^V_n(x) = {\iota }_V^*P_n(x) {\iota }_V\), where \(P_n\) are the monic matrix-valued orthogonal polynomials for the weight W. Then the polynomials \(P^V_n\) form a family of monic V-endomorphism-valued orthogonal polynomials, and \(P_n(x) = P^V_n(x) \oplus P^{V^\perp }_n(x)\). The same decomposition can be written down for the orthonormal polynomials.

The projections on invariant subspaces are in the commutant \(*\)-algebra \(\{ T\in M_{2\ell +1}({\mathbb {C}}) \mid TW(x) = W(x)T \ \forall x\}\). In case the commutant algebra is trivial, the matrix-valued orthogonal polynomials are irreducible. The primitive idempotents correspond to the minimal invariant subspaces, and hence they determine the decomposition of the matrix-valued orthogonal polynomials into irreducible cases.
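Numerically, the commutant can be obtained from a few samples of the weight as the null space of the commutator map; the following sketch (the helper name commutant and the toy \(2\times 2\) weight are ours, purely illustrative) recovers the span of the identity and the flip.

```python
import numpy as np

def commutant(samples, tol=1e-10):
    """A basis of {T : T W = W T for all W in samples}, via the null space of T -> [T, W]."""
    d = samples[0].shape[0]
    cols = []
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d)); E[i, j] = 1.0
            cols.append(np.concatenate([(E @ W - W @ E).ravel() for W in samples]))
    _, s, vt = np.linalg.svd(np.column_stack(cols))
    return [v.reshape(d, d) for v in vt[s < tol]]

# toy weight W(x) = [[2, x], [x, 2]]: the commutant is spanned by I and the flip
samples = [np.array([[2.0, x], [x, 2.0]]) for x in (-0.5, 0.1, 0.7)]
assert len(commutant(samples)) == 2
```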

Remark 2.1

In [42] the authors discuss non-orthogonal decompositions by considering, instead of the commutant algebra, the real vector space

$$\begin{aligned} \mathscr {A} = \{ Y \in \mathrm{End}(\mathcal {H}^{\ell }) : Y W(x) = W(x) Y^*, \quad \forall x \in (-1, 1) \}. \end{aligned}$$

It follows that if \(I\mathbb {R} \subsetneq \mathscr {A}\), then the weight W reduces, non-unitarily, to weights of smaller size. Koelink and Román [22, Example 4.3] showed that \(\mathscr {A} = \{ W(x) : x \in (-1, 1) \}'\) so that, in our case, both decompositions coincide.

We denote by \(E_{i,j}\in M_{2\ell +1}({\mathbb {C}})\) the matrix with zeroes except at the \((i,j)\)th entry where it is 1. So for the corresponding standard basis \(\{e_k\}_{k=0}^{2\ell }\) we set \(E_{i,j}e_k = {\delta }_{j,k} e_i\). We usually use the basis \(\{e_k\}_{k=0}^{2\ell }\) in describing the results for the matrix-valued orthogonal polynomials, but occasionally the basis is relabelled \(\{e^\ell _k\}_{k=-\ell }^{\ell }\), as is customary for the \({\mathcal {U}}_{q}(\mathfrak {su}(2))\)-representations of spin \(\ell \). In the latter case, we use superscripts to distinguish from the previous case: \(E^\ell _{i,j}e^\ell _k = {\delta }_{j,k} e^\ell _i\), \(i,j,k\in \{-\ell ,\ldots , \ell \}\).

3 Quantised universal enveloping algebra

We recall the setting for quantised universal enveloping algebras and quantised function algebras; this section is mainly meant to fix notation. The definitions can be found in various sources on quantum groups, such as the books [9, 11, 20], and we follow Kolb [27].

Fix for the rest of this paper \(0< q < 1\). The quantised universal enveloping algebra can be associated to any root datum, but we only need the simplest cases \(\mathfrak {g}=\mathfrak {sl}(2)\) and \(\mathfrak {g}=\mathfrak {sl}(2)\oplus \mathfrak {sl}(2)\). The quantised universal enveloping algebra is the unital associative algebra generated by k, \(k^{-1}\), e, f subject to the relations

$$\begin{aligned} kk^{-1}=1=k^{-1}k, \quad ke= q^2 ek, \quad kf=q^{-2} fk, \quad ef-fe=\frac{k-k^{-1}}{q-q^{-1}},\quad \quad \end{aligned}$$
(3.1)

where we follow the convention as in [27, §3]. For our purposes, it is useful to extend the algebra with square roots of k and \(k^{-1}\), denoted by \(k^{1/2}\), \(k^{-1/2}\), satisfying

$$\begin{aligned} \begin{aligned}&k^{1/2}k^{-1/2}=1=k^{-1/2}k^{1/2}, \quad k^{1/2}k^{1/2}=k, \quad k^{-1/2}k^{-1/2}=k^{-1}, \\&k^{1/2}e= q ek^{1/2}, \qquad k^{1/2}f=q^{-1} fk^{1/2}. \end{aligned} \end{aligned}$$
(3.2)

The extended algebra is denoted by \({\mathcal {U}}_{q}(\mathfrak {sl}(2))\), and it is a Hopf algebra with comultiplication \(\Delta \), counit \(\varepsilon \), antipode S defined on the generators by

$$\begin{aligned}&\Delta (e) = e\otimes 1 + k\otimes e, \quad \Delta (f) = f\otimes k^{-1} + 1\otimes f, \\&\qquad \qquad \Delta (k^{\pm 1/2}) = k^{\pm 1/2}\otimes k^{\pm 1/2}, \\&S(e) = -k^{-1}e, \quad S(f) = -fk, \quad S(k^{\pm 1/2})= k^{\mp 1/2}, \\&\qquad \qquad \varepsilon (e) = 0=\varepsilon (f), \quad \varepsilon (k^{\pm 1/2})=1. \end{aligned}$$

The Hopf algebra has a \(*\)-structure defined on the generators by

$$\begin{aligned} (k^{\pm 1/2})^*= k^{\pm 1/2}, \quad e^*= q^2fk, \quad f^*= q^{-2}k^{-1}e. \end{aligned}$$

We denote the corresponding Hopf \(*\)-algebra by \({\mathcal {U}}_{q}(\mathfrak {su}(2))\).

The identification as Hopf \(*\)-algebras with [21, 30] is \((A,B,C,D) \leftrightarrow (k^{1/2}, q^{-1}k^{-1/2}e, qfk^{1/2}, k^{-1/2})\).

The irreducible finite dimensional type 1 representations of the underlying \(*\)-algebra have been classified. Here type 1 means that the spectrum of \(k^{1/2}\) is contained in \(q^{\frac{1}{2}{\mathbb {Z}}}\). For each spin \(\ell \in \frac{1}{2}{\mathbb {N}}\), there is a representation of dimension \(2\ell +1\) in \(\mathcal {H}^{\ell }\cong {\mathbb {C}}^{2\ell +1}\) with orthonormal basis \(\{e^\ell _{-\ell }, e^\ell _{-\ell +1}, \ldots , e^\ell _{\ell } \}\), on which the action is given by

$$\begin{aligned} t^\ell (k^{1/2}) e^\ell _{p}&= q^{-p} e^\ell _{p}, \quad t^\ell (e) e^\ell _{p} = q^{2-p} b^\ell (p) e^\ell _{p-1}, \\ t^\ell (f) e^\ell _{p}&= q^{p-1} b^\ell (p+1) e^\ell _{p+1}, \\ b^\ell (p)&= \frac{1}{q^{-1}-q} \sqrt{(q^{-\ell +p-1}-q^{\ell -p+1})(q^{-\ell -p}-q^{\ell +p})}, \end{aligned}$$
(3.3)

where \(t^\ell :{\mathcal {U}}_{q}(\mathfrak {su}(2))\rightarrow \text {End}(\mathcal {H}^{\ell })\) is the corresponding representation. Note that \(b^\ell (p)=b^\ell (1-p)\). Finally, recall that the centre \(\mathcal {Z}({\mathcal {U}}_{q}(\mathfrak {su}(2)))\) is generated by the Casimir element \(\omega \),

$$\begin{aligned} \begin{aligned} \displaystyle \omega&= \frac{q^{-1}k^{-1}+qk-2}{(q^{-1}-q)^2}+fe = \frac{qk^{-1}+q^{-1}k-2}{(q^{-1}-q)^2}+ef, \\ \displaystyle t^\ell (\omega )&= \left( \frac{q^{-\frac{1}{2} -\ell }-q^{\frac{1}{2} + \ell }}{q^{-1}-q}\right) ^2 I. \end{aligned} \end{aligned}$$
(3.4)
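As a small numerical illustration, independent of the package mentioned in Sect. 1, the matrices of (3.3) can be generated directly and the relations (3.1) and the Casimir value (3.4) checked for a fixed spin; the helper name rep_su2 is ours.

```python
import numpy as np

def rep_su2(q, two_ell):
    """Spin-ell representation (3.3): matrices for k^{1/2}, e, f on the basis e_{-ell},...,e_{ell}."""
    ell, d = two_ell / 2, two_ell + 1
    ps = [-ell + i for i in range(d)]
    b = lambda p: np.sqrt((q**(-ell+p-1) - q**(ell-p+1)) * (q**(-ell-p) - q**(ell+p))) / (1/q - q)
    khalf = np.diag([q**(-p) for p in ps])
    e, f = np.zeros((d, d)), np.zeros((d, d))
    for i, p in enumerate(ps):
        if i > 0:
            e[i - 1, i] = q**(2 - p) * b(p)        # t(e) e_p = q^{2-p} b(p) e_{p-1}
        if i < d - 1:
            f[i + 1, i] = q**(p - 1) * b(p + 1)    # t(f) e_p = q^{p-1} b(p+1) e_{p+1}
    return khalf, e, f

q, two_ell = 0.7, 4                                 # spin ell = 2
khalf, e, f = rep_su2(q, two_ell)
k, kinv = khalf @ khalf, np.linalg.inv(khalf @ khalf)
assert np.allclose(k @ e, q**2 * e @ k)             # relations (3.1)
assert np.allclose(k @ f, q**-2 * f @ k)
assert np.allclose(e @ f - f @ e, (k - kinv) / (q - 1/q))
omega = (kinv / q + q * k - 2 * np.eye(two_ell + 1)) / (1/q - q)**2 + f @ e
ell = two_ell / 2                                   # Casimir eigenvalue (3.4)
assert np.allclose(omega, ((q**(-0.5 - ell) - q**(0.5 + ell)) / (1/q - q))**2 * np.eye(two_ell + 1))
```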

We use the notation \({\mathcal {U}}_{q}(\mathfrak {g})\) to denote the Hopf \(*\)-algebra \({\mathcal {U}}_{q}(\mathfrak {su}(2)\oplus \mathfrak {su}(2))\), which we identify with \({\mathcal {U}}_{q}(\mathfrak {su}(2))\otimes {\mathcal {U}}_{q}(\mathfrak {su}(2))\), where \(K^{1/2}_i\), \(K^{-1/2}_i\), \(E_i\), \(F_i\), \(i=1,2\), are the generators. The relations (3.1) and (3.2) hold with \((k^{1/2}, k^{-1/2},e,f)\) replaced by \((K^{1/2}_i, K^{-1/2}_i, E_i, F_i)\) for any fixed i and the generators with different index i commute. The tensor product of two Hopf \(*\)-algebras is again a Hopf \(*\)-algebra, where the maps on a simple tensor \(X_1\otimes X_2\) are given by, see e.g. [9, Chap. 4],

$$\begin{aligned} \begin{aligned}&\Delta (X_1\otimes X_2) = \Delta _{13}(X_1)\Delta _{24}(X_2), \quad \varepsilon (X_1\otimes X_2) = \varepsilon (X_1)\varepsilon (X_2), \\&S(X_1\otimes X_2) = S(X_1)\otimes S(X_2), \quad (X_1\otimes X_2)^*= X_1^*\otimes X_2^*, \end{aligned} \end{aligned}$$
(3.5)

where we use leg-numbering notation.

The irreducible finite dimensional type 1 representations of \({\mathcal {U}}_{q}(\mathfrak {g})\) are labelled by \((\ell _1,\ell _2)\in \frac{1}{2}{\mathbb {N}}\times \frac{1}{2}{\mathbb {N}}\), and the representations \(t^{\ell _1,\ell _2}\) from \({\mathcal {U}}_{q}(\mathfrak {g})\) to \(\text {End}(\mathcal {H}^{\ell _1, \ell _2})\), \(\mathcal {H}^{\ell _1, \ell _2} = \mathcal {H}^{\ell _1} {\otimes }\mathcal {H}^{\ell _2}\), are obtained as the exterior tensor product of the representations of spin \(\ell _1\) and \(\ell _2\) of \({\mathcal {U}}_{q}(\mathfrak {su}(2))\). Here type 1 means that the spectrum of \(K_i^{1/2}\), \(i=1,2\), is contained in \(q^{\frac{1}{2}{\mathbb {Z}}}\).

We have used the notation \(\Delta \), \(\varepsilon \) and S for the comultiplication, counit and antipode for all Hopf algebras, respectively. From the context, it should be clear which comultiplication, counit and antipode is meant. The corresponding dual Hopf \(*\)-algebra related to the quantised function algebra is not needed for the description of the results in Sect. 4, and it will be recalled in Sect. 5.1.

4 Matrix-valued orthogonal polynomials related to the quantum analogue of \({(\mathrm{SU}(2) \times \mathrm{SU}(2),\text {diag})}\)

In this section, we state the main results of the paper. First we introduce the specific quantum symmetric pair, which is to be considered as the quantum analogue of a symmetric space G / K, in this case \(\mathrm{SU}(2) \times \mathrm{SU}(2)/\mathrm{SU}(2)\). Quantum symmetric spaces have been introduced and studied in detail by Letzter [34–36], see also Kolb [27]. In particular, Letzter has shown that Macdonald polynomials occur as spherical functions on quantum symmetric pairs motivated by the works of Koornwinder, Dijkhuizen, Noumi and others. In our case, \(\mathcal {B}\subset {\mathcal {U}}_{q}(\mathfrak {g})\), as in Definition 4.1, is the appropriate right coideal subalgebra. Using the explicit branching rules for \(t^{(\ell _1,\ell _2)}\vert _{\mathcal B}\) of Theorem 4.3, we introduce matrix-valued spherical functions in Definition 4.4. To these matrix-valued spherical functions, we associate matrix-valued polynomials in (4.8), and we devote the remainder of this section to describing properties of these matrix-valued polynomials. This includes the orthogonality relations, the three-term recurrence relation and the matrix-valued polynomials as eigenfunctions of a commuting set of matrix-valued q-difference operators of Askey–Wilson type. Moreover, we give two explicit descriptions of the matrix-valued weight function W, one in terms of spherical functions for this quantum symmetric pair and one in terms of the LDU-decomposition. The LDU-decomposition gives the possibility to decouple the matrix-valued q-difference operator, and this leads to an explicit expression for the matrix entries of the matrix-valued orthogonal polynomials in terms of scalar-valued orthogonal polynomials from the q-Askey scheme in Theorem 4.17.

For the symmetric pair \((G,K)= (\mathrm{SU}(2) \times \mathrm{SU}(2), \mathrm{SU}(2))\), \(K=\mathrm{SU}(2)\) corresponds to the fixed points of the Cartan involution \(\theta \) flipping the order of the pairs in G. The quantised universal enveloping algebra associated to G is \({\mathcal {U}}_{q}(\mathfrak {g})\) as introduced in Sect. 3. As the quantum analogue of K, we take the right coideal subalgebra \(\mathcal {B}\subset {\mathcal {U}}_{q}(\mathfrak {g})\), i.e. \(\mathcal {B}\subset {\mathcal {U}}_{q}(\mathfrak {g})\) is an algebra satisfying \(\Delta (\mathcal {B})\subset \mathcal {B} \otimes {\mathcal {U}}_{q}(\mathfrak {g})\), as in Definition 4.1. Letzter [34, Sect. 7,(7.2)] has introduced the corresponding left coideal subalgebra, and we follow Kolb [27, §5] in using right coideal subalgebras for quantum symmetric pairs. Note that we have modified the generators slightly in order to have \(B_1^*=B_2\).

Definition 4.1

The right coideal subalgebra \(\mathcal {B}\subset {\mathcal {U}}_{q}(\mathfrak {g})\) is the subalgebra generated by \(K^{\pm 1/2}\), where \(K=K_1K_2^{-1}\), and

$$\begin{aligned} \begin{aligned} B_1&= q^{-1} K_1^{-1/2} K_2^{-1/2} E_1 + q F_2 K_1^{-1/2} K_2^{1/2}, \\ B_2&= q^{-1} K_1^{-1/2} K_2^{-1/2} E_2 + q F_1 K_1^{1/2} K_2^{-1/2}. \end{aligned} \end{aligned}$$

Remark 4.2

(i)

    \(\mathcal {B}\) is a right coideal as follows from the general construction, see [27, Proposition 5.2]. It can be verified directly by checking it for the generators. Note \(\Delta (K^{\pm 1/2})=K^{\pm 1/2}\otimes K^{\pm 1/2}\) is immediate, and

    $$\begin{aligned} \Delta (B_1)&= B_1 \otimes (K_1K_2)^{-1/2} + K^{1/2}\otimes q^{-1}(K_1K_2)^{-1/2} E_1 \\&\quad + K^{-1/2} \otimes q F_2 K^{-1/2} \end{aligned}$$

    is in \(\mathcal {B} {\otimes }{\mathcal {U}}_{q}(\mathfrak {g})\) by a straightforward calculation. Since \(B_2=B_1^*\) and \(K^{\pm 1/2}\) are self-adjoint, the same follows for \(B_2\). The relations, cf. [27, Lemma 5.15],

    $$\begin{aligned} K^{1/2} B_1 = q B_1 K^{1/2}, \quad \! K^{1/2} B_2 = q^{-1} B_2 K^{1/2}, \quad \! [B_1, B_2] = \frac{K - K^{-1}}{q - q^{-1}},\quad \quad \quad \end{aligned}$$
    (4.1)

    hold in \({\mathcal {U}}_{q}(\mathfrak {g})\), as can also be checked directly; a numerical check in a concrete representation is sketched after this remark.

(ii)

    Let \(\varPsi :{\mathcal {U}}_{q}(\mathfrak {su}(2)) \rightarrow {\mathcal {U}}_{q}(\mathfrak {su}(2))\) be defined by

    $$\begin{aligned} \varPsi (k^{1/2}) = k^{-1/2},\quad \varPsi (k^{-1/2}) = k^{1/2}, \quad \varPsi (e)=q^3 f, \qquad \varPsi (f) = q^{-3}e, \end{aligned}$$

    then \(\varPsi \) extends to an involutive \(*\)-algebra isomorphism. Consider the map

    $$\begin{aligned} \iota \circ (\varPsi \otimes \text {Id})\circ \Delta :{\mathcal {U}}_{q}(\mathfrak {su}(2))\rightarrow {\mathcal {U}}_{q}(\mathfrak {g}), \end{aligned}$$

    where \(\iota \) is the algebra morphism mapping \(x\otimes y\in {\mathcal {U}}_{q}(\mathfrak {su}(2))\otimes {\mathcal {U}}_{q}(\mathfrak {su}(2))\) to the corresponding element \(X_1Y_2\in {\mathcal {U}}_{q}(\mathfrak {g})\) for x and y generators of \({\mathcal {U}}_{q}(\mathfrak {su}(2))\). Then we see that \(k^{1/2} \mapsto K^{-1/2}\), \(qfk^{1/2}\mapsto B_1\), and \(q^{-1}k^{-1/2}e \mapsto B_2\) under the map \(\iota \circ (\varPsi \otimes \text {Id})\circ \Delta \). In particular, the relations (4.1) follow. We conclude that \(\mathcal {B}\) is isomorphic as a \(*\)-algebra to \(\Delta ({\mathcal {U}}_{q}(\mathfrak {su}(2)))\subset {\mathcal {U}}_{q}(\mathfrak {g})\) by the \(*\)-isomorphism \(\iota \circ (\varPsi \otimes \text {Id})\).

(iii)

    In particular, \(\mathcal {B}\cong {\mathcal {U}}_{q}(\mathfrak {su}(2))\) as \(*\)-algebras. So the irreducible type 1 representations of \(\mathcal {B}\) are labelled by the spin \(\ell \in \frac{1}{2}{\mathbb {N}}\). This can be made explicit by \(t^\ell :{\mathcal {B}}\rightarrow \text {End}(\mathcal {H}^\ell )\) and setting

    $$\begin{aligned} t^\ell (K^{1/2}) e^\ell _{p}&= q^{-p} e^\ell _{p}, \quad t^\ell (B_1) e^\ell _{p} = b^\ell (p) e^\ell _{p-1}, \\ t^\ell (B_2) e^\ell _{p}&= b^\ell (p+1) e^\ell _{p+1} \end{aligned}$$
    (4.2)

    with the notation of (3.3). We use the same notation \(t^\ell \) for these representations here and in (3.3), since they correspond under the identification of \({\mathcal {B}}\) as \({\mathcal {U}}_{q}(\mathfrak {su}(2))\).

(iv)

    Let \(\sigma \) be the \(*\)-algebra isomorphism of \({\mathcal {U}}_{q}(\mathfrak {g})= {\mathcal {U}}_{q}(\mathfrak {su}(2))\otimes {\mathcal {U}}_{q}(\mathfrak {su}(2))\) defined by flipping the order in the tensor product, or equivalently by flipping the subscripts \(1\leftrightarrow 2\). Then \(\sigma :\mathcal {B} \rightarrow \mathcal {B}\) is an involution with \(B_1\leftrightarrow B_2\), \(K\leftrightarrow K^{-1}\). On the level of representations of \({\mathcal {U}}_{q}(\mathfrak {g})\) and \(\mathcal {B}\), it follows that \(t^{\ell _1,\ell _2}(\sigma (X)) = P^*t^{\ell _2,\ell _1}(X) P\), \(X\in {\mathcal {U}}_{q}(\mathfrak {g})\), where \(P:\mathcal {H}^{\ell _1,\ell _2} \rightarrow \mathcal {H}^{\ell _2,\ell _1}\) is the flip, and \(t^{\ell }(\sigma (Y)) = (J^\ell )^*t^{\ell }(Y)J^\ell \), \(Y\in \mathcal {B}\), where \(J^\ell :\mathcal {H}^\ell \rightarrow \mathcal {H}^\ell \), \(J^\ell :e^\ell _p \mapsto e^\ell _{-p}\).
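The following sketch (helper names ours, reusing the matrices of (3.3)) realises \(K^{\pm 1/2}\), \(B_1\), \(B_2\) of Definition 4.1 in a concrete representation \(t^{\ell _1,\ell _2}\) and confirms the relations (4.1) and \(B_2=B_1^*\) numerically.

```python
import numpy as np

def rep_su2(q, two_ell):
    # matrices for k^{1/2}, e, f of (3.3), as in the sketch after (3.4)
    ell, d = two_ell / 2, two_ell + 1
    ps = [-ell + i for i in range(d)]
    b = lambda p: np.sqrt((q**(-ell+p-1) - q**(ell-p+1)) * (q**(-ell-p) - q**(ell+p))) / (1/q - q)
    kh, e, f = np.diag([q**(-p) for p in ps]), np.zeros((d, d)), np.zeros((d, d))
    for i, p in enumerate(ps):
        if i > 0: e[i-1, i] = q**(2-p) * b(p)
        if i < d-1: f[i+1, i] = q**(p-1) * b(p+1)
    return kh, e, f

q = 0.6
kh1, e1, f1 = rep_su2(q, 1)                     # ell_1 = 1/2
kh2, e2, f2 = rep_su2(q, 2)                     # ell_2 = 1
I1, I2, inv = np.eye(2), np.eye(3), np.linalg.inv
K1h, E1, F1 = np.kron(kh1, I2), np.kron(e1, I2), np.kron(f1, I2)
K2h, E2, F2 = np.kron(I1, kh2), np.kron(I1, e2), np.kron(I1, f2)
B1 = (1/q) * inv(K1h) @ inv(K2h) @ E1 + q * F2 @ inv(K1h) @ K2h      # Definition 4.1
B2 = (1/q) * inv(K1h) @ inv(K2h) @ E2 + q * F1 @ K1h @ inv(K2h)
Kh = K1h @ inv(K2h); K = Kh @ Kh
assert np.allclose(Kh @ B1, q * B1 @ Kh)                             # relations (4.1)
assert np.allclose(Kh @ B2, (1/q) * B2 @ Kh)
assert np.allclose(B1 @ B2 - B2 @ B1, (K - inv(K)) / (q - 1/q))
assert np.allclose(B2, B1.T)                                         # B_2 = B_1^* in this *-representation
```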

Theorem 4.3

The finite dimensional representation \(t^{\ell _1, \ell _2}\) of \({\mathcal {U}}_{q}(\mathfrak {g})\) restricted to \(\mathcal {B}\) decomposes multiplicity-free into irreducible representations \(t^{\ell }\) of \(\mathcal {B}\):

$$\begin{aligned} t^{\ell _1, \ell _2}\big \vert _{\mathcal {B}} \cong \bigoplus _{\ell = |\ell _1 - \ell _2|}^{\ell _1 + \ell _2} t^{\ell }, \end{aligned}$$

where \(\ell \) runs over \(|\ell _1-\ell _2|, |\ell _1-\ell _2|+1, \ldots , \ell _1+\ell _2\).

With respect to the orthonormal basis \(\{e^{\ell }_p\}_{p=-\ell }^\ell \) of \(\mathcal {H}^{\ell }\) and the orthogonal basis \(\{ e^{\ell _1}_{i} {\otimes }e^{\ell _2}_{j} \}_{i=-\ell _1, j=-\ell _2}^{\ell _1,\ell _2}\) for \(\mathcal {H}^{\ell _1, \ell _2}\), the \(\mathcal {B}\)-intertwiner \(\beta ^{\ell }_{\ell _1, \ell _2} :\mathcal {H}^{\ell } \rightarrow \mathcal {H}^{\ell _1, \ell _2}\) is given by

$$\begin{aligned} \beta ^{\ell }_{\ell _1, \ell _2} :e^{\ell }_p \mapsto \sum _{i = -\ell _1}^{\ell _1} \sum _{j = -\ell _2}^{\ell _2} C^{\ell _1, \ell _2, \ell }_{i, j, p} e^{\ell _1}_i {\otimes }e^{\ell _2}_{j}, \end{aligned}$$

where \(C^{\ell _1, \ell _2, \ell }_{i, j, p}\) are Clebsch–Gordan coefficients satisfying \(C^{\ell _1, \ell _2, \ell }_{i, j, p} = 0\) if \(i - j \ne p\).

The proof of Theorem 4.3 is a reduction to the well-known Clebsch–Gordan decomposition for the quantised universal enveloping algebra \({\mathcal {U}}_{q}(\mathfrak {su}(2))\), see e.g. [9, 20], using Remark 4.2. The proof is presented in Appendix 1. In particular, \((\beta ^{\ell }_{\ell _1, \ell _2})^*\beta ^{\ell }_{\ell _1, \ell _2}\) is the identity on \(\mathcal {H}^\ell \). Note that

$$\begin{aligned} (\beta ^{\ell }_{\ell _1, \ell _2})^*:\mathcal {H}^{\ell _1, \ell _2} \rightarrow \mathcal {H}^{\ell }, \qquad e^{\ell _1}_{n_1} \otimes e^{\ell _2}_{n_2} \mapsto \sum _{p=-\ell }^\ell C^{\ell _1, \ell _2, \ell }_{n_1, n_2, p} \, e^\ell _p. \end{aligned}$$
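By the multiplicity-one statement, \(\beta ^{\ell }_{\ell _1, \ell _2}\) can also be obtained numerically as the one-dimensional null space of the intertwining equations for the generators of \({\mathcal {B}}\); the following sketch (all names ours, reusing the matrices of (3.3), Definition 4.1 and (4.2)) does so for \(\ell _1=\ell _2=\ell =1\) and confirms that the result is an isometry.

```python
import numpy as np

def rep_su2(q, two_ell):
    # matrices for k^{1/2}, e, f of (3.3)
    ell, d = two_ell / 2, two_ell + 1
    ps = [-ell + i for i in range(d)]
    b = lambda p: np.sqrt((q**(-ell+p-1) - q**(ell-p+1)) * (q**(-ell-p) - q**(ell+p))) / (1/q - q)
    kh, e, f = np.diag([q**(-p) for p in ps]), np.zeros((d, d)), np.zeros((d, d))
    for i, p in enumerate(ps):
        if i > 0: e[i-1, i] = q**(2-p) * b(p)
        if i < d-1: f[i+1, i] = q**(p-1) * b(p+1)
    return kh, e, f

def coideal_on_tensor(q, two_l1, two_l2):
    # K^{1/2}, B_1, B_2 of Definition 4.1 acting in t^{l1,l2}
    kh1, e1, f1 = rep_su2(q, two_l1); kh2, e2, f2 = rep_su2(q, two_l2)
    I1, I2, inv = np.eye(two_l1 + 1), np.eye(two_l2 + 1), np.linalg.inv
    K1h, E1, F1 = np.kron(kh1, I2), np.kron(e1, I2), np.kron(f1, I2)
    K2h, E2, F2 = np.kron(I1, kh2), np.kron(I1, e2), np.kron(I1, f2)
    B1 = (1/q) * inv(K1h) @ inv(K2h) @ E1 + q * F2 @ inv(K1h) @ K2h
    B2 = (1/q) * inv(K1h) @ inv(K2h) @ E2 + q * F1 @ K1h @ inv(K2h)
    return K1h @ inv(K2h), B1, B2

def coideal_spin(q, two_ell):
    # the spin-ell representation (4.2) of the coideal subalgebra B
    ell, d = two_ell / 2, two_ell + 1
    ps = [-ell + i for i in range(d)]
    b = lambda p: np.sqrt((q**(-ell+p-1) - q**(ell-p+1)) * (q**(-ell-p) - q**(ell+p))) / (1/q - q)
    Kh, B1, B2 = np.diag([q**(-p) for p in ps]), np.zeros((d, d)), np.zeros((d, d))
    for i, p in enumerate(ps):
        if i > 0: B1[i-1, i] = b(p)
        if i < d-1: B2[i+1, i] = b(p+1)
    return Kh, B1, B2

def intertwiner(q, two_l1, two_l2, two_ell):
    """beta: H^ell -> H^{l1,l2} with t^{l1,l2}(X) beta = beta t^ell(X) for X in {K^{1/2}, B_1, B_2}."""
    big, small = coideal_on_tensor(q, two_l1, two_l2), coideal_spin(q, two_ell)
    D, d = big[0].shape[0], small[0].shape[0]
    cols = []
    for a in range(D):
        for c in range(d):
            E = np.zeros((D, d)); E[a, c] = 1.0
            cols.append(np.concatenate([(A @ E - E @ C).ravel() for A, C in zip(big, small)]))
    _, s, vt = np.linalg.svd(np.column_stack(cols))
    null = vt[s < 1e-8]
    assert null.shape[0] == 1                       # multiplicity one, as in Theorem 4.3
    beta = null[0].reshape(D, d)
    return beta / np.sqrt((beta.T @ beta)[0, 0])    # beta^* beta is a positive scalar by Schur's lemma

beta = intertwiner(0.6, 2, 2, 2)                    # (ell_1, ell_2, ell) = (1, 1, 1)
assert np.allclose(beta.T @ beta, np.eye(3))        # beta is an isometry
```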

In general, the decomposition of an irreducible representation restricted to a right coideal subalgebra seems a difficult problem. In this particular case, we can reduce it to the Clebsch–Gordan decomposition; another known special case is due to Oblomkov and Stokman [38, Proposition 1.15], but in general the problem remains open.

In particular, for fixed \(\ell \in \frac{1}{2}{\mathbb {N}}\), we have \([t^{\ell _1, \ell _2}\vert _{{\mathcal {B}}}:t^\ell ]=1\) if and only if

$$\begin{aligned} (\ell _1,\ell _2)\in \frac{1}{2}{\mathbb {N}}\times \frac{1}{2}{\mathbb {N}}, \qquad |\ell _1-\ell _2|\le \ell \le \ell _1+\ell _2, \quad \ell _1+\ell _2-\ell \in {\mathbb {Z}}. \end{aligned}$$
(4.3)

We use the reparametrisation of (4.3) by

$$\begin{aligned} \xi = \xi ^{\ell } :{\mathbb {N}}\times \{0, 1, \ldots , 2\ell \} \rightarrow \frac{1}{2} {\mathbb {N}}\times \frac{1}{2} {\mathbb {N}}, \quad \xi (n, k) = \left( \frac{n + k}{2}, \ell + \frac{n - k}{2} \right) , \end{aligned}$$
(4.4)

see also Fig. 1 and [24, Figs. 1, 2]. In case \(\ell =0\), we have \(t^0=\varepsilon \vert _{{\mathcal {B}}}\), where \(\varepsilon \) is the counit of \({\mathcal {U}}_{q}(\mathfrak {g})\), whose restriction to \({\mathcal {B}}\) is the trivial representation, and the condition (4.3) gives \(\ell _1=\ell _2\) and \(\xi ^0(n,0)=(\frac{1}{2} n, \frac{1}{2} n)\).

Fig. 1 The spherical functions \(\varPhi ^{\ell }_{\ell _1, \ell _2}\) when \(\ell = 2\) and interpretation of \(\varphi \cdot \varPhi ^{\ell }_{5/2, 5/2}\) in terms of the matrix-valued spherical functions. The reparametrisation \(\xi \) is depicted

With these preparations, we can introduce the matrix-valued spherical functions associated to a fixed representation \(t^\ell \) of \({\mathcal {B}}\), where we use the notation of Theorem 4.3.

Definition 4.4

Fix \(\ell \in \frac{1}{2}{\mathbb {N}}\) and let \((\ell _1, \ell _2) \in \frac{1}{2}{\mathbb {N}}\times \frac{1}{2}{\mathbb {N}}\) so that \([t^{\ell _1, \ell _2}\vert _{{\mathcal {B}}}:t^\ell ]=1\). The spherical function of type \(\ell \) associated to \((\ell _1, \ell _2)\) is defined by

$$\begin{aligned} \varPhi ^{\ell }_{\ell _1, \ell _2} :{\mathcal {U}}_{q}(\mathfrak {g}) \rightarrow \mathrm{End}(\mathcal {H}^{\ell }), \qquad Z \mapsto (\beta ^{\ell }_{\ell _1, \ell _2})^*\circ t^{\ell _1, \ell _2}(Z) \circ \beta ^{\ell }_{\ell _1, \ell _2}. \end{aligned}$$

Remark 4.5

(i)

    Note that the requirement on \((\ell _1,\ell _2)\) in Definition 4.4 corresponds to the condition (4.3). Since \(\beta ^{\ell }_{\ell _1, \ell _2}\) is a \({\mathcal {B}}\)-intertwiner, we have

    $$\begin{aligned} \varPhi ^{\ell }_{\ell _1, \ell _2}(XZY) = t^\ell (X) \varPhi ^{\ell }_{\ell _1, \ell _2}(Z) t^\ell (Y), \qquad \forall \, X,Y\in {\mathcal {B}}, \ \forall \, Z\in {\mathcal {U}}_{q}(\mathfrak {g}). \end{aligned}$$
    (4.5)
(ii)

    Note that the condition (4.3) is symmetric in \(\ell _1\) and \(\ell _2\). With the notation of Remark 4.2(iv), we have \(\varPhi ^{\ell }_{\ell _2, \ell _1}(Z) = J^\ell \varPhi ^{\ell }_{\ell _1, \ell _2}(\sigma (Z)) J^\ell \) for \(Z\in {\mathcal {U}}_{q}(\mathfrak {g})\). This follows from \(\beta ^{\ell }_{\ell _2, \ell _1} = P \beta ^{\ell }_{\ell _1, \ell _2} J^\ell \), which is a consequence of (8.2).

In case \(\ell = 0\), \(\mathcal {H}^0\cong {\mathbb {C}}\), we need \(\ell _1 = \ell _2\). Then \(\varPhi ^0_{\ell _1,\ell _1}\) are linear maps \({\mathcal {U}}_{q}(\mathfrak {g}) \rightarrow {\mathbb {C}}\). In particular, \(\varPhi ^{0}_{0,0}\) equals the counit \(\varepsilon \), and the spherical function \(\varphi = \frac{1}{2}(q^{-1}+q) \varPhi ^{0}_{1/2, 1/2}\) is a scalar-valued linear map on \({\mathcal {U}}_{q}(\mathfrak {g})\). The elements \(\varPhi ^{0}_{n/2, n/2}\) can be written as multiples of \(U_n(\varphi )\), where \(U_n\) denotes the Chebyshev polynomial of the second kind of degree n, see Proposition 5.3. This statement can be considered as a special case of Theorem 4.8, but we need the identification with the Chebyshev polynomials in the spherical case \(\ell =0\) in order to obtain the weight function in Theorem 4.8. Proposition 5.3 will follow from Theorem 4.6. The identification of the spherical functions for \(\ell =0\) with Chebyshev polynomials corresponds to the classical case, since the spherical functions on \(G\times G/G\) are the characters on G and the characters on \(\mathrm{SU}(2)\) are Chebyshev polynomials of the second kind, as the simplest case of the Weyl character formula. It also corresponds to the computation of the characters on the quantum \(\mathrm{SU}(2)\) group by Woronowicz [48], since the characters are identified with Chebyshev polynomials as well.

Next Theorem 4.6 gives the possibility to associate polynomials in \(\varphi \) to spherical functions of Definition 4.4. Theorem 4.6 essentially follows from the tensor product decomposition of representations of \({\mathcal {U}}_{q}(\mathfrak {g})\), which in turn follows from tensor product decomposition for \({\mathcal {U}}_{q}(\mathfrak {su}(2))\), and some explicit knowledge of Clebsch–Gordan coefficients.

Theorem 4.6

Fix \(\ell \in \frac{1}{2}{\mathbb {N}}\) and let \((\ell _1, \ell _2) \in \frac{1}{2}{\mathbb {N}}\times \frac{1}{2}{\mathbb {N}}\) satisfy (4.3), then for constants \(A_{i, j}\) we have

$$\begin{aligned} \varphi \varPhi ^{\ell }_{\ell _1, \ell _2} = \sum _{i, j = \pm 1/2} A_{i, j} \varPhi ^{\ell }_{\ell _1 + i, \ell _2 + j}, \qquad A_{1/2, 1/2} \ne 0. \end{aligned}$$

In order to interpret the result of Theorem 4.6, we evaluate both sides at an arbitrary \(X\in {\mathcal {U}}_{q}(\mathfrak {g})\). The right-hand side is a linear combination of linear maps from \(\mathcal {H}^\ell \) to itself after evaluating at X. For the left-hand side, we use the pairing of Hopf algebras so that multiplication and comultiplication are dual to each other and the left-hand side has to be interpreted as

$$\begin{aligned} \Bigl (\varphi \varPhi ^{\ell }_{\ell _1, \ell _2}\Bigr )(X) = \sum _{(X)} \varphi (X_{(1)})\, \varPhi ^{\ell }_{\ell _1, \ell _2}(X_{(2)}) \in \text {End}(\mathcal {H}^\ell ), \end{aligned}$$
(4.6)

which is a linear combination of linear maps from \(\mathcal {H}^\ell \) to itself, using \(\Delta (X)= \sum _{(X)} X_{(1)}\otimes X_{(2)}\). The convention in Theorem 4.6 is that \(A_{i,j}\) is zero in case \((\ell _1+i,\ell _2+j)\) does not satisfy (4.3). The proof of Theorem 4.6 can be found in Sect. 5.2.

Since \({\mathcal {B}}\) is a right coideal subalgebra, we see that the left-hand side of Theorem 4.6 has the same transformation behaviour as (4.5). Indeed, for \(X\in {\mathcal {B}}\) and \(Y\in {\mathcal {U}}_{q}(\mathfrak {g})\) we have

$$\begin{aligned} \Bigl (\varphi \varPhi ^{\ell }_{\ell _1, \ell _2}\Bigr )(XY)&= \sum _{(X),(Y)} \varphi (X_{(1)}Y_{(1)})\, \varPhi ^{\ell }_{\ell _1, \ell _2}(X_{(2)}Y_{(2)}) \\&= \sum _{(X),(Y)} \varepsilon (X_{(1)})\varphi (Y_{(1)})\, \varPhi ^{\ell }_{\ell _1, \ell _2}(X_{(2)}Y_{(2)}) \\&= \sum _{(Y)} \varphi (Y_{(1)})\, \varPhi ^{\ell }_{\ell _1, \ell _2}\Bigl (\sum _{(X)} \varepsilon (X_{(1)})X_{(2)}Y_{(2)}\Bigr ) \\&= \sum _{(Y)} \varphi (Y_{(1)})\, \varPhi ^{\ell }_{\ell _1, \ell _2}(XY_{(2)}) \\&= \sum _{(Y)} \varphi (Y_{(1)})\, t^\ell (X) \varPhi ^{\ell }_{\ell _1, \ell _2}(Y_{(2)}) = t^\ell (X) \Bigl (\varphi \varPhi ^{\ell }_{\ell _1, \ell _2}\Bigr )(Y), \end{aligned}$$
(4.7)

where we have used that \(X_{(1)}\in {\mathcal {B}}\) by the right coideal property together with (4.5) for \(\varphi \), a multiple of \(\varPhi ^0_{1/2,1/2}\), in the second equality, the counit axiom \(\sum _{(X)} \varepsilon (X_{(1)})X_{(2)}=X\) in the fourth equality, and (4.5) for \(\varPhi ^\ell _{\ell _1,\ell _2}\) combined with the fact that \(\varphi (Y_{(1)})\) is a scalar in the last step. The invariance property from the right is proved similarly.

Theorem 4.6 leads to polynomials in \(\varphi \) by iterating the result and using that \(A_{1/2,1/2}\) is non-zero.

Corollary 4.7

There exist \(2\ell + 1\) polynomials \(r^{\ell , k}_{n,m}\), \(0 \le k \le 2\ell \), of degree at most n so that

$$\begin{aligned} \varPhi ^{\ell }_{\xi (n,m)} = \sum _{k = 0}^{2\ell } r^{\ell , k}_{n,m}(\varphi )\, \varPhi ^{\ell }_{\xi (0, k)}, \quad n\in {\mathbb {N}}, \quad 0 \le m \le 2\ell . \end{aligned}$$

The aim of the paper is to show that the polynomials \(r^{\ell ,k}_{n,m}\) give rise to matrix-valued orthogonal polynomials. Put

$$\begin{aligned} P_n = P^\ell _n \in \mathrm{End}(\mathcal {H}^\ell )[x] \qquad (P_n)_{i,j}=\overline{r^{\ell ,i}_{n,j}}, \qquad 0\le i,j\le 2\ell , \end{aligned}$$
(4.8)

where the matrix-valued polynomials \(P_n\) are taken with respect to the relabelled standard basis \(e_p=e^\ell _{p-\ell }\), \(p\in \{0,1,\ldots ,2\ell \}\) so that \(P_n = \sum _{i,j=0}^{2\ell } \overline{r^{\ell ,i}_{n,j}} \otimes E_{i,j}\). From Corollary 4.12 or Theorem 4.17, we see that the polynomial \(r^{\ell ,i}_{n,j}\) has real coefficients. The case \(\ell =0\) corresponds to a three-term recurrence relation for (scalar-valued) orthogonal polynomials, and then the polynomials coincide with the Chebyshev polynomials \(U_n\) viewed as a subclass of Askey–Wilson polynomials [4, (2.18)], see Proposition 5.3.

We show that the matrix-valued polynomials \((P_n)_{n=0}^\infty \) are orthogonal with respect to an explicit matrix-valued weight function W, see Theorem 4.8, arising from the Schur orthogonality relations. The expansion of the entries of the weight function in terms of Chebyshev polynomials is given by quantum group theoretic considerations except for the calculation of the coefficients in this expansion. The matrix-valued orthogonal polynomials satisfy a matrix-valued three-term recurrence relation as follows from Theorem 4.6, which in turn is a consequence of the decomposition of tensor product representations of \({\mathcal {U}}_{q}(\mathfrak {g})\). However, in order to determine the matrix coefficients in the matrix-valued three-term recurrence we use analytic methods. The existence of two Casimir elements in \({\mathcal {U}}_{q}(\mathfrak {g})\) leads to the matrix-valued orthogonal polynomials being eigenfunctions of two commuting matrix-valued q-difference operators, see [23] for the group case. This extends Letzter [35] to the matrix-valued set-up for this particular case. The q-difference operators are the key to determining the entries of the matrix-valued orthogonal polynomials explicitly in terms of scalar-valued orthogonal polynomials from the q-Askey scheme [26], namely the continuous q-ultraspherical polynomials and the q-Racah polynomials. In this deduction, the LDU-decomposition of the matrix-valued weight function W is essential, since the conjugation with L allows us to decouple the matrix-valued q-difference operator.

In the remainder of Sect. 4, we state these results explicitly, and we present the proofs in the remaining sections. First we give the main statements which essentially follow from the quantum group theoretic set-up, except for explicit calculations, and these are Theorems 4.8, 4.11, 4.13. The remaining Theorems 4.15, 4.17 are obtained using scalar orthogonal polynomials from the q-analogue of the Askey scheme [26] and transformation and summation formulas for basic hypergeometric series [15].

We start by stating that the matrix-valued polynomials \((P_n)_{n=0}^\infty \) introduced in (4.8) are orthogonal with the conventions of Sect. 2. The orthogonality relations of Theorem 4.8 are due to the Schur orthogonality relations. The expansion of the entries of the weight function in terms of Chebyshev polynomials follows from the fact that the entries are spherical functions, i.e. correspond to the case \(\ell =0\) so that they are polynomial in \(\varphi \). The non-zero entries follow by considering tensor product decompositions, but the explicit values for the coefficients \(\alpha _t(m,n)\) in Theorem 4.8 require summation and transformation formulae for basic hypergeometric series.

Theorem 4.8

The polynomials \((P_n)_{n=0}^\infty \) of (4.8) form a family of matrix-valued orthogonal polynomials so that \(P_n\) is of degree n with non-singular leading coefficient. The orthogonality for the matrix-valued polynomials \((P_n)_{n \ge 0}\) is given by

$$\begin{aligned} \frac{2}{\pi } \int _{-1}^1 P_m(x)^* W(x) P_n(x) \sqrt{1 - x^2} dx&= G_m \delta _{m, n}, \end{aligned}$$

where the squared norm matrix \(G_m\) is diagonal:

$$\begin{aligned} (G_n)_{i, j}&= \delta _{i, j} q^{2n - 2\ell } \frac{ (1-q^{4\ell +2})^2}{(1-q^{2n+2i+2})(1-q^{4\ell -2i+2n+2})}. \end{aligned}$$

Moreover, for \(0 \le m \le n \le 2\ell \) the weight matrix is given by

$$\begin{aligned} W(x)_{m, n}&= \sum _{t = 0}^m \alpha _t(m, n)\, U_{m + n - 2t}(x), \end{aligned}$$

where

$$\begin{aligned} \alpha _t(m, n)&= q^{ 2n(2\ell +1) - n^2 - (4\ell + 3)t + t^2 - 2\ell + m } \frac{1 - q^{4\ell + 2}}{1 - q^{2m + 2}} \frac{(q^2; q^2)_{2\ell - n} (q^2; q^2)_{n}}{(q^2; q^2)_{2\ell }} \\&\qquad \times \,\frac{(-1)^{m - t} (q^{2m - 4\ell }; q^{2})_{n - t}}{(q^{2m + 4}; q^2)_{n-t}} \frac{(q^{4\ell + 4 - 2t}; q^2)_{t}}{(q^2; q^2)_{t}}, \end{aligned}$$

and \(W(x)_{m, n} = W(x)_{n, m}\) if \(m > n\).
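A small numerical sanity check of Theorem 4.8 (all function names are ours; the sum over t is taken for \(0\le t\le m\), so that all Chebyshev indices are non-negative): for \(\ell =1\) the weight is positive definite on \((-1,1)\), commutes with the flip J of Proposition 4.10 below, and its zeroth moment reproduces \(G_0\).

```python
import numpy as np

def qp(a, q, n):                                   # q-shifted factorial (a; q)_n
    out = 1.0
    for s in range(n):
        out *= (1 - a * q**s)
    return out

def cheb_U(n, x):                                  # Chebyshev polynomials of the second kind
    u_prev, u = 1.0, 2.0 * x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

def weight(x, q, two_ell):
    """W(x) of Theorem 4.8; the sum over t runs up to m (for m <= n)."""
    L = two_ell
    W = np.zeros((L + 1, L + 1))
    for m in range(L + 1):
        for n in range(m, L + 1):
            val = 0.0
            for t in range(m + 1):
                alpha = (q**(2*n*(L + 1) - n*n - (2*L + 3)*t + t*t - L + m)
                         * (1 - q**(2*L + 2)) / (1 - q**(2*m + 2))
                         * qp(q**2, q**2, L - n) * qp(q**2, q**2, n) / qp(q**2, q**2, L)
                         * (-1)**(m - t)
                         * qp(q**(2*m - 2*L), q**2, n - t) / qp(q**(2*m + 4), q**2, n - t)
                         * qp(q**(2*L + 4 - 2*t), q**2, t) / qp(q**2, q**2, t))
                val += alpha * cheb_U(m + n - 2*t, x)
            W[m, n] = W[n, m] = val
    return W

q, two_ell = 0.5, 2                                # a 3x3 example (ell = 1)
J = np.eye(two_ell + 1)[::-1]                      # the flip of Proposition 4.10
for x in np.linspace(-0.9, 0.9, 5):
    Wx = weight(x, q, two_ell)
    assert np.allclose(J @ Wx @ J, Wx)             # J W(x) J = W(x)
    assert np.linalg.eigvalsh(Wx).min() > 0        # W(x) > 0 on (-1, 1)
# zeroth moment: (2/pi) int W(x) sqrt(1-x^2) dx = G_0, by Gauss-Chebyshev quadrature
N = 40
th = np.arange(1, N + 1) * np.pi / (N + 1)
M0 = sum((2.0 / (N + 1)) * np.sin(t)**2 * weight(np.cos(t), q, two_ell) for t in th)
G0 = np.diag([q**(-two_ell) * (1 - q**(2*two_ell + 2))**2
              / ((1 - q**(2*i + 2)) * (1 - q**(2*two_ell - 2*i + 2))) for i in range(two_ell + 1)])
assert np.allclose(M0, G0)
```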

The proof of Theorem 4.8 proceeds in steps. First we study explicitly the case \(\ell =0\), motivated by the works of Koornwinder [30], Letzter [3436] and others. Secondly, we show that taking traces of a matrix-valued spherical function of type \(\ell \) associated to \((\ell _1,\ell _2)\) times the adjoint of a spherical function of type \(\ell \) associated to \((\ell _1',\ell _2')\) gives, up to an action by an invertible group-like element of \({\mathcal {U}}_{q}(\mathfrak {g})\), a polynomial in the generator for the case \(\ell =0\). Then the explicit expression of the Haar functional on this polynomial algebra, stated in Lemma 5.4, gives the matrix-valued orthogonality relations. Finally, the explicit expression for the weight is obtained by analysing the explicit expression of W in terms of the matrix entries of the intertwiners \(\beta ^\ell _{\ell _1,\ell _2}\) in case \(\ell _1+\ell _2=\ell \). These matrix entries are Clebsch–Gordan coefficients.

The leading coefficient of \(P_n\) can be calculated explicitly from the proof of Theorem 4.8:

Corollary 4.9

The leading coefficient of \(P_n\) is a non-singular diagonal matrix:

$$\begin{aligned} \mathrm{lc}(P_n)_{i,j}&= \delta _{i,j} 2^n q^n \frac{ (q^{2i + 2}, q^{4\ell - 2i + 2}; q^2)_{n} }{ (q^2, q^{4\ell + 4}; q^2)_{n} }. \end{aligned}$$

The weight W is not irreducible, see Sect. 2, but splits into two irreducible block matrices. The symmetry J of the weight function of Theorem 4.8 is essentially a consequence of Remark 4.5(ii), but we need the explicit expression of the weight in order to prove that the commutant algebra is not bigger, see also [22, §4].

Proposition 4.10

The commutant algebra

$$\begin{aligned} \{ W(x) \mid x \in [-1, 1] \}'&= \{ Y \in \mathrm{End}(\mathcal {H}^{\ell }) \mid W(x) Y = Y W(x), \forall x \in (-1, 1) \}, \end{aligned}$$

is spanned by I and J, where \(J:e_p\mapsto e_{2\ell -p}\), \(p\in \{0,\ldots , 2\ell \}\), is a self-adjoint involution. Then \(JP_n(x)J = P_n(x)\) and \(JG_nJ = G_n\). Moreover, the weight W decomposes into two irreducible block matrices \(W_+\) and \(W_-\), where \(W_+\), respectively \(W_-\), acts in the \(+1\)-eigenspace, respectively \(-1\)-eigenspace, of J. So for \(P_++P_-=I\), where \(P_+\), \(P_-\) are the orthogonal self-adjoint projections \(P_+=\frac{1}{2}(I+J)\), \(P_-=\frac{1}{2}(I-J)\), we have that \(W_+\), respectively \(W_-\), corresponds to \(P_+W(x)P_+\), respectively \(P_-W(x)P_-\), restricted to the \(+1\)-eigenspace, respectively \(-1\)-eigenspace, of J.

The special cases for \(\ell =\frac{1}{2}\) and \(\ell =1\) are given at the end of this section. In particular, we identify all scalar-valued orthogonal polynomials occurring in this framework explicitly in terms of Askey–Wilson polynomials.

Theorem 4.6 can be used to find a three-term recurrence relation for the matrix-valued orthogonal polynomials \(P_n\), cf. Sect. 2, so the underlying tensor product decompositions provide the three-term recurrence relation. However, the resulting expressions for the entries of the coefficients of the matrices are rather complicated expressions in terms of Clebsch–Gordan coefficients. For the corresponding matrix-valued monic polynomials \(Q_n(x) = P_n(x) \mathrm{lc}(P_n)^{-1}\), see Corollary 4.9 for the explicit expression for the leading coefficient, we can derive a simple expression for the matrices in the three-term recurrence relation once we have obtained more explicit expressions for the matrix entries of \(Q_n\). This is obtained in Sect. 7 using an explicit link of the matrix entries to scalar orthogonal polynomials in the q-Askey scheme.

Theorem 4.11

The monic matrix-valued orthogonal polynomials \((Q_n)_{n \ge 0}\) satisfy the three-term recurrence relation

$$\begin{aligned} xQ_n(x) = Q_{n+1}(x) + Q_n(x) X_n + Q_{n-1}(x) Y_n, \end{aligned}$$

where \(Q_{-1}(x) = 0\), \(Q_0(x) = I\) and

$$\begin{aligned} X_n&= \sum _{i=0}^{2\ell -1} \frac{ q^{2n+1} (1-q^{2i+2})^2 (1-q^{4\ell +2n+2})^2 }{ 2 (1-q^{2i+2n})(1-q^{4\ell +2n-2i})(1-q^{2n+2i+2})(1-q^{4\ell -2i+2n+2}) } E_{i,i+1} \\&\quad +\, \sum _{i=1}^{2\ell } \frac{ q^{2n+1} (1-q^{2n})^2(1-q^{4\ell +2n+2})^2 }{ 2 (1-q^{2n+2i})(1-q^{4\ell +2n-2i})(1-q^{2n+2i+2})(1-q^{4\ell -2i+2n+2})} E_{i,i-1}, \\ Y_n&= \sum _{i=0}^{2\ell } \frac{1}{4} \frac{ (1-q^{2n})^2(1-q^{4\ell +2n+2})^2 }{ (1-q^{2n+2i})(1-q^{4\ell +2n-2i})(1-q^{2n+2i+2})(1-q^{4\ell -2i+2n+2}) } E_{i,i}. \end{aligned}$$

Note that \(X_n\rightarrow 0\), \(Y_n\rightarrow \frac{1}{4}\) as \(n\rightarrow \infty \).

The three-term recurrence relation for the matrix-valued orthogonal polynomials \(P_n\) is given in Corollary 4.12, which follows from Theorem 4.11, since we have \(G_{n+1}A_n=\mathrm{lc}(P_{n+1})^*\mathrm{lc}(P_n)\), \(G_nB_n= \mathrm{lc}(P_n)^*X_n \mathrm{lc}(P_n)\), and \(G_{n-1}C_n=\mathrm{lc}(P_{n-1})^*Y_n\mathrm{lc}(P_n)\). For future reference, we give the explicit expressions in Corollary 4.12.

Corollary 4.12

The matrix-valued orthogonal polynomials \((P_n)_{n \ge 0}\) satisfy the three-term recurrence relation

$$\begin{aligned} x P_n(x) = P_{n+1}(x) A_n + P_{n}(x) B_n + P_{n-1}(x) C_n, \end{aligned}$$

where \(P_{-1}(x) = 0\), \(P_0(x) = I\) and

$$\begin{aligned} A_n&= \sum _{i = 0}^{2\ell } \frac{1}{2q} \frac{ (1 - q^{2n + 2})(1 - q^{4\ell + 2n + 4}) }{ (1 - q^{2i + 2n + 2})(1 - q^{4\ell - 2i + 2n + 2}) } E_{i, i}, \\ B_n&= \sum _{i=0}^{2\ell -1} \frac{q^{2n+1}}{2} \frac{ (1-q^{4\ell -2i})(1-q^{2i+2}) }{ (1-q^{4\ell +2n-2i})(1-q^{2n+2i+4}) } E_{i,i+1} \\&\quad +\,\sum _{i=1}^{2\ell } \frac{q^{2n+1}}{2} \frac{ (1-q^{2i})(1-q^{4\ell -2i+2}) }{ (1-q^{2n+2i})(1-q^{4\ell -2i+2n+4}) } E_{i,i-1}, \\ C_n&= \sum _{i=0}^{2\ell } \frac{q}{2} \frac{ (1-q^{2n})(1-q^{4\ell +2n+2}) }{ (1-q^{2n+2i+2})(1-q^{4\ell +2n-2i+2}) } E_{i,i}. \end{aligned}$$

Note that the case \(\ell =0\) gives a three-term recurrence relation that can be solved in terms of the Chebyshev polynomials, see Proposition 5.3.
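Comparing the coefficients of \(x^{n+1}\) in the recurrence of Corollary 4.12 gives \(\mathrm{lc}(P_{n+1})=\mathrm{lc}(P_n)A_n^{-1}\), which can be checked against Corollary 4.9 numerically (helper names ours, purely illustrative).

```python
import numpy as np

def qp(a, q, n):                                   # q-shifted factorial (a; q)_n
    out = 1.0
    for s in range(n):
        out *= (1 - a * q**s)
    return out

def lc(n, q, two_ell):                             # leading coefficient of P_n, Corollary 4.9
    return np.diag([2**n * q**n * qp(q**(2*i + 2), q**2, n) * qp(q**(2*two_ell - 2*i + 2), q**2, n)
                    / (qp(q**2, q**2, n) * qp(q**(2*two_ell + 4), q**2, n))
                    for i in range(two_ell + 1)])

def A(n, q, two_ell):                              # the (diagonal) coefficient A_n of Corollary 4.12
    return np.diag([(1 - q**(2*n + 2)) * (1 - q**(2*two_ell + 2*n + 4))
                    / (2*q * (1 - q**(2*i + 2*n + 2)) * (1 - q**(2*two_ell - 2*i + 2*n + 2)))
                    for i in range(two_ell + 1)])

q, two_ell = 0.6, 4
cur = np.eye(two_ell + 1)                          # lc(P_0) = I since P_0 = I
for n in range(8):
    cur = cur @ np.linalg.inv(A(n, q, two_ell))    # comparing x^{n+1} coefficients in (2.3)
    assert np.allclose(cur, lc(n + 1, q, two_ell))
```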

In the group case, the spherical functions are eigenfunctions of K-invariant differential operators on G / K, see e.g. [8, 14]. For matrix-valued spherical functions this is also the case, see [40], and this has been exploited in the special cases studied in [17, 23, 24]. In the quantum group case, the action of the Casimir operator gives rise to a q-difference operator for the corresponding spherical functions, see [35]. The first occurrence of an Askey–Wilson q-difference operator, see [4, 15, 19], in this context is due to Koornwinder [30]. For the matrix-valued orthogonal polynomials, we have a matrix-valued analogue of the Askey–Wilson q-difference operator, as given in Theorem 4.13. We obtain two of these operators, one arising from the Casimir operator for \({\mathcal {U}}_{q}(\mathfrak {su}(2))\) in the first leg of \({\mathcal {U}}_{q}(\mathfrak {g})\) and one from the \({\mathcal {U}}_{q}(\mathfrak {su}(2))\) Casimir operator of the second leg. This is related to a kind of Cartan decomposition of \({\mathcal {U}}_{q}(\mathfrak {g})\), cf. (4.5), which, however, does not exist in general for quantised universal enveloping algebras. We can still resolve this problem using techniques based on [8, §2], see the first part of the proof in Sect. 5. The proof of Theorem 4.13 is completed in Sect. 7.

Theorem 4.13

Define two matrix-valued q-difference operators by

$$\begin{aligned} D_i = \mathcal {M}_i(z) \eta _{q} + \mathcal {M}_i(z^{-1}) \eta _{q^{-1}}, \quad i = 1, 2, \end{aligned}$$

where the multiplication by the matrix-valued functions \(\mathcal {M}_i(z)\) and \(\mathcal {M}_i(z^{-1})\) is from the left and \(\eta _q\) is the shift operator defined by \((\eta _{q} \breve{f})(z) = \breve{f}(qz)\), \(\breve{f}(z) =f(\mu (z))\), where \(x=\mu (z) = \frac{1}{2}(z+z^{-1})\). The matrix-valued function \(\mathcal {M}_1\) is given by

$$\begin{aligned} \begin{aligned} \mathcal {M}_1(z)&= - \sum _{i=0}^{2\ell -1} \frac{q^{1-i}(1 - q^{2i+2})}{(1 - q^2)^2} \frac{z}{(1 - z^2)} E_{i,i+1} \\&\quad + \sum _{i=0}^{2\ell } \frac{q^{1-i}}{(1 - q^2)^2} \frac{(1 - q^{2i + 2} z^2)}{(1 - z^2)} E_{i,i}, \end{aligned} \end{aligned}$$

and \(\mathcal {M}_2(z) = J \mathcal {M}_1(z) J\), where \(J e_{p} = e_{2\ell - p}\). The matrix-valued orthogonal polynomials \(P_n\) are eigenfunctions for the operators \(D_i\) with eigenvalue matrices given by \(\varLambda _n(i)\) such that \(D_i P_n = P_n \varLambda _n(i)\) and

$$\begin{aligned} \varLambda _{n}(1)&= \sum _{j=0}^{2\ell } \frac{ q^{-j - n - 1} + q^{j + n + 1} }{ (q^{-1} - q)^2 } E_{j,j}, \quad \varLambda _{n}(2) = J\varLambda _{n}(1)J. \end{aligned}$$

Explicitly,

$$\begin{aligned} (D_i P_n)(\mu (z)) = \mathcal {M}_i(z) (\eta _{q} \breve{P_n})(z) + \mathcal {M}_i(z^{-1}) (\eta _{q^{-1}} \breve{P_n})(z) = P_n(\mu (z)) \varLambda _n(i), \end{aligned}$$

where \(\eta _q\) and \(\eta _{q^{-1}}\) are applied entry-wise to the matrix-valued orthogonal polynomials \(P_n\).

Theorem 4.13 shows that \(JD_1J=D_2\), since J is constant. In particular, \(D_1+D_2\) commutes with J and reduces to a q-difference operator for the matrix-valued orthogonal polynomials associated with the weight \(W_+\) or \(W_-\), see Proposition 4.10. Similarly, \(D_1-D_2\) anticommutes with J.

Note that the expression \(\mathcal {M}_i(z) \breve{P}(qz) + \mathcal {M}_i(z^{-1}) \breve{P}(z/q)\) is symmetric in \(z\leftrightarrow z^{-1}\) for any matrix-valued polynomial P, and hence again is a function in \(x=\mu (z)\). The case \(\ell =0\) corresponds to only one q-difference operator, which we rewrite as

$$\begin{aligned} \left( \frac{1-q^2z^2}{1-z^2}\eta _{q} +\frac{1-q^2z^{-2}}{1-z^{-2}}\eta _{q^{-1}}\right) \breve{p}_n = (q^{-n} +q^{n+2}) \breve{p}_n. \end{aligned}$$
(4.9)

The Chebyshev polynomials, rewritten as Askey–Wilson polynomials \(U_n(x) = (q^{n+2};q)_n^{-1} p_n(x;q,-q,q^{1/2},-q^{1/2}|q)\) [4, (2.18)], are solutions of the relation (4.9), see [26, §14.1], [15, §7.7], [19, Chap. 15–16]. In particular, we consider the operators of Theorem 4.13 as matrix-valued analogues of the Askey–Wilson operator, see Askey and Wilson [4], or [2, 15, 19].
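The relation (4.9) is easily verified numerically for the first few Chebyshev polynomials (helper names ours):

```python
import numpy as np

def cheb_U(n, x):                                   # Chebyshev polynomials of the second kind
    u_prev, u = 1.0, 2.0 * x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

q = 0.55
mu = lambda z: 0.5 * (z + 1/z)
for n in range(6):
    for z in (0.2, 0.7, 1.9, -0.4):
        lhs = ((1 - q**2 * z**2) / (1 - z**2)) * cheb_U(n, mu(q * z)) \
            + ((1 - q**2 * z**-2) / (1 - z**-2)) * cheb_U(n, mu(z / q))
        assert np.isclose(lhs, (q**-n + q**(n + 2)) * cheb_U(n, mu(z)))
```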

Corollary 4.14

The q-difference operators \(D_1\) and \(D_2\) are symmetric with respect to the matrix-valued weight W, i.e. for all matrix-valued polynomials P, Q, we have

$$\begin{aligned} \int _{-1}^1 \bigl ( (D_iP)(x)\bigr )^*W(x) Q(x) \, dx = \int _{-1}^1 \bigl ( P(x)\bigr )^*W(x) (D_iQ)(x) \, dx, \quad i=1,2. \end{aligned}$$

By [16, §2] it suffices to check Corollary 4.14 for \(P=P_n\), \(Q=P_m\), so that by Theorems 4.13 and 4.8 we need to check that \(\varLambda _n(i)^*G_n\delta _{m,n} = G_n \varLambda _m(i) \delta _{m,n}\), which is true since the matrices involved are real and diagonal.

In order to study the matrix-valued orthogonal polynomials and the weight function in more detail, we need the continuous q-ultraspherical polynomials [2, Chap. 2], [15], [19, Chap. 20], [26]:

$$\begin{aligned} C_n(x;\beta | q) = \sum _{r = 0}^{n} \frac{ (\beta ; q)_r (\beta ; q)_{n - r} }{ (q; q)_{r} (q; q)_{n - r} } e^{i(n - 2r)\theta }, \quad x = \cos \theta . \end{aligned}$$
(4.10)

The continuous q-ultraspherical polynomials are orthogonal polynomials for \(|\beta |<1\). The orthogonality measure is a positive measure in case \(0<q<1\) and \(\beta \) real with \(|\beta |<1\). Explicitly,

$$\begin{aligned} \begin{aligned}&\int _{-1}^1 C_n(x;\beta | q) C_m(x;\beta | q) \frac{w(x;\beta \mid q)}{\sqrt{1-x^2}}\, dx = {\delta }_{nm}\, 2\pi \frac{({\beta }, {\beta }q;q)_\infty }{({\beta }^2,q;q)_\infty } \frac{({\beta }^2;q)_n}{(q;q)_n} \frac{1-{\beta }}{1-{\beta }q^n}, \\&w(\cos \theta ;\beta | q) = \frac{(e^{2i\theta }, e^{-2i\theta };q)_\infty }{({\beta }e^{2i\theta }, {\beta }e^{-2i\theta };q)_\infty }. \end{aligned} \end{aligned}$$
(4.11)

Note that in the special case \({\beta }= q^{1+k}\), \(k\in {\mathbb {N}}\), the weight function is polynomial in \(x=\cos \theta \), and

$$\begin{aligned} w(\cos \theta ;q^{1+k}| q) = 4(1-\cos ^2\theta ) \, (qe^{2i\theta }, qe^{-2i\theta };q)_k. \end{aligned}$$
(4.12)

We use the continuous q-ultraspherical polynomials (4.10) for any \(\beta \in {\mathbb {C}}\). In particular, for \(\beta =q^{-k}\) with \(k\in {\mathbb {N}}\) the sum in (4.10) is restricted to \(n-k\le r\le k\), so that \(C_n(x;q^{-k}|q)=0\) in case \(n-k>k\). With this convention, we can now describe the LDU-decomposition of the weight matrix, and state the inverse of the unipotent lower triangular matrix L in Theorem 4.15.
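For concreteness, and as a sanity check which is not taken from the paper, the polynomials (4.10) can be realised directly as polynomials in \(x=\cos \theta \) using \(e^{im\theta }+e^{-im\theta }=2T_m(x)\), with \(T_m\) the Chebyshev polynomials of the first kind. The following sketch (sympy assumed; the helper names are ours) checks that \(C_n(x;q|q)=U_n(x)\), that \(C_n(x;q^{-k}|q)\) vanishes for \(n>2k\), and the orthogonality (4.11) with the polynomial weight (4.12) for \(\beta =q^{1+k}\).

```python
# Sketch: the continuous q-ultraspherical polynomials (4.10) as polynomials in
# x = cos(theta), using e^{i m theta} + e^{-i m theta} = 2 T_m(x).
import sympy as sp

x = sp.symbols('x')

def qpoch(a, base, k):
    # q-shifted factorial (a; base)_k
    return sp.Mul(*[1 - a * base**j for j in range(k)])

def C(n, beta, base):
    # C_n(x; beta | base), see (4.10)
    return sp.expand(sum(
        qpoch(beta, base, r) * qpoch(beta, base, n - r)
        / (qpoch(base, base, r) * qpoch(base, base, n - r))
        * sp.chebyshevt(abs(n - 2 * r), x)
        for r in range(n + 1)))

q = sp.Rational(1, 2)          # any fixed 0 < q < 1 will do

# C_n(x; q | q) = U_n(x), the Chebyshev polynomials of the second kind
for n in range(5):
    assert sp.expand(C(n, q, q) - sp.chebyshevu(n, x)) == 0

# for beta = q^{-k} the polynomial vanishes as soon as n > 2k
assert C(3, 1 / q, q) == 0 and C(4, 1 / q, q) == 0

# orthogonality (4.11) for beta = q^{1+k}, with the polynomial weight (4.12):
# w(x; q^{1+k}|q)/sqrt(1-x^2) = 4 sqrt(1-x^2) prod_j (1 - 2 q^j (2x^2-1) + q^{2j})
k = 1
poly_part = 4 * sp.Mul(*[1 - 2 * q**j * (2 * x**2 - 1) + q**(2 * j)
                         for j in range(1, k + 1)])
for n, m in [(0, 1), (0, 2), (1, 2), (1, 3)]:
    val = sp.integrate(sp.expand(C(n, q**(1 + k), q) * C(m, q**(1 + k), q)
                                 * poly_part) * sp.sqrt(1 - x**2), (x, -1, 1))
    assert sp.simplify(val) == 0
print("checks for (4.10)-(4.12) passed")
```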

Theorem 4.15

The matrix-valued weight W as in Theorem 4.8 has the following LDU-decomposition:

$$\begin{aligned} W(x)&= L(x) T(x) L(x)^{t}, \quad x \in [-1, 1], \end{aligned}$$

where \(L :[-1, 1] \rightarrow M_{2\ell + 1}({\mathbb {C}})\) is the unipotent lower triangular matrix

$$\begin{aligned} L(x)_{m k}&= q^{m - k} \frac{ (q^2; q^2)_{m} (q^2; q^2)_{2k + 1} }{ (q^2; q^2)_{m + k + 1} (q^2; q^2)_{k} } C_{m - k}(x; q^{2k + 2} | q^2),&0 \le k \le m \le 2\ell , \end{aligned}$$

and \(T :[-1, 1] \rightarrow M_{2\ell + 1}({\mathbb {C}})\) is the diagonal matrix, \(0\le k\le 2\ell \),

$$\begin{aligned} T(x)_{k k}&= c_k(\ell ) \frac{w(x;q^{2k+2}|q^2)}{1-x^2}, \\ c_{k}(\ell )&= \frac{q^{-2\ell }}{4} \frac{ (1 - q^{4k + 2}) (q^2; q^2)_{2\ell + k + 1} (q^2; q^2)_{2\ell - k} (q^2; q^2)_{k}^4 }{ (q^2; q^2)_{2k + 1}^2 (q^2; q^2)_{2\ell }^2 }. \end{aligned}$$

The inverse of L is given by

$$\begin{aligned} \bigl ( L(x)\bigr )^{-1}_{k, n}&= q^{(2k + 1)(k - n)} \frac{ (q^2; q^2)_{k} (q^2; q^2)_{k + n} }{ (q^2; q^2)_{2k} (q^2; q^2)_{n} } C_{k - n}(x; q^{-2k} | q^2),&0 \le n \le k. \end{aligned}$$

Note that T, L and \(L^{-1}\) are matrix-valued polynomials, which is clear from the explicit expressions and (4.12). It is remarkable that, although the LDU-decomposition holds for arbitrary size \(2\ell +1\), the matrix L does not depend on the spin \(\ell \), and the dependence of T on the spin \(\ell \) enters only through the constants \(c_k(\ell )\).

We prove the first part of Theorem 4.15 in Sect. 6. The proof of Theorem 4.15 is analytic in nature, and a quantum group theoretic proof would be desirable. The statement on the inverse of L(x) is taken from [1], where the inverse of a lower triangular matrix whose entries are continuous q-ultraspherical polynomials is derived in a more general situation; the specific inverse \(L^{-1}\) is obtained in [1, Example 4.2]. The inverse of L in the limit case \(q\uparrow 1\) was derived by Cagliero and Koornwinder [6], and the proof in [1] is of a different nature from the proof presented in [6].
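The statement on \(L^{-1}\) can also be tested independently of \(\ell \): since L and the claimed inverse are unipotent lower triangular with entries independent of \(\ell \), the product of their leading principal blocks must be the identity. The following sketch, not part of the proof (sympy assumed, helper names ours, q fixed to a sample value), verifies this symbolically on the \(4\times 4\) principal block.

```python
# Sketch: L(x) of Theorem 4.15 times the claimed inverse equals the identity,
# checked on the leading 4x4 principal block.
import sympy as sp

x = sp.symbols('x')
q = sp.Rational(1, 2)
Q = q**2

def qpoch(a, base, k):
    return sp.Mul(*[1 - a * base**j for j in range(k)])

def C(n, beta, base):
    # continuous q-ultraspherical polynomial (4.10) as a polynomial in x
    return sp.expand(sum(
        qpoch(beta, base, r) * qpoch(beta, base, n - r)
        / (qpoch(base, base, r) * qpoch(base, base, n - r))
        * sp.chebyshevt(abs(n - 2 * r), x)
        for r in range(n + 1)))

N = 4
L = sp.zeros(N, N)
Linv = sp.zeros(N, N)
for m in range(N):
    for k in range(m + 1):
        L[m, k] = (q**(m - k) * qpoch(Q, Q, m) * qpoch(Q, Q, 2 * k + 1)
                   / (qpoch(Q, Q, m + k + 1) * qpoch(Q, Q, k))
                   * C(m - k, q**(2 * k + 2), Q))
for k in range(N):
    for n in range(k + 1):
        Linv[k, n] = (q**((2 * k + 1) * (k - n))
                      * qpoch(Q, Q, k) * qpoch(Q, Q, k + n)
                      / (qpoch(Q, Q, 2 * k) * qpoch(Q, Q, n))
                      * C(k - n, q**(-2 * k), Q))

assert (L * Linv).applyfunc(sp.expand) == sp.eye(N)
print("L(x) * L(x)^{-1} = I verified on the 4x4 principal block")
```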

Theorem 4.15 shows that \(\det (W(x))\) is the product of the diagonal entries of T(x). Since all coefficients \(c_k(\ell )>0\) and the weight functions are positive, we obtain Corollary 4.16.

Corollary 4.16

The matrix-valued weight W(x) is strictly positive definite for \(x \in [-1, 1]\). In particular, the matrix-valued weight \(W(x)\sqrt{1-x^2}\) of Theorem 4.8 is strictly positive definite for \(x \in (-1, 1)\).

Using the lower triangular matrix L of the LDU-decomposition of Theorem 4.15, we are able to decouple \(D_1\) of Theorem 4.13 after conjugation with \(L^t(x)\). We get a scalar q-difference equation for each of the matrix entries of \(L^t(x)P_n(x)\), which is solved by continuous q-ultraspherical polynomials up to a constant. Since we have yet another matrix-valued q-difference operator for \(L^t(x)P_n(x)\), namely \(L^tD_2(L^t)^{-1}\) with \(D_2\) as in Theorem 4.13, we get a relation for the constants involved. This relation turns out to be a three-term recurrence relation along columns, which can be identified with the three-term recurrence for q-Racah polynomials. Finally, use \((L^t(x))^{-1}\) to obtain an explicit expression for the matrix entries of the matrix-valued orthogonal polynomials of Theorem 4.17.

Before stating Theorem 4.17, recall that the q-Racah polynomials, see e.g. [15, §7.2], [19, §15.6], [26, §14.2], are defined by

$$\begin{aligned} R_n(\mu (x);\alpha , \beta , \gamma , \delta ; q) = {}_{4}\varphi _{3}\biggl (\genfrac{}{}{0.0pt}{}{ q^{-n}, \alpha \beta q^{n+1}, q^{-x}, \gamma \delta q^{x+1} }{ \alpha q, \beta \delta q, \gamma q };q,q\biggr ), \end{aligned}$$
(4.13)

where \(n \in \{0, 1, 2, \ldots , N\}\), \(N\in {\mathbb {N}}\), \(\mu (x) = q^{-x} + \gamma \delta q^{x+1}\), and one of the conditions \(\alpha q = q^{-N}\), \(\beta \delta q = q^{-N}\) or \(\gamma q = q^{-N}\) holds.
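As an illustration of (4.13), not taken from the paper, the following sketch implements the terminating \({}_4\varphi _3\) sum directly and checks the normalisation \(R_n(\mu (0);\alpha ,\beta ,\gamma ,\delta ;q)=1\), which holds since \((q^0;q)_k=0\) for \(k\ge 1\); the parameter values (with \(\gamma q=q^{-N}\)) are arbitrary sample values and the helper names are ours.

```python
# Sketch: the q-Racah polynomials (4.13) as a terminating 4phi3 series.
import sympy as sp

def qpoch(a, q, k):
    return sp.Mul(*[1 - a * q**j for j in range(k)])

def q_racah(n, xval, alpha, beta, gamma, delta, q):
    # R_n(mu(x); alpha, beta, gamma, delta; q), mu(x) = q^{-x} + gamma delta q^{x+1}
    return sum(
        qpoch(q**(-n), q, k) * qpoch(alpha * beta * q**(n + 1), q, k)
        * qpoch(q**(-xval), q, k) * qpoch(gamma * delta * q**(xval + 1), q, k)
        / (qpoch(alpha * q, q, k) * qpoch(beta * delta * q, q, k)
           * qpoch(gamma * q, q, k) * qpoch(q, q, k)) * q**k
        for k in range(n + 1))

q, N = sp.Rational(1, 3), 4
alpha, beta, delta = sp.Rational(1, 5), sp.Rational(1, 7), sp.Rational(1, 2)
gamma = q**(-N - 1)            # so that gamma * q = q^{-N}
for n in range(N + 1):
    assert q_racah(n, 0, alpha, beta, gamma, delta, q) == 1
print("R_n(mu(0)) = 1 for n = 0,...,N")
```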

Theorem 4.17

For \(0 \le i, j \le 2\ell \), we have

$$\begin{aligned} P_n(x)_{i, j}&= \sum _{k = i}^{2\ell } (-1)^k q^{n + (2k+1)(k-i) + j(2k+1) + 2k(2\ell + n + 1) - k^2} \\&\quad \times \,\frac{ (q^2; q^2)_{k} (q^2; q^2)_{k+i} }{ (q^2; q^2)_{2k} (q^2; q^2)_{i} } \frac{ (q^{-4\ell }, q^{-2j-2n}; q^2)_{k} }{ (q^2, q^{4\ell +4}; q^2)_{k} } \frac{ (q^2; q^2)_{n+j-k} }{ (q^{4k+4}; q^2)_{n+j-k} } \\&\quad \times \, R_{k}(\mu (j);1,1,q^{-2n-2j-2},q^{-4\ell -2};q^2) \\&\quad \times \, C_{k-i}(x;q^{-2k}|q^2) C_{n+j-k}(x;q^{2k+2}|q^2). \end{aligned}$$

Note that the left-hand side is a polynomial of degree at most n, whereas the right-hand side is of degree \(n+j-i\). In particular, for \(j>i\) the leading coefficient of the right-hand side of Theorem 4.17 has to vanish, leading to Corollary 4.18.

Corollary 4.18

With the notation of Theorem 4.17, we have for \(j>i\)

$$\begin{aligned}&\sum _{k = i}^{2\ell } (-1)^k q^{(2k+1)(k-i) + j(2k+1) + 2k(2\ell + n + 1) - k^2} \\&\quad \times \frac{ (q^2; q^2)_{k} (q^2; q^2)_{k+i} }{ (q^2; q^2)_{2k} } \frac{ (q^{-4\ell }, q^{-2j-2n}; q^2)_{k} }{ (q^2, q^{4\ell +4}; q^2)_{k} }\\&\quad \times \frac{ (q^{2+2k}; q^2)_{n+j-k} }{ (q^{4k+4}; q^2)_{n+j-k} } R_{k}(\mu (j);1,1,q^{-2n-2j-2},q^{-4\ell -2};q^2) = 0. \end{aligned}$$

By evaluating Corollary 4.7 at \(1\in {\mathcal {U}}_{q}(\mathfrak {g})\), we obtain Corollary 4.19, which is not clear from Theorem 4.17.

Corollary 4.19

For \(m\in \{0,\ldots , 2\ell \}\), we have \(\sum _{k=0}^{2\ell } (P_n\bigl (\frac{1}{2}(q+q^{-1})\bigr ))_{k,m}=1\).

4.1 Examples

We end this section by specialising the results for low-dimensional cases. The case \(\ell =0\) reduces to the Chebyshev polynomials \(U_n(x)\) of the second kind as observed following Theorem 4.13. This is proved in Proposition 5.3, which is required for the proofs of the general statements of Sect. 4.

4.1.1 Example: \(\ell =\frac{1}{2}\)

In this case we work with \(2 \times 2\) matrices. By Proposition 4.10, we know that the weight is block-diagonal, so that in this case we have an orthogonal decomposition to scalar-valued orthogonal polynomials. To be explicit, the matrix-valued weight W is given by

$$\begin{aligned} YW(x)Y^{t}&= \sqrt{1-x^2} \begin{pmatrix} 2x + (q + q^{-1}) &{} 0 \\ 0 &{} -2x + (q + q^{-1}) \end{pmatrix}, \\ \quad Y&= \frac{1}{2} \sqrt{2} \begin{pmatrix} 1 &{} 1 \\ -1 &{} 1 \end{pmatrix}, \quad x \in [-1, 1]. \end{aligned}$$

In this case, see Sect. 2, the polynomials \(P_n\) diagonalise since the leading coefficient is diagonalised by conjugation with the orthogonal matrix Y, and we write \(YP_n(x)Y^{t} = \mathrm{diag}(p^+_{n}(x), p^-_{n}(x))\). Then we can identify \(p^{\pm }_n\) by any of the results given in this section, and we do this using the three-term recurrence relation of Corollary 4.12. After conjugation, the three-term recurrence relation for \(p^+_{n}\) is given by

$$\begin{aligned} xp^+_{n}(x)&= \frac{1}{2q} \frac{(1 - q^{2n+6})}{(1 - q^{2n+4})} p^+_{n+1}(x) + \frac{q^{2n+1}}{2} \frac{(1 - q^{2})^2}{(1 - q^{2n+2})(1 - q^{2n+4})} p^+_{n}(x) \\&\quad + \,\frac{q}{2} \frac{(1 - q^{2n})}{(1 - q^{2n+2})} p^+_{n-1}(x), \end{aligned}$$

and the three-term recurrence relation for \(p^-_{n}\) is obtained by substituting \(x \mapsto -x\) into the three-term recurrence relation for \(p^+_{n}\). The explicit expressions for \(p^+_{n}\) and \(p^-_{n}\) are given in terms of continuous q-Jacobi polynomials \(P_n^{(\alpha , \beta )}(x|q)\) for \((\alpha , \beta )=(\frac{1}{2}, \frac{3}{2})\), see [4, §4], [26, §14.10]. From [15, Exercise 7.32(ii)] we have \(P_n^{(\frac{1}{2}, \frac{3}{2})}(-x|q^2) = (-1)^n q^{-n} P_n^{(\frac{3}{2}, \frac{1}{2})}(x|q^2)\). So we obtain

$$\begin{aligned} p^+_{n}(x)&= \frac{(1 - q^4)}{(1 - q^{2n+4})} \frac{(q^2,-q^3,-q^4;q^2)_n}{(q^{2n+6};q^2)_n} P_n^{(\frac{1}{2}, \frac{3}{2})}(x|q^2), \\ p^-_{n}(x)&= (-1)^n q^{-n} \frac{(1 - q^4)}{(1 - q^{2n+4})} \frac{(q^2, -q^3, -q^4; q^2)_n}{(q^{2n+6}; q^2)_n} P^{(\frac{3}{2}, \frac{1}{2})}_n(x|q^2), \end{aligned}$$

which is a q-analogue of [24, §8.2]. Moreover, writing down the conjugation of the q-difference operator \(D_1+D_2\) of Theorem 4.13 in the case \(\ell =\frac{1}{2}\) for the conjugated polynomials gives back the Askey–Wilson q-difference equation for the continuous q-Jacobi polynomials \(P_n^{(\alpha , \beta )}(x|q)\) with \((\alpha ,\beta )= (\frac{1}{2}, \frac{3}{2})\) and \((\frac{3}{2},\frac{1}{2})\). Working out the eigenvalue equation for \(D_1-D_2\) gives a simple q-analogue of the contiguous relations of [24, p. 5708].
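As a purely numerical illustration, not taken from the paper, the following sketch generates \(p^+_0,\ldots ,p^+_3\) from the three-term recurrence above, taking \(p^+_0=1\) and \(p^+_{-1}=0\) (the normalisation is irrelevant for orthogonality), and checks that they are orthogonal with respect to the scalar weight \((2x+q+q^{-1})\sqrt{1-x^2}\), the (1,1)-entry of \(YW(x)Y^t\); the sample value \(q=0.6\), the quadrature rule and the helper names are ours.

```python
# Numerical sketch: p^+_n from the three-term recurrence above, checked for
# orthogonality against the weight (2x + q + 1/q) * sqrt(1 - x^2).
import numpy as np
from numpy.polynomial import Polynomial as Poly

q = 0.6                         # sample value, 0 < q < 1
X = Poly([0.0, 1.0])            # the polynomial x

def step(n, pn, pnm1):
    # solve x p_n = a_n p_{n+1} + b_n p_n + c_n p_{n-1} for p_{n+1}
    a = (1 - q**(2*n + 6)) / (2*q*(1 - q**(2*n + 4)))
    b = q**(2*n + 1) * (1 - q**2)**2 / (2*(1 - q**(2*n + 2))*(1 - q**(2*n + 4)))
    c = q * (1 - q**(2*n)) / (2*(1 - q**(2*n + 2)))
    return (X * pn - b * pn - c * pnm1) / a

p = [Poly([1.0])]
p.append(step(0, p[0], Poly([0.0])))
for n in range(1, 3):
    p.append(step(n, p[n], p[n - 1]))

# Gauss-Chebyshev quadrature of the second kind: exact for the polynomial part
M = 40
th = np.arange(1, M + 1) * np.pi / (M + 1)
nodes, weights = np.cos(th), np.pi / (M + 1) * np.sin(th)**2

def inner(f, g):
    return np.sum(weights * f(nodes) * g(nodes) * (2*nodes + q + 1/q))

for n in range(4):
    for m in range(n):
        assert abs(inner(p[n], p[m])) < 1e-10
print("p^+_0, ..., p^+_3 are pairwise orthogonal")
```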

4.1.2 Example: \(\ell =1\)

For \(\ell = 1\) we work with \(3 \times 3\) matrices. By Proposition 4.10, we can block-diagonalise the matrix-valued weight:

$$\begin{aligned} YW(x)Y^{t}&= \sqrt{1-x^2} \begin{pmatrix} W_+(x) &{} 0 \\ 0 &{} W_-(x) \end{pmatrix}, \\ Y&= \frac{1}{2}\sqrt{2} \begin{pmatrix} 1 &{} 0 &{} 1 \\ 0 &{} \sqrt{2} &{} 0 \\ -1 &{} 0 &{} 1 \end{pmatrix}, \quad x \in [-1,1], \end{aligned}$$

where \(W_+\) is a \(2 \times 2\) matrix-valued weight and \(W_-\) is a scalar-valued weight. Explicitly,

$$\begin{aligned} W_+(x)&= \begin{pmatrix} 4x^2 + (q^2 + q^{-2}) &{} \displaystyle 2\sqrt{2}q^{-1}\frac{(1 + q^2 + q^4)}{(1 + q^2)}x \\ \displaystyle 2\sqrt{2}q^{-1}\frac{(1 + q^2 + q^4)}{(1 + q^2)}x &{} \displaystyle \frac{4q^2}{(1 + q^2)^2}x^2 + (q^2 + q^{-2}) \end{pmatrix}, \\ W_-(x)&= -4x^2 + (q^2 + 2 + q^{-2}). \end{aligned}$$

The polynomials diagonalise by \(YP_n(x)Y^t = \mathrm{diag}(P^+_{n}(x), p^{-}_{n}(x))\), where \(P^+_{n}\) is a \(2 \times 2\) matrix-valued polynomial and \(p^{-}_{n}\) is a scalar-valued polynomial. Conjugating Corollary 4.12, the three-term recurrence relations for \(P^+_{n}\) and \(p^{-}_{n}\) are

$$\begin{aligned} xP^+_{n}(x)&= P^+_{n+1}(x) A_n + P^+_{n}(x) B_n + P^+_{n-1}(x) C_n, \\ xp^{-}_{n}(x)&= \frac{1}{2q} \frac{(1 - q^{2n+8})}{(1 - q^{2n+6})} p^{-}_{n+1}(x) + \frac{q}{2} \frac{(1 - q^{2n})}{(1 - q^{2n+2})} p^{-}_{n-1}(x), \end{aligned}$$

where

$$\begin{aligned} A_n&= \frac{1}{2q} \begin{pmatrix} \displaystyle \frac{(1 - q^{2n+8})}{(1 - q^{2n+6})} &{} 0 \\ 0 &{} \displaystyle \frac{(1 - q^{2n+8})(1 - q^{2n+2})}{(1 - q^{2n+4})^2} \end{pmatrix}, \\ B_n&= \frac{1}{2}\sqrt{2} \begin{pmatrix} 0 &{} \displaystyle q^{2n+1} \frac{(1 - q^2)(1 - q^4)}{(1 - q^{2n+4})^2} \\ \displaystyle q^{2n+1} \frac{(1 - q^2)(1 - q^4)}{(1 - q^{2n+2})(1 - q^{2n+6})} &{} 0 \end{pmatrix}, \\ C_n&= \frac{q}{2} \begin{pmatrix} \displaystyle \frac{(1 - q^{2n})}{(1 - q^{2n+2})} &{} 0 \\ 0 &{} \displaystyle \frac{(1 - q^{2n})(1 - q^{2n+6})}{(1 - q^{2n+4})^2} \end{pmatrix}. \end{aligned}$$

The scalar-valued polynomial \(p^{-}_{n}\) can be identified with the continuous q-ultraspherical polynomials:

$$\begin{aligned} p^{-}_{n}(x) = q^n \frac{(1 - q^2)(1 - q^6)}{(1 - q^{2n+2})(1 - q^{2n+6})} C_n(x;q^4|q^2). \end{aligned}$$
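As a consistency check, not part of the paper, one can verify symbolically that this expression satisfies the three-term recurrence for \(p^-_n\) displayed earlier; the helper names in the following sketch are ours.

```python
# Sketch: p^-_n = q^n (1-q^2)(1-q^6) / ((1-q^{2n+2})(1-q^{2n+6})) C_n(x; q^4 | q^2)
# satisfies the three-term recurrence for p^-_n given above.
import sympy as sp

x, q = sp.symbols('x q', positive=True)

def qpoch(a, base, k):
    return sp.Mul(*[1 - a * base**j for j in range(k)])

def C(n, beta, base):
    return sum(qpoch(beta, base, r) * qpoch(beta, base, n - r)
               / (qpoch(base, base, r) * qpoch(base, base, n - r))
               * sp.chebyshevt(abs(n - 2 * r), x)
               for r in range(n + 1))

def pminus(n):
    return (q**n * (1 - q**2) * (1 - q**6)
            / ((1 - q**(2*n + 2)) * (1 - q**(2*n + 6))) * C(n, q**4, q**2))

for n in range(1, 4):
    lhs = x * pminus(n)
    rhs = ((1 - q**(2*n + 8)) / (2*q*(1 - q**(2*n + 6))) * pminus(n + 1)
           + q * (1 - q**(2*n)) / (2*(1 - q**(2*n + 2))) * pminus(n - 1))
    assert sp.simplify(lhs - rhs) == 0
print("recurrence for p^-_n verified for n = 1, 2, 3")
```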

The \(2 \times 2\) matrix-valued polynomials \(P^{+}_{n}\) are solutions to the matrix-valued q-difference equation \(DP^{+}_{n} = P^{+}_{n} \varLambda _n\). Here \(D= M(z) \eta _q + M(z^{-1}) \eta _{q^{-1}}\) is the restriction of the conjugated \(D_1 + D_2\) to the \(+1\)-eigenspace of J. The explicit expressions for M(z) and \(\varLambda _n\) are

$$\begin{aligned} M(z)&= \frac{q^{-1}}{(1 - q^2) (1 - z^2)} \begin{pmatrix} \displaystyle \frac{(1 + q^2)}{(1 - q^2)} (1 - q^4 z^2) &{} -\sqrt{2}q^2z \\ -\sqrt{2}q(1 + q^2)z &{} \displaystyle 2q\frac{(1 - q^4z^2)}{(1 - q^2)} \end{pmatrix}, \\ \varLambda _n&= \begin{pmatrix} \displaystyle q^{-n-1} \frac{(1 + q^2)(1 + q^{2n+4})}{(1 - q^2)^2} &{} 0 \\ 0 &{} \displaystyle 2q^{-n} \frac{(1 + q^{2n+4})}{(1 - q^2)^2} \end{pmatrix}. \end{aligned}$$

These results are q-analogues of some of the results given in [24, §8.3], see also [39]. Note moreover that \(W_+(0)\) is a multiple of the identity, so that the commutant of \(W_+\) equals the commuting algebra of Tirao and Zurrián [42], see also [22]. Since the commutant is trivial, the weight \(W_+\) is irreducible, which can also be checked directly.

5 Quantum group-related properties of spherical functions

In this section, we start with the proofs of the statements of Sect. 4 which can be obtained using the interpretation of matrix-valued spherical functions on \({\mathcal {U}}_{q}(\mathfrak {g})\).

5.1 Matrix-valued spherical functions on the quantum group

In this subsection, we study some of the properties of the matrix-valued spherical functions which follow from the quantum group theoretic interpretation. In particular, we derive Theorem 4.3 from Remark 4.2. The precise identification with the literature and the standard Clebsch–Gordan coefficients is made in Appendix 1, and we use the intertwiner and the Clebsch–Gordan coefficients as presented there.

We also need the matrix elements of the type 1 irreducible finite dimensional representations. Define

$$\begin{aligned} t^\ell _{m,n}:{\mathcal {U}}_{q}(\mathfrak {su}(2)) \rightarrow \mathbb {C}, \quad t^\ell _{m,n}(X) = \langle t^\ell (X) e^\ell _n, e^\ell _m\rangle , \quad n,m\in \{-\ell ,\ldots , \ell \}, \end{aligned}$$

where we take the inner product in the representation space \(\mathcal {H}^\ell \) for which the basis \(\{e^\ell _n\}_{n=-\ell }^\ell \) is orthonormal. Denoting

$$\begin{aligned} \begin{pmatrix} t^{1/2}_{-1/2,-1/2} &{}\quad t^{1/2}_{-1/2,1/2} \\ t^{1/2}_{1/2,-1/2} &{}\quad t^{1/2}_{1/2,1/2} \end{pmatrix} = \begin{pmatrix} \alpha &{} \beta \\ \gamma &{} \delta \end{pmatrix}, \end{aligned}$$

then \(\alpha \), \(\beta \), \(\gamma \), \(\delta \) generate a Hopf algebra, where the Hopf algebra structure is determined by duality of Hopf algebras. Moreover, it is a Hopf \(*\)-algebra with \(*\)-structure defined by \(\alpha ^*= \delta \), \(\beta ^*= -q\gamma \), which we denote by \(\mathcal {A}_q(SU(2))\). Then the Hopf \(*\)-algebra \(\mathcal {A}_q(SU(2))\) is in duality as Hopf \(*\)-algebras with \({\mathcal {U}}_{q}(\mathfrak {su}(2))\). In particular, the matrix elements \(t^\ell _{m,n}\in \mathcal {A}_q(SU(2))\) can be expressed in terms of the generators and span \(\mathcal {A}_q(SU(2))\). Moreover, the matrix elements \(t^\ell _{m,n}\in \mathcal {A}_q(SU(2))\) form a basis for the underlying vector space of \(\mathcal {A}_q(SU(2))\). The left action of \({\mathcal {U}}_{q}(\mathfrak {g})\) on \(\mathcal {A}_q(SU(2))\) is given by \((X\cdot \xi )(Y) = \xi (YX)\) for \(X,Y\in {\mathcal {U}}_{q}(\mathfrak {g})\) and \(\xi \in \mathcal {A}_q(SU(2))\). Similarly, the right action is given by \((\xi \cdot X)(Y) = \xi (XY)\) for \(X,Y\in {\mathcal {U}}_{q}(\mathfrak {g})\) and \(\xi \in \mathcal {A}_q(SU(2))\). A calculation gives \(k^{1/2}\cdot t^\ell _{m,n}= q^{-n}t^\ell _{m,n}\) and \(t^\ell _{m,n}\cdot k^{1/2}= q^{-m}t^\ell _{m,n}\) so that \(\alpha \cdot k^{1/2} = q^{1/2}\alpha \), \(\beta \cdot k^{1/2} = q^{-1/2}\beta \), \(\gamma \cdot k^{1/2} = q^{1/2}\gamma \), \(\delta \cdot k^{1/2} = q^{-1/2}\delta \).

Since \(k^{1/2}\) and its powers are group-like elements of \({\mathcal {U}}_{q}(\mathfrak {g})\), it follows that the left and right action of \(k^{1/2}\) and its powers are algebra homomorphisms. See e.g. [9, 11, 20, 21], and references given there.

In the same way, we view

$$\begin{aligned} t^{\ell _1}_{m_1,n_1}\otimes t^{\ell _2}_{m_2,n_2} :{\mathcal {U}}_{q}(\mathfrak {g}) \rightarrow \mathbb {C} \end{aligned}$$

where the functions are taken with respect to the Hopf algebra tensor product \({\mathcal {U}}_{q}(\mathfrak {g}) = {\mathcal {U}}_{q}(\mathfrak {su}(2))\otimes {\mathcal {U}}_{q}(\mathfrak {su}(2))\). In particular, for \(\lambda , \mu \in \frac{1}{2} {\mathbb {Z}}\) we find the expression \(t^{\ell _1}_{m_1,n_1}\otimes t^{\ell _2}_{m_2,n_2}(K_1^\lambda K_2^\mu ) = \delta _{m_1,n_1}\delta _{m_2,n_2} q^{-\lambda m_1 -\mu m_2}\). Similarly, the Hopf \(*\)-algebra spanned by all the matrix elements \(t^{\ell _1}_{m_1,n_1}\otimes t^{\ell _2}_{m_2,n_2}\) is isomorphic to \(\mathcal {A}_q(SU(2))\otimes \mathcal {A}_q(SU(2))\). We set \(\mathcal {A}_q(G)=\mathcal {A}_q(SU(2))\otimes \mathcal {A}_q(SU(2))\) for \(G=SU(2)\times SU(2)\).

Define \(A = K_1^{1/2} K_2^{1/2}\) and let \(\mathcal {A}\) be the commutative subalgebra of \({\mathcal {U}}_{q}(\mathfrak {g})\) generated by A and \(A^{-1}\). Recall the spherical function \(\varPhi ^{\ell }_{\ell _1, \ell _2}\) from Definition 4.4, and recall the transformation property (4.5).

Definition 5.1

The linear map \(\varPhi :{\mathcal {U}}_{q}(\mathfrak {g})\rightarrow \mathrm{End}(\mathcal {H}^\ell )\) is a spherical function of type \(\ell \) if

$$\begin{aligned} \varPhi (XZY) = t^\ell (X) \varPhi (Z) t^\ell (Y), \quad \forall \, X,Y\in {\mathcal {B}}, \ \forall \, Z\in {\mathcal {U}}_{q}(\mathfrak {g}). \end{aligned}$$

So the spherical function \(\varPhi ^{\ell }_{\ell _1, \ell _2}\) is a spherical function of type \(\ell \) by (4.5).

Proposition 5.2

Fix \(\ell \in \frac{1}{2} {\mathbb {N}}\), and assume \((\ell _1,\ell _2)\) satisfies (4.3).

  1. (i)

    Write \(\varPhi ^{\ell }_{\ell _1, \ell _2} = \sum _{m, n=-\ell }^\ell (\varPhi ^{\ell }_{\ell _1, \ell _2})_{m, n} \otimes E^\ell _{m, n}\), where \(E^\ell _{m, n}\) are the elementary matrices, then

    $$\begin{aligned} \left( \varPhi ^{\ell }_{\ell _1, \ell _2} \right) _{m, n}&= \sum _{m_1, n_1 = -\ell _1}^{\ell _1} \sum _{m_2, n_2 = -\ell _2}^{\ell _2} C^{\ell _1, \ell _2, \ell }_{m_1, m_2, m} C^{\ell _1, \ell _2, \ell }_{n_1, n_2, n}\, t^{\ell _1}_{m_1, n_1} {\otimes }t^{\ell _2}_{m_2, n_2}. \end{aligned}$$
  2. (ii)

    A spherical function \(\varPhi \) of type \(\ell \) restricted to \(\mathcal {A}\) is diagonal with respect to the basis \(\{e^\ell _p\}_{p=-\ell }^\ell \). Moreover, for each \(\lambda \in {\mathbb {Z}}\),

    $$\begin{aligned} \left( \varPhi ^{\ell }_{\ell _1, \ell _2}(A^{\lambda }) \right) _{m, n}&= \delta _{m, n} \sum _{i = -\ell _1}^{\ell _1} \sum _{j = -\ell _2}^{\ell _2} \left( C^{\ell _1, \ell _2, \ell }_{i, j, n} \right) ^2 q^{-\lambda (i + j)}, \end{aligned}$$

    so that \(\varPhi ^{\ell }_{\ell _1, \ell _2}(1)\) is the identity.

  3. (iii)

    Assume that \(\varPhi \) is a spherical function of type \(\ell \) and that

    $$\begin{aligned} \varPhi = \sum _{m, n=-\ell }^\ell \varPhi _{m, n} \otimes E^\ell _{m, n}:{\mathcal {U}}_{q}(\mathfrak {g})\rightarrow \mathrm{End}(\mathcal {H}^\ell ), \end{aligned}$$

    with all linear maps \(\varPhi _{m, n}\) on \({\mathcal {U}}_{q}(\mathfrak {g})\) in the linear span of the matrix elements \(t^{\ell _1}_{i_1,j_1}\otimes t^{\ell _2}_{i_2,j_2}\), \(-\ell _1\le i_1,j_1\le \ell _1\), \(-\ell _2\le i_2,j_2\le \ell _2\), then \(\varPhi \) is a multiple of \(\varPhi ^{\ell }_{\ell _1, \ell _2}\).

Proof

Note \((\varPhi ^{\ell }_{\ell _1, \ell _2}(X))_{m,n} = \langle \varPhi ^{\ell }_{\ell _1, \ell _2}(X) e^{\ell }_n, e^{\ell }_m \rangle \) for \(X\in {\mathcal {U}}_{q}(\mathfrak {g})\), therefore we first compute \(\varPhi ^{\ell }_{\ell _1, \ell _2}(X) e^{\ell }_n\). For \(X \in {\mathcal {U}}_{q}(\mathfrak {g})\), we have

This proves (i).

To obtain (ii), write \(\varPhi = \sum _{m, n=-\ell }^\ell \varPhi _{m, n} \otimes E^\ell _{m, n}\), and observe

$$\begin{aligned} t^\ell (K^\mu ) \varPhi (A^\lambda ) = \varPhi (K^\mu A^\lambda ) = \varPhi (A^\lambda K^\mu ) = \varPhi (A^\lambda )t^\ell (K^\mu ), \end{aligned}$$

for all \(\lambda \in {\mathbb {Z}}\), \(\mu \in \frac{1}{2} {\mathbb {Z}}\), which implies \(q^{-m\mu } \varPhi _{m,n}(A^\lambda )= q^{-n\mu } \varPhi _{m,n}(A^\lambda )\), since \(t^\ell (K^\mu )\) is diagonal. This gives \(\varPhi _{m,n}(A^\lambda )=0\) for \(m\not =n\). Next pair \(\varPhi ^{\ell }_{\ell _1, \ell _2}\) with \(A^{\lambda }\) using (i), the observation made before Proposition 5.2, and the fact that \(C^{\ell _1, \ell _2, \ell }_{m_1, m_2, m} =0\) unless \(m_1-m_2=m\). Finally, take \(\lambda =0\) and use (8.4).

Finally, for (iii) note that for \(X\in \mathcal {B}\) we have \(\varPhi (X)= t^\ell (X) \varPhi (1) = \varPhi (1) t^\ell (X)\). Since \(t^\ell :\mathcal {B} \rightarrow \text {End}(\mathcal {H}^\ell )\) is an irreducible unitary representation, Schur’s Lemma implies that \(\varPhi (1)=cI\) is a multiple of the identity. Theorem 4.3 implies that for \(X\in \mathcal {B}\)

$$\begin{aligned} t^\ell (X)_{m,n} = \sum _{m_1, n_1 = -\ell _1}^{\ell _1} \sum _{m_2, n_2 = -\ell _2}^{\ell _2} C^{\ell _1, \ell _2, \ell }_{m_1, m_2, m} C^{\ell _1, \ell _2, \ell }_{n_1, n_2, n} \left( t^{\ell _1}_{m_1, n_1} \otimes t^{\ell _2}_{m_2, n_2}\right) (X) \end{aligned}$$

and, by the multiplicity free statement in Theorem 4.3, this is the only (up to a constant) possible linear combination of the matrix elements \(t^{\ell _1}_{m_1, n_1} \otimes t^{\ell _2}_{m_2, n_2}\) for fixed \((\ell _1,\ell _2)\) which has this property. Hence, (iii) follows. \(\square \)

The special case \(\varPhi ^0_{1/2,1/2}:{\mathcal {U}}_{q}(\mathfrak {g})\rightarrow {\mathbb {C}}\) can now be calculated explicitly using the Clebsch–Gordan coefficients \(C^{1/2,1/2,0}_{m,-m,0}\) from (8.5). Explicitly,

$$\begin{aligned} \varphi= & {} \frac{1}{2} (q^{-1}+q) \varPhi ^{0}_{1/2, 1/2} \nonumber \\= & {} \frac{1}{2} q t^{\frac{1}{2}}_{-\frac{1}{2},-\frac{1}{2}}\otimes t^{\frac{1}{2}}_{-\frac{1}{2},-\frac{1}{2}} - \frac{1}{2} t^{\frac{1}{2}}_{-\frac{1}{2},\frac{1}{2}}\otimes t^{\frac{1}{2}}_{-\frac{1}{2},\frac{1}{2}}\nonumber \\&- \frac{1}{2} t^{\frac{1}{2}}_{\frac{1}{2},-\frac{1}{2}}\otimes t^{\frac{1}{2}}_{\frac{1}{2},-\frac{1}{2}} + \frac{1}{2} q^{-1} t^{\frac{1}{2}}_{\frac{1}{2},\frac{1}{2}}\otimes t^{\frac{1}{2}}_{\frac{1}{2},\frac{1}{2}}\nonumber \\= & {} \frac{1}{2} q \alpha \otimes \alpha - \frac{1}{2}\beta \otimes \beta - \frac{1}{2}\gamma \otimes \gamma + \frac{1}{2} q^{-1} \delta \otimes \delta . \end{aligned}$$
(5.1)

This element is not self-adjoint in \(\mathcal {A}_q(SU(2))\otimes \mathcal {A}_q(SU(2))\). Recall the right action of \({\mathcal {U}}_{q}(\mathfrak {g})\) on the map \(\varPhi :{\mathcal {U}}_{q}(\mathfrak {g}) \rightarrow \text {End}(\mathcal {H}^\ell )\), including the case \(\ell =0\), by \((\varPhi \cdot X)(Y) = \varPhi (XY)\) for all \(X, Y \in {\mathcal {U}}_{q}(\mathfrak {g})\). This is analogous to the construction discussed at the beginning of this subsection. Then

$$\begin{aligned} \psi = \varphi \cdot A^{-1} = \frac{1}{2}\alpha \otimes \alpha - \frac{1}{2} q^{-1} \beta \otimes \beta - \frac{1}{2} q \gamma \otimes \gamma +\frac{1}{2}\delta \otimes \delta =\psi ^*\end{aligned}$$
(5.2)

is self-adjoint for the \(*\)-structure of \(\mathcal {A}_q(SU(2))\otimes \mathcal {A}_q(SU(2))\). Then by construction

$$\begin{aligned} \psi \left( \left( AXA^{-1}\right) YZ\right) = \varepsilon (X) \psi (Y) \varepsilon (Z), \qquad \forall \, X,Z\in {\mathcal {B}}, \ Y\in {\mathcal {U}}_{q}(\mathfrak {g}). \end{aligned}$$
(5.3)

5.2 The recurrence relation for spherical functions of type \(\ell \)

The proof of Theorem 4.6 in the group case can be found in [24, Proposition 3.1], where the constants \(A_{i,j}\) are explicitly given in terms of Clebsch–Gordan coefficients. This proof can also be applied in the present case, again giving the coefficients \(A_{i,j}\) explicitly in terms of Clebsch–Gordan coefficients. A more general set-up can be found in [44, Proposition 3.3.17]. The approach given here is related to [24, Proposition 3.1], except that it differs in the way \(A_{1/2,1/2}\not =0\) is established.

Proof of Theorem 4.6

As \({\mathcal {U}}_{q}(\mathfrak {g})\)-representations, the tensor product decomposition

$$\begin{aligned} t^{1/2,1/2}\otimes t^{\ell _1,\ell _2} \cong \sum _{i,j=\pm 1/2} t^{\ell _1+i,\ell _2+j} \end{aligned}$$

follows from the standard tensor product decomposition for \({\mathcal {U}}_{q}(\mathfrak {su}(2))\), see (8.1). It follows that

$$\begin{aligned} \varphi \varPhi ^{\ell }_{\ell _1, \ell _2} = \sum _{i,j=\pm 1/2} \varPsi ^{i,j}, \quad \varPsi ^{i,j} = \sum _{m,n=-\ell }^\ell \varPsi ^{i,j}_{m,n} \otimes E^\ell _{m,n}, \end{aligned}$$

where \(\varPsi ^{i,j}_{m,n}\) is in the span of the matrix elements \(t^{\ell _1+i}_{r_1,s_1}\otimes t^{\ell _2+j}_{r_2,s_2}\). Note that (4.7), together with a similar calculation for multiplication by an element from \(\mathcal {B}\) from the other side, shows that \(\varphi \varPhi ^{\ell }_{\ell _1, \ell _2}\) has the required transformation behaviour (4.5) for the action of \(\mathcal {B}\) from the left and the right. Since the matrix elements \(t^{\ell _1}_{r_1,s_1}\otimes t^{\ell _2}_{r_2,s_2}\) form a basis for \(\mathcal {A}_q(G)\), it follows that each \(\varPsi ^{i,j}\) satisfies (4.5) so that by Proposition 5.2(iii) \(\varPsi ^{i,j}= A_{i,j} \, \varPhi ^{\ell }_{\ell _1+i, \ell _2+j}\). Here \(\varPsi ^{i,j}=0\) in case \((\ell _1+i,\ell _2+j)\) does not satisfy the conditions (4.3).

It remains to show that \(A_{1/2,1/2}\not =0\). In order to do so, we evaluate the identity of Theorem 4.6 at a suitable element of \({\mathcal {U}}_{q}(\mathfrak {g})\). For the \({\mathcal {U}}_{q}(\mathfrak {su}(2))\)-representations in (3.3), it is immediate that \(t^\ell (e^k)=0\) for \(k>2\ell \) and that \(t^\ell _{r,s}(e^{2\ell })=0\) except for the case \((r,s)=(-\ell ,\ell )\) and then \(t^\ell _{-\ell ,\ell }(e^{2\ell })\not =0\). Extending to \({\mathcal {U}}_{q}(\mathfrak {g})\), we find that \(\varPhi ^{\ell }_{\ell _1,\ell _2} (E_1^{k_1}E_2^{k_2})=0\) if \(k_1>2\ell _1\) or \(k_2>2\ell _2\), and

$$\begin{aligned} \bigl ( \varPhi ^{\ell }_{\ell _1,\ell _2}\bigr )_{m,n} (E_1^{2\ell _1}E_2^{2\ell _2})= & {} \delta _{m,-\ell _1+\ell _2}\delta _{n,\ell _1-\ell _2} C^{\ell _1, \ell _2, \ell }_{-\ell _1, -\ell _2, m} C^{\ell _1, \ell _2, \ell }_{\ell _1, \ell _2, n} \nonumber \\&\times \, t^{\ell _1}_{-\ell _1,\ell _1}(E_1^{2\ell _1}) t^{\ell _2}_{-\ell _2,\ell _2}(E_2^{2\ell _2}), \end{aligned}$$
(5.4)

where the right-hand side is non-zero in case \(m=-\ell _1+\ell _2\), \(n=\ell _1-\ell _2\).

So if we evaluate the identity of Theorem 4.6 at \(E_1^{2\ell _1+1}E_2^{2\ell _2+1}\), it follows that only the term with \((i,j)=(1/2,1/2)\) on the right-hand side of Theorem 4.6 is non-zero, and the specific matrix element is given by (5.4) with \((\ell _1,\ell _2)\) replaced by \((\ell _1+\frac{1}{2},\ell _2+\frac{1}{2})\). It suffices to check that the left-hand side of Theorem 4.6 is non-zero when evaluated at \(E_1^{2\ell _1+1}E_2^{2\ell _2+1}\). By (4.6) we need to calculate the comultiplication on \(E_1^{2\ell _1+1}E_2^{2\ell _2+1}\). Using the non-commutative q-binomial theorem [15, Exercise 1.35] twice, we get

$$\begin{aligned} \Delta (E_1^{2\ell _1+1}E_2^{2\ell _2+1})= & {} \Delta (E_1)^{2\ell _1+1}\Delta (E_2)^{2\ell _2+1} \\= & {} \sum _{k_1=0}^{2\ell _1+1} \sum _{k_2=0}^{2\ell _2+1} \left[ \genfrac{}{}{0.0pt}{}{2\ell _1+1}{k_1}\right] _{q^2} \left[ \genfrac{}{}{0.0pt}{}{2\ell _2+1}{k_2}\right] _{q^2} \\&\times \, E_1^{k_1}E_2^{k_2} K_1^{2\ell _1+1-k_1}K_2^{2\ell _2+1-k_2} \otimes E_1^{2\ell _1+1-k_1}E_2^{2\ell _2+1-k_2}. \end{aligned}$$

From (5.1), we find that \(\varphi (E_1^{k_1}E_2^{k_2} K_1^{2\ell _1+1-k_1}K_2^{2\ell _2+1-k_2})=0\) unless \(k_1=k_2=1\), and in that case the term \(\beta \otimes \beta \) of (5.1) gives a non-zero contribution, and the other terms give zero. So we find

$$\begin{aligned} \bigl ( \varphi \varPhi ^{\ell }_{\ell _1,\ell _2}\bigr ) (E_1^{2\ell _1+1}E_2^{2\ell _2+1})= & {} \left[ \genfrac{}{}{0.0pt}{}{2\ell _1+1}{1}\right] _{q^2} \left[ \genfrac{}{}{0.0pt}{}{2\ell _2+1}{1}\right] _{q^2} \\&\times \,\varphi \left( E_1E_2 K_1^{2\ell _1}K_2^{2\ell _2}\right) \left( \varPhi ^{\ell }_{\ell _1,\ell _2}\right) \left( E_1^{2\ell _1}E_2^{2\ell _2}\right) , \end{aligned}$$

and this is non-zero for the same matrix element \((m,n) = (-\ell _1+\ell _2,\ell _1-\ell _2)\) by (5.4). \(\square \)

Note that we can derive the explicit value of \(A_{1/2,1/2}\) from the proof of Theorem 4.6 by keeping track of the constants involved. However, we do not need the explicit value except in the case \(\ell =0\).

In the special case \(\ell =0\), the recurrence of Theorem 4.6 has two terms and we obtain

$$\begin{aligned} \begin{aligned}&\varphi \varPhi ^0_{\ell _1,\ell _1} = A_{1/2,1/2} \, \varPhi ^0_{\ell _1+1/2,\ell _1+1/2} + A_{-1/2,-1/2} \, \varPhi ^0_{\ell _1-1/2,\ell _1-1/2}, \\&A_{1/2,1/2} = \frac{1}{2} q^{-1}\frac{1-q^{4\ell _1+4}}{1-q^{4\ell _1+2}}, \quad A_{-1/2,-1/2} = \frac{1}{2} q\frac{1-q^{4\ell _1}}{1-q^{4\ell _1+2}}. \end{aligned} \end{aligned}$$
(5.5)

The value of \(A_{1/2,1/2}\) can be obtained by evaluating at the group-like element \(A^\lambda \), using Proposition 5.2(ii) and \(2 \varphi (A^\lambda ) = q^{\lambda +1}+q^{-1-\lambda }\) and comparing leading coefficients of the Laurent polynomials in \(q^\lambda \). This gives \(\frac{1}{2} q^{-1} (C^{\ell _1,\ell _1,0}_{-\ell _1,-\ell _1,0})^2= A_{1/2,1/2} (C^{\ell _1+1/2,\ell _1+1/2,0}_{-\ell _1-1/2,-\ell _1-1/2,0})^2\), and the value for \(A_{1/2,1/2}\) follows from (8.6). Having \(A_{1/2,1/2}\), we evaluate at 1, i.e. the case \(\lambda = 0\), which gives \(A_{1/2,1/2}+A_{-1/2,-1/2}=\frac{1}{2}(q+q^{-1})\) by Proposition 5.2(ii), from which \(A_{-1/2,-1/2}\) follows.
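The last consistency relation can be checked directly; in the following sketch (sympy assumed, not part of the paper) we write t for \(q^{4\ell _1}\), so that the identity becomes an identity of rational functions.

```python
# Sketch: A_{1/2,1/2} + A_{-1/2,-1/2} = (q + 1/q)/2 for the values in (5.5),
# with t standing for q^{4 ell_1}.
import sympy as sp

q, t = sp.symbols('q t', positive=True)
A_pp = (1 - t * q**4) / (2 * q * (1 - t * q**2))
A_mm = q * (1 - t) / (2 * (1 - t * q**2))
assert sp.simplify(A_pp + A_mm - (q + 1/q) / 2) == 0
```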

Proposition 5.3

For \(n\in {\mathbb {N}}\), we have \(\displaystyle {\varPhi ^0_{\frac{1}{2}n,\frac{1}{2}n} = q^{n}\frac{1-q^2}{1-q^{2n+2}} U_n(\varphi )}\).

Recall that the Chebyshev polynomials of the second kind are orthogonal polynomials:

$$\begin{aligned} U_n(\cos \theta )= & {} \frac{\sin ((n+1)\theta )}{\sin \theta }, \quad \int _{-1}^1 U_n(x) U_m(x) \sqrt{1-x^2}\, dx = \delta _{m,n} \frac{1}{2} \pi , \nonumber \\ x\, U_n(x)= & {} \frac{1}{2} U_{n+1}(x) + \frac{1}{2} U_{n-1}(x), \quad U_{-1}(x)=0, \ U_0(x)=1, \end{aligned}$$
(5.6)

see e.g. [15, 19, 26].

Proof

From (5.5), it follows that \(\varPhi ^0_{\frac{1}{2}n,\frac{1}{2}n} = p_n(\varphi )\) for a polynomial \(p_n\) of degree n satisfying

$$\begin{aligned} x \, p_n(x) = \frac{1}{2} q^{-1}\frac{1-q^{2n+4}}{1-q^{2n+2}} p_{n+1}(x) +\frac{1}{2} q\frac{1-q^{2n}}{1-q^{2n+2}} p_{n-1}(x), \end{aligned}$$

with initial conditions \(p_0(x)=1\), \(p_1(x)=\frac{2}{q+q^{-1}}x\). Set

$$\begin{aligned} p_n(x) = q^{n}\frac{1-q^2}{1-q^{2n+2}}r_n(x), \end{aligned}$$

then \(r_0(x)=1\), \(r_1(x)=2x\) and \(2x\, r_n(x) = r_{n+1}(x) + r_{n-1}(x)\). So \(r_n(x) = U_n(x)\). \(\square \)
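The substitution in the proof can also be checked directly; the following sketch (sympy assumed, not part of the proof) verifies for small n that \(p_n = q^{n}\frac{1-q^2}{1-q^{2n+2}} U_n\) satisfies the three-term recurrence above.

```python
# Sketch: p_n = q^n (1-q^2)/(1-q^{2n+2}) U_n satisfies the recurrence in the
# proof of Proposition 5.3.
import sympy as sp

x, q = sp.symbols('x q', positive=True)

def p(n):
    return q**n * (1 - q**2) / (1 - q**(2*n + 2)) * sp.chebyshevu(n, x)

for n in range(1, 5):
    lhs = x * p(n)
    rhs = ((1 - q**(2*n + 4)) / (2*q*(1 - q**(2*n + 2))) * p(n + 1)
           + q * (1 - q**(2*n)) / (2*(1 - q**(2*n + 2))) * p(n - 1))
    assert sp.simplify(lhs - rhs) == 0
print("recurrence verified for n = 1,...,4")
```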

5.3 Orthogonality relations

In this subsection, we prove Theorem 4.8 from the quantum group theoretic interpretation up to the calculation of certain explicit coefficients in the expansion of the entries of the weight.

Recall the Haar functional on \(\mathcal {A}_q(SU(2))\). It is the unique left and right invariant positive functional \(h_0:\mathcal {A}_q(SU(2)) \rightarrow \mathbb {C}\) normalised by \(h_0(1)=1\), see e.g. [9], [20, §4.3.2], [21, 48]. The Schur orthogonality relations state

$$\begin{aligned} h_0\bigl ( t^{\ell _1}_{m,n} (t^{\ell _2}_{r,s})^*\bigr ) = \delta _{\ell _1, \ell _2} \delta _{m,r } \delta _{n,s} q^{2(\ell _1 +n )} \frac{(1 - q^2)}{(1 - q^{4\ell _1 + 2})}. \end{aligned}$$
(5.7)

We identify \(\mathcal {A}_q(G)\) with \(\mathcal {A}_q(SU(2))\otimes \mathcal {A}_q(SU(2))\). Then the functional \(h=h_0\otimes h_0\) is the Haar functional on \(\mathcal {A}_q(G)\). We can identify the analogue of the algebra of bi-K-invariant polynomials on G as the algebra generated by the self-adjoint element \(\psi \), and give the analogue of the restriction of the invariant integration in Lemma 5.4.

Lemma 5.4

A functional \(\tau :{\mathcal {U}}_{q}(\mathfrak {g})\rightarrow {\mathbb {C}}\) that satisfies the transformation behaviour \(\tau ((AXA^{-1})YZ) = \varepsilon (X) \tau (Y) \varepsilon (Z)\) for all \(X, Z \in \mathcal {B}\) and all \(Y \in {\mathcal {U}}_{q}(\mathfrak {g})\) is a polynomial in \(\psi \) as in (5.2). Moreover, the Haar functional on the \(*\)-algebra \({\mathbb {C}}[\psi ]\subset \mathcal {A}_q(G)\) is given by

$$\begin{aligned} h(p(\psi )) = \frac{2}{\pi } \int _{-1}^1 p(x) \, \sqrt{1-x^2}\, dx. \end{aligned}$$

Proof

From Proposition 5.2(iii) and (5.3), we see that any functional on \({\mathcal {U}}_{q}(\mathfrak {g})\) satisfying the invariance property is a polynomial in \(\psi \), since the space of such functionals is spanned by the \(\varPhi ^0_{\frac{1}{2}n, \frac{1}{2}n}\cdot A^{-1}\), \(n\in \mathbb {N}\). By Proposition 5.3, \(\varPhi ^0_{\frac{1}{2} n,\frac{1}{2} n}\cdot A^{-1}\) is a multiple of \(U_n(\psi )\), since the right action of \(A^{-1}\) is an algebra homomorphism. The Schur orthogonality relations give

$$\begin{aligned} h\left( \left( \varPhi ^0_{\frac{1}{2} n,\frac{1}{2} n}\cdot A^{-1}\right) \left( \varPhi ^0_{\frac{1}{2} m,\frac{1}{2} m}\cdot A^{-1}\right) ^*\right) =0,\quad \text {for }m\not = n, \end{aligned}$$

and since the argument of h is polynomial in \(\psi \), we see that it has to correspond to the orthogonality relations (5.6) for the Chebyshev polynomials. So we find the expression for the Haar functional on the \(*\)-algebra generated by \(\psi \). \(\square \)

Theorem 5.5

Assume \(\varPsi , \varPhi :{\mathcal {U}}_{q}(\mathfrak {g}) \rightarrow \mathrm{End}(\mathcal {H}^{\ell })\) are spherical functions of type \(\ell \), see Definition 5.1. Then the map

$$\begin{aligned} \tau :{\mathcal {U}}_{q}(\mathfrak {g}) \rightarrow {\mathbb {C}}, \qquad X \mapsto \mathrm{tr}\bigl ( (\varPsi \cdot A^{-1}) (\varPhi \cdot A^{-1})^*\bigr ) (X), \end{aligned}$$

satisfies \(\tau ((AXA^{-1})YZ) = \varepsilon (X) \tau (Y) \varepsilon (Z)\) for all \(X, Z \in \mathcal {B}\) and all \(Y \in {\mathcal {U}}_{q}(\mathfrak {g})\).

In particular, any such trace is a polynomial in the generator \(\psi \) by Lemma 5.4.

Corollary 5.6

Fix \(\ell \in \frac{1}{2}{\mathbb {N}}\), then for \(k,p\in \{0,1,\ldots , 2\ell \}\),

$$\begin{aligned} W(\psi )_{k,p} := \mathrm{tr}\bigl ( (\varPhi ^\ell _{\xi (0,k)}\cdot A^{-1}) (\varPhi ^\ell _{\xi (0,p)}\cdot A^{-1})^*\bigr ) = \sum _{r=0}^{p\wedge k} \alpha _r(k,p) \, U_{k+p-2r}(\psi ), \end{aligned}$$

with \(\bigl ( W(\psi )_{k,p}\bigr )^*=W(\psi )_{p,k}\).

Proof of Theorem 5.5

Write \(\varPsi = \sum _{m,n=-\ell }^\ell \varPsi _{m, n}\otimes E^\ell _{m,n}\), \(\varPhi = \sum _{m,n=-\ell }^\ell \varPhi _{m, n}\otimes E^\ell _{m,n}\), with \(\varPsi _{m, n}\) and \(\varPhi _{m, n}\) linear functionals on \({\mathcal {U}}_{q}(\mathfrak {g})\) so that

$$\begin{aligned} \begin{array}{ll} &{} \mathrm{tr}\bigl ((\varPsi \cdot A^{-1}) (\varPhi \cdot A^{-1})^*\bigr ) = \sum _{m, n=-\ell }^\ell (\varPsi _{m, n}\cdot A^{-1}) (\varPhi _{m, n}\cdot A^{-1})^*\\ &{}\quad \Longrightarrow \quad \tau (Y) = \sum _{m, n=-\ell }^\ell \sum _{(Y)} \varPsi _{m, n}(A^{-1}Y_{(1)}) \overline{\varPhi _{m, n}(A^{-1} S(Y_{(2)})^*)}, \end{array} \end{aligned}$$
(5.8)

using the standard notation \(\Delta (Y)= \sum _{(Y)} Y_{(1)}\otimes Y_{(2)}\) and \(\xi ^*(X)= \overline{\xi (S(X)^*)}\) for \(\xi :{\mathcal {U}}_{q}(\mathfrak {g})\rightarrow \mathbb {C}\) and \(X\in {\mathcal {U}}_{q}(\mathfrak {g})\).

Let \(Z \in \mathcal {B}\) and \(Y \in {\mathcal {U}}_{q}(\mathfrak {g})\), then we find

$$\begin{aligned} \tau (YZ)&= \sum _{m, n=-\ell }^\ell \sum _{(Y), (Z)} \varPsi _{m, n}(A^{-1}Y_{(1)} Z_{(1)})\, \overline{\varPhi _{m, n} (A^{-1}S(Y_{(2)})^*S(Z_{(2)})^*)}. \end{aligned}$$

Since \(\mathcal {B}\) is a right coideal, we can assume that \(Z_{(1)}\in \mathcal {B}\), so, by the transformation property (4.5), \(\varPsi _{m, n}(A^{-1} Y_{(1)} Z_{(1)})= \sum _{k=-\ell }^\ell \varPsi _{m,k}(A^{-1}Y_{(1)})t^\ell _{k,n}(Z_{(1)})\). Using this identity together with \(t^\ell _{k,n}(Z_{(1)})= \overline{t^\ell _{n,k}(Z_{(1)}^*)}\), which follows from the \(*\)-invariance of \({\mathcal {B}}\) and the unitarity of \(t^\ell \), then moving this factor to the \(\varPhi \)-part, summing over n and applying the transformation property (4.5) for \(\varPhi \), we obtain

$$\begin{aligned} \begin{aligned} \tau (YZ)&= \sum _{m,k=-\ell }^\ell \sum _{(Y), (Z)} \varPsi _{m,k}(A^{-1}Y_{(1)}) \\&\qquad \qquad \times \sum _{n=-\ell }^\ell \overline{ \varPhi _{m, n} (A^{-1} S(Y_{(2)})^*S(Z_{(2)})^*)t^\ell _{n,k}(Z_{(1)}^*)}\\&= \sum _{n,k=-\ell }^\ell \sum _{(Y)} \varPsi _{k, n}(A^{-1}Y_{(1)})\, \overline{\varPhi _{k, n} \bigl ( A^{-1} S(Y_{(2)})^*\sum _{(Z)} S(Z_{(2)})^*Z_{(1)}^*\bigr )} \\&= \varepsilon (Z)\sum _{n,k=-\ell }^\ell \sum _{(Y)} \varPsi _{k, n}(A^{-1} Y_{(1)})\, \overline{\varPhi _{k, n} (A^{-1} S(Y_{(2)})^*)} = \varepsilon (Z) \tau (Y), \end{aligned} \end{aligned}$$

using \(\sum _{(Z)} S(Z_{(2)})^*Z_{(1)}^*= \bigl (\sum _{(Z)}Z_{(1)} S(Z_{(2)})\bigr )^*= \overline{\varepsilon (Z)}\) by the antipode axiom in a Hopf algebra.

For the invariance property from the left, we proceed similarly using that A is a group-like element. So for \(Y\in {\mathcal {U}}_{q}(\mathfrak {g})\) and \(X\in \mathcal {B}\) we have

$$\begin{aligned} \tau (AXA^{-1}Y)= & {} \sum _{m, n=-\ell }^\ell \sum _{(X), (Y)} \varPsi _{m, n}(X_{(1)}A^{-1}Y_{(1)}) \\&\times \overline{\varPhi _{m, n} (A^{-1} S(A)^*S(X_{(2)})^*S(A^{-1})^*S(Y_{(2)})^*)}. \end{aligned}$$

Now \(S(A)^*=A^{-1}\), \(S(A^{-1})^*=A\). Proceeding as in the previous paragraph using \(X_{(1)}\in {\mathcal {B}}\), we obtain

$$\begin{aligned} \tau (AXA^{-1}Y)&= \sum _{n, k=-\ell }^\ell \sum _{(X), (Y)} \varPsi _{k, n}(A^{-1} Y_{(1)}) \\&\quad \times \sum _{m=-\ell }^\ell \overline{t^\ell _{k,m}(X_{(1)}^*)} \varPhi _{m, n} (A^{-2} S(X_{(2)})^*A S(Y_{(2)})^*) \\&= \sum _{n, k=-\ell }^\ell \sum _{(Y)} \varPsi _{k,n}(A^{-1} Y_{(1)}) \sum _{(X)} \overline{\varPhi _{k,n} (X_{(1)}^*A^{-2} S(X_{(2)})^*A S(Y_{(2)})^*)}. \end{aligned}$$

The result follows if we prove \(\sum _{(X)} X_{(1)}^*A^{-2} S(X_{(2)})^*A = A^{-1} \overline{\varepsilon (X)}\). In order to prove this, we need the observation that \(S(X^*) = A^{-2} S(X)^*A^2\) for all \(X\in {\mathcal {U}}_{q}(\mathfrak {g})\), which can be verified on the generators and follows since the operators are antilinear homomorphisms, see Remark 5.7. Now we obtain the required identity:

$$\begin{aligned} \sum _{(X)} X_{(1)}^*A^{-2} S(X_{(2)})^*A = \sum _{(X)} X_{(1)}^*S(X_{(2)}^*) A^{-1} = \varepsilon (X^*) A^{-1} = \overline{\varepsilon (X)} A^{-1}. \end{aligned}$$

\(\square \)

Remark 5.7

The required identity \(S(X^*) = A^{-2} S(X)^*A^2\) for all \(X\in {\mathcal {U}}_{q}(\mathfrak {g})\) can be generalised to arbitrary semisimple \(\mathfrak {g}\). Indeed, since the square of the antipode S is given by conjugation with an explicit element of the Cartan subalgebra associated to \(\rho =\frac{1}{2} \sum _{\alpha >0} \alpha \), see e.g. [33, Exercise 4.1.1], and since in a Hopf \(*\)-algebra \(S\circ *\) is an involution, we find that \(S(X^*) = K_{-\rho } S(X)^*K_{\rho }\) if \(S^2(X) = K_{-\rho } X K_{\rho }\) as in [33, Exercise 4.1.1].

Proof of Corollary 5.6

By Theorem 5.5 and Lemma 5.4, the trace is a linear combination of \(\varPhi ^0_{\frac{1}{2} n,\frac{1}{2} n}\cdot A^{-1}\), hence a polynomial in \(\psi \). To obtain the expression, we need to determine, using Proposition 5.3, for which n’s the term \(\varPhi ^0_{\frac{1}{2} n,\frac{1}{2} n}\cdot A^{-1}\) occurs in \(W(\psi )_{k,p}\). Using (4.4), (5.8) and Proposition 5.2, we find that only matrix elements of the form \(t^{\frac{1}{2} k}_{a_1,b_1}(t^{\frac{1}{2} p}_{c_1,d_1})^*\otimes t^{\ell - \frac{1}{2} k}_{a_2,b_2}(t^{\ell -\frac{1}{2} p}_{c_2,d_2})^*\) occur in \(W(\psi )_{k,p}\). Using the Clebsch–Gordan decomposition, we see that \(t^{\frac{1}{2} k}_{a_1,b_1}(t^{\frac{1}{2} p}_{c_1,d_1})^*\) and similarly \(t^{\ell - \frac{1}{2} k}_{a_2,b_2}(t^{\ell -\frac{1}{2} p}_{c_2,d_2})^*\) can be written as a sum of matrix elements from \(t^{\frac{1}{2} (k+p)-r}\), \(r\in \{0,1,\ldots , k\wedge p\}\), and similarly \(t^{2\ell - \frac{1}{2} (k+p)-s}\), \(s\in \{0,1,\ldots , (2\ell -k)\wedge (2\ell -p)\}\). By Proposition 5.3 and Proposition 5.2, the only n’s that can occur are \(n=k+p-2r\), \(r\in \{0,1,\ldots , k\wedge p\}\).

The statement on the adjoint follows immediately from (5.8) and \(\psi \) being self-adjoint, see (5.2). \(\square \)

We can now start with the first part of the proof of Theorem 4.8, except that we still have to determine certain constants; this is contained in Lemma 5.8.

Lemma 5.8

We have

$$\begin{aligned} \sum _{i = -\ell }^{\ell } \sum _{m_1 = -\ell _1}^{\ell _1} \sum _{m_2 = -\ell _2}^{\ell _2} \left( C^{\ell _1, \ell _2, \ell }_{m_1, m_2, i}\right) ^2 q^{2(m_1 + m_2)} = q^{-2\ell } \frac{ (1 - q^{4\ell + 2}) }{ (1 - q^2) }. \end{aligned}$$

The proof of Lemma 5.8 is a calculation using Theorem 4.6, which we postpone to Sect. 6.3.

First part of the proof of Theorem 4.8

Using the notation of Sect. 4, we find from Corollary 4.7 and (4.8) that

$$\begin{aligned}&\mathrm{tr}\bigl ( (\varPhi ^\ell _{\xi (n,i)}\cdot A^{-1}) (\varPhi ^\ell _{\xi (m,j)}\cdot A^{-1})^*\bigr ) \\&\quad = \sum _{k=0}^{2\ell } \sum _{p=0}^{2\ell } r^{\ell ,k}_{n,i}(\psi ) \mathrm{tr}\bigl ( (\varPhi ^\ell _{\xi (0,k)}\cdot A^{-1}) (\varPhi ^\ell _{\xi (0,p)}\cdot A^{-1})^*\bigr ) \overline{r^{\ell ,p}_{m,j}}(\psi ) \\&\quad = \sum _{k=0}^{2\ell } \sum _{p=0}^{2\ell } P_n(\psi )^*_{i,k} W(\psi )_{k,p} P_m(\psi )_{p,j} = \Bigl ( P_n(\psi )^*\, W(\psi )\, P_m(\psi )\Bigr )_{i,j} \end{aligned}$$

where we use that the action by \(A^{-1}\) from the right is an algebra homomorphism, since \(A^{-1}\) is group-like, and that \(\psi \) is self-adjoint. By Proposition 5.2, Lemma 5.4 and (5.7), we have

$$\begin{aligned}&\frac{2}{\pi } \int _{-1}^1\Bigl ( (P_n(x))^*W(x) P_m(x)\Bigr )_{i,j} \sqrt{1-x^2}\, dx \nonumber \\&\quad = h\Bigl ( \mathrm{tr}\bigl ( (\varPhi ^\ell _{\xi (n,i)}\cdot A^{-1}) (\varPhi ^\ell _{\xi (m,j)}\cdot A^{-1})^*\bigr ) \Bigr ) \nonumber \\&\quad =\, \delta _{n,m}\delta _{i,j} \frac{q^{2\ell +2n}(1-q^2)^2}{(1-q^{2n+2i+2})(1-q^{4\ell +2n-2i+2})}\nonumber \\&\quad \quad \times \left( \sum _{r=-\ell }^\ell \sum _{a=-\frac{1}{2} (n+i)}^{\frac{1}{2} (n+i)} \sum _{b=-\ell -\frac{1}{2} (n-i)}^{\ell +\frac{1}{2} (n-i)}|C^{\frac{1}{2} (n+i), \ell +\frac{1}{2} (n-i), \ell }_{a,b,r}|^2 q^{2(a+b)}\right) ^2 \end{aligned}$$
(5.9)

after a straightforward calculation. Plugging in Lemma 5.8 in (5.9) and rewriting prove the result using Corollary 5.6. \(\square \)

Note that we have not yet determined the explicit values of \(\alpha _t(m,n)\) in Theorem 4.8 and we have not shown that W is a matrix-valued weight function in the sense of Sect. 2. The values of the constants \(\alpha _t(m,n)\) will be determined in Sect. 6.1 and the positivity of W(x) for \(x\in (-1,1)\) will follow from Theorem 4.15.

5.4 q-Difference equations

It is well known, see e.g. Koornwinder [30], Letzter [35], Noumi [37], that the centre of the quantised enveloping algebra can be used to determine a commuting family of q-difference operators of which the corresponding spherical functions are eigenfunctions. In this subsection, we derive the matrix-valued q-difference operators corresponding to central elements, for which we find matrix-valued eigenfunctions.

The centre of \({\mathcal {U}}_{q}(\mathfrak {g})\) is generated by two Casimir elements, see Sect. 3:

$$\begin{aligned} \Omega _1 = \frac{q K_{1}^{-1} + q^{-1} K_{1} - 2}{(q - q^{-1})^2} + E_1 F_1, \qquad \Omega _2 = \frac{q K_{2}^{-1} + q^{-1} K_{2} - 2}{(q - q^{-1})^2} + E_2 F_2. \end{aligned}$$

Because of Proposition 5.2 and (3.4), we find

$$\begin{aligned} \Omega _i\cdot \varPhi ^\ell _{\ell _1,\ell _2} = \varPhi ^\ell _{\ell _1,\ell _2} \cdot \Omega _i = \left( \frac{q^{-\frac{1}{2} -\ell _i}-q^{\frac{1}{2} + \ell _i}}{q^{-1}-q}\right) ^2 \varPhi ^\ell _{\ell _1,\ell _2}, \qquad i=1,2.\quad \quad \end{aligned}$$
(5.10)

The goal is to compute the radial parts of the Casimir elements acting on arbitrary spherical functions of type \(\ell \) in terms of an explicit q-difference operator. In order to derive such a q-difference operator, we find a \(\mathcal {B}\mathcal {A}\mathcal {B}\)-decomposition for suitable elements in \({\mathcal {U}}_{q}(\mathfrak {g})\) in Proposition 5.10. This special case of a \(\mathcal {B}\mathcal {A}\mathcal {B}\)-decomposition is the analogue of the KAK-decomposition, which does not have a general quantum algebra analogue. For this purpose, we first establish Lemma 5.9, which can be viewed as a quantum analogue of [8, Lemma 2.2] and gives the \(\mathcal {B}\mathcal {A}\mathcal {B}\)-decomposition of \(F_2A^{\lambda }\).

Lemma 5.9

Recall \(A=K_1^{1/2}K_2^{1/2}\). For \(\lambda \in {\mathbb {Z}}\setminus \{0\}\), we have

$$\begin{aligned} F_2 A^{\lambda } = \frac{q^{-1}}{q^{1-\lambda } - q^{1+\lambda }} \left( K^{1/2} A^{\lambda } B_1 - q^{\lambda } K^{1/2} B_1 A^{\lambda } \right) . \end{aligned}$$

Proof

Recall Definition 4.1, so the result follows from

$$\begin{aligned} A^{\lambda } B_1= & {} q^{-1} A^{\lambda } K_1^{-1/2} K_2^{-1/2} E_1 + q A^{\lambda } F_2 K_1^{-1/2} K_2^{1/2} \\= & {} q(q^{1-\lambda } - q^{1+\lambda }) K_1^{-1/2} K_2^{1/2} F_2 A^{\lambda } + q^{\lambda } B_1 A^{\lambda }, \end{aligned}$$

and using \(K^{1/2}= K_1^{1/2} K_2^{-1/2}\in \mathcal {B}\). \(\square \)

Proposition 5.10

The \(\mathcal {B} \mathcal {A} \mathcal {B}\)-decomposition for the Casimir elements \(\Omega _1\) and \(\Omega _2\) is given by

$$\begin{aligned} \Omega _1 A^{\lambda }= & {} \frac{q(1-q^{2\lambda +4})}{(1-q^2)^2 (1-q^{2\lambda +2})} K^{1/2}A^{\lambda +1} - \frac{2q^2}{(1 - q^2)^2} A^{\lambda } \\&+\,\frac{q^3(1-q^{2\lambda })}{(1-q^2)^2 (1-q^{2\lambda +2})} K^{-1/2}A^{\lambda -1} - \frac{q^{2\lambda +1}}{(1-q^{2\lambda +2})^2} B_1K^{-1/2}B_2 A^{\lambda +1} \\&-\,\frac{q^{2\lambda +2}}{(1-q^{2\lambda +2})^2} A^{\lambda +1}K^{-1/2}B_2 B_1 + \frac{q^\lambda }{(1-q^{2\lambda +2})^2} B_1K^{-1/2} A^{\lambda +1}B_2\\&+\,\frac{q^{3\lambda +3}}{(1-q^{2\lambda +2})^2} K^{-1/2}B_2 A^{\lambda +1}B_1, \end{aligned}$$

and

$$\begin{aligned} \Omega _2 A^{\lambda }= & {} \frac{q(1-q^{2\lambda +4})}{(1-q^2)^2 (1-q^{2\lambda +2})} K^{-1/2}A^{\lambda +1} - \frac{2q^2}{(1 - q^2)^2} A^{\lambda } \\&+\,\frac{q^3(1-q^{2\lambda })}{(1-q^2)^2 (1-q^{2\lambda +2})} K^{1/2}A^{\lambda -1} - \frac{q^{2\lambda +1}}{(1-q^{2\lambda +2})^2} B_2K^{1/2}B_1 A^{\lambda +1} \\&-\,\frac{q^{2\lambda +2}}{(1-q^{2\lambda +2})^2} A^{\lambda +1}K^{1/2}B_1 B_2 + \frac{q^\lambda }{(1-q^{2\lambda +2})^2} B_2K^{1/2} A^{\lambda +1}B_1 \\&+\,\frac{q^{3\lambda +3}}{(1-q^{2\lambda +2})^2} K^{1/2}B_1 A^{\lambda +1}B_2. \end{aligned}$$

Proof

We first concentrate on \(\Omega _2\), and the statement for \(\Omega _1\) follows by flipping the order using \(\sigma \) as in Remark 4.2(iv). We need to rewrite \(E_2F_2A^\lambda \), which we do in terms of \(F_2A^\lambda \) and next using Lemma 5.9. The details are as follows.

Using Definition 4.1 and the commutation relations, and pulling through \(F_2\) to the left, we get

$$\begin{aligned} B_2F_2A^\lambda = q^{-1} E_2F_2 A^{\lambda -1} + q^2 F_1F_2 K^{1/2} A^\lambda . \end{aligned}$$
(5.11)

Similarly, and only slightly more involved, we obtain

$$\begin{aligned} F_2A^\lambda B_2 = q^{\lambda -2} E_2F_2 A^{\lambda -1} - q^{\lambda -2} \frac{K_2 -K_2^{-1}}{q-q^{-1}} A^{\lambda -1} + q^{1-\lambda } F_1F_2 K^{1/2} A^\lambda .\nonumber \\ \end{aligned}$$
(5.12)

Using (5.11) and (5.12), we eliminate the term with \(F_1F_2\), and shifting \(\lambda \) to \(\lambda +1\) gives

$$\begin{aligned} (q^{-1}-q^{2\lambda +1}) E_2F_2A^\lambda = B_2F_2A^{\lambda +1} -q^{2+\lambda } F_2 A^{\lambda +1}B_2 - q^{2\lambda +1} \frac{K_2 -K_2^{-1}}{q-q^{-1}} A^{\lambda }.\nonumber \\ \end{aligned}$$
(5.13)

Apply Lemma 5.9 to the first two terms on the right-hand side of (5.13) and note that the remaining terms in (5.13) and in \(\Omega _2 A^\lambda \) can be dealt with by observing that \(K_2=K^{-1/2}A= AK^{-1/2}\). Taking corresponding terms together proves the \(\mathcal {B}\mathcal {A}\mathcal {B}\)-decomposition for \(\Omega _2 A^\lambda \) after a short calculation. \(\square \)

Our next task is to translate Proposition 5.10 into an operator for spherical functions of type \(\ell \), from which we derive eventually, see Theorem 4.13, an Askey–Wilson q-difference type operator for the matrix-valued orthogonal polynomials \(P_n\). Let \(\varPhi \) be a spherical function of type \(\ell \), then we immediately obtain from Proposition 5.10

$$\begin{aligned} \varPhi (\Omega _1 A^{\lambda })= & {} \frac{q(1-q^{2\lambda +4})}{(1-q^2)^2 (1-q^{2\lambda +2})} t^\ell (K^{1/2})\varPhi (A^{\lambda +1}) - \frac{2q^2}{(1 - q^2)^2} \varPhi (A^{\lambda }) \nonumber \\&+\, \frac{q^3(1-q^{2\lambda })}{(1-q^2)^2 (1-q^{2\lambda +2})} t^\ell (K^{-1/2})\varPhi (A^{\lambda -1}) \nonumber \\&-\, \frac{q^{2\lambda +1}}{(1-q^{2\lambda +2})^2} t^\ell (B_1K^{-1/2}B_2) \varPhi (A^{\lambda +1}) \nonumber \\&-\, \frac{q^{2\lambda +2}}{(1-q^{2\lambda +2})^2} \varPhi (A^{\lambda +1}) t^\ell (K^{-1/2}B_2 B_1) \nonumber \\&+\, \frac{q^\lambda }{(1-q^{2\lambda +2})^2} t^\ell (B_1K^{-1/2}) \varPhi (A^{\lambda +1})t^\ell (B_2) \nonumber \\&+\, \frac{q^{3\lambda +3}}{(1-q^{2\lambda +2})^2} t^\ell (K^{-1/2}B_2) \varPhi (A^{\lambda +1})t^\ell (B_1). \end{aligned}$$
(5.14)

The analogous expression for \(\varPhi (\Omega _2 A^{\lambda })\) of (5.14) can be obtained using the flip \(\sigma \), see Remark 4.2(iv). In particular, it suffices to replace all \(t^\ell (X)\) in (5.14) by \(J^\ell t^\ell (X)J^\ell \), see Remark 4.2(iv), to get the corresponding expression.

By Proposition 5.2(ii), we know that \(\varPhi (A^{\lambda })\) is diagonal. Note that \(\varPhi \cdot \Omega _1: {\mathcal {U}}_{q}\rightarrow \mathrm{End}(H_\ell )\) is also a spherical function of type \(\ell \) by the centrality of the Casimir operator \(\Omega _1\). Hence \((\varPhi \cdot \Omega _1)(A^\lambda )=\varPhi (\Omega _1A^\lambda )\) is diagonal, which can also be seen directly from (5.14). We can calculate the matrix entries of \(\varPhi (\Omega _1 A^\lambda )\) using the upper triangular matrices \(t^\ell (B_1K^{-1/2})\), \(t^\ell (B_1)\) having only non-zero entries on the superdiagonal, the lower triangular matrices \(t^\ell (K^{-1/2}B_2)\), \(t^\ell (B_2)\) having only non-zero entries on the subdiagonal and the diagonal matrices \(t^\ell (K^{\pm 1/2})\), \(t^\ell (B_1K^{-1/2}B_2)\), \(t^\ell (K^{-1/2}B_2B_1)\), see (4.2), (3.1).

For \(\varPhi \) a spherical function of type \(\ell \), we view the diagonal of its restriction to \(\mathcal {A}\) as a vector-valued function \(\hat{\varPhi }\):

$$\begin{aligned} {\hat{\varPhi }} :\mathcal {A} \rightarrow \mathcal {H}^\ell , \qquad A^\lambda \mapsto \sum _{m=-\ell }^\ell \varPhi (A^\lambda )_{m,m}\, e^\ell _m. \end{aligned}$$

So we can regard the Casimir elements as acting on the vector-valued function \(\hat{\varPhi }\), and the action of the Casimir is made explicit in Proposition 5.11.

Proposition 5.11

Let \(\varPhi \) be a spherical function of type \(\ell \), with corresponding vector-valued function \({\hat{\varPhi }} :\mathcal {A} \rightarrow \mathcal {H}^\ell \) representing the diagonal when restricted to \(\mathcal {A}\). Then

$$\begin{aligned} (\widehat{\varPhi \cdot \Omega _1})\, (A^{\lambda }) = M_1^\ell (q^{\lambda }) {\hat{\varPhi }}(A^{\lambda + 1}) - \frac{2q^2}{(1-q^2)^2}{\hat{\varPhi }}(A^{\lambda }) + N_1^\ell (q^\lambda ) {\hat{\varPhi }}(A^{\lambda - 1}), \end{aligned}$$

where \(M_1^\ell (z)\) is a tridiagonal and \(N_1^\ell (z)\) is a diagonal matrix with respect to the basis \(\{e^\ell _n\}_{n=-\ell }^\ell \) of \(\mathcal {H}^\ell \) with coefficients

$$\begin{aligned} (N_1^\ell (z))_{m, m}&= \frac{q^{3+m} (1-z^2)}{(1-q^2)^2 (1-q^{2}z^2)}, \\ (M_1^\ell (z))_{m, m+1}&= \frac{zq^{m+1}}{(1 - q^2z^2)^2}(b^{\ell }(m+1))^2, \\ (M_1^\ell (z))_{m, m}&= \frac{q^{1-m} (1-q^4z^2)}{(1-q^2)^2 (1-q^{2}z^2)} \\&\quad -\, \frac{z^2q^{2+m}}{(1-q^{2}z^2)^2} \left( (b^\ell (m))^2+ (b^\ell (m+1))^2\right) , \\ (M_1^\ell (z))_{m, m-1}&= \frac{z^3q^{m+3}}{(1 - q^2z^2)^2}(b^{\ell }(m))^2. \end{aligned}$$

Moreover, for \(\Omega _2\) the action is

$$\begin{aligned} \widehat{(\varPhi \cdot \Omega _2)} (A^{\lambda }) = M_2^\ell (q^{\lambda }) {\hat{\varPhi }}(A^{\lambda + 1}) - \frac{2q^2}{(1-q^2)^2}{\hat{\varPhi }}(A^{\lambda }) + N_2^\ell (q^\lambda ) {\hat{\varPhi }}(A^{\lambda - 1}), \end{aligned}$$

where \(M_2^\ell (z)=J^\ell M_1^\ell (z) J^\ell \) is a tridiagonal matrix and \(N_2^\ell (z)=J^\ell N_1^\ell (z) J^\ell \) is a diagonal matrix.

Remark 5.12

The proof of Theorem 4.13 does not explain why the matrix coefficients of \(\eta _q\) and \(\eta _{q^{-1}}\) in Theorem 4.13 are related by \(z\leftrightarrow z^{-1}\). In Proposition 5.11, there is a lack of symmetry between the up and down shifts in \(\lambda \), and only after suitable multiplication by \(\hat{\varPhi }^\ell _0\) from the left and from the right does the symmetry of Theorem 4.13 appear. It would be desirable to have an explanation of this symmetry from the quantum group theoretic interpretation. Note that the symmetry can be translated into the requirement \(\varPsi (z) N_1(q^{-2}z^{-1}) = M_1(z) \varPsi (qz)\) with \(\varPsi (q^\lambda ) = \hat{\varPhi }^\ell _0(A^{\lambda })\hat{\varPhi }^\ell _0(A^{-1-\lambda })^{-1}\).

The remark following (5.14) on how to switch to the second Casimir operator gives the conjugation between \(M_1^\ell \) and \(M_2^\ell \), respectively \(N_1^\ell \) and \(N_2^\ell \). Note that in case \(\ell =0\), \(\varPhi \) and \({\hat{\varPhi }}\) are equal, and we find that Proposition 5.11 gives the operator

$$\begin{aligned} \varPhi (\Omega _2 A^{\lambda }) = \varPhi (\Omega _1 A^{\lambda })&= \frac{q(1-q^{2\lambda +4})}{(1-q^2)^2 (1-q^{2\lambda +2})} \varPhi (A^{\lambda +1}) - \frac{2q^2}{(1 - q^2)^2} \varPhi (A^{\lambda }) \\&\quad + \,\frac{q^3(1-q^{2\lambda })}{(1-q^2)^2 (1-q^{2\lambda +2})} \varPhi (A^{\lambda -1}) \end{aligned}$$

for \(\varPhi :{\mathcal {U}}_{q}(\mathfrak {g})\rightarrow {\mathbb {C}}\) a spherical function (of type 0), which should be compared to [30, Lemma 5.1], see also [35, 37].

Proof

Consider (5.14) and calculate the (mm)-entry. Using the explicit expressions for the elements \(t^\ell (X)\) for \(X\in \mathcal {B}\), see (4.2), (3.1), in (5.14), we find

$$\begin{aligned} \begin{aligned} \varPhi (\Omega _1 A^{\lambda })_{m,m}&= \frac{q^{3+m}(1-q^{2\lambda })}{(1-q^2)^2 (1-q^{2\lambda +2})} \varPhi (A^{\lambda -1})_{m,m} - \frac{2q^2}{(1 - q^2)^2} \varPhi (A^{\lambda })_{m,m} \\&\quad +\, \frac{q^{1-m}(1-q^{2\lambda +4})}{(1-q^2)^2 (1-q^{2\lambda +2})} \varPhi (A^{\lambda +1})_{m,m} \\&\quad -\, \frac{q^{2\lambda +2+m} (b^\ell (m+1))^2}{(1-q^{2\lambda +2})^2} \varPhi (A^{\lambda +1})_{m,m} \\&\quad - \,\frac{q^{2\lambda +2+m} (b^\ell (m))^2}{(1-q^{2\lambda +2})^2} \varPhi (A^{\lambda +1})_{m,m} \\&\quad + \,\frac{q^{\lambda +m+1} (b^\ell (m+1))^2}{(1-q^{2\lambda +2})^2} \varPhi (A^{\lambda +1})_{m+1,m+1} \\&\quad +\, \frac{q^{3\lambda +3+m} (b^\ell (m))^2 }{(1-q^{2\lambda +2})^2} \varPhi (A^{\lambda +1})_{m-1,m-1}, \end{aligned} \end{aligned}$$

which we can rewrite as stated with the matrices \(M_1^\ell (q^{\lambda })\) and \(N_1^\ell (q^{\lambda })\). The case for \(\Omega _2\) follows from the observed symmetry, or it can be obtained by an analogous computation. \(\square \)

In particular, we can apply Proposition 5.11 to \(\varPhi ^\ell _{\xi (n,m)}\) using (5.10) to find an eigenvalue equation. In order to find an eigenvalue equation for the matrix-valued polynomials, we first introduce the full spherical functions \({\hat{\varPhi }}^\ell _n:\mathcal {A} \rightarrow \text {End}(\mathbb {C}^{2\ell +1})\) defined by

$$\begin{aligned} {\hat{\varPhi }}^\ell _n = \sum _{i,j=0}^{2\ell } ({\hat{\varPhi }}^\ell _n)_{i,j} \otimes E_{i,j}, \ ({\hat{\varPhi }}^\ell _n)_{i,j}(A^\lambda ) = (\varPhi ^\ell _{\xi (n,j)}(A^\lambda ))_{i-\ell ,i-\ell } \end{aligned}$$
(5.15)

for \(n\in \mathbb {N}\). So we put the vectors \((\hat{\varPhi }^\ell _{\xi (n,0)}, \ldots ,\hat{\varPhi }^\ell _{\xi (n,2\ell )})\) as columns in a matrix, and we relabel in order to have the matrix entries labelled by \(i,j\in \{0,\ldots , 2\ell \}\). We reformulate Proposition 5.11 and (5.10) as the eigenvalue equations

$$\begin{aligned} {\hat{\varPhi }}^\ell _n(A^\lambda ) \varLambda _n(i)&= M_i(q^\lambda ) {\hat{\varPhi }}^\ell _n(A^{\lambda +1}) + N_i(q^\lambda ) {\hat{\varPhi }}^\ell _n(A^{\lambda -1}), \\ \nonumber \varLambda _n(1)&= \sum _{j=0}^{2\ell } \frac{q^{1-n-j}+q^{3+n+j}}{(1-q^2)^2}E_{j,j}, \quad \varLambda _n(2) = J \varLambda _n(1)J, \end{aligned}$$
(5.16)

where \(\bigl (M_i(z)\bigr )_{m,n}= \bigl (M_i^\ell (z)\bigr )_{m-\ell ,n-\ell }\) for \(m,n\in \{0,1,\ldots , 2\ell \}\), and similarly for \(N_i\), are the matrices of Proposition 5.11 shifted to the standard matrix with respect to the standard basis \(\{e_n\}_{n=0}^{2\ell }\) of \(\mathbb {C}^{2\ell +1}\). Note that the symmetry of Proposition 5.11 then rewrites as \(M_2(z) = J M_1(z)J\), \(N_2(z) = J N_1(z)J\), with \(J:e_n\mapsto e_{2\ell -n}\).
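Explicitly, since \(Je_n=e_{2\ell -n}\) and \(J=J^{-1}=J^t\), conjugation by J simply reverses both row and column indices:

$$\begin{aligned} \bigl (JMJ\bigr )_{m,n} = M_{2\ell -m,\,2\ell -n}, \qquad M\in \text {End}(\mathbb {C}^{2\ell +1}),\quad m,n\in \{0,\ldots ,2\ell \}, \end{aligned}$$

which is the form in which the symmetry \(M_2(z)=JM_1(z)J\), \(N_2(z)=JN_1(z)J\) is used, e.g. in the proof of Lemma 5.14 below.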

Now we rewrite Corollary 4.7 after pairing with \(A^\lambda \) as

$$\begin{aligned} {\hat{\varPhi }}^\ell _n(A^\lambda ) = {\hat{\varPhi }}^\ell _0(A^\lambda ) \overline{P_n}\bigl ( \mu (q^{\lambda +1})\bigr ) \in \text {End}(\mathbb {C}^{2\ell +1}), \end{aligned}$$
(5.17)

where \(\mu (x) =\frac{1}{2}(x+x^{-1})\), using that pairing with the group-like elements \(A^\lambda \) is a homomorphism and \(\varphi (A^\lambda )=\mu (q^{\lambda +1})\) by (5.1). Using (5.17) in (5.16) proves Corollary 5.13.
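The homomorphism property referred to here is the standard fact that group-like elements give rise to algebra homomorphisms on the dual: since \(\Delta (A^\lambda )=A^\lambda \otimes A^\lambda \), the (convolution) product of functionals satisfies

$$\begin{aligned} (fg)(A^\lambda ) = (f\otimes g)\bigl (\Delta (A^\lambda )\bigr ) = f(A^\lambda )\, g(A^\lambda ), \end{aligned}$$

so that \(p(\varphi )(A^\lambda )=p\bigl (\varphi (A^\lambda )\bigr )=p\bigl (\mu (q^{\lambda +1})\bigr )\) for any polynomial p.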

Corollary 5.13

Assuming \({\hat{\varPhi }}^\ell _0(A^\lambda )\) is invertible, the matrix-valued polynomials \(P_n\) satisfy the eigenvalue equations

$$\begin{aligned} P_n(\mu (q^{\lambda }))\, \varLambda _n(i)&= \tilde{M}_i(q^{\lambda })\, P_n(\mu (q^{\lambda +1})) + \tilde{N}_i(q^{\lambda })\, P_n(\mu (q^{\lambda -1})), \quad i=1,2, \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} \tilde{M}_i(q^{\lambda })&= \overline{\bigl ( {\hat{\varPhi }}^\ell _0(A^{\lambda -1})\bigr )^{-1} M_i(q^{\lambda -1}) {\hat{\varPhi }}^\ell _0(A^{\lambda })}, \\ \tilde{N}_i(q^{\lambda })&= \overline{\bigl ( {\hat{\varPhi }}^\ell _0(A^{\lambda -1})\bigr )^{-1} N_i(q^{\lambda -1}) {\hat{\varPhi }}^\ell _0(A^{\lambda -2})}, \end{aligned} \end{aligned}$$

and \(\varLambda _n(i)\) are defined in (5.16) and \(\mu (x) = \frac{1}{2}(x+x^{-1})\).

It remains to prove the assumption in Corollary 5.13 for sufficiently many \(\lambda \), and to calculate the coefficients in the eigenvalue equations explicitly. This is done in Sect. 7.

Having established (5.17), we can prove Corollary 4.9 by considering coefficients in the Laurent expansion.

Proof of Corollary 4.9

The left-hand side of (5.17) can be expanded as a Laurent series in \(q^\lambda \) by Proposition 5.2(ii) and (5.15). The leading coefficient of degree \(\ell +n\) is an antidiagonal matrix

$$\begin{aligned} \bigl ( {\hat{\varPhi }}^\ell _n(A^\lambda )\bigr )_{i,j} = q^{\lambda (\ell +n)} \left( C^{\frac{1}{2}(n+j), \ell +\frac{1}{2} (n-j), \ell }_{-\frac{1}{2}(n+j), -\ell -\frac{1}{2} (n-j), i-\ell }\right) ^2 + \text {lower order terms}, \end{aligned}$$

since the Clebsch–Gordan coefficient is zero unless \(i+j=2\ell \). With a similar expression for \({\hat{\varPhi }}^\ell _0(A^\lambda )\) on the right-hand side, expanding \(\overline{P_n}\bigl ( \mu (q^{\lambda +1})\bigr ) = \overline{\text {lc}(P_n)} q^{n(\lambda +1)} 2^{-n} + \text {lower order terms}\) gives

$$\begin{aligned} \text {lc}(P_n)_{i,j} = \delta _{i,j} q^n 2^{-n} \left( C^{\frac{1}{2}(n+j), \ell +\frac{1}{2} (n-j), \ell }_{-\frac{1}{2}(n+j), -\ell -\frac{1}{2} (n-j), \ell -j}\right) ^2 \left( C^{\frac{1}{2}j, \ell -\frac{1}{2}j, \ell }_{-\frac{1}{2} j, -\ell +\frac{1}{2} j, \ell -j}\right) ^{-2}, \end{aligned}$$

and (8.7) gives the result. \(\square \)

Corollary 4.9 gives \(P_0(x)=I\), so Corollary 5.13 gives

$$\begin{aligned} \varLambda _0(i) = \tilde{M}_i(q^{\lambda }) + \tilde{N}_i(q^{\lambda }), \qquad \forall \, \lambda \in {\mathbb {Z}}. \end{aligned}$$
(5.18)

5.5 Symmetries

Even though we can use the explicit expression of the weight W to establish Proposition 4.10, we show the occurrence of J in the commutant from Remark 4.5(ii).

Observe that \((\ell _1,\ell _2)=\xi (n,k)\) gives \((\ell _2, \ell _1)= \xi (n,2\ell -k)\) and that \(\sigma (A)=A\), so Remark 4.5(ii) yields \(\bigl (\varPhi ^\ell _{\xi (n,i)}\cdot A^{-1}\bigr )(\sigma (Z)) = J^\ell \bigl (\varPhi ^\ell _{\xi (n,2\ell -i)}\cdot A^{-1}\bigr )(Z) J^\ell \) for any \(Z\in {\mathcal {U}}_{q}(\mathfrak {g})\). Similarly, using moreover that \(\sigma \) is a \(*\)-isomorphism and that S and \(\sigma \) commute, see (3.5), we obtain \(\bigl (\varPhi ^\ell _{\xi (m,j)}\cdot A^{-1}\bigr )^*(\sigma (Z)) = J^\ell \bigl (\varPhi ^\ell _{\xi (m,2\ell -j)}\cdot A^{-1}\bigr )^*(Z) J^\ell \). Since \((\sigma \otimes \sigma )\circ \Delta = \Delta \circ \sigma \) and \((J^\ell )^2=1\), we find for \(Z\in {\mathcal {U}}_{q}(\mathfrak {g})\) from these observations, cf. (5.8),

$$\begin{aligned} \begin{aligned}&\mathrm{tr}\bigl ( (\varPhi ^\ell _{\xi (n,i)}\cdot A^{-1}) (\varPhi ^\ell _{\xi (m,j)}\cdot A^{-1})^*\bigr ) (\sigma (Z)) \\&\quad = \mathrm{tr}\bigl ( (\varPhi ^\ell _{\xi (n,2\ell -i)}\cdot A^{-1}) (\varPhi ^\ell _{\xi (m,2\ell -j)}\cdot A^{-1})^*\bigr ) (Z). \end{aligned} \end{aligned}$$

By the first part of the proof of Theorem 4.8

$$\begin{aligned} \Bigl ( (P_n(\psi ))^*W(\psi ) P_m(\psi )\Bigr )_{i,j}(\sigma (Z)) = \Bigl ( (P_n(\psi ))^*W(\psi ) P_m(\psi )\Bigr )_{2\ell -i,2\ell -j}(Z). \end{aligned}$$

Note that \(\psi (\sigma (Z))= \psi (Z)\) by the symmetric expression of (5.2). Again using \((\sigma \otimes \sigma )\circ \Delta = \Delta \circ \sigma \), we have \(p(\psi )(\sigma (Z))= p(\psi )(Z)\) for any polynomial p. It follows that

$$\begin{aligned} \Bigl ( (P_n(\psi ))^*W(\psi ) P_m(\psi )\Bigr )_{i,j} = \Bigl ( (P_n(\psi ))^*W(\psi ) P_m(\psi )\Bigr )_{2\ell -i,2\ell -j}. \end{aligned}$$
(5.19)

For \(n=m=0\), (5.19) proves that J is in the commutant algebra for W as stated in Proposition 4.10.

Note that (5.19), after applying the Haar functional on the \(*\)-algebra generated by \(\psi \) as in Lemma 5.4, also gives \(JG_nJ=G_n\), as is immediately clear from the explicit expression for the squared norm matrix \(G_n\) in Theorem 4.8 derived in Sect. 5.3. It moreover shows that \((JP_nJ)_{n\in \mathbb {N}}\) is a family of matrix-valued orthogonal polynomials with respect to the weight W. Since \(J \text {lc}(P_n)J = \text {lc}(P_n)\) by Corollary 4.9, it then follows, see Sect. 2, that \(JP_nJ=P_n\).

Note that we have now proved Proposition 4.10 except for the \(\supset \)-inclusion in the first line. This is done after the proof of Theorem 4.8 is completed at the end of Sect. 6.1.

As a consequence of the discussion on symmetries, we can formulate the symmetry for \(\hat{\varPhi }^\ell _n\) in Lemma 5.14.

Lemma 5.14

With \(J:e_n\mapsto e_{2\ell -n}\), we have \(J \hat{\varPhi }^\ell _n(A^\lambda ) J = \hat{\varPhi }^\ell _n(A^\lambda )\) for all \(\lambda \in \mathbb {Z}\).

Proof

This is a consequence of the initial observations in Sect. 5.5. For \(i,j\in \{0,\ldots , 2\ell \}\), using (5.15) and \(\sigma (A^\lambda )=A^\lambda \) we obtain

$$\begin{aligned} \begin{aligned} \bigl ( J \hat{\varPhi }^\ell _n(A^\lambda ) J\bigr )_{i,j}&= \bigl ( \hat{\varPhi }^\ell _n(A^\lambda ) \bigr )_{2\ell -i,2\ell -j} = \bigl ( \varPhi ^\ell _{\xi (n,2\ell -j)}(A^\lambda ) \bigr )_{\ell -i,\ell -i} \\&= \bigl ( J^\ell \varPhi ^\ell _{\xi (n,j)}(A^\lambda ) J^\ell \bigr )_{\ell -i,\ell -i} = \bigl ( \varPhi ^\ell _{\xi (n,j)}(A^\lambda ) \bigr )_{i-\ell ,i-\ell } \\&= \bigl ( \hat{\varPhi }^\ell _n(A^\lambda )\bigr )_{i,j}. \end{aligned} \end{aligned}$$

\(\square \)

6 The weight and orthogonality relations for the matrix-valued polynomials

In this section, we complement the quantum group theoretic proofs of Sect. 5 by proving some of the statements of Sect. 4 using mainly analytic techniques. In Sect. 6.1, we prove the statement of Theorem 4.8 on the expansion of the entries of the weight function in terms of Chebyshev polynomials. In Sect. 6.2, we prove the LDU-decomposition of Theorem 4.15. In Sect. 6.3, we prove Lemma 5.8 by establishing a special case and then proceeding by induction, with Theorem 4.6 used in the induction step.

6.1 Explicit expressions of the weight

In order to prove the explicit expansion of the matrix entries of the weight W in terms of Chebyshev polynomials, we start with the expression of Corollary 5.6 for the matrix entries of the weight W. After pairing with \(A^\lambda \), we expand as a Laurent polynomial in \(q^\lambda \) in Proposition 6.1. Then we can use Lemma 6.2, whose proof is presented in Appendix 2.1.

Proposition 6.1

For \(0\le k,p\le 2\ell \), \(\lambda \in \mathbb {Z}\), we have

$$\begin{aligned} \begin{aligned} W(\psi )_{k,p}(A^\lambda )&= \sum _{s=-\frac{1}{2}(k+p)}^{\frac{1}{2}(k+p)} d_s^\ell (k,p) q^{2s\lambda }, \qquad d_s^\ell (k,p) = d_{-s}^\ell (k,p), \\ d_s^\ell (k,p)&= \underset{s=j-i}{\sum _{i=-\frac{1}{2} k}^{\frac{1}{2} k}\sum _{j=-\frac{1}{2} p}^{\frac{1}{2} p}} \sum _{n=-\ell }^\ell q^{2(i+j-n)+2(i+\frac{1}{2} k)(i-n+\ell -\frac{1}{2} k)+ 2(j+\frac{1}{2} p)(j-n+\ell -\frac{1}{2} p)} \\&\quad \times \,\frac{ \left[ \genfrac{}{}{0.0pt}{}{k}{\frac{1}{2} k- i}\right] _{q^2} \left[ \genfrac{}{}{0.0pt}{}{2\ell -k}{\ell -\frac{1}{2} k+n-i}\right] _{q^2} \left[ \genfrac{}{}{0.0pt}{}{p}{\frac{1}{2} p- j}\right] _{q^2} \left[ \genfrac{}{}{0.0pt}{}{2\ell -p}{\ell -\frac{1}{2} p+n-j}\right] _{q^2} }{ \left[ \genfrac{}{}{0.0pt}{}{2\ell }{\ell -n}\right] _{q^{2}}^{2} }. \end{aligned} \end{aligned}$$

Proof

We obtain from Corollary 5.6

$$\begin{aligned} W(\psi )_{k,p}(A^\lambda )= & {} \sum _{m,n=-\ell }^\ell \bigl ( \varPhi ^\ell _{\xi (0,k)}\cdot A^{-1}\bigr )_{m,n}(A^\lambda )\, \overline{\bigl ( \varPhi ^\ell _{\xi (0,p)}\cdot A^{-1}\bigr )_{m,n}(A^{-\lambda })} \nonumber \\= & {} \sum _{m,n=-\ell }^\ell \bigl ( \varPhi ^\ell _{\xi (0,k)}\bigr )_{m,n}(A^{\lambda -1})\, \overline{\bigl ( \varPhi ^\ell _{\xi (0,p)}\bigr )_{m,n}(A^{-1-\lambda })}, \end{aligned}$$
(6.1)

using that \(A^\lambda \) is a group-like element and \(S(A^{\lambda })^*= A^{-\lambda }\). By Proposition 5.2(ii) \(W(\psi )_{k,p}(A^\lambda )\) is

$$\begin{aligned} \sum _{n=-\ell }^\ell \sum _{i=-\frac{1}{2} k}^{\frac{1}{2} k} \sum _{j=-\frac{1}{2} p}^{\frac{1}{2} p} \left( C^{\frac{1}{2} k, \ell -\frac{1}{2} k,\ell }_{i,i-n,n} \right) ^2 \left( C^{\frac{1}{2} p, \ell -\frac{1}{2} p,\ell }_{j,j-n,n} \right) ^2 q^{2\lambda (j-i)} q^{2(i+j-n)}, \end{aligned}$$

where the Clebsch–Gordan coefficients are to be taken as zero in case \(|i-n|>\ell -\frac{1}{2} k\), respectively \(|j-n|>\ell -\frac{1}{2} p\). Now put \(s=j-i\), then s runs from \(-\frac{1}{2}(k+p)\) up to \(\frac{1}{2}(k+p)\), and we have the Laurent expansion

$$\begin{aligned} W(\psi )_{k,p}(A^\lambda ) = \sum _{s=-\frac{1}{2}(k+p)}^{\frac{1}{2}(k+p)} d_s^\ell (k,p) q^{2s\lambda }, \end{aligned}$$

with

$$\begin{aligned} d_s^\ell (k,p) = \underset{s=j-i}{\sum _{i=-\frac{1}{2} k}^{\frac{1}{2} k}\sum _{j=-\frac{1}{2} p}^{\frac{1}{2} p}} \sum _{n=-\ell }^\ell \left( C^{\frac{1}{2} k, \ell -\frac{1}{2} k,\ell }_{i,i-n,n} \right) ^2 \left( C^{\frac{1}{2} p, \ell -\frac{1}{2} p,\ell }_{j,j-n,n} \right) ^2 q^{2(i+j-n)}. \end{aligned}$$

Plugging in (8.3) gives the explicit expression for \(d^\ell _s(k,p)\).

In order to show that \(d_s^\ell (k,p)= d_{-s}^\ell (k,p)\), we note that \(p(\psi )(A^{\lambda })= p(\mu (q^\lambda ))\) is symmetric under \(\lambda \leftrightarrow -\lambda \) for any polynomial p, and so Corollary 5.6 implies the symmetry. \(\square \)

Note that for a matrix element of \(P_n(\psi )^*W(\psi ) P_m(\psi )\) a similar expression can be given, cf. the first part of the proof of Theorem 4.8, but this is not required. The symmetry in the Laurent expansion of \(W(\psi )_{k,p}(A^{\lambda })\) does not seem to follow directly from known symmetries for the Clebsch–Gordan coefficients, see e.g. [20, Chap. 3].

Now we can proceed with the proof of Theorem 4.8, for which we need Lemma 6.2.

Lemma 6.2

For \(\ell \in \frac{1}{2} {\mathbb {N}}\) and for \(k,p\in \{0,\ldots ,2\ell \}\) subject to \(k\le p\) and \(k+p\le 2\ell \) we have with the notation of Proposition 6.1 and Theorem 4.8,

$$\begin{aligned} \underset{2s=k+p-2(r+a)}{\sum _{r=0}^{k} \sum _{a=0}^{k+p-2r}} \alpha _r(k,p) = d^\ell _s(k,p). \end{aligned}$$

Lemma 6.2 contains the essential equality for the proof of the explicit expression of the coefficients of the matrix entries of the weight of Theorem 4.8. The proof of Lemma 6.2 can be found in Appendix 2.1, and it is based on a q-analogue of the corresponding statement for the classical case [24, Theorem 5.4]. In 2011, Mizan Rahman informed one of us (private correspondence) that he had obtained a q-analogue of the underlying summation formula for [24, Theorem 5.4]. It is remarkable that Rahman’s q-analogue is different from the one needed here in Lemma 6.2.

Second part of the proof of Theorem 4.8

We prove the statement on the explicit expression of the matrix entries of the weight in terms of Chebyshev polynomials. By Corollary 5.6, we have

$$\begin{aligned} \begin{aligned} W(\psi )_{k,p}(A^{\lambda })&= \sum _{r=0}^{k \wedge p} \alpha _r(k,p) U_{k+p-2r}(\mu (q^\lambda )) \\&= \sum _{r=0}^{k\wedge p} \sum _{a=0}^{k+p-2r} \alpha _r(k,p) q^{(k+p-2r-2a)\lambda } \\&= \sum _{s=-\frac{1}{2}(k+p)}^{\frac{1}{2}(k+p)} \left( \underset{2s=k+p-2(r+a)}{\sum _{r=0}^{k\wedge p} \sum _{a=0}^{k+p-2r}} \alpha _r(k,p) \right) q^{2s\lambda } \end{aligned} \end{aligned}$$

for all \(\lambda \in \mathbb {Z}\). Since the coefficients \(\alpha _r(k,p)\) are completely determined by \(W(\psi )_{k,p}\) and since \(W(\psi )_{k,p} = W(\psi )_{2\ell -k,2\ell -p}=\bigl (W(\psi )_{p,k}\bigr )^*\), we can restrict to the case \(k\le p\), \(k+p\le 2\ell \). For this case, the result follows from Lemma 6.2, and hence the explicit expression for \(\alpha _r(k,p)\) in Theorem 4.8 is obtained. \(\square \)
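For reference, the expansion of \(U_{k+p-2r}\bigl (\mu (q^\lambda )\bigr )\) used in the second equality above is elementary: with \(x=\mu (z)=\frac{1}{2}(z+z^{-1})\),

$$\begin{aligned} U_n\bigl (\mu (z)\bigr ) = \frac{z^{n+1}-z^{-n-1}}{z-z^{-1}} = \sum _{a=0}^{n} z^{n-2a}, \end{aligned}$$

applied with \(z=q^\lambda \) and \(n=k+p-2r\).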

Note that the proof of Theorem 4.8 is not yet complete, since we still have to show that the weight is strictly positive definite for almost all \(x\in [-1,1]\), see Sect. 2. This will follow from the LDU-decomposition for the weight as observed in Corollary 4.16, but in order to prove the LDU-decomposition of Theorem 4.15 we need the explicit expression for the coefficients \(\alpha _t(m,n)\) of Theorem 4.8.

In Sect. 5.5, we observed that J commutes with W(x) for all x. In order to prove Proposition 4.10, we need to show that the commutant is not larger, and for this we need the explicit expression of \(\alpha _t(m,n)\) of Theorem 4.8.

Proof of Proposition 4.10

Let Y be in the commutant, and write \(W(x) = \sum _{k = 0}^{2\ell } W_k U_k(x)\) with \(W_k \in \mathrm{Mat}_{2\ell +1}({\mathbb {C}})\) using Theorem 4.8. Then \([Y,W_k]=0\) for all k. The proof follows closely the proofs of [24, Proposition 5.5] and [25, Proposition 2.6]. Note that \(W_{2\ell }\) and \(W_{2\ell -1}\) are symmetric and persymmetric (i.e. commute with J). Moreover, \((W_{2\ell })_{m,n}\) is non-zero only for \(m+n=2\ell \) and \((W_{2\ell -1})_{m,n}\) is non-zero only for \(|m+n-2\ell |=1\). From the explicit expression of the coefficients \(\alpha _t(m,n)\), we find that all non-zero entries of \(W_{2\ell }\) and \(W_{2\ell -1}\) are different apart from the symmetry and persymmetry. The proof of Proposition 4.10 can then be finished following [24, Proposition 5.5]. \(\square \)

6.2 LDU-decomposition

In order to prove the LDU-decomposition of Theorem 4.15 for the weight, we need to prove the matrix identity termwise. So we are required to show that

$$\begin{aligned} W(x)_{m,n} = \sum _{k=0}^{m\wedge n} L(x)_{m,k} T(x)_{k,k} L(x)_{n,k} \end{aligned}$$
(6.2)

for the expression of W(x) in Theorem 4.8 and for the expressions of L(x) and T(x) in Theorem 4.15. Because of symmetry, we can assume without loss of generality that \(m\ge n\). Then (6.2) is equivalent to Proposition 6.3 after taking into account the coefficients in the LDU-decomposition, so it suffices to prove Proposition 6.3 in order to obtain Theorem 4.15.

Proposition 6.3

For \(0 \le n \le m \le 2\ell \), \(\ell \in \frac{1}{2} {\mathbb {N}}\) and with \(\alpha _t(m, n)\) defined in Theorem 4.8, we have

$$\begin{aligned} \begin{aligned} \sum _{t = 0}^{n} \alpha _t(m, n) U_{m + n - 2t}(x)&= \sum _{k = 0}^{n} \beta _k(m, n) \frac{w(x;q^{2k+2}|q^2)}{1-x^2} C_{m - k}(x; q^{2k+2} | q^2)\\&\quad \times C_{n - k}(x; q^{2k + 2} | q^2), \\ \beta _{k}(m, n)&= \frac{(q^2; q^2)_m}{(q^2; q^2)_{m + k + 1}} \frac{(q^2; q^2)_{n}}{(q^2; q^2)_{n + k + 1}} (q^2; q^2)_{k}^{2} (1 - q^{4k + 2}) \\&\quad \times \,\frac{ (q^2; q^2)_{2\ell + k + 1} (q^2; q^2)_{2\ell + k} }{ q^{2\ell } (q^2; q^2)_{2\ell }^{2} }. \end{aligned} \end{aligned}$$

Before embarking on the proof of Proposition 6.3, note that each summand on the right-hand side of the expression of Proposition 6.3 is an even, respectively odd, polynomial for \(m+n\) even, respectively odd, since the continuous q-ultraspherical polynomials are symmetric and since

$$\begin{aligned} w(x;q^{2k+2}|q^2) = 4(1-x^2) \prod _{j=1}^k \left( 1-2(2x^2-1)q^{2j}+q^{4j}\right) , \end{aligned}$$

see (4.12), is an even polynomial with a factor \((1-x^2)\). In the proof of Proposition 6.3 we use Lemma 6.4.
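In fact, each quadratic factor in this product is not only even in x but also strictly positive on \((-1,1)\): writing \(x=\cos \theta \),

$$\begin{aligned} 1-2(2x^2-1)q^{2j}+q^{4j} = \bigl (1-q^{2j}e^{2i\theta }\bigr )\bigl (1-q^{2j}e^{-2i\theta }\bigr ) = (1-q^{2j})^2 + 4q^{2j}(1-x^2). \end{aligned}$$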

Lemma 6.4

Let \(0\le k \le m \le n\) and \(t\in {\mathbb {N}}\). Then the integral

$$\begin{aligned} \frac{1}{2\pi } \int _{-1}^{1} \frac{w(x;q^{2k+2}|q^2)}{\sqrt{1 - x^2}} C_{m-k}(x;q^{2k+2}|q^2) C_{n-k}(x;q^{2k+2}|q^2) U_{n+m-2t}(x) dx \\ \end{aligned}$$

is equal to zero for \(t>m\), and for \(0\le t\le m\), the integral above is equal to \(C_k(m,n)R_k(\mu (t); 1, 1, q^{-2m - 2}, q^{-2n - 2}; q^2)\) with

$$\begin{aligned} C_k(m,n)= & {} \frac{q^{-2\left( {\begin{array}{c}k\\ 2\end{array}}\right) }}{1 - q^{2k + 2}} \frac{(q^{2k + 2}; q^2)_{m + n - 2k}}{(q^{4k + 4}; q^2)_{m + n - 2k}} \frac{(q^{2k + 2}; q^2)_{m - k}}{(q^{2}; q^{2})_{m - k}} \frac{(q^{2k + 2}; q^2)_{n - k}}{(q^{2}; q^{2})_{n - k}} \\&\quad \times \frac{ (-1)^k (q^{4k + 4}; q^2)_{m + n - 2k} (q^2; q^2)_{k + 1} }{ (q^2; q^2)_{m + n + 1} } . \end{aligned}$$

In Lemma 6.4, we use the notation (4.13) for the q-Racah polynomials. Lemma 6.4 shows that the expansion as in Proposition 6.3 is indeed valid, and it remains to determine the coefficients \(\beta _k(m,n)\).
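For the sketch of the proof of Lemma 6.4 given below, recall that in one standard normalisation, cf. (4.10) and e.g. [15, §7.4], the continuous q-ultraspherical polynomials have the Fourier expansion

$$\begin{aligned} C_n(\cos \theta ;\beta |q) = \sum _{k=0}^{n} \frac{(\beta ;q)_k\,(\beta ;q)_{n-k}}{(q;q)_k\,(q;q)_{n-k}}\, e^{i(n-2k)\theta }, \qquad x=\cos \theta , \end{aligned}$$

so that \(C_n(x;\beta |q)=U_n(x)\) for \(\beta =q\); this is the specialisation that allows \(U_{m+n-2t}\) to be expanded in terms of \(C_s(x;q^{2k+2}|q^2)\) via the connection formula.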

The proof of Lemma 6.4 follows the lines of the proof of [23, Lemma 2.7], see Appendix 2.2. The main ingredients are, cf. the proof in [23], the connection and linearisation coefficients for the continuous q-ultraspherical polynomials dating back to the work of L.J. Rogers (1894–5), see e.g. [2, §10.11], [19, §13.3], [15, (7.6.14), (8.5.1)]. Write the product \(C_{m-k}(x;q^{2k+2}|q^2) C_{n-k}(x;q^{2k+2}|q^2)\) as a sum over r of continuous q-ultraspherical polynomials \(C_{r}(x;q^{2k+2}|q^2)\) using the linearisation formula, and write \(U_{n+m-2t}\), which is a continuous q-ultraspherical polynomial for \(\beta =q\), in terms of \(C_{s}(x;q^{2k+2}|q^2)\) using the connection formula. The orthogonality relations for the continuous q-ultraspherical polynomials then give the integral in terms of a single series. The details are in Appendix 2.2. From this sketch of the proof, it is immediately clear that Lemma 6.4 admits a generalisation. This is the content of Remark 6.5, whose proof is left to the reader.

Remark 6.5

For integers \(0 \le t\), \(k \le m \le n\) and parameters \(\alpha , \beta \), we have

$$\begin{aligned} \frac{1}{2\pi }&\int _{-1}^1 \frac{w(x;q^{k + 1}\alpha |q)}{\sqrt{1 - x^2}} C_{m-k}(x;q^{k+1}\alpha |q) C_{n-k}(x;q^{k+1}\alpha |q) C_{m + n - 2t}(x;\beta |q) dx \\&= C_k(m, n, \alpha , \beta ) {}_{4}\varphi _{3}\biggl (\genfrac{}{}{0.0pt}{}{ \alpha ^{-1} q^{-k}, q^{-m - n + t - 1}, \alpha q^{k + 1}, \beta \alpha ^{-1} q^{-t-1} }{ \alpha ^{-1} q^{-m}, \alpha ^{-1} q^{-n}, \beta };q,q\biggr ), \end{aligned}$$

where \(C_k(m,n,\alpha ,\beta )\) is

$$\begin{aligned}&\frac{ (\alpha q^{k + 1}, \alpha q^{k + 2}, \alpha ^{-1} q^{-m - n + k}, \alpha ^{-2} q^{-t}, \alpha \beta ^{-1} q^{k + 2}, \alpha ^{-1} \beta ^{-1} q^{t-m-n}; q)_{\infty } }{ (\alpha ^2 q^{2k + 2}, q, q^{k - t + 1}, \alpha ^{-2} q^{-m - n - 1}, \beta ^{-1} q^{k - m - n + t}, \beta ^{-1} q; q)_{\infty }} \\&\quad \times \frac{(\alpha q^{k + 1}; q)_{m - k}}{(q; q)_{m - k}} \frac{(\alpha q^{k + 1}; q)_{n - k}}{(q; q)_{n - k}} \frac{(\alpha ^2 q^{2k + 2}; q)_{m + n - 2k}}{(\alpha q^{k + 2}; q)_{m + n - 2k}} \\&\quad \times \frac{(\alpha ^{-1} \beta q^{-k - 1}; q)_{k - t}}{(q; q)_{k - t}} \frac{(\beta ; q)_{m + n - t - k}}{(\alpha q^{k + 2}; q)_{m + n - t - k}} (q^{k + 1} \alpha )^{k - t}. \end{aligned}$$

Note that the \({}_4\varphi _3\)-series in Remark 6.5 is balanced, but in general is not a q-Racah polynomial.

In the proof of Proposition 6.3, and hence of Theorem 4.15, we need the summation formula involving q-Racah polynomials stated in Lemma 6.6. Its proof is also given in Appendix 2.2.

Lemma 6.6

For \(\ell \in \frac{1}{2}{\mathbb {N}}\) and \(m, n, k \in {\mathbb {N}}\) with \(0 \le k \le n \le m\), we have

$$\begin{aligned}&\sum _{t = 0}^{m} (-1)^t \frac{(q^{2m - 4\ell }; q^2)_{n - t}}{(q^{2m + 4}; q^2)_{n - t}} \frac{(q^{4\ell + 4 - 2t}; q^2)_{t}}{(q^2; q^2)_{t}} (1 - q^{2m + 2n + 2 - 4t}) q^{2\left( {\begin{array}{c}t\\ 2\end{array}}\right) - 4\ell t} \\&\qquad \times \, R_k(\mu (t); 1, 1, q^{-2m-2}, q^{-2n-2}; q^2) \\&\quad = q^{n(n-1) - k(k+1) - 4n\ell } (-1)^{n + k} \frac{ (q^2; q^2)_{2\ell + k + 1} (q^2; q^2)_{2\ell - k} }{ (q^2; q^2)_{2\ell + 1} }\\&\qquad \times \frac{ (1 - q^{2m + 2}) }{ (q^2; q^2)_{n} (q^2; q^2)_{2\ell - n} }. \end{aligned}$$

With these preparations, we can prove Proposition 6.3, and hence the LDU-decomposition of Theorem 4.15.

Proof of Proposition 6.3

Since Lemma 6.4 guarantees that an expansion as in Proposition 6.3 exists, it suffices to calculate \(\beta _k(m,n)\) from the explicit values of the \(\alpha _t(m,n)\)’s in Theorem 4.8. Multiply both sides by \(\frac{1}{2\pi } \sqrt{1 - x^2} U_{m + n - 2t}(x)\) and integrate using the orthogonality for the Chebyshev polynomials, so that Lemma 6.4 gives

$$\begin{aligned} \frac{1}{4} \alpha _t(m, n) = \sum _{k = 0}^n \beta _k(m, n) C_k(m, n) R_k(\mu (t); 1, 1, q^{-2m-2}, q^{-2n-2}; q^2). \end{aligned}$$

The q-Racah polynomials \(R_k(\mu (t); 1, 1, q^{-2m-2}, q^{-2n-2}; q^2)\) satisfy the orthogonality relations

$$\begin{aligned}&\sum _{t = 0}^{n} q^{2t} (1 - q^{2m + 2n + 2 - 4t}) R_i(\mu (t); 1, 1, q^{-2m - 2}, q^{-2n - 2}; q^2) \\&\qquad \times \, R_j(\mu (t); 1, 1, q^{-2m - 2}, q^{-2n - 2}; q^2) \\&\quad =\, \delta _{i,j} q^{-2i(m + n + 1)} \frac{(1 - q^{2m + 2})(1 - q^{2n + 2})}{(1 - q^{4i + 2})} \frac{(q^{2m + 4}, q^{2n + 4}; q^2)_{i}}{(q^{-2m}, q^{-2n}; q^2)_{i}}. \end{aligned}$$

Using the orthogonality relations, we find the explicit expression of \(\beta _k(m,n)\) in terms of \(\alpha _t(m,n)\), and using the explicit expression of \(\alpha _t(m, n)\) of Theorem 4.8 gives

$$\begin{aligned} \beta _k(m, n)= & {} \frac{1}{4} \frac{q^{2k(m + n + 1)}}{C_k(m, n)} \frac{(1 - q^{4k + 2})}{(1 - q^{2m + 2})(1 - q^{2n + 2})} \frac{(q^{-2m}, q^{-2n}; q^2)_k}{(q^{2m + 4}, q^{2n + 4}; q^2)_{k}}\\&\times \,\sum _{t = 0}^{n} q^{2t} (1 - q^{2m + 2n + 2 - 4t}) R_k(\mu (t); 1, 1, q^{-2m - 2}, q^{-2n - 2}; q^2)\\&\times \,\alpha _t(m, n). \end{aligned}$$

This expression is summable by Lemma 6.6. Collecting the coefficients proves the proposition. \(\square \)

Last part of the proof of Theorem 4.8

Now that we have proved Theorem 4.15, Corollary 4.16 is immediate, since the coefficients of the diagonal matrix T(x) are positive on \((-1,1)\). So the weight is strictly positive definite on \((-1,1)\), which is the last step to be taken in the proof of Theorem 4.8. \(\square \)

6.3 Summation formula for Clebsch–Gordan coefficients

In this subsection, we prove Lemma 5.8, which has been used in the first part of the proof of Theorem 4.8, see Sect. 5.3. The proof of Lemma 5.8 is somewhat involved, since we employ an indirect approach using induction and Theorem 4.6.

Proof of Lemma 5.8

Assume for the moment that

$$\begin{aligned} \sum _{i = -\ell }^{\ell } \left| C^{\ell _1, \ell _2, \ell }_{m, m - i, i} \right| ^2 q^{-2i} = q^{2\ell _1 - 2\ell - 2m} \frac{(1 - q^{4\ell + 2})}{(1 - q^{4\ell _1 + 2})}. \end{aligned}$$
(6.3)

Assuming (6.3) the lemma follows, using \(C^{\ell _1, \ell _2, \ell }_{m_1, m_2, i}=0\) if \(m_1-m_2\not =i\),

$$\begin{aligned} \sum _{i = -\ell }^{\ell } \sum _{m_1 = -\ell _1}^{\ell _1} \sum _{m_2 = -\ell _2}^{\ell _2} |C^{\ell _1, \ell _2, \ell }_{m_1, m_2, i}|^2 q^{2(m_1 + m_2)}&= \sum _{m_1 = -\ell _1}^{\ell _1} q^{4m_1} \sum _{i = -\ell }^{\ell } \left| C^{\ell _1, \ell _2, \ell }_{m_1, m_1-i, i} \right| ^2 q^{-2i} \\&\quad = \sum _{m_1 = -\ell _1}^{\ell _1} q^{2\ell _1 - 2\ell + 2m_1} \frac{(1 - q^{4\ell + 2})}{(1 - q^{4\ell _1 + 2})} \\&\quad = q^{-2\ell } \frac{(1 - q^{4\ell + 2})}{(1 - q^2)}. \end{aligned}$$
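The last equality is the geometric sum

$$\begin{aligned} \sum _{m_1 = -\ell _1}^{\ell _1} q^{2\ell _1 + 2m_1} = \sum _{j=0}^{2\ell _1} q^{2j} = \frac{1-q^{4\ell _1+2}}{1-q^2}, \end{aligned}$$

which cancels the factor \(1-q^{4\ell _1+2}\) in the denominator.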

It remains to prove (6.3). We do this in case \(\ell _1+\ell _2=\ell \). Put \((\ell _1, \ell _2) = (k/2, \ell - k/2)=\xi (0,k)\) for \(k\in \mathbb {N}\) and \(k\le 2\ell \). Using the explicit expression for the Clebsch–Gordan coefficients (8.3), we find that in this special case the left-hand side of (6.3) equals

$$\begin{aligned}&\sum \frac{ (q^2; q^2)_{k} (q^2; q^2)_{2\ell - k} (q^2; q^2)_{\ell - i} (q^2; q^2)_{\ell + i} }{ (q^2; q^2)_{k/2 - m} (q^2; q^2)_{k/2 + m} (q^2; q^2)_{\ell - k/2 - m + i} (q^2; q^2)_{\ell - k/2 + m - i} (q^2; q^2)_{2\ell } } \\&\quad \times \, q^{-2i + 2(m - k/2)(m - i + \ell - k/2)}, \end{aligned}$$

where the sum runs over i for \(-\ell + k/2 + m \le i \le \ell - k/2 + m\). Substitute \(i \mapsto p - \ell + k/2 + m\) to see that this equals

$$\begin{aligned}&\sum _{i = -\ell + k/2 + m}^{\ell - k/2 + m} \frac{ (q^2; q^2)_{\ell - i} (q^2; q^2)_{\ell + i} }{ (q^2; q^2)_{\ell - k/2 + m - i} (q^2; q^2)_{\ell - k/2 - m + i} } q^{-2i(1 + m + k/2)} \\&\quad \quad = \sum _{p = 0}^{2\ell - k} \frac{ (q^2; q^2)_{2\ell - p - k/2 - m} (q^2; q^2)_{p + k/2 + m} }{ (q^2; q^2)_p (q^2; q^2)_{2\ell - p - k} } q^{-2(1 + m + k/2)(p - \ell + k/2 + m)}. \end{aligned}$$

Simplifying the expression, we find that

$$\begin{aligned}&q^{-2(1 + m + k/2)(-\ell + k/2 + m)} \frac{ (q^2; q^2)_{2\ell - k/2 - m} (q^2; q^2)_{k/2 + m} }{ (q^2; q^2)_{2\ell - k} } \\&\quad \times \,{}_{2}\varphi _{1}\biggl (\genfrac{}{}{0.0pt}{}{q^{-4\ell + 2k}, q^{2m + k + 2}}{q^{-4\ell + k + 2m}};q^2,q^{-2k-2}\biggr ) \end{aligned}$$

is equal to

$$\begin{aligned} q^{-2(1 + m + k/2)(-\ell + k/2 + m)} \frac{ (q^2; q^2)_{2\ell - k/2 - m} (q^2; q^2)_{k/2 + m} (q^{-4\ell - 2};q^2)_{2\ell - k} }{ (q^2; q^2)_{2\ell - k} (q^{-4\ell + k + 2m}; q^2)_{2\ell - k} }, \end{aligned}$$

since the \({}_2 \varphi _1\) can be summed by the reversed q-Chu-Vandermonde sum [15, (II.7)]. Putting everything together proves (6.3), and hence Lemma 5.8 in the case \((\ell _1,\ell _2)=\xi (0,k)\), i.e. \(\ell _1+\ell _2=\ell \).
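For convenience, the summation used here reads, in base \(q^2\) and with a terminating upper parameter \(q^{-2N}\),

$$\begin{aligned} {}_{2}\varphi _{1}\biggl (\genfrac{}{}{0.0pt}{}{a, q^{-2N}}{c};q^2,\frac{c\,q^{2N}}{a}\biggr ) = \frac{(c/a;q^2)_N}{(c;q^2)_N}, \end{aligned}$$

applied with \(N=2\ell -k\), \(a=q^{2m+k+2}\) and \(c=q^{-4\ell +k+2m}\), so that \(cq^{2N}/a = q^{-2k-2}\) and \(c/a=q^{-4\ell -2}\).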

To prove Lemma 5.8 in general, we set

$$\begin{aligned} f^{\ell }_{\ell _1, \ell _2}(\lambda ) := \mathrm{tr}(\varPhi ^{\ell }_{\ell _1, \ell _2}(A^{\lambda })) = \sum _{i = -\ell }^{\ell } \sum _{m_1 = -\ell _1}^{\ell _1} \sum _{m_2 = -\ell _2}^{\ell _2} \left| C^{\ell _1, \ell _2, \ell }_{m_1, m_2, i} \right| ^2 q^{-\lambda (m_1 + m_2)}, \end{aligned}$$

hence it is sufficient to calculate \(f^{\ell }_{\ell _1, \ell _2}(-2)\). We will show by induction on n that \(f^{\ell }_{\xi (n,k)}(-2)\) is independent of \((n,k)\), or equivalently that \(f^{\ell }_{\ell _1, \ell _2}(-2)\) is independent of \((\ell _1,\ell _2)\). Since we have established the case \(n=0\), Lemma 5.8 then follows.

In order to perform the induction step, we consider the recursion of Theorem 4.6. Using \(\varphi (A^\lambda ) = \frac{1}{2}(q^{1+\lambda }+q^{-1-\lambda })\), we see that \(\varphi (1)=\varphi (A^{-2})=\frac{1}{2}(q+q^{-1})\). Evaluating Theorem 4.6 at \(A^0=1\), taking traces and using Proposition 5.2(ii), we find

$$\begin{aligned} \frac{1}{2}(q+q^{-1})(2\ell +1) = (A_{1/2,1/2}+A_{-1/2, -1/2} + A_{-1/2, 1/2} + A_{1/2, -1/2})(2\ell +1).\nonumber \\ \end{aligned}$$
(6.4)

Next we evaluate Theorem 4.6 at \(A^{-2}\) and we take traces, so, using \(\varphi (A^{-2})=\varphi (1)\),

$$\begin{aligned} \frac{1}{2} (q+q^{-1})f^{\ell }_{\ell _1, \ell _2}(-2) = \sum _{i, j = \pm 1/2} A_{i, j} f^{\ell }_{\ell _1+i, \ell _2+j}(-2), \end{aligned}$$

which we rewrite, assuming \(f^{\ell }_{\ell _1, \ell _2}(-2)=F^\ell \) is independent of \((\ell _1,\ell _2)\) for \(\ell _1+\ell _2\le \ell +n\), so that \(f^{\ell }_{\ell _1+\frac{1}{2}, \ell _2+\frac{1}{2}}(-2)\) is

$$\begin{aligned} \frac{1}{A_{1/2,1/2}}\left( \frac{1}{2}(q+q^{-1}) - A_{1/2, -1/2} - A_{-1/2, 1/2} - A_{-1/2, -1/2}\right) F^\ell = F^\ell , \end{aligned}$$

by (6.4) for the last equality. So the statement also follows for \(\ell _1+\ell _2=\ell +n+1\), and the lemma follows. \(\square \)

7 q-Difference operators for the matrix-valued polynomials

We continue the study of q-difference operators for the matrix-valued orthogonal polynomials started in Corollary 5.13. In particular, we show that the assumption on the invertibility of \(\hat{\varPhi }_0\) follows from the LDU-decomposition in Theorem 4.15. Then we make the coefficients in the matrix-valued second-order q-difference operator of Corollary 5.13 explicit in Sect. 7.1. Comparing to the scalar-valued Askey–Wilson q-difference operators, see e.g. [2, 15, 19, 26], we view the q-difference operator as a matrix-valued analogue of the Askey–Wilson q-difference operator. Next, having the matrix-valued orthogonal polynomials as eigenfunctions of the matrix-valued Askey–Wilson q-difference operator, we use this to obtain an explicit expression of the matrix entries of the matrix-valued orthogonal polynomials in terms of scalar-valued orthogonal polynomials from the q-Askey scheme by decoupling of the q-difference operator using the matrix-valued polynomial L in the LDU-decomposition of Theorem 4.15. From this expression, we can obtain an explicit expression for the coefficients in the matrix-valued three-term recurrence relation for the matrix-valued orthogonal polynomials, hence proving Theorem 4.11 and Corollary 4.12.

7.1 Explicit expressions of the q-difference operators

In order to make Corollary 5.13 explicit, we need to study the invertibility of the matrix \(\hat{\varPhi }^\ell _0(A^\lambda )\). For this, we first use Theorem 5.5 and its Corollary 5.6, and in particular (6.1). Because of Proposition 5.2(ii), this is only a single sum. In the notation of (5.15), we find

$$\begin{aligned} \begin{aligned} W(\psi )_{k,p}(A^\lambda )&= \sum _{n=-\ell }^\ell (\hat{\varPhi }^\ell _0)_{n,k}(A^{\lambda -1}) \overline{(\hat{\varPhi }^\ell _0)_{n,p}(A^{-\lambda -1})} \\&= \left( \Bigl ( \hat{\varPhi }^\ell _0(A^{-\lambda -1})\Bigr )^*\, \hat{\varPhi }^\ell _0(A^{\lambda -1})\right) _{p,k}. \end{aligned} \end{aligned}$$

Taking the determinant gives, recalling \(\mu (z)= \frac{1}{2}(z+z^{-1})\),

$$\begin{aligned} \begin{aligned} \overline{\det \left( \hat{\varPhi }^\ell _0(A^{-\lambda -1}) \right) }&\det \left( \hat{\varPhi }^\ell _0(A^{\lambda -1}) \right) = \det \bigl (W(\mu (q^\lambda ))\bigr ) = \det \bigl (T(\mu (q^\lambda ))\bigr )\\&= \prod _{k=0}^{2\ell } T(\mu (q^\lambda ))_{k,k} = \prod _{k=0}^{2\ell } 4 c_k(\ell ) (q^{2+2\lambda }, q^{2-2\lambda };q^2)_k \end{aligned} \end{aligned}$$

using the LDU-decomposition for the weight of Theorem 4.15 and (4.12). The right-hand side is non-zero for \(\lambda \in \mathbb {Z}\) unless \(1\le |\lambda |\le 2\ell \). So \(\hat{\varPhi }^\ell _0(A^\lambda )\) is invertible for \(\lambda \ge 2\ell \) or \(\lambda < -2\ell -1\) or \(\lambda =-1\), i.e. for an infinite number of \(\lambda \in \mathbb {Z}\) and it is meaningful to consider Corollary 5.13.
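Indeed, for \(\lambda \in \mathbb {Z}\) and \(k\ge 1\), the factor \((q^{2+2\lambda };q^2)_k\) vanishes precisely for \(\lambda \in \{-k,\ldots ,-1\}\) and \((q^{2-2\lambda };q^2)_k\) vanishes precisely for \(\lambda \in \{1,\ldots ,k\}\), so

$$\begin{aligned} \prod _{k=0}^{2\ell } (q^{2+2\lambda }, q^{2-2\lambda };q^2)_k = 0 \quad \Longleftrightarrow \quad 1\le |\lambda |\le 2\ell , \qquad \lambda \in \mathbb {Z}. \end{aligned}$$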

Proof of Theorem 4.13

Corollary 5.13 gives a second-order q-difference equation for the matrix-valued orthogonal polynomials for an infinite set of \(\lambda \). So it suffices to check that for \(i=1,2\) the matrix-valued functions \(\tilde{M}_i\), \(\tilde{N}_i\) of Corollary 5.13 coincide with \(\mathcal {M}_i\), \(\mathcal {N}_i\), where \(\mathcal {N}_i(z)=\mathcal {M}_i(z^{-1})\), of Theorem 4.13, or

$$\begin{aligned} \begin{aligned} \overline{{\hat{\varPhi }}^\ell _0(A^{\lambda -1})} \mathcal {M}_i(q^{\lambda })&= \overline{M_i(q^{\lambda -1}) {\hat{\varPhi }}^\ell _0(A^{\lambda })}, \\ \overline{{\hat{\varPhi }}^\ell _0(A^{\lambda -1})} \mathcal {N}_i(q^{\lambda })&= \overline{N_i(q^{\lambda -1}) {\hat{\varPhi }}^\ell _0(A^{\lambda -2})}, \end{aligned} \end{aligned}$$
(7.1)

where \(M_i\), \(N_i\) are as in (5.16), see Proposition 5.11, and \(\mathcal {M}_i\), \(\mathcal {N}_i\) are as in Theorem 4.13, with \(\mathcal {N}_i(z)=\mathcal {M}_i(z^{-1})\).

By (5.18), we need

$$\begin{aligned} \varLambda _0(i)= \mathcal {M}_i(z) + \mathcal {N}_i(z) = \mathcal {M}_i(z) + \mathcal {M}_i(z^{-1}), \end{aligned}$$
(7.2)

which is an easy check using the explicit expressions of Theorem 4.13. Now (7.2) and (5.16) for \(n=0\) show that either equation of (7.1) implies the other. Indeed, assuming the second equation of (7.1) holds, then

$$\begin{aligned} \overline{{\hat{\varPhi }}^\ell _0(A^\lambda )}\mathcal {M}_i(q^{\lambda +1}) +\overline{{\hat{\varPhi }}^\ell _0(A^\lambda )}\mathcal {N}_i(q^{\lambda +1})= & {} \, \overline{{\hat{\varPhi }}^\ell _0(A^\lambda )} \varLambda _0(i) \\= & {} \,\overline{M_i(q^\lambda ) {\hat{\varPhi }}^\ell _0(A^{\lambda +1})} + \overline{N_i(q^\lambda ) {\hat{\varPhi }}^\ell _0(A^{\lambda -1})} \\= & {} \,\overline{M_i(q^\lambda ) {\hat{\varPhi }}^\ell _0(A^{\lambda +1})} + \overline{{\hat{\varPhi }}^\ell _0(A^\lambda )}\mathcal {N}_i(q^{\lambda +1}) \end{aligned}$$

implying the first equation of (7.1).

By Proposition 5.2(ii) and (5.15), the matrix entries of \({\hat{\varPhi }}^\ell _0(A^{\lambda })\) are Laurent series in \(q^\lambda \). Setting \(z=q^\lambda \), we see that in order to verify (7.1) entry-wise, we need to check equalities for Laurent series in z.

We first consider the second equality of (7.1) for \(i=1\). In this case, the matrices \(N_1\) and \(\mathcal {N}_1\) are band-limited. Hence, the \((m,n)\)th entry of both sides of (7.1) involves only one or two terms, so we need to check

$$\begin{aligned}&\varPhi ^\ell _{\xi (0,n-1)}(A^{\lambda -1})_{m-\ell ,m-\ell }\Big \vert _{z=q^\lambda } \mathcal {N}_1(z)_{n-1,n} +\, \varPhi ^\ell _{\xi (0,n)}(A^{\lambda -1})_{m-\ell ,m-\ell }\Big \vert _{z=q^\lambda } \mathcal {N}_1(z)_{n,n} \nonumber \\&\quad =\,N_1(\frac{z}{q})_{m,m}\varPhi ^\ell _{\xi (0,n)}(A^{\lambda -2})_{m-\ell ,m-\ell }\Big \vert _{z=q^\lambda }. \end{aligned}$$
(7.3)

The proof of (7.3) involves the explicit expression of the spherical functions in terms of Clebsch–Gordan coefficients using Proposition 5.2(ii). It is given in Appendix 2.3.

The statement for the second q-difference equation, i.e. for \(i=2\), follows from the symmetries of Proposition 5.11 and Lemma 5.14. \(\square \)

The explicit expressions were initially obtained by computer algebra; the proof as presented here and in Appendix 2.3 was found later.

7.2 Explicit expressions for the matrix entries of the matrix-valued orthogonal polynomials

Having established the q-difference equations for the matrix-valued orthogonal polynomials of Theorem 4.13 and having the diagonal part of the LDU-decomposition of the weight in terms of weight functions for the continuous q-ultraspherical polynomials in Theorem 4.15, it is natural to look at the q-difference operators conjugated by the polynomial function \(L^t\). It turns out that this completely decouples one of the second-order q-difference operators of Theorem 4.13. This gives the opportunity to link the matrix entries of the matrix-valued orthogonal polynomials to continuous q-ultraspherical polynomials. In order to determine the coefficients, we use the other q-difference operator and the orthogonality relations. Having such an explicit expression, we can determine the three-term recurrence relation for the monic matrix-valued orthogonal polynomials straightforwardly, and hence also for the matrix-valued orthogonal polynomials \(P_n\), since we already have determined the leading coefficient in Corollary 4.9.

The first step is to conjugate the second-order q-difference operator \(D_1\) of Theorem 4.13 with the matrix \(L^t\) of the LDU-decomposition of Theorem 4.15 into a diagonal q-difference operator. This conjugation is inspired by the result of [23, Theorem 6.1]. The same conjugation takes \(D_2\) into a tridiagonal q-difference operator. For any \(n \in \mathbb {N}\), let \(\mathcal {R}_n(x) = L^t(x) Q_n(x)\), where \(Q_n(x) = P_n(x)\bigl (\text {lc}(P_n)\bigr )^{-1}\) denotes the corresponding monic polynomial. Note that we have determined the leading coefficient \(\text {lc}(P_n)\) in Corollary 4.9. Then \((\mathcal {R}_n)_{n\ge 0}\) forms a family of matrix-valued polynomials, but note that the degree of \(\mathcal {R}_n\) is larger than n, and that the leading coefficient of \(\mathcal {R}_n\) is singular. Note that the \(\mathcal {R}_n\) satisfy the orthogonality relations

$$\begin{aligned} \int _{-1}^1 \bigl (\mathcal {R}_n(x)\bigr )^*T(x) \mathcal {R}_m(x)\, \sqrt{1-x^2}dx= & {} \int _{-1}^1 \bigl (Q_n(x)\bigr )^*W(x) Q_m(x)\, \sqrt{1-x^2}dx \nonumber \\= & {} \delta _{m,n}\frac{\pi }{2} \bigl (\text {lc}(P_m)^*\bigr )^{-1} G_m \bigl (\text {lc}(P_m)\bigr )^{-1}.\quad \quad \quad \end{aligned}$$
(7.4)

Theorem 7.1

The polynomials \((\mathcal {R}_n)_{n \ge 0}\) are eigenfunctions of the q-difference operators

$$\begin{aligned} \mathcal {D}_i = \mathcal {K}_i(z) \eta _{q} + \mathcal {K}_i(z^{-1}) \eta _{q^{-1}}, \end{aligned}$$

with eigenvalues \(\varLambda _n(i)\), where

$$\begin{aligned} \mathcal {K}_1(z)&= \sum _{i=0}^{2\ell } \frac{q^{1-i}}{(1-q^2)^2} \frac{(1 - q^{2i+2} z^2)}{(1 - z^2)} E_{i,i}, \\ \mathcal {K}_2(z)&= -\sum _{i=1}^{2\ell } q^{i - 2\ell + 1} \frac{ (1 - q^{4\ell - 2i + 2}) }{ (1 - q^2)^2 } \frac{z}{(1 - z^2)} E_{i, i-1} \\&\quad +\, \sum _{i=0}^{2\ell } 2 q^{i - 2\ell + 1} \frac{ 1 }{ (1 - q^2)^2 } \frac{ (1 + q^{4\ell + 2}) }{ (1 + q^{2i})(1 + q^{2i + 2}) } \frac{ (1 - q^{2i + 2} z^2) }{ (1 - z^2) } E_{i,i} \\&\quad - \,\sum _{i=0}^{2\ell -1} q^{i - 2\ell + 1} \frac{ 1 }{ (1 - q^2)^2 } \frac{ (1 - q^{4\ell + 2i + 4})(1 - q^{2i + 2})^2 }{ (1 - q^{4i+6})(1 - q^{4i+2})(1 + q^{2i + 2})^2 } \\&\quad \times \,\frac{ (1 - q^{2i+2}z^2)(1 - q^{2i+4}z^2) }{ z (1 - z^2) } E_{i,i+1}. \end{aligned}$$

Proof

We start by observing that the monic matrix-valued orthogonal polynomials \(Q_n\) are eigenfunctions of the second-order q-difference operators \(D_i\) of Theorem 4.13 for the eigenvalue \(\text {lc}(P_n)\varLambda _n(i) \text {lc}(P_n)^{-1} =\varLambda _n(i)\), since the matrices are diagonal and thus commute. By conjugation, we find that \(\mathcal {R}_n\) satisfy

$$\begin{aligned} \mathcal {K}_i(z) \breve{\mathcal {R}}_n(qz) + \mathcal {K}_i(z^{-1})\breve{\mathcal {R}}_n(\frac{z}{q}) = \mathcal {R}_n(x) \varLambda _n(i),\, \mathcal {K}_i(z) = \breve{L}^t(z) \mathcal {M}_i(z) \bigl (\breve{L}^t(qz)\bigr )^{-1} \end{aligned}$$

using the notation \(\breve{L}^t(z)=L^t(\mu (z))\), etc., with \(x=\mu (z) = \frac{1}{2}(z+z^{-1})\) as before. It remains to calculate \(\mathcal {K}_i(z)\) explicitly. We show in Appendix 2.4 that the expressions for \(\mathcal {K}_i\) are correct by verifying

$$\begin{aligned} \mathcal {K}_i(z) \breve{L}^t(qz) = \breve{L}^t(z) \mathcal {M}_i(z), \end{aligned}$$
(7.5)

for \(i=1,2\). \(\square \)

Lemma 7.2

For \(n \in {\mathbb {N}}\) and \(0 \le i,j \le 2\ell \), we have

$$\begin{aligned} \mathcal {R}_n(x)_{i j} = \beta _n(i, j)\, C_{n + j - i}(x; q^{2i + 2} | q^2), \end{aligned}$$

where \(C_{n}(x; \beta | q)\) are the continuous q-ultraspherical polynomials (4.10) and \(\beta _n(i, j)\) is a constant depending on i, j and n.

Proof

Evaluate \(\mathcal {D}_1 \mathcal {R}_n(x) = \mathcal {R}_n(x) \varLambda _n(1)\) in entry (ij). Since \(\mathcal {D}_1\) is decoupled, we get a q-difference equation for the polynomial \(\bigl (\mathcal {R}_n\bigr )_{i,j}\), which, after simplifying, is

$$\begin{aligned} \begin{aligned}&\frac{(1 - q^{2i + 2} z^2)}{(1 - z^2)} \breve{\mathcal {R}}_n(qz)_{i j} + \frac{(1 - q^{2i + 2} z^{-2})}{(1 - z^{-2})} \breve{\mathcal {R}}_n(q^{-1}z)_{i j} \\&\qquad = q^{1+i} (q^{-j - n - 1} + q^{j + n + 1}) \breve{\mathcal {R}}_n(z)_{i j}. \end{aligned} \end{aligned}$$

All polynomial solutions of this q-difference equation are given by a multiple of the Askey–Wilson polynomials \(p_{n + j - i}(x; q^{i + 1}, -q^{i + 1}, q^{1/2}, -q^{1/2} | q)\), see [15, §7.5], [19, §16.3], [26, §14.1]. Apply the quadratic transformation, see [4, (4.20)], to see that the polynomial solutions are \(p_{n + j - i}(x; q^{i + 1}, -q^{i + 1}, q^{i + 2}, -q^{i + 2} | q^2)\). These polynomials are multiples of continuous q-ultraspherical polynomials \(C_{n + j - i}(x; q^{2i + 2} | q^2)\), [19, §13.2], [15, §7.4–5], [26, §14.10.1]. Hence, the polynomial matrix entries \(\mathcal {R}_n(x)_{ij}\) are a multiple of \(C_{n + j - i}(x; q^{2i + 2} | q^2)\). \(\square \)

Our next objective is to determine the coefficients \(\beta _n(i,j)\) of Lemma 7.2. Having exploited that the matrix-valued polynomials \(\mathcal {R}_n\) are eigenfunctions for the decoupled operator \(\mathcal {D}_1\) of Theorem 7.1, we can use Lemma 7.2 in (7.4) to calculate the (ij)th coefficient of (7.4):

$$\begin{aligned}&\frac{2}{\pi } \sum _{k=0}^{2\ell } \overline{\beta _n(k,i)} \beta _m(k,j) c_k(\ell ) \,\int _{-1}^1 C_{n+i-k}(x;q^{2k+2}|q^2)\nonumber \\&\qquad \times C_{m+j-k}(x;q^{2k+2}|q^2) \frac{w(x;q^{2k+2}|q^2)}{\sqrt{1-x^2}}\, dx \nonumber \\&\quad =\,\delta _{m,n} \delta _{i,j} (G_m)_{i,i} (\text {lc}(P_m))_{i,i}^{-2}, \end{aligned}$$
(7.6)

using that \(\text {lc}(P_m)\) and the squared norm matrix \(G_m\) are diagonal matrices, see Corollary 4.9 and Theorem 4.8. The integral in (7.6) can be evaluated by (4.11). In particular, the case \(m+j=n+i\) of (7.6) gives the explicit orthogonality relations

$$\begin{aligned}&\sum _{k=0}^{2\ell } \overline{\beta _{m+j-i}(k,i)} \beta _m(k,j) c_k(\ell ) \frac{ (q^{2k+4}; q^2)_k }{ (q^2; q^2)_{k} } \frac{ (q^{4k+4};q^2)_{m+j-k} }{ (q^2;q^2)_{m+j-k} } \frac{ (1 - q^{2k+2}) }{ (1 - q^{2m+2j+2}) } \nonumber \\&\quad =\, \delta _{i,j} 4^{1-m} q^{-2\ell } \frac{ (1 - q^{4\ell +2})^2 }{ (1 - q^{2m+2i+2})(1 - q^{4\ell -2i+2m+2}) } \times \,\frac{ (q^2, q^{4\ell +4}; q^2)_{m}^2 }{ (q^{2i+2}, q^{4\ell -2i+2}; q^2)_{m}^2 }.\nonumber \\ \end{aligned}$$
(7.7)

Theorem 7.3

We have

$$\begin{aligned} \mathcal {R}_n(x)_{i, j}&= (-1)^i\, 2^{-n} \frac{ (q^2, q^{4\ell + 4}; q^2)_n }{ (q^{2j + 2}, q^{4\ell - 2j + 2}; q^2)_n } \frac{ (q^{-4\ell }, q^{-2j - 2n}; q^2)_i }{ (q^2, q^{4\ell + 4}; q^2)_i }\\&\quad \times \frac{ (q^2; q^2)_{n + j - i} }{ (q^{4i + 4}; q^2)_{n + j - i} } q^{j(2i + 1) + 2i(2\ell + n + 1) - i^2} \\&\quad \times R_i(\mu (j); 1, 1, q^{-2n - 2j - 2}, q^{-4\ell - 2}; q^2) C_{n + j - i}(x; q^{2i + 2}| q^2). \end{aligned}$$

Proof

From Theorem 7.1, we have

$$\begin{aligned} \mathcal {K}_2(z) \breve{\mathcal {R}}_n(qz) + \mathcal {K}_2(z^{-1}) \breve{\mathcal {R}}_n(q^{-1}z) = \breve{\mathcal {R}}_n(z) \varLambda _n(2). \end{aligned}$$
(7.8)

Evaluate (7.8) in entry (ij) and use Lemma 7.2 to find a three-term recurrence relation in i of \(\beta _n(i, j)\),

$$\begin{aligned}&\frac{(q^{-j - n - 1} + q^{j + n + 1})}{(q^{-1} - q)^2} \beta _n(i, j) \breve{C}_{n + j - i}(z; q^{2i + 2}| q^2)\\&\quad = \beta _n(i + 1, j) \Bigl ( \mathcal {K}_2(z)_{i, i+1} \breve{C}_{n + j - i - 1}(qz; q^{2i + 4} | q^2) \\&\quad \quad \quad \quad \quad + \,\mathcal {K}_2(z^{-1})_{i, i+1} \breve{C}_{n + j - i - 1}(q^{-1}z; q^{2i + 4} | q^2) \Bigr ) \\&\quad \quad + \beta _n(i, j) \Bigl ( \mathcal {K}_2(z)_{i i} \breve{C}_{n + j - i - 1}(qz; q^{2i + 2} | q^2) \\&\qquad \quad \quad \quad +\, \mathcal {K}_2(z^{-1})_{i i} \breve{C}_{n + j - i}(q^{-1}z; q^{2i + 2} | q^2)\Bigr ) \\&\quad \quad + \beta _n(i - 1, j) \Bigl (\mathcal {K}_2(z)_{i, i-1} \breve{C}_{n + j - i + 1}(qz; q^{2i} | q^2) \\&\qquad \quad \quad \quad +\, \mathcal {K}_2(z^{-1})_{i, i-1} \breve{C}_{n + j - i + 1}(q^{-1}z; q^{2i} | q^2) \Bigr ). \end{aligned}$$

Multiply by \((1 - z^2)(1 - q^2)^2\) and evaluate the Laurent expansion at the leading coefficient in z of degree \(n + j - i + 3\). The leading coefficient in z of the continuous q-ultraspherical polynomial \(\breve{C}_n(\alpha z; \beta | q)\) is \(\frac{(\beta ; q)_n}{(q; q)_n} \alpha ^n\). After a straightforward computation, this leads to the three-term recurrence relation

$$\begin{aligned}&(1 + q^{4\ell - 2j + 2n + 2})\beta _n(i, j) \\&\quad = q^{2i + 3} \frac{ (1 - q^{4\ell + 2i + 4})(1 - q^{2i + 2})^2(1 - q^{2n + 2i + 2j + 4}) }{ (1 - q^{n + i + j + 2})(1 - q^{4i + 6})(1 - q^{4i + 2})(1 - q^{2i + 2})^2 } \beta _n(i + 1, j) \\&\qquad -\, 2 q^{2i + 2} \frac{(1 + q^{4\ell + 2})(1 - q^{2n + 2j + 2})}{(1 + q^{2i})(1 + q^{2i + 2})} \beta _n(i, j) \\&\qquad +\, q^{2i + 1} (1 - q^{n + i + j + 1})(1 - q^{4\ell - 2i + 2})(1 - q^{2n + 2j - 2i + 2}) \beta _n(i -1, j). \end{aligned}$$

This recursion relation can be rewritten as the three-term recurrence relation for the q-Racah polynomials after rescaling, see [15, Chap. 7], [19, §15.6], [26, §14.2]. We identify \((\alpha ,\beta ,\gamma ,\delta )\) as in (4.13) with \((1, 1, q^{-2n - 2j - 2}, q^{-4\ell - 2})\) in base \(q^2\). This gives

$$\begin{aligned} \beta _n(i, j)= & {} \gamma _n(j) (-1)^i \frac{(q^{-4\ell }, q^{-2j - 2n}; q^2)_i}{(q^2, q^{4\ell + 4}; q^2)_i} \frac{(q^2; q^2)_{n + j - i}}{(q^{4i + 4}; q^2)_{n + j - i}} q^{2ji + 2i(2\ell + n + 1) - i^2}\\&\times \,R_i(\mu (j); 1, 1, q^{-2n - 2j - 2}, q^{-4\ell - 2}; q^2) \end{aligned}$$

for some constant \(\gamma _n(j)\) independent of i. Plugging this expression in (7.7) for \(i=j\) gives \(|\gamma _n(j)|^2\) by comparing with the explicit orthogonality relations for the q-Racah polynomials, see [15, Chap. 7], [19, §15.6], [26, §14.2].

For \(j\ge i\), we have \(\mathcal {R}_n(x)_{i,j} = L(x)_{j,i}\, (x^n + \text {l.o.t.})\), and since the explicit expression of \(L(x)_{j,i}\) shows that its leading coefficient (of degree \(j-i\)) is positive, we see that the leading coefficient (of degree \(n+j-i\)) of \(\mathcal {R}_n(x)_{i,j}\) in case \(j\ge i\) is positive. Since \(\gamma _n(j)\) is independent of i, we take \(i=0\), which shows that \(\gamma _n(j)\) is positive. \(\square \)

Proof of Theorem 4.17

Using Theorem 7.3 with the explicit inverse of L(x) as given in Theorem 4.15 gives an explicit expression for the matrix entries of \(Q_n(x) = (L(x)^{-1})^t \mathcal {R}_n(x)\). Then we obtain the matrix entries of \(P_n(x) = Q_n(x) \text {lc}(P_n)\) from this expression and Corollary 4.9, stating that the leading coefficient is a diagonal matrix. \(\square \)

7.3 Three-term recursion relation

The matrix-valued orthogonal polynomials satisfy a three-term recurrence relation, see Sect. 2. Moreover, Theorem 4.6 shows that the three-term recurrence relation can in principle be obtained from the tensor product decomposition. However, in that case we obtain the coefficients of the matrices in the three-term recurrence relation in terms of sums of squares of Clebsch–Gordan coefficients, and this leads to a cumbersome result. In order to obtain an explicit expression for the three-term recurrence relation as in Theorem 4.11 and Corollary 4.12, we use the explicit expression obtained in Theorem 4.17 and Lemma 7.4, which is [23, Lemma 5.1]. Lemma 7.4 is only used to determine \(X_n\).

Lemma 7.4

Let \((Q_n)_{n \ge 0}\) be a sequence of monic (matrix-valued) orthogonal polynomials and write \(Q_n(x) = \sum _{k = 0}^n Q^{n}_{k} x^k\), where \(Q^{n}_{k} \in \mathrm{Mat}_N({\mathbb {C}})\). The sequence \((Q_n)_{n \ge 0}\) satisfies the three-term recurrence relation

$$\begin{aligned} xQ_n(x) = Q_{n+1}(x) + Q_n(x) X_n + Q_{n-1}(x) Y_n, \end{aligned}$$

where \(Y_{-1} = 0\), \(Q_0(x) = I\) and

$$\begin{aligned} X_n = Q^n_{n-1} - Q^{n+1}_{n}, \quad Y_n = Q^{n}_{n-2} - Q^{n+1}_{n-1} - Q^n_{n-1} X_n. \end{aligned}$$
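A quick check of these formulas, using that the \(Q_n\) are monic, i.e. \(Q^n_n=I\): comparing the coefficients of \(x^{n}\) and of \(x^{n-1}\) on both sides of \(xQ_n(x) = Q_{n+1}(x) + Q_n(x) X_n + Q_{n-1}(x) Y_n\) gives

$$\begin{aligned} Q^n_{n-1} = Q^{n+1}_{n} + X_n, \qquad Q^{n}_{n-2} = Q^{n+1}_{n-1} + Q^n_{n-1} X_n + Y_n, \end{aligned}$$

from which the stated expressions for \(X_n\) and \(Y_n\) follow.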

So we start by calculating the one-but-leading term in the monic matrix-valued orthogonal polynomials.

Lemma 7.5

For the monic matrix-valued orthogonal polynomials \((Q_n)_{n \ge 0}\), we have

$$\begin{aligned} Q^n_{n-1}&= - \sum _{j = 0}^{2\ell - 1} \frac{q}{2} \frac{ (1 - q^{2n})(1 - q^{2j + 2}) }{ (1 - q^2)(1 - q^{2n + 2j + 2}) } E_{j, j+1} \\&\quad - \sum _{j = 1}^{2 \ell } \frac{q}{2} \frac{ (1 - q^{2n})(1 - q^{4\ell - 2j + 2}) }{ (1 - q^2)(1 - q^{4\ell - 2j + 2n + 2}) } E_{j, j-1}. \end{aligned}$$

Proof

By Theorem 7.3, we have \((\mathcal {R}_n(x))_{i,j} = \beta _n(i, j) C_{n + j - i}(x; q^{2i + 2} | q^2)\), and since \(Q_n(x) = L^{t}(x)^{-1}\mathcal {R}_n(x)\), see Sect. 7.2, we obtain

$$\begin{aligned} Q_n(x)_{i,j}&= \sum \limits _{k = i}^{2\ell } q^{(2k + 1)(k - i)} \frac{ (q^2; q^2)_k (q^2; q^2)_{k + i} }{ (q^2; q^2)_{2k} (q^2; q^2)_{i} } \beta _n(i, j) \nonumber \\&\quad \times \,C_{k - i}(x; q^{-2k} | q^2) C_{n + j - k}(x; q^{2k + 2} | q^2), \end{aligned}$$
(7.9)

and this expression shows that \(\deg (Q_n(x)_{i, j}) = n + j - i\). So in case \(j-i\le 0\), we can only have a contribution to \(Q^n_{n-1}\) in case \(i-j=0\) or \(i-j=1\). The first case does not give a contribution, since (7.9) is even or odd according to whether \(n+j-i\) is even or odd. So we only have to calculate the leading coefficient in (7.9) for \(i-j=1\). With the explicit value of \(\beta _n(i,j)\) as in Sect. 7.2, or as in Theorem 4.17 and Corollary 4.19, we see that the case \(i-j=1\) gives the required expression for \(\bigl (Q^n_{n-1}\bigr )_{j,j-1}\).

On the other hand, by Proposition 4.10 and since \(J\text {lc}(P_n)J=\text {lc}(P_n)\) by Corollary 4.9, it follows that \(JQ_n(x)J=Q_n(x)\). Therefore, we find the symmetry of the entries of the monic matrix-valued polynomials \(\bigl (Q_n(x)\bigr )_{i,j} = \bigl (Q_n(x)\bigr )_{2\ell -i,2\ell -j}\) so that the case \(j-i\ge 0\) can be reduced to the previous case, and we get \(\bigl (Q^n_{n-1}\bigr )_{j,j+1}=\bigl (Q^n_{n-1}\bigr )_{2\ell -j,2\ell -j-1}\). \(\square \)

Proof of Theorem 4.11

The explicit expression for \(X_n\) follows from Lemma 7.4 and Lemma 7.5.

By Theorem 4.8 and Corollary 4.9, we have the orthogonality relations

$$\begin{aligned} \begin{aligned} \frac{2}{\pi }\int _{-1}^1 \bigl ( Q_m(x)\bigr )^*W(x) Q_n(x) \sqrt{1 - x^2} dx&= \delta _{m, n} \bigl ( \text {lc}(P_m)^*\bigr )^{-1} G_m \bigl ( \text {lc}(P_m)\bigr )^{-1}\\&= \delta _{m, n} G_m \bigl ( \text {lc}(P_m)\bigr )^{-2}, \end{aligned} \end{aligned}$$

since the matrices involved are diagonal and self-adjoint, hence pairwise commute. By the discussion in Sect. 2, we have

$$\begin{aligned} Y_n = G_{n-1}^{-1} \bigl ( \text {lc}(P_{n-1})\bigr )^{2} G_n \bigl ( \text {lc}(P_n)\bigr )^{-2}, \end{aligned}$$

and a straightforward calculation gives the required expression. \(\square \)
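For completeness, here is a sketch of the step invoked from Sect. 2, written with the ad hoc notation \(\langle P,Q\rangle = \frac{2}{\pi }\int _{-1}^1 P(x)^*W(x)Q(x)\sqrt{1-x^2}\,dx\) and \(H_n=\langle Q_n,Q_n\rangle = G_n\bigl (\text {lc}(P_n)\bigr )^{-2}\): pairing the recurrence \(xQ_n = Q_{n+1}+Q_nX_n+Q_{n-1}Y_n\) with \(Q_{n-1}\), and using \(\langle Q_{n-1},Q_kX\rangle =\langle Q_{n-1},Q_k\rangle X=0\) for \(k\ne n-1\), gives

$$\begin{aligned} H_{n-1} Y_n = \langle Q_{n-1}, x Q_n\rangle = \langle x Q_{n-1}, Q_n\rangle = H_n, \end{aligned}$$

so that \(Y_n = H_{n-1}^{-1} H_n\), which is the displayed formula since the diagonal matrices involved commute.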