Abstract
Matrix-valued spherical functions related to the quantum symmetric pair for the quantum analogue of \((\mathrm{SU}(2) \times \mathrm{SU}(2), \mathrm{diag})\) are introduced and studied in detail. The quantum symmetric pair is given in terms of a quantised universal enveloping algebra with a coideal subalgebra. The matrix-valued spherical functions give rise to matrix-valued orthogonal polynomials, which are matrix-valued analogues of a subfamily of Askey–Wilson polynomials. For these matrix-valued orthogonal polynomials, a number of properties are derived using this quantum group interpretation: the orthogonality relations from the Schur orthogonality relations, the three-term recurrence relation and the structure of the weight matrix in terms of Chebyshev polynomials from tensor product decompositions, and the matrix-valued Askey–Wilson type q-difference operators from the action of the Casimir elements. A more analytic study of the weight gives an explicit LDU-decomposition in terms of continuous q-ultraspherical polynomials. The LDU-decomposition makes it possible to find explicit expressions for the matrix entries of the matrix-valued orthogonal polynomials in terms of continuous q-ultraspherical polynomials and q-Racah polynomials.
1 Introduction
Shortly after the introduction of quantum groups, it was realised that many special functions of basic hypergeometric type [15] have a natural relation to quantum groups, see e.g. [9, Chap. 6], [20, 31] for references. In particular, many orthogonal polynomials in the q-analogue of the Askey scheme, see e.g. [26], have found an interpretation on compact quantum groups analogous to the interpretation of orthogonal polynomials of hypergeometric type from the Askey scheme on compact Lie groups and related structures, see e.g. [46, 47].
In the harmonic analysis on classical Gelfand pairs, one studies spherical functions and related Fourier transforms, see [43]. For our purposes, a Gelfand pair consists of a Lie group G and a compact subgroup K so that the trivial representation of K in the decomposition of any irreducible representation of G restricted to K occurs with multiplicity at most one. The spherical functions are functions on G which are left- and right-K-invariant. The zonal spherical functions are realised as matrix elements of irreducible G-representations with respect to a fixed K-vector. For special cases, the zonal spherical functions can be identified with explicit special functions of hypergeometric type, see [43, Chap. 9], [12, §IV]. The zonal spherical functions are eigenfunctions of an algebra of differential operators, which includes the differential operator arising from the Casimir operator in case G is a reductive group. For special cases with G compact, we obtain orthogonality relations and differential operators for the spherical functions, which can be identified with orthogonal polynomials from the Askey scheme. For the special case \(G=\mathrm{SU}(2)\times \mathrm{SU}(2)\) with \(K\cong \mathrm{SU}(2)\) embedded as the diagonal subgroup, the zonal spherical functions are the characters of \(\mathrm{SU}(2)\), which are identified with the Chebyshev polynomials \(U_n\) of the second kind by the Weyl character formula. The Gelfand pair situation has been generalised to the setting of quantum groups, mainly in the compact context, see e.g. Andruskiewitch and Natale [3] for the case of a finite dimensional Hopf algebra with a Hopf subalgebra, Floris [13], Koornwinder [32], Vainermann [45] for more general compact quantum groups, and, for a non-compact example, Caspers [7].
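To make the identification just mentioned concrete, the following classical computation (a standard fact, recorded here for convenience rather than taken from the cited references) exhibits the character of the spin-\(n/2\) representation of \(\mathrm{SU}(2)\) as a Chebyshev polynomial of the second kind:

```latex
% character of the spin-(n/2) irreducible representation of SU(2),
% evaluated at a maximal torus element; the last equality is the
% defining property of the Chebyshev polynomial U_n of the second kind
\chi_{n/2}\begin{pmatrix} e^{i\theta} & 0 \\ 0 & e^{-i\theta}\end{pmatrix}
  \;=\; \sum_{m=0}^{n} e^{i(n-2m)\theta}
  \;=\; \frac{\sin\bigl((n+1)\theta\bigr)}{\sin\theta}
  \;=\; U_n(\cos\theta).
```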
The notions of matrix-valued and vector-valued spherical functions have already emerged at the beginning of the development of the theory of spherical functions, see e.g. [14] and references given there. However, the focus on the relation with matrix-valued or vector-valued special functions only came later, see e.g. references given in [18, 44]. Grünbaum et al. [17] give a group theoretic approach to matrix-valued orthogonal polynomials emphasising the role of the matrix-valued differential operators, which are manipulated in great detail. The paper [17] deals with the case \((G,K) = (\mathrm{SU}(3), \mathrm{U}(2))\). Motivated by [17] and the approach of Koornwinder [29], the group theoretic interpretation of matrix-valued orthogonal polynomials on \((G,K)=(\mathrm{SU}(2)\times \mathrm{SU}(2),\mathrm{SU}(2))\) is studied from a different point of view, in particular with less manipulation of the matrix-valued differential operators, in [23, 24], see also [18, 44]. The point of view is to construct the matrix-valued orthogonal polynomials using matrix-valued spherical functions, and next to use this group theoretic interpretation to obtain properties of the matrix-valued orthogonal polynomials. This approach for the case \((G,K)=(\mathrm{SU}(2)\times \mathrm{SU}(2),\mathrm{SU}(2))\) leads to matrix-valued orthogonal polynomials of arbitrary size, which can be considered as analogues of the Chebyshev polynomials of the second kind. A combination of the group theoretic approach and analytic considerations then allows us to understand these matrix-valued orthogonal polynomials completely, i.e. we have explicit orthogonality relations, three-term recurrence relations, matrix-valued differential operators having the matrix-valued orthogonal polynomials as eigenfunctions, expressions in terms of Tirao’s [41] matrix-valued hypergeometric functions, expressions in terms of well-known scalar-valued orthogonal polynomials from the Askey scheme, etc.
This has been analytically extended to arbitrary size matrix-valued orthogonal Gegenbauer polynomials [25], see also [39] for related \(2\times 2\) cases.
The interpretation on quantum groups and related structures leads to many new results for special functions of basic hypergeometric type. In this paper, we use quantum groups in order to obtain matrix-valued orthogonal polynomials as analogues of a subclass of the Askey–Wilson polynomials. In particular, we consider the Chebyshev polynomials of the second kind, recalled in (5.6), as a special case of the Askey–Wilson polynomials [4, (2.18)]. Moreover, we know that the Chebyshev polynomials occur as characters on the quantum \(\mathrm{SU}(2)\) group, see [48, §A.1]. The approach in this paper is to establish the quantum analogue of the group theoretic approach as presented in [23, 24], see also [18, 44], for the example of the Gelfand pair \(G=\mathrm{SU}(2)\times \mathrm{SU}(2)\) with \(K\cong \mathrm{SU}(2)\). For this approach, we need Letzter’s approach [34–36] to quantum symmetric spaces using coideal subalgebras. We stick to the conventions as in Kolb [27] and we refer to [28, §1] for a broader perspective on quantum symmetric pairs. So we work with the quantised universal enveloping algebra \({\mathcal {U}}_{q}(\mathfrak {g})={\mathcal {U}}_{q}(\mathfrak {su}(2)\oplus \mathfrak {su}(2))\), introduced in Sect. 3, equipped with a right coideal subalgebra \({\mathcal {B}}\), see Sect. 4. Once we have this setting established, the branching rules of the representations of \({\mathcal {U}}_{q}(\mathfrak {g})\) restricted to \({\mathcal {B}}\) follow by identifying \({\mathcal {B}}\) with the image of \({\mathcal {U}}_{q}(\mathfrak {su}(2))\) (up to an isomorphism) under the comultiplication using the standard Clebsch–Gordan decomposition. In particular, it gives explicit intertwiners. Next we introduce matrix-valued spherical functions in Sect. 4. Using the matrix-valued spherical functions, we introduce the matrix-valued orthogonal polynomials. 
Then we use a mix of quantum group theoretic and analytic approaches to study these matrix-valued orthogonal polynomials. So we find the orthogonality for the matrix-valued orthogonal polynomials from the Schur orthogonality relations, the three-term recurrence relation follows from tensor product decompositions of \({\mathcal {U}}_{q}(\mathfrak {g})\)-representations, and the matrix-valued q-difference operators for which these matrix-valued orthogonal polynomials are eigenvectors follow from the study of the Casimir elements in \({\mathcal {U}}_{q}(\mathfrak {g})\). More analytic properties follow from the LDU-decomposition of the matrix-valued weight function, and this allows us to decouple the matrix-valued q-difference operators involved. The decoupling gives the possibility to link the entries of the matrix-valued orthogonal polynomials with (scalar-valued) orthogonal polynomials from the q-analogue of the Askey scheme, in particular the continuous q-ultraspherical polynomials and the q-Racah polynomials. The approach of [17] does not seem to work in the quantum case, because the possibilities to transform q-difference equations are very limited compared to transforming differential equations. We note that in [3, §5] matrix-valued spherical functions are considered for finite dimensional Hopf algebras with respect to a Hopf subalgebra.
The approach to matrix-valued orthogonal polynomials from this quantum group setting also leads to identities in the quantised function algebra. This paper does not include the identities that result from using infinite dimensional representations of the quantised function algebra. Furthermore, we have not supplied a proof of Lemma 5.4 using infinite dimensional representations and the direct integral decomposition of the Haar functional, but this should be possible as well.
In general, the notion of a quantum symmetric pair seems to be best-suited for the development of harmonic analysis in general and of matrix-valued spherical functions on quantum groups in particular, see e.g. [28, 34–38] and references given there. When considering other quantum symmetric pairs in relation to matrix-valued spherical functions, the branching rule of a representation of the quantised universal enveloping algebra to a coideal subalgebra seems to be a difficult problem. In this paper, it is reduced to the Clebsch–Gordan decomposition, and there is a nice result by Oblomkov and Stokman [38, Proposition 1.15] on a special case of the branching rule for the quantum symmetric pair of type AIII, but in general the lack of branching rules for quantum symmetric pairs is an obstacle for the study of quantum analogues of matrix-valued spherical functions of e.g. [17, 18, 38, 44].
The matrix-valued orthogonal polynomials resulting from the study in this paper are matrix-valued analogues of the Chebyshev polynomials of the second kind viewed as an example of the Askey–Wilson polynomials. We expect that it is possible to obtain matrix-valued analogues of the continuous q-ultraspherical polynomials viewed as a subfamily of the Askey–Wilson polynomials by following the approach of [25] with the Askey–Wilson q-derivative instead of the ordinary derivative. We have not explicitly worked out the limit transition \(q\uparrow 1\) of the results, but by the set-up it is clear that the formal limit gives back many of the results of [23, 24].
The contents of the paper are as follows. In Sect. 2, we fix notation regarding matrix-valued orthogonal polynomials. In Sect. 3, the notation for quantised universal enveloping algebras is recalled. Section 4 states all the main results of this paper. It introduces the quantum symmetric pair explicitly. Using the representations of the quantised universal enveloping algebra and the coideal subalgebra, the matrix-valued polynomials are introduced. We continue to give explicit information on the orthogonality relations, three-term recurrence relations, q-difference operators, the commutant of the weight, the LDU-decomposition of the weight, the decoupling of the q-difference equations and the link to scalar-valued orthogonal polynomials from the q-Askey scheme. The proofs of the statements of Sect. 4 occupy the rest of the paper. In Sect. 5, the main properties derivable from the quantum group set-up are derived, and we discuss in Appendix 1 the precise relation of the branching rule for this quantum symmetric pair and the standard Clebsch–Gordan decomposition. In Sect. 6, we continue the study of the orthogonality relations, in which we make the weight explicit. This requires several identities involving basic hypergeometric series, whose proofs we relegate to Appendix 2. Section 7 studies the consequences of the explicit form of the matrix-valued q-difference operators of Askey–Wilson type to which the matrix-valued orthogonal polynomials are eigenfunctions.
In preparing this paper, we have used computer algebra in order to verify the statements up to a certain size of the matrix and a certain degree of the polynomial in order to eliminate errors and typos. Note, however, that all proofs are direct and do not use computer algebra. A computer algebra package used for this purpose can be found on the homepage of the second author.
The convention on notation follows Kolb [27] for quantised universal enveloping algebras and right coideal subalgebras. We follow Gasper and Rahman [15] for the convention on basic hypergeometric series, and we assume \(0<q<1\).
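As a small sanity check of the conventions of [15] used throughout, the following sketch (ours, not from the paper) verifies the q-binomial theorem [15, (1.3.2)] numerically for \(0<q<1\); the truncation bounds are ad hoc:

```python
import numpy as np

def qpoch(a, q, n):                 # q-shifted factorial (a; q)_n; n may be np.inf
    if n == np.inf:
        n = 200                     # truncation, accurate for 0 < q < 1
    r = 1.0
    for k in range(int(n)):
        r *= 1 - a * q**k
    return r

q, a, z = 0.4, 0.3, 0.2             # parameters with |z| < 1 and 0 < q < 1
# q-binomial theorem: sum_n (a;q)_n / (q;q)_n z^n = (az;q)_inf / (z;q)_inf
lhs = sum(qpoch(a, q, n) / qpoch(q, q, n) * z**n for n in range(100))
rhs = qpoch(a * z, q, np.inf) / qpoch(z, q, np.inf)
print(abs(lhs - rhs) < 1e-10)       # True
```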
2 Matrix-valued orthogonal polynomials
In this section, we fix notation and give a short background to matrix-valued orthogonal polynomials, which were originally introduced by Krein in the forties, see e.g. references in [5, 10]. General references for this section are [5, 10, 16], and references given there.
Assume that we have a matrix-valued function \(W:[a,b] \rightarrow M_{2\ell +1}({\mathbb {C}})\), \(2\ell +1\in {\mathbb {N}}\), \(a<b\), so that \(W(x)>0\) for \(x\in [a,b]\) almost everywhere. We use the notation \(A>0\) to denote a strictly positive definite matrix. Moreover, we assume that all moments exist, where integration of a matrix-valued function means that each matrix entry is separately integrated. In particular, the integrals are matrices in \(M_{2\ell +1}({\mathbb {C}})\). It then follows that for matrix-valued polynomials \(P,Q \in M_{2\ell +1}({\mathbb {C}})[x]\) the integral
$$\begin{aligned} \langle P, Q\rangle \, =\, \int _a^b P(x)^*\, W(x)\, Q(x)\, \mathrm{d}x \end{aligned}$$(2.1)
exists. This gives a matrix-valued inner product on the space \(M_{2\ell +1}({\mathbb {C}})[x]\) of matrix-valued polynomials, satisfying
$$\begin{aligned} \langle PA, QB\rangle = A^*\langle P, Q\rangle B, \qquad \langle P, Q+R\rangle = \langle P, Q\rangle + \langle P, R\rangle , \qquad \langle P, Q\rangle ^* = \langle Q, P\rangle \end{aligned}$$
for all \(P,Q, R \in M_{2\ell +1}({\mathbb {C}})[x]\) and \(A,B \in M_{2\ell +1}({\mathbb {C}})\). More general matrix-valued measures can be considered [5, 10], but for this paper the above set-up suffices.
A matrix-valued polynomial \(P(x) = \sum _{r=0}^n x^r P^r\), \(P^r\in M_{2\ell +1}({\mathbb {C}})\), is of degree n if the leading coefficient \(P^n\) is non-zero. Given a weight W, there exists a family of matrix-valued polynomials \((P_n)_{n\in {\mathbb {N}}}\) so that \(P_n\) is a matrix-valued polynomial of degree n and
$$\begin{aligned} \langle P_n, P_m\rangle = \delta _{n,m}\, G_n, \end{aligned}$$(2.2)
where \(G_n>0\). Moreover, the leading coefficient of \(P_n\) is non-singular. Any other family of polynomials \((Q_n)_{n\in {\mathbb {N}}}\) so that \(Q_n\) is a matrix-valued polynomial of degree n and \(\langle Q_n, Q_m\rangle =0\) for \(n\not = m\) satisfies \(P_n(x)=Q_n(x)E_n\) for some non-singular \(E_n\in M_{2\ell +1}({\mathbb {C}})\) for all \(n\in {\mathbb {N}}\). We call the matrix-valued polynomial \(P_n\) monic in case the leading coefficient is the identity matrix I. The polynomials \(P_n\) are called orthonormal in case the squared norm \(G_n = I\) for all \(n\in {\mathbb {N}}\) in the orthogonality relations (2.2).
The matrix-valued orthogonal polynomials \(P_n\) always satisfy a matrix-valued three-term recurrence of the form
$$\begin{aligned} x\, P_n(x) = P_{n+1}(x)\, A_n + P_n(x)\, B_n + P_{n-1}(x)\, C_n \end{aligned}$$(2.3)
for matrices \(A_n,B_n,C_n\in M_{2\ell +1}({\mathbb {C}})\) for all \(n\in {\mathbb {N}}\). Note that in particular \(A_n\) is non-singular for all n. Conversely, assuming \(P_{-1}(x)=0\) (by convention) and fixing the constant polynomial \(P_0(x)\in M_{2\ell +1}({\mathbb {C}})\) we can generate the polynomials \(P_n\) from the recursion (2.3). In case the polynomials are monic, the coefficient \(A_n=I\) for all n and \(P_0(x)=I\) as the initial value. In general, the matrices satisfy \(G_{n+1}A_n = C_{n+1}^*G_n\), \(G_n B_n = B_n^*G_n\), so that in the monic case \(C_n = G_{n-1}^{-1} G_n\) for \(n\ge 1\). In case the polynomials are orthonormal, we have \(C_n=A_{n-1}^*\) and \(B_n\) Hermitian.
Note that the matrix-valued ‘sesquilinear form’ (2.1) is antilinear in the first entry of the inner product, which leads to a three-term recurrence of the form (2.3) where the multiplication by the constant matrices is from the right, see [10] for a discussion.
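The construction and the recurrence can be illustrated numerically. The sketch below (ours; the \(2\times 2\) weight is an ad hoc example, not the weight studied in this paper) builds monic matrix-valued orthogonal polynomials by Gram–Schmidt with respect to a quadrature discretisation of (2.1) and checks the monic three-term recurrence with \(C_n = G_{n-1}^{-1}G_n\):

```python
import numpy as np

d, c, N = 2, 0.5, 6
nodes, wts = np.polynomial.legendre.leggauss(40)  # quadrature on [a, b] = [-1, 1]

def W(x):                 # toy weight, W(x) > 0 on (-1, 1) since |c*x| < 1
    return np.array([[1.0, c * x], [c * x, 1.0]])

def peval(P, x):          # P is a list of d x d coefficients, P(x) = sum_r x^r P[r]
    return sum(x**r * P[r] for r in range(len(P)))

def ip(P, Q):             # <P, Q> = int P(x)^* W(x) Q(x) dx, antilinear in P
    return sum(w * peval(P, x).conj().T @ W(x) @ peval(Q, x)
               for x, w in zip(nodes, wts))

I = np.eye(d)
P, G = [[I]], [ip([I], [I])]           # monic P_0 = I and its squared norm G_0
for n in range(1, N + 1):
    xn = [np.zeros((d, d))] * n + [I]  # the monomial x^n I
    Pn = list(xn)
    for m in range(n):                 # P_n = x^n I - sum_m P_m G_m^{-1} <P_m, x^n I>
        A = np.linalg.solve(G[m], ip(P[m], xn))
        Pn = [Pn[r] - (P[m][r] @ A if r < len(P[m]) else 0) for r in range(n + 1)]
    P.append(Pn)
    G.append(ip(Pn, Pn))

# monic three-term recurrence x P_n(x) = P_{n+1}(x) + P_n(x) B_n + P_{n-1}(x) C_n
n, x0 = 3, 0.3
Bn = np.linalg.solve(G[n], ip(P[n], [np.zeros((d, d))] + P[n]))  # G_n^{-1} <P_n, x P_n>
Cn = np.linalg.solve(G[n - 1], G[n])                             # C_n = G_{n-1}^{-1} G_n
lhs = x0 * peval(P[n], x0)
rhs = peval(P[n + 1], x0) + peval(P[n], x0) @ Bn + peval(P[n - 1], x0) @ Cn
print(np.allclose(lhs, rhs), np.allclose(ip(P[2], P[4]), 0))  # True True
```

Note that, in accordance with (2.3) and the remark above, the recurrence coefficients multiply from the right.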
In case a subspace \(V\subset {\mathbb {C}}^{2\ell +1}\) is invariant for W(x) for all x, \(V^\perp \) is also invariant for W(x) for all x. Let \({\iota }_V:V\rightarrow {\mathbb {C}}^{2\ell +1}\) be the embedding of V into \({\mathbb {C}}^{2\ell +1}\) so that \(P_V={\iota }_V{\iota }_V^*\in M_{2\ell +1}({\mathbb {C}})\) is the corresponding orthogonal projection. Then \(W(x)P_V=P_VW(x)\) for all x. Let \(P^V_n:[a,b]\rightarrow \text {End}(V)[x]\) be the matrix-valued polynomial defined by \(P^V_n(x) = {\iota }_V^*P_n(x) {\iota }_V\), where \(P_n\) are the monic matrix-valued orthogonal polynomials for the weight W. Then \(P^V_n\) form a family of monic V-endomorphism-valued orthogonal polynomials, and \(P_n(x) = P^V_n(x) \oplus P^{V^\perp }_n(x)\). The same decomposition can be written down for the orthonormal polynomials.
The projections on invariant subspaces are in the commutant \(*\)-algebra \(\{ T\in M_{2\ell +1}({\mathbb {C}}) \mid TW(x) = W(x)T \ \forall x\}\). In case the commutant algebra is trivial, the matrix-valued orthogonal polynomials are irreducible. The primitive idempotents correspond to the minimal invariant subspaces, and hence they determine the decomposition of the matrix-valued orthogonal polynomials into irreducible cases.
Remark 2.1
In [42] the authors discuss non-orthogonal decompositions by considering, instead of the commutant algebra, the real vector space
It follows that if \(\mathbb {R}I \subsetneq \mathscr {A}\), then the weight W reduces, non-unitarily, to weights of smaller size. Koelink and Román [22, Example 4.3] showed that \(\mathscr {A} = \{ W(x) : x \in (-1, 1) \}'\) so that, in our case, both decompositions coincide.
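Numerically, the commutant can be computed as the nullspace of the linear maps \(T\mapsto TW(x_i)-W(x_i)T\) at sample points. A minimal sketch (ours; the \(3\times 3\) weight is an ad hoc example with nontrivial commutant, not the weight of this paper):

```python
import numpy as np

d = 3

def W(x):   # toy weight: a 1x1 block plus a 2x2 block, so the commutant is nontrivial
    return np.array([[1 + x, 0, 0],
                     [0, 1, x / 2],
                     [0, x / 2, 1]])

xs = np.linspace(-0.9, 0.9, 7)
# T W(x) = W(x) T  <=>  (W(x)^T kron I - I kron W(x)) vec(T) = 0  (column-major vec)
M = np.vstack([np.kron(W(x).T, np.eye(d)) - np.kron(np.eye(d), W(x)) for x in xs])
u, s, vt = np.linalg.svd(M)
commutant = [vt[i].reshape(d, d, order='F') for i in range(len(s)) if s[i] < 1e-10]
print(len(commutant))  # 3: spanned by E_{0,0}, the identity of the 2x2 block, and its flip

# the orthogonal projection onto the invariant subspace C e_0 lies in the commutant
P0 = np.diag([1.0, 0.0, 0.0])
print(all(np.allclose(P0 @ W(x), W(x) @ P0) for x in xs))  # True
```

Here the projection `P0` onto the first coordinate is a primitive idempotent of the commutant, so this toy weight decomposes into a \(1\times 1\) and a \(2\times 2\) block.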
We denote by \(E_{i,j}\in M_{2\ell +1}({\mathbb {C}})\) the matrix with zeroes except at the (i, j)th entry where it is 1. So for the corresponding standard basis \(\{e_k\}_{k=0}^{2\ell }\) we set \(E_{i,j}e_k = {\delta }_{j,k} e_i\). We usually use the basis \(\{e_k\}_{k=0}^{2\ell }\) in describing the results for the matrix-valued orthogonal polynomials, but occasionally the basis is relabelled \(\{e^\ell _k\}_{k=-\ell }^{\ell }\), as is customary for the \({\mathcal {U}}_{q}(\mathfrak {su}(2))\)-representations of spin \(\ell \). In the latter case, we use superscripts to distinguish from the previous case: \(E^\ell _{i,j}e^\ell _k = {\delta }_{j,k} e^\ell _i\), \(i,j,k\in \{-\ell ,\ldots , \ell \}\).
3 Quantised universal enveloping algebra
We recall the setting for quantised universal enveloping algebras and quantised function algebras; this section is mainly meant to fix notation. The definitions can be found in various sources on quantum groups, such as the books [9, 11, 20], and we follow Kolb [27].
Fix for the rest of this paper \(0< q < 1\). The quantised universal enveloping algebra can be associated to any root datum, but we only need the simplest cases \(\mathfrak {g}=\mathfrak {sl}(2)\) and \(\mathfrak {g}=\mathfrak {sl}(2)\oplus \mathfrak {sl}(2)\). The quantised universal enveloping algebra is the unital associative algebra generated by k, \(k^{-1}\), e, f subject to the relations
$$\begin{aligned} k k^{-1} = 1 = k^{-1} k, \quad k e k^{-1} = q^2\, e, \quad k f k^{-1} = q^{-2} f, \quad [e, f] = \frac{k - k^{-1}}{q - q^{-1}}, \end{aligned}$$(3.1)
where we follow the convention as in [27, §3]. For our purposes, it is useful to extend the algebra with the roots of k and \(k^{-1}\), denoted by \(k^{1/2}\), \(k^{-1/2}\), satisfying
$$\begin{aligned} (k^{1/2})^2 = k, \qquad (k^{-1/2})^2 = k^{-1}, \qquad k^{1/2} k^{-1/2} = 1 = k^{-1/2} k^{1/2}. \end{aligned}$$(3.2)
The extended algebra is denoted by \({\mathcal {U}}_{q}(\mathfrak {sl}(2))\), and it is a Hopf algebra with comultiplication \(\Delta \), counit \(\varepsilon \), antipode S defined on the generators by
The Hopf algebra has a \(*\)-structure defined on the generators by
We denote the corresponding Hopf \(*\)-algebra by \({\mathcal {U}}_{q}(\mathfrak {su}(2))\).
The identification as Hopf \(*\)-algebras with [21, 30] is \((A,B,C,D) \leftrightarrow (k^{1/2}, q^{-1}k^{-1/2}e, qfk^{1/2}, k^{-1/2})\).
The irreducible finite dimensional type 1 representations of the underlying \(*\)-algebra have been classified. Here type 1 means that the spectrum of \(k^{1/2}\) is contained in \(q^{\frac{1}{2}{\mathbb {Z}}}\). For each dimension \(2\ell +1\) of spin \(\ell \in \frac{1}{2}{\mathbb {N}}\), there is a representation in \(\mathcal {H}^{\ell }\cong {\mathbb {C}}^{2\ell +1}\) with orthonormal basis \(\{e^\ell _{-\ell }, e^\ell _{-\ell +1}, \ldots , e^\ell _{\ell } \}\) and on which the action is given by
where \(t^\ell :{\mathcal {U}}_{q}(\mathfrak {su}(2))\rightarrow \text {End}(\mathcal {H}^{\ell })\) is the corresponding representation. Note that \(b^\ell (p)=b^\ell (1-p)\). Finally, recall that the centre \(\mathcal {Z}({\mathcal {U}}_{q}(\mathfrak {su}(2)))\) is generated by the Casimir element \(\omega \),
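The action of the Casimir element can be checked in a small example. The sketch below uses a common unitary normalisation of the spin-\(\ell \) action and of \(\omega \) (our assumption for illustration; the precise q-powers in [27] and in (3.3) may differ) and verifies that \(\omega \) acts by a scalar, as Schur's lemma dictates:

```python
import numpy as np

q = 0.7
l = 3 / 2                                   # spin, dimension 2*l + 1 = 4
ps = np.arange(-l, l + 1)                   # weights -l, ..., l

def qn(n):                                  # q-number [n] = (q^n - q^{-n})/(q - q^{-1})
    return (q**n - q**(-n)) / (q - 1 / q)

d = len(ps)
K = np.diag(q**(2 * ps))                    # k acts by q^(2p) on e_p (assumed convention)
E = np.zeros((d, d))
for i, p in enumerate(ps[:-1]):             # e: e_p -> sqrt([l-p][l+p+1]) e_{p+1}
    E[i + 1, i] = np.sqrt(qn(l - p) * qn(l + p + 1))
F = E.T                                     # f = e^* in this unitary normalisation

# defining relations of U_q(sl(2)) in this convention
assert np.allclose(K @ E @ np.linalg.inv(K), q**2 * E)
assert np.allclose(E @ F - F @ E, (K - np.linalg.inv(K)) / (q - 1 / q))

# Casimir omega = f e + (q k + q^{-1} k^{-1}) / (q - q^{-1})^2 acts as a scalar
Om = F @ E + (q * K + np.linalg.inv(K) / q) / (q - 1 / q)**2
scalar = (q**(2 * l + 1) + q**(-2 * l - 1)) / (q - 1 / q)**2
print(np.allclose(Om, scalar * np.eye(d)))  # True
```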
We use the notation \({\mathcal {U}}_{q}(\mathfrak {g})\) to denote the Hopf \(*\)-algebra \({\mathcal {U}}_{q}(\mathfrak {su}(2)\oplus \mathfrak {su}(2))\), which we identify with \({\mathcal {U}}_{q}(\mathfrak {su}(2))\otimes {\mathcal {U}}_{q}(\mathfrak {su}(2))\), where \(K^{1/2}_i\), \(K^{-1/2}_i\), \(E_i\), \(F_i\), \(i=1,2\), are the generators. The relations (3.1) and (3.2) hold with \((k^{1/2}, k^{-1/2},e,f)\) replaced by \((K^{1/2}_i, K^{-1/2}_i, E_i, F_i)\) for any fixed i and the generators with different index i commute. The tensor product of two Hopf \(*\)-algebras is again a Hopf \(*\)-algebra, where the maps on a simple tensor \(X_1\otimes X_2\) are given by, see e.g. [9, Chap. 4],
where we use leg-numbering notation.
The irreducible finite dimensional type 1 representations of \({\mathcal {U}}_{q}(\mathfrak {g})\) are labelled by \((\ell _1,\ell _2)\in \frac{1}{2}{\mathbb {N}}\times \frac{1}{2}{\mathbb {N}}\), and the representations \(t^{\ell _1,\ell _2}\) from \({\mathcal {U}}_{q}(\mathfrak {g})\) to \(\text {End}(\mathcal {H}^{\ell _1, \ell _2})\), \(\mathcal {H}^{\ell _1, \ell _2} = \mathcal {H}^{\ell _1} {\otimes }\mathcal {H}^{\ell _2}\), are obtained as the exterior tensor product of the representations of spin \(\ell _1\) and \(\ell _2\) of \({\mathcal {U}}_{q}(\mathfrak {su}(2))\). Here type 1 means that the spectrum of \(K_i^{1/2}\), \(i=1,2\), is contained in \(q^{\frac{1}{2}{\mathbb {Z}}}\).
We have used the notation \(\Delta \), \(\varepsilon \) and S for the comultiplication, counit and antipode for all Hopf algebras, respectively. From the context, it should be clear which comultiplication, counit and antipode is meant. The corresponding dual Hopf \(*\)-algebra related to the quantised function algebra is not needed for the description of the results in Sect. 4, and it will be recalled in Sect. 5.1.
4 Matrix-valued orthogonal polynomials related to the quantum analogue of \({(\mathrm{SU}(2) \times \mathrm{SU}(2),\text {diag})}\)
In this section, we state the main results of the paper. First we introduce the specific quantum symmetric pair, which is to be considered as the quantum analogue of a symmetric space G / K, in this case \(\mathrm{SU}(2) \times \mathrm{SU}(2)/\mathrm{SU}(2)\). Quantum symmetric spaces have been introduced and studied in detail by Letzter [34–36], see also Kolb [27]. In particular, Letzter has shown that Macdonald polynomials occur as spherical functions on quantum symmetric pairs, motivated by the works of Koornwinder, Dijkhuizen, Noumi and others. In our case, \(\mathcal {B}\subset {\mathcal {U}}_{q}(\mathfrak {g})\), as in Definition 4.1, is the appropriate right coideal subalgebra. Using the explicit branching rules for \(t^{(\ell _1,\ell _2)}\vert _{\mathcal B}\) of Theorem 4.3, we introduce matrix-valued spherical functions in Definition 4.4. To these matrix-valued spherical functions, we associate matrix-valued polynomials in (4.8), and we spend the remainder of this section describing properties of these matrix-valued polynomials. This includes the orthogonality relations, the three-term recurrence relation and the matrix-valued polynomials as eigenfunctions of a commuting set of matrix-valued q-difference operators of Askey–Wilson type. Moreover, we give two explicit descriptions of the matrix-valued weight function W, one in terms of spherical functions for this quantum symmetric pair and one in terms of the LDU-decomposition. The LDU-decomposition makes it possible to decouple the matrix-valued q-difference operator, and this leads to an explicit expression for the matrix entries of the matrix-valued orthogonal polynomials in terms of scalar-valued orthogonal polynomials from the q-Askey scheme in Theorem 4.17.
For the symmetric pair \((G,K)= (\mathrm{SU}(2) \times \mathrm{SU}(2), \mathrm{SU}(2))\), \(K=\mathrm{SU}(2)\) corresponds to the fixed points of the Cartan involution \(\theta \) flipping the order of the pairs in G. The quantised universal enveloping algebra associated to G is \({\mathcal {U}}_{q}(\mathfrak {g})\) as introduced in Sect. 3. As the quantum analogue of K, we take the right coideal subalgebra \(\mathcal {B}\subset {\mathcal {U}}_{q}(\mathfrak {g})\), i.e. \(\mathcal {B}\subset {\mathcal {U}}_{q}(\mathfrak {g})\) is an algebra satisfying \(\Delta (\mathcal {B})\subset \mathcal {B} \otimes {\mathcal {U}}_{q}(\mathfrak {g})\), as in Definition 4.1. Letzter [34, Sect. 7,(7.2)] has introduced the corresponding left coideal subalgebra, and we follow Kolb [27, §5] in using right coideal subalgebras for quantum symmetric pairs. Note that we have modified the generators slightly in order to have \(B_1^*=B_2\).
Definition 4.1
The right coideal subalgebra \(\mathcal {B}\subset {\mathcal {U}}_{q}(\mathfrak {g})\) is the subalgebra generated by \(K^{\pm 1/2}\), where \(K=K_1K_2^{-1}\), and
Remark 4.2
-
(i)
\(\mathcal {B}\) is a right coideal as follows from the general construction, see [27, Proposition 5.2]. It can be verified directly by checking it for the generators. Note \(\Delta (K^{\pm 1/2})=K^{\pm 1/2}\otimes K^{\pm 1/2}\) is immediate, and
$$\begin{aligned} \Delta (B_1) = B_1 \otimes (K_1K_2)^{-1/2} + K^{1/2}\otimes q^{-1}(K_1K_2)^{-1/2} E_1 + K^{-1/2} \otimes q F_2 K^{-1/2} \end{aligned}$$
is in \(\mathcal {B} {\otimes }{\mathcal {U}}_{q}(\mathfrak {g})\) by a straightforward calculation. Since \(B_2=B_1^*\) and \(K^{\pm 1/2}\) are self-adjoint, the same follows for \(B_2\). The relations, cf. [27, Lemma 5.15],
$$\begin{aligned} K^{1/2} B_1 = q B_1 K^{1/2}, \quad K^{1/2} B_2 = q^{-1} B_2 K^{1/2}, \quad [B_1, B_2] = \frac{K - K^{-1}}{q - q^{-1}} \end{aligned}$$(4.1)
hold in \({\mathcal {U}}_{q}(\mathfrak {g})\), as can also be checked directly.
-
(ii)
Let \(\varPsi :{\mathcal {U}}_{q}(\mathfrak {su}(2)) \rightarrow {\mathcal {U}}_{q}(\mathfrak {su}(2))\) be defined by
$$\begin{aligned} \varPsi (k^{1/2}) = k^{-1/2},\quad \varPsi (k^{-1/2}) = k^{1/2}, \quad \varPsi (e)=q^3 f, \quad \varPsi (f) = q^{-3}e; \end{aligned}$$
then \(\varPsi \) extends to an involutive \(*\)-algebra isomorphism. Consider the map
$$\begin{aligned} \iota \circ (\varPsi \otimes \text {Id})\circ \Delta :{\mathcal {U}}_{q}(\mathfrak {su}(2))\rightarrow {\mathcal {U}}_{q}(\mathfrak {g}), \end{aligned}$$
where \(\iota \) is the algebra morphism mapping \(x\otimes y\in {\mathcal {U}}_{q}(\mathfrak {su}(2))\otimes {\mathcal {U}}_{q}(\mathfrak {su}(2))\) to the corresponding element \(X_1Y_2\in {\mathcal {U}}_{q}(\mathfrak {g})\) for x and y generators of \({\mathcal {U}}_{q}(\mathfrak {su}(2))\). Then we see that \(k^{1/2} \mapsto K^{-1/2}\), \(qfk^{1/2}\mapsto B_1\), and \(q^{-1}k^{-1/2}e \mapsto B_2\) under the map \(\iota \circ (\varPsi \otimes \text {Id})\circ \Delta \). In particular, the relations (4.1) follow. We conclude that \(\mathcal {B}\) is isomorphic as a \(*\)-algebra to \(\Delta ({\mathcal {U}}_{q}(\mathfrak {su}(2)))\subset {\mathcal {U}}_{q}(\mathfrak {g})\) by the \(*\)-isomorphism \(\iota \circ (\varPsi \otimes \text {Id})\).
-
(iii)
In particular, \(\mathcal {B}\cong {\mathcal {U}}_{q}(\mathfrak {su}(2))\) as \(*\)-algebras. So the irreducible type 1 representations of \(\mathcal {B}\) are labelled by the spin \(\ell \in \frac{1}{2}{\mathbb {N}}\). This can be made explicit by \(t^\ell :{\mathcal {B}}\rightarrow \text {End}(\mathcal {H}^\ell )\) and setting
$$\begin{aligned} t^\ell (K^{1/2}) e^\ell _{p}&= q^{-p} e^\ell _{p}, \quad t^\ell (B_1) e^\ell _{p} = b^\ell (p) e^\ell _{p-1}, \\ t^\ell (B_2) e^\ell _{p}&= b^\ell (p+1) e^\ell _{p+1}, \end{aligned}$$(4.2)
with the notation of (3.3). We use the same notation \(t^\ell \) for these representations here and in (3.3), since they correspond under the identification of \({\mathcal {B}}\) with \({\mathcal {U}}_{q}(\mathfrak {su}(2))\).
-
(iv)
Let \(\sigma \) be the \(*\)-algebra isomorphism on \({\mathcal {U}}_{q}(\mathfrak {g})= {\mathcal {U}}_{q}(\mathfrak {su}(2))\otimes {\mathcal {U}}_{q}(\mathfrak {su}(2))\) by flipping the order in the tensor product, or equivalently by flipping the subscripts \(1\leftrightarrow 2\). Then \(\sigma :\mathcal {B} \rightarrow \mathcal {B}\) is an involution \(B_1\leftrightarrow B_2\), \(K\leftrightarrow K^{-1}\). On the level of representations of \({\mathcal {U}}_{q}(\mathfrak {g})\) and \(\mathcal {B}\), it follows \(t^{\ell _1,\ell _2}(\sigma (X)) = P^*t^{\ell _2,\ell _1}(X) P\), \(X\in {\mathcal {U}}_{q}(\mathfrak {g})\), where \(P:\mathcal {H}^{\ell _1,\ell _2} \rightarrow \mathcal {H}^{\ell _2,\ell _1}\) is the flip, and \(t^{\ell }(\sigma (Y)) = (J^\ell )^*t^{\ell }(Y)J^\ell \), \(Y\in \mathcal {B}\), where \(J^\ell :\mathcal {H}^\ell \rightarrow \mathcal {H}^\ell \), \(J^\ell :e^\ell _p \mapsto e^\ell _{-p}\).
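The relations (4.1) and the action (4.2) can be checked numerically in a small example. In the sketch below, we take \(b^\ell (p)=\sqrt{[\ell +p]_q\,[\ell -p+1]_q}\) as a stand-in normalisation (the exact q-powers of (3.3) are not reproduced here, but this choice satisfies \(b^\ell (p)=b^\ell (1-p)\) and the relations (4.1)):

```python
import numpy as np

q = 0.6
l = 2                                     # spin l, dimension 2l + 1 = 5
ps = np.arange(-l, l + 1)

def qn(n):                                # q-number [n] = (q^n - q^{-n})/(q - q^{-1})
    return (q**n - q**(-n)) / (q - 1 / q)

def b(p):                                 # assumed normalisation; note b(p) = b(1 - p)
    return np.sqrt(qn(l + p) * qn(l - p + 1))

d = len(ps)
Khalf = np.diag(q**(-ps.astype(float)))   # t(K^{1/2}) e_p = q^{-p} e_p, cf. (4.2)
B1 = np.zeros((d, d)); B2 = np.zeros((d, d))
for i, p in enumerate(ps):
    if i > 0:                             # t(B_1) e_p = b(p) e_{p-1}
        B1[i - 1, i] = b(p)
    if i < d - 1:                         # t(B_2) e_p = b(p+1) e_{p+1}
        B2[i + 1, i] = b(p + 1)

K = Khalf @ Khalf
Kinv = np.linalg.inv(K)

# the relations (4.1) of the coideal subalgebra B
assert np.allclose(Khalf @ B1, q * B1 @ Khalf)
assert np.allclose(Khalf @ B2, B2 @ Khalf / q)
assert np.allclose(B1 @ B2 - B2 @ B1, (K - Kinv) / (q - 1 / q))
assert np.allclose(B2, B1.T)              # B_2 = B_1^* in this unitary picture
print("relations (4.1) hold")
```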
Theorem 4.3
The finite dimensional representation \(t^{\ell _1, \ell _2}\) of \({\mathcal {U}}_{q}(\mathfrak {g})\) restricted to \(\mathcal {B}\) decomposes multiplicity-free into irreducible representations \(t^{\ell }\) of \(\mathcal {B}\):
$$\begin{aligned} t^{\ell _1, \ell _2}\big \vert _{\mathcal {B}} \cong \bigoplus _{\ell = |\ell _1 - \ell _2|}^{\ell _1+\ell _2} t^{\ell }. \end{aligned}$$
With respect to the orthonormal basis \(\{e^{\ell }_p\}_{p=-\ell }^\ell \) of \(\mathcal {H}^{\ell }\) and the orthogonal basis \(\{ e^{\ell _1}_{i} {\otimes }e^{\ell _2}_{j} \}_{i=-\ell _1, j=-\ell _2}^{\ell _1,\ell _2}\) for \(\mathcal {H}^{\ell _1, \ell _2}\), the \(\mathcal {B}\)-intertwiner \(\beta ^{\ell }_{\ell _1, \ell _2} :\mathcal {H}^{\ell } \rightarrow \mathcal {H}^{\ell _1, \ell _2}\) is given by
where \(C^{\ell _1, \ell _2, \ell }_{i, j, p}\) are Clebsch–Gordan coefficients satisfying \(C^{\ell _1, \ell _2, \ell }_{i, j, p} = 0\) if \(i - j \ne p\).
The proof of Theorem 4.3 is a reduction to the well-known Clebsch–Gordan decomposition for the quantised universal enveloping algebra \({\mathcal {U}}_{q}(\mathfrak {su}(2))\), see e.g. [9, 20], using Remark 4.2. The proof is presented in Appendix 1. In particular, \((\beta ^{\ell }_{\ell _1, \ell _2})^*\beta ^{\ell }_{\ell _1, \ell _2}\) is the identity on \(\mathcal {H}^\ell \). Note that
In general, the decomposition of an irreducible representation restricted to a right coideal subalgebra seems a difficult problem. In this particular case, we can reduce to the Clebsch–Gordan decomposition; another known special case is due to Oblomkov and Stokman [38, Proposition 1.15], but in general this is an open problem.
In particular, for fixed \(\ell \in \frac{1}{2}{\mathbb {N}}\), we have \([t^{\ell _1, \ell _2}\vert _{{\mathcal {B}}}:t^\ell ]=1\) if and only if
We use the reparametrisation of (4.3) by
see also Fig. 1 and [24, Figs. 1, 2]. In case \(\ell =0\), we have \(t^0=\varepsilon \vert _{{\mathcal {B}}}\), where \(\varepsilon \) is the counit of \({\mathcal {U}}_{q}(\mathfrak {g})\) and is the trivial representation of \({\mathcal {B}}\), and the condition (4.3) gives \(\ell _1=\ell _2\) and \(\xi ^0(n,0)=(\frac{1}{2} n, \frac{1}{2} n)\).
With these preparations, we can introduce the matrix-valued spherical functions associated to a fixed representation \(t^\ell \) of \({\mathcal {B}}\), where we use the notation of Theorem 4.3.
Definition 4.4
Fix \(\ell \in \frac{1}{2}{\mathbb {N}}\) and let \((\ell _1, \ell _2) \in \frac{1}{2}{\mathbb {N}}\times \frac{1}{2}{\mathbb {N}}\) so that \([t^{\ell _1, \ell _2}\vert _{{\mathcal {B}}}:t^\ell ]=1\). The spherical function of type \(\ell \) associated to \((\ell _1, \ell _2)\) is defined by
Remark 4.5
-
(i)
Note that the requirement on \((\ell _1,\ell _2)\) in Definition 4.4 corresponds to the condition (4.3). Since \(\beta ^{\ell }_{\ell _1, \ell _2}\) is a \({\mathcal {B}}\)-intertwiner, we have
$$\begin{aligned} \varPhi ^{\ell }_{\ell _1, \ell _2}(XZY) = t^\ell (X) \varPhi ^{\ell }_{\ell _1, \ell _2}(Z) t^\ell (Y), \qquad \forall \, X,Y\in {\mathcal {B}}, \ \forall \, Z\in {\mathcal {U}}_{q}(\mathfrak {g}). \end{aligned}$$(4.5) -
(ii)
Note that the condition (4.3) is symmetric in \(\ell _1\) and \(\ell _2\). With the notation of Remark 4.2(iv), we have \(\varPhi ^{\ell }_{\ell _2, \ell _1}(Z) = J^\ell \varPhi ^{\ell }_{\ell _1, \ell _2}(\sigma (Z)) J^\ell \) for \(Z\in {\mathcal {U}}_{q}(\mathfrak {g})\). This follows from \(\beta ^{\ell }_{\ell _2, \ell _1} = P \beta ^{\ell }_{\ell _1, \ell _2} J^\ell \), which is a consequence of (8.2).
In case \(\ell = 0\), \(\mathcal {H}^0\cong {\mathbb {C}}\), we need \(\ell _1 = \ell _2\). Then \(\varPhi ^0_{\ell _1,\ell _1}\) are linear maps \({\mathcal {U}}_{q}(\mathfrak {g}) \rightarrow {\mathbb {C}}\). In particular, \(\varPhi ^{0}_{0,0}\) equals the counit \(\varepsilon \), and the spherical function \(\varphi = \frac{1}{2}(q^{-1}+q) \varPhi ^{0}_{1/2, 1/2}\) is a scalar-valued linear map on \({\mathcal {U}}_{q}(\mathfrak {g})\). The elements \(\varPhi ^{0}_{n/2, n/2}\) can be written as a multiple of \(U_n(\varphi )\), where \(U_n\) denotes the Chebyshev polynomial of the second kind of degree n, see Proposition 5.3. This statement can be considered as a special case of Theorem 4.8, but we need the identification with the Chebyshev polynomials in the spherical case \(\ell =0\) in order to obtain the weight function in Theorem 4.8. Proposition 5.3 will follow from Theorem 4.6. The identification of the spherical functions for \(\ell =0\) with Chebyshev polynomials corresponds to the classical case, since the spherical functions on \(G\times G/G\) are the characters on G and the characters on \(\mathrm{SU}(2)\) are Chebyshev polynomials of the second kind, as the simplest case of the Weyl character formula. It also corresponds to the computation of the characters on the quantum \(\mathrm{SU}(2)\) group by Woronowicz [48], since the characters are identified with Chebyshev polynomials as well.
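The identification with Chebyshev polynomials of the second kind can be checked numerically. The following sketch (plain Python, not part of the algebraic argument) generates \(U_n\) by its three-term recurrence and compares it with the Weyl-character form \(U_n(\cos \theta ) = \sin ((n+1)\theta )/\sin \theta \):

```python
import math

def chebyshev_U(n, x):
    """Chebyshev polynomials of the second kind via the three-term
    recurrence U_{n+1}(x) = 2x U_n(x) - U_{n-1}(x), U_0 = 1, U_1 = 2x."""
    u_prev, u = 1.0, 2.0 * x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

# On x = cos(theta) this agrees with the Weyl-character form
# U_n(cos(theta)) = sin((n+1)*theta) / sin(theta).
theta = 0.7
for n in range(6):
    closed = math.sin((n + 1) * theta) / math.sin(theta)
    assert abs(chebyshev_U(n, math.cos(theta)) - closed) < 1e-12
```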
Theorem 4.6 below makes it possible to associate polynomials in \(\varphi \) with the spherical functions of Definition 4.4. Theorem 4.6 essentially follows from the tensor product decomposition of representations of \({\mathcal {U}}_{q}(\mathfrak {g})\), which in turn follows from the tensor product decomposition for \({\mathcal {U}}_{q}(\mathfrak {su}(2))\), and some explicit knowledge of Clebsch–Gordan coefficients.
Theorem 4.6
Fix \(\ell \in \frac{1}{2}{\mathbb {N}}\) and let \((\ell _1, \ell _2) \in \frac{1}{2}{\mathbb {N}}\times \frac{1}{2}{\mathbb {N}}\) satisfy (4.3), then for constants \(A_{i, j}\) we have
In order to interpret the result of Theorem 4.6, we evaluate both sides at an arbitrary \(X\in {\mathcal {U}}_{q}(\mathfrak {g})\). The right-hand side is a linear combination of linear maps from \(\mathcal {H}^\ell \) to itself after evaluating at X. For the left-hand side, we use the pairing of Hopf algebras so that multiplication and comultiplication are dual to each other and the left-hand side has to be interpreted as
which is a linear combination of linear maps from \(\mathcal {H}^\ell \) to itself, using \(\Delta (X)= \sum _{(X)} X_{(1)}\otimes X_{(2)}\). The convention in Theorem 4.6 is that \(A_{i,j}\) is zero in case \((\ell _1+i,\ell _2+j)\) does not satisfy (4.3). The proof of Theorem 4.6 can be found in Sect. 5.2.
Since \({\mathcal {B}}\) is a right coideal subalgebra, we see that the left-hand side of Theorem 4.6 has the same transformation behaviour as (4.5). Indeed, for \(X\in {\mathcal {B}}\) and \(Y\in {\mathcal {U}}_{q}(\mathfrak {g})\) we have
where we have used that \(X_{(1)}\in {\mathcal {B}}\) by the right coideal property and (4.5) for \(\varphi = \varPhi ^0_{1/2,1/2}\) in the second equality, and the counit axiom \(\sum _{(X)} \varepsilon (X_{(1)})X_{(2)}=X\) in the fourth equality and then (4.5) for \(\varPhi ^\ell _{\ell _1,\ell _2}\) and the fact that \(\varphi (Y_{(1)})\) is a scalar. Similarly, the invariance property from the right can be proved.
Theorem 4.6 leads to polynomials in \(\varphi \) by iterating the result and using that \(A_{1/2,1/2}\) is non-zero.
Corollary 4.7
There exist \(2\ell + 1\) polynomials \(r^{\ell , k}_{n,m}\), \(0 \le k \le 2\ell \), of degree at most n so that
The aim of the paper is to show that the polynomials \(r^{\ell ,k}_{n,m}\) give rise to matrix-valued orthogonal polynomials. Put
where the matrix-valued polynomials \(P_n\) are taken with respect to the relabelled standard basis \(e_p=e^\ell _{p-\ell }\), \(p\in \{0,1,\ldots ,2\ell \}\) so that \(P_n = \sum _{i,j=0}^{2\ell } \overline{r^{\ell ,i}_{n,j}} \otimes E_{i,j}\). From Corollary 4.12 or Theorem 4.17, we see that the polynomial \(r^{\ell ,i}_{n,j}\) has real coefficients. The case \(\ell =0\) corresponds to a three-term recurrence relation for (scalar-valued) orthogonal polynomials, and then the polynomials coincide with the Chebyshev polynomials \(U_n\) viewed as a subclass of Askey–Wilson polynomials [4, (2.18)], see Proposition 5.3.
We show that the matrix-valued polynomials \((P_n)_{n=0}^\infty \) are orthogonal with respect to an explicit matrix-valued weight function W, see Theorem 4.8, arising from the Schur orthogonality relations. The expansion of the entries of the weight function in terms of Chebyshev polynomials is given by quantum group theoretic considerations except for the calculation of the coefficients in this expansion. The matrix-valued orthogonal polynomials satisfy a matrix-valued three-term recurrence relation as follows from Theorem 4.6, which in turn is a consequence of the decomposition of tensor product representations of \({\mathcal {U}}_{q}(\mathfrak {g})\). However, in order to determine the matrix coefficients in the matrix-valued three-term recurrence we use analytic methods. The existence of two Casimir elements in \({\mathcal {U}}_{q}(\mathfrak {g})\) leads to the matrix-valued orthogonal polynomials being eigenfunctions of two commuting matrix-valued q-difference operators, see [23] for the group case. This extends Letzter [35] to the matrix-valued set-up for this particular case. The q-difference operators are the key to determining the entries of the matrix-valued orthogonal polynomials explicitly in terms of scalar-valued orthogonal polynomials from the q-Askey scheme [26], namely the continuous q-ultraspherical polynomials and the q-Racah polynomials. In this deduction, the LDU-decomposition of the matrix-valued weight function W is essential, since the conjugation with L allows us to decouple the matrix-valued q-difference operator.
In the remainder of Sect. 4, we state these results explicitly, and we present the proofs in the remaining sections. First we give the main statements which essentially follow from the quantum group theoretic set-up, except for explicit calculations, and these are Theorems 4.8, 4.11, 4.13. The remaining Theorems 4.15, 4.17 are obtained using scalar orthogonal polynomials from the q-analogue of the Askey scheme [26] and transformation and summation formulas for basic hypergeometric series [15].
We start by stating that the matrix-valued polynomials \((P_n)_{n=0}^\infty \) introduced in (4.8) are orthogonal with the conventions of Sect. 2. The orthogonality relations of Theorem 4.8 are due to the Schur orthogonality relations. The expansion of the entries of the weight function in terms of Chebyshev polynomials follows from the fact that the entries are spherical functions, i.e. correspond to the case \(\ell =0\) so that they are polynomial in \(\varphi \). The non-zero entries follow by considering tensor product decompositions, but the explicit values for the coefficients \(\alpha _t(m,n)\) in Theorem 4.8 require summation and transformation formulae for basic hypergeometric series.
Theorem 4.8
The polynomials \((P_n)_{n=0}^\infty \) of (4.8) form a family of matrix-valued orthogonal polynomials so that \(P_n\) is of degree n with non-singular leading coefficient. The orthogonality for the matrix-valued polynomials \((P_n)_{n \ge 0}\) is given by
where the squared norm matrix \(G_m\) is diagonal:
Moreover, for \(0 \le m \le n \le 2\ell \) the weight matrix is given by
where
and \(W(x)_{m, n} = W(x)_{n, m}\) if \(m > n\).
The proof of Theorem 4.8 proceeds in steps. First we study explicitly the case \(\ell =0\), motivated by the works of Koornwinder [30], Letzter [34–36] and others. Secondly, we show that taking traces of a matrix-valued spherical function of type \(\ell \) associated to \((\ell _1,\ell _2)\) times the adjoint of a spherical function of type \(\ell \) associated to \((\ell _1',\ell _2')\) gives, up to an action by an invertible group-like element of \({\mathcal {U}}_{q}(\mathfrak {g})\), a polynomial in the generator for the case \(\ell =0\). Then the explicit expression of the Haar functional on this polynomial algebra, stated in Lemma 5.4, gives the matrix-valued orthogonality relations. Finally, the explicit expression for the weight is obtained by analysing the explicit expression of W in terms of the matrix entries of the intertwiners \(\beta ^\ell _{\ell _1,\ell _2}\) in case \(\ell _1+\ell _2=\ell \). These matrix entries are Clebsch–Gordan coefficients.
The leading coefficient of \(P_n\) can be calculated explicitly from the proof of Theorem 4.8:
Corollary 4.9
The leading coefficient of \(P_n\) is a non-singular diagonal matrix:
The weight W is not irreducible, see Sect. 2, but splits into two irreducible block matrices. The symmetry J of the weight function of Theorem 4.8 is essentially a consequence of Remark 4.5(ii), but we need the explicit expression of the weight in order to prove that the commutant algebra is not bigger, see also [22, §4].
Proposition 4.10
The commutant algebra
is spanned by I and J, where \(J:e_p\mapsto e_{2\ell -p}\), \(p\in \{0,\ldots , 2\ell \}\), is a self-adjoint involution. Then \(JP_n(x)J = P_n(x)\) and \(JG_nJ = G_n\). Moreover, the weight W decomposes into two irreducible block matrices \(W_+\) and \(W_-\), where \(W_+\), respectively \(W_-\), acts in the \(+1\)-eigenspace, respectively \(-1\)-eigenspace, of J. So for \(P_++P_-=I\), where \(P_+\), \(P_-\) are the orthogonal self-adjoint projections \(P_+=\frac{1}{2}(I+J)\), \(P_-=\frac{1}{2}(I-J)\), we have that \(W_+\), respectively \(W_-\), corresponds to \(P_+W(x)P_+\), respectively \(P_-W(x)P_-\), restricted to the \(+1\)-eigenspace, respectively \(-1\)-eigenspace, of J.
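The splitting of Proposition 4.10 can be illustrated numerically: any symmetric weight commuting with the flip J block-diagonalises in an orthonormal eigenbasis of J. The sketch below (hypothetical persymmetric test data, standing in for \(W(x)\) with \(\ell = 2\)) is an illustration only:

```python
import numpy as np

d = 5  # size 2l+1, here l = 2 (illustrative)
J = np.fliplr(np.eye(d))  # J: e_p -> e_{2l-p}, a self-adjoint involution

# A hypothetical weight commuting with J: symmetrise a random Gram matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((d, d))
W = A @ A.T
W = (W + J @ W @ J) / 2          # enforce J W J = W
assert np.allclose(J @ W @ J, W)

# Orthogonal self-adjoint projections onto the (+1)- and (-1)-eigenspaces.
P_plus, P_minus = (np.eye(d) + J) / 2, (np.eye(d) - J) / 2

# In an orthonormal eigenbasis of J, W splits into two diagonal blocks
# W_+ (here 3 x 3) and W_- (here 2 x 2): the mixed block vanishes.
eigvals, Y = np.linalg.eigh(J)
W_blocks = Y.T @ W @ Y
off = W_blocks[np.ix_(eigvals < 0, eigvals > 0)]
assert np.allclose(off, 0)
```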
The special cases for \(\ell =\frac{1}{2}\) and \(\ell =1\) are given at the end of this section. In particular, we identify all scalar-valued orthogonal polynomials occurring in this framework explicitly in terms of Askey–Wilson polynomials.
Theorem 4.6 can be used to find a three-term recurrence relation for the matrix-valued orthogonal polynomials \(P_n\), cf. Sect. 2, so the underlying tensor product decompositions provide the three-term recurrence relation. However, the resulting entries of the coefficient matrices are rather complicated expressions in terms of Clebsch–Gordan coefficients. For the corresponding matrix-valued monic polynomials \(Q_n(x) = P_n(x) \mathrm{lc}(P_n)^{-1}\), see Corollary 4.9 for the explicit expression for the leading coefficient, we can derive a simple expression for the matrices in the three-term recurrence relation once we have obtained more explicit expressions for the matrix entries of \(Q_n\). This is obtained in Sect. 7 using an explicit link of the matrix entries to scalar orthogonal polynomials in the q-Askey scheme.
Theorem 4.11
The monic matrix-valued orthogonal polynomials \((Q_n)_{n \ge 0}\) satisfy the three-term recurrence relation
where \(Q_{-1}(x) = 0\), \(Q_0(x) = I\) and
Note that \(X_n\rightarrow 0\), \(Y_n\rightarrow \frac{1}{4}\) as \(n\rightarrow \infty \).
The three-term recurrence relation for the matrix-valued orthogonal polynomials \(P_n\) is given in Corollary 4.12, which follows from Theorem 4.11, since we have \(G_{n+1}A_n=\mathrm{lc}(P_{n+1})^*\mathrm{lc}(P_n)\), \(G_nB_n= \mathrm{lc}(P_n)^*X_n \mathrm{lc}(P_n)\), and \(G_{n-1}C_n=\mathrm{lc}(P_{n-1})^*Y_n\mathrm{lc}(P_n)\). For future reference, we give the explicit expressions in Corollary 4.12.
Corollary 4.12
The matrix-valued orthogonal polynomials \((P_n)_{n \ge 0}\) satisfy the three-term recurrence relation
where \(P_{-1}(x) = 0\), \(P_0(x) = I\) and
Note that the case \(\ell =0\) gives a three-term recurrence relation that can be solved in terms of the Chebyshev polynomials, see Proposition 5.3.
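As a minimal numerical sketch of the scalar case: the monic recurrence with the constant coefficients \(X_n = 0\), \(Y_n = \frac{1}{4}\) (the limits noted after Theorem 4.11, attained exactly for the Chebyshev case) reproduces the monic Chebyshev polynomials \(2^{-n}U_n\):

```python
import math

def monic_chebyshev(n, x):
    """Iterate the monic recurrence x*Q_n = Q_{n+1} + X_n*Q_n + Y_n*Q_{n-1}
    with the scalar (l = 0) coefficients X_n = 0, Y_n = 1/4."""
    q_prev, q = 0.0, 1.0  # Q_{-1} = 0, Q_0 = 1
    for _ in range(n):
        q_prev, q = q, x * q - 0.25 * q_prev
    return q

# The iterates are the monic Chebyshev polynomials 2^{-n} U_n(x):
theta = 1.1
x = math.cos(theta)
for n in range(1, 8):
    u_n = math.sin((n + 1) * theta) / math.sin(theta)
    assert abs(monic_chebyshev(n, x) - u_n / 2**n) < 1e-12
```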
In the group case, the spherical functions are eigenfunctions of K-invariant differential operators on G/K, see e.g. [8, 14]. For matrix-valued spherical functions this is also the case, see [40], and this has been exploited in the special cases studied in [17, 23, 24]. In the quantum group case, the action of the Casimir operator gives rise to a q-difference operator for the corresponding spherical functions, see [35]. The first occurrence of an Askey–Wilson q-difference operator, see [4, 15, 19], in this context is due to Koornwinder [30]. For the matrix-valued orthogonal polynomials, we have a matrix-valued analogue of the Askey–Wilson q-difference operator, as given in Theorem 4.13. We obtain two of these operators, one arising from the Casimir operator for \({\mathcal {U}}_{q}(\mathfrak {su}(2))\) in the first leg of \({\mathcal {U}}_{q}(\mathfrak {g})\) and one from the \({\mathcal {U}}_{q}(\mathfrak {su}(2))\) Casimir operator of the second leg. This is related to a kind of Cartan decomposition of \({\mathcal {U}}_{q}(\mathfrak {g})\), cf. (4.5), which, however, does not exist in general for quantised universal enveloping algebras. We can still resolve this problem using techniques based on [8, §2], see the first part of the proof in Sect. 5. The proof of Theorem 4.13 is completed in Sect. 7.
Theorem 4.13
Define two matrix-valued q-difference operators by
where the multiplication by the matrix-valued functions \(\mathcal {M}_i(z)\) and \(\mathcal {M}_i(z^{-1})\) is from the left and \(\eta _q\) is the shift operator defined by \((\eta _{q} \breve{f})(z) = \breve{f}(qz)\), \(\breve{f}(z) =f(\mu (z))\), where \(x=\mu (z) = \frac{1}{2}(z+z^{-1})\). The matrix-valued function \(\mathcal {M}_1\) is given by
and \(\mathcal {M}_2(z) = J \mathcal {M}_1(z) J\), where \(J e_{p} = e_{2\ell - p}\). The matrix-valued orthogonal polynomials \(P_n\) are eigenfunctions for the operators \(D_i\) with eigenvalue matrices given by \(\varLambda _n(i)\) such that \(D_i P_n = P_n \varLambda _n(i)\) and
Explicitly,
where \(\eta _q\) and \(\eta _{q^{-1}}\) are applied entry-wise to the matrix-valued orthogonal polynomials \(P_n\).
Theorem 4.13 shows that \(JD_1J=D_2\), since J is constant. In particular, \(D_1+D_2\) commutes with J and reduces to a q-difference operator for the matrix-valued orthogonal polynomials associated with the weight \(W_+\) or \(W_-\), see Proposition 4.10. Similarly, \(D_1-D_2\) anticommutes with J.
Note that the expression \(\mathcal {M}_i(z) \breve{P}(qz) + \mathcal {M}_i(z^{-1}) \breve{P}(z/q)\) is symmetric in \(z\leftrightarrow z^{-1}\) for any matrix-valued polynomial P, and hence again is a function in \(x=\mu (z)\). The case \(\ell =0\) corresponds to only one q-difference operator, which we rewrite as
The Chebyshev polynomials \(U_n(x) = (q^{n+2};q)_n^{-1} p_n(x;q,-q,q^{1/2},-q^{1/2}|q)\), rewritten as Askey–Wilson polynomials [4, (2.18)], are solutions of the relation (4.9), see [26, §14.1], [15, §7.7], [19, Chap. 15–16]. In particular, we consider the operators of Theorem 4.13 as matrix-valued analogues of the Askey–Wilson operator, see Askey and Wilson [4], or [2, 15, 19].
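This scalar eigenvalue equation can be verified numerically. Assuming the standard normalisation of the Askey–Wilson q-difference operator, for the parameters \((q,-q,q^{1/2},-q^{1/2})\) the coefficient function collapses to \(A(z)=(1-q^2z^2)/(1-z^2)\), and \(U_n(\mu (z))\) is an eigenfunction with eigenvalue \((q^{-n}-1)(1-q^{n+2})\):

```python
import cmath

q = 0.6

def U(n, z):
    """Chebyshev U_n at x = mu(z) = (z + 1/z)/2, via the Weyl character."""
    return (z**(n + 1) - z**(-n - 1)) / (z - 1/z)

def A(z):
    # For (a,b,c,d) = (q, -q, q^(1/2), -q^(1/2)) the Askey-Wilson
    # coefficient (1-az)(1-bz)(1-cz)(1-dz)/((1-z^2)(1-qz^2)) collapses to:
    return (1 - q**2 * z**2) / (1 - z**2)

def aw_operator(f, z):
    """(Df)(z) = A(z)(f(qz) - f(z)) + A(1/z)(f(z/q) - f(z))."""
    return A(z) * (f(q * z) - f(z)) + A(1/z) * (f(z/q) - f(z))

z0 = cmath.exp(0.9j)  # a test point on the unit circle
for n in range(6):
    lam = (q**-n - 1) * (1 - q**(n + 2))
    lhs = aw_operator(lambda z: U(n, z), z0)
    assert abs(lhs - lam * U(n, z0)) < 1e-10
```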
Corollary 4.14
The q-difference operators \(D_1\) and \(D_2\) are symmetric with respect to the matrix-valued weight W, i.e. for all matrix-valued polynomials P, Q, we have
By [16, §2] it suffices to check Corollary 4.14 for \(P=P_n\), \(Q=P_m\), so that by Theorems 4.13 and 4.8 we need to check that \(\varLambda _n(i)^*G_n\delta _{m,n} = G_n \varLambda _m(i) \delta _{m,n}\), which is true since the matrices involved are real and diagonal.
In order to study the matrix-valued orthogonal polynomials and the weight function in more detail, we need the continuous q-ultraspherical polynomials [2, Chap. 2], [15], [19, Chap. 20], [26]:
The continuous q-ultraspherical polynomials are orthogonal polynomials for \(|\beta |<1\). The orthogonality measure is a positive measure in case \(0<q<1\) and \(\beta \) real with \(|\beta |<1\). Explicitly,
Note that in the special case \({\beta }= q^{1+k}\), \(k\in {\mathbb {N}}\), the weight function is polynomial in \(x=\cos \theta \), and
We use the continuous q-ultraspherical polynomials (4.10) for any \(\beta \in {\mathbb {C}}\). In particular, for \(\beta =q^{-k}\) with \(k\in {\mathbb {N}}\) the sum in (4.10) is restricted to \(n-k\le r\le k\), and in particular \(C_n(x;q^{-k};q)=0\) in case \(n-k>k\). With this convention, we can now describe the LDU-decomposition of the weight matrix, and state the inverse of the unipotent lower triangular matrix L in Theorem 4.15.
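The truncation for \(\beta = q^{-k}\) can be made concrete with the standard single-sum expression \(C_n(\cos \theta ;\beta |q) = \sum _{r=0}^n \frac{(\beta ;q)_r(\beta ;q)_{n-r}}{(q;q)_r(q;q)_{n-r}} e^{i(n-2r)\theta }\) (the normalisation of [26]); a numerical sketch, assuming this convention:

```python
import cmath

def qpoch(a, q, n):
    """q-shifted factorial (a; q)_n = prod_{j<n} (1 - a q^j)."""
    out = 1.0
    for j in range(n):
        out *= 1 - a * q**j
    return out

def C(n, theta, beta, q):
    """Continuous q-ultraspherical polynomial at x = cos(theta), using the
    standard single-sum expression (assumed normalisation)."""
    total = 0j
    for r in range(n + 1):
        coeff = (qpoch(beta, q, r) * qpoch(beta, q, n - r)
                 / (qpoch(q, q, r) * qpoch(q, q, n - r)))
        total += coeff * cmath.exp(1j * (n - 2 * r) * theta)
    return total.real  # imaginary parts cancel pairwise

q = 0.4
# For beta = q^{-k} the factor (q^{-k}; q)_r kills all terms with r > k,
# so C_n(x; q^{-k}; q) = 0 as soon as n - k > k:
assert abs(C(3, 0.8, q**-1, q)) < 1e-12   # n=3, k=1: n-k = 2 > k
assert abs(C(2, 0.8, q**-1, q)) > 1e-12   # n=2, k=1: term r=1 survives
```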
Theorem 4.15
The matrix-valued weight W as in Theorem 4.8 has the following LDU-decomposition:
where \(L :[-1, 1] \rightarrow M_{2\ell + 1}({\mathbb {C}})\) is the unipotent lower triangular matrix
and \(T :[-1, 1] \rightarrow M_{2\ell + 1}({\mathbb {C}})\) is the diagonal matrix, \(0\le k\le 2\ell \),
The inverse of L is given by
Note that T, L and \(L^{-1}\) are matrix-valued polynomials, which is clear from the explicit expression and (4.12). It is remarkable that, although the LDU-decomposition holds for arbitrary size \(2\ell +1\), the matrix L does not depend on the spin \(\ell \), and T depends on \(\ell \) only through the constants \(c_k(\ell )\).
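The specific entries of L and T require the q-ultraspherical data of Theorem 4.15, but the shape of the factorisation, a unipotent lower triangular L times a diagonal T, can be sketched for any strictly positive definite matrix via a rescaled Cholesky factor. The test matrix below is hypothetical data, not the weight of Theorem 4.8:

```python
import numpy as np

def ldu_symmetric(W):
    """Factor a symmetric positive-definite W as L @ D @ L.T with L
    unipotent lower triangular and D diagonal (via Cholesky W = C C^T)."""
    C = np.linalg.cholesky(W)
    d = np.diag(C).copy()
    L = C / d            # rescale columns so that diag(L) = 1
    D = np.diag(d**2)
    return L, D

# Hypothetical positive-definite test matrix (stand-in for W(x)).
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
W = A @ A.T + 4 * np.eye(4)

L, D = ldu_symmetric(W)
assert np.allclose(L @ D @ L.T, W)
assert np.allclose(np.diag(L), 1.0)
assert np.allclose(np.triu(L, 1), 0.0)    # strictly upper part vanishes
# det(W) is the product of the diagonal entries of D, as for T in the text.
assert np.isclose(np.linalg.det(W), np.prod(np.diag(D)))
```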
We prove the first part of Theorem 4.15 in Sect. 6; the proof is analytic in nature, and a quantum group theoretic proof would be desirable. The statement on the inverse of L(x) is taken from [1, Example 4.2], where the inverse of a lower triangular matrix whose entries are continuous q-ultraspherical polynomials is derived in a more general situation. The inverse of L in the limit case \(q\uparrow 1\) was derived by Cagliero and Koornwinder [6]; the proof of [1] is of a different nature than the proof presented in [6].
Theorem 4.15 shows that \(\det (W(x))\) is the product of the diagonal entries of T(x). Since all coefficients \(c_k(\ell )>0\) and the weight functions are positive, we obtain Corollary 4.16.
Corollary 4.16
The matrix-valued weight W(x) is strictly positive definite for \(x \in [-1, 1]\). In particular, the matrix-valued weight \(W(x)\sqrt{1-x^2}\) of Theorem 4.8 is strictly positive definite for \(x \in (-1, 1)\).
Using the lower triangular matrix L of the LDU-decomposition of Theorem 4.15, we are able to decouple \(D_1\) of Theorem 4.13 after conjugation with \(L^t(x)\). We get a scalar q-difference equation for each of the matrix entries of \(L^t(x)P_n(x)\), which is solved by continuous q-ultraspherical polynomials up to a constant. Since we have yet another matrix-valued q-difference operator for \(L^t(x)P_n(x)\), namely \(L^tD_2(L^t)^{-1}\) with \(D_2\) as in Theorem 4.13, we get a relation for the constants involved. This relation turns out to be a three-term recurrence relation along columns, which can be identified with the three-term recurrence for q-Racah polynomials. Finally, we apply \((L^t(x))^{-1}\) to obtain an explicit expression for the matrix entries of the matrix-valued orthogonal polynomials, stated in Theorem 4.17.
Before stating Theorem 4.17, recall that the q-Racah polynomials, see e.g. [15, §7.2], [19, §15.6], [26, §14.2], are defined by
where \(n \in \{0, 1, 2, \ldots , N\}\), \(N\in {\mathbb {N}}\), \(\mu (x) = q^{-x} + \gamma \delta q^{x+1}\) and so that one of the conditions \(\alpha q = q^{-N}\), or \(\beta \delta q = q^{-N}\), or \(\gamma q = q^{-N}\) holds.
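A sketch of this definition as a terminating sum, following the \({}_4\varphi _3\) convention of [26, §14.2] (the parameter values below are hypothetical), together with the normalisation \(R_n(\mu (0)) = 1\) forced by the vanishing of the factor \((q^{-x};q)_k\) at \(x=0\):

```python
def qpoch(a, q, n):
    """q-shifted factorial (a; q)_n = prod_{j<n} (1 - a q^j)."""
    out = 1.0
    for j in range(n):
        out *= 1 - a * q**j
    return out

def q_racah(n, x, alpha, beta, gamma, delta, q):
    """Terminating 4phi3 sum for R_n(mu(x)), mu(x) = q^{-x} + gamma*delta*q^{x+1},
    in the standard convention of [26, Sect. 14.2] (assumed here)."""
    total = 0.0
    for k in range(n + 1):
        num = (qpoch(q**-n, q, k) * qpoch(alpha * beta * q**(n + 1), q, k)
               * qpoch(q**-x, q, k) * qpoch(gamma * delta * q**(x + 1), q, k))
        den = (qpoch(alpha * q, q, k) * qpoch(beta * delta * q, q, k)
               * qpoch(gamma * q, q, k) * qpoch(q, q, k))
        total += num / den * q**k
    return total

# Truncation condition gamma*q = q^{-N}, i.e. gamma = q^{-N-1} (hypothetical
# parameter values for illustration).
q, N = 0.5, 4
alpha, beta, delta = 0.3, 0.2, 0.7
gamma = q**(-N - 1)
# Normalisation R_n(mu(0)) = 1: the factor (q^{-x}; q)_k kills k >= 1 at x = 0.
for n in range(N + 1):
    assert abs(q_racah(n, 0, alpha, beta, gamma, delta, q) - 1.0) < 1e-9
```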
Theorem 4.17
For \(0 \le i, j \le 2\ell \), we have
Note that the left-hand side is a polynomial of degree at most n, whereas the right-hand side is of degree \(n+j-i\). In particular, for \(j>i\) the leading coefficient of the right-hand side of Theorem 4.17 has to vanish, leading to Corollary 4.18.
Corollary 4.18
With the notation of Theorem 4.17, we have for \(j>i\)
By evaluating Corollary 4.7 at \(1\in {\mathcal {U}}_{q}(\mathfrak {g})\), we obtain Corollary 4.19, which is not clear from Theorem 4.17.
Corollary 4.19
For \(m\in \{0,\ldots , 2\ell \}\), we have \(\sum _{k=0}^{2\ell } (P_n\bigl (\frac{1}{2}(q+q^{-1})\bigr ))_{k,m}=1\).
4.1 Examples
We end this section by specialising the results for low-dimensional cases. The case \(\ell =0\) reduces to the Chebyshev polynomials \(U_n(x)\) of the second kind as observed following Theorem 4.13. This is proved in Proposition 5.3, which is required for the proofs of the general statements of Sect. 4.
4.1.1 Example: \(\ell =\frac{1}{2}\)
In this case we work with \(2 \times 2\) matrices. By Proposition 4.10, we know that the weight is block-diagonal, so that in this case we have an orthogonal decomposition to scalar-valued orthogonal polynomials. To be explicit, the matrix-valued weight W is given by
In this case, see Sect. 2, the polynomials \(P_n\) diagonalise since the leading coefficient is diagonalised by conjugation with the orthogonal matrix Y, and we write \(YP_n(x)Y^{t} = \mathrm{diag}(p^+_{n}(x), p^-_{n}(x))\). Then we can identify \(p^{\pm }_n\) by any of the results given in this section, and we do this using the three-term recurrence relation of Corollary 4.12. After conjugation, the three-term recurrence relation for \(p^+_{n}\) is given by
and the three-term recurrence relation for \(p^-_{n}\) is obtained by substituting \(x \mapsto -x\) into the three-term recurrence relation for \(p^+_{n}\). The explicit expressions for \(p^+_{n}\) and \(p^-_{n}\) are given in terms of continuous q-Jacobi polynomials \(P_n^{(\alpha , \beta )}(x|q)\) for \((\alpha , \beta )=(\frac{1}{2}, \frac{3}{2})\), see [4, §4], [26, §14.10]. From [15, Exercise 7.32(ii)] we have \(P_n^{(\frac{1}{2}, \frac{3}{2})}(-x|q^2) = (-1)^n q^{-n} P_n^{(\frac{3}{2}, \frac{1}{2})}(x|q^2)\). So we obtain
which is a q-analogue of [24, §8.2]. Moreover, writing down the conjugation of the q-difference operator \(D_1+D_2\) of Theorem 4.13 for the case \(\ell =\frac{1}{2}\) for the conjugated polynomials gives back the Askey–Wilson q-difference for the continuous q-Jacobi polynomials \(P_n^{(\alpha , \beta )}(x|q)\) for \((\alpha ,\beta )= (\frac{1}{2}, \frac{3}{2})\) and \((\frac{3}{2},\frac{1}{2})\). Working out the eigenvalue equation for \(D_1-D_2\) gives a simple q-analogue of the contiguous relations of [24, p. 5708].
4.1.2 Example: \(\ell =1\)
For \(\ell = 1\) we work with \(3 \times 3\) matrices. By Proposition 4.10, we can block-diagonalise the matrix-valued weight:
where \(W_+\) is a \(2 \times 2\) matrix-valued weight and \(W_-\) is a scalar-valued weight. Explicitly,
The polynomials diagonalise by \(YP_n(x)Y^t = \mathrm{diag}(P^+_{n}(x), p^{-}_{n}(x))\), where \(P^+_{n}\) is a \(2 \times 2\) matrix-valued polynomial and \(p^{-}_{n}\) is a scalar-valued polynomial. Conjugating Corollary 4.12, the three-term recurrence relations for \(P^+_{n}\) and \(p^{-}_{n}\) are
where
The scalar-valued polynomial \(p^{-}_{n}\) can be identified with the continuous q-ultraspherical polynomials:
The \(2 \times 2\) matrix-valued polynomials \(P^{+}_{n}\) are solutions to the matrix-valued q-difference equation \(DP^{+}_{n} = P^{+}_{n} \varLambda _n\). Here \(D= M(z) \eta _q + M(z^{-1}) \eta _{q^{-1}}\) is the restriction of the conjugated \(D_1 + D_2\) to the \(+1\)-eigenspace of J. The explicit expressions for M(z) and \(\varLambda _n\) are
These results are q-analogues of some of the results given in [24, §8.3], see also [39]. Note moreover that \(W_-(0)\) is a multiple of the identity so that the commutant of \(W_-\) equals the commuting algebra of Tirao and Zurrián [42], see also [22]. Since the commutant is trivial, the weight \(W_-\) is irreducible, which can also be checked directly.
5 Quantum group-related properties of spherical functions
In this section, we start with the proofs of the statements of Sect. 4 which can be obtained using the interpretation of matrix-valued spherical functions on \({\mathcal {U}}_{q}(\mathfrak {g})\).
5.1 Matrix-valued spherical functions on the quantum group
In this subsection, we study some of the properties of the matrix-valued spherical functions which follow from the quantum group theoretic interpretation. In particular, we derive Theorem 4.3 from Remark 4.2. The precise identification with the literature and the standard Clebsch–Gordan coefficients is made in Appendix 1, and we use the intertwiner and the Clebsch–Gordan coefficients as presented there.
We also need the matrix elements of the type 1 irreducible finite dimensional representations. Define
where we take the inner product in the representation space \(\mathcal {H}^\ell \) for which the basis \(\{e^\ell _n\}_{n=-\ell }^\ell \) is orthonormal. Denoting
then \(\alpha \), \(\beta \), \(\gamma \), \(\delta \) generate a Hopf algebra, where the Hopf algebra structure is determined by duality of Hopf algebras. Moreover, it is a Hopf \(*\)-algebra with \(*\)-structure defined by \(\alpha ^*= \delta \), \(\beta ^*= -q\gamma \), which we denote by \(\mathcal {A}_q(SU(2))\). Then the Hopf \(*\)-algebra \(\mathcal {A}_q(SU(2))\) is in duality as Hopf \(*\)-algebras with \({\mathcal {U}}_{q}(\mathfrak {su}(2))\). In particular, the matrix elements \(t^\ell _{m,n}\in \mathcal {A}_q(SU(2))\) can be expressed in terms of the generators and span \(\mathcal {A}_q(SU(2))\). Moreover, the matrix elements \(t^\ell _{m,n}\in \mathcal {A}_q(SU(2))\) form a basis for the underlying vector space of \(\mathcal {A}_q(SU(2))\). The left action of \({\mathcal {U}}_{q}(\mathfrak {g})\) on \(\mathcal {A}_q(SU(2))\) is given by \((X\cdot \xi )(Y) = \xi (YX)\) for \(X,Y\in {\mathcal {U}}_{q}(\mathfrak {g})\) and \(\xi \in \mathcal {A}_q(SU(2))\). Similarly, the right action is given by \((\xi \cdot X)(Y) = \xi (XY)\) for \(X,Y\in {\mathcal {U}}_{q}(\mathfrak {g})\) and \(\xi \in \mathcal {A}_q(SU(2))\). A calculation gives \(k^{1/2}\cdot t^\ell _{m,n}= q^{-n}t^\ell _{m,n}\) and \(t^\ell _{m,n}\cdot k^{1/2}= q^{-m}t^\ell _{m,n}\) so that \(\alpha \cdot k^{1/2} = q^{1/2}\alpha \), \(\beta \cdot k^{1/2} = q^{-1/2}\beta \), \(\gamma \cdot k^{1/2} = q^{1/2}\gamma \), \(\delta \cdot k^{1/2} = q^{-1/2}\delta \).
Since \(k^{1/2}\) and its powers are group-like elements of \({\mathcal {U}}_{q}(\mathfrak {g})\), it follows that the left and right action of \(k^{1/2}\) and its powers are algebra homomorphisms. See e.g. [9, 11, 20, 21], and references given there.
In the same way, we view
where the functions are taken with respect to the Hopf algebra tensor product \({\mathcal {U}}_{q}(\mathfrak {g}) = {\mathcal {U}}_{q}(\mathfrak {su}(2))\otimes {\mathcal {U}}_{q}(\mathfrak {su}(2))\). In particular, for \(\lambda , \mu \in \frac{1}{2} {\mathbb {Z}}\) we find the expression \(t^{\ell _1}_{m_1,n_1}\otimes t^{\ell _2}_{m_2,n_2}(K_1^\lambda K_2^\mu ) = \delta _{m_1,n_1}\delta _{m_2,n_2} q^{-\lambda m_1 -\mu m_2}\). Similarly, the Hopf \(*\)-algebra spanned by all the matrix elements \(t^{\ell _1}_{m_1,n_1}\otimes t^{\ell _2}_{m_2,n_2}\) is isomorphic to \(\mathcal {A}_q(SU(2))\otimes \mathcal {A}_q(SU(2))\). We set \(\mathcal {A}_q(G)=\mathcal {A}_q(SU(2))\otimes \mathcal {A}_q(SU(2))\) for \(G=SU(2)\times SU(2)\).
Define \(A = K_1^{1/2} K_2^{1/2}\) and let \(\mathcal {A}\) be the commutative subalgebra of \({\mathcal {U}}_{q}(\mathfrak {g})\) generated by A and \(A^{-1}\). Recall the spherical function \(\varPhi ^{\ell }_{\ell _1, \ell _2}\) from Definition 4.4, and recall the transformation property (4.5).
Definition 5.1
The linear map \(\varPhi :{\mathcal {U}}_{q}(\mathfrak {g})\rightarrow \mathrm{End}(\mathcal {H}^\ell )\) is a spherical function of type \(\ell \) if
So the spherical function \(\varPhi ^{\ell }_{\ell _1, \ell _2}\) is a spherical function of type \(\ell \) by (4.5).
Proposition 5.2
Fix \(\ell \in \frac{1}{2} {\mathbb {N}}\), and assume \((\ell _1,\ell _2)\) satisfies (4.3).
-
(i)
Write \(\varPhi ^{\ell }_{\ell _1, \ell _2} = \sum _{m, n=-\ell }^\ell (\varPhi ^{\ell }_{\ell _1, \ell _2})_{m, n} \otimes E^\ell _{m, n}\), where \(E^\ell _{m, n}\) are the elementary matrices, then
$$\begin{aligned} \left( \varPhi ^{\ell }_{\ell _1, \ell _2} \right) _{m, n}&= \sum _{m_1, n_1 = -\ell _1}^{\ell _1} \sum _{m_2, n_2 = -\ell _2}^{\ell _2} C^{\ell _1, \ell _2, \ell }_{m_1, m_2, m} C^{\ell _1, \ell _2, \ell }_{n_1, n_2, n}\, t^{\ell _1}_{m_1, n_1} {\otimes }t^{\ell _2}_{m_2, n_2}. \end{aligned}$$ -
(ii)
A spherical function \(\varPhi \) of type \(\ell \) restricted to \(\mathcal {A}\) is diagonal with respect to the basis \(\{e^\ell _p\}_{p=-\ell }^\ell \). Moreover, for each \(\lambda \in {\mathbb {Z}}\),
$$\begin{aligned} \left( \varPhi ^{\ell }_{\ell _1, \ell _2}(A^{\lambda }) \right) _{m, n}&= \delta _{m, n} \sum _{i = -\ell _1}^{\ell _1} \sum _{j = -\ell _2}^{\ell _2} \left( C^{\ell _1, \ell _2, \ell }_{i, j, n} \right) ^2 q^{-\lambda (i + j)}, \end{aligned}$$so that \(\varPhi ^{\ell }_{\ell _1, \ell _2}(1)\) is the identity.
-
(iii)
Assume that \(\varPhi \) is a spherical function of type \(\ell \) and that
$$\begin{aligned} \varPhi = \sum _{m, n=-\ell }^\ell \varPhi _{m, n} \otimes E^\ell _{m, n}:{\mathcal {U}}_{q}(\mathfrak {g})\rightarrow \mathrm{End}(\mathcal {H}^\ell ), \end{aligned}$$with all linear maps \(\varPhi _{m, n}\) on \({\mathcal {U}}_{q}(\mathfrak {g})\) in the linear span of the matrix elements \(t^{\ell _1}_{i_1,j_1}\otimes t^{\ell _2}_{i_2,j_2}\), \(-\ell _1\le i_1,j_1\le \ell _1\), \(-\ell _2\le i_2,j_2\le \ell _2\), then \(\varPhi \) is a multiple of \(\varPhi ^{\ell }_{\ell _1, \ell _2}\).
Proof
Note that \((\varPhi ^{\ell }_{\ell _1, \ell _2}(X))_{m,n} = \langle \varPhi ^{\ell }_{\ell _1, \ell _2}(X) e^{\ell }_n, e^{\ell }_m \rangle \) for \(X\in {\mathcal {U}}_{q}(\mathfrak {g})\), so we first compute \(\varPhi ^{\ell }_{\ell _1, \ell _2}(X) e^{\ell }_n\). For \(X \in {\mathcal {U}}_{q}(\mathfrak {g})\), we have
$$\begin{aligned} \varPhi ^{\ell }_{\ell _1, \ell _2}(X) e^{\ell }_n&= \left( \beta ^{\ell }_{\ell _1, \ell _2}\right) ^{*}\, t^{\ell _1, \ell _2}(X)\, \beta ^{\ell }_{\ell _1, \ell _2}\, e^{\ell }_n \\&= \sum _{m=-\ell }^{\ell } \sum _{m_1, n_1 = -\ell _1}^{\ell _1} \sum _{m_2, n_2 = -\ell _2}^{\ell _2} C^{\ell _1, \ell _2, \ell }_{m_1, m_2, m}\, C^{\ell _1, \ell _2, \ell }_{n_1, n_2, n} \left( t^{\ell _1}_{m_1, n_1} {\otimes }t^{\ell _2}_{m_2, n_2}\right) \!(X)\, e^{\ell }_m. \end{aligned}$$
This proves (i).
To obtain (ii), write \(\varPhi = \sum _{m, n=-\ell }^\ell \varPhi _{m, n} \otimes E^\ell _{m, n}\), and observe
for all \(\lambda \in {\mathbb {Z}}\), \(\mu \in \frac{1}{2} {\mathbb {Z}}\), which implies \(q^{-m\mu } \varPhi _{m,n}(A^\lambda )= q^{-n\mu } \varPhi _{m,n}(A^\lambda )\) since \(t^\ell (K^\mu )\) is diagonal. This gives \(\varPhi _{m,n}(A^\lambda )=0\) for \(m\not =n\). Next pair \(\varPhi ^{\ell }_{\ell _1, \ell _2}\) with \(A^{\lambda }\) using (i), the observation made before Proposition 5.2, and the fact that \(C^{\ell _1, \ell _2, \ell }_{m_1, m_2, m} =0\) unless \(m_1-m_2=m\). Finally, take \(\lambda =0\) and use (8.4).
Finally, for (iii) note that for \(X\in \mathcal {B}\) we have \(\varPhi (X)= t^\ell (X) \varPhi (1) = \varPhi (1) t^\ell (X)\). Since \(t^\ell :\mathcal {B} \rightarrow \text {End}(\mathcal {H}^\ell )\) is an irreducible unitary representation, Schur’s Lemma implies that \(\varPhi (1)=cI\) is a multiple of the identity. Theorem 4.3 implies that for \(X\in \mathcal {B}\)
and, by the multiplicity free statement in Theorem 4.3, this is, up to a constant, the only possible linear combination of the matrix elements \(t^{\ell _1}_{m_1, n_1} \otimes t^{\ell _2}_{m_2, n_2}\) for fixed \((\ell _1,\ell _2)\) with this property. Hence, (iii) follows. \(\square \)
The special case \(\varPhi ^0_{1/2,1/2}:{\mathcal {U}}_{q}(\mathfrak {g})\rightarrow {\mathbb {C}}\) can now be calculated explicitly using the Clebsch–Gordan coefficients \(C^{1/2,1/2,0}_{m,-m,0}\) from (8.5). Explicitly,
This element is not self-adjoint in \(\mathcal {A}_q(SU(2))\otimes \mathcal {A}_q(SU(2))\). Recall the right action of \({\mathcal {U}}_{q}(\mathfrak {g})\) on the map \(\varPhi :{\mathcal {U}}_{q}(\mathfrak {g}) \rightarrow \text {End}(\mathcal {H}^\ell )\), including the case \(\ell =0\), by \((\varPhi \cdot X)(Y) = \varPhi (XY)\) for all \(X, Y \in {\mathcal {U}}_{q}(\mathfrak {g})\). This is analogous to the construction discussed at the beginning of this subsection. Then
is self-adjoint for the \(*\)-structure of \(\mathcal {A}_q(SU(2))\otimes \mathcal {A}_q(SU(2))\). Then by construction
5.2 The recurrence relation for spherical functions of type \(\ell \)
The proof of Theorem 4.6 in the group case can be found in [24, Proposition 3.1], where the constants \(A_{i,j}\) are explicitly given in terms of Clebsch–Gordan coefficients. That proof can also be applied in the present case, giving the coefficients \(A_{i,j}\) explicitly in terms of Clebsch–Gordan coefficients. A more general set-up can be found in [44, Proposition 3.3.17]. The approach given here is related to [24, Proposition 3.1], but differs in the way \(A_{1/2,1/2}\not =0\) is established.
Proof of Theorem 4.6
As \({\mathcal {U}}_{q}(\mathfrak {g})\)-representations, the tensor product decomposition
follows from the standard tensor product decomposition for \({\mathcal {U}}_{q}(\mathfrak {su}(2))\), see (8.1). It follows that
where \(\varPsi ^{i,j}_{m,n}\) is in the span of the matrix elements \(t^{\ell _1+i}_{r_1,s_1}\otimes t^{\ell _2+j}_{r_2,s_2}\). Note that (4.7), and a similar calculation for multiplication by an element from \(\mathcal {B}\) from the other side, shows that \(\varphi \varPhi ^{\ell }_{\ell _1, \ell _2}\) has the required transformation behaviour (4.5) for the action of \(\mathcal {B}\) from the left and the right. Since the matrix elements \(t^{\ell _1}_{r_1,s_1}\otimes t^{\ell _2}_{r_2,s_2}\) form a basis for \(\mathcal {A}_q(G)\), it follows that each \(\varPsi ^{i,j}\) satisfies (4.5), so that by Proposition 5.2(iii) \(\varPsi ^{i,j}= A_{i,j} \, \varPhi ^{\ell }_{\ell _1+i, \ell _2+j}\). Here \(\varPsi ^{i,j}=0\) in case \((\ell _1+i,\ell _2+j)\) does not satisfy the conditions (4.3).
It remains to show that \(A_{1/2,1/2}\not =0\). In order to do so, we evaluate the identity of Theorem 4.6 at a suitable element of \({\mathcal {U}}_{q}(\mathfrak {g})\). For the \({\mathcal {U}}_{q}(\mathfrak {su}(2))\)-representations in (3.3), it is immediate that \(t^\ell (e^k)=0\) for \(k>2\ell \) and that \(t^\ell _{r,s}(e^{2\ell })=0\) except for the case \((r,s)=(-\ell ,\ell )\) and then \(t^\ell _{-\ell ,\ell }(e^{2\ell })\not =0\). Extending to \({\mathcal {U}}_{q}(\mathfrak {g})\), we find that \(\varPhi ^{\ell }_{\ell _1,\ell _2} (E_1^{k_1}E_2^{k_2})=0\) if \(k_1>2\ell _1\) or \(k_2>2\ell _2\), and
where the right-hand side is non-zero in case \(m=-\ell _1+\ell _2\), \(n=\ell _1-\ell _2\).
So if we evaluate the identity of Theorem 4.6 at \(E_1^{2\ell _1+1}E_2^{2\ell _2+1}\), it follows that only the term with \((i,j)=(1/2,1/2)\) on the right-hand side of Theorem 4.6 is non-zero, and the specific matrix element is given by (5.4) with \((\ell _1,\ell _2)\) replaced by \((\ell _1+\frac{1}{2},\ell _2+\frac{1}{2})\). It suffices to check that the left-hand side of Theorem 4.6 is non-zero when evaluated at \(E_1^{2\ell _1+1}E_2^{2\ell _2+1}\). By (4.6) we need to calculate the comultiplication on \(E_1^{2\ell _1+1}E_2^{2\ell _2+1}\). Using the non-commutative q-binomial theorem [15, Exercise 1.35] twice, we get
From (5.1), we find that \(\varphi (E_1^{k_1}E_2^{k_2} K_1^{2\ell _1+1-k_1}K_2^{2\ell _2+1-k_2})=0\) unless \(k_1=k_2=1\), and in that case the term \(\beta \otimes \beta \) of (5.1) gives a non-zero contribution, and the other terms give zero. So we find
and this is non-zero for the same matrix element \((m,n) = (-\ell _1+\ell _2,\ell _1-\ell _2)\) by (5.4). \(\square \)
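The non-commutative q-binomial theorem invoked in the proof states that if \(ba=qab\), then \((a+b)^n=\sum _{k=0}^n \left[ {\begin{matrix} n \\ k \end{matrix}} \right] _q a^k b^{n-k}\) with Gaussian binomial coefficients. This can be checked by brute-force normal ordering; the following sympy sketch (the helper names `qbinom`, `normal_order` and `expand_power` are ours) verifies the identity for small \(n\).

```python
import sympy as sp
from itertools import product

q = sp.symbols('q')

def qbinom(n, k):
    # Gaussian binomial coefficient [n, k]_q
    num = den = sp.Integer(1)
    for i in range(k):
        num *= 1 - q**(n - i)
        den *= 1 - q**(i + 1)
    return sp.cancel(num / den)

def normal_order(word):
    """Normally order a word in the letters 'a', 'b' subject to b*a = q*a*b.

    Returns ((#a, #b), coefficient): one factor q per swap 'ba' -> 'ab'."""
    word, coeff = list(word), sp.Integer(1)
    changed = True
    while changed:
        changed = False
        for i in range(len(word) - 1):
            if word[i] == 'b' and word[i + 1] == 'a':
                word[i], word[i + 1] = 'a', 'b'
                coeff *= q
                changed = True
    return (word.count('a'), word.count('b')), coeff

def expand_power(n):
    # expand (a+b)^n as a sum of normally ordered words a^j b^k
    result = {}
    for w in product('ab', repeat=n):
        key, coeff = normal_order(w)
        result[key] = result.get(key, 0) + coeff
    return result

# coefficient of a^k b^{n-k} in (a+b)^n equals the Gaussian binomial
n = 4
coeffs = expand_power(n)
for k in range(n + 1):
    assert sp.expand(coeffs[(k, n - k)] - qbinom(n, k)) == 0
```

In the proof above, the theorem is applied twice, to expand the comultiplication of the powers of \(E_1\) and \(E_2\), whose summands q-commute.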
Note that we can derive the explicit value of \(A_{1/2,1/2}\) from the proof of Theorem 4.6 by keeping track of the constants involved. However, we do not need the explicit value except in the case \(\ell =0\).
In the special case \(\ell =0\), the recurrence of Theorem 4.6 has two terms and we obtain
The value of \(A_{1/2,1/2}\) can be obtained by evaluating at the group-like element \(A^\lambda \), using Proposition 5.2(ii) and \(2 \varphi (A^\lambda ) = q^{\lambda +1}+q^{-1-\lambda }\) and comparing leading coefficients of the Laurent polynomials in \(q^\lambda \). This gives \(\frac{1}{2} q^{-1} (C^{\ell _1,\ell _1,0}_{-\ell _1,-\ell _1,0})^2= A_{1/2,1/2} (C^{\ell _1+1/2,\ell _1+1/2,0}_{-\ell _1-1/2,-\ell _1-1/2,0})^2\), and the value for \(A_{1/2,1/2}\) follows from (8.6). Having \(A_{1/2,1/2}\), we evaluate at 1, i.e. the case \(\lambda = 0\), which gives \(A_{1/2,1/2}+A_{-1/2,-1/2}=\frac{1}{2}(q+q^{-1})\) by Proposition 5.2(ii), from which \(A_{-1/2,-1/2}\) follows.
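This bookkeeping can be checked symbolically. Assuming, consistently with Proposition 5.3 below, that \(\varPhi ^0_{\frac{1}{2} n,\frac{1}{2} n} = c_n U_n(\varphi )\) with \(c_n = q^{n}(1-q^2)/(1-q^{2n+2})\), the Chebyshev recurrence \(2x\,U_n = U_{n+1}+U_{n-1}\) forces the two coefficients of the two-term recurrence to be \(c_n/2c_{n+1}\) and \(c_n/2c_{n-1}\); a sympy sketch (the names `A_plus`, `A_minus` are ours) confirming that these sum to \(\frac{1}{2}(q+q^{-1})\):

```python
import sympy as sp

q = sp.symbols('q', positive=True)

# normalisation from Proposition 5.3: Φ^0_{n/2,n/2} = c_n U_n(φ)
def c(n):
    return q**n * (1 - q**2) / (1 - q**(2*n + 2))

# combining the two-term recurrence with 2x U_n = U_{n+1} + U_{n-1} forces
def A_plus(n):       # coefficient of the (ℓ1 + 1/2)-term
    return sp.cancel(c(n) / (2 * c(n + 1)))

def A_minus(n):      # coefficient of the (ℓ1 - 1/2)-term
    return sp.cancel(c(n) / (2 * c(n - 1)))

# evaluating the recurrence at 1 (the case λ = 0), where φ(1) = (q + q^{-1})/2
# and Φ^0(1) = 1, gives A_plus + A_minus = (q + q^{-1})/2
for n in range(1, 6):
    assert sp.cancel(A_plus(n) + A_minus(n) - (q + 1/q) / 2) == 0
```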
Proposition 5.3
For \(n\in {\mathbb {N}}\), we have \(\displaystyle {\varPhi ^0_{\frac{1}{2}n,\frac{1}{2}n} = q^{n}\frac{1-q^2}{1-q^{2n+2}} U_n(\varphi )}\).
Recall that the Chebyshev polynomials of the second kind are orthogonal polynomials:$$\begin{aligned} \frac{2}{\pi } \int _{-1}^{1} U_n(x)\,U_m(x)\,\sqrt{1-x^2}\, dx = \delta _{n,m}. \end{aligned}$$(5.6)
Proof
From (5.5), it follows that \(\varPhi ^0_{\frac{1}{2}n,\frac{1}{2}n} = p_n(\varphi )\) for a polynomial \(p_n\) of degree n satisfying
with initial conditions \(p_0(x)=1\), \(p_1(x)=\frac{2}{q+q^{-1}}x\). Set
then \(r_0(x)=1\), \(r_1(x)=2x\) and \(2x\, r_n(x) = r_{n+1}(x) + r_{n-1}(x)\). So \(r_n(x) = U_n(x)\). \(\square \)
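The rescaling in the proof can be verified symbolically; a minimal sympy sketch checking the initial conditions and the closed form of Proposition 5.3 against the Chebyshev recurrence:

```python
import sympy as sp

x, q = sp.symbols('x q')

# closed form claimed in Proposition 5.3: p_n(x) = q^n (1-q^2)/(1-q^{2n+2}) U_n(x)
def p(n):
    return q**n * (1 - q**2) / (1 - q**(2*n + 2)) * sp.chebyshevu(n, x)

# initial conditions from the proof
assert sp.cancel(p(0) - 1) == 0
assert sp.cancel(p(1) - 2*x/(q + 1/q)) == 0

# the rescaling r_n(x) = (1-q^{2n+2}) p_n(x) / (q^n (1-q^2)) recovers
# r_0 = 1, r_1 = 2x and the Chebyshev recurrence 2x r_n = r_{n+1} + r_{n-1}
def r(n):
    return sp.cancel((1 - q**(2*n + 2)) / (q**n * (1 - q**2)) * p(n))

assert r(0) == 1 and r(1) == 2*x
for n in range(1, 5):
    assert sp.expand(2*x*r(n) - r(n + 1) - r(n - 1)) == 0
```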
5.3 Orthogonality relations
In this subsection, we prove Theorem 4.8 from the quantum group theoretic interpretation up to the calculation of certain explicit coefficients in the expansion of the entries of the weight.
Recall the Haar functional on \(\mathcal {A}_q(SU(2))\). It is the unique left and right invariant positive functional \(h_0:\mathcal {A}_q(SU(2)) \rightarrow \mathbb {C}\) normalised by \(h_0(1)=1\), see e.g. [9], [20, §4.3.2], [21, 48]. The Schur orthogonality relations state
We identify \(\mathcal {A}_q(G)\) with \(\mathcal {A}_q(SU(2))\otimes \mathcal {A}_q(SU(2))\). Then the functional \(h=h_0\otimes h_0\) is the Haar functional on \(\mathcal {A}_q(G)\). We can identify the analogue of the algebra of bi-K-invariant polynomials on G as the algebra generated by the self-adjoint element \(\psi \), and give the analogue of the restriction of the invariant integration in Lemma 5.4.
Lemma 5.4
A functional \(\tau :{\mathcal {U}}_{q}(\mathfrak {g})\rightarrow {\mathbb {C}}\) that satisfies the transformation behaviour \(\tau ((AXA^{-1})YZ) = \varepsilon (X) \tau (Y) \varepsilon (Z)\) for all \(X, Z \in \mathcal {B}\) and all \(Y \in {\mathcal {U}}_{q}(\mathfrak {g})\) is a polynomial in \(\psi \) as in (5.2). Moreover, the Haar functional on the \(*\)-algebra \({\mathbb {C}}[\psi ]\subset \mathcal {A}_q(G)\) is given by
Proof
From Proposition 5.2(iii) and (5.3), we see that any functional on \({\mathcal {U}}_{q}(\mathfrak {g})\) satisfying the invariance property is a polynomial in \(\psi \), since this space is spanned by \(\varPhi ^0_{\frac{1}{2}n, \frac{1}{2}n}\cdot A^{-1}\) for \(n\in \mathbb {N}\). From Proposition 5.3, we see that \(\varPhi ^0_{\frac{1}{2} n,\frac{1}{2} n}\cdot A^{-1}\) is a multiple of \(U_n(\psi )\), since the right action of \(A^{-1}\) is an algebra homomorphism. The Schur orthogonality relations give
and since the argument of h is polynomial in \(\psi \), we see that it has to correspond to the orthogonality relations (5.6) for the Chebyshev polynomials. So we find the expression for the Haar functional on the \(*\)-algebra generated by \(\psi \). \(\square \)
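The Chebyshev orthogonality relations (5.6) entering the proof can be checked directly. Substituting \(x=\cos \theta \) and using \(U_n(\cos \theta ) = \sin ((n+1)\theta )/\sin \theta \), the weight \(\sqrt{1-x^2}\,dx\) becomes \(\sin ^2\theta \, d\theta \) on \([0,\pi ]\); a sympy sketch (`U_char` is our name):

```python
import sympy as sp

theta = sp.symbols('theta')

# Weyl character formula for SU(2): U_n(cos θ) = sin((n+1)θ)/sin θ
def U_char(n):
    return sp.sin((n + 1) * theta) / sp.sin(theta)

# orthogonality (5.6) after x = cos θ: ∫_0^π U_n U_m sin²θ dθ = (π/2) δ_{nm}
for n in range(4):
    for m in range(4):
        val = sp.integrate(U_char(n) * U_char(m) * sp.sin(theta)**2,
                           (theta, 0, sp.pi))
        expected = sp.pi / 2 if n == m else 0
        assert sp.simplify(val - expected) == 0
```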
Theorem 5.5
Assume \(\varPsi , \varPhi :{\mathcal {U}}_{q}(\mathfrak {g}) \rightarrow \mathrm{End}(\mathcal {H}^{\ell })\) are spherical functions of type \(\ell \), see Definition 5.1. Then the map
satisfies \(\tau ((AXA^{-1})YZ) = \varepsilon (X) \tau (Y) \varepsilon (Z)\) for all \(X, Z \in \mathcal {B}\) and all \(Y \in {\mathcal {U}}_{q}(\mathfrak {g})\).
In particular, any such trace is a polynomial in the generator \(\psi \) by Lemma 5.4.
Corollary 5.6
Fix \(\ell \in \frac{1}{2}{\mathbb {N}}\), then for \(k,p\in \{0,1,\ldots , 2\ell \}\),
with \(\bigl ( W(\psi )_{k,p}\bigr )^*=W(\psi )_{p,k}\).
Proof of Theorem 5.5
Write \(\varPsi = \sum _{m,n=-\ell }^\ell \varPsi _{m, n}\otimes E^\ell _{m,n}\), \(\varPhi = \sum _{m,n=-\ell }^\ell \varPhi _{m, n}\otimes E^\ell _{m,n}\), with \(\varPsi _{m, n}\) and \(\varPhi _{m, n}\) linear functionals on \({\mathcal {U}}_{q}(\mathfrak {g})\) so that
using the standard notation \(\Delta (Y)= \sum _{(Y)} Y_{(1)}\otimes Y_{(2)}\) and \(\xi ^*(X)= \overline{\xi (S(X)^*)}\) for \(\xi :{\mathcal {U}}_{q}(\mathfrak {g})\rightarrow \mathbb {C}\) and \(X\in {\mathcal {U}}_{q}(\mathfrak {g})\).
Let \(Z \in \mathcal {B}\) and \(Y \in {\mathcal {U}}_{q}(\mathfrak {g})\), then we find
Since \(\mathcal {B}\) is a right coideal, we can assume that \(Z_{(1)}\in \mathcal {B}\), so, by the transformation property (4.5), \(\varPsi _{m, n}(A^{-1} Y_{(1)} Z_{(1)})= \sum _{k=-\ell }^\ell \varPsi _{m,k}(A^{-1}Y_{(1)})t^\ell _{k,n}(Z_{(1)})\). Using this, the identity \(t^\ell _{k,n}(Z_{(1)})= \overline{t^\ell _{n,k}(Z_{(1)}^*)}\), which holds by the \(*\)-invariance of \({\mathcal {B}}\) and the unitarity of \(t^\ell \), moving this factor to the \(\varPhi \)-part, summing over n, and the transformation property (4.5) for \(\varPhi \), we obtain
using \(\sum _{(Z)} S(Z_{(2)})^*Z_{(1)}^*= \bigl (\sum _{(Z)}Z_{(1)} S(Z_{(2)})\bigr )^*= \overline{\varepsilon (Z)}\) by the antipode axiom in a Hopf algebra.
For the invariance property from the left, we proceed similarly using that A is a group-like element. So for \(Y\in {\mathcal {U}}_{q}(\mathfrak {g})\) and \(X\in \mathcal {B}\) we have
Now \(S(A)^*=A^{-1}\), \(S(A^{-1})^*=A\). Proceeding as in the previous paragraph using \(X_{(1)}\in {\mathcal {B}}\), we obtain
The result follows if we prove \(\sum _{(X)} X_{(1)}^*A^{-2} S(X_{(2)})^*A = A^{-1} \overline{\varepsilon (X)}\). In order to prove this, we need the observation that \(S(X^*) = A^{-2} S(X)^*A^2\) for all \(X\in {\mathcal {U}}_{q}(\mathfrak {g})\), which can be verified on the generators and follows since the operators are antilinear homomorphisms, see Remark 5.7. Now we obtain the required identity:
\(\square \)
Remark 5.7
The required identity \(S(X^*) = A^{-2} S(X)^*A^2\) for all \(X\in {\mathcal {U}}_{q}(\mathfrak {g})\) can be generalised to arbitrary semisimple \(\mathfrak {g}\). Indeed, since the square of the antipode S is given by conjugation with an explicit element of the Cartan subalgebra associated to \(\rho =\frac{1}{2} \sum _{\alpha >0} \alpha \), see e.g. [33, Exercise 4.1.1], and since in a Hopf \(*\)-algebra \(S\circ *\) is an involution, we find that \(S(X^*) = K_{-\rho } S(X)^*K_{\rho }\) if \(S^2(X) = K_{-\rho } X K_{\rho }\) as in [33, Exercise 4.1.1].
Proof of Corollary 5.6
By Theorem 5.5 and Lemma 5.4, the trace is a linear combination of \(\varPhi ^0_{\frac{1}{2} n,\frac{1}{2} n}\cdot A^{-1}\), hence a polynomial in \(\psi \). To obtain the expression, we need to find those n’s for which \(\varPhi ^0_{\frac{1}{2} n,\frac{1}{2} n}\cdot A^{-1}\) occurs in \(W(\psi )_{k,p}\) by Proposition 5.3. Using (4.4), (5.8) and Proposition 5.2, we find that in the entry \(W(\psi )_{k,p}\) only matrix elements of the form \(t^{\frac{1}{2} k}_{a_1,b_1}(t^{\frac{1}{2} p}_{c_1,d_1})^*\otimes t^{\ell - \frac{1}{2} k}_{a_2,b_2}(t^{\ell -\frac{1}{2} p}_{c_2,d_2})^*\) occur. Using the Clebsch–Gordan decomposition, we see that \(t^{\frac{1}{2} k}_{a_1,b_1}(t^{\frac{1}{2} p}_{c_1,d_1})^*\) and similarly \(t^{\ell - \frac{1}{2} k}_{a_2,b_2}(t^{\ell -\frac{1}{2} p}_{c_2,d_2})^*\) can be written as a sum of matrix elements from \(t^{\frac{1}{2} (k+p)-r}\), \(r\in \{0,1,\ldots , k\wedge p\}\), respectively \(t^{2\ell - \frac{1}{2} (k+p)-s}\), \(s\in \{0,1,\ldots , (2\ell -k)\wedge (2\ell -p)\}\). By Proposition 5.3 and Proposition 5.2, the only n’s that can occur are \(n=k+p-2r\), \(r\in \{0,1,\ldots , k\wedge p\}\).
The statement on the adjoint follows immediately from (5.8) and \(\psi \) being self-adjoint, see (5.2). \(\square \)
We can now give the first part of the proof of Theorem 4.8, except that we still have to determine certain constants. This is contained in Lemma 5.8.
Lemma 5.8
We have
The proof of Lemma 5.8 is a calculation using Theorem 4.6, which we postpone to Sect. 6.3.
First part of the proof of Theorem 4.8
Using the notation of Sect. 4, we find from Corollary 4.7 and (4.8) that
where we use that the action by \(A^{-1}\) from the right is an algebra homomorphism, since \(A^{-1}\) is group like, and that \(\psi \) is self-adjoint. By Proposition 5.2, Lemma 5.4 and (5.7), we have
after a straightforward calculation. Plugging Lemma 5.8 into (5.9) and rewriting proves the result using Corollary 5.6. \(\square \)
Note that we have not yet determined the explicit values of \(\alpha _t(m,n)\) in Theorem 4.8 and we have not shown that W is a matrix-valued weight function in the sense of Sect. 2. The values of the constants \(\alpha _t(m,n)\) will be determined in Sect. 6.1 and the positivity of W(x) for \(x\in (-1,1)\) will follow from Theorem 4.15.
5.4 q-Difference equations
It is well known, see e.g. Koornwinder [30], Letzter [35], Noumi [37], that the centre of the quantised enveloping algebra can be used to determine a commuting family of q-difference operators to which the corresponding spherical functions are eigenfunctions. In this subsection, we derive the matrix-valued q-difference operators corresponding to central elements to which we find matrix-valued eigenfunctions.
The centre of \({\mathcal {U}}_{q}(\mathfrak {g})\) is generated by two Casimir elements, see Sect. 3:
Because of Proposition 5.2 and (3.4), we find
The goal is to compute the radial parts of the Casimir elements acting on arbitrary spherical functions of type \(\ell \) in terms of an explicit q-difference operator. In order to derive such a q-difference operator, we find a \(\mathcal {B}\mathcal {A}\mathcal {B}\)-decomposition for suitable elements in \({\mathcal {U}}_{q}(\mathfrak {g})\) in Proposition 5.10. This special case of a \(\mathcal {B}\mathcal {A}\mathcal {B}\)-decomposition is the analogue of the KAK-decomposition, which has no general quantum algebra analogue. For this purpose, we first establish Lemma 5.9, which can be viewed as a quantum analogue of [8, Lemma 2.2], and gives the \(\mathcal {B}\mathcal {A}\mathcal {B}\)-decomposition of \(F_2A^{\lambda }\).
Lemma 5.9
Recall \(A=K_1^{1/2}K_2^{1/2}\). For \(\lambda \in {\mathbb {Z}}\setminus \{0\}\), we have
Proof
Recall Definition 4.1, so the result follows from
and using \(K^{1/2}= K_1^{1/2} K_2^{-1/2}\in \mathcal {B}\). \(\square \)
Proposition 5.10
The \(\mathcal {B} \mathcal {A} \mathcal {B}\)-decomposition for the Casimir elements \(\Omega _1\) and \(\Omega _2\) is given by
and
Proof
We first concentrate on \(\Omega _2\), and the statement for \(\Omega _1\) follows by flipping the order using \(\sigma \) as in Remark 4.2(iv). We need to rewrite \(E_2F_2A^\lambda \), which we do in terms of \(F_2A^\lambda \) and next using Lemma 5.9. The details are as follows.
Using Definition 4.1 and the commutation relations, and pulling through \(F_2\) to the left, we get
Similarly, and only slightly more involved, we obtain
Using (5.11) and (5.12), we eliminate the term with \(F_1F_2\), and shifting \(\lambda \) to \(\lambda +1\) gives
Apply Lemma 5.9 on the first two terms on the right-hand side of (5.13) and note that the remaining terms in (5.13) and in \(\Omega _2 A^\lambda \) can be dealt with by observing that \(K_2=K^{-1/2}A= AK^{-1/2}\). Taking corresponding terms together proves the \(\mathcal {B}\mathcal {A}\mathcal {B}\)-decomposition for \(\Omega _2 A^\lambda \) after a short calculation. \(\square \)
Our next task is to translate Proposition 5.10 into an operator for spherical functions of type \(\ell \), from which we derive eventually, see Theorem 4.13, an Askey–Wilson q-difference type operator for the matrix-valued orthogonal polynomials \(P_n\). Let \(\varPhi \) be a spherical function of type \(\ell \), then we immediately obtain from Proposition 5.10
The analogous expression for \(\varPhi (\Omega _2 A^{\lambda })\) of (5.14) can be obtained using the flip \(\sigma \), see Remark 4.2(iv). In particular, it suffices to replace all \(t^\ell (X)\) in (5.14) by \(J^\ell t^\ell (X)J^\ell \), see Remark 4.2(iv), to get the corresponding expression.
By Proposition 5.2(ii), we know that \(\varPhi (A^{\lambda })\) is diagonal. Note that \(\varPhi \cdot \Omega _1: {\mathcal {U}}_{q}(\mathfrak {g})\rightarrow \mathrm{End}(\mathcal {H}^\ell )\) is also a spherical function of type \(\ell \) by the centrality of the Casimir operator \(\Omega _1\). Hence \((\varPhi \cdot \Omega _1)(A^\lambda )=\varPhi (\Omega _1A^\lambda )\) is diagonal, which can also be seen directly from (5.14). We can calculate the matrix entries of \(\varPhi (\Omega _1 A^\lambda )\) using the upper triangular matrices \(t^\ell (B_1K^{-1/2})\), \(t^\ell (B_1)\) having only non-zero entries on the superdiagonal, the lower triangular matrices \(t^\ell (K^{-1/2}B_2)\), \(t^\ell (B_2)\) having only non-zero entries on the subdiagonal, and the diagonal matrices \(t^\ell (K^{\pm 1/2})\), \(t^\ell (B_1K^{-1/2}B_2)\), \(t^\ell (K^{-1/2}B_2B_1)\), see (4.2), (3.1).
For \(\varPhi \) a spherical function of type \(\ell \), we view the diagonal restricted to \(\mathcal {A}\) as a vector-valued function \(\hat{\varPhi }\):
So we can regard the Casimir elements as acting on the vector-valued function \(\hat{\varPhi }\), and the action of the Casimir is made explicit in Proposition 5.11.
Proposition 5.11
Let \(\varPhi \) be a spherical function of type \(\ell \), with corresponding vector-valued function \({\hat{\varPhi }} :\mathcal {A} \rightarrow \mathcal {H}^\ell \) representing the diagonal when restricted to \(\mathcal {A}\). Then
where \(M_1^\ell (z)\) is a tridiagonal and \(N_1^\ell (z)\) is a diagonal matrix with respect to the basis \(\{e^\ell _n\}_{n=-\ell }^\ell \) of \(\mathcal {H}^\ell \) with coefficients
Moreover, for \(\Omega _2\) the action is
where \(M_2^\ell (z)=J^\ell M_1^\ell (z) J^\ell \) is a tridiagonal, and \(N_2^\ell (z)=J^\ell N_1^\ell (z) J^\ell \) is a diagonal matrix.
Remark 5.12
The proof of Theorem 4.13 does not explain why the matrix coefficients of \(\eta _q\) and \(\eta _{q^{-1}}\) in Theorem 4.13 are related by \(z\leftrightarrow z^{-1}\). In Proposition 5.11, there is a lack of symmetry between the up and down shift in \(\lambda \), and only after suitable multiplication with \(\hat{\varPhi }_0\) from the left and right does the symmetry of Theorem 4.13 emerge. It would be desirable to have an explanation of this symmetry from the quantum group theoretic interpretation. Note that the symmetry can be translated to the requirement \(\varPsi (z) N_1(q^{-2}z^{-1}) = M_1(z) \varPsi (qz)\) with \(\varPsi (q^\lambda ) = \hat{\varPhi }^\ell _0(A^{\lambda })\hat{\varPhi }^\ell _0(A^{-1-\lambda })^{-1}\).
The remark following (5.14) on how to switch to the second Casimir operator gives the conjugation between \(M_1^\ell \) and \(M_2^\ell \), respectively \(N_1^\ell \) and \(N_2^\ell \). Note that in case \(\ell =0\), \(\varPhi \) and \({\hat{\varPhi }}\) are equal, and we find that Proposition 5.11 gives the operator
for \(\varPhi :{\mathcal {U}}_{q}(\mathfrak {g})\rightarrow {\mathbb {C}}\) a spherical function (of type 0), which should be compared to [30, Lemma 5.1], see also [35, 37].
Proof
Consider (5.14) and calculate the (m, m)-entry. Using the explicit expressions for the elements \(t^\ell (X)\) for \(X\in \mathcal {B}\), see (4.2), (3.1), in (5.14), we find
which we can rewrite as stated with the matrices \(M_1^\ell (q^{\lambda })\) and \(N_1^\ell (q^{\lambda })\). The case for \(\Omega _2\) follows from the observed symmetry, or it can be obtained by an analogous computation. \(\square \)
In particular, we can apply Proposition 5.11 to \(\varPhi ^\ell _{\xi (n,m)}\) using (5.10) to find an eigenvalue equation. In order to find an eigenvalue equation for the matrix-valued polynomials, we first introduce the full spherical functions \({\hat{\varPhi }}^\ell _n:\mathcal {A} \rightarrow \text {End}(\mathbb {C}^{2\ell +1})\) defined by
for \(n\in \mathbb {N}\). So we put the vectors \((\hat{\varPhi }^\ell _{\xi (n,0)}, \ldots ,\hat{\varPhi }^\ell _{\xi (n,2\ell )})\) as columns in a matrix, and we relabel in order to have the matrix entries labelled by \(i,j\in \{0,\ldots , 2\ell \}\). We reformulate Proposition 5.11 and (5.10) as the eigenvalue equations
where \(\bigl (M_i(z)\bigr )_{m,n}= \bigl (M_i^\ell (z)\bigr )_{m-\ell ,n-\ell }\) for \(m,n\in \{0,1,\ldots , 2\ell \}\), and similarly for \(N_i\), are the matrices of Proposition 5.11 shifted to the standard matrix with respect to the standard basis \(\{e_n\}_{n=0}^{2\ell }\) of \(\mathbb {C}^{2\ell +1}\). Note that the symmetry of Proposition 5.11 then rewrites as \(M_2(z) = J M_1(z)J\), \(N_2(z) = J N_1(z)J\), with \(J:e_n\mapsto e_{2\ell -n}\).
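The symmetry \(M_2(z) = J M_1(z)J\), \(N_2(z) = J N_1(z)J\) is compatible with the tridiagonal and diagonal structure, since conjugation by the flip \(J\) reverses both rows and columns. A small sympy sketch with generic placeholder entries (not the actual \(M_1^\ell (z)\)):

```python
import sympy as sp

# the flip J : e_n ↦ e_{2ℓ-n} as a matrix, here with 2ℓ+1 = 4
N = 4
J = sp.Matrix(N, N, lambda i, j: 1 if i + j == N - 1 else 0)
assert J * J == sp.eye(N)

# a generic tridiagonal matrix M (placeholder symbols, not M_1^ℓ(z))
a = sp.symbols('a0:4')   # diagonal
b = sp.symbols('b0:3')   # superdiagonal
c = sp.symbols('c0:3')   # subdiagonal
M = sp.Matrix(N, N, lambda i, j:
              a[i] if i == j else
              (b[i] if j == i + 1 else (c[j] if i == j + 1 else 0)))

# conjugation by J reverses rows and columns, so J M J is again tridiagonal,
# with super- and subdiagonals interchanged and their order reversed;
# in particular, a diagonal matrix stays diagonal
M2 = J * M * J
assert all(M2[i, j] == 0
           for i in range(N) for j in range(N) if abs(i - j) > 1)
assert M2[0, 1] == c[2] and M2[1, 0] == b[2]
```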
Now we rewrite Corollary 4.7 after pairing with \(A^\lambda \) as
where \(\mu (x) =\frac{1}{2}(x+x^{-1})\), using that pairing with the group-like elements \(A^\lambda \) is a homomorphism and \(\varphi (A^\lambda )=\mu (q^{\lambda +1})\) by (5.1). Using (5.17) in (5.16) proves Corollary 5.13.
Corollary 5.13
Assuming \({\hat{\varPhi }}^\ell _0(A^\lambda )\) is invertible, the matrix-valued polynomials \(P_n\) satisfy the eigenvalue equations
where
and \(\varLambda _n(i)\) are defined in (5.16) and \(\mu (x) = \frac{1}{2}(x+x^{-1})\).
It remains to prove the assumption in Corollary 5.13 for sufficiently many \(\lambda \), and to calculate the coefficients in the eigenvalue equations explicitly. This is done in Sect. 7.
Having established (5.17), we can prove Corollary 4.9 by considering coefficients in the Laurent expansion.
Proof of Corollary 4.9
The left-hand side of (5.17) can be expanded as a Laurent series in \(q^\lambda \) by Proposition 5.2(ii) and (5.15). The leading coefficient of degree \(\ell +n\) is an antidiagonal matrix
since the Clebsch–Gordan coefficient is zero unless \(i+j=2\ell \). With a similar expression for \({\hat{\varPhi }}^\ell _0(A^\lambda )\) on the right-hand side, expanding \(\overline{P_n}\bigl ( \mu (q^{\lambda +1})\bigr ) = \overline{\text {lc}(P_n)} q^{n(\lambda +1)} 2^{-n} + \text {lower order terms}\) gives
and (8.7) gives the result. \(\square \)
Corollary 4.9 gives \(P_0(x)=I\), so Corollary 5.13 gives
5.5 Symmetries
Even though we could use the explicit expression of the weight W to establish Proposition 4.10, we instead derive the occurrence of J in the commutant from Remark 4.5(ii).
Observe that \((\ell _1,\ell _2)=\xi (n,k)\) gives \((\ell _2, \ell _1)= \xi (n,2\ell -k)\) and that \(\sigma (A)=A\), so Remark 4.5(ii) yields \(\bigl (\varPhi ^\ell _{\xi (n,i)}\cdot A^{-1}\bigr )(\sigma (Z)) = J^\ell \bigl (\varPhi ^\ell _{\xi (n,2\ell -i)}\cdot A^{-1}\bigr )(Z) J^\ell \) for any \(Z\in {\mathcal {U}}_{q}(\mathfrak {g})\). Similarly, using moreover that \(\sigma \) is a \(*\)-isomorphism and that S and \(\sigma \) commute, see (3.5), we obtain \(\bigl (\varPhi ^\ell _{\xi (n,j)}\cdot A^{-1}\bigr )^*(\sigma (Z)) = J^\ell \bigl (\varPhi ^\ell _{\xi (n,2\ell -j)}\cdot A^{-1}\bigr )^*(Z) J^\ell \). Since \((\sigma \otimes \sigma )\circ \Delta = \Delta \circ \sigma \) and \((J^\ell )^2=1\), we find for \(Z\in {\mathcal {U}}_{q}(\mathfrak {g})\) from these observations, cf. (5.8),
By the first part of the proof of Theorem 4.8
Note that \(\psi (\sigma (Z))= \psi (Z)\) by the symmetric expression of (5.2). Again using \((\sigma \otimes \sigma )\circ \Delta = \Delta \circ \sigma \), we have \(p(\psi )(\sigma (Z))= p(\psi )(Z)\) for any polynomial p. It follows that
For \(n=m=0\), (5.19) proves that J is in the commutant algebra for W as stated in Proposition 4.10.
Note that (5.19), after applying the Haar functional on the \(*\)-algebra generated by \(\psi \) as in Lemma 5.4, also gives \(JG_nJ=G_n\), as is immediately clear from the explicit expression for the squared norm matrix \(G_n\) in Theorem 4.8 derived in Sect. 5.3. It moreover shows that \((JP_nJ)_{n\in \mathbb {N}}\) is a family of matrix-valued orthogonal polynomials with respect to the weight W. Since \(J \text {lc}(P_n)J = \text {lc}(P_n)\) by Corollary 4.9, it follows, see Sect. 2, that \(JP_nJ=P_n\).
Note that we have now proved Proposition 4.10 except for the \(\supset \)-inclusion in the first line. This is done after the proof of Theorem 4.8 is completed at the end of Sect. 6.1.
As a consequence of the discussion on symmetries, we can formulate the symmetry for \(\hat{\varPhi }^\ell _n\) in Lemma 5.14.
Lemma 5.14
With \(J:e_n\mapsto e_{2\ell -n}\), we have \(J \hat{\varPhi }^\ell _n(A^\lambda ) J = \hat{\varPhi }^\ell _n(A^\lambda )\) for all \(\lambda \in \mathbb {Z}\).
Proof
This is a consequence of the initial observations in Sect. 5.5. For \(i,j\in \{0,\ldots , 2\ell \}\), using (5.15) and \(\sigma (A^\lambda )=A^\lambda \) we obtain
\(\square \)
6 The weight and orthogonality relations for the matrix-valued polynomials
In this section, we complement the quantum group theoretic proofs of Sect. 5 by proving some of the statements of Sect. 4 using mainly analytic techniques. In Sect. 6.1, we prove the statement on the expansion of the entries of the weight function in terms of Chebyshev polynomials of Theorem 4.8. In Sect. 6.2, we prove the LDU-decomposition of Theorem 4.15. In Sect. 6.3, we prove Lemma 5.8 by induction from a special case, with Theorem 4.6 used in the induction step.
6.1 Explicit expressions of the weight
In order to prove the explicit expansion of the matrix entries of the weight W in terms of Chebyshev polynomials, we start with the expression of Corollary 5.6 for the matrix entries of the weight W. After pairing with \(A^\lambda \), we expand as a Laurent polynomial in \(q^\lambda \) in Proposition 6.1. Then we can use Lemma 6.2, whose proof is presented in Appendix 2.1.
Proposition 6.1
For \(0\le k,p\le 2\ell \), \(\lambda \in \mathbb {Z}\), we have
Proof
We obtain from Corollary 5.6
using that \(A^\lambda \) is a group-like element and \(S(A^{\lambda })^*= A^{-\lambda }\). By Proposition 5.2(ii) \(W(\psi )_{k,p}(A^\lambda )\) is
where the Clebsch–Gordan coefficients are to be taken as zero in case \(|i-n|>\ell -\frac{1}{2} k\), respectively \(|j-n|>\ell -\frac{1}{2} p\). Now put \(s=j-i\), then s runs from \(-\frac{1}{2}(k+p)\) up to \(\frac{1}{2}(k+p)\), and we have the Laurent expansion
with
Plugging in (8.3) gives the explicit expression for \(d^\ell _s(k,p)\).
In order to show that \(d_s^\ell (k,p)= d_{-s}^\ell (k,p)\), we note that \(p(\psi )(A^{\lambda })= p(\mu (q^\lambda ))\) is symmetric in \(\lambda \leftrightarrow -\lambda \) for any polynomial p, and so Corollary 5.6 implies the symmetry. \(\square \)
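The symmetry argument at the end of the proof rests on the fact that any polynomial in \(\mu (z) = \frac{1}{2}(z+z^{-1})\) is a Laurent polynomial in \(z\) invariant under \(z\leftrightarrow z^{-1}\). A sympy sketch checking this for the Chebyshev polynomials, together with their character expansion:

```python
import sympy as sp

z = sp.symbols('z')
mu = (z + 1/z) / 2   # so that p(ψ)(A^λ) = p(μ(q^λ))

# any polynomial in μ(z) is a symmetric Laurent polynomial in z;
# for U_n this is the character expansion z^n + z^{n-2} + ... + z^{-n}
for n in range(6):
    p = sp.expand(sp.chebyshevu(n, mu))
    assert sp.expand(p - p.subs(z, 1/z)) == 0            # z ↔ 1/z symmetry
    assert sp.expand(p - sum(z**(n - 2*s) for s in range(n + 1))) == 0
```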
Note that for a matrix element of \(P_n(\psi )^*W(\psi ) P_m(\psi )\) a similar expression can be given, cf. the first part of the proof of Theorem 4.8, but this is not required. The symmetry in the Laurent expansion of \(W(\psi )_{k,p}(A^{\lambda })\) does not seem to follow directly from known symmetries for the Clebsch–Gordan coefficients, see e.g. [20, Chap. 3].
Now we can proceed with the proof of Theorem 4.8, for which we need Lemma 6.2.
Lemma 6.2
For \(\ell \in \frac{1}{2} {\mathbb {N}}\) and for \(k,p\in \{0,\ldots ,2\ell \}\) subject to \(k\le p\) and \(k+p\le 2\ell \) we have with the notation of Proposition 6.1 and Theorem 4.8,
Lemma 6.2 contains the essential equality for the proof of the explicit expression of the coefficients of the matrix entries of the weight of Theorem 4.8. The proof of Lemma 6.2 can be found in Appendix 2.1, and it is based on a q-analogue of the corresponding statement for the classical case [24, Theorem 5.4]. In 2011, Mizan Rahman (private communication) informed one of us that he had obtained a q-analogue of the underlying summation formula for [24, Theorem 5.4]. It is remarkable that Rahman’s q-analogue differs from the one needed here in Lemma 6.2.
Second part of the proof of Theorem 4.8
We prove the statement on the explicit expression of the matrix entries of the weight in terms of Chebyshev polynomials. By Corollary 5.6, we have
for all \(\lambda \in \mathbb {Z}\). Since the coefficients \(\alpha _r(k,p)\) are completely determined by \(W(\psi )_{k,p}\) and since \(W(\psi )_{k,p} = W(\psi )_{2\ell -k,2\ell -p}=\bigl (W(\psi )_{p,k}\bigr )^*\), we can restrict to the case \(k\le p\), \(k+p\le 2\ell \). For this case, the result follows from Lemma 6.2, and hence the explicit expression for \(\alpha _r(k,p)\) in Theorem 4.8 is obtained. \(\square \)
Note that the proof of Theorem 4.8 is not yet complete, since we still have to show that the weight is strictly positive definite for almost all \(x\in [-1,1]\), see Sect. 2. This will follow from the LDU-decomposition for the weight as observed in Corollary 4.16, but in order to prove the LDU-decomposition of Theorem 4.15 we need the explicit expression for the coefficients \(\alpha _t(m,n)\) of Theorem 4.8.
In Sect. 5.5, we observed that J commutes with W(x) for all x. In order to prove Proposition 4.10, we need to show that the commutant is not larger, and for this we need the explicit expression of \(\alpha _t(m,n)\) of Theorem 4.8.
Proof of Proposition 4.10
Let Y be in the commutant, and write \(W(x) = \sum _{k = 0}^{2\ell } W_k U_k(x)\) for \(W_k \in \mathrm{Mat}_{2\ell +1}({\mathbb {C}})\) using Theorem 4.8. Then \([Y,W_k]=0\) for all k. The proof follows closely the proofs of [24, Proposition 5.5] and [25, Proposition 2.6]. Note that \(W_{2\ell }\) and \(W_{2\ell -1}\) are symmetric and persymmetric (i.e. commute with J). Moreover, \((W_{2\ell })_{m,n}\) is non-zero only for \(m+n=2\ell \) and \((W_{2\ell -1})_{m,n}\) is non-zero only for \(|m+n-2\ell |=1\). From the explicit expression of the coefficients \(\alpha _t(m,n)\), we find that all non-zero entries of \(W_{2\ell }\) and \(W_{2\ell -1}\) are different apart from the symmetry and persymmetry. The proof of Proposition 4.10 can then be finished following [24, Proposition 5.5]. \(\square \)
6.2 LDU-decomposition
In order to prove the LDU-decomposition of Theorem 4.15 for the weight, we need to prove the matrix identity termwise. So we are required to show that
for the expression of W(x) in Theorem 4.8 and for the expressions of L(x) and T(x) in Theorem 4.15. Because of symmetry, we can assume without loss of generality that \(m\ge n\). Then (6.2) is equivalent to Proposition 6.3 after taking into account the coefficients in the LDU-decomposition, so it suffices to prove Proposition 6.3 in order to obtain Theorem 4.15.
Proposition 6.3
For \(0 \le n \le m \le 2\ell \), \(\ell \in \frac{1}{2} {\mathbb {N}}\) and with \(\alpha _t(m, n)\) defined in Theorem 4.8, we have
Before embarking on the proof of Proposition 6.3, note that each summand on the right-hand side of the expression of Proposition 6.3 is an even, respectively odd, polynomial for \(m+n\) even, respectively odd, since the continuous q-ultraspherical polynomials are symmetric and since
see (4.12), is an even polynomial with a factor \((1-x^2)\). In the proof of Proposition 6.3 we use Lemma 6.4.
Lemma 6.4
Let \(0\le k \le m \le n\) and \(t\in {\mathbb {N}}\). Then the integral
is equal to zero for \(t>m\), and for \(0\le t\le m\), the integral above is equal to \(C_k(m,n)R_k(\mu (t); 1, 1, q^{-2m - 2}, q^{-2n - 2}; q^2)\) with
In Lemma 6.4, we use the notation (4.13) for the q-Racah polynomials. Lemma 6.4 shows that the expansion as in Proposition 6.3 is indeed valid, and it remains to determine the coefficients \(\beta _k(m,n)\).
The proof of Lemma 6.4 follows the lines of the proof of [23, Lemma 2.7], see Appendix 2.2. The main ingredients, cf. the proof in [23], are the connection and linearisation coefficients for the continuous q-ultraspherical polynomials, dating back to the work of L.J. Rogers (1894–95), see e.g. [2, §10.11], [19, §13.3], [15, (7.6.14), (8.5.1)]. Write the product \(C_{m-k}(x;q^{2k+2}|q^2) C_{n-k}(x;q^{2k+2}|q^2)\) as a sum over r of continuous q-ultraspherical polynomials \(C_{r}(x;q^{2k+2}|q^2)\) using the linearisation formula, and write \(U_{n+m-2t}\), which is a continuous q-ultraspherical polynomial for \(\beta =q\), in terms of \(C_{s}(x;q^{2k+2}|q^2)\) using the connection formula. The orthogonality relations for the continuous q-ultraspherical polynomials then give the integral in terms of a single series. The details are in Appendix 2.2. From this sketch of the proof, it is clear that Lemma 6.4 can be generalised. This is the content of Remark 6.5, whose proof is left to the reader.
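The identification used above, that \(U_n\) is the continuous q-ultraspherical polynomial with \(\beta =q\), is easy to check numerically. The following Python sketch (function names are ours; the explicit sum for \(C_n\) is the standard one, see e.g. [15, §7.4]) compares both sides at sample points, in base q and in base \(q^2\).

```python
import math

def U(n, x):
    # Chebyshev polynomials of the second kind via the three-term recurrence
    a, b = 1.0, 2.0 * x
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, 2.0 * x * b - a
    return b

def qpoch(a, q, n):
    # q-shifted factorial (a; q)_n
    r = 1.0
    for k in range(n):
        r *= 1 - a * q ** k
    return r

def C(n, x, beta, q):
    # explicit sum for the continuous q-ultraspherical polynomial C_n(cos θ; β | q)
    theta = math.acos(x)
    return sum((qpoch(beta, q, k) / qpoch(q, q, k))
               * (qpoch(beta, q, n - k) / qpoch(q, q, n - k))
               * math.cos((n - 2 * k) * theta)
               for k in range(n + 1))
```

For \(\beta =q\) every coefficient in the sum equals 1, so \(C_n(\cos \theta ;q|q)=\sum _{k=0}^n \cos ((n-2k)\theta )=U_n(\cos \theta )\), which the check below confirms.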
Remark 6.5
For integers \(t\ge 0\), \(0\le k \le m \le n\) and parameters \(\alpha , \beta \), we have
where \(C_k(m,n,\alpha ,\beta )\) is
Note that the \({}_4\varphi _3\)-series in Remark 6.5 is balanced, but in general is not a q-Racah polynomial.
In the proof of Proposition 6.3, and hence of Theorem 4.15, we need the summation formula involving q-Racah polynomials stated in Lemma 6.6. Its proof is also given in Appendix 2.2.
Lemma 6.6
For \(\ell \in \frac{1}{2}{\mathbb {N}}\) and \(m, n, k \in {\mathbb {N}}\) with \(0 \le k \le n \le m\), we have
With these preparations, we can prove Proposition 6.3, and hence the LDU-decomposition of Theorem 4.15.
Proof of Proposition 6.3
Since Lemma 6.4 guarantees the existence of an expansion as in Proposition 6.3, it suffices to calculate \(\beta _k(m,n)\) from the explicit values of the \(\alpha _t(m,n)\)'s in Theorem 4.8. Multiply both sides by \(\frac{1}{2\pi } \sqrt{1 - x^2} U_{m + n - 2t}(x)\) and integrate using the orthogonality for the Chebyshev polynomials, so that Lemma 6.4 gives
The q-Racah polynomials \(R_k(\mu (t); 1, 1, q^{-2m-2}, q^{-2n-2}; q^2)\) satisfy the orthogonality relations
Using the orthogonality relations, we find the explicit expression of \(\beta _k(m,n)\) in terms of \(\alpha _t(m,n)\), and using the explicit expression of \(\alpha _t(m, n)\) of Theorem 4.8 gives
This expression is summable by Lemma 6.6. Collecting the coefficients proves the proposition. \(\square \)
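The Chebyshev orthogonality invoked in the proof above can be verified numerically. The sketch below assumes the normalisation \(\frac{2}{\pi }\int _{-1}^{1}U_m(x)U_n(x)\sqrt{1-x^2}\,dx=\delta _{mn}\) (the constant in the paper's weight may differ); substituting \(x=\cos \theta \) turns the integrand into a trigonometric polynomial, so the midpoint rule used here is exact up to roundoff.

```python
import math

def U(n, x):
    # Chebyshev polynomials of the second kind via the three-term recurrence
    a, b = 1.0, 2.0 * x
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, 2.0 * x * b - a
    return b

def inner(m, n, N=256):
    # (2/pi) * ∫_{-1}^{1} U_m(x) U_n(x) sqrt(1-x^2) dx via x = cos θ, midpoint rule
    h = math.pi / N
    s = sum(U(m, math.cos((j + 0.5) * h)) * U(n, math.cos((j + 0.5) * h))
            * math.sin((j + 0.5) * h) ** 2
            for j in range(N))
    return (2.0 / math.pi) * s * h
```

In θ the integrand is \(\sin ((m+1)\theta )\sin ((n+1)\theta )\), so the quadrature reproduces \(\delta _{mn}\) essentially exactly.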
Last part of the proof of Theorem 4.8
Now that we have proved Theorem 4.15, Corollary 4.16 is immediate, since the coefficients of the diagonal matrix T(x) are positive on \((-1,1)\). So the weight is strictly positive definite on \((-1,1)\), which is the last step to be taken in the proof of Theorem 4.8. \(\square \)
6.3 Summation formula for Clebsch–Gordan coefficients
In this subsection, we prove Lemma 5.8, which has been used in the first part of the proof of Theorem 4.8, see Sect. 5.3. The proof of Lemma 5.8 is somewhat involved, since we employ an indirect way using induction and Theorem 4.6.
Proof of Lemma 5.8
Assume for the moment that
Assuming (6.3) the lemma follows, using \(C^{\ell _1, \ell _2, \ell }_{m_1, m_2, i}=0\) if \(m_1-m_2\not =i\),
It remains to prove (6.3). We first do this in the case \(\ell _1+\ell _2=\ell \). Put \((\ell _1, \ell _2) = (k/2, \ell - k/2)=\xi (0,k)\) for \(k\in \mathbb {N}\) and \(k\le 2\ell \). Using the explicit expression for the Clebsch–Gordan coefficients (8.3), we find that in this special case the left-hand side of (6.3) equals
where the sum runs over i for \(-\ell + k/2 + m \le i \le \ell - k/2 + m\). Substitute \(i \mapsto p - \ell + k/2 + m\) to see that this equals
Simplifying the expression, we find that
is equal to
since the \({}_2 \varphi _1\) can be summed by the reversed q-Chu–Vandermonde sum [15, (II.7)]. Putting everything together proves (6.3), and hence Lemma 5.8 in the case \((\ell _1,\ell _2)=\xi (0,k)\), i.e. \(\ell _1+\ell _2=\ell \).
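The q-Chu–Vandermonde summation [15, (II.7)] used in this step can be checked numerically. In the convention assumed here it reads \({}_2\varphi _1(q^{-n}, b; c; q, cq^n/b) = (c/b;q)_n/(c;q)_n\); the sketch below (function names are ours) evaluates the terminating sum directly.

```python
def qpoch(a, q, n):
    # q-shifted factorial (a; q)_n
    r = 1.0
    for k in range(n):
        r *= 1 - a * q ** k
    return r

def lhs(n, b, c, q):
    # terminating 2phi1(q^{-n}, b; c; q, c q^n / b)
    z = c * q ** n / b
    return sum(qpoch(q ** -n, q, k) * qpoch(b, q, k)
               / (qpoch(q, q, k) * qpoch(c, q, k)) * z ** k
               for k in range(n + 1))

def rhs(n, b, c, q):
    # closed form (c/b; q)_n / (c; q)_n
    return qpoch(c / b, q, n) / qpoch(c, q, n)
```

Which member of the q-Chu–Vandermonde pair [15, (II.6)–(II.7)] is called "reversed" is a matter of convention; the assertion below only tests the identity as stated in the lead-in.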
To prove Lemma 5.8 in general, we set
hence it is sufficient to calculate \(f^{\ell }_{\ell _1, \ell _2}(-2)\). We will show by induction on n that \(f^{\ell }_{\xi (n,k)}(-2)\) is independent of (n, k), or equivalently that \(f^{\ell }_{\ell _1, \ell _2}(-2)\) is independent of \((\ell _1,\ell _2)\). Since we have established the case \(n=0\), Lemma 5.8 then follows.
In order to perform the induction step, we consider the recursion of Theorem 4.6. Using \(\varphi (A^\lambda ) = \frac{1}{2}(q^{1+\lambda }+q^{-1-\lambda })\), we see that \(\varphi (1)=\varphi (A^{-2})=\frac{1}{2}(q+q^{-1})\). Taking the trace of Theorem 4.6 at \(A^0=1\) and using Proposition 5.2(ii), we find
Next we evaluate Theorem 4.6 at \(A^{-2}\) and we take traces, so, using \(\varphi (A^{-2})=\varphi (1)\),
which we rewrite, assuming \(f^{\ell }_{\ell _1, \ell _2}(-2)=F^\ell \) is independent of \((\ell _1,\ell _2)\) for \(\ell _1+\ell _2\le \ell +n\), so that \(f^{\ell }_{\ell _1+\frac{1}{2}, \ell _2+\frac{1}{2}}(-2)\) is
by (6.4) for the last equality. So the statement also follows for \(\ell _1+\ell _2=\ell +n+1\), and the lemma follows. \(\square \)
7 q-Difference operators for the matrix-valued polynomials
We continue the study of q-difference operators for the matrix-valued orthogonal polynomials started in Corollary 5.13. In particular, we show that the assumption on the invertibility of \(\hat{\varPhi }_0\) follows from the LDU-decomposition in Theorem 4.15. Then we make the coefficients in the matrix-valued second-order q-difference operator of Corollary 5.13 explicit in Sect. 7.1. Comparing with the scalar-valued Askey–Wilson q-difference operators, see e.g. [2, 15, 19, 26], we view this q-difference operator as a matrix-valued analogue of the Askey–Wilson q-difference operator. Next, since the matrix-valued orthogonal polynomials are eigenfunctions of the matrix-valued Askey–Wilson q-difference operator, we can decouple the q-difference operator using the matrix-valued polynomial L in the LDU-decomposition of Theorem 4.15, and so obtain an explicit expression of the matrix entries of the matrix-valued orthogonal polynomials in terms of scalar-valued orthogonal polynomials from the q-Askey scheme. From this expression, we obtain an explicit expression for the coefficients in the matrix-valued three-term recurrence relation for the matrix-valued orthogonal polynomials, hence proving Theorem 4.11 and Corollary 4.12.
7.1 Explicit expressions of the q-difference operators
In order to make Corollary 5.13 explicit, we need to study the invertibility of the matrix \(\hat{\varPhi }^\ell _0(A^\lambda )\). For this, we first use Theorem 5.5 and its Corollary 5.6, and in particular (6.1). Because of Proposition 5.2(ii), this is only a single sum. In the notation of (5.15), we find
Taking the determinant gives, recalling \(\mu (z)= \frac{1}{2}(z+z^{-1})\),
using the LDU-decomposition for the weight of Theorem 4.15 and (4.12). The right-hand side is non-zero for \(\lambda \in \mathbb {Z}\) unless \(1\le |\lambda +1|\le 2\ell \). So \(\hat{\varPhi }^\ell _0(A^\lambda )\) is invertible for \(\lambda \ge 2\ell \) or \(\lambda < -2\ell -1\) or \(\lambda =-1\), i.e. for an infinite number of \(\lambda \in \mathbb {Z}\), and it is meaningful to consider Corollary 5.13.
Proof of Theorem 4.13
Corollary 5.13 gives a second-order q-difference equation for the matrix-valued orthogonal polynomials for an infinite set of \(\lambda \). So it suffices to check that for \(i=1,2\) the matrix-valued functions \(\tilde{M}_i\), \(\tilde{N}_i\) of Corollary 5.13 coincide with \(\mathcal {M}_i\), \(\mathcal {N}_i\), where \(\mathcal {N}_i(z)=\mathcal {M}_i(z^{-1})\), of Theorem 4.13, or
where \(M_i\), \(N_i\) are as in (5.16), see Proposition 5.11, and \(\mathcal {M}_i\), \(\mathcal {N}_i\) are as in Theorem 4.13.
By (5.18), we need
which is an easy check using the explicit expressions of Theorem 4.13. Now (7.2) and (5.16) for \(n=0\) show that either equation of (7.1) implies the other. Indeed, assuming the second equation of (7.1) holds, then
implying the first equation of (7.1).
By Proposition 5.2(ii) and (5.15), the matrix entries of \({\hat{\varPhi }}^\ell _0(A^{\lambda })\) are Laurent series in \(q^\lambda \). Setting \(z=q^\lambda \), we see that in order to verify (7.1) entry-wise, we need to check equalities for Laurent series in z.
We first consider the second equality of (7.1) for \(i=1\). In this case, the matrices \(N_1\) and \(\mathcal {N}_1\) are band-limited. Hence, the (m, n)th entry of both sides of (7.1) involves only one or two terms, so we need to check
The proof of (7.3) involves the explicit expression of the spherical functions in terms of Clebsch–Gordan coefficients using Proposition 5.2(ii). It is given in Appendix 2.3.
The statements for the second q-difference equation with \(i=2\) follow from the symmetries of Proposition 5.11 and Lemma 5.14. \(\square \)
The explicit expressions were initially obtained by computer algebra; the proof as presented here and in Appendix 2.3 was found afterwards.
7.2 Explicit expressions for the matrix entries of the matrix-valued orthogonal polynomials
Having established the q-difference equations for the matrix-valued orthogonal polynomials of Theorem 4.13 and having the diagonal part of the LDU-decomposition of the weight in terms of weight functions for the continuous q-ultraspherical polynomials in Theorem 4.15, it is natural to look at the q-difference operators conjugated by the polynomial function \(L^t\). It turns out that this completely decouples one of the second-order q-difference operators of Theorem 4.13. This gives the opportunity to link the matrix entries of the matrix-valued orthogonal polynomials to continuous q-ultraspherical polynomials. In order to determine the coefficients, we use the other q-difference operator and the orthogonality relations. Having such an explicit expression, we can determine the three-term recurrence relation for the monic matrix-valued orthogonal polynomials straightforwardly, and hence also for the matrix-valued orthogonal polynomials \(P_n\), since we already have determined the leading coefficient in Corollary 4.9.
The first step is to conjugate the second-order q-difference operator \(D_1\) of Theorem 4.13 with the matrix \(L^t\) of the LDU-decomposition of Theorem 4.15 into a diagonal q-difference operator. This conjugation is inspired by the result of [23, Theorem 6.1]. The same conjugation turns \(D_2\) into a tridiagonal q-difference operator. For any \(n \in \mathbb {N}\), let \(\mathcal {R}_n(x) = L^t(x) Q_n(x)\), where \(Q_n(x) = P_n(x)\bigl (\text {lc}(P_n)\bigr )^{-1}\) denotes the corresponding monic polynomial. Note that we have determined the leading coefficient \(\text {lc}(P_n)\) in Corollary 4.9. Then \((\mathcal {R}_n)_{n\ge 0}\) forms a family of matrix-valued polynomials, but note that the degree of \(\mathcal {R}_n\) is larger than n, and that the leading coefficient of \(\mathcal {R}_n\) is singular. Note that the \(\mathcal {R}_n\) satisfy the orthogonality relations
Theorem 7.1
The polynomials \((\mathcal {R}_n)_{n \ge 0}\) are eigenfunctions of the q-difference operators
with eigenvalues \(\varLambda _n(i)\), where
Proof
We start by observing that the monic matrix-valued orthogonal polynomials \(Q_n\) are eigenfunctions of the second-order q-difference operators \(D_i\) of Theorem 4.13 for the eigenvalue \(\text {lc}(P_n)\varLambda _n(i) \text {lc}(P_n)^{-1} =\varLambda _n(i)\), since the matrices are diagonal and thus commute. By conjugation, we find that \(\mathcal {R}_n\) satisfy
using the notation \(\breve{L}^t(z)=L^t(\mu (z))\), etc., with \(x=\mu (z) = \frac{1}{2}(z+z^{-1})\) as before. It remains to calculate \(\mathcal {K}_i(z)\) explicitly. We show in Appendix 2.4 that the expressions for \(\mathcal {K}_i\) are correct by verifying
for \(i=1,2\). \(\square \)
Lemma 7.2
For \(n \in {\mathbb {N}}\) and \(0 \le i,j \le 2\ell \), we have
where \(C_{n}(x; \beta | q)\) are the continuous q-ultraspherical polynomials (4.10) and \(\beta _n(i, j)\) is a constant depending on i, j and n.
Proof
Evaluate \(\mathcal {D}_1 \mathcal {R}_n(x) = \mathcal {R}_n(x) \varLambda _n(1)\) in entry (i, j). Since \(\mathcal {D}_1\) is decoupled, we get a q-difference equation for the polynomial \(\bigl (\mathcal {R}_n\bigr )_{i,j}\), which, after simplifying, is
All polynomial solutions of this q-difference equation are given by multiples of the Askey–Wilson polynomials \(p_{n + j - i}(x; q^{i + 1}, -q^{i + 1}, q^{1/2}, -q^{1/2} | q)\), see [15, §7.5], [19, §16.3], [26, §14.1]. Apply the quadratic transformation, see [4, (4.20)], to see that the polynomial solutions are \(p_{n + j - i}(x; q^{i + 1}, -q^{i + 1}, q^{i + 2}, -q^{i + 2} | q^2)\). These polynomials are multiples of the continuous q-ultraspherical polynomials \(C_{n + j - i}(x; q^{2i + 2} | q^2)\), see [19, §13.2], [15, §7.4–5], [26, §14.10.1]. Hence, the matrix entry \(\bigl (\mathcal {R}_n(x)\bigr )_{i,j}\) is a multiple of \(C_{n + j - i}(x; q^{2i + 2} | q^2)\). \(\square \)
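The proportionality used in the proof, \(p_{n}(x; q^{i+1}, -q^{i+1}, q^{i+2}, -q^{i+2}\,|\,q^2) \propto C_{n}(x; q^{2i+2}\,|\,q^2)\), can be tested numerically. The sketch below assumes the standard \({}_4\varphi _3\) representation of the Askey–Wilson polynomials [15, §7.5]; all function names are ours. It checks that the ratio of the two sides is constant in x.

```python
import cmath
import math

def qpoch(a, q, n):
    # q-shifted factorial (a; q)_n, allowing complex a
    r = 1.0 + 0j
    for k in range(n):
        r *= 1 - a * q ** k
    return r

def aw(n, x, a, b, c, d, q):
    # Askey-Wilson polynomial p_n(x; a, b, c, d | q) via its 4phi3 representation
    z = cmath.exp(1j * math.acos(x))
    s = 0j
    for k in range(n + 1):
        s += (qpoch(q ** -n, q, k) * qpoch(a * b * c * d * q ** (n - 1), q, k)
              * qpoch(a * z, q, k) * qpoch(a / z, q, k) * q ** k
              / (qpoch(q, q, k) * qpoch(a * b, q, k)
                 * qpoch(a * c, q, k) * qpoch(a * d, q, k)))
    return (a ** -n * qpoch(a * b, q, n) * qpoch(a * c, q, n)
            * qpoch(a * d, q, n) * s).real

def cqu(n, x, beta, q):
    # continuous q-ultraspherical polynomial C_n(cos θ; β | q), explicit sum
    theta = math.acos(x)
    return sum((qpoch(beta, q, k) / qpoch(q, q, k)
                * qpoch(beta, q, n - k) / qpoch(q, q, n - k)).real
               * math.cos((n - 2 * k) * theta)
               for k in range(n + 1))

q, i, n = 0.6, 1, 3
Q = q ** 2  # everything in base q^2
xs = [-0.8, -0.3, 0.25, 0.7]
ratios = [aw(n, x, q ** (i + 1), -q ** (i + 1), q ** (i + 2), -q ** (i + 2), Q)
          / cqu(n, x, q ** (2 * i + 2), Q) for x in xs]
```

Both families are orthogonal with respect to the same weight, so the polynomials of equal degree can only differ by a constant factor; the sample points are chosen away from the zeros of \(C_3\).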
Our next objective is to determine the coefficients \(\beta _n(i,j)\) of Lemma 7.2. Having exploited that the matrix-valued polynomials \(\mathcal {R}_n\) are eigenfunctions for the decoupled operator \(\mathcal {D}_1\) of Theorem 7.1, we can use Lemma 7.2 in (7.4) to calculate the (i, j)th coefficient of (7.4):
using that \(\text {lc}(P_m)\) and the squared norm matrix \(G_m\) are diagonal matrices, see Corollary 4.9 and Theorem 4.8. The integral in (7.6) can be evaluated by (4.11). In particular, the case \(m+j=n+i\) of (7.6) gives the explicit orthogonality relations
Theorem 7.3
We have
Proof
From Theorem 7.1, we have
Evaluate (7.8) in entry (i, j) and use Lemma 7.2 to find a three-term recurrence relation in i of \(\beta _n(i, j)\),
Multiply by \((1 - z^2)(1 - q^2)^2\) and evaluate the Laurent expansion at the leading coefficient in z of degree \(n + j - i + 3\). The leading coefficient in z of the continuous q-ultraspherical polynomial \(\breve{C}_n(\alpha z; \beta | q)\) is \(\frac{(\beta ; q)_n}{(q; q)_n} \alpha ^n\). After a straightforward computation, this leads to the three-term recurrence relation
This recursion relation can be rewritten as the three-term recurrence relation for the q-Racah polynomials after rescaling, see [15, Chap. 7], [19, §15.6], [26, §14.2]. We identify \((\alpha ,\beta ,\gamma ,\delta )\) as in (4.13) with \((1, 1, q^{-2n - 2j - 2}, q^{-4\ell - 2})\) in base \(q^2\). This gives
for some constant \(\gamma _n(j)\) independent of i. Plugging this expression into (7.7) for \(i=j\) gives \(|\gamma _n(j)|^2\) by comparing with the explicit orthogonality relations for the q-Racah polynomials, see [15, Chap. 7], [19, §15.6], [26, §14.2].
For \(j\ge i\), we have \(\bigl (\mathcal {R}_n(x)\bigr )_{i,j} = L_{j,i}(x)\, x^n + \text {l.o.t.}\), and since the explicit expression of \(L_{j,i}\) shows that its leading coefficient (of degree \(j-i\)) is positive, we see that the leading coefficient (of degree \(n+j-i\)) of \(\bigl (\mathcal {R}_n\bigr )_{i,j}\) is positive in case \(j\ge i\). Since \(\gamma _n(j)\) is independent of i, we take \(i=0\), which shows that \(\gamma _n(j)\) is positive. \(\square \)
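The leading-coefficient statement used in the proof of Theorem 7.3 translates, in the variable x (our conversion, using \(z^n+z^{-n}=2T_n(x)\)), into: the leading coefficient of \(C_n(x;\beta |q)\) in x is \(2^n(\beta ;q)_n/(q;q)_n\). A numerical sketch, exploiting that the n-th divided difference of a degree-n polynomial equals its leading coefficient:

```python
import math

def qpoch(a, q, n):
    # q-shifted factorial (a; q)_n
    r = 1.0
    for k in range(n):
        r *= 1 - a * q ** k
    return r

def C(n, x, beta, q):
    # continuous q-ultraspherical polynomial C_n(cos θ; β | q), explicit sum
    theta = math.acos(x)
    return sum((qpoch(beta, q, k) / qpoch(q, q, k))
               * (qpoch(beta, q, n - k) / qpoch(q, q, n - k))
               * math.cos((n - 2 * k) * theta)
               for k in range(n + 1))

def leading_coeff(f, n):
    # n-th divided difference over n+1 nodes = leading coefficient of a degree-n polynomial
    nodes = [-0.9 + 1.8 * j / n for j in range(n + 1)]
    vals = [f(x) for x in nodes]
    for level in range(1, n + 1):
        vals = [(vals[j + 1] - vals[j]) / (nodes[j + level] - nodes[j])
                for j in range(len(vals) - 1)]
    return vals[0]
```

The paper's statement concerns the Laurent leading coefficient \((\beta ;q)_n/(q;q)_n\) in z; the factor \(2^n\) in the x-normalisation below is our translation.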
Proof of Theorem 4.17
Using Theorem 7.3 with the explicit inverse of L(x) as given in Theorem 4.15 gives an explicit expression for the matrix entries of \(Q_n(x) = (L(x)^{-1})^t \mathcal {R}_n(x)\). Then we obtain the matrix entries of \(P_n(x) = Q_n(x) \text {lc}(P_n)\) from this expression and Corollary 4.9, stating that the leading coefficient is a diagonal matrix. \(\square \)
7.3 Three-term recursion relation
The matrix-valued orthogonal polynomials satisfy a three-term recurrence relation, see Sect. 2. Moreover, Theorem 4.6 shows that the three-term recurrence relation can in principle be obtained from the tensor product decomposition. However, in that case we obtain the coefficients of the matrices in the three-term recurrence relation in terms of sums of squares of Clebsch–Gordan coefficients, and this leads to a cumbersome result. In order to obtain an explicit expression for the three-term recurrence relation as in Theorem 4.11 and Corollary 4.12, we use the explicit expression obtained in Theorem 4.17 and Lemma 7.4, which is [23, Lemma 5.1]. Lemma 7.4 is only used to determine \(X_n\).
Lemma 7.4
Let \((Q_n)_{n \ge 0}\) be a sequence of monic (matrix-valued) orthogonal polynomials and write \(Q_n(x) = \sum _{k = 0}^n Q^{n}_{k} x^k\), where \(Q^{n}_{k} \in \mathrm{Mat}_N({\mathbb {C}})\). The sequence \((Q_n)_{n \ge 0}\) satisfies the three-term recurrence relation
where \(Y_{-1} = 0\), \(Q_0(x) = I\) and
So we start by calculating the second-highest coefficient of the monic matrix-valued orthogonal polynomials.
Lemma 7.5
For the monic matrix-valued orthogonal polynomials \((Q_n)_{n \ge 0}\), we have
Proof
By Theorem 7.3, we have \((\mathcal {R}_n(x))_{i,j} = \beta _n(i, j) C_{n + j - i}(x; q^{2i + 2} | q^2)\), and since \(Q_n(x) = L^{t}(x)^{-1}\mathcal {R}_n(x)\), see Sect. 7.2, we have
and this expression shows that \(\deg (Q_n(x)_{i, j}) = n + j - i\). So in case \(j-i\le 0\), we can only have a contribution to \(Q^n_{n-1}\) in case \(i-j=0\) or \(i-j=1\). The first case does not give a contribution, since (7.9) is even or odd for \(n+j-i\) even or odd. So we only have to calculate the leading coefficient in (7.9) for \(i-j=1\). With the explicit value of \(\beta _n(i,j)\) as in Sect. 7.2, or Theorem 4.17 and Corollary 4.19, the case \(i-j=1\) gives the required expression for \(\bigl (Q^n_{n-1}\bigr )_{j,j-1}\).
On the other hand, by Proposition 4.10 and since \(J\text {lc}(P_n)J=\text {lc}(P_n)\) by Corollary 4.9, it follows that \(JQ_n(x)J=Q_n(x)\). Therefore, we find the symmetry of the entries of the monic matrix-valued polynomials \(\bigl (Q_n(x)\bigr )_{i,j} = \bigl (Q_n(x)\bigr )_{2\ell -i,2\ell -j}\) so that the case \(j-i\ge 0\) can be reduced to the previous case, and we get \(\bigl (Q^n_{n-1}\bigr )_{j,j+1}=\bigl (Q^n_{n-1}\bigr )_{2\ell -j,2\ell -j-1}\). \(\square \)
Proof of Theorem 4.11
The explicit expression for \(X_n\) follows from Lemma 7.4 and Lemma 7.5.
By Theorem 4.8 and Corollary 4.9, we have the orthogonality relations
since the matrices involved are diagonal and self-adjoint, hence pairwise commute. By the discussion in Sect. 2, we have
and a straightforward calculation gives the required expression. \(\square \)
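As a scalar (1×1) illustration of the shape of the monic three-term recurrence of Lemma 7.4: for the monic Chebyshev polynomials \(\tilde{U}_n = 2^{-n}U_n\) one has \(x\tilde{U}_n(x) = \tilde{U}_{n+1}(x) + \frac{1}{4}\tilde{U}_{n-1}(x)\), i.e. \(X_n=0\) and \(Y_{n-1}=\frac{1}{4}\). The sketch below (our own toy example; it does not reproduce the matrix coefficients of Theorem 4.11) verifies this on coefficient lists.

```python
def U_coeffs(n):
    # coefficient list (low -> high degree) of U_n via U_{n+1} = 2x U_n - U_{n-1}
    a, b = [1.0], [0.0, 2.0]
    if n == 0:
        return a
    for _ in range(n - 1):
        c = [0.0] + [2.0 * t for t in b]  # multiply by 2x
        for j, t in enumerate(a):
            c[j] -= t                     # subtract U_{n-1}
        a, b = b, c
    return b

def monic(p):
    # normalise a coefficient list so the leading coefficient is 1
    return [t / p[-1] for t in p]

def padded_sum(p, r, scale):
    # p + scale * r, aligning coefficient lists of different lengths
    out = list(p) + [0.0] * (max(len(r), len(p)) - len(p))
    for j, t in enumerate(r):
        out[j] += scale * t
    return out
```

For the matrix-valued case the same bookkeeping applies entrywise, with \(X_n\) and \(Y_{n-1}\) read off from the coefficient comparison in Lemma 7.4.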
References
Aldenhoven, N.: Explicit matrix inverses for lower triangular matrices with entries involving continuous \(q\)-ultraspherical polynomials. J. Approx. Theory 199, 1–12 (2015)
Andrews, G.E., Askey, R., Roy, R.: Special Functions. Encyclopedia of Mathematics and Its Applications, vol. 71. Cambridge University Press, Cambridge (1999)
Andruskiewitsch, N., Natale, S.: Harmonic analysis on semisimple Hopf algebras. Algebr. Anal. 12, 3–27 (2000)
Askey, R., Wilson, J.: Some Basic Hypergeometric Orthogonal Polynomials that Generalize Jacobi Polynomials. Mem. Am. Math. Soc. 54, 319 (1985)
Berg, C.: The matrix moment problem. In: Foulquié Moreno, A., Branquinho, A. (eds.) Coimbra Lecture Notes on Orthogonal Polynomials, pp. 1–57. Nova Science Publication, New York (2008)
Cagliero, L., Koornwinder, T.H.: Explicit matrix inverses for lower triangular matrices with entries involving Jacobi polynomials. J. Approx. Theory 193, 20–38 (2015)
Caspers, M.: Spherical Fourier transforms on locally compact quantum Gelfand pairs. SIGMA 7, Paper 087 (2011)
Casselman, W., Miličić, D.: Asymptotic behavior of matrix coefficients of admissible representations. Duke Math. J. 49, 869–930 (1982)
Chari, V., Pressley, A.: A Guide to Quantum Groups. Cambridge University Press, Cambridge (1995)
Damanik, D., Pushnitski, A., Simon, B.: The analytic theory of matrix orthogonal polynomials. Surv. Approx. Theory 4, 1–85 (2008)
Etingof, P., Schiffmann, O.: Lectures on Quantum Groups. Lectures in Mathematical Physics. International Press, New York (1998)
Faraut, J.: Analyse harmonique sur les paires de Guelfand et les espaces hyperboliques. In: Clerc, J.L., Faraut, J., Eymard, P., Raïs, M., Takahashi, R. (eds.) Analyse Harmonique, pp. 315–446. Les Cours du CIMPA, Nice (1982)
Floris, P.G.A.: Gelfand pair criteria for compact matrix quantum groups. Indag. Math. (N.S.) 6, 83–98 (1995)
Gangolli, R., Varadarajan, V.S.: Harmonic Analysis of Spherical Functions on Real Reductive Groups. Ergebnisse der Mathematik und ihrer Grenzgebiete, vol. 101. Springer, Berlin (1988)
Gasper, G., Rahman, M.: Basic Hypergeometric Series. Encyclopedia of Mathematics and Its Applications, vol. 96. Cambridge University Press, Cambridge (2004)
Grünbaum, F.A., Tirao, J.: The algebra of differential operators associated to a weight matrix. Integr. Equ. Oper. Theory 58, 449–475 (2007)
Grünbaum, F.A., Pacharoni, I., Tirao, J.: Matrix valued spherical functions associated to the complex projective plane. J. Funct. Anal. 188, 350–441 (2002)
Heckman, G., van Pruijssen, M.: Matrix valued orthogonal polynomials for Gelfand pairs of rank one. Tohoku Math. J. arXiv:1310.5134 (to appear)
Ismail, M.E.H.: Classical and Quantum Orthogonal Polynomials in One Variable. Encyclopedia of Mathematics and Its Applications, vol. 98. Cambridge University Press, Cambridge (2005)
Klimyk, A., Schmüdgen, K.: Quantum Groups and Their Representations. Springer, Berlin (1997)
Koelink, H.T.: Askey-Wilson polynomials and the quantum SU(2) group: survey and applications. Acta Appl. Math. 44, 295–352 (1996)
Koelink, E., Román, P.: Orthogonal vs. non-orthogonal reducibility of matrix-valued measures. SIGMA 12, 8 (2016)
Koelink, E., van Pruijssen, M., Román, P.: Matrix-valued orthogonal polynomials related to \(({\rm SU}(2)\times {\rm SU}(2),{\rm diag})\), II. Publ. Res. Inst. Math. Sci. 49, 271–312 (2013)
Koelink, E., van Pruijssen, M., Román, P.: Matrix valued orthogonal polynomials related to \(({\rm SU}(2) \times {\rm SU}(2), {\rm diag})\). Int. Math. Res. Not. 24, 5673–5730 (2012)
Koelink, E., de los Ríos, A.M., Román, P.: Matrix-valued Gegenbauer polynomials. arXiv:1403.2938
Koekoek, R., Lesky, P.A., Swarttouw, R.F.: Hypergeometric Orthogonal Polynomials and Their q-Analogues. Springer, Berlin (2010)
Kolb, S.: Quantum symmetric Kac-Moody pairs. Adv. Math. 267, 395–469 (2014)
Kolb, S., Stokman, J.V.: Reflection equation algebras, coideal subalgebras, and their centres. Selecta Math. (N.S.) 15, 621–664 (2009)
Koornwinder, T.H.: Matrix elements of irreducible representations of SU(2)\(\times \)SU(2) and vector-valued orthogonal polynomials. SIAM J. Math. Anal. 16, 602–613 (1985)
Koornwinder, T.H.: Askey-Wilson polynomials as zonal spherical functions on the SU(2) quantum group. SIAM J. Math. Anal. 24, 795–813 (1993)
Koornwinder, T.H.: Compact quantum groups and \(q\)-special functions. In: Baldoni, W., Picardello, M. (eds.) Representations of Lie groups and Quantum Groups (Trento, 1993). Pitman Research Notes in Mathematics Series, vol. 311, pp. 46–128. Longman, London (1994)
Koornwinder, T.H., Gebuhrer, M.: Discrete hypergroups associated with compact quantum Gelfand pairs. In: Connet, W.C., Gebuhrer, M., Schwartz, A.L. (eds.) Applications of Hypergroups and Related Measure Algebras (Seattle, WA, 1993), vol. 183, pp. 213–235. American Mathematical Society, Providence (1995)
Korogodski, L.I., Soibelman, Y.S.: Algebras of Functions on Quantum Groups: Part I, Mathematical Surveys and Monographs, vol. 56. American Mathematical Society, Providence (1998)
Letzter, G.: Quantum symmetric pairs and their zonal spherical functions. Transform. Groups 8, 261–292 (2003)
Letzter, G.: Quantum zonal spherical functions and Macdonald polynomials. Adv. Math. 189, 88–147 (2004)
Letzter, G.: Invariant differential operators for quantum symmetric spaces. Mem. Am. Math. Soc. 193, 903 (2008)
Noumi, M.: Macdonald’s symmetric polynomials as zonal spherical functions on some quantum homogeneous spaces. Adv. Math. 123, 16–77 (1996)
Oblomkov, A., Stokman, J.V.: Vector valued spherical functions and Macdonald-Koornwinder polynomials. Compos. Math. 141, 1310–1350 (2005)
Pacharoni, I., Zurrián, I.: Matrix Gegenbauer polynomials: the \(2\times 2\) fundamental cases. Constr. Approx. 43, 253–271 (2016)
Tirao, J.: Spherical functions. Rev. Un. Mat. Argentina 28, 75–98 (1976/1977)
Tirao, J.: The matrix-valued hypergeometric equation. Proc. Natl Acad. Sci. USA 100, 8138–8141 (2003)
Tirao, J., Zurrián, I.: Reducibility of matrix weights. arXiv:1501.04059v4
van Dijk, G.: Introduction to Harmonic Analysis and Generalized Gelfand Pairs. de Gruyter Studies in Mathematics. de Gruyter, Hawthorne (2009)
van Pruijssen, M.: Matrix valued orthogonal polynomials related to compact Gel'fand pairs of rank one. PhD Thesis, Radboud Universiteit (2012)
Vainerman, L.: Hypergroup structures associated with Gelfand pairs of compact quantum groups. In: Recent Advances in Operator Algebras (Orléans, 1992). Astérisque, vol. 232, pp. 231–242 (1995)
Vilenkin, N.Ja.: Special Functions and the Theory of Group Representations. Translation of Mathematics Monographs, vol. 22. American Mathematical Society, Providence (1968)
Vilenkin, N.Ja., Klimyk, A.U.: Representation of Lie groups and Special Functions, 3 vols. Kluwer, Dordrecht (1991–1993)
Woronowicz, S.L.: Compact matrix pseudogroups. Commun. Math. Phys. 111, 613–665 (1987)
Acknowledgments
We thank Stefan Kolb for useful discussions and his hospitality during the visit of the first author to Newcastle. We also thank the late Mizan Rahman for his help in proving the expression for the explicit coefficients in Theorem 4.8, even though the final proof is different. The research of Noud Aldenhoven is supported by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.005 and by Belgian Interuniversity Attraction Pole Dygest P07/18. The research of Pablo Román is supported by the Radboud Excellence Fellowship and by CONICET Grant PIP 112-200801-01533 and by SeCyT-UNC. Pablo Román was also supported by the Coimbra Group Scholarships Programme at KULeuven in the period February–May 2014. We would like to thank the anonymous referee for his comments and remarks that have helped us to improve the paper.
Appendices
Appendix 1: Branching rules and Clebsch–Gordan coefficients
Proof of Theorem 4.3
The Clebsch–Gordan decomposition for \({\mathcal {U}}_{q}(\mathfrak {su}(2))\) and the corresponding intertwiner involving Clebsch–Gordan coefficients is well known, see e.g. [20, §3.4]. With the convention of the standard orthonormal bases as in Sect. 3, the Clebsch–Gordan coefficients give the unitary intertwiner
Identifying the generators \((K^{1/2},K^{-1/2},E,F)\) of the Hopf algebra as in [20, §3.1.1–2] with the generators \((k^{-1/2}, k^{1/2}, f,e)\) as in Sect. 3, we see that the Hopf algebra structures are the same. Moreover, in this case the representations \(t^\ell \) as in Sect. 3 correspond precisely with the representations \(T_{1\ell }\) of [20, §3.2.1] including the choice of orthonormal basis. This gives \(\mathcal {C}^{\ell _1,\ell _2,\ell }_{n_1,n_2,n}= C_q(\ell _1,\ell _2,\ell ;n_1,n_2,n)\), where the right-hand side is the notation of the Clebsch–Gordan coefficient as in [20, §3.4.2, (41)]. The Clebsch–Gordan coefficients are explicitly known in terms of terminating basic hypergeometric orthogonal polynomials, the so-called q-Hahn polynomials [26], see [20, §3.4].
In order to obtain Theorem 4.3 from (8.1), we use Remark 4.2(ii). With the \(*\)-algebra isomorphism \(\varPsi \) as in Remark 4.2, we have \(t^\ell (\varPsi (X))= J^\ell t^\ell (X) J^\ell \) for all \(X\in {\mathcal {U}}_{q}(\mathfrak {su}(2))\), where the intertwiner \(J^\ell :\mathcal {H}^\ell \rightarrow \mathcal {H}^\ell \), \(J^\ell :e^\ell _p \mapsto e^\ell _{-p}\) is a unitary involution. Note that the representations of \({\mathcal {U}}_{q}(\mathfrak {su}(2))\) in (3.3) and of \(\mathcal {B}\) in (4.2) are not related via the map of Remark 4.2(ii), but they are related by the same operator \(J^\ell \). Theorem 4.3 now follows after setting \(\beta ^\ell _{\ell _1,\ell _2} = ( J^{\ell _1}\otimes \text {Id})\circ \gamma ^\ell _{\ell _1,\ell _2} \circ J^{\ell }\) so that in particular \(C^{\ell _1,\ell _2, \ell }_{n_1,n_2,n} = \mathcal {C}^{\ell _1,\ell _2, \ell }_{-n_1,n_2,-n} =C_q(\ell _1,\ell _2,\ell ;-n_1,n_2,-n)\). \(\square \)
The Clebsch–Gordan coefficients satisfy several symmetry relations, and we require, see [20, §3.4.4(70)],
We need explicit expressions of the Clebsch–Gordan coefficients for the case \(\ell _1+\ell _2=\ell \), which follow from the explicit expressions in [20, §3.4.2, p. 80]. For fixed \(\ell \in \frac{1}{2}{\mathbb {N}}\), let \(-\ell \le m \le \ell \), and we consider the case \(\ell _1 = (\ell + m)/2\) and \(\ell _2 = (\ell - m)/2\). The Clebsch–Gordan coefficients in this case are given by
assuming \(i-j=k\) and \(i\in \{-\frac{1}{2}(\ell +m), -\frac{1}{2}(\ell +m)+1, \ldots , \frac{1}{2}(\ell +m)\}\), \(j\in \{-\frac{1}{2}(\ell -m), -\frac{1}{2}(\ell -m)+1, \ldots , \frac{1}{2}(\ell -m)\}\), \(k\in \{-\ell ,-\ell +1, \ldots , \ell \}\).
Observe that the unitarity gives
using the fact that the Clebsch–Gordan coefficients are real, see [20, §3.2.4].
We also need some of the simplest cases of Clebsch–Gordan coefficients, see [20, §3.2.4, p. 75],
and the other Clebsch–Gordan coefficients \(C^{1/2,1/2,0}_{m,n,0}\) being zero. Another case required is a generalisation of a case of (8.5), see [20, §3.2.4, p. 80]
and more generally
Appendix 2: Proofs involving only basic hypergeometric series
In Appendix 2, we collect the proofs of various intermediate results only involving basic hypergeometric series. For these proofs, we use the results of Gasper and Rahman [15]. In particular, we follow the standard notation of [15], and recall \(0<q<1\).
1.1 Proofs of lemmas for Theorem 4.8
Here we present the details of the proof of Lemma 6.2, which is used in order to prove the explicit expression of the matrix entries of the weight in terms of Chebyshev polynomials in Theorem 4.8. We start with two intermediate results needed in the proof. Lemma 7.6 can be viewed as a q-analogue of Sheppard’s result [2, Corollary 3.3.4].
Lemma 7.6
For \(b, c, d, e \in {\mathbb {C}}^{\times }\), we have
Proof
Applying first [15, (III.13)] and next [15, (III.12)] gives
Multiplying with \((d, e; q)_n\) and expanding gives
\(\square \)
Lemma 7.7 gives a simple q-analogue of a Taylor expansion.
Lemma 7.7
(a q-Taylor formula) Let \(B(q^{-M}) = \sum _{t = 0}^{N} A_t (-1)^{t} (q^{-M}; q)_t\) be a polynomial in \(q^{-M}\), then
where \(\Delta _q\) is the q-shift operator \(\Delta _{q} B(q^{-M}) = q^{M} (B(q^{-M}) - B(q^{-M-1}))\).
Proof
Define \(f_t(x) = (x;q)_t\), then
Repeated application of \(\Delta _q\) on \(f_t(q^{-M})\) then gives
Putting \(M = 0\) in (9.1), the expression is zero unless \(n = t\). Therefore
This gives the result. \(\square \)
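The inversion behind Lemma 7.7 can be tested numerically. Since the displayed formula is not reproduced here, the normalisation \(A_n = (-1)^n q^n (\Delta _q^n B)\big |_{M=0} / (q;q)_n\) used below is our reconstruction from the proof and should be read as an assumption that the sketch checks.

```python
def qpoch(a, q, n):
    # q-shifted factorial (a; q)_n
    r = 1.0
    for k in range(n):
        r *= 1 - a * q ** k
    return r

q = 0.6
A = [2.0, -1.5, 0.25, 3.0]  # arbitrary test coefficients

def B(M):
    # B(q^{-M}) = sum_t A_t (-1)^t (q^{-M}; q)_t, as in Lemma 7.7
    return sum(A[t] * (-1) ** t * qpoch(q ** -M, q, t) for t in range(len(A)))

def delta(f):
    # the q-shift operator of Lemma 7.7: (Δ_q f)(q^{-M}) = q^M (f(q^{-M}) - f(q^{-M-1}))
    return lambda M: q ** M * (f(M) - f(M + 1))

def recover(f, N):
    # reconstruct A_n via A_n = (-1)^n q^n (Δ_q^n f)(M=0) / (q; q)_n  [our normalisation]
    out = []
    for n in range(N):
        out.append((-1) ** n * q ** n * f(0) / qpoch(q, q, n))
        f = delta(f)
    return out
```

The round trip from the coefficients A through B and back recovers A, which is the content of the q-Taylor formula.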
Proposition 7.8
Let \(\ell \in \frac{1}{2}{\mathbb {Z}}\) and \(k,p \in {\mathbb {N}}\) such that \(k,p \le 2\ell \), \(k-p \le 0\) and \(k+p \le 2\ell \). Take \(s \in {\mathbb {N}}\) such that \(s \le p\) and define
Then we have
Proof
The proof proceeds along the lines of the proof of [24, Proposition A.1], and we only give a sketch. We start by reversing the inner summation, using \(M = 2\ell - k - s - m\) in \(e^{\ell }_s(k,p)\) to get
We rewrite the inner summation over M
where \(B(q^{-2M}) = q^{2M(s+i-p)}(q^{4\ell -2k-2M+2};q^2)_i(q^{2k-2p+2s+2M+2};q^2)_{p-i-s}\) is a polynomial in \(q^{-2M}\) of degree \(p-s\) that depends on \(\ell , k, p, i\) and s. The polynomial \(B(q^{-2M})\) has an expansion in \((-1)^t (q^{-2M};q^2)_t\) such that \(B(q^{-2M}) = \sum _{t=0}^{p-s} A_t (-1)^t (q^{-2M}; q^2)_t\). By Lemma 7.7, the coefficients \(A_t\) are obtained by repeated application of the q-shift operator:
We substitute \(B(q^{-2M}) = \sum _{t=0}^{p-s} A_t (-1)^t (q^{-2M}; q^2)_t\) into (9.3) and we interchange the summations over M and t. Then the sum over M can be rewritten as a summable \({}_2\varphi _1\). Using the reversed q-Chu–Vandermonde summation [15, (II.7)], we can rewrite (9.3) as
Substituting (9.4) and (9.5) into (9.2) and interchanging the sums over i and t, we find that \(e^{\ell }_{s}(k,p)\) is
The inner sum over i is a \({}_3\varphi _2\)-series, and after some rewriting, it can be transformed using Lemma 7.6. The sum over i becomes
We observe that the evaluation of the q-difference operator of order t yields only one non-zero term in the sum over i, namely the one corresponding to \(i = p-s-t\). After simplifications, we have
Reversing the order of summation using \(T = p - s - t\) proves the proposition. \(\square \)
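The reversed q-Chu–Vandermonde summation [15, (II.7)] invoked in this proof can be checked numerically. The sketch below (plain Python, illustrative parameter values) verifies the standard terminating form \({}_2\varphi_1(q^{-n}, b; c; q, q) = b^n\, (c/b;q)_n/(c;q)_n\):

```python
# Numerical sketch of the q-Chu--Vandermonde summation [15, (II.7)]:
#   2phi1(q^{-n}, b; c; q, q) = b^n (c/b; q)_n / (c; q)_n.
# Parameter values are illustrative.

q, b, c, n = 0.4, 1.7, 0.25, 5

def qpoch(a, m):
    """q-shifted factorial (a; q)_m."""
    out = 1.0
    for j in range(m):
        out *= 1.0 - a * q ** j
    return out

# terminating 2phi1 with upper parameter q^{-n}: the sum has n + 1 terms
lhs = sum(
    qpoch(q ** (-n), k) * qpoch(b, k) / (qpoch(c, k) * qpoch(q, k)) * q ** k
    for k in range(n + 1)
)
rhs = b ** n * qpoch(c / b, n) / qpoch(c, n)
assert abs(lhs - rhs) < 1e-6
```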
Proof of Lemma 6.2
Recall from Proposition 6.1 that we have
Therefore, the coefficients \(d^{\ell }_{s}(k,p)\) and \(\alpha ^{\ell }_{i}(k,p)\) are related by
Suppose that \(p-k \le 0\), \(k + p \le 2\ell \) and \(s \ge \frac{1}{2}(k - p)\). Using Proposition 6.1, we find the expression
By taking \(e^{\ell }_s(k,p) = d^{\ell }_{s + \frac{1}{2}(k-p)}(k,p)\), Proposition 7.8 shows that \(d^{\ell }_{s+\frac{1}{2}(k-p)}(k,p)\) is equal to
Comparing (9.6) with the explicit expression of \(\alpha ^{\ell }_{i}(k,p)\) yields the statement of the lemma. \(\square \)
1.2 Proofs of lemmas for Theorem 4.15
Here we present the proof of the technical lemmas needed in the proof of Theorem 4.15, in which the LDU-decomposition of the weight matrix is presented.
For completeness, we start by recalling the linearisation and connection relations for the continuous q-ultraspherical polynomials, see [2, §10.11], [15, (7.6.14), (8.5.1)], [19, §13.3]. The connection coefficient formula is
and the linearisation formula is
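As a numerical sanity check, the connection formula can be tested against the explicit Fourier expansion of the continuous q-ultraspherical polynomials, \(C_n(\cos t; \beta \mid q) = \sum_{k=0}^{n} \frac{(\beta;q)_k (\beta;q)_{n-k}}{(q;q)_k (q;q)_{n-k}} \cos((n-2k)t)\). The sketch below assumes the form of the connection formula given in [15, (7.6.14)] and uses illustrative parameter values:

```python
# Numerical sketch of the connection coefficient formula for the
# continuous q-ultraspherical polynomials, in the form of [15, (7.6.14)]:
#   C_n(x; gamma | q) = sum_{k=0}^{floor(n/2)} beta^k
#     (gamma/beta; q)_k (gamma; q)_{n-k} / ((q; q)_k (beta q; q)_{n-k})
#     * (1 - beta q^{n-2k}) / (1 - beta) * C_{n-2k}(x; beta | q).
from math import cos

q = 0.35  # illustrative base

def qpoch(a, m):
    """q-shifted factorial (a; q)_m."""
    out = 1.0
    for j in range(m):
        out *= 1.0 - a * q ** j
    return out

def C(n, t, beta):
    """C_n(cos t; beta | q) via the explicit Fourier expansion."""
    return sum(
        qpoch(beta, k) * qpoch(beta, n - k)
        / (qpoch(q, k) * qpoch(q, n - k)) * cos((n - 2 * k) * t)
        for k in range(n + 1)
    )

beta, gamma, n = 0.6, 0.2, 5  # illustrative parameters
for t in (0.3, 1.1, 2.0):
    rhs = sum(
        beta ** k * qpoch(gamma / beta, k) * qpoch(gamma, n - k)
        / (qpoch(q, k) * qpoch(beta * q, n - k))
        * (1 - beta * q ** (n - 2 * k)) / (1 - beta)
        * C(n - 2 * k, t, beta)
        for k in range(n // 2 + 1)
    )
    assert abs(C(n, t, gamma) - rhs) < 1e-9
```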
The proof of Lemma 6.4 follows the lines of the proof of [23, Lemma 2.7] closely.
Proof of Lemma 6.4
In order to evaluate the integral of Lemma 6.4, we observe that the weight function in the integral is the weight function (4.11) for the continuous q-ultraspherical polynomials in base \(q^2\) with \(\beta =q^{2k+2}\). We rewrite the product of the two continuous q-ultraspherical polynomials (in base \(q^2\), with \(\beta =q^{2k+2}\)) as a sum over i of \(C_{m + n - 2k - 2i}(x; q^{2k + 2} | q^2)\) using (9.8). Since the Chebyshev polynomials are themselves continuous q-ultraspherical polynomials, namely \(U_{m + n - 2t}(x) = C_{m + n - 2t}(x; q^2 | q^2)\) in base \(q^2\), we can use (9.7) to write the Chebyshev polynomial as a sum of continuous q-ultraspherical polynomials with \(\beta =q^{2k+2}\). Plugging in the two summations and then using the orthogonality relations (4.11), we evaluate the integral of Lemma 6.4 as a single sum:
We consider two cases: \(k \ge t\) and \(k \le t\). If \(k \ge t\), note that
so that we can rewrite (9.9) as a terminating very-well-poised \({}_8\varphi _7\)-series. Explicitly,
Here we use the notation of [15, §2.1]. Using Watson’s transformation formula [15, (III.18)] and recalling the definition (4.13) of the q-Racah polynomials, we can rewrite the \({}_8 W_7\) as a balanced \({}_4 \varphi _3\):
Simplifying the q-shifted factorials gives the required expression.
For the second case \(k \le t\), we proceed similarly. After applying Watson’s transformation, we also employ Sears’ transformation for a terminating balanced \({}_4 \varphi _3\) series [15, (III.15)] in order to recognise the expression for the q-Racah polynomial. \(\square \)
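The identification \(U_n(x) = C_n(x; q^2 \mid q^2)\) used in this proof can be verified numerically from the explicit Fourier expansion of the continuous q-ultraspherical polynomials (a sketch with illustrative values; all coefficients in the expansion collapse to 1 when \(\beta\) equals the base):

```python
# Check that the Chebyshev polynomial of the second kind is the
# continuous q-ultraspherical polynomial with beta = q^2 in base q^2:
#   U_n(cos t) = C_n(cos t; q^2 | q^2).
from math import cos, sin

q = 0.45       # illustrative value; the identity holds for any base
p = q * q      # base q^2

def qpoch2(a, m):
    """q-shifted factorial (a; q^2)_m in base q^2."""
    out = 1.0
    for j in range(m):
        out *= 1.0 - a * p ** j
    return out

def C(n, t, beta):
    """C_n(cos t; beta | q^2) via the explicit Fourier expansion."""
    return sum(
        qpoch2(beta, k) * qpoch2(beta, n - k)
        / (qpoch2(p, k) * qpoch2(p, n - k)) * cos((n - 2 * k) * t)
        for k in range(n + 1)
    )

def U(n, t):
    """Chebyshev polynomial of the second kind: U_n(cos t) = sin((n+1)t)/sin t."""
    return sin((n + 1) * t) / sin(t)

for n in range(6):
    for t in (0.4, 1.0, 2.3):
        assert abs(U(n, t) - C(n, t, p)) < 1e-10
```

With \(\beta = q^2\) every coefficient \((q^2;q^2)_k (q^2;q^2)_{n-k} / ((q^2;q^2)_k (q^2;q^2)_{n-k})\) equals 1, so the Fourier sum telescopes to \(\sin((n+1)t)/\sin t\).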
The generalisation of Lemma 6.4 mentioned in Remark 6.5 can be proved along the same lines; we leave the details to the reader.
In the proof of the LDU-decomposition of Theorem 4.15, we have also used Lemma 6.6, whose proof is presented next.
Proof of Lemma 6.6
We first write the q-Racah polynomial in the left-hand side of Lemma 6.6 as a \({}_4\varphi _3\)-series and interchange the sums to get
which has a well-poised structure. Relabelling \(t = j + p\) gives that the inner sum is equal to
Multiplying the inner sum by \(\frac{(q^{2j - 2m}, q^{2j - 2n}; q^2)_p}{(q^{2j - 2n}, q^{2j - 2n}; q^2)_p}\), we can rewrite the sum as a very-well-poised \({}_6 \varphi _5\)-series
This very-well-poised \({}_6 \varphi _5\) series can be evaluated by using [15, (II.21)] as
Straightforward calculations show that (9.10) is given by
The inner sum can be rewritten as a balanced \({}_3 \varphi _2\). Using the q-Saalschütz summation formula [15, (II.12)], we find that (9.10) reduces to
Finally, simplifying the q-shifted factorials shows that (9.11) is equal to the right-hand side of Lemma 6.6. \(\square \)
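The two summation formulas invoked in this proof, the terminating very-well-poised \({}_6\varphi_5\) summation [15, (II.21)] and the q-Saalschütz summation [15, (II.12)], can be checked numerically. The sketch below uses the standard statements of these formulas with illustrative parameter values; since both series terminate and have \(r = s+1\), a plain finite sum suffices:

```python
# Numerical sketch of two classical summations from [15]:
# (II.21): 6phi5(a, q sqrt(a), -q sqrt(a), b, c, q^{-n};
#            sqrt(a), -sqrt(a), aq/b, aq/c, aq^{n+1}; q, aq^{n+1}/(bc))
#          = (aq, aq/(bc); q)_n / (aq/b, aq/c; q)_n,
# (II.12): 3phi2(a, b, q^{-n}; c, a b q^{1-n}/c; q, q)
#          = (c/a, c/b; q)_n / (c, c/(ab); q)_n.
from math import sqrt

q = 0.3  # illustrative base

def qpoch(a, m):
    """q-shifted factorial (a; q)_m."""
    out = 1.0
    for j in range(m):
        out *= 1.0 - a * q ** j
    return out

def phi(num, den, z, nterms):
    """terminating basic hypergeometric sum with r = s + 1."""
    total = 0.0
    for k in range(nterms):
        term = z ** k / qpoch(q, k)
        for a in num:
            term *= qpoch(a, k)
        for b in den:
            term /= qpoch(b, k)
        total += term
    return total

# very-well-poised 6phi5 summation (II.21)
a, b, c, n = 0.7, 1.3, 0.45, 4
r = sqrt(a)
lhs = phi(
    [a, q * r, -q * r, b, c, q ** (-n)],
    [r, -r, a * q / b, a * q / c, a * q ** (n + 1)],
    a * q ** (n + 1) / (b * c), n + 1,
)
rhs = (qpoch(a * q, n) * qpoch(a * q / (b * c), n)
       / (qpoch(a * q / b, n) * qpoch(a * q / c, n)))
assert abs(lhs - rhs) < 1e-8

# q-Saalschuetz summation (II.12)
a, b, c, n = 0.8, 1.6, 0.35, 5
lhs = phi([a, b, q ** (-n)], [c, a * b * q ** (1 - n) / c], q, n + 1)
rhs = (qpoch(c / a, n) * qpoch(c / b, n)
       / (qpoch(c, n) * qpoch(c / (a * b), n)))
assert abs(lhs - rhs) < 1e-8
```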
1.3 Proof of (7.3)
Here we verify that (7.3) is valid entry-wise. It follows from Theorem 4.13, Propositions 5.2 and 5.11 and the fact that \(\mathcal {N}_i(z)=\mathcal {M}_i(z^{-1})\) that the (m, n)th entry of the left-hand side of (7.3) is given by
On the other hand, the (m, n)th entry of the right-hand side of (7.3) is given by
If we multiply (9.12) and (9.13) by \((1-q^2)^2(1-z^2)\), they become Laurent series in \(z\). By equating the coefficients of \(z^s\), the proof of (7.3) boils down to proving the following equality:
The last equation is proven using (8.3) and some simple manipulations of the q-binomial coefficients.
1.4 Proof of (7.5)
The expressions for \(\mathcal {K}_1\) and \(\mathcal {K}_2\) can be obtained using the inverse of L given in Theorem 4.15, so that \(\mathcal {K}_1\) and \(\mathcal {K}_2\) are uniquely determined. Hence it suffices to check (7.5).
In order to prove (7.5), we need to distinguish between the cases \(i=1\) and \(i=2\), since there is no symmetry between the two cases.
The case \(i=1\) of (7.5) is \(\breve{L}^t(z) \mathcal {M}_1(z) = \mathcal {K}_1(z) \breve{L}^t(qz)\). Consider the (m, n)-entry of \(\mathcal {K}_1(z) \breve{L}^t(qz)-\breve{L}^t(z) \mathcal {M}_1(z)\):
The convention is that matrices with negative labels are zero. In case \(m>n\), (9.14) is zero, since L is lower triangular. In case \(m=n\), the equality \(\mathcal {K}_1(z)_{m,m}=\mathcal {M}_1(z)_{m,m}\) implies that (9.14) is zero as well; note that this also covers the case \(n=0\). Now assume \(0\le m<n\). Multiplying (9.14) by \((1-q^2)^2(1-z^2)\), using the explicit expressions from Theorems 4.15 and 7.1 and cancelling common factors, we need to show that the following expression equals zero:
Now we can use the Laurent expansion of (4.10) to rewrite the expression as
It is a straightforward calculation to show that the coefficient of \(z^{n-m-2k}\), where \(k\in \{-1,0,\ldots , n-m\}\), equals zero. This proves the case \(i=1\) of (7.5).
To prove the case \(i=2\) of (7.5), we evaluate \(\mathcal {K}_2(z) \breve{L}^t(qz)-\breve{L}^t(z) \mathcal {M}_2(z)\) in the (m, n)-entry, which is slightly more complicated than the corresponding case for \(i=1\):
If \(m>n+1\), all terms in (9.15) vanish because of the lower triangularity of L. In case \(m=n+1\), (9.15) reduces to \(\mathcal {K}_2(z)_{n+1,n} - \mathcal {M}_2(z)_{n+1,n}\), which is indeed zero. Suppose \(m \le n\). We expand (9.15) under the convention that continuous q-ultraspherical polynomials of negative degree are zero. Expanding the continuous q-ultraspherical polynomials in z in (9.15), we take out the term
By a long but straightforward calculation, we can show that the coefficient of \(z^{n-m-2k}\), for \(k \in \{-1, 0, \ldots , n-m+1\}\), equals zero. This proves the case \(i = 2\) of (7.5).
Aldenhoven, N., Koelink, E. & Román, P. Matrix-valued orthogonal polynomials related to the quantum analogue of \((\mathrm{SU}(2) \times \mathrm{SU}(2), \mathrm{diag})\). Ramanujan J 43, 243–311 (2017). https://doi.org/10.1007/s11139-016-9788-y
Keywords
- Quantum groups
- Spherical functions
- Quantum symmetric pairs
- Matrix-valued orthogonal polynomials
- q-Racah polynomials
- Continuous q-ultraspherical polynomials